| column | type | values / lengths |
|---|---|---|
| repo | string | 856 distinct values |
| pull_number | int64 | 3 to 127k |
| instance_id | string | length 12 to 58 |
| issue_numbers | sequence | length 1 to 5 |
| base_commit | string | length 40 |
| patch | string | length 67 to 1.54M |
| test_patch | string | length 0 to 107M |
| problem_statement | string | length 3 to 307k |
| hints_text | string | length 0 to 908k |
| created_at | timestamp[s] | |
repo: streamlink/streamlink
pull_number: 4143
instance_id: streamlink__streamlink-4143
issue_numbers: ["4100"]
base_commit: 37ee5347508980ee768ac0695714f01c54600933
diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -41,7 +41,7 @@ def format_msg(text, *args, **kwargs): deps = [ "requests>=2.26.0,<3.0", "isodate", - "lxml>=4.6.3", + "lxml>=4.6.4,<5.0", "websocket-client>=0.58.0", # Support for SOCKS proxies "PySocks!=1.5.7,>=1.5.6",
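The patch above only tightens the lxml version specifier. As a hedged illustration (not part of the patch, and assuming the third-party `packaging` library is available), the new range can be checked to confirm it excludes both 4.6.3, which has no cp310 wheels, and a future 5.x line:

```python
# Illustrative check of the new "lxml>=4.6.4,<5.0" specifier from the diff above
from packaging.specifiers import SpecifierSet
from packaging.version import Version

spec = SpecifierSet(">=4.6.4,<5.0")

print(Version("4.6.3") in spec)  # False: the sdist-only release is excluded
print(Version("4.6.4") in spec)  # True: the release expected to ship cp310 wheels
print(Version("5.0.0") in spec)  # False: guards against a future major release
```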
python 3.10: missing lxml wheel on Windows and macOS (streamlink install fails) ### Checklist - [X] This is a bug report and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed bug reports](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22bug%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest build from the master branch ### Description pip install streamlink and pip3 install --upgrade streamlink ### Debug log ```text E:\Software\Fresh Install\Software>pip --version pip 21.3 from C:\Python310\lib\site-packages\pip (python 3.10) E:\Software\Fresh Install\Software>pip3 install --upgrade streamlink Requirement already satisfied: streamlink in c:\python310\lib\site-packages (1.4.0) Collecting streamlink Using cached streamlink-2.4.0-py3-none-win_amd64.whl (360 kB) Requirement already satisfied: PySocks!=1.5.7,>=1.5.6 in c:\python310\lib\site-packages (from streamlink) (1.7.1) Requirement already satisfied: iso3166 in c:\python310\lib\site-packages (from streamlink) (2.0.2) Requirement already satisfied: isodate in c:\python310\lib\site-packages (from streamlink) (0.6.0) Requirement already satisfied: pycryptodome<4,>=3.4.3 in c:\python310\lib\site-packages (from streamlink) (3.11.0) Requirement already satisfied: iso-639 in c:\python310\lib\site-packages (from streamlink) (0.4.5) Requirement already satisfied: requests<3.0,>=2.26.0 in c:\python310\lib\site-packages (from streamlink) (2.26.0) Collecting lxml>=4.6.3 Using cached lxml-4.6.3.tar.gz (3.2 MB) Preparing metadata (setup.py) ... done Requirement already satisfied: websocket-client>=0.58.0 in c:\python310\lib\site-packages (from streamlink) (1.2.1) Requirement already satisfied: urllib3<1.27,>=1.21.1 in c:\python310\lib\site-packages (from requests<3.0,>=2.26.0->streamlink) (1.26.7) Requirement already satisfied: charset-normalizer~=2.0.0 in c:\python310\lib\site-packages (from requests<3.0,>=2.26.0->streamlink) (2.0.7) Requirement already satisfied: idna<4,>=2.5 in c:\python310\lib\site-packages (from requests<3.0,>=2.26.0->streamlink) (3.3) Requirement already satisfied: certifi>=2017.4.17 in c:\python310\lib\site-packages (from requests<3.0,>=2.26.0->streamlink) (2021.10.8) Requirement already satisfied: six in c:\python310\lib\site-packages (from isodate->streamlink) (1.16.0) Building wheels for collected packages: lxml Building wheel for lxml (setup.py) ... error ERROR: Command errored out with exit status 1: command: 'C:\Python310\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\PC\\AppData\\Local\\Temp\\pip-install-vgglfm9y\\lxml_3f64fc9669184ac5a35c4d89e9659721\\setup.py'"'"'; __file__='"'"'C:\\Users\\PC\\AppData\\Local\\Temp\\pip-install-vgglfm9y\\lxml_3f64fc9669184ac5a35c4d89e9659721\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\PC\AppData\Local\Temp\pip-wheel-n93rpids' cwd: C:\Users\PC\AppData\Local\Temp\pip-install-vgglfm9y\lxml_3f64fc9669184ac5a35c4d89e9659721\ Complete output (74 lines): Building lxml version 4.6.3. 
Building without Cython. Building against pre-built libxml2 andl libxslt libraries running bdist_wheel running build running build_py creating build creating build\lib.win-amd64-3.10 creating build\lib.win-amd64-3.10\lxml copying src\lxml\builder.py -> build\lib.win-amd64-3.10\lxml copying src\lxml\cssselect.py -> build\lib.win-amd64-3.10\lxml copying src\lxml\doctestcompare.py -> build\lib.win-amd64-3.10\lxml copying src\lxml\ElementInclude.py -> build\lib.win-amd64-3.10\lxml copying src\lxml\pyclasslookup.py -> build\lib.win-amd64-3.10\lxml copying src\lxml\sax.py -> build\lib.win-amd64-3.10\lxml copying src\lxml\usedoctest.py -> build\lib.win-amd64-3.10\lxml copying src\lxml\_elementpath.py -> build\lib.win-amd64-3.10\lxml copying src\lxml\__init__.py -> build\lib.win-amd64-3.10\lxml creating build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\__init__.py -> build\lib.win-amd64-3.10\lxml\includes creating build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\builder.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\clean.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\defs.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\diff.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\ElementSoup.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\formfill.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\html5parser.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\soupparser.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\usedoctest.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\_diffcommand.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\_html5builder.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\_setmixin.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\__init__.py -> build\lib.win-amd64-3.10\lxml\html creating build\lib.win-amd64-3.10\lxml\isoschematron copying src\lxml\isoschematron\__init__.py -> build\lib.win-amd64-3.10\lxml\isoschematron copying src\lxml\etree.h -> build\lib.win-amd64-3.10\lxml copying src\lxml\etree_api.h -> build\lib.win-amd64-3.10\lxml copying src\lxml\lxml.etree.h -> build\lib.win-amd64-3.10\lxml copying src\lxml\lxml.etree_api.h -> build\lib.win-amd64-3.10\lxml copying src\lxml\includes\c14n.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\config.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\dtdvalid.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\etreepublic.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\htmlparser.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\relaxng.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\schematron.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\tree.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\uri.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\xinclude.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\xmlerror.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\xmlparser.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\xmlschema.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\xpath.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\xslt.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\__init__.pxd -> 
build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\etree_defs.h -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\lxml-version.h -> build\lib.win-amd64-3.10\lxml\includes creating build\lib.win-amd64-3.10\lxml\isoschematron\resources creating build\lib.win-amd64-3.10\lxml\isoschematron\resources\rng copying src\lxml\isoschematron\resources\rng\iso-schematron.rng -> build\lib.win-amd64-3.10\lxml\isoschematron\resources\rng creating build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl copying src\lxml\isoschematron\resources\xsl\RNG2Schtrn.xsl -> build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl copying src\lxml\isoschematron\resources\xsl\XSD2Schtrn.xsl -> build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl creating build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_abstract_expand.xsl -> build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_dsdl_include.xsl -> build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_message.xsl -> build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_skeleton_for_xslt1.xsl -> build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_svrl_for_xslt1.xsl -> build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\readme.txt -> build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 running build_ext building 'lxml.etree' extension error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ ---------------------------------------- ERROR: Failed building wheel for lxml Running setup.py clean for lxml Failed to build lxml Installing collected packages: lxml, streamlink Running setup.py install for lxml ... error ERROR: Command errored out with exit status 1: command: 'C:\Python310\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\PC\\AppData\\Local\\Temp\\pip-install-vgglfm9y\\lxml_3f64fc9669184ac5a35c4d89e9659721\\setup.py'"'"'; __file__='"'"'C:\\Users\\PC\\AppData\\Local\\Temp\\pip-install-vgglfm9y\\lxml_3f64fc9669184ac5a35c4d89e9659721\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\PC\AppData\Local\Temp\pip-record-6dzbzm0n\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\Python310\Include\lxml' cwd: C:\Users\PC\AppData\Local\Temp\pip-install-vgglfm9y\lxml_3f64fc9669184ac5a35c4d89e9659721\ Complete output (74 lines): Building lxml version 4.6.3. Building without Cython. 
Building against pre-built libxml2 andl libxslt libraries running install running build running build_py creating build creating build\lib.win-amd64-3.10 creating build\lib.win-amd64-3.10\lxml copying src\lxml\builder.py -> build\lib.win-amd64-3.10\lxml copying src\lxml\cssselect.py -> build\lib.win-amd64-3.10\lxml copying src\lxml\doctestcompare.py -> build\lib.win-amd64-3.10\lxml copying src\lxml\ElementInclude.py -> build\lib.win-amd64-3.10\lxml copying src\lxml\pyclasslookup.py -> build\lib.win-amd64-3.10\lxml copying src\lxml\sax.py -> build\lib.win-amd64-3.10\lxml copying src\lxml\usedoctest.py -> build\lib.win-amd64-3.10\lxml copying src\lxml\_elementpath.py -> build\lib.win-amd64-3.10\lxml copying src\lxml\__init__.py -> build\lib.win-amd64-3.10\lxml creating build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\__init__.py -> build\lib.win-amd64-3.10\lxml\includes creating build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\builder.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\clean.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\defs.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\diff.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\ElementSoup.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\formfill.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\html5parser.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\soupparser.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\usedoctest.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\_diffcommand.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\_html5builder.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\_setmixin.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\__init__.py -> build\lib.win-amd64-3.10\lxml\html creating build\lib.win-amd64-3.10\lxml\isoschematron copying src\lxml\isoschematron\__init__.py -> build\lib.win-amd64-3.10\lxml\isoschematron copying src\lxml\etree.h -> build\lib.win-amd64-3.10\lxml copying src\lxml\etree_api.h -> build\lib.win-amd64-3.10\lxml copying src\lxml\lxml.etree.h -> build\lib.win-amd64-3.10\lxml copying src\lxml\lxml.etree_api.h -> build\lib.win-amd64-3.10\lxml copying src\lxml\includes\c14n.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\config.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\dtdvalid.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\etreepublic.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\htmlparser.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\relaxng.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\schematron.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\tree.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\uri.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\xinclude.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\xmlerror.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\xmlparser.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\xmlschema.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\xpath.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\xslt.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\__init__.pxd -> 
build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\etree_defs.h -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\lxml-version.h -> build\lib.win-amd64-3.10\lxml\includes creating build\lib.win-amd64-3.10\lxml\isoschematron\resources creating build\lib.win-amd64-3.10\lxml\isoschematron\resources\rng copying src\lxml\isoschematron\resources\rng\iso-schematron.rng -> build\lib.win-amd64-3.10\lxml\isoschematron\resources\rng creating build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl copying src\lxml\isoschematron\resources\xsl\RNG2Schtrn.xsl -> build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl copying src\lxml\isoschematron\resources\xsl\XSD2Schtrn.xsl -> build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl creating build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_abstract_expand.xsl -> build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_dsdl_include.xsl -> build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_message.xsl -> build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_skeleton_for_xslt1.xsl -> build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_svrl_for_xslt1.xsl -> build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\readme.txt -> build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 running build_ext building 'lxml.etree' extension error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ ---------------------------------------- ERROR: Command errored out with exit status 1: 'C:\Python310\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\PC\\AppData\\Local\\Temp\\pip-install-vgglfm9y\\lxml_3f64fc9669184ac5a35c4d89e9659721\\setup.py'"'"'; __file__='"'"'C:\\Users\\PC\\AppData\\Local\\Temp\\pip-install-vgglfm9y\\lxml_3f64fc9669184ac5a35c4d89e9659721\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\PC\AppData\Local\Temp\pip-record-6dzbzm0n\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\Python310\Include\lxml' Check the logs for full command output. ```
`lxml` is still lacking `cp310` wheels for Windows and macOS, so pip will try to install lxml from source (tar-ball). When you don't have the lxml build dependencies installed, the install will fail.

- https://pypi.org/project/lxml/#files
- https://lxml.de/installation.html#building-lxml-from-dev-sources

Check this lxml mailing list message for a request for adding the missing wheels:
https://mail.python.org/archives/list/[email protected]/thread/ZMEVOLHJJDMZOFRFYOEWAK5Q3HO7RB75/

I've also already contacted the maintainers but haven't received an answer yet.

- https://github.com/streamlink/streamlink/pull/4009
- https://github.com/streamlink/streamlink/pull/4071#issuecomment-935945903
- https://gitter.im/streamlink/streamlink?at=616c531838377967f4595415
- https://gitter.im/streamlink/streamlink?at=616c5497f2cedf67f98ddc55

As you can see in the commit message of 6943bf173cfed9173f5ef3b84aad6644131f9924, Streamlink currently installs unofficial wheels of lxml in the Windows Python 3.10 CI runners, which were downloaded from here:
https://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml

Windows AMD64 / x86_64
```sh
pip install -U "https://download.lfd.uci.edu/pythonlibs/y2rycu7g/lxml-4.6.3-cp310-cp310-win_amd64.whl; python_version>=3.10 and platform_system=='Windows' and (platform_machine=='AMD64' or platform_machine=='x86_64')" --hash=sha256:556433cadf982a578ee74e989883a07aa9d2a59a04eb2b9f6397c0b4b471ecad
```

Windows x86
```sh
pip install -U "https://download.lfd.uci.edu/pythonlibs/y2rycu7g/lxml-4.6.3-cp310-cp310-win32.whl; python_version>=3.10 and platform_system=='Windows' and platform_machine!='AMD64' and platform_machine!='x86_64'" --hash=sha256:f756d973f89dd0715b5e93301afb9ebdd612ef53ac8986516c1eb0bc6518d85f
```

I got it working by grabbing that commit from the history; I guess it was removed from GitHub, as installing directly still failed earlier.

```
pushd "E:\Software\Fresh Install\Software"
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python get-pip.py
python -m pip install --disable-pip-version-check --upgrade pip setuptools
python -m pip install pycountry
python -m pip install "https://github.com/back-to/tmp_wheel/raw/b237059b18110ca298e191340eebb6eb0aef8827/lxml-4.6.3-cp310-cp310-win_amd64.whl"
python -m pip install "https://github.com/back-to/tmp_wheel/raw/b237059b18110ca298e191340eebb6eb0aef8827/lxml-4.6.3-cp310-cp310-win32.whl"
python -m pip install streamlink
del /q /s "get-pip.py"
```

I cleaned it up a bit since I ran it independently.

According to one of the lxml devs, lxml 4.6.4 will presumably be out next week, which will then include wheels for all platforms that are currently (4.6.3) missing on cp310:

- win32
- win_amd64
- macosx_x86_64
- manylinux2014_aarch64

Bumping the min version requirement here is probably not necessary, but could be done to ensure that nobody runs into any issues. Then we can also finally add the Python 3.10 entry to the package classifiers list and resolve this blocking issue for the next release.
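As a generic aside (standard pip usage, not something suggested in the thread), pip can be told to refuse source builds outright, which makes a missing wheel fail immediately instead of after a long compile attempt; the package and version range below are just the ones discussed in this row:

```sh
# Fail fast if no binary wheel exists for the current interpreter/platform,
# instead of falling back to building lxml from the sdist
pip install --only-binary :all: "lxml>=4.6.4,<5.0"

# Or only check what pip would pick, without installing anything
pip download --only-binary :all: --no-deps "lxml>=4.6.4,<5.0" -d ./wheel-check
```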
created_at: 2021-11-03T02:07:30
repo: streamlink/streamlink
pull_number: 4163
instance_id: streamlink__streamlink-4163
issue_numbers: ["4161"]
base_commit: 47b715a3693171cb6e4ee2968627770bb3d5c1bc
diff --git a/src/streamlink/session.py b/src/streamlink/session.py --- a/src/streamlink/session.py +++ b/src/streamlink/session.py @@ -3,7 +3,7 @@ from collections import OrderedDict from functools import lru_cache from socket import AF_INET, AF_INET6 -from typing import Dict, Optional, Type +from typing import Dict, Optional, Tuple, Type import requests import requests.packages.urllib3.util.connection as urllib3_connection @@ -341,10 +341,10 @@ def get_plugin_option(self, plugin, key): return plugin.get_option(key) @lru_cache(maxsize=128) - def resolve_url(self, url: str, follow_redirect: bool = True) -> Plugin: + def resolve_url(self, url: str, follow_redirect: bool = True) -> Tuple[Type[Plugin], str]: """Attempts to find a plugin that can use this URL. - The default protocol (http) will be prefixed to the URL if + The default protocol (https) will be prefixed to the URL if not specified. Raises :exc:`NoPluginError` on failure. @@ -373,11 +373,12 @@ def resolve_url(self, url: str, follow_redirect: bool = True) -> Plugin: priority = prio if candidate: - return candidate(url) + return candidate, url if follow_redirect: # Attempt to handle a redirect URL try: + # noinspection PyArgumentList res = self.http.head(url, allow_redirects=True, acceptable_status=[501]) # Fall back to GET request if server doesn't handle HEAD. @@ -391,10 +392,10 @@ def resolve_url(self, url: str, follow_redirect: bool = True) -> Plugin: raise NoPluginError - def resolve_url_no_redirect(self, url): + def resolve_url_no_redirect(self, url: str) -> Tuple[Type[Plugin], str]: """Attempts to find a plugin that can use this URL. - The default protocol (http) will be prefixed to the URL if + The default protocol (https) will be prefixed to the URL if not specified. Raises :exc:`NoPluginError` on failure. @@ -404,7 +405,7 @@ def resolve_url_no_redirect(self, url): """ return self.resolve_url(url, follow_redirect=False) - def streams(self, url, **params): + def streams(self, url: str, **params): """Attempts to find a plugin and extract streams from the *url*. *params* are passed to :func:`Plugin.streams`. @@ -412,7 +413,9 @@ def streams(self, url, **params): Raises :exc:`NoPluginError` if no plugin is found. 
""" - plugin = self.resolve_url(url) + pluginclass, resolved_url = self.resolve_url(url) + plugin = pluginclass(resolved_url) + return plugin.streams(**params) def get_plugins(self): diff --git a/src/streamlink_cli/main.py b/src/streamlink_cli/main.py --- a/src/streamlink_cli/main.py +++ b/src/streamlink_cli/main.py @@ -13,7 +13,7 @@ from itertools import chain from pathlib import Path from time import sleep -from typing import List +from typing import Dict, List, Type import requests from socks import __version__ as socks_version @@ -24,7 +24,7 @@ from streamlink.cache import Cache from streamlink.exceptions import FatalPluginError from streamlink.plugin import Plugin, PluginOptions -from streamlink.stream.stream import StreamIO +from streamlink.stream.stream import Stream, StreamIO from streamlink.stream.streamprocess import StreamProcess from streamlink.utils.named_pipe import NamedPipe from streamlink_cli.argparser import build_parser @@ -44,10 +44,11 @@ args = None console: ConsoleOutput = None output: Output = None -plugin: Plugin = None stream_fd: StreamIO = None streamlink: Streamlink = None +Streams = Dict[str, Stream] + log = logging.getLogger("streamlink.cli") @@ -174,7 +175,7 @@ def iter_http_requests(server, player): continue -def output_stream_http(plugin, initial_streams, formatter: Formatter, external=False, port=0): +def output_stream_http(plugin: Plugin, initial_streams: Streams, formatter: Formatter, external=False, port=0): """Continuously output the stream over HTTP.""" global output @@ -395,7 +396,7 @@ def read_stream(stream, output, prebuffer, formatter: Formatter, chunk_size=8192 log.info("Stream ended") -def handle_stream(plugin, streams, stream_name): +def handle_stream(plugin: Plugin, streams: Streams, stream_name: str) -> None: """Decides what to do with the selected stream. Depending on arguments it can be one of these: @@ -465,14 +466,14 @@ def handle_stream(plugin, streams, stream_name): break -def fetch_streams(plugin): +def fetch_streams(plugin: Plugin) -> Streams: """Fetches streams using correct parameters.""" return plugin.streams(stream_types=args.stream_types, sorting_excludes=args.stream_sorting_excludes) -def fetch_streams_with_retry(plugin, interval, count): +def fetch_streams_with_retry(plugin: Plugin, interval: float, count: int) -> Streams: """Attempts to fetch streams repeatedly until some are returned or limit hit.""" @@ -504,7 +505,7 @@ def fetch_streams_with_retry(plugin, interval, count): return streams -def resolve_stream_name(streams, stream_name): +def resolve_stream_name(streams: Streams, stream_name: str) -> str: """Returns the real stream name of a synonym.""" if stream_name in STREAM_SYNONYMS and stream_name in streams: @@ -515,7 +516,7 @@ def resolve_stream_name(streams, stream_name): return stream_name -def format_valid_streams(plugin, streams): +def format_valid_streams(plugin: Plugin, streams: Streams) -> str: """Formats a dict of streams. 
Filters out synonyms and displays them next to @@ -560,8 +561,9 @@ def handle_url(): """ try: - plugin = streamlink.resolve_url(args.url) - setup_plugin_options(streamlink, plugin) + pluginclass, resolved_url = streamlink.resolve_url(args.url) + setup_plugin_options(streamlink, pluginclass) + plugin = pluginclass(resolved_url) log.info(f"Found matching plugin {plugin.module} for URL {args.url}") if args.retry_max or args.retry_streams: @@ -571,8 +573,7 @@ def handle_url(): retry_streams = args.retry_streams if args.retry_max: retry_max = args.retry_max - streams = fetch_streams_with_retry(plugin, retry_streams, - retry_max) + streams = fetch_streams_with_retry(plugin, retry_streams, retry_max) else: streams = fetch_streams(plugin) except NoPluginError: @@ -683,9 +684,9 @@ def setup_config_args(parser, ignore_unknown=False): if streamlink and args.url: # Only load first available plugin config with ignored(NoPluginError): - plugin = streamlink.resolve_url(args.url) + pluginclass, resolved_url = streamlink.resolve_url(args.url) for config_file in CONFIG_FILES: - config_file = config_file.with_name(f"{config_file.name}.{plugin.module}") + config_file = config_file.with_name(f"{config_file.name}.{pluginclass.module}") if not config_file.is_file(): continue if type(config_file) is DeprecatedPath: @@ -881,7 +882,7 @@ def setup_plugin_args(session, parser): plugin.options = PluginOptions(defaults) -def setup_plugin_options(session, plugin): +def setup_plugin_options(session: Streamlink, plugin: Type[Plugin]): """Sets Streamlink plugin options.""" pname = plugin.module required = OrderedDict({})
diff --git a/tests/plugins/test_stream.py b/tests/plugins/test_stream.py --- a/tests/plugins/test_stream.py +++ b/tests/plugins/test_stream.py @@ -1,6 +1,8 @@ import unittest from unittest.mock import patch +import requests_mock + from streamlink import Streamlink from streamlink.plugin.plugin import parse_params, stream_weight from streamlink.stream.akamaihd import AkamaiHDStream @@ -13,12 +15,18 @@ class TestPluginStream(unittest.TestCase): def setUp(self): self.session = Streamlink() + def resolve_url(self, url): + with requests_mock.Mocker() as mock: + mock.register_uri(requests_mock.ANY, requests_mock.ANY, text="") + pluginclass, resolved_url = self.session.resolve_url(url) + return pluginclass(resolved_url) + def assertDictHas(self, a, b): for key, value in a.items(): self.assertEqual(b[key], value) def _test_akamaihd(self, surl, url): - plugin = self.session.resolve_url(surl) + plugin = self.resolve_url(surl) streams = plugin.streams() self.assertTrue("live" in streams) @@ -31,7 +39,7 @@ def _test_akamaihd(self, surl, url): def _test_hls(self, surl, url, mock_parse): mock_parse.return_value = {} - plugin = self.session.resolve_url(surl) + plugin = self.resolve_url(surl) streams = plugin.streams() self.assertIn("live", streams) @@ -45,7 +53,7 @@ def _test_hls(self, surl, url, mock_parse): def _test_hlsvariant(self, surl, url, mock_parse): mock_parse.return_value = {"best": HLSStream(self.session, url)} - plugin = self.session.resolve_url(surl) + plugin = self.resolve_url(surl) streams = plugin.streams() mock_parse.assert_called_with(self.session, url) @@ -58,7 +66,7 @@ def _test_hlsvariant(self, surl, url, mock_parse): self.assertEqual(stream.url, url) def _test_rtmp(self, surl, url, params): - plugin = self.session.resolve_url(surl) + plugin = self.resolve_url(surl) streams = plugin.streams() self.assertIn("live", streams) @@ -69,7 +77,7 @@ def _test_rtmp(self, surl, url, params): self.assertDictHas(params, stream.params) def _test_http(self, surl, url, params): - plugin = self.session.resolve_url(surl) + plugin = self.resolve_url(surl) streams = plugin.streams() self.assertIn("live", streams) diff --git a/tests/test_cli_main.py b/tests/test_cli_main.py --- a/tests/test_cli_main.py +++ b/tests/test_cli_main.py @@ -25,7 +25,19 @@ setup_config_args ) from streamlink_cli.output import FileOutput, PlayerOutput -from tests.plugin.testplugin import TestPlugin as FakePlugin +from tests.plugin.testplugin import TestPlugin as _TestPlugin + + +class FakePlugin(_TestPlugin): + module = "fake" + arguments = [] + _streams = {} + + def streams(self, *args, **kwargs): + return self._streams + + def _get_streams(self): # pragma: no cover + pass class TestCLIMain(unittest.TestCase): @@ -71,7 +83,7 @@ def test_format_valid_streams(self): "best": c } self.assertEqual( - format_valid_streams(FakePlugin, streams), + format_valid_streams(_TestPlugin, streams), ", ".join([ "audio", "720p (worst)", @@ -87,7 +99,7 @@ def test_format_valid_streams(self): "best-unfiltered": c } self.assertEqual( - format_valid_streams(FakePlugin, streams), + format_valid_streams(_TestPlugin, streams), ", ".join([ "audio", "720p (worst-unfiltered)", @@ -102,10 +114,9 @@ class TestCLIMainJsonAndStreamUrl(unittest.TestCase): def test_handle_stream_with_json_and_stream_url(self, console, args): stream = Mock() streams = dict(best=stream) + plugin = FakePlugin("") - plugin.module = "fake" - plugin.arguments = [] - plugin.streams = Mock(return_value=streams) + plugin._streams = streams handle_stream(plugin, streams, "best") 
self.assertEqual(console.msg.mock_calls, []) @@ -138,12 +149,11 @@ def test_handle_stream_with_json_and_stream_url(self, console, args): def test_handle_url_with_json_and_stream_url(self, console, args): stream = Mock() streams = dict(worst=Mock(), best=stream) - plugin = FakePlugin("") - plugin.module = "fake" - plugin.arguments = [] - plugin.streams = Mock(return_value=streams) - with patch("streamlink_cli.main.streamlink", resolve_url=Mock(return_value=plugin)): + class _FakePlugin(FakePlugin): + _streams = streams + + with patch("streamlink_cli.main.streamlink", resolve_url=Mock(return_value=(_FakePlugin, ""))): handle_url() self.assertEqual(console.msg.mock_calls, []) self.assertEqual(console.msg_json.mock_calls, [call( @@ -382,7 +392,7 @@ def test_handle_stream_output_stream(self, args: Mock, mock_output_stream: Mock) args.player_continuous_http = False mock_output_stream.return_value = True - plugin = FakePlugin("") + plugin = _TestPlugin("") plugin.author = "AUTHOR" plugin.category = "CATEGORY" plugin.title = "TITLE" @@ -410,7 +420,7 @@ def subject(cls, config_files, **args): def resolve_url(name): if name == "noplugin": raise NoPluginError() - return Mock(module="testplugin") + return Mock(module="testplugin"), name session = Mock() session.resolve_url.side_effect = resolve_url diff --git a/tests/test_plugins_input.py b/tests/test_plugins_input.py --- a/tests/test_plugins_input.py +++ b/tests/test_plugins_input.py @@ -52,6 +52,7 @@ def test_set_via_session(self): session = Streamlink({"user-input-requester": console_input}) session.load_plugins(os.path.join(os.path.dirname(__file__), "plugin")) - p = session.resolve_url("http://test.se/channel") + pluginclass, resolved_url = session.resolve_url("http://test.se/channel") + p = pluginclass(resolved_url) self.assertEqual("username", p.input_ask("username")) self.assertEqual("password", p.input_ask_password("password")) diff --git a/tests/test_session.py b/tests/test_session.py --- a/tests/test_session.py +++ b/tests/test_session.py @@ -4,6 +4,7 @@ from socket import AF_INET, AF_INET6 from unittest.mock import Mock, call, patch +import requests_mock from requests.packages.urllib3.util.connection import allowed_gai_family from streamlink import NoPluginError, Streamlink @@ -20,8 +21,19 @@ def _get_streams(self): class TestSession(unittest.TestCase): + mocker: requests_mock.Mocker + plugin_path = os.path.join(os.path.dirname(__file__), "plugin") + def setUp(self): + self.mocker = requests_mock.Mocker() + self.mocker.register_uri(requests_mock.ANY, requests_mock.ANY, text="") + self.mocker.start() + + def tearDown(self): + self.mocker.stop() + Streamlink.resolve_url.cache_clear() + def subject(self, load_plugins=True): session = Streamlink() if load_plugins: @@ -29,9 +41,18 @@ def subject(self, load_plugins=True): return session - def test_exceptions(self): - session = self.subject() - self.assertRaises(NoPluginError, session.resolve_url, "invalid url", follow_redirect=False) + @staticmethod + def _resolve_url(method, *args, **kwargs) -> Plugin: + pluginclass, resolved_url = method(*args, **kwargs) + return pluginclass(resolved_url) + + def resolve_url(self, session: Streamlink, url: str, *args, **kwargs) -> Plugin: + return self._resolve_url(session.resolve_url, url, *args, **kwargs) + + def resolve_url_no_redirect(self, session: Streamlink, url: str, *args, **kwargs) -> Plugin: + return self._resolve_url(session.resolve_url_no_redirect, url, *args, **kwargs) + + # ---- def test_load_plugins(self): session = self.subject() @@ -76,11 
+97,45 @@ def test_load_plugins_syntaxerror(self, mock_log, mock_load_module): def test_resolve_url(self): session = self.subject() plugins = session.get_plugins() - plugin = session.resolve_url("http://test.se/channel") - self.assertTrue(isinstance(plugin, Plugin)) - self.assertTrue(isinstance(plugin, plugins["testplugin"])) + + pluginclass, resolved_url = session.resolve_url("http://test.se/channel") + self.assertTrue(issubclass(pluginclass, Plugin)) + self.assertIs(pluginclass, plugins["testplugin"]) + self.assertEqual(resolved_url, "http://test.se/channel") self.assertTrue(hasattr(session.resolve_url, "cache_info"), "resolve_url has a lookup cache") + def test_resolve_url__noplugin(self): + session = self.subject() + self.mocker.get("http://invalid2", status_code=301, headers={"Location": "http://invalid3"}) + + self.assertRaises(NoPluginError, session.resolve_url, "http://invalid1") + self.assertRaises(NoPluginError, session.resolve_url, "http://invalid2") + + def test_resolve_url__redirected(self): + session = self.subject() + plugins = session.get_plugins() + self.mocker.head("http://redirect1", status_code=501) + self.mocker.get("http://redirect1", status_code=301, headers={"Location": "http://redirect2"}) + self.mocker.head("http://redirect2", status_code=301, headers={"Location": "http://test.se/channel"}) + + pluginclass, resolved_url = session.resolve_url("http://redirect1") + self.assertTrue(issubclass(pluginclass, Plugin)) + self.assertIs(pluginclass, plugins["testplugin"]) + self.assertEqual(resolved_url, "http://test.se/channel") + + def test_resolve_url_no_redirect(self): + session = self.subject() + plugins = session.get_plugins() + + pluginclass, resolved_url = session.resolve_url_no_redirect("http://test.se/channel") + self.assertTrue(issubclass(pluginclass, Plugin)) + self.assertIs(pluginclass, plugins["testplugin"]) + self.assertEqual(resolved_url, "http://test.se/channel") + + def test_resolve_url_no_redirect__noplugin(self): + session = self.subject() + self.assertRaises(NoPluginError, session.resolve_url_no_redirect, "http://invalid") + def test_resolve_url_scheme(self): @pluginmatcher(re.compile("http://insecure")) class PluginHttp(EmptyPlugin): @@ -96,13 +151,13 @@ class PluginHttps(EmptyPlugin): "secure": PluginHttps, } - self.assertRaises(NoPluginError, session.resolve_url, "insecure") - self.assertIsInstance(session.resolve_url("http://insecure"), PluginHttp) - self.assertRaises(NoPluginError, session.resolve_url, "https://insecure") + self.assertRaises(NoPluginError, self.resolve_url, session, "insecure") + self.assertIsInstance(self.resolve_url(session, "http://insecure"), PluginHttp) + self.assertRaises(NoPluginError, self.resolve_url, session, "https://insecure") - self.assertIsInstance(session.resolve_url("secure"), PluginHttps) - self.assertRaises(NoPluginError, session.resolve_url, "http://secure") - self.assertIsInstance(session.resolve_url("https://secure"), PluginHttps) + self.assertIsInstance(self.resolve_url(session, "secure"), PluginHttps) + self.assertRaises(NoPluginError, self.resolve_url, session, "http://secure") + self.assertIsInstance(self.resolve_url(session, "https://secure"), PluginHttps) def test_resolve_url_priority(self): @pluginmatcher(priority=HIGH_PRIORITY, pattern=re.compile( @@ -136,10 +191,10 @@ class NoPriority(EmptyPlugin): "low": LowPriority, "no": NoPriority, } - no = session.resolve_url_no_redirect("no") - low = session.resolve_url_no_redirect("low") - normal = session.resolve_url_no_redirect("normal") - high = 
session.resolve_url_no_redirect("high") + no = self.resolve_url_no_redirect(session, "no") + low = self.resolve_url_no_redirect(session, "low") + normal = self.resolve_url_no_redirect(session, "normal") + high = self.resolve_url_no_redirect(session, "high") self.assertIsInstance(no, HighPriority) self.assertIsInstance(low, HighPriority) @@ -151,7 +206,7 @@ class NoPriority(EmptyPlugin): "no": NoPriority, } with self.assertRaises(NoPluginError): - session.resolve_url_no_redirect("no") + self.resolve_url_no_redirect(session, "no") @patch("streamlink.session.log") def test_resolve_deprecated(self, mock_log: Mock): @@ -182,19 +237,12 @@ def priority(cls, url): "dep-high": DeprecatedHighPriority, } - self.assertIsInstance(session.resolve_url_no_redirect("low"), DeprecatedHighPriority) + self.assertIsInstance(self.resolve_url_no_redirect(session, "low"), DeprecatedHighPriority) self.assertEqual(mock_log.info.mock_calls, [ call("Resolved plugin dep-normal-one with deprecated can_handle_url API"), call("Resolved plugin dep-high with deprecated can_handle_url API") ]) - def test_resolve_url_no_redirect(self): - session = self.subject() - plugin = session.resolve_url_no_redirect("http://test.se/channel") - plugins = session.get_plugins() - self.assertTrue(isinstance(plugin, Plugin)) - self.assertTrue(isinstance(plugin, plugins["testplugin"])) - def test_options(self): session = self.subject() session.set_option("test_option", "option") @@ -209,7 +257,7 @@ def test_options(self): def test_plugin(self): session = self.subject() - plugin = session.resolve_url("http://test.se/channel") + plugin = self.resolve_url(session, "http://test.se/channel") streams = plugin.streams() self.assertTrue("best" in streams) @@ -223,7 +271,7 @@ def test_plugin(self): def test_plugin_stream_types(self): session = self.subject() - plugin = session.resolve_url("http://test.se/channel") + plugin = self.resolve_url(session, "http://test.se/channel") streams = plugin.streams(stream_types=["http", "rtmp"]) self.assertTrue(isinstance(streams["480p"], HTTPStream)) @@ -236,7 +284,7 @@ def test_plugin_stream_types(self): def test_plugin_stream_sorting_excludes(self): session = self.subject() - plugin = session.resolve_url("http://test.se/channel") + plugin = self.resolve_url(session, "http://test.se/channel") streams = plugin.streams(sorting_excludes=[]) self.assertTrue("best" in streams) @@ -268,7 +316,7 @@ def test_plugin_stream_sorting_excludes(self): self.assertTrue(streams["worst-unfiltered"] is streams["350k"]) self.assertTrue(streams["best-unfiltered"] is streams["1080p"]) - plugin = session.resolve_url("http://test.se/UnsortableStreamNames") + plugin = self.resolve_url(session, "http://test.se/UnsortableStreamNames") streams = plugin.streams() self.assertFalse("best" in streams) self.assertFalse("worst" in streams)
Plugin keeps re-initializing when running too many sessions at the same time

### Checklist
- [X] This is a bug report and not a different kind of issue
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed bug reports](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22bug%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)

### Streamlink version
Latest stable release

### Description
Plugin:

```py
# encoding: utf-8
import re

from streamlink.plugin import Plugin, PluginArgument, PluginArguments, pluginmatcher


@pluginmatcher(re.compile(
    r'(http)?(s)?(://)?(www\.)?test.com/(?P<channel>[^/]+)'
))
class Test(Plugin):
    def __init__(self, url: str) -> None:
        self.url = url
        try:
            self.load_cookies()
        except RuntimeError:
            pass  # unbound cannot load
        self.lock = self.session.get_option("lock")
        with self.lock:
            print("__init__", self.url)

    def _get_streams(self):
        with self.lock:
            print("_get_streams", self.url)


__plugin__ = Test
```

Reproduction code:

```
import streamlink
import time
import threading

lock = threading.Lock()

def GetStream(url):
    session = streamlink.Streamlink()
    session.set_option('lock', lock)
    session.options.set('lock', lock)
    while True:
        streams = session.streams(url)
        time.sleep(10)

count = 200
threading_pool = []
for i in range(count):
    thread = threading.Thread(target = GetStream, args = ('test.com/{}'.format(i) , ))
    threading_pool.append(thread)

for thread in threading_pool:
    thread.start()

for thread in threading_pool:
    thread.join()
```

I just found out it's because of `@lru_cache(maxsize=128)` in session.py. After I changed it to 1024, everything is fine. Anyway, there should at least be a warning that the cache is full or something similar. I only upgraded Streamlink and then realized some features were broken.

### Debug log

```text

```
Related: #4091

The URL resolver should return plugin classes and not plugin instances. That would fix this issue, too.

I've removed the debug log, because it's causing massive lag with all the parsed links. Might be caused by the refined-github browser extension, but I don't care. The log's not important.
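To illustrate the fix described above and implemented in this row's patch, here is a heavily simplified, hedged sketch (the class and method names are stripped down to the bare minimum and are not Streamlink's actual session code): the `lru_cache` only memoizes the cheap class/URL lookup, while the potentially expensive `Plugin.__init__` runs on every `streams()` call, outside the cache.

```python
from functools import lru_cache
from typing import Dict, Tuple, Type


class Plugin:
    def __init__(self, url: str) -> None:
        # anything expensive or blocking a plugin does at init time is
        # no longer tied to cache hits and evictions
        self.url = url

    def streams(self, **params) -> Dict[str, object]:
        return {}


class Session:
    @lru_cache(maxsize=128)
    def resolve_url(self, url: str) -> Tuple[Type[Plugin], str]:
        # plugin matching logic omitted; only the class and the
        # (possibly rewritten) URL are cached, never an instance
        return Plugin, url

    def streams(self, url: str, **params) -> Dict[str, object]:
        pluginclass, resolved_url = self.resolve_url(url)
        plugin = pluginclass(resolved_url)  # instantiated on every call
        return plugin.streams(**params)


if __name__ == "__main__":
    print(Session().streams("https://example.invalid/channel"))
```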
created_at: 2021-11-10T10:41:08
repo: streamlink/streamlink
pull_number: 4175
instance_id: streamlink__streamlink-4175
issue_numbers: ["4170"]
base_commit: 9d371d9f3d9b5ad99569f71268c4c77785d07a42
diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -1,5 +1,5 @@ #!/usr/bin/env python -from os import environ, path +from os import path from sys import argv, exit, version_info from textwrap import dedent @@ -38,23 +38,6 @@ def format_msg(text, *args, **kwargs): """)) -deps = [ - "isodate", - "lxml>=4.6.4,<5.0", - "pycryptodome>=3.4.3,<4", - "PySocks!=1.5.7,>=1.5.6", - "requests>=2.26.0,<3.0", - "websocket-client>=1.2.1,<2.0", -] - -# for localization -if environ.get("STREAMLINK_USE_PYCOUNTRY"): - deps.append("pycountry") -else: - deps.append("iso-639") - deps.append("iso3166") - - def is_wheel_for_windows(): if "bdist_wheel" in argv: names = ["win32", "win-amd64", "cygwin"] @@ -96,7 +79,6 @@ def is_wheel_for_windows(): setup( version=versioneer.get_version(), cmdclass=versioneer.get_cmdclass(), - install_requires=deps, entry_points=entry_points, data_files=data_files, ) diff --git a/src/streamlink/utils/l10n.py b/src/streamlink/utils/l10n.py --- a/src/streamlink/utils/l10n.py +++ b/src/streamlink/utils/l10n.py @@ -1,20 +1,11 @@ import locale import logging - -try: - from iso639 import languages - from iso3166 import countries - - PYCOUNTRY = False -except ImportError: # pragma: no cover - from pycountry import languages, countries - - PYCOUNTRY = True +from pycountry import countries, languages DEFAULT_LANGUAGE = "en" DEFAULT_COUNTRY = "US" -DEFAULT_LANGUAGE_CODE = "{0}_{1}".format(DEFAULT_LANGUAGE, DEFAULT_COUNTRY) +DEFAULT_LANGUAGE_CODE = f"{DEFAULT_LANGUAGE}_{DEFAULT_COUNTRY}" log = logging.getLogger(__name__) @@ -30,14 +21,16 @@ def __init__(self, alpha2, alpha3, numeric, name, official_name=None): @classmethod def get(cls, country): try: - if PYCOUNTRY: - c = countries.lookup(country) - return Country(c.alpha_2, c.alpha_3, c.numeric, c.name, getattr(c, "official_name", c.name)) - else: - c = countries.get(country) - return Country(c.alpha2, c.alpha3, c.numeric, c.name, c.apolitical_name) + c = countries.lookup(country) + return Country( + c.alpha_2, + c.alpha_3, + c.numeric, + c.name, + getattr(c, "official_name", c.name) + ) except (LookupError, KeyError): - raise LookupError("Invalid country code: {0}".format(country)) + raise LookupError(f"Invalid country code: {country}") def __eq__(self, other): return ( @@ -66,38 +59,23 @@ def __init__(self, alpha2, alpha3, name, bibliographic=None): @classmethod def get(cls, language): try: - if PYCOUNTRY: - lang = (languages.get(alpha_2=language) - or languages.get(alpha_3=language) - or languages.get(bibliographic=language) - or languages.get(name=language)) - if not lang: - raise KeyError(language) - return Language( - # some languages don't have an alpha_2 code - getattr(lang, "alpha_2", ""), - lang.alpha_3, - lang.name, - getattr(lang, "bibliographic", "") - ) - else: - lang = None - if len(language) == 2: - lang = languages.get(alpha2=language) - elif len(language) == 3: - for code_type in ['part2b', 'part2t', 'part3']: - try: - lang = languages.get(**{code_type: language}) - break - except KeyError: - pass - if not lang: - raise KeyError(language) - else: - raise KeyError(language) - return Language(lang.alpha2, lang.part3, lang.name, lang.part2b or lang.part2t) + lang = ( + languages.get(alpha_2=language) + or languages.get(alpha_3=language) + or languages.get(bibliographic=language) + or languages.get(name=language) + ) + if not lang: + raise KeyError(language) + return Language( + # some languages don't have an alpha_2 code + getattr(lang, "alpha_2", ""), + lang.alpha_3, + lang.name, + getattr(lang, "bibliographic", 
"") + ) except (LookupError, KeyError): - raise LookupError("Invalid language code: {0}".format(language)) + raise LookupError(f"Invalid language code: {language}") def __eq__(self, other): return ( @@ -130,7 +108,7 @@ def language_code(self): def _parse_locale_code(self, language_code): parts = language_code.split("_", 1) if len(parts) != 2 or len(parts[0]) != 2 or len(parts[1]) != 2: - raise LookupError("Invalid language code: {0}".format(language_code)) + raise LookupError(f"Invalid language code: {language_code}") return self.get_language(parts[0]), self.get_country(parts[1]) @language_code.setter @@ -156,7 +134,7 @@ def language_code(self, language_code): self._language_code = DEFAULT_LANGUAGE_CODE else: raise - log.debug("Language code: {0}".format(self._language_code)) + log.debug(f"Language code: {self._language_code}") def equivalent(self, language=None, country=None): equivalent = True
diff --git a/tests/utils/test_l10n.py b/tests/utils/test_l10n.py --- a/tests/utils/test_l10n.py +++ b/tests/utils/test_l10n.py @@ -3,23 +3,8 @@ import streamlink.utils.l10n as l10n -try: - import iso639 # noqa: F401 - import iso3166 # noqa: F401 - ISO639 = True -except ImportError: # pragma: no cover - ISO639 = False - -try: - import pycountry # noqa: F401 - - PYCOUNTRY = True -except ImportError: # pragma: no cover - PYCOUNTRY = False - - -class LocalizationTestsMixin: +class TestLocalization(unittest.TestCase): def test_language_code_us(self): locale = l10n.Localization("en_US") self.assertEqual("en_US", locale.language_code) @@ -117,29 +102,6 @@ def test_language_a3_no_a2(self): self.assertEqual(a.name, "Desano") self.assertEqual(a.bibliographic, "") - [email protected](not ISO639, "iso639+iso3166 modules are required to test iso639+iso3166 Localization") -class TestLocalization(LocalizationTestsMixin, unittest.TestCase): - def setUp(self): - l10n.PYCOUNTRY = False - - def test_pycountry(self): - self.assertEqual(False, l10n.PYCOUNTRY) - - [email protected](not PYCOUNTRY, "pycountry module required to test pycountry Localization") -class TestLocalizationPyCountry(LocalizationTestsMixin, unittest.TestCase): - """Duplicate of all the Localization tests but using PyCountry instead of the iso* modules""" - - def setUp(self): - from pycountry import languages, countries - l10n.countries = countries - l10n.languages = languages - l10n.PYCOUNTRY = True - - def test_pycountry(self): - self.assertEqual(True, l10n.PYCOUNTRY) - # issue #3057: generic "en" lookups via pycountry yield the "En" language, but not "English" def test_language_en(self): english_a = l10n.Localization.get_language("en")
Environment variables for defining alternative dependencies in setup.py

As mentioned in #4113, I'd prefer having dependencies defined declaratively via setup.cfg instead of defining a list and adding it to `setup(install_requires=...)` in setup.py. Optional dependencies can be added via `extras_require`, but this unfortunately can't be used for defining two different sets of dependencies (exclusive-or).

https://setuptools.pypa.io/en/latest/userguide/dependency_management.html#declaring-required-dependency
https://setuptools.pypa.io/en/latest/userguide/dependency_management.html#optional-dependencies

----

**Let's take a look at the env vars in setup.py for choosing alternative dependencies:**

## STREAMLINK_USE_PYCRYPTO

The `STREAMLINK_USE_PYCRYPTO` env var and the `pycrypto` dependency were added in #713 by @mmetak due to `pycryptodome` not being in the repos of some Linux distros at that time (Arch and Fedora), but that isn't the case anymore. Everyone's switched to `pycryptodome` or `pycryptodomex` by applying patches. This means we should remove the env var.

- **Arch Linux** (community) - 2018-08-27
  https://github.com/archlinux/svntogit-community/commit/4fa0fa3f789fbae45f8b3ecd2fee6f9450e327c6
- **Arch Linux** (AUR) - 2020-04-29
  https://aur.archlinux.org/cgit/aur.git/commit/?h=streamlink-git&id=92b1679a6616b2d89c60ba8a0e25c3b050439c17
- **Fedora** - 2019-08-20
  https://src.fedoraproject.org/rpms/python-streamlink/c/49187b8b7ce5e6836225d332a31bec5e5a30c20c?branch=rawhide
  https://src.fedoraproject.org/rpms/python-streamlink/tree/rawhide
- **Void** - 2018-06-06
  https://github.com/void-linux/void-packages/commit/0a9396a547609eeeb0972c571203beb9f23d3d80
- **Debian** / **Ubuntu** - 2019-08-23 (as always a pain to research, thanks :+1:)
  https://launchpadlibrarian.net/452638112/streamlink_1.2.0+dfsg-1~webupd8~disco2_1.3.0+dfsg-1~webupd8~disco.diff.gz
- **Gentoo** - 2017-03-05
  https://gitweb.gentoo.org/repo/gentoo.git/commit/net-misc/streamlink/streamlink-9999.ebuild?id=d377f979b7ff6cbb1185dc02b18b4770eff56a3f

## STREAMLINK_USE_PYCOUNTRY

If we remove the `STREAMLINK_USE_PYCRYPTO` env var, then `STREAMLINK_USE_PYCOUNTRY` will be the only one left for choosing alternative dependencies. Most packages define pycountry as a dependency, but some don't. However, I think those which don't only do this because we're not using it as the default. NixOS for example doesn't use it but has pycountry in its repos.

`STREAMLINK_USE_PYCOUNTRY` was added in #585 by @beardypig because of test failures on some Linux distros when using the default `iso-639` and `iso3166`. `iso-639`, however, hasn't been updated since April 2016, more than 5 years ago:

- https://pypi.org/project/iso-639/#history
- https://pypi.org/project/iso3166/#history

`pycountry` was last updated in 2020:

- https://pypi.org/project/pycountry/#history

We could default to `pycountry` and drop `iso-639`+`iso3166` if we want to remove alternative dependencies and the env var. I think `pycountry` was made optional instead of switching dependencies back then because it's not as lightweight as `iso-639`, is that right, @beardypig ?
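For context on the declarative approach mentioned above, a rough sketch of what moving `install_requires` into setup.cfg could look like is shown below; the dependency list is simply the one removed from setup.py in this row's diff, and the exact layout is an assumption rather than Streamlink's actual packaging metadata:

```ini
# setup.cfg (sketch only)
[options]
install_requires =
    isodate
    lxml >=4.6.4,<5.0
    pycountry
    pycryptodome >=3.4.3,<4
    PySocks !=1.5.7,>=1.5.6
    requests >=2.26.0,<3.0
    websocket-client >=1.2.1,<2.0
```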
That's right. As iso-639 appears to be unmaintained, we should drop it. It would tidy up the codebase as well :)

The pycrypto removal is trivial, so I've opened a PR for this here: #4174

@beardypig, if you want, please go ahead and do the same for pycountry after the pycrypto changes have been merged. Most packages are using pycountry and we should not depend on packages which are unmaintained.
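Since the patch in this row switches the l10n helpers to pycountry unconditionally, a small hedged usage sketch of the pycountry lookups involved may help; attribute names such as `alpha_2` are pycountry's own, and the specific lookup values are only examples:

```python
from pycountry import countries, languages

# country lookup: lookup() accepts alpha-2/alpha-3 codes or names
country = countries.lookup("DE")
print(country.alpha_2, country.alpha_3, country.numeric, country.name)
# some entries also define official_name, hence the getattr fallback in the patch
print(getattr(country, "official_name", country.name))

# language lookup: try the different code fields in turn, as the patched code does
language = (
    languages.get(alpha_2="en")
    or languages.get(alpha_3="eng")
    or languages.get(name="English")
)
print(getattr(language, "alpha_2", ""), language.alpha_3, language.name)
```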
created_at: 2021-11-14T13:34:00
repo: streamlink/streamlink
pull_number: 4190
instance_id: streamlink__streamlink-4190
issue_numbers: ["4188"]
base_commit: 33ccd2eb52be286adad3714a297d186d894c53c3
diff --git a/src/streamlink_cli/__init__.py b/src/streamlink_cli/__init__.py --- a/src/streamlink_cli/__init__.py +++ b/src/streamlink_cli/__init__.py @@ -0,0 +1,12 @@ +from signal import SIGINT, SIGTERM, signal +from sys import exit + + +def _exit(*_): + # don't raise a KeyboardInterrupt until streamlink_cli has been fully initialized + exit(128 | 2) + + +# override default SIGINT handler (and set SIGTERM handler) as early as possible +signal(SIGINT, _exit) +signal(SIGTERM, _exit) diff --git a/src/streamlink_cli/main.py b/src/streamlink_cli/main.py --- a/src/streamlink_cli/main.py +++ b/src/streamlink_cli/main.py @@ -685,7 +685,9 @@ def setup_config_args(parser, ignore_unknown=False): def setup_signals(): - # Handle SIGTERM just like SIGINT + # restore default behavior of raising a KeyboardInterrupt on SIGINT (and SIGTERM) + # so cleanup code can be run when the user stops execution + signal.signal(signal.SIGINT, signal.default_int_handler) signal.signal(signal.SIGTERM, signal.default_int_handler) @@ -1005,8 +1007,6 @@ def main(): log_file = args.logfile if log_level != "none" else None setup_logger_and_console(console_out, log_file, log_level, args.json) - setup_signals() - setup_streamlink() # load additional plugins setup_plugins(args.plugin_dirs) @@ -1025,6 +1025,8 @@ def main(): log_current_versions() log_current_arguments(streamlink, parser) + setup_signals() + if args.version_check or args.auto_version_check: with ignored(Exception): check_version(force=args.version_check)
diff --git a/tests/__init__.py b/tests/__init__.py --- a/tests/__init__.py +++ b/tests/__init__.py @@ -1,8 +1,18 @@ import os +import signal import warnings import pytest +# import streamlink_cli as early as possible to execute its default signal overrides +# noinspection PyUnresolvedReferences +import streamlink_cli # noqa: F401 + + +# immediately restore default signal handlers for the test runner +signal.signal(signal.SIGINT, signal.default_int_handler) +signal.signal(signal.SIGTERM, signal.default_int_handler) + def catch_warnings(record=False, module=None): def _catch_warnings_wrapper(f): diff --git a/tests/test_cli_main.py b/tests/test_cli_main.py --- a/tests/test_cli_main.py +++ b/tests/test_cli_main.py @@ -20,7 +20,6 @@ format_valid_streams, handle_stream, handle_url, - log_current_arguments, resolve_stream_name, setup_config_args ) @@ -500,14 +499,13 @@ def subject(cls, argv): session = Streamlink() session.load_plugins(os.path.join(os.path.dirname(__file__), "plugin")) - def _log_current_arguments(*args, **kwargs): - log_current_arguments(*args, **kwargs) - raise SystemExit + # stop test execution at the setup_signals() call, as we're not interested in what comes afterwards + class StopTest(Exception): + pass with patch("streamlink_cli.main.streamlink", session), \ - patch("streamlink_cli.main.log_current_arguments", side_effect=_log_current_arguments), \ + patch("streamlink_cli.main.setup_signals", side_effect=StopTest), \ patch("streamlink_cli.main.CONFIG_FILES", []), \ - patch("streamlink_cli.main.setup_signals"), \ patch("streamlink_cli.main.setup_streamlink"), \ patch("streamlink_cli.main.setup_plugins"), \ patch("streamlink_cli.main.setup_http_session"), \ @@ -516,7 +514,7 @@ def _log_current_arguments(*args, **kwargs): mock_argv.__getitem__.side_effect = lambda x: argv[x] try: streamlink_cli.main.main() - except SystemExit: + except StopTest: pass def tearDown(self):
Brief exit message for KeyboardInterrupt

### Checklist
- [X] This is a feature request and not a different kind of issue
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin requests](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22feature+request%22)

### Description
Thanks for maintaining streamlink! It entertains me a lot. :+1: :rocket:

I'd like an option to reduce the verbosity of `KeyboardInterrupt` errors, ideally to one line at most.

I run `streamlink 2.1.1+12.g3fdd208.dirty` on Ubuntu focal as the last step of a script. Sometimes a message from an early step of that script means that I need a manual preparation step before I should start watching, so I press Ctrl+C to abort. Most of the time, this causes streamlink's Python to print about 20 lines of traceback before the message "KeyboardInterrupt", and a useless blank line after that. Thus I have to scroll up to see the messages from my script.

It would be nice to have an easy way to opt out of that stack trace and the blank line.
> 2.1.1+12.g3fdd208.dirty This version is outdated and modified, so no support for that here... > KeyboardInterrupt Most `KeyboardInterrupt` exceptions should be caught by `streamlink_cli`. Some of them are also caught in the `streamlink` module/package, so it depends on how you run Streamlink, which isn't clear from your description. Could you please post the entire call stack of the `KeyboardInterrupt` exception you're seeing? This is important to know so that this can be fixed, as `SIGINT` is not supposed to print `KeyboardInterrupt` exceptions. However, what I can already tell you without looking at the code is that any `KeyboardInterrupt` that gets raised before Streamlink gets properly initialized won't be caught and thus gets printed to `stderr`. This could be improved, but not fixed entirely AFAIA, unless we override the default signal behavior as the first step before doing anything else. Any kind of error log gets written to `stderr` instead of `stdout`, so if you don't want to see that in your script, then there are ways to suppress that. `SIGTERM` should get interpreted the same as `SIGINT` (configured by `streamlink_cli`), but `SIGKILL` could also be an option if necessary, albeit a bit ugly. Thanks for the quick feedback! I already considered piping stderr through sed to optimize it, but I still have hopes that a Python tweak will be easier. > any KeyboardInterrupt that gets raised before Streamlink gets properly initialized won't be caught I checked with various timings and can confirm that this is my problem. When I give it enough time to initialize, the error message is just > Interrupted! Exiting... Detailed log for my outdated version: <details> ```text $ git reset --hard 3fdd208dbeeede10d80483c3e47b5f2752d1be7f HEAD is now at 3fdd208d plugins.nicolive: fix proxy arguments (#3710) $ rm -- src/streamlink/plugins/{mjunoon,ustvnow}.py \ # ^-- Avoid errors about missing libraries $ export PYTHONPATH="$PYTHONPATH:$PWD/src" $ CAMP='python3 -m streamlink_cli twitch.tv/basecamp' $ timeout --signal=SIGINT 1s $CAMP [cli][info] Found matching plugin twitch for URL twitch.tv/basecamp Interrupted! Exiting...
$ timeout --signal=SIGINT 0.7s $CAMP Traceback (most recent call last): File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/tmp/streamlink-repo/src/streamlink_cli/__main__.py", line 3, in <module> main() File "/tmp/streamlink-repo/src/streamlink_cli/main.py", line 1032, in main setup_streamlink() File "/tmp/streamlink-repo/src/streamlink_cli/main.py", line 732, in setup_streamlink streamlink = Streamlink({"user-input-requester": ConsoleUserInputRequester(console)}) File "/tmp/streamlink-repo/src/streamlink/session.py", line 79, in __init__ self.load_builtin_plugins() File "/tmp/streamlink-repo/src/streamlink/session.py", line 445, in load_builtin_plugins self.load_plugins(plugins.__path__[0]) File "/tmp/streamlink-repo/src/streamlink/session.py", line 458, in load_plugins mod = load_module(module_name, path) File "/tmp/streamlink-repo/src/streamlink/utils/__init__.py", line 24, in load_module spec.loader.exec_module(mod) File "<frozen importlib._bootstrap_external>", line 848, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/tmp/streamlink-repo/src/streamlink/plugins/nbc.py", line 4, in <module> from streamlink.plugins.theplatform import ThePlatform File "/tmp/streamlink-repo/src/streamlink/plugins/theplatform.py", line 10, in <module> class ThePlatform(Plugin): File "/tmp/streamlink-repo/src/streamlink/plugins/theplatform.py", line 15, in ThePlatform release_re = re.compile(r'''tp:releaseUrl\s*=\s*"(.*?)"''') File "/usr/lib/python3.8/re.py", line 252, in compile return _compile(pattern, flags) File "/usr/lib/python3.8/re.py", line 304, in _compile p = sre_compile.compile(pattern, flags) File "/usr/lib/python3.8/sre_compile.py", line 764, in compile p = sre_parse.parse(p, flags) File "/usr/lib/python3.8/sre_parse.py", line 940, in parse source = Tokenizer(str) File "/usr/lib/python3.8/sre_parse.py", line 229, in __init__ self.decoded_string = string KeyboardInterrupt ``` </details> I tried to check with the current version, too, but with ```text HEAD is now at 33ccd2eb plugins.crunchyroll: add metadata attributes (#4185) ``` it fails with > ModuleNotFoundError: No module named 'Crypto.Util.Padding' which, last time I tried, was too cumbersome to obtain. The default behavior of the signal handlers can probably be changed until streamlink_cli is properly initialized. There shouldn't be a reason why we want a KeyboardInterrupt to be printed to stderr unless we're fully initialized. I will take a look at this later today. > it fails with > > ModuleNotFoundError: No module named 'Crypto.Util.Padding' You are missing dependencies and have probably not installed Streamlink correctly. If a package is not available for your distro, then install it in a clean virtual environment or just use the appimage. https://streamlink.github.io/install.html#pypi-package-and-source-code https://streamlink.github.io/install.html#appimages
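To illustrate the early-signal-override idea discussed above (and what the patch at the top of this entry ends up doing), here is a self-contained sketch of the pattern — an illustration only, not the actual streamlink_cli code:

```python
import signal
import sys
import time


def _early_exit(*_):
    # Nothing to clean up yet, so exit quietly with the conventional
    # 128 + SIGINT status instead of printing a KeyboardInterrupt traceback.
    sys.exit(128 | 2)


# Override the default handlers as early as possible, before any slow imports
# or plugin loading can be interrupted.
signal.signal(signal.SIGINT, _early_exit)
signal.signal(signal.SIGTERM, _early_exit)


def setup_signals():
    # Once initialization is done, restore the default behavior so Ctrl+C
    # raises KeyboardInterrupt again and cleanup code can run.
    signal.signal(signal.SIGINT, signal.default_int_handler)
    signal.signal(signal.SIGTERM, signal.default_int_handler)


def main():
    # ... expensive startup work would happen here ...
    setup_signals()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        print("Interrupted! Exiting...", file=sys.stderr)
        sys.exit(130)


if __name__ == "__main__":
    main()
```

With the early handler installed, a Ctrl+C during startup exits with status 130 and no traceback, which is what the feature request asks for.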
2021-11-20T13:25:27
streamlink/streamlink
4,200
streamlink__streamlink-4200
[ "4191" ]
175d4748561c7154bb80c5a47dae22039e45d4ce
diff --git a/src/streamlink/plugins/ard_mediathek.py b/src/streamlink/plugins/ard_mediathek.py --- a/src/streamlink/plugins/ard_mediathek.py +++ b/src/streamlink/plugins/ard_mediathek.py @@ -4,29 +4,7 @@ from streamlink.plugin import Plugin, pluginmatcher from streamlink.plugin.api import validate from streamlink.stream.hls import HLSStream -from streamlink.stream.http import HTTPStream -from streamlink.utils.url import update_scheme -MEDIA_URL = "http://www.ardmediathek.de/play/media/{0}" -QUALITY_MAP = { - "auto": "auto", - 4: "1080p", - 3: "720p", - 2: "544p", - 1: "360p", - 0: "144p" -} - -_media_id_re = re.compile(r"/play/(?:media|config|sola)/(\d+)") -_media_schema = validate.Schema({ - "_mediaArray": [{ - "_mediaStreamArray": [{ - validate.optional("_server"): validate.text, - "_stream": validate.any(validate.text, [validate.text]), - "_quality": validate.any(int, validate.text) - }] - }] -}) log = logging.getLogger(__name__) @@ -35,54 +13,63 @@ r"https?://(?:(\w+\.)?ardmediathek\.de/|mediathek\.daserste\.de/)" )) class ARDMediathek(Plugin): - def _get_http_streams(self, info): - name = QUALITY_MAP.get(info["_quality"], "vod") - urls = info["_stream"] - if not isinstance(info["_stream"], list): - urls = [urls] + def _get_streams(self): + data_json = self.session.http.get(self.url, schema=validate.Schema( + validate.parse_html(), + validate.xml_findtext(".//script[@id='fetchedContextValue'][@type='application/json']"), + validate.any(None, validate.all( + validate.parse_json(), + {str: dict}, + validate.transform(lambda obj: list(obj.items())), + validate.filter(lambda item: item[0].startswith("https://api.ardmediathek.de/page-gateway/pages/")), + validate.any(validate.get((0, 1)), []) + )) + )) + if not data_json: + return - for url in urls: - stream = HTTPStream(self.session, update_scheme("https://", url)) - yield name, stream + schema_data = validate.Schema({ + "id": str, + "widgets": validate.all( + [dict], + validate.filter(lambda item: item.get("mediaCollection")), + validate.get(0), + { + "geoblocked": bool, + "publicationService": { + "name": str, + }, + "title": str, + "mediaCollection": { + "embedded": { + "_mediaArray": [{ + "_mediaStreamArray": [{ + "_quality": validate.any(str, int), + "_stream": validate.url() + }] + }] + } + } + } + ) + }) + data = schema_data.validate(data_json) - def _get_hls_streams(self, info): - return HLSStream.parse_variant_playlist(self.session, update_scheme("https://", info["_stream"])).items() + log.debug(f"Found media id: {data['id']}") + data_media = data["widgets"] - def _get_streams(self): - res = self.session.http.get(self.url) - match = _media_id_re.search(res.text) - if match: - media_id = match.group(1) - else: + if data_media["geoblocked"]: + log.info("The content is not available in your region") return - log.debug("Found media id: {0}".format(media_id)) - res = self.session.http.get(MEDIA_URL.format(media_id)) - media = self.session.http.json(res, schema=_media_schema) - log.trace("{0!r}".format(media)) + self.author = data_media["publicationService"]["name"] + self.title = data_media["title"] - for media in media["_mediaArray"]: + for media in data_media["mediaCollection"]["embedded"]["_mediaArray"]: for stream in media["_mediaStreamArray"]: - stream_ = stream["_stream"] - if isinstance(stream_, list): - if not stream_: - continue - stream_ = stream_[0] - - stream_ = update_scheme("https://", stream_) - if ".m3u8" in stream_: - parser = self._get_hls_streams - parser_name = "HLS" - elif (".mp4" in stream_ and ".f4m" not in 
stream_): - parser = self._get_http_streams - parser_name = "HTTP" - else: - log.error("Unexpected stream type: '{0}'".format(stream_)) - - try: - yield from parser(stream) - except OSError as err: - log.error("Failed to extract {0} streams: {1}".format(parser_name, err)) + if stream["_quality"] != "auto" or ".m3u8" not in stream["_stream"]: + continue + return HLSStream.parse_variant_playlist(self.session, stream["_stream"]) __plugin__ = ARDMediathek
plugins.ard_mediathek: The ARD plugin is not loading anymore. ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description It seems that the ARD (German public broadcaster) has made some changes to their site. On their site, their stream is working much better. However, when trying to use Streamlink, the ARD stream will not open. [cli][info] Found matching plugin ard_mediathek for URL https://www.ardmediathek.de/daserste/live/Y3JpZDovL2Rhc2Vyc3RlLmRlL0xpdmVzdHJlYW0tRGFzRXJzdGU/ error: No playable streams found on this URL: https://www.ardmediathek.de/daserste/live/Y3JpZDovL2Rhc2Vyc3RlLmRlL0xpdmVzdHJlYW0tRGFzRXJzdGU/ streamlink --version streamlink 3.0.1+1.g33ccd2e ### Debug log ```text [cli][debug] OS: Linux-5.4.0-90-generic-x86_64-with-glibc2.29 [cli][debug] Python: 3.8.10 [cli][debug] Streamlink: 3.0.1+1.g33ccd2e [cli][debug] Requests(2.26.0), Socks(1.7.1), Websocket(1.2.1) [cli][debug] Arguments: [cli][debug] url=https://www.ardmediathek.de/daserste/live/Y3JpZDovL2Rhc2Vyc3RlLmRlL0xpdmVzdHJlYW0tRGFzRXJzdGU/ [cli][debug] stream=['540p'] [cli][debug] --loglevel=debug [cli][info] Found matching plugin ard_mediathek for URL https://www.ardmediathek.de/daserste/live/Y3JpZDovL2Rhc2Vyc3RlLmRlL0xpdmVzdHJlYW0tRGFzRXJzdGU/ error: No playable streams found on this URL: https://www.ardmediathek.de/daserste/live/Y3JpZDovL2Rhc2Vyc3RlLmRlL0xpdmVzdHJlYW0tRGFzRXJzdGU/ ```
Does the existing plugin work for VoD content? I have a fix for the live channels within `https://www.ardmediathek.de/live/`, but it would be easier to get some info from you on what is still working (if anything), so I can look at those URLs and decide if the existing `media_id` code should be kept. Thanks. It's the exact same error for VoD: [cli][info] Found matching plugin ard_mediathek for URL https://www.ardmediathek.de/video/wataha-einsatz-an-der-grenze-europas/folge-1-anschlag-s01-e01/mdr/Y3JpZDovL21kci5kZS9iZWl0cmFnL2Ntcy9hOWQxNTg1Yy1iMzQyLTQ3NGYtYTM2NS0wZWY2M2JjZjNhMDY/ error: No playable streams found on this URL: https://www.ardmediathek.de/video/wataha-einsatz-an-der-grenze-europas/folge-1-anschlag-s01-e01/mdr/Y3JpZDovL21kci5kZS9iZWl0cmFnL2Ntcy9hOWQxNTg1Yy1iMzQyLTQ3NGYtYTM2NS0wZWY2M2JjZjNhMDY/ [cli][info] Found matching plugin ard_mediathek for URL https://www.ardmediathek.de/video/white-sands-strand-der-geheimnisse/folge-1-warmer-empfang-s01-e01/ndr/Y3JpZDovL25kci5kZS80MmEwZmQwMC03YTYwLTQ4N2EtYWZhZC1lZDBmYjYzZjgzMDQ/ error: No playable streams found on this URL: https://www.ardmediathek.de/video/white-sands-strand-der-geheimnisse/folge-1-warmer-empfang-s01-e01/ndr/Y3JpZDovL25kci5kZS80MmEwZmQwMC03YTYwLTQ4N2EtYWZhZC1lZDBmYjYzZjgzMDQ/ Live: [cli][info] Found matching plugin ard_mediathek for URL https://www.ardmediathek.de/live/Y3JpZDovL2Rhc2Vyc3RlLmRlL2xpdmUvY2xpcC9hYmNhMDdhMy0zNDc2LTQ4NTEtYjE2Mi1mZGU4ZjY0NmQ0YzQ/ error: No playable streams found on this URL: https://www.ardmediathek.de/live/Y3JpZDovL2Rhc2Vyc3RlLmRlL2xpdmUvY2xpcC9hYmNhMDdhMy0zNDc2LTQ4NTEtYjE2Mi1mZGU4ZjY0NmQ0YzQ/ It works fine in the browser. The ZDF plugin is working fine.
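For reference, the fix in the patch above works by reading the JSON state that the page embeds in a `<script id="fetchedContextValue" type="application/json">` tag and picking the page-gateway entry from it. A standalone sketch of that idea, under the assumption that the site still serves the page this way:

```python
import json

import requests
from lxml import etree

URL = "https://www.ardmediathek.de/daserste/live/Y3JpZDovL2Rhc2Vyc3RlLmRlL0xpdmVzdHJlYW0tRGFzRXJzdGU/"

html = etree.HTML(requests.get(URL).content)
nodes = html.xpath(".//script[@id='fetchedContextValue'][@type='application/json']/text()")
if not nodes:
    raise SystemExit("embedded context JSON not found")

# The context maps API request URLs to their cached responses; the plugin picks
# the page-gateway entry, which contains the media id and the stream data.
context = json.loads(nodes[0])
page_data = next(
    (value for key, value in context.items()
     if key.startswith("https://api.ardmediathek.de/page-gateway/pages/")),
    None,
)
if page_data:
    print(page_data.get("id"))
```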
2021-11-21T19:58:16
streamlink/streamlink
4,202
streamlink__streamlink-4202
[ "4200" ]
365fa1b0076be0a12325fbf8ba1fb2de0477cbf8
diff --git a/src/streamlink/plugins/ard_mediathek.py b/src/streamlink/plugins/ard_mediathek.py --- a/src/streamlink/plugins/ard_mediathek.py +++ b/src/streamlink/plugins/ard_mediathek.py @@ -4,6 +4,7 @@ from streamlink.plugin import Plugin, pluginmatcher from streamlink.plugin.api import validate from streamlink.stream.hls import HLSStream +from streamlink.stream.http import HTTPStream log = logging.getLogger(__name__) @@ -13,6 +14,14 @@ r"https?://(?:(\w+\.)?ardmediathek\.de/|mediathek\.daserste\.de/)" )) class ARDMediathek(Plugin): + _QUALITY_MAP = { + 4: "1080p", + 3: "720p", + 2: "540p", + 1: "360p", + 0: "270p" + } + def _get_streams(self): data_json = self.session.http.get(self.url, schema=validate.Schema( validate.parse_html(), @@ -34,42 +43,64 @@ def _get_streams(self): [dict], validate.filter(lambda item: item.get("mediaCollection")), validate.get(0), - { - "geoblocked": bool, - "publicationService": { - "name": str, + validate.any(None, validate.all( + { + "geoblocked": bool, + "publicationService": { + "name": str, + }, + "show": validate.any(None, validate.all( + {"title": str}, + validate.get("title") + )), + "title": str, + "mediaCollection": { + "embedded": { + "_mediaArray": [validate.all( + { + "_mediaStreamArray": [validate.all( + { + "_quality": validate.any(str, int), + "_stream": validate.url(), + }, + validate.union_get("_quality", "_stream") + )] + }, + validate.get("_mediaStreamArray"), + validate.transform(dict) + )] + } + }, }, - "title": str, - "mediaCollection": { - "embedded": { - "_mediaArray": [{ - "_mediaStreamArray": [{ - "_quality": validate.any(str, int), - "_stream": validate.url() - }] - }] - } - } - } + validate.union_get( + "geoblocked", + ("mediaCollection", "embedded", "_mediaArray", 0), + ("publicationService", "name"), + "title", + "show", + ) + )) ) }) data = schema_data.validate(data_json) log.debug(f"Found media id: {data['id']}") - data_media = data["widgets"] + if not data["widgets"]: + log.info("The content is unavailable") + return - if data_media["geoblocked"]: + geoblocked, media, self.author, self.title, show = data["widgets"] + if geoblocked: log.info("The content is not available in your region") return + if show: + self.title = f"{show}: {self.title}" - self.author = data_media["publicationService"]["name"] - self.title = data_media["title"] - - for media in data_media["mediaCollection"]["embedded"]["_mediaArray"]: - for stream in media["_mediaStreamArray"]: - if stream["_quality"] != "auto" or ".m3u8" not in stream["_stream"]: - continue - return HLSStream.parse_variant_playlist(self.session, stream["_stream"]) + if media.get("auto"): + yield from HLSStream.parse_variant_playlist(self.session, media.get("auto")).items() + else: + for quality, stream in media.items(): + yield self._QUALITY_MAP.get(quality, quality), HTTPStream(self.session, stream) __plugin__ = ARDMediathek
plugins.ard_mediathek: rewrite plugin Resolves #4191 One issue I couldn't fix is the text encoding of the metadata which gets messed up by `validate.parse_html()`. See the VOD title down below... https://github.com/streamlink/streamlink/blob/175d4748561c7154bb80c5a47dae22039e45d4ce/src/streamlink/utils/parse.py#L54-L55 Some VODs also have a second title, eg. if it's a TV show, but I couldn't be bothered to implement this. Not important. ---- Das Erste - Live: ``` $ streamlink -l debug --title '{author} - {title}' 'https://www.ardmediathek.de/daserste/live/Y3JpZDovL2Rhc2Vyc3RlLmRlL0xpdmVzdHJlYW0tRGFzRXJzdGU/' best [cli.output][debug] Opening subprocess: mpv "--force-media-title=Das Erste - Das Erste" - ``` WDR - Live: ``` $ streamlink -l debug --title '{author} - {title}' 'https://www.ardmediathek.de/live/Y3JpZDovL3dkci5kZS9CZWl0cmFnLTNkYTY2NGRlLTE4YzItNDY1MC1hNGZmLTRmNjQxNDcyMDcyYg/' best [cli.output][debug] Opening subprocess: mpv "--force-media-title=WDR - WDR Fernsehen im Livestream" - ``` VOD ``` $ streamlink -l debug --title '{author} - {title}' 'https://www.ardmediathek.de/video/dokus-im-ersten/wirecard-die-milliarden-luege/das-erste/Y3JpZDovL2Rhc2Vyc3RlLmRlL3JlcG9ydGFnZSBfIGRva3VtZW50YXRpb24gaW0gZXJzdGVuL2NlMjQ0OWM4LTQ4YTUtNGIyNC1iMTdlLWNhOTNjMDQ5OTc4Zg/' best [cli.output][debug] Opening subprocess: mpv "--force-media-title=Das Erste - Wirecard - Die Milliarden-Lüge" - ```
I don't think the metadata is a big enough deal to be worth fixing. @bastimeyer My bad for merging too quickly!
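Regarding the `--title '{author} - {title}'` examples in the PR description above: those variables are filled from metadata attributes that a plugin sets on itself, which is what the rewritten plugin does. A hypothetical minimal plugin (the example.com URL and stream address are made up) would look roughly like this:

```python
import re

from streamlink.plugin import Plugin, pluginmatcher
from streamlink.stream.hls import HLSStream


@pluginmatcher(re.compile(r"https?://example\.com/live/(?P<channel>\w+)"))
class Example(Plugin):
    def _get_streams(self):
        # Metadata picked up by --title/--output variables such as {author} and {title}
        self.author = "Example TV"
        self.title = f"Live: {self.match.group('channel')}"
        return HLSStream.parse_variant_playlist(self.session, "https://example.com/hls/master.m3u8")


__plugin__ = Example
```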
2021-11-22T08:54:10
streamlink/streamlink
4,210
streamlink__streamlink-4210
[ "4209" ]
aa34299267cd6723265a81630e14e5d11cbcf57f
diff --git a/src/streamlink/utils/parse.py b/src/streamlink/utils/parse.py --- a/src/streamlink/utils/parse.py +++ b/src/streamlink/utils/parse.py @@ -48,8 +48,12 @@ def parse_html( """Wrapper around lxml.etree.HTML with some extras. Provides these extra features: + - Removes XML declarations of invalid XHTML5 documents - Wraps errors in custom exception with a snippet of the data in the message """ + if isinstance(data, str) and data.lstrip().startswith("<?xml"): + data = re.sub(r"^\s*<\?xml.+?\?>", "", data) + return _parse(HTML, data, name, exception, schema, *args, **kwargs)
diff --git a/tests/utils/test_parse.py b/tests/utils/test_parse.py --- a/tests/utils/test_parse.py +++ b/tests/utils/test_parse.py @@ -89,6 +89,12 @@ def test_parse_html_encoding(self): tree = parse_html(b"""<!DOCTYPE html><html><body>\xC3\xA4</body></html>""") self.assertEqual(tree.xpath(".//body/text()"), ["ä"]) + def test_parse_html_xhtml5(self): + tree = parse_html("""<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE html><html><body>ä?></body></html>""") + self.assertEqual(tree.xpath(".//body/text()"), ["ä?>"]) + tree = parse_html(b"""<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE html><html><body>\xC3\xA4?></body></html>""") + self.assertEqual(tree.xpath(".//body/text()"), ["ä?>"]) + def test_parse_qsd(self): self.assertEqual( {"test": "1", "foo": "bar"},
plugins.tviplayer: unable to handle CNN Portugal ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description - issue: - the new `tviplayer` plugin is unable to handle https://tviplayer.iol.pt/direto/CNN - of note, the previous TVI 24 became CNN Portugal after #4199. - to reproduce: ```sh streamlink https://tviplayer.iol.pt/direto/CNN ``` ```sh [cli][info] Found matching plugin tviplayer for URL https://tviplayer.iol.pt/direto/CNN error: Unable to validate response text: Unable to parse HTML: Unicode strings with encoding declaration are not supported. Please use bytes input or XML fragments without declaration. ('<?xml version=\'1.0\' encoding=\'U ...) ``` ### Debug log ```text streamlink --loglevel debug https://tviplayer.iol.pt/direto/CNN [cli][debug] OS: Linux-5.10.0-9-amd64-x86_64-with-glibc2.31 [cli][debug] Python: 3.9.2 [cli][debug] Streamlink: 3.0.2 [cli][debug] Requests(2.26.0), Socks(1.7.1), Websocket(1.2.1) [cli][debug] Arguments: [cli][debug] url=https://tviplayer.iol.pt/direto/CNN [cli][debug] --loglevel=debug [cli][info] Found matching plugin tviplayer for URL https://tviplayer.iol.pt/direto/CNN error: Unable to validate response text: Unable to parse HTML: Unicode strings with encoding declaration are not supported. Please use bytes input or XML fragments without declaration. ('<?xml version=\'1.0\' encoding=\'U ...) ```
This happens because of https://github.com/streamlink/streamlink/pull/4201; it works fine without it. > Unable to parse HTML: Unicode strings with encoding declaration are not supported. Please use bytes input or XML fragments without declaration. ('<?xml version=\'1.0\' encoding=\'U ...) This is what the site returns, which is XHTML5, a subset of XML, not HTML5. However, they also return invalid HTTP headers and invalid XML syntax, so web browsers will just ignore the XML declaration and parse it as regular HTML5. lxml on the other hand will raise an error due to the invalid XML declaration in the HTML5 content. ```html <?xml version='1.0' encoding='UTF-8' ?> <!DOCTYPE html> <html lang="pt" xmlns="http://www.w3.org/1999/xhtml"><head id="j_idt2"> <meta charset="utf-8" /> ``` - https://en.wikipedia.org/wiki/HTML5#XHTML_5_(XML-serialized_HTML_5) - https://validator.w3.org/nu/?doc=https%3A%2F%2Ftviplayer.iol.pt%2Fdireto%2FCNN - https://www.w3.org/International/questions/qa-html-encoding-declarations.en#xml > XHTML5: An XHTML5 document is served as XML and has XML syntax. XML parsers do not recognise the encoding declarations in meta elements. They only recognise the XML declaration. > > The XML declaration is only required if the page is not being served as UTF-8 (or UTF-16) This means that we need to detect this edge case in the `parse_html` method, because `parse_xml` can't be used here due to the invalid XML syntax. When the input is bytes, like it was previously, the HTML parser simply ignores the XML declaration for some reason.
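A short demonstration of the behavior described here, assuming lxml behaves as in the quoted error message: a `str` that still contains the XML declaration is rejected, while `bytes` input, or input with the declaration stripped as the patch above does, parses fine.

```python
import re

from lxml import etree

DOC = """<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE html>
<html lang="pt" xmlns="http://www.w3.org/1999/xhtml"><head><meta charset="utf-8"/></head><body>ok</body></html>
"""

try:
    etree.HTML(DOC)
except ValueError as err:
    # "Unicode strings with encoding declaration are not supported. ..."
    print(f"str input fails: {err}")

# bytes input: the HTML parser simply ignores the declaration
print(etree.HTML(DOC.encode("utf-8")).findtext(".//body"))

# str input with the declaration removed, mirroring the fix in parse_html()
print(etree.HTML(re.sub(r"^\s*<\?xml.+?\?>", "", DOC)).findtext(".//body"))
```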
2021-11-26T17:28:08
streamlink/streamlink
4,213
streamlink__streamlink-4213
[ "4212" ]
6d218dc9f53829bab0f7f4b3dbf35e2fcc2050d4
diff --git a/src/streamlink_cli/argparser.py b/src/streamlink_cli/argparser.py --- a/src/streamlink_cli/argparser.py +++ b/src/streamlink_cli/argparser.py @@ -548,7 +548,7 @@ def build_parser(): Example: - %(prog)s --output "~/recordings/{author}/{category}/{id}-{time:%Y%m%d%H%M%S}.ts" <URL> [STREAM] + %(prog)s --output "~/recordings/{author}/{category}/{id}-{time:%%Y%%m%%d%%H%%M%%S}.ts" <URL> [STREAM] """ ) output.add_argument( @@ -587,7 +587,7 @@ def build_parser(): Example: - %(prog)s --record "~/recordings/{author}/{category}/{id}-{time:%Y%m%d%H%M%S}.ts" <URL> [STREAM] + %(prog)s --record "~/recordings/{author}/{category}/{id}-{time:%%Y%%m%%d%%H%%M%%S}.ts" <URL> [STREAM] """ ) output.add_argument( @@ -604,7 +604,7 @@ def build_parser(): Example: - %(prog)s --record-and-pipe "~/recordings/{author}/{category}/{id}-{time:%Y%m%d%H%M%S}.ts" <URL> [STREAM] + %(prog)s --record-and-pipe "~/recordings/{author}/{category}/{id}-{time:%%Y%%m%%d%%H%%M%%S}.ts" <URL> [STREAM] """ ) output.add_argument( diff --git a/src/streamlink_cli/main.py b/src/streamlink_cli/main.py --- a/src/streamlink_cli/main.py +++ b/src/streamlink_cli/main.py @@ -1032,7 +1032,9 @@ def main(): with ignored(Exception): check_version(force=args.version_check) - if args.plugins: + if args.help: + parser.print_help() + elif args.plugins: print_plugins() elif args.can_handle_url: try: @@ -1065,8 +1067,6 @@ def main(): stream_fd.close() except KeyboardInterrupt: error_code = 130 - elif args.help: - parser.print_help() else: usage = parser.format_usage() console.msg(
diff --git a/tests/test_cli_main.py b/tests/test_cli_main.py --- a/tests/test_cli_main.py +++ b/tests/test_cli_main.py @@ -3,6 +3,7 @@ import sys import unittest from pathlib import Path, PosixPath, WindowsPath +from textwrap import dedent from unittest.mock import Mock, call, patch import freezegun @@ -740,3 +741,78 @@ def test_logfile_path_auto(self, mock_open, mock_stdout): mock_write=mock_open("C:\\foo\\2000-01-02_03-04-05.log", "a").write, mock_stdout=mock_stdout ) + + +class TestCLIMainPrint(unittest.TestCase): + def subject(self): + with patch.object(Streamlink, "load_builtin_plugins"), \ + patch.object(Streamlink, "resolve_url") as mock_resolve_url, \ + patch.object(Streamlink, "resolve_url_no_redirect") as mock_resolve_url_no_redirect: + session = Streamlink() + session.load_plugins(os.path.join(os.path.dirname(__file__), "plugin")) + with patch("streamlink_cli.main.streamlink", session), \ + patch("streamlink_cli.main.CONFIG_FILES", []), \ + patch("streamlink_cli.main.setup_streamlink"), \ + patch("streamlink_cli.main.setup_plugins"), \ + patch("streamlink_cli.main.setup_http_session"), \ + patch("streamlink_cli.main.setup_signals"), \ + patch("streamlink_cli.main.setup_options") as mock_setup_options: + with self.assertRaises(SystemExit) as cm: + streamlink_cli.main.main() + self.assertEqual(cm.exception.code, 0) + mock_resolve_url.assert_not_called() + mock_resolve_url_no_redirect.assert_not_called() + mock_setup_options.assert_not_called() + + @staticmethod + def get_stdout(mock_stdout): + return "".join([call_arg[0][0] for call_arg in mock_stdout.write.call_args_list]) + + @patch("sys.stdout") + @patch("sys.argv", ["streamlink"]) + def test_print_usage(self, mock_stdout): + self.subject() + self.assertEqual( + self.get_stdout(mock_stdout), + "usage: streamlink [OPTIONS] <URL> [STREAM]\n\n" + + "Use -h/--help to see the available options or read the manual at https://streamlink.github.io\n" + ) + + @patch("sys.stdout") + @patch("sys.argv", ["streamlink", "--help"]) + def test_print_help(self, mock_stdout): + self.subject() + output = self.get_stdout(mock_stdout) + self.assertIn( + "usage: streamlink [OPTIONS] <URL> [STREAM]", + output + ) + self.assertIn( + dedent(""" + Streamlink is a command-line utility that extracts streams from various + services and pipes them into a video player of choice. + """), + output + ) + self.assertIn( + dedent(""" + For more in-depth documentation see: + https://streamlink.github.io + + Please report broken plugins or bugs to the issue tracker on Github: + https://github.com/streamlink/streamlink/issues + """), + output + ) + + @patch("sys.stdout") + @patch("sys.argv", ["streamlink", "--plugins"]) + def test_print_plugins(self, mock_stdout): + self.subject() + self.assertEqual(self.get_stdout(mock_stdout), "Loaded plugins: testplugin\n") + + @patch("sys.stdout") + @patch("sys.argv", ["streamlink", "--plugins", "--json"]) + def test_print_plugins_json(self, mock_stdout): + self.subject() + self.assertEqual(self.get_stdout(mock_stdout), """[\n "testplugin"\n]\n""")
streamlink -h is raising TypeError: not enough arguments for format string ### Checklist - [X] This is a bug report and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed bug reports](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22bug%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description Hi everyone. Just as the title says, the latest version from pip is raising the above error due to non-escaped % characters in argparser.py. Specifically, in the three lines of argparser.py that contain `{time:%Y%m%d%H%M%S}`, replacing it with `{time:%%Y%%m%%d%%H%%M%%S}` should fix the issue. Thanks for the great work anyway, takeshi ### Debug log ```text Traceback (most recent call last): File "/home/user/.local/bin/streamlink", line 8, in <module> sys.exit(main()) File "/home/user/.local/lib/python3.8/site-packages/streamlink_cli/main.py", line 1069, in main parser.print_help() File "/usr/lib/python3.8/argparse.py", line 2509, in print_help self._print_message(self.format_help(), file) File "/home/user/.local/lib/python3.8/site-packages/streamlink_cli/argparser.py", line 105, in format_help return formatter.format_help() File "/usr/lib/python3.8/argparse.py", line 294, in format_help help = self._root_section.format_help() File "/usr/lib/python3.8/argparse.py", line 225, in format_help item_help = join([func(*args) for func, args in self.items]) File "/usr/lib/python3.8/argparse.py", line 225, in <listcomp> item_help = join([func(*args) for func, args in self.items]) File "/usr/lib/python3.8/argparse.py", line 225, in format_help item_help = join([func(*args) for func, args in self.items]) File "/usr/lib/python3.8/argparse.py", line 225, in <listcomp> item_help = join([func(*args) for func, args in self.items]) File "/usr/lib/python3.8/argparse.py", line 541, in _format_action help_text = self._expand_help(action) File "/usr/lib/python3.8/argparse.py", line 636, in _expand_help return self._get_help_string(action) % params TypeError: not enough arguments for format string ```
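For anyone wondering why the doubled percent signs are needed: argparse interpolates %-style specifiers such as %(prog)s and %(default)s into help strings when it renders --help, so any literal % has to be escaped as %%. A minimal sketch of this, not streamlink's actual parser definition:

```python
import argparse

parser = argparse.ArgumentParser(prog="streamlink")
parser.add_argument(
    "--output",
    metavar="FILENAME",
    help='Write stream data to FILENAME, e.g. "{id}-{time:%%Y%%m%%d%%H%%M%%S}.ts" (prog: %(prog)s)',
)

# With single % characters in the help text this call blows up while
# interpolating the params dict; with %% it prints a literal % as intended.
print(parser.format_help())
```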
2021-11-26T21:57:46
streamlink/streamlink
4,222
streamlink__streamlink-4222
[ "4221" ]
46f4841b82a1e8c9939260f057e3d8a2214039c0
diff --git a/src/streamlink/plugins/youtube.py b/src/streamlink/plugins/youtube.py --- a/src/streamlink/plugins/youtube.py +++ b/src/streamlink/plugins/youtube.py @@ -146,16 +146,26 @@ def _schema_videodetails(cls, data): validate.optional("isLowLatencyLiveStream"): validate.transform(bool), validate.optional("isPrivate"): validate.transform(bool), }, - "microformat": { - "playerMicroformatRenderer": { + "microformat": validate.all( + validate.any( + validate.all( + {"playerMicroformatRenderer": dict}, + validate.get("playerMicroformatRenderer") + ), + validate.all( + {"microformatDataRenderer": dict}, + validate.get("microformatDataRenderer") + ) + ), + { "category": str } - } + ) }, validate.union_get( ("videoDetails", "videoId"), ("videoDetails", "author"), - ("microformat", "playerMicroformatRenderer", "category"), + ("microformat", "category"), ("videoDetails", "title"), ("videoDetails", "isLive") )
plugins.youtube: latest stable is unable to parse https://www.youtube.com/channel/<channel_id>/live ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description - issue: - the latest stable release (`3.0.3`) is unable to parse Youtube URLs of format `https://www.youtube.com/channel/<channel_id>/live`, whereas the previous release (`3.0.2`) was able to do so; - however, removal of the `/live` suffix (`https://www.youtube.com/channel/<channel_id>`) fixes the issue; - in addition, the issue **does not** apply to any of the following formats: - `https://www.youtube.com/c/todonoticias` - `https://www.youtube.com/c/todonoticias/live` - `https://www.youtube.com/user/c5n` - `https://www.youtube.com/user/c5n/live` - example of URLs that cause the issue: - `https://www.youtube.com/channel/UCw49uOTAJjGUdoAeUcp7tOg/live`: ``` streamlink -l debug https://www.youtube.com/channel/UCw49uOTAJjGUdoAeUcp7tOg/live [cli][debug] OS: Linux-5.10.0-9-amd64-x86_64-with-glibc2.31 [cli][debug] Python: 3.9.2 [cli][debug] Streamlink: 3.0.3 [cli][debug] Requests(2.26.0), Socks(1.7.1), Websocket(1.2.1) [cli][debug] Arguments: [cli][debug] url=https://www.youtube.com/channel/UCw49uOTAJjGUdoAeUcp7tOg/live [cli][debug] --loglevel=debug [cli][info] Found matching plugin youtube for URL https://www.youtube.com/channel/UCw49uOTAJjGUdoAeUcp7tOg/live error: Unable to validate result: Unable to validate key 'microformat': Key 'playerMicroformatRenderer' not found in {'microformatDataRenderer': {'urlCanonical': 'https://www.youtube.com/channel/UCw49uOTAJjGUdoAeUcp7tOg/live', 'title': 'Drum & Bass Non-Stop Bangers - To Vibe/Dance To', 'description': "Welcome to our new 'Drum & Bass Non-Stop Bangers' playlist. 
We’ll be rolling out the finest Hospital selections continuously so you can get your 174BPM fix w...", 'thumbnail': {'thumbnails': [{'url': 'https://i.ytimg.com/vi/jdkegu3Zexg/maxresdefault_live.jpg', 'width': 1280, 'height': 720}]}, 'siteName': 'YouTube', 'appName': 'YouTube', 'androidPackage': 'com.google.android.youtube', 'iosAppStoreId': '544007664', 'iosAppArguments': 'https://www.youtube.com/channel/UCw49uOTAJjGUdoAeUcp7tOg/live', 'ogType': 'video.other', 'urlApplinksWeb': 'https://www.youtube.com/channel/UCw49uOTAJjGUdoAeUcp7tOg/live?feature=applinks', 'urlApplinksIos': 'vnd.youtube://www.youtube.com/channel/UCw49uOTAJjGUdoAeUcp7tOg/live?feature=applinks', 'urlApplinksAndroid': 'vnd.youtube://www.youtube.com/channel/UCw49uOTAJjGUdoAeUcp7tOg/live?feature=applinks', 'urlTwitterIos': 'vnd.youtube://www.youtube.com/channel/UCw49uOTAJjGUdoAeUcp7tOg/live?feature=twitter-deep-link', 'urlTwitterAndroid': 'vnd.youtube://www.youtube.com/channel/UCw49uOTAJjGUdoAeUcp7tOg/live?feature=twitter-deep-link', 'twitterCardType': 'player', 'twitterSiteHandle': '@YouTube', 'schemaDotOrgType': 'http://schema.org/VideoObject', 'noindex': True, 'unlisted': False, 'paid': False, 'familySafe': True, 'tags': ['hospital Records', 'Drum & Bass', 'dnb', 'liquid dnb', 'liquid funk', 'liquid drum and bass', 'drum and bass', 'liquid', 'D&B', 'Hospital Records', 'Jungle', 'Hospitality', 'Electronic', 'Junglist', 'Hospitality DNB', 'playlist', 'radio stream', '24/7 radio stream', 'mix', 'continuous mix', 'drum and bass playlist', 'Bangers', 'Rave', 'Dance', 'Drum & Bass Non-Stop Bangers', 'Non Stop Music', 'Non Stop Bangers', 'Non Stop', '24/7 music', '24/7 drum and bass', 'Drum n Bass', 'DnB', 'D+B', 'Electronic Stream', 'DnB Stream', 'D&B Stream', 'D+B Stream', 'Stream'], 'availableCountries': ['AD', 'AE', 'AF', 'AG', 'AI', 'AL', 'AM', 'AO', 'AQ', 'AR', 'AS', 'AT', 'AU', 'AW', 'AX', 'AZ', 'BA', 'BB', 'BD', 'BE', 'BF', 'BG', 'BH', 'BI', 'BJ', 'BL', 'BM', 'BN', 'BO', 'BQ', 'BR', 'BS', 'BT', 'BV', 'BW', 'BY', 'BZ', 'CA', 'CC', 'CD', 'CF', 'CG', 'CH', 'CI', 'CK', 'CL', 'CM', 'CN', 'CO', 'CR', 'CU', 'CV', 'CW', 'CX', 'CY', 'CZ', 'DE', 'DJ', 'DK', 'DM', 'DO', 'DZ', 'EC', 'EE', 'EG', 'EH', 'ER', 'ES', 'ET', 'FI', 'FJ', 'FK', 'FM', 'FO', 'FR', 'GA', 'GB', 'GD', 'GE', 'GF', 'GG', 'GH', 'GI', 'GL', 'GM', 'GN', 'GP', 'GQ', 'GR', 'GS', 'GT', 'GU', 'GW', 'GY', 'HK', 'HM', 'HN', 'HR', 'HT', 'HU', 'ID', 'IE', 'IL', 'IM', 'IN', 'IO', 'IQ', 'IR', 'IS', 'IT', 'JE', 'JM', 'JO', 'JP', 'KE', 'KG', 'KH', 'KI', 'KM', 'KN', 'KP', 'KR', 'KW', 'KY', 'KZ', 'LA', 'LB', 'LC', 'LI', 'LK', 'LR', 'LS', 'LT', 'LU', 'LV', 'LY', 'MA', 'MC', 'MD', 'ME', 'MF', 'MG', 'MH', 'MK', 'ML', 'MM', 'MN', 'MO', 'MP', 'MQ', 'MR', 'MS', 'MT', 'MU', 'MV', 'MW', 'MX', 'MY', 'MZ', 'NA', 'NC', 'NE', 'NF', 'NG', 'NI', 'NL', 'NO', 'NP', 'NR', 'NU', 'NZ', 'OM', 'PA', 'PE', 'PF', 'PG', 'PH', 'PK', 'PL', 'PM', 'PN', 'PR', 'PS', 'PT', 'PW', 'PY', 'QA', 'RE', 'RO', 'RS', 'RU', 'RW', 'SA', 'SB', 'SC', 'SD', 'SE', 'SG', 'SH', 'SI', 'SJ', 'SK', 'SL', 'SM', 'SN', 'SO', 'SR', 'SS', 'ST', 'SV', 'SX', 'SY', 'SZ', 'TC', 'TD', 'TF', 'TG', 'TH', 'TJ', 'TK', 'TL', 'TM', 'TN', 'TO', 'TR', 'TT', 'TV', 'TW', 'TZ', 'UA', 'UG', 'UM', 'US', 'UY', 'UZ', 'VA', 'VC', 'VE', 'VG', 'VI', 'VN', 'VU', 'WF', 'WS', 'YE', 'YT', 'ZA', 'ZM', 'ZW'], 'pageOwnerDetails': {'name': 'Hospital Records', 'externalChannelId': 'UCw49uOTAJjGUdoAeUcp7tOg', 'youtubeProfileUrl': 'http://www.youtube.com/user/HospitalRecords'}, 'videoDetails': {'externalVideoId': 'jdkegu3Zexg'}, 'embedDetails': {'iframeUrl': 
'https://www.youtube.com/embed/live_stream?channel=UCw49uOTAJjGUdoAeUcp7tOg', 'flashUrl': 'http://www.youtube.com/v/jdkegu3Zexg?version=3&autohide=1', 'flashSecureUrl': 'https://www.youtube.com/v/jdkegu3Zexg?version=3&autohide=1', 'width': 480, 'height': 360}, 'linkAlternates': [{'hrefUrl': 'https://m.youtube.com/channel/UCw49uOTAJjGUdoAeUcp7tOg/live'}, {'hrefUrl': 'android-app://com.google.android.youtube/http/youtube.com/channel/UCw49uOTAJjGUdoAeUcp7tOg/live'}, {'hrefUrl': 'ios-app://544007664/http/youtube.com/channel/UCw49uOTAJjGUdoAeUcp7tOg/live'}, {'hrefUrl': 'https://www.youtube.com/oembed?format=json&url=https%3A%2F%2Fwww.youtube.com%2Fchannel%2FUCw49uOTAJjGUdoAeUcp7tOg%2Flive', 'title': 'Drum & Bass Non-Stop Bangers - To Vibe/Dance To', 'alternateType': 'application/json+oembed'}, {'hrefUrl': 'https://www.youtube.com/oembed?format=xml&url=https%3A%2F%2Fwww.youtube.com%2Fchannel%2FUCw49uOTAJjGUdoAeUcp7tOg%2Flive', 'title': 'Drum & Bass Non-Stop Bangers - To Vibe/Dance To', 'alternateType': 'text/xml+oembed'}], 'viewCount': '153114', 'publishDate': '1969-12-31', 'category': 'Music', 'uploadDate': '1969-12-31'}} ``` - `https://youtube.com/channel/UCj6PcyLvpnIRT_2W_mwa9Aw/live`: ``` streamlink -l debug https://youtube.com/channel/UCj6PcyLvpnIRT_2W_mwa9Aw/live [cli][debug] OS: Linux-5.10.0-9-amd64-x86_64-with-glibc2.31 [cli][debug] Python: 3.9.2 [cli][debug] Streamlink: 3.0.3 [cli][debug] Requests(2.26.0), Socks(1.7.1), Websocket(1.2.1) [cli][debug] Arguments: [cli][debug] url=https://youtube.com/channel/UCj6PcyLvpnIRT_2W_mwa9Aw/live [cli][debug] --loglevel=debug [cli][info] Found matching plugin youtube for URL https://youtube.com/channel/UCj6PcyLvpnIRT_2W_mwa9Aw/live error: Unable to validate result: Unable to validate key 'microformat': Key 'playerMicroformatRenderer' not found in {'microformatDataRenderer': {'urlCanonical': 'https://www.youtube.com/channel/UCj6PcyLvpnIRT_2W_mwa9Aw/live', 'title': 'TN EN VIVO | Mirá la programación de @Todo Noticias durante las 24 horas del día', 'description': 'SUSCRIBITE A NUESTRO CANAL → https://bit.ly/365OTiS TN - TODO NOTICIAS EN VIVO LAS 24 HS El canal líder de noticias de la Argentina. 
★ Mirá más contenid...', 'thumbnail': {'thumbnails': [{'url': 'https://i.ytimg.com/vi/wHn1_QVoXGM/maxresdefault_live.jpg', 'width': 1280, 'height': 720}]}, 'siteName': 'YouTube', 'appName': 'YouTube', 'androidPackage': 'com.google.android.youtube', 'iosAppStoreId': '544007664', 'iosAppArguments': 'https://www.youtube.com/channel/UCj6PcyLvpnIRT_2W_mwa9Aw/live', 'ogType': 'video.other', 'urlApplinksWeb': 'https://www.youtube.com/channel/UCj6PcyLvpnIRT_2W_mwa9Aw/live?feature=applinks', 'urlApplinksIos': 'vnd.youtube://www.youtube.com/channel/UCj6PcyLvpnIRT_2W_mwa9Aw/live?feature=applinks', 'urlApplinksAndroid': 'vnd.youtube://www.youtube.com/channel/UCj6PcyLvpnIRT_2W_mwa9Aw/live?feature=applinks', 'urlTwitterIos': 'vnd.youtube://www.youtube.com/channel/UCj6PcyLvpnIRT_2W_mwa9Aw/live?feature=twitter-deep-link', 'urlTwitterAndroid': 'vnd.youtube://www.youtube.com/channel/UCj6PcyLvpnIRT_2W_mwa9Aw/live?feature=twitter-deep-link', 'twitterCardType': 'player', 'twitterSiteHandle': '@YouTube', 'schemaDotOrgType': 'http://schema.org/VideoObject', 'noindex': True, 'unlisted': False, 'paid': False, 'familySafe': True, 'tags': ['noticia ultimo momento', 'breaking news', 'TN', 'Todo Noticias Argentina', 'todo noticias vivo', 'tn en vivo', 'tn en vivo online', 'tn noticias en vivo', 'tn en vivo todo noticias', 'la nacion', 'c5n en vivo', 'a24 en vivo', 'canal 26 en vivo', 'programas de tn', 'noticias en vivo', 'en vivo', 'canales de noticias', 'cronica tv en vivo', 'noticias en vivo argentina', 'tnt en vivo', 'telefe en vivo', 'canales en vivo', 'noticiero en vivo', 'tv en vivo', 'canales en vivo argentina', 'america tv en vivo', 'canal 7 en vivo'], 'availableCountries': ['AD', 'AE', 'AF', 'AG', 'AI', 'AL', 'AM', 'AO', 'AQ', 'AR', 'AS', 'AT', 'AU', 'AW', 'AX', 'AZ', 'BA', 'BB', 'BD', 'BE', 'BF', 'BG', 'BH', 'BI', 'BJ', 'BL', 'BM', 'BN', 'BO', 'BQ', 'BR', 'BS', 'BT', 'BV', 'BW', 'BY', 'BZ', 'CA', 'CC', 'CD', 'CF', 'CG', 'CH', 'CI', 'CK', 'CL', 'CM', 'CN', 'CO', 'CR', 'CU', 'CV', 'CW', 'CX', 'CY', 'CZ', 'DE', 'DJ', 'DK', 'DM', 'DO', 'DZ', 'EC', 'EE', 'EG', 'EH', 'ER', 'ES', 'ET', 'FI', 'FJ', 'FK', 'FM', 'FO', 'FR', 'GA', 'GB', 'GD', 'GE', 'GF', 'GG', 'GH', 'GI', 'GL', 'GM', 'GN', 'GP', 'GQ', 'GR', 'GS', 'GT', 'GU', 'GW', 'GY', 'HK', 'HM', 'HN', 'HR', 'HT', 'HU', 'ID', 'IE', 'IL', 'IM', 'IN', 'IO', 'IQ', 'IR', 'IS', 'IT', 'JE', 'JM', 'JO', 'JP', 'KE', 'KG', 'KH', 'KI', 'KM', 'KN', 'KP', 'KR', 'KW', 'KY', 'KZ', 'LA', 'LB', 'LC', 'LI', 'LK', 'LR', 'LS', 'LT', 'LU', 'LV', 'LY', 'MA', 'MC', 'MD', 'ME', 'MF', 'MG', 'MH', 'MK', 'ML', 'MM', 'MN', 'MO', 'MP', 'MQ', 'MR', 'MS', 'MT', 'MU', 'MV', 'MW', 'MX', 'MY', 'MZ', 'NA', 'NC', 'NE', 'NF', 'NG', 'NI', 'NL', 'NO', 'NP', 'NR', 'NU', 'NZ', 'OM', 'PA', 'PE', 'PF', 'PG', 'PH', 'PK', 'PL', 'PM', 'PN', 'PR', 'PS', 'PT', 'PW', 'PY', 'QA', 'RE', 'RO', 'RS', 'RU', 'RW', 'SA', 'SB', 'SC', 'SD', 'SE', 'SG', 'SH', 'SI', 'SJ', 'SK', 'SL', 'SM', 'SN', 'SO', 'SR', 'SS', 'ST', 'SV', 'SX', 'SY', 'SZ', 'TC', 'TD', 'TF', 'TG', 'TH', 'TJ', 'TK', 'TL', 'TM', 'TN', 'TO', 'TR', 'TT', 'TV', 'TW', 'TZ', 'UA', 'UG', 'UM', 'US', 'UY', 'UZ', 'VA', 'VC', 'VE', 'VG', 'VI', 'VN', 'VU', 'WF', 'WS', 'YE', 'YT', 'ZA', 'ZM', 'ZW'], 'pageOwnerDetails': {'name': 'Todo Noticias', 'externalChannelId': 'UCj6PcyLvpnIRT_2W_mwa9Aw', 'youtubeProfileUrl': 'http://www.youtube.com/channel/UCj6PcyLvpnIRT_2W_mwa9Aw'}, 'videoDetails': {'externalVideoId': 'wHn1_QVoXGM'}, 'embedDetails': {'iframeUrl': 'https://www.youtube.com/embed/live_stream?channel=UCj6PcyLvpnIRT_2W_mwa9Aw', 'flashUrl': 
'http://www.youtube.com/v/wHn1_QVoXGM?version=3&autohide=1', 'flashSecureUrl': 'https://www.youtube.com/v/wHn1_QVoXGM?version=3&autohide=1', 'width': 480, 'height': 360}, 'linkAlternates': [{'hrefUrl': 'https://m.youtube.com/channel/UCj6PcyLvpnIRT_2W_mwa9Aw/live'}, {'hrefUrl': 'android-app://com.google.android.youtube/http/youtube.com/channel/UCj6PcyLvpnIRT_2W_mwa9Aw/live'}, {'hrefUrl': 'ios-app://544007664/http/youtube.com/channel/UCj6PcyLvpnIRT_2W_mwa9Aw/live'}, {'hrefUrl': 'https://www.youtube.com/oembed?format=json&url=https%3A%2F%2Fwww.youtube.com%2Fchannel%2FUCj6PcyLvpnIRT_2W_mwa9Aw%2Flive', 'title': 'TN EN VIVO | Mirá la programación de @Todo Noticias durante las 24 horas del día', 'alternateType': 'application/json+oembed'}, {'hrefUrl': 'https://www.youtube.com/oembed?format=xml&url=https%3A%2F%2Fwww.youtube.com%2Fchannel%2FUCj6PcyLvpnIRT_2W_mwa9Aw%2Flive', 'title': 'TN EN VIVO | Mirá la programación de @Todo Noticias durante las 24 horas del día', 'alternateType': 'text/xml+oembed'}], 'viewCount': '384547261', 'publishDate': '1969-12-31', 'category': 'News & Politics', 'uploadDate': '1969-12-31'}} ``` - to reproduce: 1. `pip3 install --upgrade streamlink==3.0.3` 2. `streamlink https://youtube.com/channel/UCj6PcyLvpnIRT_2W_mwa9Aw/live` ### Debug log ```text streamlink -l debug https://www.youtube.com/channel/UCw49uOTAJjGUdoAeUcp7tOg/live [cli][debug] OS: Linux-5.10.0-9-amd64-x86_64-with-glibc2.31 [cli][debug] Python: 3.9.2 [cli][debug] Streamlink: 3.0.3 [cli][debug] Requests(2.26.0), Socks(1.7.1), Websocket(1.2.1) [cli][debug] Arguments: [cli][debug] url=https://www.youtube.com/channel/UCw49uOTAJjGUdoAeUcp7tOg/live [cli][debug] --loglevel=debug [cli][info] Found matching plugin youtube for URL https://www.youtube.com/channel/UCw49uOTAJjGUdoAeUcp7tOg/live error: Unable to validate result: Unable to validate key 'microformat': Key 'playerMicroformatRenderer' not found in {'microformatDataRenderer': {'urlCanonical': 'https://www.youtube.com/channel/UCw49uOTAJjGUdoAeUcp7tOg/live', 'title': 'Drum & Bass Non-Stop Bangers - To Vibe/Dance To', 'description': "Welcome to our new 'Drum & Bass Non-Stop Bangers' playlist. 
We’ll be rolling out the finest Hospital selections continuously so you can get your 174BPM fix w...", 'thumbnail': {'thumbnails': [{'url': 'https://i.ytimg.com/vi/jdkegu3Zexg/maxresdefault_live.jpg', 'width': 1280, 'height': 720}]}, 'siteName': 'YouTube', 'appName': 'YouTube', 'androidPackage': 'com.google.android.youtube', 'iosAppStoreId': '544007664', 'iosAppArguments': 'https://www.youtube.com/channel/UCw49uOTAJjGUdoAeUcp7tOg/live', 'ogType': 'video.other', 'urlApplinksWeb': 'https://www.youtube.com/channel/UCw49uOTAJjGUdoAeUcp7tOg/live?feature=applinks', 'urlApplinksIos': 'vnd.youtube://www.youtube.com/channel/UCw49uOTAJjGUdoAeUcp7tOg/live?feature=applinks', 'urlApplinksAndroid': 'vnd.youtube://www.youtube.com/channel/UCw49uOTAJjGUdoAeUcp7tOg/live?feature=applinks', 'urlTwitterIos': 'vnd.youtube://www.youtube.com/channel/UCw49uOTAJjGUdoAeUcp7tOg/live?feature=twitter-deep-link', 'urlTwitterAndroid': 'vnd.youtube://www.youtube.com/channel/UCw49uOTAJjGUdoAeUcp7tOg/live?feature=twitter-deep-link', 'twitterCardType': 'player', 'twitterSiteHandle': '@YouTube', 'schemaDotOrgType': 'http://schema.org/VideoObject', 'noindex': True, 'unlisted': False, 'paid': False, 'familySafe': True, 'tags': ['hospital Records', 'Drum & Bass', 'dnb', 'liquid dnb', 'liquid funk', 'liquid drum and bass', 'drum and bass', 'liquid', 'D&B', 'Hospital Records', 'Jungle', 'Hospitality', 'Electronic', 'Junglist', 'Hospitality DNB', 'playlist', 'radio stream', '24/7 radio stream', 'mix', 'continuous mix', 'drum and bass playlist', 'Bangers', 'Rave', 'Dance', 'Drum & Bass Non-Stop Bangers', 'Non Stop Music', 'Non Stop Bangers', 'Non Stop', '24/7 music', '24/7 drum and bass', 'Drum n Bass', 'DnB', 'D+B', 'Electronic Stream', 'DnB Stream', 'D&B Stream', 'D+B Stream', 'Stream'], 'availableCountries': ['AD', 'AE', 'AF', 'AG', 'AI', 'AL', 'AM', 'AO', 'AQ', 'AR', 'AS', 'AT', 'AU', 'AW', 'AX', 'AZ', 'BA', 'BB', 'BD', 'BE', 'BF', 'BG', 'BH', 'BI', 'BJ', 'BL', 'BM', 'BN', 'BO', 'BQ', 'BR', 'BS', 'BT', 'BV', 'BW', 'BY', 'BZ', 'CA', 'CC', 'CD', 'CF', 'CG', 'CH', 'CI', 'CK', 'CL', 'CM', 'CN', 'CO', 'CR', 'CU', 'CV', 'CW', 'CX', 'CY', 'CZ', 'DE', 'DJ', 'DK', 'DM', 'DO', 'DZ', 'EC', 'EE', 'EG', 'EH', 'ER', 'ES', 'ET', 'FI', 'FJ', 'FK', 'FM', 'FO', 'FR', 'GA', 'GB', 'GD', 'GE', 'GF', 'GG', 'GH', 'GI', 'GL', 'GM', 'GN', 'GP', 'GQ', 'GR', 'GS', 'GT', 'GU', 'GW', 'GY', 'HK', 'HM', 'HN', 'HR', 'HT', 'HU', 'ID', 'IE', 'IL', 'IM', 'IN', 'IO', 'IQ', 'IR', 'IS', 'IT', 'JE', 'JM', 'JO', 'JP', 'KE', 'KG', 'KH', 'KI', 'KM', 'KN', 'KP', 'KR', 'KW', 'KY', 'KZ', 'LA', 'LB', 'LC', 'LI', 'LK', 'LR', 'LS', 'LT', 'LU', 'LV', 'LY', 'MA', 'MC', 'MD', 'ME', 'MF', 'MG', 'MH', 'MK', 'ML', 'MM', 'MN', 'MO', 'MP', 'MQ', 'MR', 'MS', 'MT', 'MU', 'MV', 'MW', 'MX', 'MY', 'MZ', 'NA', 'NC', 'NE', 'NF', 'NG', 'NI', 'NL', 'NO', 'NP', 'NR', 'NU', 'NZ', 'OM', 'PA', 'PE', 'PF', 'PG', 'PH', 'PK', 'PL', 'PM', 'PN', 'PR', 'PS', 'PT', 'PW', 'PY', 'QA', 'RE', 'RO', 'RS', 'RU', 'RW', 'SA', 'SB', 'SC', 'SD', 'SE', 'SG', 'SH', 'SI', 'SJ', 'SK', 'SL', 'SM', 'SN', 'SO', 'SR', 'SS', 'ST', 'SV', 'SX', 'SY', 'SZ', 'TC', 'TD', 'TF', 'TG', 'TH', 'TJ', 'TK', 'TL', 'TM', 'TN', 'TO', 'TR', 'TT', 'TV', 'TW', 'TZ', 'UA', 'UG', 'UM', 'US', 'UY', 'UZ', 'VA', 'VC', 'VE', 'VG', 'VI', 'VN', 'VU', 'WF', 'WS', 'YE', 'YT', 'ZA', 'ZM', 'ZW'], 'pageOwnerDetails': {'name': 'Hospital Records', 'externalChannelId': 'UCw49uOTAJjGUdoAeUcp7tOg', 'youtubeProfileUrl': 'http://www.youtube.com/user/HospitalRecords'}, 'videoDetails': {'externalVideoId': 'jdkegu3Zexg'}, 'embedDetails': {'iframeUrl': 
'https://www.youtube.com/embed/live_stream?channel=UCw49uOTAJjGUdoAeUcp7tOg', 'flashUrl': 'http://www.youtube.com/v/jdkegu3Zexg?version=3&autohide=1', 'flashSecureUrl': 'https://www.youtube.com/v/jdkegu3Zexg?version=3&autohide=1', 'width': 480, 'height': 360}, 'linkAlternates': [{'hrefUrl': 'https://m.youtube.com/channel/UCw49uOTAJjGUdoAeUcp7tOg/live'}, {'hrefUrl': 'android-app://com.google.android.youtube/http/youtube.com/channel/UCw49uOTAJjGUdoAeUcp7tOg/live'}, {'hrefUrl': 'ios-app://544007664/http/youtube.com/channel/UCw49uOTAJjGUdoAeUcp7tOg/live'}, {'hrefUrl': 'https://www.youtube.com/oembed?format=json&url=https%3A%2F%2Fwww.youtube.com%2Fchannel%2FUCw49uOTAJjGUdoAeUcp7tOg%2Flive', 'title': 'Drum & Bass Non-Stop Bangers - To Vibe/Dance To', 'alternateType': 'application/json+oembed'}, {'hrefUrl': 'https://www.youtube.com/oembed?format=xml&url=https%3A%2F%2Fwww.youtube.com%2Fchannel%2FUCw49uOTAJjGUdoAeUcp7tOg%2Flive', 'title': 'Drum & Bass Non-Stop Bangers - To Vibe/Dance To', 'alternateType': 'text/xml+oembed'}], 'viewCount': '153133', 'publishDate': '1969-12-31', 'category': 'Music', 'uploadDate': '1969-12-31'}} ```
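The patch above addresses this by accepting either microformat container. As a toy illustration using streamlink's own validation helpers (requires streamlink installed; the input data is shortened to just the relevant key):

```python
from streamlink.plugin.api import validate

# Accept the category from either container, since /channel/<id>/live pages
# return "microformatDataRenderer" instead of "playerMicroformatRenderer".
schema_microformat = validate.Schema(
    validate.any(
        validate.all({"playerMicroformatRenderer": dict}, validate.get("playerMicroformatRenderer")),
        validate.all({"microformatDataRenderer": dict}, validate.get("microformatDataRenderer")),
    ),
    {"category": str},
    validate.get("category"),
)

print(schema_microformat.validate({"playerMicroformatRenderer": {"category": "News & Politics"}}))
print(schema_microformat.validate({"microformatDataRenderer": {"category": "Music"}}))
```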
2021-11-29T22:23:40
streamlink/streamlink
4,232
streamlink__streamlink-4232
[ "3462" ]
dd63e63c81f0fe3b7acf79ab094ec393b516b981
diff --git a/src/streamlink/plugins/pluto.py b/src/streamlink/plugins/pluto.py --- a/src/streamlink/plugins/pluto.py +++ b/src/streamlink/plugins/pluto.py @@ -1,99 +1,171 @@ import logging import re +from urllib.parse import parse_qs, urljoin from uuid import uuid4 from streamlink.plugin import Plugin, pluginmatcher from streamlink.plugin.api import validate -from streamlink.stream.ffmpegmux import MuxedStream -from streamlink.stream.hls import HLSStream +from streamlink.stream.hls import HLSStream, HLSStreamReader, HLSStreamWriter from streamlink.utils.url import update_qsd log = logging.getLogger(__name__) -@pluginmatcher(re.compile(r''' +class PlutoHLSStreamWriter(HLSStreamWriter): + ad_re = re.compile(r"_ad/creative/|dai\.google\.com") + + def should_filter_sequence(self, sequence): + return self.ad_re.search(sequence.segment.uri) is not None or super().should_filter_sequence(sequence) + + +class PlutoHLSStreamReader(HLSStreamReader): + __writer__ = PlutoHLSStreamWriter + + +class PlutoHLSStream(HLSStream): + __shortname__ = "hls-pluto" + __reader__ = PlutoHLSStreamReader + + +@pluginmatcher(re.compile(r""" https?://(?:www\.)?pluto\.tv/(?:\w{2}/)?(?: - live-tv/(?P<slug_live>[^/?]+)/?$ + live-tv/(?P<slug_live>[^/]+) | - on-demand/series/(?P<slug_series>[^/]+)/season/\d+/episode/(?P<slug_episode>[^/]+)$ + on-demand/series/(?P<slug_series>[^/]+)(?:/season/\d+)?/episode/(?P<slug_episode>[^/]+) | - on-demand/movies/(?P<slug_movies>[^/]+)$ - ) -''', re.VERBOSE)) + on-demand/movies/(?P<slug_movies>[^/]+) + )/?$ +""", re.VERBOSE)) class Pluto(Plugin): - def _schema_media(self, slug): - return validate.Schema( - [{ - 'name': str, - 'slug': str, - validate.optional('stitched'): { - 'urls': [ - { - 'type': str, - 'url': validate.url(), - } - ] - } - }], - validate.filter(lambda k: k['slug'].lower() == slug.lower()), - validate.get(0), + def _get_api_data(self, type, slug, filter=None): + log.debug(f"slug={slug}") + app_version = self.session.http.get(self.url, schema=validate.Schema( + validate.parse_html(), + validate.xml_xpath_string(".//head/meta[@name='appVersion']/@content"), + validate.any(None, str), + )) + if not app_version: + return + + log.debug(f"app_version={app_version}") + + return self.session.http.get( + "https://boot.pluto.tv/v4/start", + params={ + "appName": "web", + "appVersion": app_version, + "deviceVersion": "94.0.0", + "deviceModel": "web", + "deviceMake": "firefox", + "deviceType": "web", + "clientID": str(uuid4()), + "clientModelNumber": "1.0", + type: slug, + }, + schema=validate.Schema( + validate.parse_json(), { + "servers": { + "stitcher": validate.url(), + }, + validate.optional("EPG"): [{ + "name": str, + "id": str, + "slug": str, + "stitched": { + "path": str, + }, + }], + validate.optional("VOD"): [{ + "name": str, + "id": str, + "slug": str, + "genre": str, + "stitched": { + "path": str, + }, + validate.optional("seasons"): [{ + "episodes": validate.all([{ + "name": str, + "_id": str, + "slug": str, + "stitched": { + "path": str, + }, + }], validate.filter(lambda k: filter and k["slug"] == filter)), + }], + }], + "sessionToken": str, + "stitcherParams": str, + }, + ), ) - def _get_streams(self): - data = None + def _get_playlist(self, host, path, params, token): + qs = parse_qs(params) + qs["jwt"] = token + yield from PlutoHLSStream.parse_variant_playlist(self.session, update_qsd(urljoin(host, path), qs)).items() + @staticmethod + def _get_media_data(data, key, slug): + media = data.get(key) + if media and media[0]["slug"] == slug: + return media[0] + + def 
_get_streams(self): m = self.match.groupdict() - if m['slug_live']: - res = self.session.http.get('https://api.pluto.tv/v2/channels') - data = self.session.http.json(res, - schema=self._schema_media(m['slug_live'])) - elif m['slug_series'] and m['slug_episode']: - res = self.session.http.get(f'http://api.pluto.tv/v3/vod/slugs/{m["slug_series"]}') - data = self.session.http.json( - res, - schema=validate.Schema( - {'seasons': validate.all( - [{'episodes': self._schema_media(m['slug_episode'])}], - validate.filter(lambda k: k['episodes'] is not None))}, - validate.get('seasons'), - validate.get(0), - validate.any(None, validate.get('episodes')) - ), - ) - elif m['slug_movies']: - res = self.session.http.get('https://api.pluto.tv/v3/vod/categories', - params={'includeItems': 'true', 'deviceType': 'web'}) - data = self.session.http.json( - res, - schema=validate.Schema( - {'categories': validate.all( - [{'items': self._schema_media(m['slug_movies'])}], - validate.filter(lambda k: k['items'] is not None))}, - validate.get('categories'), - validate.get(0), - validate.any(None, validate.get('items')), - ), - ) - - log.trace(f'{data!r}') - if data is None or not data.get('stitched'): + if m["slug_live"]: + data = self._get_api_data("channelSlug", m["slug_live"]) + media = self._get_media_data(data, "EPG", m["slug_live"]) + if not media: + return + + self.id = media["id"] + self.title = media["name"] + path = media["stitched"]["path"] + + elif m["slug_series"] and m["slug_episode"]: + data = self._get_api_data("episodeSlugs", m["slug_series"], filter=m["slug_episode"]) + media = self._get_media_data(data, "VOD", m["slug_series"]) + if not media or "seasons" not in media: + return + + for season in media["seasons"]: + if season["episodes"]: + episode = season["episodes"][0] + if episode["slug"] == m["slug_episode"]: + break + else: + return + + self.author = media["name"] + self.category = media["genre"] + self.id = episode["_id"] + self.title = episode["name"] + path = episode["stitched"]["path"] + + elif m["slug_movies"]: + data = self._get_api_data("episodeSlugs", m["slug_movies"]) + media = self._get_media_data(data, "VOD", m["slug_movies"]) + if not media: + return + + self.category = media["genre"] + self.id = media["id"] + self.title = media["name"] + path = media["stitched"]["path"] + + else: return - self.title = data['name'] - stream_url_no_sid = data['stitched']['urls'][0]['url'] - device_id = str(uuid4()) - stream_url = update_qsd(stream_url_no_sid, { - 'deviceId': device_id, - 'sid': device_id, - 'deviceType': 'web', - 'deviceMake': 'Firefox', - 'deviceModel': 'Firefox', - 'appName': 'web', - }) - - self.session.set_option('ffmpeg-fout', 'mpegts') - for q, s in HLSStream.parse_variant_playlist(self.session, stream_url).items(): - yield q, MuxedStream(self.session, s) + log.trace(f"data={data!r}") + log.debug(f"path={path}") + + return self._get_playlist( + data["servers"]["stitcher"], + path, + data["stitcherParams"], + data["sessionToken"], + ) __plugin__ = Pluto
diff --git a/tests/plugins/test_pluto.py b/tests/plugins/test_pluto.py --- a/tests/plugins/test_pluto.py +++ b/tests/plugins/test_pluto.py @@ -23,7 +23,13 @@ class TestPluginCanHandleUrlPluto(PluginCanHandleUrl): 'http://www.pluto.tv/lc/on-demand/series/leverage/season/1/episode/the-nigerian-job-2009-1-1', 'http://pluto.tv/lc/on-demand/series/fear-factor-usa-(lf)/season/5/episode/underwater-safe-bob-car-ramp-2004-5-3', 'https://www.pluto.tv/lc/on-demand/movies/dr.-no-1963-1-1', + 'https://www.pluto.tv/lc/on-demand/movies/dr.-no-1963-1-1/', 'http://pluto.tv/lc/on-demand/movies/the-last-dragon-(1985)-1-1', + 'http://pluto.tv/lc/on-demand/movies/the-last-dragon-(1985)-1-1/', + 'https://pluto.tv/en/on-demand/series/great-british-menu-ptv1/episode/north-west-fish-2009-5-7-ptv1', + 'https://pluto.tv/en/on-demand/series/great-british-menu-ptv1/episode/north-west-fish-2009-5-7-ptv1/', + 'https://www.pluto.tv/en/on-demand/series/great-british-menu-ptv1/episode/north-west-fish-2009-5-7-ptv1', + 'https://www.pluto.tv/en/on-demand/series/great-british-menu-ptv1/episode/north-west-fish-2009-5-7-ptv1/', ] should_not_match = [ @@ -47,4 +53,8 @@ class TestPluginCanHandleUrlPluto(PluginCanHandleUrl): 'http://pluto.tv/lc/on-demand/series/dr.-no-1963-1-1', 'http://pluto.tv/lc/on-demand/movies/leverage/season/1/episode/the-nigerian-job-2009-1-1', 'http://pluto.tv/lc/on-demand/fear-factor-usa-(lf)/season/5/episode/underwater-safe-bob-car-ramp-2004-5-3', + 'https://pluto.tv/en/on-demand/series/great-british-menu-ptv1/episode/north-west-fish-2009-5-7-ptv1/extra', + 'https://pluto.tv/en/on-demand/series/great-british-menu-ptv1/season/5/episode/north-west-fish-2009-5-7-ptv1/extra', + 'https://www.pluto.tv/en/on-demand/series/great-british-menu-ptv1/episode/north-west-fish-2009-5-7-ptv1/extra', + 'https://www.pluto.tv/en/on-demand/series/great-british-menu-ptv1/season/5/episode/north-west-fish-2009-5-7-ptv1/extra', ]
PlutoTV stops streaming when switching to a commercial <!-- Thanks for reporting a bug! USE THE TEMPLATE. Otherwise your bug report may be rejected. First, see the contribution guidelines: https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink Also check the list of open and closed bug reports: https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22bug%22 Please see the text preview to avoid unnecessary formatting errors. --> ## Bug Report <!-- Replace the space character between the square brackets with an x in order to check the boxes --> - [x ] This is a bug report and I have read the contribution guidelines. - [ ] I am using the latest development version from the master branch. ### Description <!-- Explain the bug as thoroughly as you can. Don't leave out information which is necessary for us to reproduce and debug this issue. --> Streaming stops whenever plutoTV switches to a local commercial. I have seen this on various pluto news channels such as NEWSY and Newsmax. After a news segment, the channel will switch to local commercial and a message from pluto states that they will be right back - but the stream hangs at that point, never to return unless, the stream is restarted. ### Expected / Actual behavior <!-- What do you expect to happen, and what is actually happening? --> I expect that if the channel switches segments that the stream will continue to play as normal, but it does not do this, the stream stops ### Reproduction steps / Explicit stream URLs to test <!-- How can we reproduce this? Please note the exact steps below using the list format supplied. If you need more steps please add them. --> 1. ...Tune to plutotv- Newsy and watch for 5-10 minutes until the segment changes 2. ...for me at the change, streaming stops 3. ... ### Log output <!-- TEXT LOG OUTPUT IS REQUIRED for a bug report! Use the `--loglevel debug` parameter and avoid using parameters which suppress log output. https://streamlink.github.io/cli.html#cmdoption-l Make sure to **remove usernames and passwords** You can copy the output to https://gist.github.com/ or paste it below. Don't post screenshots of the log output and instead copy the text from your terminal application. --> ``` REPLACE THIS TEXT WITH THE LOG OUTPUT ``` ### Additional comments, etc. [Love Streamlink? Please consider supporting our collective. Thanks!](https://opencollective.com/streamlink/donate)
What video player are you using? Some will not handle the stream delay as well as others. Have you tried mpv? Hi, I am using both the web-based (Chrome) Nextpvr client and Kodi as a client to nextpvr. I am using Nextpvr v5 (latest rev) as the program that is interfacing with Streamlink. Darryl Markowitz I don't think your issue is with the PlutoTV plugin. I've got it streaming right now direct to mpv and it seems to be working fine. I don't know anything about the other software you're using, so you're probably going to have to seek help elsewhere I'm afraid. Ian, I am kind of new at this, so I apologize in advance. I am using ffmpeg to start the stream, this is the call: C:\Program Files\NextPVR\Other\ffmpeg.exe -y -analyzeduration 10M -i http://127.0.0.1:8866/live?channeloid=8366&transcoder=b048e544-2dae-4769-9455-b96dec75c990&client=f605f5a0-6d55-4a10-a25c-39c15d86bf26 -map_metadata -1 -threads 0 -ignore_unknown -map 0:v:0? -map 0:a:0 -map -0:s -vcodec copy -acodec copy -hls_time 3 -start_number 0 -hls_list_size 403 -y "C:\Users\Public\NPVR-data\web\temp\Top Stories by Newsy 2-b048e5442dae47699455b96dec75c990.m3u8" Not sure if this helps any. For me It varies by channel music channels are the worst. streamlink https://pluto.tv/live-tv/mtv-spankin-new best streamlink https://pluto.tv/live-tv/vevo-pop best For me it is when this pluto image comes ![image](https://user-images.githubusercontent.com/2148031/103225621-57610f00-48f8-11eb-9679-77d6c5cc3d98.png) followed by a pluto commercial not a channel commercial. These fail with mpv, ffplay, vlc etc.
I can't seem to duplicate it with Newsy but the Pluto commercials aren't as frequent. Post the **debug log** output (`--loglevel debug`), as mentioned in the issue template. If your player crashes during an ad transition then it might be because of a stream discontinuity, which is not supported by Streamlink. I just got an `ffmpeg pipe copy` failure. Running with `--ffmpeg-verbose` also. I think it gets stuck at this point: ``` $ streamlink -l debug --ffmpeg-verbose https://pluto.tv/live-tv/mtv-spankin-new 240p [cli][debug] OS: Linux-5.4.0-58-generic-x86_64-with-glibc2.29 [cli][debug] Python: 3.8.5 [cli][debug] Streamlink: 2.0.0+1.g374130a [cli][debug] Requests(2.25.1), Socks(1.7.1), Websocket(0.57.0) [cli][info] Found matching plugin pluto for URL https://pluto.tv/live-tv/mtv-spankin-new [utils.l10n][debug] Language code: en_GB [cli][info] Available streams: 240p (worst), 480p, 720p_alt, 720p (best) [cli][info] Opening stream: 240p (muxed-stream) [stream.ffmpegmux][debug] Opening hls substream [stream.hls][debug] Reloading playlist [stream.hls][debug] First Sequence: 267960612; Last Sequence: 267960617 [stream.hls][debug] Start offset: 0; Duration: None; Start Sequence: 267960615; End Sequence: None [stream.hls][debug] Adding segment 267960615 to queue [stream.hls][debug] Adding segment 267960616 to queue [stream.hls][debug] Adding segment 267960617 to queue [stream.ffmpegmux][debug] ffmpeg command: ffmpeg -nostats -y -i /tmp/ffmpeg-290343-476 -c:v copy -c:a copy -map 0 -f mpegts pipe:1 [stream.ffmpegmux][debug] Starting copy to pipe: /tmp/ffmpeg-290343-476 [cli][debug] Pre-buffering 8192 bytes ffmpeg version 4.2.4-1ubuntu0.1 Copyright (c) 2000-2020 the FFmpeg developers built with gcc 9 (Ubuntu 9.3.0-10ubuntu2) configuration: --prefix=/usr --extra-version=1ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared WARNING: library configuration mismatch avcodec configuration: --prefix=/usr --extra-version=1ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm 
--enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared --enable-version3 --disable-doc --disable-programs --enable-libaribb24 --enable-liblensfun --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libtesseract --enable-libvo_amrwbenc libavutil 56. 31.100 / 56. 31.100 libavcodec 58. 54.100 / 58. 54.100 libavformat 58. 29.100 / 58. 29.100 libavdevice 58. 8.100 / 58. 8.100 libavfilter 7. 57.100 / 7. 57.100 libavresample 4. 0. 0 / 4. 0. 0 libswscale 5. 5.100 / 5. 5.100 libswresample 3. 5.100 / 3. 5.100 libpostproc 55. 5.100 / 55. 5.100 [stream.hls][debug] Download of segment 267960615 complete Input #0, mpegts, from '/tmp/ffmpeg-290343-476': Duration: N/A, start: 38068.935844, bitrate: N/A Program 1 Stream #0:0[0x7d0](eng): Audio: aac (LC) ([15][0][0][0] / 0x000F), 48000 Hz, stereo, fltp, 123 kb/s Stream #0:1[0x3e8]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv, progressive), 426x240 [SAR 1:1 DAR 71:40], Closed Captions, 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc Output #0, mpegts, to 'pipe:1': Metadata: encoder : Lavf58.29.100 Stream #0:0(eng): Audio: aac (LC) ([15][0][0][0] / 0x000F), 48000 Hz, stereo, fltp, 123 kb/s Stream #0:1: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv, progressive), 426x240 [SAR 1:1 DAR 71:40], q=2-31, 29.97 fps, 29.97 tbr, 90k tbn, 90k tbc Stream mapping: Stream #0:0 -> #0:0 (copy) Stream #0:1 -> #0:1 (copy) Press [q] to stop, [?] for help [cli][info] Starting player: mpv [cli.output][debug] Opening subprocess: mpv --force-media-title=https://pluto.tv/live-tv/mtv-spankin-new - [cli][debug] Writing stream to output [stream.hls][debug] Reloading playlist [stream.hls][debug] Download of segment 267960616 complete [stream.hls][debug] Adding segment 267960618 to queue [stream.hls][debug] Adding segment 267960619 to queue [stream.hls][debug] Download of segment 267960617 complete ... stream.hls][debug] Reloading playlist [stream.hls][debug] Adding segment 267960672 to queue [stream.hls][debug] Adding segment 267960673 to queue [stream.hls][debug] Download of segment 267960655 complete [mpegts @ 0x55c85b7ee740] New video stream 0:2 at pos:16900636 and DTS:1.42133s [mpegts @ 0x55c85b7ee740] New audio stream 0:3 at pos:16907968 and DTS:1.4s ... 
[stream.hls][debug] Reloading playlist [stream.hls][debug] Adding segment 267960679 to queue [stream.hls][debug] Download of segment 267960658 complete [stream.hls][debug] Adding segment 267960680 to queue [mpegts @ 0x55c85b7ee740] DTS 129840 < 1474320 out of order [stream.hls][debug] Download of segment 267960659 complete [stream.hls][debug] Adding segment 267960681 to queue [stream.ffmpegmux][error] Pipe copy aborted: /tmp/ffmpeg-290343-476 [stream.hls][debug] Download of segment 267960660 complete [stream.hls][debug] Reloading playlist [stream.hls][debug] Adding segment 267960694 to queue [stream.hls][debug] Download of segment 267960661 complete [stream.hls][debug] Adding segment 267960695 to queue ``` Streamlink is carrying on with the segment downloads after. Just to note: I've edited the post above to include a bit more info in the log output. The new stream mappings are probably the likely cause I imagine. they just use different AD types **redirector** ADs which currently won't work _(used for Livechannels, but can be replaced with a valid segment)_ `uri=https://...-.akamaized.net/../redirector/...?url=https%3A%2F%2Fplutotv-.akamaized.net...` `title='AD'` **creative** ADs which seems to work `uri=https://....pluto.tv/..._ad/creative/..._ad/720p/.../hls/hls_...-00000.ts` `title=None` @back-to Following up on your last comment, I wonder if you can explain a little further: > they just use different AD types What is an AD type, please? > redirector ADs which currently won't work Is there a plan to get them to work? For my own use case, I'm trying to save the output of a Pluto TV stream to a file. Would it be possible to get `streamlink` to save its output to a different file each time the stream source changes? If it is, would that help because, in theory, the adverts would get saved to a different file and I'd "just" need to stitch together the programme parts? I've tried using `ffmpeg -c copy` to repair a file copy of a stream from Pluto but playback breaks a few minutes into the adverts. 
I've also tried `-c:v libx264 -crf 22 -preset slow` to re-mux the file but that doesn't work either :( I'm also getting slightly different output when running `streamlink` and `ffmpeg` in debug mode compared with @mkbloke: ``` streamlink.exe -l debug --ffmpeg-verbose https://pluto.tv/en/live-tv/baywatch-gb best [cli][debug] OS: Windows 10 [cli][debug] Python: 3.9.8 [cli][debug] Streamlink: 3.0.2 [cli][debug] Requests(2.26.0), Socks(1.7.1), Websocket(1.2.1) [cli][debug] Arguments: [cli][debug] url=https://pluto.tv/en/live-tv/baywatch-gb [cli][debug] stream=['best'] [cli][debug] --loglevel=debug [cli][debug] --ffmpeg-ffmpeg=C:\Program Files (x86)\Streamlink\ffmpeg\ffmpeg.exe [cli][debug] --ffmpeg-verbose=True [cli][info] Found matching plugin pluto for URL https://pluto.tv/en/live-tv/baywatch-gb [utils.l10n][debug] Language code: en_GB [cli][info] Available streams: 570k (worst), 1000k, 1500k, 2100k, 3100k (best) [cli][info] Opening stream: 3100k (muxed-stream) [stream.ffmpegmux][debug] Opening hls substream [stream.hls][debug] Reloading playlist [utils.named_pipe][info] Creating pipe streamlinkpipe-5904-1-3924 [stream.ffmpegmux][debug] ffmpeg command: C:\Program Files (x86)\Streamlink\ffmpeg\ffmpeg.exe -nostats -y -i \\.\pipe\streamlinkpipe-5904-1-3924 -c:v copy -c:a copy -map 0 -f mpegts pipe:1 [stream.ffmpegmux][debug] Starting copy to pipe: \\.\pipe\streamlinkpipe-5904-1-3924 [cli][debug] Pre-buffering 8192 bytes ffmpeg version n4.4.1-20211030 Copyright (c) 2000-2021 the FFmpeg developers built with gcc 10-win32 (GCC) 20210610 configuration: --prefix=/ffbuild/prefix --pkg-config-flags=--static --pkg-config=pkg-config --cross-prefix=i686-w64-mingw32- --arch=i686 --target-os=mingw32 --enable-gpl --enable-version3 --disable-debug --disable-w32threads --enable-pthreads --enable-iconv --enable-libxml2 --enable-zlib --enable-libfreetype --enable-libfribidi --enable-gmp --enable-lzma --enable-fontconfig --enable-libvorbis --enable-opencl --enable-libvmaf --enable-vulkan --disable-libxcb --disable-xlib --enable-amf --enable-libaom --enable-avisynth --enable-libdav1d --disable-libdavs2 --disable-libfdk-aac --enable-ffnvcodec --enable-cuda-llvm --disable-frei0r --enable-libglslang --enable-libgme --enable-libass --enable-libbluray --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvpx --enable-libwebp --enable-lv2 --enable-libmfx --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --disable-librav1e --enable-librubberband --enable-schannel --enable-sdl2 --enable-libsoxr --enable-libsrt --disable-libsvtav1 --enable-libtwolame --disable-libuavs3d --disable-libdrm --disable-vaapi --enable-libvidstab --enable-libx264 --enable-libx265 --disable-libxavs2 --enable-libxvid --enable-libzimg --enable-libzvbi --extra-cflags=-DLIBTWOLAME_STATIC --extra-cxxflags= --extra-ldflags=-pthread --extra-ldexeflags= --extra-libs=-lgomp --extra-version=20211030 libavutil 56. 70.100 / 56. 70.100 libavcodec 58.134.100 / 58.134.100 libavformat 58. 76.100 / 58. 76.100 libavdevice 58. 13.100 / 58. 13.100 libavfilter 7.110.100 / 7.110.100 libswscale 5. 9.100 / 5. 9.100 libswresample 3. 9.100 / 3. 9.100 libpostproc 55. 9.100 / 55. 
9.100 [stream.hls][debug] Segments in this playlist are encrypted [stream.hls][debug] First Sequence: 2; Last Sequence: 6 [stream.hls][debug] Start offset: 0; Duration: None; Start Sequence: 4; End Sequence: None [stream.hls][debug] Adding segment 4 to queue [stream.hls][debug] Adding segment 5 to queue [stream.hls][debug] Adding segment 6 to queue [stream.hls][debug] Segment 4 complete [stream.hls][debug] Segment 5 complete Input #0, mpegts, from '\\.\pipe\streamlinkpipe-5904-1-3924': Duration: N/A, start: 16.466667, bitrate: N/A Program 1 Metadata: service_name : Service service_provider: Hybrik Stream #0:0[0x100]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(progressive), 1280x720 [SAR 1:1 DAR 16:9], 30 fps, 30 tbr, 90k tbn, 60 tbc Stream #0:1[0x101]: Audio: aac (LC) ([15][0][0][0] / 0x000F), 48000 Hz, stereo, fltp, 96 kb/s Output #0, mpegts, to 'pipe:1': Metadata: encoder : Lavf58.76.100 Stream #0:0: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(progressive), 1280x720 [SAR 1:1 DAR 16:9], q=2-31, 30 fps, 30 tbr, 90k tbn, 90k tbc Stream #0:1: Audio: aac (LC) ([15][0][0][0] / 0x000F), 48000 Hz, stereo, fltp, 96 kb/s Stream mapping: Stream #0:0 -> #0:0 (copy) Stream #0:1 -> #0:1 (copy) Press [q] to stop, [?] for help [cli][info] Starting player: "C:\Program Files\VideoLAN\VLC\vlc.exe" [cli.output][debug] Opening subprocess: "C:\Program Files\VideoLAN\VLC\vlc.exe" --input-title-format https://pluto.tv/en/live-tv/baywatch-gb - [stream.hls][debug] Segment 6 complete [cli][debug] Writing stream to output [stream.hls][debug] Reloading playlist [stream.hls][debug] Segments in this playlist are encrypted [stream.hls][debug] Adding segment 7 to queue [stream.hls][debug] Segment 7 complete [mpegts @ 05a28980] New video stream 0:2 at pos:4954176 and DTS:0.111111s [stream.hls][debug] Reloading playlist [stream.hls][debug] Segments in this playlist are encrypted [stream.hls][debug] Adding segment 8 to queue [stream.hls][debug] Segment 8 complete [stream.hls][debug] Reloading playlist [stream.hls][debug] Segments in this playlist are encrypted [stream.hls][debug] Adding segment 9 to queue [stream.hls][debug] Segment 9 complete [stream.hls][debug] Reloading playlist [stream.hls][debug] Segments in this playlist are encrypted [stream.hls][debug] Adding segment 10 to queue [stream.hls][debug] Segment 10 complete [stream.hls][debug] Reloading playlist [stream.hls][debug] Segments in this playlist are encrypted [stream.hls][debug] Adding segment 11 to queue [stream.hls][debug] Segment 11 complete [stream.hls][debug] Reloading playlist [stream.hls][debug] Segments in this playlist are encrypted [stream.hls][debug] Adding segment 12 to queue [stream.hls][debug] Segment 12 complete [stream.hls][debug] Reloading playlist [stream.hls][debug] Segments in this playlist are encrypted [stream.hls][debug] Adding segment 13 to queue [stream.hls][debug] Adding segment 14 to queue [stream.hls][debug] Segment 13 complete [stream.hls][debug] Segment 14 complete [stream.hls][debug] Reloading playlist [stream.hls][debug] Segments in this playlist are encrypted [stream.hls][debug] Adding segment 15 to queue [stream.hls][debug] Segment 15 complete [stream.hls][debug] Reloading playlist [stream.hls][debug] Segments in this playlist are encrypted [stream.hls][debug] Adding segment 16 to queue [stream.hls][debug] Segment 16 complete [stream.hls][debug] Reloading playlist [stream.hls][debug] Segments in this playlist are encrypted [stream.hls][debug] Adding segment 17 to queue [stream.hls][debug] Segment 17 
complete [stream.hls][debug] Reloading playlist [stream.hls][debug] Segments in this playlist are encrypted [stream.hls][debug] Adding segment 18 to queue [stream.hls][debug] Segment 18 complete [stream.hls][debug] Reloading playlist [stream.hls][debug] Segments in this playlist are encrypted [stream.hls][debug] Adding segment 19 to queue [stream.hls][debug] Segment 19 complete [stream.hls][debug] Reloading playlist [stream.hls][debug] Segments in this playlist are encrypted ``` If `streamlink` is started during a programme, VLC seems to be OK showing the video and audio until the end of the first advert. After that, it is audio only. Once the main programme restarts, stopping and restarting `streamlink` resumes playback of video and audio, although the output still reports "Segments in this playlist are encrypted" so I'm not sure how relevant that is.
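To make the ad-type distinction from the comments above concrete: both kinds of ad segments can be recognised purely from the segment URI. A small sketch follows, with patterns lifted from the example URIs quoted in this thread; real playlists may use other hosts or paths, so treat the patterns as assumptions rather than an exhaustive rule.

```python
# Sketch: classifying Pluto segment URIs by the patterns described in the comments.
import re

# Patterns taken from the example URIs quoted above ("_ad/creative/" and "/redirector/");
# real playlists may use additional hosts or paths.
AD_RE = re.compile(r"_ad/creative/|/redirector/")

def is_ad_segment(uri: str) -> bool:
    """Return True if the segment URI looks like an ad segment."""
    return AD_RE.search(uri) is not None

print(is_ad_segment("https://siloh.pluto.tv/xyz_ad/creative/abc_ad/720p/hls/hls_450-00000.ts"))  # True
print(is_ad_segment("https://example.akamaized.net/live/redirector/seg.ts?url=..."))             # True
print(is_ad_segment("https://example.akamaized.net/live/channel/seg_1234.ts"))                   # False
```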
2021-12-04T01:22:03
streamlink/streamlink
4,238
streamlink__streamlink-4238
[ "4236" ]
dd63e63c81f0fe3b7acf79ab094ec393b516b981
diff --git a/src/streamlink/plugins/ustreamtv.py b/src/streamlink/plugins/ustreamtv.py --- a/src/streamlink/plugins/ustreamtv.py +++ b/src/streamlink/plugins/ustreamtv.py @@ -57,7 +57,7 @@ def url(self, base: str, template: str) -> str: class UStreamTVWsClient(WebsocketClient): - API_URL = "wss://r{0}-1-{1}-{2}-ws-{3}.ums.ustream.tv:1935/1/ustream" + API_URL = "wss://r{0}-1-{1}-{2}-ws-{3}.ums.services.video.ibm.com/1/ustream" APP_ID = 3 APP_VERSION = 2
plugins.ustreamtv: [plugin.api.websocket][error] EOF occurred in violation of protocol (_ssl.c:1129) ### Checklist - [X] This is a bug report and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed bug reports](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22bug%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description Installed the latest stable build on Windows 11 and the command fails every time with > **[plugin.api.websocket][error] [Errno 2] No such file or directory** this command line `streamlink --loglevel debug https://video.ibm.com/nasahdtv` See the debug info for the result. Note - although the debug log says I am running windows 10 - it is most definitely windows 11 (via winver). This is a new laptop (DELL) that came pre-installed with Windows 11 I didn't see any reference to this issue - but perhaps I have missed something. Thanks for any tips. ### Debug log ```text C:\Users\liamk>streamlink --loglevel debug https://video.ibm.com/nasahdtv [cli][debug] OS: Windows 10 [cli][debug] Python: 3.9.8 [cli][debug] Streamlink: 3.0.3 [cli][debug] Requests(2.26.0), Socks(1.7.1), Websocket(1.2.1) [cli][debug] Arguments: [cli][debug] url=https://video.ibm.com/nasahdtv [cli][debug] --loglevel=debug [cli][debug] --ffmpeg-ffmpeg=C:\Program Files (x86)\Streamlink\ffmpeg\ffmpeg.exe [cli][info] Found matching plugin ustreamtv for URL https://video.ibm.com/nasahdtv [plugins.ustreamtv][debug] Connecting to UStream API: media_id=6540154, application=channel, referrer=https://video.ibm.com/nasahdtv, cluster=live [plugin.api.websocket][debug] Connecting to: wss://r2935561-1-6540154-channel-ws-live.ums.ustream.tv:1935/1/ustream [plugins.ustreamtv][debug] Waiting for stream data (for at most 15 seconds)... [plugin.api.websocket][error] [Errno 2] No such file or directory [plugin.api.websocket][debug] Closed: wss://r2935561-1-6540154-channel-ws-live.ums.ustream.tv:1935/1/ustream [plugins.ustreamtv][error] Waiting for stream data timed out. error: No playable streams found on this URL: https://video.ibm.com/nasahdtv ```
``` $ streamlink -l debug 'https://video.ibm.com/nasahdtv' best [cli][debug] OS: Linux-5.15.5-2-git-x86_64-with-glibc2.33 [cli][debug] Python: 3.9.9 [cli][debug] Streamlink: 3.0.3+4.gdd63e63 [cli][debug] Requests(2.26.0), Socks(1.7.1), Websocket(1.2.1) [cli][debug] Arguments: [cli][debug] url=https://video.ibm.com/nasahdtv [cli][debug] stream=['best'] [cli][debug] --loglevel=debug [cli][debug] --player=mpv [cli][info] Found matching plugin ustreamtv for URL https://video.ibm.com/nasahdtv [plugins.ustreamtv][debug] Connecting to UStream API: media_id=6540154, application=channel, referrer=https://video.ibm.com/nasahdtv, cluster=live [plugin.api.websocket][debug] Connecting to: wss://r2727247-1-6540154-channel-ws-live.ums.ustream.tv:1935/1/ustream [plugins.ustreamtv][debug] Waiting for stream data (for at most 15 seconds)... [plugin.api.websocket][error] EOF occurred in violation of protocol (_ssl.c:1129) [plugin.api.websocket][debug] Closed: wss://r2727247-1-6540154-channel-ws-live.ums.ustream.tv:1935/1/ustream [plugins.ustreamtv][error] Waiting for stream data timed out. error: No playable streams found on this URL: https://video.ibm.com/nasahdtv ``` > [plugin.api.websocket][error] EOF occurred in violation of protocol (_ssl.c:1129) Not sure yet what this is about... By the way, I'm running streamlink version 2.2.0 on a windows 10 box - and that is working just fine. > [plugin.api.websocket][error] EOF occurred in violation of protocol (_ssl.c:1129) `API_URL = "wss://r{0}-1-{1}-{2}-ws-{3}.ums.services.video.ibm.com/1/ustream"` would fix this for Linux, can't test it on Win10 right now. > By the way, I'm running streamlink version 2.2.0 on a windows 10 box - and that is working just fine. because it is just `ws://`, it would also work on Linux `API_URL = "ws://r{0}-1-{1}-{2}-ws-{3}.ums.ustream.tv:1935/1/ustream"` https://github.com/streamlink/streamlink/blob/2.2.0/src/streamlink/plugins/ustreamtv.py#L39 > > [plugin.api.websocket][error] EOF occurred in violation of protocol (_ssl.c:1129) > > `API_URL = "wss://r{0}-1-{1}-{2}-ws-{3}.ums.services.video.ibm.com/1/ustream"` > > would fix this for Linux, can't test it on Win10 right now. > Thank you, it fixes the problem on macOS 11.6.1, Streamlink: 3.0.3.
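The fix discussed above is essentially a host swap: the old `*.ums.ustream.tv:1935` endpoint drops the TLS handshake (hence the `EOF occurred in violation of protocol` error), while the `*.ums.services.video.ibm.com` host completes it. A minimal connectivity check using the websocket-client package (already a Streamlink dependency) is sketched below; the `r...-1-...` host prefix is normally assigned by the API at runtime, so the URL shown is illustrative only.

```python
# Sketch: a one-off handshake check against the new IBM Video websocket host,
# using the websocket-client package. The host below is illustrative; the real
# "r...-1-<media_id>-channel-ws-<cluster>" prefix is assigned by the API at runtime.
from websocket import create_connection

url = "wss://r2727247-1-6540154-channel-ws-live.ums.services.video.ibm.com/1/ustream"
try:
    ws = create_connection(url, timeout=10)
    print("handshake ok")
    ws.close()
except Exception as exc:  # the old *.ums.ustream.tv:1935 host fails here with an SSL EOF error
    print(f"handshake failed: {exc}")
```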
2021-12-08T16:42:12
streamlink/streamlink
4,252
streamlink__streamlink-4252
[ "4248" ]
820d13c1b391ab78b25897f3408fe92af462550e
diff --git a/src/streamlink_cli/main.py b/src/streamlink_cli/main.py --- a/src/streamlink_cli/main.py +++ b/src/streamlink_cli/main.py @@ -305,6 +305,9 @@ def output_stream(stream, formatter: Formatter): """Open stream, create output and finally write the stream to output.""" global output + # create output before opening the stream, so file outputs can prompt on existing output + output = create_output(formatter) + success_open = False for i in range(args.retry_open): try: @@ -315,9 +318,7 @@ def output_stream(stream, formatter: Formatter): log.error(f"Try {i + 1}/{args.retry_open}: Could not open stream {stream} ({err})") if not success_open: - console.exit(f"Could not open stream {stream}, tried {args.retry_open} times, exiting") - - output = create_output(formatter) + return console.exit(f"Could not open stream {stream}, tried {args.retry_open} times, exiting") try: output.open()
diff --git a/tests/test_cli_main.py b/tests/test_cli_main.py --- a/tests/test_cli_main.py +++ b/tests/test_cli_main.py @@ -10,6 +10,7 @@ import streamlink_cli.main import tests.resources +from streamlink.exceptions import StreamError from streamlink.session import Streamlink from streamlink.stream.stream import Stream from streamlink_cli.compat import DeprecatedPath, is_win32, stdout @@ -21,6 +22,7 @@ format_valid_streams, handle_stream, handle_url, + output_stream, resolve_stream_name, setup_config_args ) @@ -412,6 +414,32 @@ def test_handle_stream_output_stream(self, args: Mock, mock_output_stream: Mock) ) +class TestCLIMainOutputStream(unittest.TestCase): + @patch("streamlink_cli.main.args", Mock(retry_open=2)) + @patch("streamlink_cli.main.log") + @patch("streamlink_cli.main.console") + def test_stream_failure_no_output_open(self, mock_console: Mock, mock_log: Mock): + output = Mock() + stream = Mock( + __str__=lambda _: "fake-stream", + open=Mock(side_effect=StreamError("failure")) + ) + formatter = Formatter({}) + + with patch("streamlink_cli.main.output", Mock()), \ + patch("streamlink_cli.main.create_output", return_value=output): + output_stream(stream, formatter) + + self.assertEqual(mock_log.error.call_args_list, [ + call("Try 1/2: Could not open stream fake-stream (Could not open stream: failure)"), + call("Try 2/2: Could not open stream fake-stream (Could not open stream: failure)"), + ]) + self.assertEqual(mock_console.exit.call_args_list, [ + call("Could not open stream fake-stream, tried 2 times, exiting") + ]) + self.assertFalse(output.open.called, "Does not open the output on stream error") + + @patch("streamlink_cli.main.log") class TestCLIMainSetupConfigArgs(unittest.TestCase): configdir = Path(tests.resources.__path__[0], "cli", "config")
Streamlink is starting download before user's response for file overwrite ### Checklist - [X] This is a bug report and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed bug reports](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22bug%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description When file output option `-o` is used to an existing file, Streamlink prints confirmation message for file overwrite and asks for user's response. However, before the response, Streamlink has already started downloading and continues. If the user responds with "n", this will result in a waste of bandwidth. It can also be confusing when used with `-l debug`, as the debug messages are printed as if confirmation is ignored. It would be better to check the output file before opening the stream. In the debug log below, I responded with "n" after `[stream.segmented][debug] Closing writer thread` message. ### Debug log ```text [cli][debug] OS: macOS 10.12.6 [cli][debug] Python: 3.9.9 [cli][debug] Streamlink: 3.0.3 [cli][debug] Requests(2.26.0), Socks(1.7.1), Websocket(1.2.3) [cli][debug] Arguments: [cli][debug] url=https://www.dailymotion.com/video/x853fvr [cli][debug] stream=['480p'] [cli][debug] --loglevel=debug [cli][debug] --output=video.ts [cli][debug] --hls-duration=20 [cli][info] Found matching plugin dailymotion for URL https://www.dailymotion.com/video/x853fvr [plugins.dailymotion][debug] Found media ID: x853fvr [utils.l10n][debug] Language code: en_US [cli][info] Available streams: 112p_alt (worst), 112p, 184p_alt, 184p, 288p_alt, 288p, 480p_alt, 480p, 720p_alt, 720p, 1080p_alt, 1080p (best) [cli][info] Opening stream: 480p (hls) [stream.hls][debug] Reloading playlist [cli][debug] Pre-buffering 8192 bytes [stream.hls][debug] First Sequence: 0; Last Sequence: 53 [stream.hls][debug] Start offset: 0; Duration: 20; Start Sequence: 0; End Sequence: 53 [stream.hls][debug] Adding segment 0 to queue [stream.hls][debug] Adding segment 1 to queue [stream.hls][debug] Adding segment 2 to queue [stream.hls][debug] Adding segment 3 to queue [stream.hls][debug] Adding segment 4 to queue [stream.hls][debug] Adding segment 5 to queue [stream.hls][debug] Adding segment 6 to queue [stream.hls][info] Stopping stream early after 20 [stream.segmented][debug] Closing worker thread [stream.hls][debug] Segment 0 complete [cli][debug] Checking file output File video.ts already exists! Overwrite it? [y/N] [stream.hls][debug] Segment 1 complete [stream.hls][debug] Segment 2 complete [stream.hls][debug] Segment 3 complete [stream.hls][debug] Segment 4 complete [stream.hls][debug] Segment 5 complete [stream.hls][debug] Segment 6 complete [stream.segmented][debug] Closing writer thread n [cli][info] Closing currently open stream... ```
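The patch above addresses this by creating the file output (which is what triggers the overwrite prompt) before the stream-open retry loop runs. A simplified sketch of that ordering is shown below; it is deliberately generic and does not mirror the real `streamlink_cli` internals.

```python
# Sketch of the ordering fixed by the patch: resolve the file output (and prompt)
# first, open the stream only afterwards. Deliberately generic, not streamlink_cli code.
from pathlib import Path

def confirm_overwrite(path: Path) -> bool:
    if not path.exists():
        return True
    answer = input(f"File {path} already exists! Overwrite it? [y/N] ")
    return answer.strip().lower() == "y"

def record(open_stream, path: Path) -> None:
    # 1) decide about the output before any stream data is requested
    if not confirm_overwrite(path):
        return
    # 2) only now open the stream and start writing
    fd = open_stream()
    try:
        with path.open("wb") as out:
            while True:
                chunk = fd.read(8192)
                if not chunk:
                    break
                out.write(chunk)
    finally:
        fd.close()
```

With that order, answering "n" at the prompt means no segments are ever downloaded.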
2021-12-14T14:31:01
streamlink/streamlink
4,254
streamlink__streamlink-4254
[ "4228" ]
59496acce6abcf31fdf531af8bd71a721f8d40bb
diff --git a/src/streamlink/plugins/albavision.py b/src/streamlink/plugins/albavision.py --- a/src/streamlink/plugins/albavision.py +++ b/src/streamlink/plugins/albavision.py @@ -1,59 +1,154 @@ -""" -Support for the live streams on Albavision sites - - http://www.tvc.com.ec/envivo - - http://www.rts.com.ec/envivo - - http://www.elnueve.com.ar/en-vivo - - http://www.atv.pe/envivo/ATV - - http://www.atv.pe/envivo/ATVMas -""" import logging import re import time -from urllib.parse import quote, urlencode, urlparse -from streamlink.plugin import Plugin, PluginError, pluginmatcher +from streamlink.plugin import Plugin, pluginmatcher +from streamlink.plugin.api import validate from streamlink.stream.hls import HLSStream -from streamlink.utils.url import update_scheme +from streamlink.utils.url import update_qsd log = logging.getLogger(__name__) -@pluginmatcher(re.compile( - r"https?://(?:www\.)?(tvc\.com\.ec|rts\.com\.ec|elnueve\.com\.ar|atv\.pe)/en-?vivo(?:/ATV(?:Mas)?)?" -)) +@pluginmatcher(re.compile(r""" + https?://(?:www\.)? + ( + antena7\.com\.do + | + atv\.pe + | + c9n\.com\.py + | + canal10\.com\.ni + | + canal12\.com\.sv + | + chapintv\.com + | + elnueve\.com\.ar + | + redbolivision\.tv\.bo + | + repretel\.com + | + rts\.com\.ec + | + snt\.com\.py + | + tvc\.com\.ec + | + vtv\.com\.hn + ) + / + (?: + (?: + en-?vivo(?:-atv(?:mas)?|-canal-?\d{1,2})? + ) + | + upptv + ) + (?:/|\#)?$ +""", re.VERBOSE)) class Albavision(Plugin): - _token_input_re = re.compile(r"Math.floor\(Date.now\(\) / 3600000\),'([a-f0-9OK]+)'") - _live_url_re = re.compile(r"LIVE_URL = '(.*?)';") - _playlist_re = re.compile(r"file:\s*'(http.*m3u8)'") - _token_url_re = re.compile(r"https://.*/token/.*?\?rsk=") - - _channel_urls = { - 'ATV': 'http://dgrzfw9otv9ra.cloudfront.net/player_atv.html?iut=', - 'ATVMas': 'http://dgrzfw9otv9ra.cloudfront.net/player_atv_mas.html?iut=', - 'Canal5': 'http://dxejh4fchgs18.cloudfront.net/player_televicentro.html?iut=', - 'Guayaquil': 'http://d2a6tcnofawcbm.cloudfront.net/player_rts.html?iut=', - 'Quito': 'http://d3aacg6baj4jn0.cloudfront.net/reproductor_rts_o_quito.html?iut=', - } - def __init__(self, url): super().__init__(url) self._page = None @property def page(self): - if not self._page: - self._page = self.session.http.get(self.url) + if self._page is None: + self._page = self.session.http.get(self.url, schema=validate.Schema( + validate.parse_html(), + )) return self._page - def _get_token_url(self, channelnumber): - token = self._get_live_url_token(channelnumber) - if token: - m = self._token_url_re.findall(self.page.text) - token_url = m and m[channelnumber] - if token_url: - return token_url + token - else: - log.error("Could not find site token") + def _is_token_based_site(self): + schema = validate.Schema( + validate.xml_xpath_string(".//script[contains(text(), 'jQuery.get')]/text()"), + ) + is_token_based_site = validate.validate(schema, self.page) is not None + log.debug(f"is_token_based_site={is_token_based_site}") + return is_token_based_site + + def _get_live_url(self): + live_url_re = re.compile(r"""LIVE_URL\s*=\s*['"]([^'"]+)['"]""") + schema = validate.Schema( + validate.xml_xpath_string(".//script[contains(text(), 'LIVE_URL')]/text()"), + validate.any(None, validate.all( + validate.transform(live_url_re.search), + validate.any(None, validate.all( + validate.get(1), + validate.url(), + )), + )), + ) + live_url = validate.validate(schema, self.page) + log.debug(f"live_url={live_url}") + return live_url + + def _get_token_req_url(self): + token_req_host_re = 
re.compile(r"""jQuery\.get\s*\(['"]([^'"]+)['"]""") + schema = validate.Schema( + validate.xml_xpath_string(".//script[contains(text(), 'LIVE_URL')]/text()"), + validate.any(None, validate.all( + validate.transform(token_req_host_re.search), + validate.any(None, validate.all( + validate.get(1), + validate.url(), + )), + )), + ) + token_req_host = validate.validate(schema, self.page) + log.debug(f"token_req_host={token_req_host}") + + token_req_str_re = re.compile(r"""Math\.floor\(Date\.now\(\)\s*/\s*3600000\),\s*['"]([^'"]+)['"]""") + schema = validate.Schema( + validate.xml_xpath_string(".//script[contains(text(), 'LIVE_URL')]/text()"), + validate.any(None, validate.all( + validate.transform(token_req_str_re.search), + validate.any(None, validate.all( + validate.get(1), + str, + )), + )), + ) + token_req_str = validate.validate(schema, self.page) + log.debug(f"token_req_str={token_req_str}") + if not token_req_str: + return + + date = int(time.time() // 3600) + token_req_token = self.transform_token(token_req_str, date) or self.transform_token(token_req_str, date - 1) + + if token_req_host and token_req_token: + return update_qsd(token_req_host, {"rsk": token_req_token}) + + def _get_token(self): + token_req_url = self._get_token_req_url() + if not token_req_url: + return + + res = self.session.http.get(token_req_url, schema=validate.Schema( + validate.parse_json(), { + "success": bool, + validate.optional("error"): int, + validate.optional("token"): str, + }, + )) + + if not res["success"]: + if res["error"]: + log.error(f"Token request failed with error: {res['error']}") + else: + log.error("Token request failed") + return + + if not res["token"]: + log.error("Token not found in response") + return + token = res["token"] + log.debug(f"token={token}") + return token @staticmethod def transform_token(token_in, date): @@ -67,59 +162,21 @@ def transform_token(token_in, date): if token_out.endswith("OK"): return token_out[:-2] else: - log.error("Invalid site token: {0} => {1}".format(token_in, token_out)) - - def _get_live_url_token(self, channelnumber): - m = self._token_input_re.findall(self.page.text) - log.debug("Token input: {0}".format(m[channelnumber])) - if m: - date = int(time.time() // 3600) - return self.transform_token(m[channelnumber], date) or self.transform_token(m[channelnumber], date - 1) - - def _get_token(self, channelnumber): - token_url = self._get_token_url(channelnumber) - if token_url: - res = self.session.http.get(token_url) - data = self.session.http.json(res) - if data['success']: - return data['token'] + log.error(f"Invalid site token: {token_in} => {token_out}") def _get_streams(self): - m = self._live_url_re.search(self.page.text) - playlist_url = m and update_scheme("https://", m.group(1), force=False) - player_url = self.url - live_channel = None - p = urlparse(player_url) - channelnumber = 0 - if p.netloc.endswith("tvc.com.ec"): - live_channel = "Canal5" - elif p.netloc.endswith("rts.com.ec"): - live_channel = "Guayaquil" - elif p.netloc.endswith("atv.pe"): - if p.path.endswith(("ATVMas", "ATVMas/")): - live_channel = "ATVMas" - channelnumber = 1 - else: - live_channel = "ATV" - token = self._get_token(channelnumber) - log.debug("token {0}".format(token)) - if playlist_url: - log.debug("Found playlist URL in the page") - else: - if live_channel: - log.debug("Live channel: {0}".format(live_channel)) - player_url = self._channel_urls[live_channel] + quote(token) - page = self.session.http.get(player_url, raise_for_status=False) - if "block access from your 
country." in page.text: - raise PluginError("Content is geo-locked") - m = self._playlist_re.search(page.text) - playlist_url = m and update_scheme("https://", m.group(1), force=False) - else: - log.error("Could not find the live channel") + live_url = self._get_live_url() + if not live_url: + log.info("This stream may be off-air or not available in your country") + return - if playlist_url: - stream_url = "{0}?{1}".format(playlist_url, urlencode({"iut": token})) - return HLSStream.parse_variant_playlist(self.session, stream_url, headers={"referer": player_url}) + if self._is_token_based_site(): + token = self._get_token() + if not token: + return + return HLSStream.parse_variant_playlist(self.session, update_qsd(live_url, {"iut": token})) + else: + return HLSStream.parse_variant_playlist(self.session, live_url) __plugin__ = Albavision
diff --git a/tests/plugins/test_albavision.py b/tests/plugins/test_albavision.py --- a/tests/plugins/test_albavision.py +++ b/tests/plugins/test_albavision.py @@ -8,15 +8,50 @@ class TestPluginCanHandleUrlAlbavision(PluginCanHandleUrl): __plugin__ = Albavision should_match = [ - "https://www.elnueve.com.ar/en-vivo", - "http://www.rts.com.ec/envivo", - "https://www.tvc.com.ec/envivo", - "http://www.atv.pe/envivo/ATV", - "http://www.atv.pe/envivo/ATVMas", + "http://antena7.com.do/envivo-canal-7/", + "http://www.antena7.com.do/envivo-canal-7/", + "https://antena7.com.do/envivo-canal-7/", + "https://www.antena7.com.do/envivo-canal7", + "https://www.antena7.com.do/envivo-canal7/", + "https://www.antena7.com.do/envivo-canal7#", + "https://www.antena7.com.do/envivo-canal-7#", + "https://www.antena7.com.do/en-vivo-canal-99/", + "https://www.antena7.com.do/en-vivo-canal-99#", + # All channel URLs from supported sites + "https://www.antena7.com.do/envivo-canal-7/", + "https://www.antena7.com.do/envivo-canal-21/", + "https://www.atv.pe/envivo-atv", + "https://www.atv.pe/envivo-atvmas", + "https://www.c9n.com.py/envivo/", + "https://www.canal10.com.ni/envivo/", + "https://www.canal12.com.sv/envivo/", + "https://www.chapintv.com/envivo-canal-3/", + "https://www.chapintv.com/envivo-canal-7/", + "https://www.chapintv.com/envivo-canal-23/", + "https://www.elnueve.com.ar/en-vivo/", + "https://www.redbolivision.tv.bo/envivo-canal-5/", + "https://www.redbolivision.tv.bo/upptv/", + "https://www.repretel.com/envivo-canal2/", + "https://www.repretel.com/envivo-canal4/", + "https://www.repretel.com/envivo-canal6/", + "https://www.repretel.com/en-vivo-canal-11/", + "https://www.rts.com.ec/envivo/", + "https://www.snt.com.py/envivo/", + "https://www.tvc.com.ec/envivo/", + "https://www.vtv.com.hn/envivo/", ] should_not_match = [ - "https://news.now.com/home/local", - "http://media.now.com.hk/", + "https://fake.antena7.com.do/envivo-canal-7/", + "https://www.antena7.com.do/envivo-canal123", + "https://www.antena7.com.do/envivo-canal123/", + "https://www.antena7.com.do/envivo-canal-123", + "https://www.antena7.com.do/envivo-canal-123/", + "https://www.antena7.com.do/envivo-canal-123#", + "https://www.antena7.com.do/envivo-canalabc", + "https://www.antena7.com.do/envivo-canal-abc", + "https://www.antena7.com.do/envivo-canal-7/extra", + "https://www.antena7.com.do/envivo-canal-7#extra", + "https://www.antena7.com.do/something", ]
Plugin Albavision: Peruvian channels ATV and ATVMas not working ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description Could someone please have a look at the plugin "albavision"? The channels ATV and ATVMás don't start. The URLs have slightly changed, but according to the error message in the log, there must be a bigger problem (fetching the token fails). ### Debug log ```text C:\Users\malvinas2>streamlink http://www.atv.pe/envivo/ATVMas [cli][debug] OS: Windows 10 [cli][debug] Python: 3.9.8 [cli][debug] Streamlink: 3.0.3 [cli][debug] Requests(2.26.0), Socks(1.7.1), Websocket(1.2.1) [cli][debug] Arguments: [cli][debug] url=http://www.atv.pe/envivo/ATVMas [cli][debug] --loglevel=debug [cli][debug] --player="D:\VideoLAN\VLC\vlc.exe" --file-caching=5000 [cli][debug] --ffmpeg-ffmpeg=C:\Program Files (x86)\Streamlink\ffmpeg\ffmpeg.exe [cli][info] Found matching plugin albavision for URL http://www.atv.pe/envivo/ATVMas Traceback (most recent call last): File "runpy.py", line 197, in _run_module_as_main File "runpy.py", line 87, in _run_code File "C:\Program Files (x86)\Streamlink\bin\streamlink.exe\__main__.py", line 18, in <module> File "C:\Program Files (x86)\Streamlink\pkgs\streamlink_cli\main.py", line 1056, in main handle_url() File "C:\Program Files (x86)\Streamlink\pkgs\streamlink_cli\main.py", line 565, in handle_url streams = fetch_streams(plugin) File "C:\Program Files (x86)\Streamlink\pkgs\streamlink_cli\main.py", line 459, in fetch_streams return plugin.streams(stream_types=args.stream_types, File "C:\Program Files (x86)\Streamlink\pkgs\streamlink\plugin\plugin.py", line 336, in streams ostreams = self._get_streams() File "C:\Program Files (x86)\Streamlink\pkgs\streamlink\plugins\albavision.py", line 104, in _get_streams token = self._get_token(channelnumber) File "C:\Program Files (x86)\Streamlink\pkgs\streamlink\plugins\albavision.py", line 80, in _get_token token_url = self._get_token_url(channelnumber) File "C:\Program Files (x86)\Streamlink\pkgs\streamlink\plugins\albavision.py", line 49, in _get_token_url token = self._get_live_url_token(channelnumber) File "C:\Program Files (x86)\Streamlink\pkgs\streamlink\plugins\albavision.py", line 74, in _get_live_url_token log.debug("Token input: {0}".format(m[channelnumber])) IndexError: list index out of range ```
direct urls can be used instead of the plugin https://d2tr4gdfol9ja.cloudfront.net/atv/smil:atv.smil/playlist.m3u8 https://d2tr4gdfol9ja.cloudfront.net/atv/smil:atv-mas.smil/playlist.m3u8 I was going to look at this at some point. Do you think you'll be removing this plugin? > Do you think you'll be removing this plugin? probably not, but it needs an update/cleanup. Only ATV can be used without a token, the other sites still use them like https://github.com/streamlink/streamlink/issues/4224 `https://do-antena7-antena7-live.ned.media/antena7/smil:antena7.smil/playlist.m3u8?iut=XXX` I don't know if this is just a bad habit of Latin American broadcasters or a general problem. In any case, they often change the type of broadcast: E.g. they start as a direct stream from their website, a short time later they switch to a CDN and another few months later they offer the stream on YouTube. And so on, and so on... Perhaps it is better to put plugins from such providers into a kind of quarantine instead of removing them completely? 'El Nueve', an argentinian TV-channel included in the Albavision-plugin, can also be used without a token: [https://d2r6nlw8e5ph9i.cloudfront.net/elnueve/smil:elnueve.smil/playlist.m3u8](https://d2r6nlw8e5ph9i.cloudfront.net/elnueve/smil:elnueve.smil/playlist.m3u8) Interestingly, two of the resolutions on "El Nueve" are broken: 243p and 360p. Neither of those seem to work in the web player either. It looks like an audio stream issue to me. In mpv you just get a static picture (presumably because it's waiting to sync the audio), vlc ploughs on and plays the video without any audio. Both 486p and 720p are working though.
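Two mechanics from the patch and comments above, shown as a short sketch: the hourly token bucket (try the current hour, fall back to the previous one) and the `iut` query parameter appended to the playlist URL. `decode_site_token` is only a stand-in for the plugin's `transform_token`, and the URL and token values are placeholders, not working streams.

```python
# Sketch: hour-bucket token fallback and the "iut" query parameter.
# decode_site_token stands in for Albavision.transform_token(); values are placeholders.
import time

from streamlink.utils.url import update_qsd

def decode_site_token(token_in: str, date: int) -> str:
    # Placeholder transformation; the real algorithm lives in the plugin.
    return f"{token_in}-{date}"

def token_for_now(token_in: str) -> str:
    date = int(time.time() // 3600)  # mirrors the site's Math.floor(Date.now() / 3600000)
    return decode_site_token(token_in, date) or decode_site_token(token_in, date - 1)

live_url = "https://example.cloudfront.net/atv/smil:atv.smil/playlist.m3u8"
print(update_qsd(live_url, {"iut": token_for_now("SOMETOKEN")}))
```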
2021-12-15T00:30:20
streamlink/streamlink
4,255
streamlink__streamlink-4255
[ "4249" ]
7f87f5bcdbfd371696c6f1056c22da102ff38bbe
diff --git a/src/streamlink/plugins/pluto.py b/src/streamlink/plugins/pluto.py --- a/src/streamlink/plugins/pluto.py +++ b/src/streamlink/plugins/pluto.py @@ -12,7 +12,7 @@ class PlutoHLSStreamWriter(HLSStreamWriter): - ad_re = re.compile(r"_ad/creative/|dai\.google\.com") + ad_re = re.compile(r"_ad/creative/|dai\.google\.com|Pluto_TV_OandO/.*Bumper") def should_filter_sequence(self, sequence): return self.ad_re.search(sequence.segment.uri) is not None or super().should_filter_sequence(sequence)
pluto.tv: audio and video async after ad-break ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest build from the master branch ### Description I've installed the latest master with pip on my ubuntu 21.04 desktop and tested the new pluto.tv plugin. Good work, ads no longer stop the playback from OnDemand videos, but there is still one small issue left, at least on ubuntu. If I playback an OnDemand video with `streamlink https://pluto.tv...` it finds the plugin and downloads the video. But when the ad starts (black window with colorful font "Werbung" after about 20mins) it shows a few frames of it from the beginning and a few frames from the end of the ad-break. This seems to be only video, because if I download the file with the -o option, it has no timestamps, because of the incomplete GOP in these few frames. If I remux the file with mkvtoolnix, it has timestamps, but then audio and video are out of sync after the ad-break (audio and video are sync before the break). I've checked a few movies, all show the same behaviour. If I pipe the stream to ffplay, bad players crash (propably because of the incomplete GOP) at the ad-break, good players show only pixelation for a sec or two. Is it possible to filter these few ad-frames ("Werbung"), too? ### Debug log ```text ./streamlink https://pluto.tv/de/on-demand/movies/meatballs-de-1978-1-1 best -o /home/user/Videos/Babyspeck_und_Fleischkloesschen.mp4 -l debug [cli][debug] OS: Linux-5.11.0-41-generic-x86_64-with-glibc2.33 [cli][debug] Python: 3.9.5 [cli][debug] Streamlink: 3.0.3+7.ge999ac5 [cli][debug] Requests(2.26.0), Socks(1.7.1), Websocket(1.2.3) [cli][debug] Arguments: [cli][debug] url=https://pluto.tv/de/on-demand/movies/meatballs-de-1978-1-1 [cli][debug] stream=['best'] [cli][debug] --loglevel=debug [cli][debug] --output=/home/user/Videos/Babyspeck_und_Fleischkloesschen.mp4 [cli][info] Found matching plugin pluto for URL https://pluto.tv/de/on-demand/ movies/meatballs-de-1978-1-1 [plugins.pluto][debug] slug=meatballs-de-1978-1-1 [plugins.pluto][debug] app_version=5.106.0- f3e2ac48d1dbe8189dc784777108b725b4be6be2 [plugins.pluto][debug] path=/stitch/hls/episode/6041ea2ae7c979001a897fe4/ master.m3u8 [utils.l10n][debug] Language code: de_DE [cli][info] Available streams: 570k (worst), 1000k, 1500k, 2100k, 3100k (best) [cli][info] Opening stream: 3100k (hls-pluto) [stream.hls][debug] Reloading playlist [cli][debug] Pre-buffering 8192 bytes [stream.hls][debug] Segments in this playlist are encrypted [stream.hls][debug] First Sequence: 0; Last Sequence: 1171 [stream.hls][debug] Start offset: 0; Duration: None; Start Sequence: 0; End Sequence: 1171 [stream.hls][debug] Adding segment 0 to queue [stream.hls][debug] Adding segment 1 to queue [stream.hls][debug] Adding segment 2 to queue [stream.hls][debug] Adding segment 3 to queue [stream.hls][debug] Adding segment 4 to queue [stream.hls][debug] Adding segment 5 to queue [stream.hls][debug] Adding segment 6 to queue [stream.hls][debug] Adding segment 7 to queue [stream.hls][debug] Adding segment 8 to queue [stream.hls][debug] 
Adding segment 9 to queue [stream.hls][debug] Adding segment 10 to queue [stream.hls][debug] Adding segment 11 to queue [stream.hls][debug] Adding segment 12 to queue [stream.hls][debug] Adding segment 13 to queue [stream.hls][debug] Adding segment 14 to queue [stream.hls][debug] Adding segment 15 to queue [stream.hls][debug] Adding segment 16 to queue [stream.hls][debug] Adding segment 17 to queue [stream.hls][debug] Adding segment 18 to queue [stream.hls][debug] Adding segment 19 to queue [stream.hls][debug] Adding segment 20 to queue [stream.hls][debug] Adding segment 21 to queue [stream.hls][debug] Segment 0 complete [cli][debug] Checking file output [stream.hls][debug] Adding segment 22 to queue [cli][debug] Writing stream to output [stream.hls][debug] Segment 1 complete [stream.hls][debug] Adding segment 23 to queue [stream.hls][debug] Segment 2 complete [stream.hls][debug] Adding segment 24 to queue [stream.hls][debug] Segment 3 complete [stream.hls][debug] Adding segment 25 to queue [download][.._Fleischkloesschen.mp4] Written 3.4 MB (0s @ 4.9 MB/s) [stream.hls][debug] Segment 4 complete [stream.hls][debug] Adding segment 26 to queue [stream.hls][debug] Segment 5 complete [stream.hls][debug] Adding segment 27 to queue [download][.._Fleischkloesschen.mp4] Written 6.1 MB (1s @ 4.8 MB/s) [stream.hls][debug] Segment 6 complete [stream.hls][debug] Adding segment 28 to queue [stream.hls][debug] Segment 7 complete [stream.hls][debug] Adding segment 29 to queue [download][.._Fleischkloesschen.mp4] Written 9.2 MB (1s @ 4.8 MB/s) [stream.hls][debug] Segment 8 complete [stream.hls][debug] Adding segment 30 to queue [download][.._Fleischkloesschen.mp4] Written 10.7 MB (2s @ 4.2 MB/s) [stream.hls][debug] Segment 9 complete [stream.hls][debug] Adding segment 31 to queue [stream.hls][debug] Segment 10 complete [download][.._Fleischkloesschen.mp4] Written 13.7 MB (3s @ 4.1 MB/s) [stream.hls][debug] Adding segment 32 to queue [stream.hls] [debug] Segment 11 complete [stream.hls][debug] Adding segment 33 to queue [stream.hls][debug] Segment 12 complete [download][.._Fleischkloesschen.mp4] Written 16.9 MB (4s @ 4.0 MB/s) [stream.hls][debug] Adding segment 34 to queue [stream.hls] [debug] Segment 13 complete [stream.hls][debug] Adding segment 35 to queue [stream.hls][debug] Segment 14 complete [stream.hls][debug] Adding segment 36 to queue [download][.._Fleischkloesschen.mp4] Written 20.0 MB (4s @ 3.8 MB/s) [stream.hls][debug] Segment 15 complete [stream.hls][debug] Adding segment 37 to queue [stream.hls][debug] Segment 16 complete [download][.._Fleischkloesschen.mp4] Written 23.2 MB (5s @ 3.7 MB/s) [stream.hls][debug] Adding segment 38 to queue [stream.hls] [debug] Segment 17 complete [stream.hls][debug] Adding segment 39 to queue [stream.hls][debug] Segment 18 complete [stream.hls][debug] Adding segment 40 to queue [download][.._Fleischkloesschen.mp4] Written 26.1 MB (6s @ 4.3 MB/s) ... ```
At the very end of [this comment](https://github.com/streamlink/streamlink/issues/3462#issuecomment-982884257) is: > I would like to keep the bumper segments ...which I guess is what you are talking about here. Regarding the timestamps on the downloaded file, have you tried running `ffmpeg -c copy` over the file in order to fix the timestamps before any further processing? It is claimed [here](https://github.com/streamlink/streamlink/issues/3462#issuecomment-985402948) that doing so can fix the timestamps. If I remux the downloaded file with `ffmpeg -c copy` then the timestamps are OK (as with mkvtoolnix alone) but after that I can convert the file to mkv without getting async. If the bumper segments are there by design, not by accident, then everything is OK. They are a bit annoying while watching, but if they are technical necessary, no problem. Sorry, I should have been a bit more clear (my bad) - the retaining of the bumpers was by request, not technical necessity. I don't know why @AdamNo wanted to keep them. Perhaps it's because it makes it obvious that the adverts are currently playing when you see the first one. Without them it would not be clear whether the video had stopped for the adverts or some other reason. I could look at removing them also. It depends what the overall consensus of the Pluto plugin users is. I have no view one way or the other. Recordings aren't the problem, and I recommend the remux to NextPVR users anyway. For live streaming It would depend if losing the bumper segments, causes long gaps in the stream causing the stream reader to exit, typically after the m3u8 "cache" is exhausted. > I could look at removing them also. It depends what the overall consensus of the Pluto plugin users is. I have no view one way or the other. Is there any way of passing a command line option to control this behaviour? I think that recordings probably don't want them but live streaming might? i vote for keeping the ad bumper. when i watch live tv via tvheadend & kodi, the picture freezes with the start ad bumper and resumes with end ad bumper. it's a clear indicator what's going on. tvh applies some tsfix and it plays just fine. ``` 2021-12-15 14:22:41.346 tsfix: transport stream AAC-LATM, DTS discontinuity. DTS = 4777922, last = 4779840 2021-12-15 14:22:41.845 tsfix: transport stream AAC-LATM, DTS discontinuity. DTS = 4791360, last = 4793282 ``` on the other hand and for the sake of having no timestamp issues, you can remove the ad bumpers. i'll keep for my streamlink installation in file streamlink/src/streamlink/plugins/pluto.py the current filter. ``` ad_re = re.compile(r"_ad/creative/|dai\.google\.com") ``` so, whatever you guys decide, it's fine with me. If it's only a change of the regex, does someone here know the regular expression to remove the bumpers? Than everyone can adjust the behavior and patch the file, no matter what gets the default. I would prefer "no bumpers", but I understand, that the bumpers are an advantage if you stream and don't download
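Since the thread leaves the bumper filtering as a matter of taste, users who want to experiment can check which URIs the updated `ad_re` from the patch above would drop. The segment URIs below are illustrative, not real playlist entries.

```python
# Sketch: which illustrative segment URIs the updated filter would drop.
import re

ad_re = re.compile(r"_ad/creative/|dai\.google\.com|Pluto_TV_OandO/.*Bumper")

samples = [
    "https://siloh.pluto.tv/xyz_ad/creative/abc_ad/720p/hls/hls_450-00001.ts",
    "https://dai.google.com/segments/redirect/seg-42.ts",
    "https://siloh.pluto.tv/Pluto_TV_OandO/clips/Werbung_Bumper/720p/seg-0.ts",
    "https://siloh.pluto.tv/movies/example-movie/1080p/seg-0123.ts",
]
for uri in samples:
    print("filtered" if ad_re.search(uri) else "kept    ", uri)
```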
2021-12-15T22:50:17
streamlink/streamlink
4,258
streamlink__streamlink-4258
[ "4235" ]
f8cd5f19fa448774d1e6d8184e52817cc7546158
diff --git a/src/streamlink/logger.py b/src/streamlink/logger.py --- a/src/streamlink/logger.py +++ b/src/streamlink/logger.py @@ -1,24 +1,41 @@ import logging +import sys from datetime import datetime -from logging import CRITICAL, DEBUG, ERROR, INFO, NOTSET, WARNING +from logging import CRITICAL, DEBUG, ERROR, INFO, WARNING +from pathlib import Path from threading import Lock +from typing import IO, List, Optional, Union + +FORMAT_STYLE = "{" +FORMAT_BASE = "[{name}][{levelname}] {message}" +FORMAT_DATE = "%H:%M:%S" +REMOVE_BASE = ["streamlink", "streamlink_cli"] + +# Make NONE ("none") the highest possible level that suppresses all log messages: +# `logging.NOTSET` (equal to 0) can't be used as the "none" level because of `logging.Logger.getEffectiveLevel()`, which +# loops through the logger instance's ancestor chain and checks whether the instance's level is NOTSET. If it is NOTSET, +# then it continues with the parent logger, which means that if the level of `streamlink.logger.root` was set to "none" and +# its value NOTSET, then it would continue with `logging.root` whose default level is `logging.WARNING` (equal to 30). +NONE = sys.maxsize +# Add "trace" to Streamlink's log levels TRACE = 5 -_levelToName = { + +# Define Streamlink's log levels (and register both lowercase and uppercase names) +_levelToNames = { + NONE: "none", CRITICAL: "critical", ERROR: "error", WARNING: "warning", INFO: "info", DEBUG: "debug", TRACE: "trace", - NOTSET: "none", } -_nameToLevel = {name: level for level, name in _levelToName.items()} -for level, name in _levelToName.items(): - logging.addLevelName(level, name) +for _level, _name in _levelToNames.items(): + logging.addLevelName(_level, _name.upper()) + logging.addLevelName(_level, _name) -levels = [name for _, name in _levelToName.items()] _config_lock = Lock() @@ -60,40 +77,50 @@ def format(self, record): return super().format(record) -def basicConfig(**kwargs) -> logging.StreamHandler: +# noinspection PyShadowingBuiltins,PyPep8Naming +def basicConfig( + filename: Optional[Union[str, Path]] = None, + filemode: str = "a", + stream: Optional[IO] = None, + level: Optional[str] = None, + format: str = FORMAT_BASE, + style: str = FORMAT_STYLE, + datefmt: str = FORMAT_DATE, + remove_base: Optional[List[str]] = None +) -> Union[logging.FileHandler, logging.StreamHandler]: with _config_lock: - filename = kwargs.get("filename") - if filename: - mode = kwargs.get("filemode", "a") - handler = logging.FileHandler(filename, mode) + if filename is not None: + handler = logging.FileHandler(filename, filemode) else: - stream = kwargs.get("stream") handler = logging.StreamHandler(stream) - fs = kwargs.get("format", BASIC_FORMAT) - style = kwargs.get("style", FORMAT_STYLE) - dfs = kwargs.get("datefmt", FORMAT_DATE) - remove_base = kwargs.get("remove_base", REMOVE_BASE) - formatter = StringFormatter(fs, dfs, style=style, remove_base=remove_base) + formatter = StringFormatter( + format, + datefmt, + style=style, + remove_base=remove_base or REMOVE_BASE + ) handler.setFormatter(formatter) root.addHandler(handler) - level = kwargs.get("level") if level is not None: root.setLevel(level) return handler -BASIC_FORMAT = "[{name}][{levelname}] {message}" -FORMAT_STYLE = "{" -FORMAT_DATE = "%H:%M:%S" -REMOVE_BASE = ["streamlink", "streamlink_cli"] - - logging.setLoggerClass(StreamlinkLogger) root = logging.getLogger("streamlink") root.setLevel(WARNING) +levels = list(_levelToNames.values()) + -__all__ = ["StreamlinkLogger", "TRACE", "basicConfig", "root", "levels"] +__all__ = 
[ + "NONE", + "TRACE", + "StreamlinkLogger", + "basicConfig", + "root", + "levels", +] diff --git a/src/streamlink_cli/console.py b/src/streamlink_cli/console.py --- a/src/streamlink_cli/console.py +++ b/src/streamlink_cli/console.py @@ -48,6 +48,8 @@ def askpass(self, prompt: str) -> Union[None, str]: return getpass(prompt, self.output) def msg(self, msg: str) -> None: + if self.json: + return self.output.write(f"{msg}\n") def msg_json(self, *objs: Any, **keywords: Any) -> None: @@ -77,7 +79,7 @@ def msg_json(self, *objs: Any, **keywords: Any) -> None: out.update(**keywords) msg = dumps(out, cls=JSONEncoder, indent=2) - self.msg(msg) + self.output.write(f"{msg}\n") if type(out) is dict and out.get("error"): sys.exit(1)
diff --git a/tests/test_cli_main.py b/tests/test_cli_main.py --- a/tests/test_cli_main.py +++ b/tests/test_cli_main.py @@ -1,4 +1,5 @@ import datetime +import logging import os import sys import unittest @@ -526,7 +527,7 @@ def test_custom_multiple(self, mock_log): class _TestCLIMainLogging(unittest.TestCase): @classmethod - def subject(cls, argv): + def subject(cls, argv, **kwargs): session = Streamlink() session.load_plugins(os.path.join(os.path.dirname(__file__), "plugin")) @@ -552,12 +553,16 @@ def tearDown(self): streamlink_cli.main.logger.root.handlers.clear() # python >=3.7.2: https://bugs.python.org/issue35046 - _write_calls = ( - ([call("[cli][info] foo\n")] - if sys.version_info >= (3, 7, 2) - else [call("[cli][info] foo"), call("\n")]) - + [call("bar\n")] + _write_call_log_cli_info = ( + [call("[cli][info] foo\n")] + if sys.version_info >= (3, 7, 2) else + [call("[cli][info] foo"), call("\n")] ) + _write_call_console_msg = [call("bar\n")] + _write_call_console_msg_error = [call("error: bar\n")] + _write_call_console_msg_json = [call("{\n \"error\": \"bar\"\n}\n")] + + _write_calls = _write_call_log_cli_info + _write_call_console_msg def write_file_and_assert(self, mock_mkdir: Mock, mock_write: Mock, mock_stdout: Mock): streamlink_cli.main.log.info("foo") @@ -567,7 +572,59 @@ def write_file_and_assert(self, mock_mkdir: Mock, mock_write: Mock, mock_stdout: self.assertFalse(mock_stdout.write.called) -class TestCLIMainLogging(_TestCLIMainLogging): +class TestCLIMainLoggingStreams(_TestCLIMainLogging): + # python >=3.7.2: https://bugs.python.org/issue35046 + _write_call_log_testcli_err = ( + [call("[test_cli_main][error] baz\n")] + if sys.version_info >= (3, 7, 2) else + [call("[test_cli_main][error] baz"), call("\n")] + ) + + def subject(self, argv, stream=None): + super().subject(argv) + childlogger = logging.getLogger("streamlink.test_cli_main") + + with self.assertRaises(SystemExit): + streamlink_cli.main.log.info("foo") + childlogger.error("baz") + streamlink_cli.main.console.exit("bar") + + self.assertIs(streamlink_cli.main.log.parent.handlers[0].stream, stream) + self.assertIs(childlogger.parent.handlers[0].stream, stream) + self.assertIs(streamlink_cli.main.console.output, stream) + + @patch("sys.stderr") + @patch("sys.stdout") + def test_no_pipe_no_json(self, mock_stdout: Mock, mock_stderr: Mock): + self.subject(["streamlink"], mock_stdout) + self.assertEqual(mock_stdout.write.mock_calls, + self._write_call_log_cli_info + self._write_call_log_testcli_err + self._write_call_console_msg_error) + self.assertEqual(mock_stderr.write.mock_calls, []) + + @patch("sys.stderr") + @patch("sys.stdout") + def test_no_pipe_json(self, mock_stdout: Mock, mock_stderr: Mock): + self.subject(["streamlink", "--json"], mock_stdout) + self.assertEqual(mock_stdout.write.mock_calls, self._write_call_console_msg_json) + self.assertEqual(mock_stderr.write.mock_calls, []) + + @patch("sys.stderr") + @patch("sys.stdout") + def test_pipe_no_json(self, mock_stdout: Mock, mock_stderr: Mock): + self.subject(["streamlink", "--stdout"], mock_stderr) + self.assertEqual(mock_stdout.write.mock_calls, []) + self.assertEqual(mock_stderr.write.mock_calls, + self._write_call_log_cli_info + self._write_call_log_testcli_err + self._write_call_console_msg_error) + + @patch("sys.stderr") + @patch("sys.stdout") + def test_pipe_json(self, mock_stdout: Mock, mock_stderr: Mock): + self.subject(["streamlink", "--stdout", "--json"], mock_stderr) + self.assertEqual(mock_stdout.write.mock_calls, []) + 
self.assertEqual(mock_stderr.write.mock_calls, self._write_call_console_msg_json) + + +class TestCLIMainLoggingInfos(_TestCLIMainLogging): @unittest.skipIf(is_win32, "test only applicable on a POSIX OS") @patch("streamlink_cli.main.log") @patch("streamlink_cli.main.os.geteuid", Mock(return_value=0)) diff --git a/tests/test_console.py b/tests/test_console.py --- a/tests/test_console.py +++ b/tests/test_console.py @@ -6,21 +6,17 @@ class TestConsoleOutput(unittest.TestCase): - def test_msg_format(self): + def test_msg(self): output = StringIO() console = ConsoleOutput(output) console.msg("foo") + console.msg_json({"test": 1}) self.assertEqual("foo\n", output.getvalue()) - def test_msg_json_not_set(self): - output = StringIO() - console = ConsoleOutput(output) - self.assertEqual(None, console.msg_json({"test": 1})) - self.assertEqual("", output.getvalue()) - def test_msg_json(self): output = StringIO() console = ConsoleOutput(output, json=True) + console.msg("foo") console.msg_json({"test": 1}) self.assertEqual('{\n "test": 1\n}\n', output.getvalue()) diff --git a/tests/test_log.py b/tests/test_log.py --- a/tests/test_log.py +++ b/tests/test_log.py @@ -15,6 +15,34 @@ def _new_logger(cls, format="[{name}][{levelname}] {message}", style="{", **para logger.basicConfig(stream=output, format=format, style=style, **params) return logging.getLogger("streamlink.test"), output + def test_level_names(self): + self.assertEqual(logger.levels, [ + "none", "critical", "error", "warning", "info", "debug", "trace" + ]) + self.assertEqual(logging.getLevelName(logger.NONE), "none") + self.assertEqual(logging.getLevelName(logger.CRITICAL), "critical") + self.assertEqual(logging.getLevelName(logger.ERROR), "error") + self.assertEqual(logging.getLevelName(logger.WARNING), "warning") + self.assertEqual(logging.getLevelName(logger.INFO), "info") + self.assertEqual(logging.getLevelName(logger.DEBUG), "debug") + self.assertEqual(logging.getLevelName(logger.TRACE), "trace") + + self.assertEqual(logging.getLevelName("none"), logger.NONE) + self.assertEqual(logging.getLevelName("critical"), logger.CRITICAL) + self.assertEqual(logging.getLevelName("error"), logger.ERROR) + self.assertEqual(logging.getLevelName("warning"), logger.WARNING) + self.assertEqual(logging.getLevelName("info"), logger.INFO) + self.assertEqual(logging.getLevelName("debug"), logger.DEBUG) + self.assertEqual(logging.getLevelName("trace"), logger.TRACE) + + self.assertEqual(logging.getLevelName("NONE"), logger.NONE) + self.assertEqual(logging.getLevelName("CRITICAL"), logger.CRITICAL) + self.assertEqual(logging.getLevelName("ERROR"), logger.ERROR) + self.assertEqual(logging.getLevelName("WARNING"), logger.WARNING) + self.assertEqual(logging.getLevelName("INFO"), logger.INFO) + self.assertEqual(logging.getLevelName("DEBUG"), logger.DEBUG) + self.assertEqual(logging.getLevelName("TRACE"), logger.TRACE) + def test_level(self): log, output = self._new_logger() logger.root.setLevel("info") @@ -25,6 +53,17 @@ def test_level(self): log.debug("test") self.assertNotEqual(output.tell(), 0) + def test_level_none(self): + log, output = self._new_logger() + logger.root.setLevel("none") + log.critical("test") + log.error("test") + log.warning("test") + log.info("test") + log.debug("test") + log.trace("test") + self.assertEqual(output.tell(), 0) + def test_output(self): log, output = self._new_logger() logger.root.setLevel("debug")
output with --json flag is not parseable if handled error occurs ### Checklist - [X] This is a bug report and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed bug reports](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22bug%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description ### Steps to reproduce 1. Run streamlink against a YouTube channel that is not live, e.g. `streamlink youtube.com/c/LudwigAhgren -j` 2. Output is unparseable as JSON: ### Expected Behavior All output on stdout should be parseable when the `--json` is enabled (barring an unrecoverable error) ```json { "error": "No playable streams found on this URL: youtube.com/c/LudwigAhgren" } ``` ### Actual Behavior Plugin errors are printed to stdout along with the parseable JSON output ```json [plugins.youtube][error] Could not find videoId on channel page { "error": "No playable streams found on this URL: youtube.com/c/LudwigAhgren" } ``` ### Possible Solutions * Write error messages to stderr instead of stdout when `--json` flag is enabled (or always) * Suppress error messages when `--json` flag is enabled, since the content is duplicated in the JSON output ### Debug log ```text [plugins.youtube][error] Could not find videoId on channel page { "error": "No playable streams found on this URL: youtube.com/c/LudwigAhgren" } ``` *Note*: the debug log is with the proper `--loglevel debug` param, it seems `--json` suppresses debug messages already. System info is as follows: ``` [cli][debug] OS: Windows 10 [cli][debug] Python: 3.9.8 [cli][debug] Streamlink: 3.0.3 [cli][debug] Requests(2.26.0), Socks(1.7.1), Websocket(1.2.1) ```
The problem here is how the logger and console output stream is set up when `--json` is set. It uses the same stream output, which is usually fine, but `console.exit` has special logic when `--json` is set, so any regular console/log message prior to the `console.exit` call gets printed to the same output stream. The logger and console output should not share the same stream output when `--json` is set, and the logger should print to `stderr` while `console` should print to `stdout`. - https://github.com/streamlink/streamlink/blob/3.0.3/src/streamlink_cli/main.py#L1009 - https://github.com/streamlink/streamlink/blob/3.0.3/src/streamlink_cli/main.py#L967-L987 - https://github.com/streamlink/streamlink/blob/3.0.3/src/streamlink_cli/console.py#L85-L87 - https://github.com/streamlink/streamlink/blob/3.0.3/src/streamlink_cli/main.py#L571-L572 - https://github.com/streamlink/streamlink/blob/3.0.3/src/streamlink/plugins/youtube.py#L321 Had a quick look at this again. The way `--json` is implemented is not particularly great. And neither is the entire streamlink_cli, but that's not the point of this thread. The JSON output is either done via `console.msg_json(data)` or `console.exit(data)` (if `ConsoleOutput` was initialized with `json=True`). As I said earlier, both the logger and the console instances share the same output stream. The output stream however varies depending on whether streamlink_cli outputs stream data to `stdout` when `--stdout` is set (or related arguments like `--output=-`, etc). This means that outputting the console's JSON data to `stdout` and the logger's log messages to `stderr` for example when `--json` is set is not always possible, so fixing this issue is not that simple. `console.msg_json(data)` and `console.exit()` are meant as the last output method call for printing messages, because there can only be one JSON output. In order to respect the `--json` argument, all logger calls would either need to be suppressed when it is set (which is bad), or they need to be buffered and then included in the final JSON output, so that it can be parsed. That however does not guarantee that there will be a `console.msg_json(data)` or `console.exit()` call at the end, so the `ConsoleOutput` would need to be made aware of a `SystemExit`.
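For illustration, a minimal sketch of that separation, using the `stream` parameter of `streamlink.logger.basicConfig()` from the patch above and the `ConsoleOutput` JSON mode shown in the tests; how streamlink_cli actually has to wire this up depends on `--stdout`/`--output`, as explained above:

```python
import sys

from streamlink import logger
from streamlink_cli.console import ConsoleOutput

# Log records go to stderr, so they can never end up inside the JSON document...
logger.basicConfig(stream=sys.stderr, level="info")

# ...while the single JSON payload is written to stdout.
console = ConsoleOutput(sys.stdout, json=True)
console.msg_json({"plugin": "youtube", "streams": {}})
# Note: msg_json() calls sys.exit(1) if the payload contains an "error" key.
```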
2021-12-19T09:53:06
streamlink/streamlink
4,286
streamlink__streamlink-4286
[ "4283" ]
8eb0c74aba45f09c6a8da4bf395e4ac50caebd57
diff --git a/src/streamlink/plugins/foxtr.py b/src/streamlink/plugins/foxtr.py --- a/src/streamlink/plugins/foxtr.py +++ b/src/streamlink/plugins/foxtr.py @@ -1,6 +1,7 @@ import re from streamlink.plugin import Plugin, pluginmatcher +from streamlink.plugin.api import validate from streamlink.stream.hls import HLSStream @@ -8,13 +9,12 @@ r"https?://(?:www\.)?fox(?:play)?\.com\.tr/" )) class FoxTR(Plugin): - playervars_re = re.compile(r"source\s*:\s*\[\s*\{\s*videoSrc\s*:\s*(?:mobilecheck\(\)\s*\?\s*)?'([^']+)'") - def _get_streams(self): - res = self.session.http.get(self.url) - match = self.playervars_re.search(res.text) - if match: - stream_url = match.group(1) + re_streams = re.compile(r"""(['"])(?P<url>https://\S+/foxtv\.m3u8\S+)\1""") + res = self.session.http.get(self.url, schema=validate.Schema( + validate.transform(re_streams.findall) + )) + for _, stream_url in res: return HLSStream.parse_variant_playlist(self.session, stream_url)
plugins.foxtr: No playable streams found ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest build from the master branch ### Description Plugin does not work ### Debug log ```text [cli][debug] OS: Windows 10 [cli][debug] Python: 3.8.10 [cli][debug] Streamlink: 3.0.1+61.gae63436 [cli][debug] Requests(2.26.0), Socks(1.7.1), Websocket(1.2.1) [cli][debug] Arguments: [cli][debug] url=https://www.fox.com.tr/canli-yayin [cli][debug] --loglevel=debug [cli][info] Found matching plugin foxtr for URL https://www.fox.com.tr/canli-yayin error: No playable streams found on this URL: https://www.fox.com.tr/canli-yayin ```
> Latest build from the master branch

ae63436 is not part of streamlink/streamlink@master. This is from your own fork, which is not supported here...

> error: No playable streams found on this URL: https://www.fox.com.tr/canli-yayin

The site has made some changes. Extracting the HLS URLs should still be easy and is just a simple regex change in the plugin: https://github.com/streamlink/streamlink/blob/8eb0c74aba45f09c6a8da4bf395e4ac50caebd57/src/streamlink/plugins/foxtr.py#L11
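As a rough illustration, the updated pattern from the patch above can be applied to the page body like this (the HTML excerpt is made up for the example):

```python
import re

# Pattern from the updated plugin: any quoted URL pointing at foxtv.m3u8
re_streams = re.compile(r"""(['"])(?P<url>https://\S+/foxtv\.m3u8\S+)\1""")

# Hypothetical excerpt of the channel page, for illustration only
page = "<script>videoSrc: 'https://example.invalid/foxtv.m3u8?token=abc'</script>"

# findall() yields (quote character, URL) tuples, matching how the plugin iterates
for _, stream_url in re_streams.findall(page):
    print(stream_url)  # https://example.invalid/foxtv.m3u8?token=abc
```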
2022-01-08T17:41:02
streamlink/streamlink
4,292
streamlink__streamlink-4292
[ "4291" ]
f9e7239f93b9865c60a0c32c69cd41b16837318e
diff --git a/src/streamlink/plugins/twitch.py b/src/streamlink/plugins/twitch.py --- a/src/streamlink/plugins/twitch.py +++ b/src/streamlink/plugins/twitch.py @@ -462,11 +462,11 @@ def hosted_channel(self, channel): (?: videos/(?P<videos_id>\d+) | - (?P<channel>[^/]+) + (?P<channel>[^/?]+) (?: /video/(?P<video_id>\d+) | - /clip/(?P<clip_name>[\w-]+) + /clip/(?P<clip_name>[^/?]+) )? ) """, re.VERBOSE))
plugins.twitch: does not work if the link has certain query parameters ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest master https://github.com/streamlink/streamlink/commit/f9e7239f93b9865c60a0c32c69cd41b16837318e ### Description Twitch links with query parameters like "?tt_content=channel_name&tt_medium=embed" etc. do not work when passing to CLI. ### Debug log ```text D:\>streamlink "https://www.twitch.tv/averagejonas?tt_content=channel_name&tt_medium=embed" --loglevel debug [cli][debug] OS: Windows 10 [cli][debug] Python: 3.9.1 [cli][debug] Streamlink: 3.0.3+31.gf9e7239 [cli][debug] Requests(2.27.1), Socks(1.7.1), Websocket(1.2.3) [cli][debug] Arguments: [cli][debug] url=https://www.twitch.tv/averagejonas?tt_content=channel_name&tt_medium=embed [cli][debug] --loglevel=debug [cli][info] Found matching plugin twitch for URL https://www.twitch.tv/averagejonas?tt_content=channel_name&tt_medium=embed [plugins.twitch][debug] Getting live HLS streams for averagejonas?tt_content=channel_name&tt_medium=embed error: No playable streams found on this URL: https://www.twitch.tv/averagejonas?tt_content=channel_name&tt_medium=embed ```
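The crux of the fix in the patch above is the character class used for the channel name: `[^/]+` also swallows the query string, while `[^/?]+` stops at the `?`. A small demonstration with the path from the report:

```python
import re

path = "averagejonas?tt_content=channel_name&tt_medium=embed"

old = re.match(r"(?P<channel>[^/]+)", path)
new = re.match(r"(?P<channel>[^/?]+)", path)

print(old.group("channel"))  # averagejonas?tt_content=channel_name&tt_medium=embed
print(new.group("channel"))  # averagejonas
```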
2022-01-11T23:11:55
streamlink/streamlink
4,301
streamlink__streamlink-4301
[ "4300" ]
43976df55cb20ab45b5822bf9c758d4ef5fc55e8
diff --git a/src/streamlink/stream/hls_playlist.py b/src/streamlink/stream/hls_playlist.py --- a/src/streamlink/stream/hls_playlist.py +++ b/src/streamlink/stream/hls_playlist.py @@ -27,7 +27,7 @@ class ExtInf(NamedTuple): # EXT-X-BYTERANGE class ByteRange(NamedTuple): # version >= 4 range: int - offset: int + offset: Optional[int] # EXT-X-DATERANGE @@ -152,7 +152,7 @@ def is_date_in_daterange(cls, date: Segment.date, daterange: DateRange): class M3U8Parser: _extinf_re = re.compile(r"(?P<duration>\d+(\.\d+)?)(,(?P<title>.+))?") _attr_re = re.compile(r"([A-Z\-]+)=(\d+\.\d+|0x[0-9A-z]+|\d+x\d+|\d+|\"(.+?)\"|[0-9A-z\-]+)") - _range_re = re.compile(r"(?P<range>\d+)(@(?P<offset>.+))?") + _range_re = re.compile(r"(?P<range>\d+)(?:@(?P<offset>\d+))?") _tag_re = re.compile(r"#(?P<tag>[\w-]+)(:(?P<value>.+))?") _res_re = re.compile(r"(\d+)x(\d+)") @@ -214,7 +214,10 @@ def parse_bool(value: str) -> bool: def parse_byterange(self, value: str) -> Optional[ByteRange]: match = self._range_re.match(value) - return None if match is None else ByteRange(int(match.group("range")), int(match.group("offset") or 0)) + if match is None: + return None + _range, offset = match.groups() + return ByteRange(int(_range), int(offset) if offset is not None else None) def parse_extinf(self, value: str) -> Tuple[float, Optional[str]]: match = self._extinf_re.match(value)
ValueError: Padding is incorrect. ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description `streamlink --http-header "User-Agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36" --http-header "Referer=https://www.log2base2.com/" "https://d2bzjah96pmxyc.cloudfront.net/C/Functions/Recursion/index.m3u8" best -l debug -o out1.ts` `hls_audio_160k_v4.m3u8` has these `#EXT-X-BYTERANGE` lines. #EXT-X-BYTERANGE:228432@0 #EXT-X-BYTERANGE:224288 #EXT-X-BYTERANGE:224864 ... And streamlink is setting these Range headers for HTTP GET requests. Range: bytes=0-228431 Range: bytes=0-224287 Range: bytes=0-224863 ... Comment by: @kikuyan ### Debug log ```text streamlink --http-header "User-Agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36" --http-header "Referer=https://www.log2base2.com/" "https://d2bzjah96pmxyc.cloudfront.net/C/Functions/Recursion/index.m3u8" best -l debug -o out1.ts [cli][debug] OS: Windows 10 [cli][debug] Python: 3.9.8 [cli][debug] Streamlink: 3.0.3 [cli][debug] Requests(2.26.0), Socks(1.7.1), Websocket(1.2.1) [cli][debug] Arguments: [cli][debug] url=https://d2bzjah96pmxyc.cloudfront.net/C/Functions/Recursion/index.m3u8 [cli][debug] stream=['best'] [cli][debug] --loglevel=debug [cli][debug] --output=out1.ts [cli][debug] --ffmpeg-ffmpeg=C:\Program Files (x86)\Streamlink\ffmpeg\ffmpeg.exe [cli][debug] --http-header=[('User-Agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36'), ('Referer', 'https://www.log2base2.com/')] [cli][info] Found matching plugin hls for URL https://d2bzjah96pmxyc.cloudfront.net/C/Functions/Recursion/index.m3u8 [plugins.hls][debug] URL=https://d2bzjah96pmxyc.cloudfront.net/C/Functions/Recursion/index.m3u8; params={} [utils.l10n][debug] Language code: en_GB [stream.hls][debug] Using external audio tracks for stream 1080p (language=None, name=Default) [stream.hls][debug] Using external audio tracks for stream 1080p_alt (language=None, name=Default) [stream.hls][debug] Using external audio tracks for stream 576p (language=None, name=Default) [stream.hls][debug] Using external audio tracks for stream 360p (language=None, name=Default) [stream.hls][debug] Using external audio tracks for stream 224p (language=None, name=Default) [stream.hls][debug] Using external audio tracks for stream 73k (language=None, name=Default) [cli][info] Available streams: 73k (worst), 224p, 360p, 576p, 1080p_alt, 1080p (best) [cli][info] Opening stream: 1080p (hls-multi) [stream.ffmpegmux][debug] Opening hls substream [stream.hls][debug] Reloading playlist [stream.ffmpegmux][debug] Opening hls substream [stream.hls][debug] Reloading playlist [utils.named_pipe][info] Creating pipe streamlinkpipe-6436-1-825 [utils.named_pipe][info] Creating pipe streamlinkpipe-6436-2-8256 [stream.ffmpegmux][debug] ffmpeg command: C:\Program Files (x86)\Streamlink\ffmpeg\ffmpeg.exe -nostats -y -i 
\.\pipe\streamlinkpipe-6436-1-825 -i \.\pipe\streamlinkpipe-6436-2-8256 -c:v copy -c:a copy -map 0:v? -map 0:a? -map 1:a -f mpegts pipe:1 [stream.ffmpegmux][debug] Starting copy to pipe: \.\pipe\streamlinkpipe-6436-1-825 [stream.ffmpegmux][debug] Starting copy to pipe: \.\pipe\streamlinkpipe-6436-2-8256 [cli][debug] Pre-buffering 8192 bytes [stream.hls][debug] Segments in this playlist are encrypted [stream.hls][debug] First Sequence: 0; Last Sequence: 24 [stream.hls][debug] Start offset: 0; Duration: None; Start Sequence: 0; End Sequence: 24 [stream.hls][debug] Adding segment 0 to queue [stream.hls][debug] Adding segment 1 to queue [stream.hls][debug] Adding segment 2 to queue [stream.hls][debug] Adding segment 3 to queue [stream.hls][debug] Adding segment 4 to queue [stream.hls][debug] Adding segment 5 to queue [stream.hls][debug] Adding segment 6 to queue [stream.hls][debug] Adding segment 7 to queue [stream.hls][debug] Adding segment 8 to queue [stream.hls][debug] Adding segment 9 to queue [stream.hls][debug] Adding segment 10 to queue [stream.hls][debug] Adding segment 11 to queue [stream.hls][debug] Adding segment 12 to queue [stream.hls][debug] Adding segment 13 to queue [stream.hls][debug] Adding segment 14 to queue [stream.hls][debug] Adding segment 15 to queue [stream.hls][debug] Adding segment 16 to queue [stream.hls][debug] Adding segment 17 to queue [stream.hls][debug] Adding segment 18 to queue [stream.hls][debug] Adding segment 19 to queue [stream.hls][debug] Adding segment 20 to queue [stream.hls][debug] Adding segment 21 to queue [stream.hls][debug] Segments in this playlist are encrypted [stream.hls][debug] First Sequence: 0; Last Sequence: 25 [stream.hls][debug] Start offset: 0; Duration: None; Start Sequence: 0; End Sequence: 25 [stream.hls][debug] Adding segment 0 to queue [stream.hls][debug] Adding segment 1 to queue [stream.hls][debug] Adding segment 2 to queue [stream.hls][debug] Adding segment 3 to queue [stream.hls][debug] Adding segment 4 to queue [stream.hls][debug] Adding segment 5 to queue [stream.hls][debug] Adding segment 6 to queue [stream.hls][debug] Adding segment 7 to queue [stream.hls][debug] Adding segment 8 to queue [stream.hls][debug] Adding segment 9 to queue [stream.hls][debug] Adding segment 10 to queue [stream.hls][debug] Adding segment 11 to queue [stream.hls][debug] Adding segment 12 to queue [stream.hls][debug] Adding segment 13 to queue [stream.hls][debug] Adding segment 14 to queue [stream.hls][debug] Adding segment 15 to queue [stream.hls][debug] Adding segment 16 to queue [stream.hls][debug] Adding segment 17 to queue [stream.hls][debug] Adding segment 18 to queue [stream.hls][debug] Adding segment 19 to queue [stream.hls][debug] Adding segment 20 to queue [stream.hls][debug] Adding segment 21 to queue [stream.hls][debug] Segment 0 complete [stream.hls][debug] Adding segment 22 to queue [stream.hls][debug] Segment 0 complete [stream.hls][debug] Adding segment 22 to queue [cli][debug] Checking file output [cli][debug] Writing stream to output Exception in thread Thread-HLSStreamWriter: Traceback (most recent call last): File "threading.py", line 973, in _bootstrap_inner File "C:\Program Files (x86)\Streamlink\pkgs\streamlink\stream\segmented.py", line 200, in run self.write(segment, result, *data) File "C:\Program Files (x86)\Streamlink\pkgs\streamlink\stream\hls.py", line 162, in write return self._write(sequence, *args, **kwargs) File "C:\Program Files (x86)\Streamlink\pkgs\streamlink\stream\hls.py", line 192, in _write chunk = 
unpad(decrypted_chunk, AES.block_size, style="pkcs7") File "C:\Program Files (x86)\Streamlink\pkgs\Crypto\Util\Padding.py", line 92, in unpad raise ValueError("Padding is incorrect.") ValueError: Padding is incorrect. Exception in thread Thread-HLSStreamWriter: Traceback (most recent call last): File "threading.py", line 973, in _bootstrap_inner File "C:\Program Files (x86)\Streamlink\pkgs\streamlink\stream\segmented.py", line 200, in run self.write(segment, result, *data) File "C:\Program Files (x86)\Streamlink\pkgs\streamlink\stream\hls.py", line 162, in write return self._write(sequence, *args, **kwargs) File "C:\Program Files (x86)\Streamlink\pkgs\streamlink\stream\hls.py", line 192, in _write chunk = unpad(decrypted_chunk, AES.block_size, style="pkcs7") File "C:\Program Files (x86)\Streamlink\pkgs\Crypto\Util\Padding.py", line 92, in unpad raise ValueError("Padding is incorrect.") ValueError: Padding is incorrect. ```
There's a bug in the HLS playlist parser: https://github.com/streamlink/streamlink/blob/3.0.3/src/streamlink/stream/hls_playlist.py#L217 The [BYTERANGE tag's offset value is optional](https://datatracker.ietf.org/doc/html/rfc8216#section-4.3.2.2) but the parser always sets the value to 0 if it doesn't exist, which is obviously wrong. [The `create_request_params` code where the `Range` header gets set](https://github.com/streamlink/streamlink/blob/3.0.3/src/streamlink/stream/hls.py#L90-L106) stems from [the Livestreamer era in 2013](39008e9df12723e59b77c2da428a062ea7e1d857) and hasn't been changed since then apart from some minor stuff. Unfortunately nobody has written tests for HLS byteranges, so a side effect in the HLS playlist parser has broken the byterange requests. However, there's another issue with how the `Range` header gets set. The `HLSStreamWriter` has an infinitely growing cache for offsets of previous segments (`byterange_offsets`). This cache depends on the segment's URL, which is incorrect. The HLS spec says that only the previous segment should be used for missing offset values, and this previous segment can have a different URL. Another problem is that the `HLSStreamWriter.fetch` method (which calls `create_request_params`) is run by other threads from the writer's thread-pool, so a sequential order of requests is not guaranteed. The individual `Segment` objects don't have a reference to their previous segment, and the `Sequence` object which holds the `Segment` object and its sequence number currently doesn't get passed to the `create_request_params` method, so a single "previous segment byterange offset cache" value can't be used without changing that. Let me quickly fix the HLS playlist parser first though before I take a look at the other issue.
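As a worked example of the byterange semantics described above (per RFC 8216, a missing `@offset` means the sub-range starts where the previous segment's sub-range ended), using the values from the playlist excerpt in the report — a minimal sketch, not the plugin's actual implementation:

```python
# (length, offset) pairs from the EXT-X-BYTERANGE tags in the report;
# None means the offset was omitted in the playlist
byteranges = [(228432, 0), (224288, None), (224864, None)]

next_offset = 0
for length, offset in byteranges:
    start = offset if offset is not None else next_offset
    end = start + length - 1
    next_offset = end + 1
    print(f"Range: bytes={start}-{end}")

# Expected request headers:
#   Range: bytes=0-228431
#   Range: bytes=228432-452719
#   Range: bytes=452720-677583
```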
2022-01-19T08:30:56
streamlink/streamlink
4,326
streamlink__streamlink-4326
[ "4325" ]
711418045debdc37fb9ea6f42fc737872b8eb342
diff --git a/src/streamlink/plugins/pluzz.py b/src/streamlink/plugins/pluzz.py --- a/src/streamlink/plugins/pluzz.py +++ b/src/streamlink/plugins/pluzz.py @@ -98,7 +98,7 @@ def _get_streams(self): validate.parse_json(), { "video": { - "workflow": "token-akamai", + "workflow": validate.any("token-akamai", "dai"), "format": validate.any("dash", "hls"), "token": validate.url(), "url": validate.url()
plugins.pluzz : Allow Beijing 2022 streams ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest build from the master branch ### Description Plugin is unable to retrieve streams from webpage steps : - get the live link : https://www.france.tv/beijing-h24/direct.html - launch `streamlink "https://www.france.tv/beijing-h24/direct.html"` ### Debug log ```text [cli][debug] OS: Linux-5.17.0-051700rc2-lowlatency-x86_64-with-glibc2.34 [cli][debug] Python: 3.10.2 [cli][debug] Streamlink: 3.1.1 [cli][debug] Requests(2.27.1), Socks(1.7.1), Websocket(1.2.1) [cli][debug] Arguments: [cli][debug] url=https://www.france.tv/beijing-h24/direct.html [cli][debug] --loglevel=debug [cli][info] Found matching plugin pluzz for URL https://www.france.tv/beijing-h24/direct.html [plugins.pluzz][debug] Country: FR [plugins.pluzz][debug] Video ID: 35cdf7ed-f565-40b6-b2a7-256e1ad17965 error: Unable to validate response text: Unable to validate key 'video': Unable to validate key 'workflow': 'dai' does not equal 'token-akamai' ```
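For reference, the gist of the schema change in the patch above is that `validate.any()` accepts either workflow value — a minimal sketch:

```python
from streamlink.plugin.api import validate

schema = validate.Schema({"workflow": validate.any("token-akamai", "dai")})

print(validate.validate(schema, {"workflow": "dai"}))           # {'workflow': 'dai'}
print(validate.validate(schema, {"workflow": "token-akamai"}))  # {'workflow': 'token-akamai'}
```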
2022-02-02T13:30:59
streamlink/streamlink
4,335
streamlink__streamlink-4335
[ "4332" ]
eecb0e340dfa1b153bd1f354453ba6433d50e7de
diff --git a/src/streamlink/plugins/filmon.py b/src/streamlink/plugins/filmon.py --- a/src/streamlink/plugins/filmon.py +++ b/src/streamlink/plugins/filmon.py @@ -6,8 +6,10 @@ from streamlink.exceptions import PluginError, StreamError from streamlink.plugin import Plugin, pluginmatcher from streamlink.plugin.api import validate +from streamlink.plugin.api.http_session import TLSSecLevel1Adapter from streamlink.stream.hls import HLSStream, HLSStreamReader, HLSStreamWorker, Sequence from streamlink.stream.hls_playlist import load as load_hls_playlist +from streamlink.stream.http import HTTPStream log = logging.getLogger(__name__) @@ -122,24 +124,32 @@ class FilmOnAPI: def __init__(self, session): self.session = session - channel_url = "http://www.filmon.com/api-v2/channel/{0}?protocol=hls" - vod_url = "http://www.filmon.com/vod/info/{0}" + channel_url = "https://www.filmon.com/ajax/getChannelInfo" + vod_url = "https://vms-admin.filmon.com/api/video/movie?id={0}" stream_schema = { - "quality": validate.text, + "quality": str, "url": validate.url(), "watch-timeout": int } - api_schema = validate.Schema( + channel_schema = validate.Schema( { - "data": { + "streams": validate.any( + {str: stream_schema}, + [stream_schema] + ) + } + ) + vod_schema = validate.Schema( + { + "response": { "streams": validate.any( - {validate.text: stream_schema}, + {str: stream_schema}, [stream_schema] ) } }, - validate.get("data") + validate.get("response") ) def channel(self, channel): @@ -150,11 +160,13 @@ def channel(self, channel): # retry for 50X errors try: - res = self.session.http.get(self.channel_url.format(channel)) + res = self.session.http.post(self.channel_url, + data={"channel_id": channel, "quality": "low"}, + headers={"X-Requested-With": "XMLHttpRequest"}) if res: # retry for invalid response data try: - return self.session.http.json(res, schema=self.api_schema) + return self.session.http.json(res, schema=self.channel_schema) except PluginError: log.debug("invalid or non-JSON data received") continue @@ -165,7 +177,7 @@ def channel(self, channel): def vod(self, vod_id): res = self.session.http.get(self.vod_url.format(vod_id)) - return self.session.http.json(res, schema=self.api_schema) + return self.session.http.json(res, schema=self.vod_schema) @pluginmatcher(re.compile(r""" @@ -183,7 +195,7 @@ def vod(self, vod_id): (?P<is_group>group/) )(?:channel_id=)?(?P<channel>[-_\w]+) | - vod/view/(?P<vod_id>\d+)- + vod/view/(?P<vod_id>[^/?&]+) ) """, re.VERBOSE)) class Filmon(Plugin): @@ -220,14 +232,28 @@ def _get_streams(self): vod_id = self.match.group("vod_id") is_group = self.match.group("is_group") + adapter = TLSSecLevel1Adapter() + self.session.http.mount("https://filmon.com", adapter) + self.session.http.mount("https://www.filmon.com", adapter) + self.session.http.mount("https://vms-admin.filmon.com/", adapter) + + # get cookies + self.session.http.get(self.url) + if vod_id: data = self.api.vod(vod_id) for _, stream in data["streams"].items(): - streams = HLSStream.parse_variant_playlist(self.session, stream["url"]) - if not streams: - yield stream["quality"], HLSStream(self.session, stream["url"]) + if stream["url"].endswith(".m3u8"): + streams = HLSStream.parse_variant_playlist(self.session, stream["url"]) + if not streams: + yield stream["quality"], HLSStream(self.session, stream["url"]) + else: + yield from streams.items() + elif stream["url"].endswith(".mp4"): + yield stream["quality"], HTTPStream(self.session, stream["url"]) else: - yield from streams.items() + log.error("Unsupported stream 
type") + return else: if channel and not channel.isdigit(): _id = self.cache.get(channel)
diff --git a/tests/plugins/test_filmon.py b/tests/plugins/test_filmon.py --- a/tests/plugins/test_filmon.py +++ b/tests/plugins/test_filmon.py @@ -31,7 +31,10 @@ class TestPluginCanHandleUrlFilmon(PluginCanHandleUrl): ('https://www.filmon.com/tv/channel-4', [None, 'channel-4', None]), ('https://www.filmon.tv/tv/55', [None, '55', None]), ('http://www.filmon.tv/group/comedy', ['group/', 'comedy', None]), - ('http://www.filmon.tv/vod/view/10250-0-crime-boss', [None, None, '10250']) + ('http://www.filmon.tv/vod/view/10250-0-crime-boss', [None, None, '10250-0-crime-boss']), + ('http://www.filmon.tv/vod/view/10250-0-crime-boss/extra', [None, None, '10250-0-crime-boss']), + ('http://www.filmon.tv/vod/view/10250-0-crime-boss?extra', [None, None, '10250-0-crime-boss']), + ('http://www.filmon.tv/vod/view/10250-0-crime-boss&extra', [None, None, '10250-0-crime-boss']) ]) def test_match_url(url, expected): Filmon.bind(Mock(), "tests.plugins.test_filmon")
plugins.filmon: all channels fail to start. Debug show invalid server response ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description I have tried diferent versions of streamlink including the latest. Ubuntu and windows. When running streamlink.exe https://www.filmon.com/tv/itv1 low --player-external-http --player-external-http-port 23555 --player-continuous-http it says channel sleep and invaild server responce. ### Debug log ```text C:\Program Files (x86)\Streamlink\bin>streamlink.exe https://www.filmon.com/tv/itv1 low --player-external-http --player-external-http-port 23555 --player-continuous-http --loglevel debug [cli][debug] OS: Windows 10 [cli][debug] Python: 3.9.8 [cli][debug] Streamlink: 3.1.1 [cli][debug] Requests(2.26.0), Socks(1.7.1), Websocket(1.2.1) [cli][debug] Arguments: [cli][debug] url=https://www.filmon.com/tv/itv1 [cli][debug] stream=['low'] [cli][debug] --loglevel=debug [cli][debug] --player-continuous-http=True [cli][debug] --player-external-http=True [cli][debug] --player-external-http-port=23555 [cli][debug] --ffmpeg-ffmpeg=C:\Program Files (x86)\Streamlink\ffmpeg\ffmpeg.exe [cli][info] Found matching plugin filmon for URL https://www.filmon.com/tv/itv1 [plugins.filmon][debug] Found channel ID: 11 [plugins.filmon][debug] invalid server response [plugins.filmon][debug] channel sleep 1 [plugins.filmon][debug] invalid server response [plugins.filmon][debug] channel sleep 2 [plugins.filmon][debug] invalid server response [plugins.filmon][debug] channel sleep 3 [plugins.filmon][debug] invalid server response [plugins.filmon][debug] channel sleep 4 [plugins.filmon][debug] invalid server response [plugins.filmon][debug] Reset cached channel: itv1 error: Unable to find 'self.api.channel' for 11 ```
Hmm, 404 for: http://www.filmon.com/api-v2/channel/{0}?protocol=hls Looks to be available on: http://www.filmon.com/tv/api/channel/{0}?protocol=hls but that requires a session key for the API. The website performs a POST to: https://www.filmon.com/ajax/getChannelInfo ``` Content-Type: application/x-www-form-urlencoded; charset=UTF-8 ``` with body: ``` channel_id=851&quality=low ``` and response: ``` { "id": 851, "logo": "https://static.filmon.com/assets/channels/851/logo.png?v2", "big_logo": "https://static.filmon.com/assets/channels/851/big_logo.png", "title": "5*", "alias": "5-star", "description": "5 star is an entertainment channel for people who like their drama sexier, their documentaries harder, their movies bigger and their soaps soapier. 5 Star is unpredictable, mischievous and relevant with exclusive new shows, blockbuster movies and US series. 5 Star is for people who like to be entertained....", "has_description": true, "group": "UK LIVE TV", "group_id": 5, "group_alias": "uk-live-tv", "type": "standard", "is_free": false, "is_free_sd_mode": false, "free_sd": false, "is_adult": false, "adult_content": false, "is_interactive": false, "is_vod": false, "is_vox": false, "has_tvguide": true, "recordable": true, "chat_keyword": "5-star", "created_at": 1341853217, "is360": false, "has360Sound": false, "jwplatform_media_id": "UrXk5bS4", "youtube_playlist_id": null, "preload_message": "Tuning your antenna", "preload_timeout": 3, "is_local": true, "preload_intro": { "name": "mp4:promo_short.mp4", "url": "rtmp://vod-static.la3.edge.filmon.com/static/" }, "is_favorite": false, "images": { "logos": [ <SNIPPED> ] }, "extra_big_logo": "https://static.filmon.com/assets/channels/851/extra_big_logo.png", "schedule": true, "tvguide": [], "now_playing": null, "next_playing": null, "content_rating": "0", "watch_free_time": 30, "seekable": false, "serverside_record": false, "upnp_enabled": true, "expire_timeout": 30, "watch-timeout": "30", "expire_time": 1644002602, "subscriptions": { <SNIPPED> }, "streams": [ { "id": 1, "quality": "high", "url": "https://edge-823-ch-gv.filmon.com/live/851.high.stream/playlist.m3u8?id=0ad5aac39bb13fbe9156feccde25a24b9cc4d8e2b6e69fa51da411a8ed21acb01a34f53502dd935dbf6fa94c30881f48cb04e0769a1eb5399828c3be8978075b37de6b8a1d995f3e451e9bfcb066fed4aedeefffaac11a94b621328381fae8b23ad39ce9394e02d438981374e838fe5df14ec790c0cb32466dd28977721bd0343398df49c70302891de22d8b3c9b4ba45f697b03e424d985", "name": "HD", "is_adaptive": "0", "watch-timeout": 30 }, { "id": 2, "quality": "low", "url": "https://edge-823-ch-gv.filmon.com/live/851.low.stream/playlist.m3u8?id=0ad5aac39bb13fbe9156feccde25a24b9cc4d8e2b6e69fa51da411a8ed21acb05754a1af9e5ac6f49aae0fc0bbf50627bf2d318bc15f46661961ce802c13bddd0d17d7d7f5270ae52da16e0392517fedc2269996507b6d916b98f70059382aeea78be535a03691553fe63f75a00d4732c8cdd4777d724735131582431fd5ae4b50752fe3e672cd7df2007edbdd3bab9838ce25427aaa02c2", "name": "SD", "is_adaptive": "0", "watch-timeout": 30 } ], "serverURL": "https://edge-823-ch-gv.filmon.com/live/851.low.stream/playlist.m3u8?id=0ad5aac39bb13fbe9156feccde25a24b9cc4d8e2b6e69fa51da411a8ed21acb05754a1af9e5ac6f49aae0fc0bbf50627bf2d318bc15f46661961ce802c13bddd0d17d7d7f5270ae52da16e0392517fedc2269996507b6d916b98f70059382aeea78be535a03691553fe63f75a00d4732c8cdd4777d724735131582431fd5ae4b50752fe3e672cd7df2007edbdd3bab9838ce25427aaa02c2", "streamName": "SD", "enable_local_tvguide": true, "server_time": 1644002572 } ```
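A minimal sketch of querying the newer endpoint described above (channel ID and field names are taken from the example response; this skips the cookie and TLS-adapter handling that the final plugin needed):

```python
import requests

res = requests.post(
    "https://www.filmon.com/ajax/getChannelInfo",
    data={"channel_id": 851, "quality": "low"},
    headers={"X-Requested-With": "XMLHttpRequest"},
)
data = res.json()

# "streams" is a list of dicts in the example response above
for stream in data.get("streams", []):
    print(stream["quality"], stream["url"])
```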
2022-02-05T23:36:51
streamlink/streamlink
4,338
streamlink__streamlink-4338
[ "4337" ]
40b995aa7bdaa9e0e85f6e9c86be0a63117423d5
diff --git a/src/streamlink/plugins/pandalive.py b/src/streamlink/plugins/pandalive.py --- a/src/streamlink/plugins/pandalive.py +++ b/src/streamlink/plugins/pandalive.py @@ -12,19 +12,11 @@ r"https?://(?:www\.)?pandalive\.co\.kr/" )) class Pandalive(Plugin): - _room_id_re = re.compile(r"roomid\s*=\s*String\.fromCharCode\((.*)\)") - def _get_streams(self): + re_media_code = re.compile(r"""routePath:\s*(["'])(\\u002F|/)live(\\u002F|/)play(\\u002F|/)(?P<id>\d+)\1""") media_code = self.session.http.get(self.url, schema=validate.Schema( - validate.parse_html(), - validate.xml_xpath_string(".//script[contains(text(), 'roomid')]/text()"), - validate.any(None, validate.all( - validate.transform(self._room_id_re.search), - validate.any(None, validate.all( - validate.get(1), - validate.transform(lambda s: "".join(map(lambda c: chr(int(c)), s.split(",")))), - )), - )), + validate.transform(re_media_code.search), + validate.any(None, validate.get("id")) )) if not media_code: @@ -34,10 +26,14 @@ def _get_streams(self): json = self.session.http.post( "https://api.pandalive.co.kr/v1/live/play", - data={"action": "watch", "mediaCode": media_code}, + data={ + "action": "watch", + "userIdx": media_code + }, schema=validate.Schema( - validate.parse_json(), { - validate.optional("media"): { + validate.parse_json(), + { + "media": { "title": str, "userId": str, "userNick": str, @@ -45,14 +41,20 @@ def _get_streams(self): "isLive": bool, "liveType": str, }, - validate.optional("PlayList"): { - "hls2": [{ + "PlayList": { + validate.optional("hls"): [{ + "url": validate.url(), + }], + validate.optional("hls2"): [{ + "url": validate.url(), + }], + validate.optional("hls3"): [{ "url": validate.url(), }], }, "result": bool, "message": str, - }, + } ) ) @@ -73,7 +75,12 @@ def _get_streams(self): self.author = f"{json['media']['userNick']} ({json['media']['userId']})" self.title = f"{json['media']['title']}" - return HLSStream.parse_variant_playlist(self.session, json["PlayList"]["hls2"][0]["url"]) + playlist = json["PlayList"] + for key in ("hls", "hls2", "hls3"): + # use the first available HLS stream + if key in playlist and playlist[key]: + # all stream qualities share the same URL, so just use the first one + return HLSStream.parse_variant_playlist(self.session, playlist[key][0]["url"]) __plugin__ = Pandalive
plugins.pandatv: No Playable Streams Found ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description No Playable stream found with plugin https://www.pandalive.co.kr/live/play/19178412 or any other link of a streamer that is online would work ### Debug log ```text streamlink --loglevel debug https://www.pandalive.co.kr/live/play/19178412 best -o h:\testing1.ts [cli][debug] OS: Windows 10 [cli][debug] Python: 3.9.8 [cli][debug] Streamlink: 3.1.1 [cli][debug] Requests(2.26.0), Socks(1.7.1), Websocket(1.2.1) [cli][debug] Arguments: [cli][debug] url=https://www.pandalive.co.kr/live/play/19178412 [cli][debug] stream=['best'] [cli][debug] --loglevel=debug [cli][debug] --output=h:\testing1.ts [cli][debug] --ffmpeg-ffmpeg=C:\Program Files (x86)\Streamlink\ffmpeg\ffmpeg.exe [cli][info] Found matching plugin pandalive for URL https://www.pandalive.co.kr/live/play/19178412 error: No playable streams found on this URL: https://www.pandalive.co.kr/live/play/19178412 ```
2022-02-08T13:50:09
streamlink/streamlink
4,349
streamlink__streamlink-4349
[ "4348" ]
88a40cfff5464d6e87bf6c7e9722ee3574bc1b7a
diff --git a/src/streamlink/plugins/openrectv.py b/src/streamlink/plugins/openrectv.py --- a/src/streamlink/plugins/openrectv.py +++ b/src/streamlink/plugins/openrectv.py @@ -29,6 +29,10 @@ class OPENRECtv(Plugin): "url": validate.any(None, validate.url()), "url_public": validate.any(None, validate.url()), "url_ull": validate.any(None, validate.url()), + }, + validate.optional("subs_trial_media"): { + "url": validate.any(None, validate.url()), + "url_ull": validate.any(None, validate.url()), } }) @@ -133,7 +137,7 @@ def _get_streams(self): m3u8_file = subs_data["data"]["items"][0]["media"]["url"] # streaming elif mdata["onair_status"] == 1: - m3u8_file = mdata["media"]["url_ull"] + m3u8_file = mdata["media"]["url_ull"] or mdata["subs_trial_media"]["url_ull"] # archive elif mdata["onair_status"] == 2 and mdata["media"]["url_public"] is not None: m3u8_file = mdata["media"]["url_public"].replace("public.m3u8", "playlist.m3u8")
Openrec.tv: cannot play livestream when it's in `member_trial` ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description Cannot access to a subscription livestream with free-trial, with or without logging in. I can do so with my browser. ### Debug log ```text G:\>streamlink https://www.openrec.tv/live/n9ze3ylno84 best --loglevel=debug [cli][debug] OS: Windows 7 [cli][debug] Python: 3.8.7 [cli][debug] Streamlink: 3.1.1 [cli][debug] Requests(2.26.0), Socks(1.7.1), Websocket(1.2.1) [cli][debug] Arguments: [cli][debug] url=https://www.openrec.tv/live/n9ze3ylno84 [cli][debug] stream=['best'] [cli][debug] --loglevel=debug [cli][debug] --player=D:\Program Files\SMPlayer\mpv\mpv.exe [cli][debug] --ffmpeg-ffmpeg=D:\Program Files\!cli\ffmpeg.exe [cli][info] Found matching plugin openrectv for URL https://www.openrec.tv/live/n9ze3ylno84 [plugins.openrectv][debug] Got valid detail response [plugins.openrectv][debug] Found video: きどまだSeason2 #20 (n9ze3ylno84) error: No playable streams found on this URL: https://www.openrec.tv/live/n9ze3ylno84 ``` ```text G:\>streamlink https://www.openrec.tv/live/n9ze3ylno84 best --loglevel=debug --openrectv-email=**** --o penrectv-password=**** [cli][debug] OS: Windows 7 [cli][debug] Python: 3.8.7 [cli][debug] Streamlink: 3.1.1 [cli][debug] Requests(2.26.0), Socks(1.7.1), Websocket(1.2.1) [cli][debug] Arguments: [cli][debug] url=https://www.openrec.tv/live/n9ze3ylno84 [cli][debug] stream=['best'] [cli][debug] --loglevel=debug [cli][debug] --player=D:\Program Files\SMPlayer\mpv\mpv.exe [cli][debug] --ffmpeg-ffmpeg=D:\Program Files\!cli\ffmpeg.exe [cli][debug] --openrectv-email=**** [cli][debug] --openrectv-password=******** [cli][info] Found matching plugin openrectv for URL https://www.openrec.tv/live/n9ze3ylno84 [plugins.openrectv][debug] Logged in as **** [plugins.openrectv][debug] Got valid detail response [plugins.openrectv][debug] Found video: きどまだSeason2 #20 (n9ze3ylno84) error: No playable streams found on this URL: https://www.openrec.tv/live/n9ze3ylno84 ```
2022-02-14T11:31:07
streamlink/streamlink
4,355
streamlink__streamlink-4355
[ "4354" ]
cad3ef5e551859607abf1acc66be1dc329ade534
diff --git a/src/streamlink/plugins/ard_live.py b/src/streamlink/plugins/ard_live.py --- a/src/streamlink/plugins/ard_live.py +++ b/src/streamlink/plugins/ard_live.py @@ -14,6 +14,7 @@ r"https?://((www|live)\.)?daserste\.de/" )) class ARDLive(Plugin): + _URL_DATA_BASE = "https://www.daserste.de/" _QUALITY_MAP = { 4: "1080p", 3: "720p", @@ -36,7 +37,7 @@ def _get_streams(self): except PluginError: return - data_url = urljoin(self.url, data_url) + data_url = urljoin(self._URL_DATA_BASE, data_url) log.debug(f"Player URL: '{data_url}'") self.title, media = self.session.http.get(data_url, schema=validate.Schema(
plugins.ard_live: Unable to parse MEDIAINFO ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description On streamlink 3.1.1 Linux: ``` [cli][info] Found matching plugin ard_live for URL https://live.daserste.de/ error: Unable to validate response text: Unable to parse MEDIAINFO: Expecting value: line 1 column 1 (char 0) ('<!DOCTYPE HTML>\n<html lang="de" i ...) ``` Streamlink 2.0.0 Windows works fine. Can't find a working 2.0.0 Linux build to verify. 3.1.1 seems to expect a player url at `https://live.daserste.de/live-de-102~playerJson.json` and 2.0.0 at `https://www.daserste.de/live/live-de-102~playerJson.json`. Is there a commandline arg to override it? ### Debug log ```text [cli][debug] OS: Linux-5.15.2-arch1-1-x86_64-with-glibc2.35 [cli][debug] Python: 3.10.2 [cli][debug] Streamlink: 3.1.1 [cli][debug] Requests(2.27.0), Socks(1.7.1), Websocket(1.2.3) [cli][debug] Arguments: [cli][debug] url=https://live.daserste.de/ [cli][debug] --loglevel=debug [cli][info] Found matching plugin ard_live for URL https://live.daserste.de/ [plugins.ard_live][debug] Player URL: 'https://live.daserste.de/live-de-102~playerJson.json' error: Unable to validate response text: Unable to parse MEDIAINFO: Expecting value: line 1 column 1 (char 0) ('<!DOCTYPE HTML>\n<html lang="de" i ...) ```
https://live.daserste.de/ redirects to https://www.daserste.de/live/index.html which is working fine: ``` $ streamlink 'https://www.daserste.de/live/index.html' best -l debug [cli][debug] OS: Linux-5.16.10-1-git-x86_64-with-glibc2.35 [cli][debug] Python: 3.10.2 [cli][debug] Streamlink: 3.1.1+8.g3bae9af [cli][debug] Requests(2.27.1), Socks(1.7.1), Websocket(1.2.3) [cli][debug] Arguments: [cli][debug] url=https://www.daserste.de/live/index.html [cli][debug] stream=['best'] [cli][debug] --loglevel=debug [cli][debug] --player=mpv [cli][info] Found matching plugin ard_live for URL https://www.daserste.de/live/index.html [plugins.ard_live][debug] Player URL: 'https://www.daserste.de/live/live-de-102~playerJson.json' [utils.l10n][debug] Language code: en_US [cli][info] Available streams: 270p (worst), 360p, 540p, 720p, 1080p (best) [cli][info] Opening stream: 1080p (hls) ``` The problem here is that the `data_url` is relative to the input URL, and the URL is invalid when using the `live.daserste.de` input. https://github.com/streamlink/streamlink/blob/3.1.1/src/streamlink/plugins/ard_live.py#L39
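To illustrate the relative-URL issue described above: `urljoin()` resolves the player JSON path against whatever URL the user entered, so the same relative path ends up on different hosts. The fix in the patch pins the base URL instead. A small sketch (the relative path is inferred from the debug log in the report):

```python
from urllib.parse import urljoin

data_url = "live-de-102~playerJson.json"

# Old behaviour: joined against the URL the user entered, yielding the failing URL from the log
print(urljoin("https://live.daserste.de/", data_url))
# https://live.daserste.de/live-de-102~playerJson.json

# Patched behaviour: joined against the fixed base URL introduced by the plugin
print(urljoin("https://www.daserste.de/", data_url))
# https://www.daserste.de/live-de-102~playerJson.json
```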
2022-02-17T08:57:19
streamlink/streamlink
4,368
streamlink__streamlink-4368
[ "4367" ]
19fdc30689bc83286db9ce48af0e98a6d71ae3db
diff --git a/src/streamlink/plugins/goltelevision.py b/src/streamlink/plugins/goltelevision.py --- a/src/streamlink/plugins/goltelevision.py +++ b/src/streamlink/plugins/goltelevision.py @@ -5,27 +5,20 @@ from streamlink.stream.hls import HLSStream -class GOLTelevisionHLSStream(HLSStream): - @classmethod - def _get_variant_playlist(cls, res): - res.encoding = "UTF-8" - return super()._get_variant_playlist(res) - - @pluginmatcher(re.compile( r"https?://(?:www\.)?goltelevision\.com/en-directo" )) class GOLTelevision(Plugin): def _get_streams(self): url = self.session.http.get( - "https://www.goltelevision.com/api/manifest/live", + "https://play.goltelevision.com/api/stream/live", schema=validate.Schema( validate.parse_json(), {"manifest": validate.url()}, validate.get("manifest") ) ) - return GOLTelevisionHLSStream.parse_variant_playlist(self.session, url) + return HLSStream.parse_variant_playlist(self.session, url) __plugin__ = GOLTelevision
plugins.goltelevision: error: Unable to open URL (500 Server Error: Internal Server Error for url...) ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description Hi, I´m using the latest version of [appimage ](https://github.com/streamlink/streamlink-appimage/releases)and I can´t get the url info for goltelevision.com Feel free to ask me anything else to test from my side. Thanks and best regards ### Debug log ```text root@NUC:/home/nico/streamlink# ./streamlink --version streamlink 3.1.1 root@NUC:/home/nico/streamlink# ./streamlink -l debug http://www.goltelevision.com/en-directo [cli][info] streamlink is running as root! Be careful! [cli][debug] OS: Linux-5.13.0-30-generic-x86_64-with-glibc2.31 [cli][debug] Python: 3.10.1 [cli][debug] Streamlink: 3.1.1 [cli][debug] Requests(2.27.1), Socks(1.7.1), Websocket(1.2.3) [cli][debug] Arguments: [cli][debug] url=http://www.goltelevision.com/en-directo [cli][debug] --loglevel=debug [cli][info] Found matching plugin goltelevision for URL http://www.goltelevision.com/en-directo [utils.l10n][debug] Language code: es_ES error: Unable to open URL: https://www.goltelevision.com/api/stream/live.m3u8?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdHJlYW0iOiJsaXZlLm0zdTgiLCJpcCI6Ijg4LjAuMTkyLjIwNSIsImV4cGlyZXNJbiI6IjI4ODAwcyIsImlhdCI6MTY0NTkxNDQ1OCwiZXhwIjoxNjQ1OTQzMjU4fQ.5ji7bkZ1nUtAMzD1gZuy04-OZtCYvKfqNTOzxt1_r6Y (500 Server Error: Internal Server Error for url: https://www.goltelevision.com/api/stream/live.m3u8?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdHJlYW0iOiJsaXZlLm0zdTgiLCJpcCI6Ijg4LjAuMTkyLjIwNSIsImV4cGlyZXNJbiI6IjI4ODAwcyIsImlhdCI6MTY0NTkxNDQ1OCwiZXhwIjoxNjQ1OTQzMjU4fQ.5ji7bkZ1nUtAMzD1gZuy04-OZtCYvKfqNTOzxt1_r6Y) ```
The API URL where the HLS stream URL is retrieved from has been changed on their website, but the old one which Streamlink's plugin is using still exists and returns invalid stream URLs which lead to status code 500 server responses when trying to access the stream. The new API URL is `https://play.goltelevision.com/api/stream/live`: https://github.com/streamlink/streamlink/blob/3.1.1/src/streamlink/plugins/goltelevision.py The custom `GOLTelevisionHLSStream` subclass of `HLSStream` can also be removed with the recent version bump of `charset_normalizer`, as it has fixed the encoding detection issues when there's no charset data in the server's content-type response header. Let me open a PR real quick.
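A rough standalone sketch of the fix described above, assuming the new endpoint keeps returning a JSON object with a `manifest` field as reflected in the accompanying patch:

```python
import requests

# Query the updated API endpoint and read the HLS manifest URL from the JSON response.
res = requests.get("https://play.goltelevision.com/api/stream/live", timeout=10)
res.raise_for_status()
manifest_url = res.json()["manifest"]

# In the plugin, this URL is handed to HLSStream.parse_variant_playlist()
print(manifest_url)
```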
2022-02-27T00:45:44
streamlink/streamlink
4,416
streamlink__streamlink-4416
[ "4370" ]
03d1863db453d75725ed65315a24d45e1f42bf85
diff --git a/src/streamlink/plugins/cmmedia.py b/src/streamlink/plugins/cmmedia.py new file mode 100644 --- /dev/null +++ b/src/streamlink/plugins/cmmedia.py @@ -0,0 +1,119 @@ +""" +$description Live TV channel and video on-demand service from CMM, a Spanish public, state-owned broadcaster. +$url cmmedia.es +$type live, vod +$notes Content not licensed for digital distribution is unavailable via the live channel stream. +""" + +import logging +import re +from functools import partial + +from streamlink.plugin import Plugin, pluginmatcher +from streamlink.plugin.api import validate +from streamlink.stream.hls import HLSStream + +log = logging.getLogger(__name__) + + +@pluginmatcher(re.compile(r"https?://(?:www\.)?cmmedia\.es")) +class CMMedia(Plugin): + iframe_url_re = re.compile(r'{getKs\("([^"]+)"') + json_re = re.compile(r"twindow\.kalturaIframePackageData\s*=\s*({.*});[\\nt]*var isIE8") + unescape_quotes = partial(re.compile(r'\\"').sub, r'"') + partner_ids_re = re.compile(r"/p/(\d+)/sp/(\d+)/") + + @staticmethod + def is_restricted(data): + items = {k: v for k, v in data.items() if k.startswith("is")} + for k, v in items.items(): + log.debug(f"{k}: {v}") + + restrictions = [k for k, v in items.items() if v] + if restrictions: + log.error(f"This site is restricted: ({', '.join(restrictions)})") + return True + + return False + + def _get_streams(self): + parsed_html = self.session.http.get(self.url, schema=validate.Schema(validate.parse_html())) + + iframe_url = validate.validate(validate.Schema( + validate.xml_xpath_string(".//script[contains(text(), '/embedIframeJs/')]/text()"), + validate.any(None, validate.all( + validate.transform(self.iframe_url_re.search), + validate.any(None, validate.all(validate.get(1), validate.url())), + )), + ), parsed_html) + + if not iframe_url: + return + + m = self.partner_ids_re.search(iframe_url) + if not m: + log.error("Failed to find partner IDs in IFRAME URL") + return + + p = m.group(1) + sp = m.group(2) + + json = self.session.http.get(iframe_url, schema=validate.Schema( + validate.transform(self.json_re.search), + validate.any(None, validate.all( + validate.get(1), + validate.transform(self.unescape_quotes), + validate.parse_json(), + validate.any({ + "entryResult": { + "contextData": { + "isSiteRestricted": bool, + "isCountryRestricted": bool, + "isSessionRestricted": bool, + "isIpAddressRestricted": bool, + "isUserAgentRestricted": bool, + "flavorAssets": [{ + "id": str, + }], + }, + "meta": { + "id": str, + "name": str, + "categories": validate.any(None, str), + }, + }, + }, {"error": str}), + )), + )) + + if not json: + return + + if "error" in json: + log.error(f"API error: {json['error']}") + return + + json = json.get("entryResult") + + if self.is_restricted(json["contextData"]): + return + + self.id = json["meta"]["id"] + self.title = json["meta"]["name"] + self.author = validate.validate(validate.Schema( + validate.xml_xpath_string(".//h1[contains(@class, 'btn-title')]/text()"), + ), parsed_html) + if json["meta"]["categories"]: + self.category = json["meta"]["categories"] + + for asset in json["contextData"]["flavorAssets"]: + yield from HLSStream.parse_variant_playlist( + self.session, ( + f"https://cdnapisec.kaltura.com/p/{p}/sp/{sp}/playManifest/entryId/{json['meta']['id']}" + f"/flavorIds/{asset['id']}/format/applehttp/protocol/https/a.m3u8" + ), + name_fmt="{pixels}_{bitrate}", + ).items() + + +__plugin__ = CMMedia
diff --git a/tests/plugins/test_cmmedia.py b/tests/plugins/test_cmmedia.py new file mode 100644 --- /dev/null +++ b/tests/plugins/test_cmmedia.py @@ -0,0 +1,19 @@ +from streamlink.plugins.cmmedia import CMMedia +from tests.plugins import PluginCanHandleUrl + + +class TestPluginCanHandleUrlCMMedia(PluginCanHandleUrl): + __plugin__ = CMMedia + + should_match = [ + "http://cmmedia.es", + "http://www.cmmedia.es", + "http://cmmedia.es/any/path", + "http://cmmedia.es/any/path?x", + "http://www.cmmedia.es/any/path?x&y", + "https://cmmedia.es", + "https://www.cmmedia.es", + "https://cmmedia.es/any/path", + "https://cmmedia.es/any/path?x", + "https://www.cmmedia.es/any/path?x&y", + ]
https://www.cmmedia.es/en-directo/tv / https://klive.kaltura.com ### Checklist - [X] This is a plugin request and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin requests](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+request%22) ### Description Hi, This is a plugin development request to include CMM Media in Streamlink. CMM Media is a TV Broadcaster of [Castilla La Mancha](url=https://en.wikipedia.org/wiki/Castilla%E2%80%93La_Mancha) region, in Spain. Is run under the gobernment of the of Castilla La Mancha. The content provided are basically news, documentaries and special programs from that region of Spain ### Input URLs Information obtained from VideoDownload Helper: **masterManifest:** https://klive.kaltura.com/s/env/cluster-1-b.live.nvp1/live/hls/p/2288691/e/0_xs45iy5i/tl/main/st/0/t/DznN37GHRy-bwqducFv4Xw/index-s32.m3u8?__hdnea__=st=1646058301~exp=1646144701~acl=/s/env/cluster-1-b.live.nvp1/live/hls/p/2288691/e/0_xs45iy5i/tl/main/st/0/t/DznN37GHRy-bwqducFv4Xw/index-s32.m3u8*~hmac=3f226bcf7b99800114dff69ce810b3e3b0f36873eaf3503216917d6ba83f23c6 mediaManifest: https://klive.kaltura.com/s/env/cluster-1-b.live.nvp1/live/hls/p/2288691/e/0_xs45iy5i/tl/main/st/0/t/DznN37GHRy-bwqducFv4Xw/index-s32.m3u8?__hdnea__=st=1646058301~exp=1646144701~acl=/s/env/cluster-1-b.live.nvp1/live/hls/p/2288691/e/0_xs45iy5i/tl/main/st/0/t/DznN37GHRy-bwqducFv4Xw/index-s32.m3u8*~hmac=3f226bcf7b99800114dff69ce810b3e3b0f36873eaf3503216917d6ba83f23c6 topUrl: https://www.cmmedia.es/en-directo/tv/ url: https://klive.kaltura.com/s/env/cluster-1-b.live.nvp1/live/hls/p/2288691/e/0_xs45iy5i/tl/main/st/0/t/DznN37GHRy-bwqducFv4Xw/index-s32.m3u8?__hdnea__=st=1646058301~exp=1646144701~acl=/s/env/cluster-1-b.live.nvp1/live/hls/p/2288691/e/0_xs45iy5i/tl/main/st/0/t/DznN37GHRy-bwqducFv4Xw/index-s32.m3u8*~hmac=3f226bcf7b99800114dff69ce810b3e3b0f36873eaf3503216917d6ba83f23c6 If you need anything else, just let me know.
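For reference, the plugin merged in the patch above ends up requesting Kaltura playManifest URLs built from the partner IDs in the embed IFRAME and the entry/flavor IDs from the embedded JSON. A small sketch of just that URL construction; the IFRAME URL and flavor ID below are placeholders, while `2288691`/`0_xs45iy5i` are taken from the manifest URLs quoted in this request:

```python
import re

# Embed IFRAME URL as scraped by the plugin (made-up example following the /p/<id>/sp/<id>/ pattern)
iframe_url = "https://cdnapisec.kaltura.com/p/2288691/sp/228869100/embedIframeJs/uiconf_id/12345"
p, sp = re.search(r"/p/(\d+)/sp/(\d+)/", iframe_url).groups()

entry_id = "0_xs45iy5i"     # entry ID, visible in the example manifest URLs above
flavor_id = "0_abcdef12"    # placeholder flavor asset ID, normally read from the Kaltura JSON

manifest_url = (
    f"https://cdnapisec.kaltura.com/p/{p}/sp/{sp}/playManifest/entryId/{entry_id}"
    f"/flavorIds/{flavor_id}/format/applehttp/protocol/https/a.m3u8"
)
print(manifest_url)
```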
What's the deal with adding TV channels now? Since #4344 was merged I'm a bit confused as to what the current situation is. Can somebody say yes or no for this, please?
2022-03-31T23:29:35
streamlink/streamlink
4,431
streamlink__streamlink-4431
[ "4225" ]
dadf29ebe551c686147afd02b16f6919730bd394
diff --git a/src/streamlink/plugins/htv.py b/src/streamlink/plugins/htv.py new file mode 100644 --- /dev/null +++ b/src/streamlink/plugins/htv.py @@ -0,0 +1,99 @@ +""" +$description Vietnamese live TV channels owned by the People's Committee of Ho Chi Minh City. +$url htv.com.vn +$type live +$region Vietnam +""" + +import logging +import re +from datetime import date + +from streamlink.plugin import Plugin, pluginmatcher +from streamlink.plugin.api import validate +from streamlink.stream.hls import HLSStream + +log = logging.getLogger(__name__) + + +@pluginmatcher(re.compile( + r"https?://(?:www\.)?htv\.com\.vn/truc-tuyen(?:\?channel=(?P<channel>\w+)&?|$)" +)) +class HTV(Plugin): + hls_url_re = re.compile(r'var\s+iosUrl\s*=\s*"([^"]+)"') + + def get_channels(self): + data = self.session.http.get(self.url, schema=validate.Schema( + validate.parse_html(), + validate.xml_xpath(".//*[contains(@class,'channel-list')]//a[@data-id][@data-code]"), + [ + validate.union_get("data-id", "data-code"), + ], + )) + + return {k: v for k, v in data} + + def _get_streams(self): + channels = self.get_channels() + + if not channels: + log.error("No channels found") + return + + log.debug(f"channels={channels}") + + channel_id = self.match.group("channel") + if channel_id is None: + channel_id, channel_code = next(iter(channels.items())) + elif channel_id in channels: + channel_code = channels[channel_id] + else: + log.error(f"Unknown channel ID: {channel_id}") + return + + log.info(f"Channel: {channel_code}") + + json = self.session.http.post( + "https://www.htv.com.vn/HTVModule/Services/htvService.aspx", + data={ + "method": "GetScheduleList", + "channelid": channel_id, + "template": "AjaxSchedules.xslt", + "channelcode": channel_code, + "date": date.today().strftime("%d-%m-%Y"), + }, + schema=validate.Schema( + validate.parse_json(), + { + "success": bool, + "chanelUrl": validate.url(), + }, + ), + ) + + if not json["success"]: + log.error("API error: success not true") + return + + hls_url = self.session.http.get( + json["chanelUrl"], + headers={"Referer": self.url}, + schema=validate.Schema( + validate.parse_html(), + validate.xml_xpath_string(".//script[contains(text(), 'playlist.m3u8')]/text()"), + validate.any(None, validate.all( + validate.transform(self.hls_url_re.search), + validate.any(None, validate.all(validate.get(1), validate.url())), + )), + ), + ) + + if hls_url: + return HLSStream.parse_variant_playlist( + self.session, + hls_url, + headers={"Referer": "https://hplus.com.vn/"}, + ) + + +__plugin__ = HTV
diff --git a/tests/plugins/test_htv.py b/tests/plugins/test_htv.py new file mode 100644 --- /dev/null +++ b/tests/plugins/test_htv.py @@ -0,0 +1,24 @@ +from streamlink.plugins.htv import HTV +from tests.plugins import PluginCanHandleUrl + + +class TestPluginCanHandleUrlHTV(PluginCanHandleUrl): + __plugin__ = HTV + + should_match_groups = [ + ("https://htv.com.vn/truc-tuyen", {}), + ("https://htv.com.vn/truc-tuyen?channel=123", {"channel": "123"}), + ("https://htv.com.vn/truc-tuyen?channel=123&foo", {"channel": "123"}), + ("https://www.htv.com.vn/truc-tuyen", {}), + ("https://www.htv.com.vn/truc-tuyen?channel=123", {"channel": "123"}), + ("https://www.htv.com.vn/truc-tuyen?channel=123&foo", {"channel": "123"}), + ] + + should_not_match = [ + "https://htv.com.vn/", + "https://htv.com.vn/any/path", + "https://htv.com.vn/truc-tuyen?foo", + "https://www.htv.com.vn/", + "https://www.htv.com.vn/any/path", + "https://www.htv.com.vn/truc-tuyen?foo", + ]
htv.com.vn / htv-livecdn.fptplay.net/htvonline ### Checklist - [X] This is a plugin request and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin requests](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+request%22) ### Description Hi! I'd like to add the following plugin: https://htv-livecdn.fptplay.net/htvonline in order to get the following channel: - **HTV7**: http://www.htv.com.vn/truc-tuyen ### Input URLs http://www.htv.com.vn/truc-tuyen
Looks like we can use regex to target `iosUrl` and `iosUrl_mb` javascript variable that stores m3u8 urls. Upon using SoftEther VPN with public VPN Gate lists it kinda worked (for `iosUrl_mb` only, no idea why I got `ERR_HTTP2_PROTOCOL_ERROR` for `iosUrl`). ![image](https://user-images.githubusercontent.com/11626920/147865077-f4889387-ae32-4f25-b267-3bf5aebea70a.png) What's the deal with adding TV channels now? Since #4344 was merged I'm a bit confused as to what the current situation is. Can somebody say yes or no for this, please? The annoying thing about this site is that there are apparently no canonical URLs for each of the 4 channels. This means a plugin given the URL: `https://www.htv.com.vn/truc-tuyen` will only play the channel that the page starts with by default (currently HTV7). The only way around this would be to add a plugin command line option to select the other channels. I don't know if the Streamlink maintainers would approve of that solution though.
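A small sketch of the regex idea from the comment above; the HTML snippet and URLs are fabricated for illustration, only the variable names come from the site:

```python
import re

# Fabricated page snippet containing the JavaScript variables mentioned above
html = """
<script>
    var iosUrl = "https://example.invalid/htvonline/master/playlist.m3u8";
    var iosUrl_mb = "https://example.invalid/htvonline/mobile/playlist.m3u8";
</script>
"""

# Group 1 captures the variable name, group 2 the m3u8 URL
pattern = re.compile(r'var\s+(iosUrl(?:_mb)?)\s*=\s*"([^"]+)"')
for name, url in pattern.findall(html):
    print(name, url)
```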
2022-04-04T23:59:30
streamlink/streamlink
4,437
streamlink__streamlink-4437
[ "4418" ]
fcee0d2f66d05fcb177393ed25dabbac1b9993dd
diff --git a/src/streamlink_cli/main.py b/src/streamlink_cli/main.py --- a/src/streamlink_cli/main.py +++ b/src/streamlink_cli/main.py @@ -55,6 +55,7 @@ def get_formatter(plugin: Plugin): return Formatter( { "url": lambda: args.url, + "plugin": lambda: plugin.module, "id": lambda: plugin.get_id(), "author": lambda: plugin.get_author(), "category": lambda: plugin.get_category(),
diff --git a/tests/test_cli_main.py b/tests/test_cli_main.py --- a/tests/test_cli_main.py +++ b/tests/test_cli_main.py @@ -404,24 +404,17 @@ class TestCLIMainHandleStream(unittest.TestCase): @patch("streamlink_cli.main.output_stream") @patch("streamlink_cli.main.args") def test_handle_stream_output_stream(self, args: Mock, mock_output_stream: Mock): - """ - Test that the formatter does define the correct variables - """ args.json = False args.subprocess_cmdline = False args.stream_url = False args.output = False args.stdout = False - args.url = "URL" args.player_passthrough = [] args.player_external_http = False args.player_continuous_http = False mock_output_stream.return_value = True - plugin = _TestPlugin("") - plugin.author = "AUTHOR" - plugin.category = "CATEGORY" - plugin.title = "TITLE" + plugin = FakePlugin("") stream = Stream(session=Mock()) streams = {"best": stream} @@ -430,10 +423,6 @@ def test_handle_stream_output_stream(self, args: Mock, mock_output_stream: Mock) paramStream, paramFormatter = mock_output_stream.call_args[0] self.assertIs(paramStream, stream) self.assertIsInstance(paramFormatter, Formatter) - self.assertEqual( - paramFormatter.title("{url} - {author} - {category}/{game} - {title}"), - "URL - AUTHOR - CATEGORY/CATEGORY - TITLE" - ) class TestCLIMainOutputStream(unittest.TestCase): diff --git a/tests/test_cli_main_formatter.py b/tests/test_cli_main_formatter.py new file mode 100644 --- /dev/null +++ b/tests/test_cli_main_formatter.py @@ -0,0 +1,43 @@ +from unittest.mock import Mock, patch + +import pytest + +from streamlink.plugin import Plugin +from streamlink_cli.main import get_formatter +from streamlink_cli.utils import datetime + + [email protected](scope="module") +def plugin(): + class FakePlugin(Plugin): + def _get_streams(self): # pragma: no cover + pass + + plugin = FakePlugin("https://foo/bar") + plugin.id = "ID" + plugin.author = "AUTHOR" + plugin.category = "CATEGORY" + plugin.title = "TITLE" + plugin.bind(Mock(), "FAKE") + + return plugin + + [email protected]("formatterinput,expected", [ + ("{url}", "https://foo/bar"), + ("{plugin}", "FAKE"), + ("{id}", "ID"), + ("{author}", "AUTHOR"), + ("{category}", "CATEGORY"), + ("{game}", "CATEGORY"), + ("{title}", "TITLE"), + ("{time}", "2000-01-01_00-00-00"), + ("{time:%Y}", "2000"), +]) +# workaround for freezegun not being able to patch the subclassed datetime class in streamlink_cli.utils +# which defines the default datetime->str conversion format (needed for path outputs) +@patch("streamlink_cli.utils.datetime.now", Mock(return_value=datetime(2000, 1, 1, 0, 0, 0, 0))) +@patch("streamlink_cli.main.args", Mock(url="https://foo/bar")) +def test_get_formatter(plugin, formatterinput, expected): + formatter = get_formatter(plugin) + assert formatter.title(formatterinput) == expected
Add {plugin_name} metadata variable ### Checklist - [X] This is a feature request and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin requests](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22feature+request%22) ### Description `yt-dlp` has [`webpage_url_domain`](https://github.com/yt-dlp/yt-dlp#user-content-output-template) as an output template field that I use to organize my downloads by site (`youtube.com`, `twitch.tv`, `vimeo.com` etc.). It'd be nice if a `url_domain` variable could be added to have parity with `yt-dlp`, so both tools could coexist nicely and use the same organizational directory structure.
I can only see this being useful for (custom) config files, if the output path is defined there and the user's input URL set on the CLI. When not using a config file with a predefined output path, the input URL already implies which path you should use in the `--output` or `--record` argument value, so adding a new metadata var is not really necessary and the path can easily be set manually. Plugins also support different domains, so adding a `{plugin}` or `{pluginname}` variable would make more sense IMO. > I can only see this being useful for (custom) config files, if the output path is defined there and the user's input URL set on the CLI. This is indeed how I intend to use this if added. In my `youtube-dl` config I currently have: ``` # ~/.config/yt-dlp/config --output "~/vids/%(webpage_url_domain)s/%(uploader_id)s/%(upload_date>%Y.%m.%d)s_%(id)s_%(title)s.%(ext)s" ``` In my `streamlink` config I _wish_ to have: ``` output=~/vids/{url_domain}/{author}/{time:%Y.%m.%d}_{id}_{title}.ts ``` This would allow the two tools to both use the same directory naming scheme. > Plugins also support different domains, so adding a `{plugin}` or `{pluginname}` variable would make more sense IMO. ~~This does make more sense and I also would prefer this but unfortunately `yt-dlp` & friends don't have any such counterpart I could use currently, I'll open an issue on `yt-dlp` to address this though.~~ I'm an idiot, `yt-dlp` already has this working nicely, let's do that instead of what I originally proposed :+1: For Bigo.tv is worked bot?
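The patch above implements this suggestion by adding a `plugin` entry (resolving to `plugin.module`) to the formatter's variable map. A toy sketch of how such lazily evaluated template variables expand into an output path; the values below are made up:

```python
# Stand-ins for the lazily evaluated variables the CLI formatter exposes
variables = {
    "plugin": lambda: "twitch",        # plugin.module in the real code
    "author": lambda: "somechannel",
    "title": lambda: "some title",
}

template = "~/vids/{plugin}/{author}/{title}.ts"
path = template.format(**{name: resolve() for name, resolve in variables.items()})
print(path)  # ~/vids/twitch/somechannel/some title.ts
```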
2022-04-07T23:15:47
streamlink/streamlink
4,442
streamlink__streamlink-4442
[ "4277" ]
81ed15aacac27162c2e6a5e6854e1ed0ccf8175b
diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -13,7 +13,7 @@ def format_msg(text, *args, **kwargs): CURRENT_PYTHON = version_info[:2] -REQUIRED_PYTHON = (3, 6) +REQUIRED_PYTHON = (3, 7) # This check and everything above must remain compatible with older Python versions if CURRENT_PYTHON < REQUIRED_PYTHON:
Python 3.6 has reached its EOL Python 3.6 has had its official EOL on 2021-12-23 https://endoflife.date/python https://www.python.org/dev/peps/pep-0494/#lifespan Dropping Python 3.6 and setting the minimum version requirement to 3.7 enables the following main features: - forward references for type hints (with `__future__` import) - contextvars and asyncio improvements - dataclasses as a mutable and subclassable alternative to named tuples - `collections.OrderedDict` has become obsolete, as ordered `dict`s are now standardized (not an implementation detail anymore like in 3.6) https://docs.python.org/3/whatsnew/3.7.html Dropping 3.6 doesn't have to be done right now, but it's something we should keep in mind for one of the next releases.
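A short snippet (not from the Streamlink codebase) illustrating two of the 3.7 features listed above, postponed annotation evaluation and dataclasses, plus the now-guaranteed dict ordering:

```python
from __future__ import annotations  # postponed evaluation of annotations, usable since 3.7

from dataclasses import dataclass, field


@dataclass
class Node:
    name: str
    # The self-reference works without quoting thanks to the __future__ import
    children: list[Node] = field(default_factory=list)


print(Node("root", [Node("child")]))

# Insertion order of plain dicts is a language guarantee since 3.7
print(list({"b": 1, "a": 2}))  # ['b', 'a']
```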
Btw, some of Streamlink's dependencies have already dropped py36 in their recent releases. If there are any dependency issues, we'll have to drop py36 here too before we can upgrade. The removal is not blocked by anything other than avoiding an unnecessary major version bump that could otherwise be combined with other breaking changes. I'm fine with dropping this support whenever and doing another release.
2022-04-10T01:36:06
streamlink/streamlink
4,462
streamlink__streamlink-4462
[ "4461" ]
aa91920b77636be376766c12cffc1ad146c13d2a
diff --git a/src/streamlink_cli/argparser.py b/src/streamlink_cli/argparser.py --- a/src/streamlink_cli/argparser.py +++ b/src/streamlink_cli/argparser.py @@ -538,7 +538,8 @@ def build_parser(): "-o", "--output", metavar="FILENAME", help=""" - Write stream data to FILENAME instead of playing it. + Write stream data to FILENAME instead of playing it. If FILENAME is set to - (dash), then the stream data will be + written to stdout, similar to the --stdout argument. You will be prompted if the file already exists. @@ -577,7 +578,8 @@ def build_parser(): "-r", "--record", metavar="FILENAME", help=""" - Open the stream in the player, while at the same time writing it to FILENAME. + Open the stream in the player, while at the same time writing it to FILENAME. If FILENAME is set to - (dash), then the + stream data will be written to stdout, similar to the --stdout argument, while still opening the player. You will be prompted if the file already exists. diff --git a/src/streamlink_cli/main.py b/src/streamlink_cli/main.py --- a/src/streamlink_cli/main.py +++ b/src/streamlink_cli/main.py @@ -132,7 +132,10 @@ def create_output(formatter: Formatter): http = create_http_server() if args.record: - record = check_file_output(formatter.path(args.record, args.fs_safe_rules), args.force) + if args.record == "-": + record = FileOutput(fd=stdout) + else: + record = check_file_output(formatter.path(args.record, args.fs_safe_rules), args.force) log.info(f"Starting player: {args.player}") @@ -1003,7 +1006,7 @@ def main(): # Console output should be on stderr if we are outputting # a stream to stdout. - if args.stdout or args.output == "-" or args.record_and_pipe: + if args.stdout or args.output == "-" or args.record == "-" or args.record_and_pipe: console_out = sys.stderr else: console_out = sys.stdout
diff --git a/tests/test_cli_main.py b/tests/test_cli_main.py --- a/tests/test_cli_main.py +++ b/tests/test_cli_main.py @@ -373,6 +373,33 @@ def test_create_output_record(self, mock_check_file_output: Mock, args: Mock): self.assertIsNone(output.record.fd) self.assertIsNone(output.record.record) + @patch("streamlink_cli.main.args") + @patch("streamlink_cli.main.DEFAULT_STREAM_METADATA", {"title": "bar"}) + def test_create_output_record_stdout(self, args: Mock): + formatter = Formatter({ + "author": lambda: "foo" + }) + args.output = None + args.stdout = None + args.record = "-" + args.record_and_pipe = None + args.force = False + args.fs_safe_rules = None + args.title = "{author} - {title}" + args.url = "URL" + args.player = "mpv" + args.player_args = "" + args.player_fifo = None + args.player_http = None + + output = create_output(formatter) + self.assertIsInstance(output, PlayerOutput) + self.assertEqual(output.title, "foo - bar") + self.assertIsInstance(output.record, FileOutput) + self.assertIsNone(output.record.filename) + self.assertEqual(output.record.fd, stdout) + self.assertIsNone(output.record.record) + @patch("streamlink_cli.main.args") @patch("streamlink_cli.main.console") def test_create_output_record_and_other_file_output(self, console: Mock, args: Mock): @@ -603,6 +630,36 @@ def subject(self, argv, stream=None): self.assertIs(childlogger.parent.handlers[0].stream, stream) self.assertIs(streamlink_cli.main.console.output, stream) + @patch("sys.stderr") + @patch("sys.stdout") + def test_stream_stdout(self, mock_stdout: Mock, mock_stderr: Mock): + self.subject(["streamlink", "--stdout"], mock_stderr) + + @patch("sys.stderr") + @patch("sys.stdout") + def test_stream_output_eq_file(self, mock_stdout: Mock, mock_stderr: Mock): + self.subject(["streamlink", "--output=foo"], mock_stdout) + + @patch("sys.stderr") + @patch("sys.stdout") + def test_stream_output_eq_dash(self, mock_stdout: Mock, mock_stderr: Mock): + self.subject(["streamlink", "--output=-"], mock_stderr) + + @patch("sys.stderr") + @patch("sys.stdout") + def test_stream_record_eq_file(self, mock_stdout: Mock, mock_stderr: Mock): + self.subject(["streamlink", "--record=foo"], mock_stdout) + + @patch("sys.stderr") + @patch("sys.stdout") + def test_stream_record_eq_dash(self, mock_stdout: Mock, mock_stderr: Mock): + self.subject(["streamlink", "--record=-"], mock_stderr) + + @patch("sys.stderr") + @patch("sys.stdout") + def test_stream_record_and_pipe(self, mock_stdout: Mock, mock_stderr: Mock): + self.subject(["streamlink", "--record-and-pipe=foo"], mock_stderr) + @patch("sys.stderr") @patch("sys.stdout") def test_no_pipe_no_json(self, mock_stdout: Mock, mock_stderr: Mock):
Option to get audio with video, but save only the audio to disk ### Checklist - [X] This is a feature request and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin requests](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22feature+request%22) ### Description There was another ticket https://github.com/streamlink/streamlink/issues/1776, but this is a different one. It is pretty clear to me that some sources only stream audio together with video, but I would like streamlink to fetch the audio along with the video, extract the audio on the fly, and write it to an audio file. The purpose is to save disk space and SSD life.
You can already do this by piping the output into `ffmpeg` and supplying the right arguments to it. As @mkbloke already pointed out, use ffmpeg. This can be achieved in multiple ways via the available [file output options](https://streamlink.github.io/latest/cli.html#file-output-options). Adding CLI options for that doesn't make any sense. One of these options could be [`--record`](https://streamlink.github.io/latest/cli.html#cmdoption-record) with the value being set to `-`, but unfortunately, [`-` does not get interpreted as `stdout` in the `--record` logic](https://github.com/streamlink/streamlink/blob/aa91920b77636be376766c12cffc1ad146c13d2a/src/streamlink_cli/main.py#L134-L135), so piping the stream into ffmpeg and remuxing it while watching it in the player that gets spawned by streamlink at the same time is not possible right now. That could be easily added though. ```sh streamlink --record=- --player=player "$INPUT" "$QUALITY" \ | ffmpeg -i pipe:0 -vn -c:a copy "$file.m4a" ``` For now, you'll have to use `--stdout`, clone the output stream and pipe the copies into both ffmpeg and the video player. I'm leaving this thread open for the implementation of the `--record=-` value. I'll take a look at this tomorrow or so. Should be fairly simple from what it looks like. A workaround for `--record=-` could of course also be writing to a named pipe on the file system. ```bash mkfifo ./foo streamlink --record=./foo --player=player "$INPUT" "$QUALITY" & ffmpeg -i ./foo -vn -c:a copy "$file.m4a" ``` @mkbloke, @bastimeyer, thank you for showing this capability. Could you please make it clear for me whether such piping uses the disk as an intermediate medium between streamlink and ffmpeg, or whether the data transfer happens entirely in RAM, preventing excessive SSD wear? https://en.wikipedia.org/wiki/Pipeline_(Unix) Why are you even worried about SSD wear if you're trying to archive media files on it? This is complete nonsense. If you're worried about wear, which is rather ridiculous these days anyway, then write to other storage media. I only use the SSD as an editing medium. I then store the audio files on a normal HDD, on a server.
2022-04-16T23:28:33
streamlink/streamlink
4,467
streamlink__streamlink-4467
[ "4455" ]
6a9efb2c07dfd6e4f257a911459f788df0fb2c92
diff --git a/src/streamlink/plugin/api/utils.py b/src/streamlink/plugin/api/utils.py deleted file mode 100644 --- a/src/streamlink/plugin/api/utils.py +++ /dev/null @@ -1,28 +0,0 @@ -"""Useful wrappers and other tools.""" -import re -from collections import namedtuple - -from streamlink.utils.parse import parse_json, parse_qsd as parse_query, parse_xml - -__all__ = ["parse_json", "parse_xml", "parse_query"] - - -tag_re = re.compile(r'''(?=<(?P<tag>[a-zA-Z]+)(?P<attr>.*?)(?P<end>/)?>(?:(?P<inner>.*?)</\s*(?P=tag)\s*>)?)''', - re.MULTILINE | re.DOTALL) -attr_re = re.compile(r'''\s*(?P<key>[\w-]+)\s*(?:=\s*(?P<quote>["']?)(?P<value>.*?)(?P=quote)\s*)?''') -Tag = namedtuple("Tag", "tag attributes text") - - -def itertags(html, tag): - """ - Brute force regex based HTML tag parser. This is a rough-and-ready searcher to find HTML tags when - standards compliance is not required. Will find tags that are commented out, or inside script tag etc. - - :param html: HTML page - :param tag: tag name to find - :return: generator with Tags - """ - for match in tag_re.finditer(html): - if match.group("tag") == tag: - attrs = {a.group("key").lower(): a.group("value") for a in attr_re.finditer(match.group("attr"))} - yield Tag(match.group("tag"), attrs, match.group("inner"))
diff --git a/tests/test_plugin_utils.py b/tests/test_plugin_utils.py deleted file mode 100644 --- a/tests/test_plugin_utils.py +++ /dev/null @@ -1,97 +0,0 @@ -import sys -import unittest - -from streamlink.plugin.api.utils import itertags - - -def unsupported_versions_1979(): - """Unsupported python versions for itertags - 3.7.0 - 3.7.2 and 3.8.0a1 - - https://github.com/streamlink/streamlink/issues/1979 - - https://bugs.python.org/issue34294 - """ - v = sys.version_info - return (v.major == 3) and ( - # 3.7.0 - 3.7.2 - (v.minor == 7 and v.micro <= 2) - # 3.8.0a1 - or (v.minor == 8 and v.micro == 0 and v.releaselevel == 'alpha' and v.serial <= 1) - ) - - -class TestPluginUtil(unittest.TestCase): - test_html = """ -<!doctype html> -<html lang="en" class="no-js"> -<title>Title</title> -<meta property="og:type" content= "website" /> -<meta property="og:url" content="http://test.se/"/> -<meta property="og:site_name" content="Test" /> -<script src="https://test.se/test.js"></script> -<link rel="stylesheet" type="text/css" href="https://test.se/test.css"> -<script>Tester.ready(function () { -alert("Hello, world!"); });</script> -<p> -<a -href="http://test.se/foo">bar</a> -</p> -</html> - """ # noqa: W291 - - def test_itertags_single_text(self): - title = list(itertags(self.test_html, "title")) - self.assertTrue(len(title), 1) - self.assertEqual(title[0].tag, "title") - self.assertEqual(title[0].text, "Title") - self.assertEqual(title[0].attributes, {}) - - def test_itertags_attrs_text(self): - script = list(itertags(self.test_html, "script")) - self.assertTrue(len(script), 2) - self.assertEqual(script[0].tag, "script") - self.assertEqual(script[0].text, "") - self.assertEqual(script[0].attributes, {"src": "https://test.se/test.js"}) - - self.assertEqual(script[1].tag, "script") - self.assertEqual(script[1].text.strip(), """Tester.ready(function () {\nalert("Hello, world!"); });""") - self.assertEqual(script[1].attributes, {}) - - @unittest.skipIf(unsupported_versions_1979(), - "python3.7 issue, see bpo-34294") - def test_itertags_multi_attrs(self): - metas = list(itertags(self.test_html, "meta")) - self.assertTrue(len(metas), 3) - self.assertTrue(all(meta.tag == "meta" for meta in metas)) - - self.assertEqual(metas[0].text, None) - self.assertEqual(metas[1].text, None) - self.assertEqual(metas[2].text, None) - - self.assertEqual(metas[0].attributes, {"property": "og:type", "content": "website"}) - self.assertEqual(metas[1].attributes, {"property": "og:url", "content": "http://test.se/"}) - self.assertEqual(metas[2].attributes, {"property": "og:site_name", "content": "Test"}) - - def test_multi_line_a(self): - anchor = list(itertags(self.test_html, "a")) - self.assertTrue(len(anchor), 1) - self.assertEqual(anchor[0].tag, "a") - self.assertEqual(anchor[0].text, "bar") - self.assertEqual(anchor[0].attributes, {"href": "http://test.se/foo"}) - - @unittest.skipIf(unsupported_versions_1979(), - "python3.7 issue, see bpo-34294") - def test_no_end_tag(self): - links = list(itertags(self.test_html, "link")) - self.assertTrue(len(links), 1) - self.assertEqual(links[0].tag, "link") - self.assertEqual(links[0].text, None) - self.assertEqual(links[0].attributes, {"rel": "stylesheet", - "type": "text/css", - "href": "https://test.se/test.css"}) - - def test_tag_inner_tag(self): - links = list(itertags(self.test_html, "p")) - self.assertTrue(len(links), 1) - self.assertEqual(links[0].tag, "p") - self.assertEqual(links[0].text.strip(), '<a \nhref="http://test.se/foo">bar</a>') - 
self.assertEqual(links[0].attributes, {})
Remove `streamlink.plugin.api.utils.itertags` [`streamlink.plugin.api.utils.itertags`](https://github.com/streamlink/streamlink/blob/3.2.0/src/streamlink/plugin/api/utils.py#L16-L28) has become obsolete ever since `lxml` was added as a dependency to Streamlink for parsing HTML. `itertags` is a hacky implementation via regexes, which is not only slow, but it's also impossible to correctly parse HTML nodes with regular expressions, so it shouldn't be used when better and much faster solutions are available. It also always requires unescaping tag values, which is annoying. We've already updated and replaced lots of plugins which were previously using it, but there are still some left: ``` $ GIT_PAGER=cat git grep -F 'from streamlink.plugin.api.utils import' a1ce471f a1ce471f:src/streamlink/plugins/cdnbg.py:from streamlink.plugin.api.utils import itertags a1ce471f:src/streamlink/plugins/facebook.py:from streamlink.plugin.api.utils import itertags a1ce471f:src/streamlink/plugins/funimationnow.py:from streamlink.plugin.api.utils import itertags a1ce471f:src/streamlink/plugins/senategov.py:from streamlink.plugin.api.utils import itertags a1ce471f:src/streamlink/plugins/vrtbe.py:from streamlink.plugin.api.utils import itertags a1ce471f:tests/test_plugin_utils.py:from streamlink.plugin.api.utils import itertags ``` - [x] cdnbg - [x] facebook - [x] funimationnow - [x] senategov - [x] vrtbe Once every last plugin has been updated, the entire `streamlink.plugin.api.utils` module can be removed, as it only contains the `itertags` function and some other useless export aliases which are not even used anymore in Streamlink's codebase. If we care about plugin-API stability (something which has never been discussed), removing this would be considered a breaking change. Since we've just dropped py36, that's something which could be included in the 4.0.0 release.
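For comparison, a minimal sketch of the lxml-based parsing the plugins were migrated to (plain `lxml.html` here; the plugins themselves go through `validate.parse_html()` and the XPath validation helpers). The sample document is made up:

```python
from lxml import html

doc = html.fromstring("""
<html>
  <head><meta property="og:url" content="http://test.se/"></head>
  <body><a href="http://test.se/foo">bar</a></body>
</html>
""")

# XPath queries replace the brute-force regex scan that itertags performed
print(doc.xpath("string(.//meta[@property='og:url']/@content)"))  # http://test.se/
print(doc.xpath("string(.//a/@href)"))                            # http://test.se/foo
print(doc.xpath("string(.//a)"))                                  # bar
```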
Regarding facebook, as I already posted on Gitter/Matrix, I have a dev branch with a plugin rewrite here: https://github.com/streamlink/streamlink/compare/master...bastimeyer:plugins/facebook/rewrite I lost interest though, because I don't have a FB account, and all new ones I've created got blocked pretty much immediately. That's why my rewrite is incomplete and basically just a translation and improvement of the old logic, which means that the changes are most likely not working. ---- Regarding VRTbe, this site uses DRM for both HLS and DASH streams. The last plugin update was in June 2018 via e70053511331db579e4adde085ebf049d34e4ecf, and the commit message says that it only applies to high quality streams. I can't confirm this, for both live and vod streams. And from what I can see, [the current plugin implementation](https://github.com/streamlink/streamlink/blob/3.2.0/src/streamlink/plugins/vrtbe.py#L44) is not working (no HTMLElement with a `vrtvideo` class). The livestream page also lists all live streams at once, which makes it pretty much useless. - https://www.vrt.be/vrtnu/livestream/ - https://www.vrt.be/vrtnu/kanalen/een/ ``` #EXT-X-SESSION-KEY:METHOD=SAMPLE-AES,URI="skd://...",KEYFORMAT="com.apple.streamingkeydelivery",KEYFORMATVERSIONS="1" ``` ```xml <!-- Common Encryption --> <ContentProtection xmlns="urn:mpeg:dash:schema:mpd:2011" xmlns:cenc="urn:mpeg:cenc:2013" schemeIdUri="urn:mpeg:dash:mp4protection:2011" value="cenc" cenc:default_KID="F0A275F8-F9A1-EDD5-546C-4ABB1DB127AA"> </ContentProtection> <!-- PlayReady --> <ContentProtection xmlns="urn:mpeg:dash:schema:mpd:2011" schemeIdUri="urn:uuid:9A04F079-9840-4286-AB92-E65BE0885F95" value="MSPR 2.0"> </ContentProtection> <!-- Widevine --> <ContentProtection xmlns="urn:mpeg:dash:schema:mpd:2011" schemeIdUri="urn:uuid:EDEF8BA9-79D6-4ACE-A3C8-27DCD51D21ED"> </ContentProtection> <!-- Marlin --> <ContentProtection xmlns="urn:mpeg:dash:schema:mpd:2011" schemeIdUri="urn:uuid:5E629AF5-38DA-4063-8977-97FFBD9902D4"> <MarlinContentIds xmlns="urn:marlin:mas:1-0:services:schemas:mpd"> <MarlinContentId>urn:marlin:kid:f0a275f8f9a1edd5546c4abb1db127aa</MarlinContentId> </MarlinContentIds> </ContentProtection> ``` For VRTbe I've merged the PR removing it. For FB there's nothing I can do either here as I don't use FB nor do I have an account. It's been broken for quite some time and the number of issues we get related to FB not working is relatively low so I'd be fine with removing it. If someone who regularly uses FB is willing to develop and maintain that plugin we can reconsider adding it in the future.
2022-04-18T11:16:29
streamlink/streamlink
4,471
streamlink__streamlink-4471
[ "4436" ]
e4490ad3519ff53ea53b3c8fc4356cf519dc5d1f
diff --git a/src/streamlink/plugins/trovo.py b/src/streamlink/plugins/trovo.py new file mode 100644 --- /dev/null +++ b/src/streamlink/plugins/trovo.py @@ -0,0 +1,181 @@ +""" +$description Global video game live streaming and video hosting platform, owned by Tencent. +$url trovo.live +$type live, vod +""" + +import logging +import random +import re + +from streamlink.plugin import Plugin, pluginmatcher +from streamlink.plugin.api import validate +from streamlink.stream.hls import HLSStream +from streamlink.utils.url import update_scheme + +log = logging.getLogger(__name__) + + +@pluginmatcher(re.compile(r""" + https?://(?:www\.)?trovo\.live/ + (?: + (?: + (?:clip|video)/(?P<video_id>[^/?&]+) + ) + | + (?P<user>[^/?&]+) + ) +""", re.VERBOSE)) +class Trovo(Plugin): + @staticmethod + def generate_qid(): + return f"{random.getrandbits(40):010x}".upper() + + def get_vod(self, video_id): + json = self.session.http.post( + f"https://gql.trovo.live/?qid={self.generate_qid()}", + json=[{ + "operationName": "batchGetVodDetailInfo", + "variables": { + "params": { + "vids": [video_id], + }, + }, + "extensions": { + "persistedQuery": { + "version": 1, + "sha256Hash": "ceae0355d66476e21a1dd8e8af9f68de95b4019da2cda8b177c9a2255dad31d0", + }, + }, + }], + schema=validate.Schema( + validate.parse_json(), + [{ + "data": { + "batchGetVodDetailInfo": { + "VodDetailInfos": validate.any( + { + video_id: { + "streamerInfo": { + "userName": str, + }, + "vodInfo": { + "playInfos": [{ + "desc": validate.all(validate.transform(lambda s: s.lower()), str), + "playUrl": validate.url(), + }], + "vid": str, + "title": str, + "categoryName": str, + "playbackRights": { + "playbackRightsSetting": str, + "playbackRights": str, + }, + }, + }, + }, + {}, + ), + }, + }, + }], + validate.get((0, "data", "batchGetVodDetailInfo", "VodDetailInfos", video_id)), + ), + ) + + if not json: + log.error("Video not found") + return + + log.debug(json["vodInfo"]["playbackRights"]) + self.id = json["vodInfo"]["vid"] + self.author = json["streamerInfo"]["userName"] + self.title = json["vodInfo"]["title"] + self.category = json["vodInfo"]["categoryName"] + + for s in json["vodInfo"]["playInfos"]: + q = s["desc"] + if "(source)" in q: + q = f"source_{q.replace('(source)', '')}" + yield q, HLSStream(self.session, update_scheme("https:", s["playUrl"])) + + def get_live(self, user): + json = self.session.http.post( + f"https://api-web.trovo.live/graphql?qid={self.generate_qid()}", + json=[{ + "operationName": "live_LiveReaderService_GetLiveInfo", + "variables": { + "params": { + "userName": user, + }, + }, + }], + schema=validate.Schema( + validate.parse_json(), + validate.any( + [{ + "data": { + validate.optional("live_LiveReaderService_GetLiveInfo"): { + "streamerInfo": { + "userName": str, + }, + "programInfo": { + "id": str, + "title": str, + "streamInfo": [{ + "desc": validate.all(validate.transform(lambda s: s.lower()), str), + "playUrl": validate.transform(lambda s: s.replace(".flv?", ".m3u8?")), + }], + }, + "categoryInfo": { + "shortName": str, + }, + "isLive": int, + }, + }, + }], + [{ + "errors": [{ + "message": validate.transform(lambda s: s.replace('\\"', '"')), + }], + }], + ), + validate.get(0), + ), + ) + + if "errors" in json: + errors = [e["message"] for e in json["errors"]] + log.error(f"API error(s): {', '.join(errors)}") + return + + if json["data"]["live_LiveReaderService_GetLiveInfo"]: + json = json["data"]["live_LiveReaderService_GetLiveInfo"] + else: + log.error("Stream data not found") + return + + if not json["isLive"]: 
+ log.error("This stream is no longer live") + return + + self.id = json["programInfo"]["id"] + self.author = json["streamerInfo"]["userName"] + self.title = json["programInfo"]["title"] + self.category = json["categoryInfo"]["shortName"] + + for s in json["programInfo"]["streamInfo"]: + if s["playUrl"]: + yield s["desc"], HLSStream(self.session, update_scheme("https:", s["playUrl"])) + + def _get_streams(self): + self.session.http.headers.update({"Origin": "https://trovo.live"}) + url_data = self.match.groupdict() + + if url_data["video_id"]: + return self.get_vod(url_data["video_id"]) + elif url_data["user"]: + return self.get_live(url_data["user"]) + + +__plugin__ = Trovo
diff --git a/tests/plugins/test_trovo.py b/tests/plugins/test_trovo.py new file mode 100644 --- /dev/null +++ b/tests/plugins/test_trovo.py @@ -0,0 +1,20 @@ +from streamlink.plugins.trovo import Trovo +from tests.plugins import PluginCanHandleUrl + + +class TestPluginCanHandleUrlTrovo(PluginCanHandleUrl): + __plugin__ = Trovo + + should_match_groups = [ + ("https://trovo.live/UserName", {"user": "UserName"}), + ("https://trovo.live/clip/clip_123", {"video_id": "clip_123"}), + ("https://trovo.live/video/video_456", {"video_id": "video_456"}), + ("https://www.trovo.live/UserName", {"user": "UserName"}), + ("https://www.trovo.live/clip/clip_123", {"video_id": "clip_123"}), + ("https://www.trovo.live/video/video_456", {"video_id": "video_456"}), + ] + + should_not_match = [ + "https://trovo.live/", + "https://www.trovo.live/", + ]
trovo.live ### Checklist - [X] This is a plugin request and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin requests](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+request%22) ### Description Trovo (https://trovo.live/) is a streaming media platform launched by Tencent. There have been plugin requests for it before (#3069, #3076), but they were not considered because Trovo was still in its beta stage at the time. Trovo has since removed the beta logo and ended the beta stage, so I would like to re-apply for plugin support for this platform. ### Input URLs 1. https://trovo.live/BloodyTV1 2. https://trovo.live/Sibuyas 3. https://trovo.live/SpaK
It looks like it's possible to get HLS from the site, even though they are using either streamed or chunked FLV - I've not quite worked out which (but I think it's the latter), the whole site is a bit weird compared to the usual... Viewer numbers are a mixed bag - not sure what's considered worthwhile. Looking at https://trovo.live/trending/streamers might be useful in that regard. > It looks like it's possible to get HLS from the site, even though they are using either streamed or chunked FLV - I've not quite worked out which (but I think it's the latter), the whole site is a bit weird compared to the usual... > > Viewer numbers are a mixed bag - not sure what's considered worthwhile. Looking at https://trovo.live/trending/streamers might be useful in that regard. it may be chunked FLV. It is said that it will sponsor some game events to promote its own site. You can view the distribution of specific sections and the number of viewers: https://trovo.live/trending
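A condensed sketch of the live-stream lookup from the accompanying patch: one GraphQL POST for the live info, then a `.flv` → `.m3u8` swap on the returned play URLs. Error handling and schema validation are omitted, and the endpoint details are simply those used in the patch:

```python
import random

import requests

user = "BloodyTV1"  # one of the example channels from this request
qid = f"{random.getrandbits(40):010x}".upper()

res = requests.post(
    f"https://api-web.trovo.live/graphql?qid={qid}",
    headers={"Origin": "https://trovo.live"},
    json=[{
        "operationName": "live_LiveReaderService_GetLiveInfo",
        "variables": {"params": {"userName": user}},
    }],
)
info = res.json()[0]["data"]["live_LiveReaderService_GetLiveInfo"]

# The API returns FLV play URLs; the plugin rewrites them to their HLS equivalents
for stream in info["programInfo"]["streamInfo"]:
    if stream["playUrl"]:
        print(stream["desc"], stream["playUrl"].replace(".flv?", ".m3u8?"))
```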
2022-04-19T01:49:09
streamlink/streamlink
4,473
streamlink__streamlink-4473
[ "4472" ]
67780648d12355313954ad4fd3d0c03f38bcd6d2
diff --git a/src/streamlink/plugins/showroom.py b/src/streamlink/plugins/showroom.py --- a/src/streamlink/plugins/showroom.py +++ b/src/streamlink/plugins/showroom.py @@ -1,5 +1,5 @@ """ -$description Japanese live streaming service used primarily by Japanese idols & voice actors and their fans. +$description Japanese live-streaming service used primarily by Japanese idols & voice actors and their fans. $url showroom-live.com $type live """ @@ -9,53 +9,64 @@ from streamlink.plugin import Plugin, pluginmatcher from streamlink.plugin.api import validate -from streamlink.stream.hls import HLSStream, HLSStreamReader, HLSStreamWorker +from streamlink.stream.hls import HLSStream log = logging.getLogger(__name__) -class ShowroomHLSStreamWorker(HLSStreamWorker): - def _playlist_reload_time(self, playlist, sequences): - return 1.5 - - -class ShowroomHLSStreamReader(HLSStreamReader): - __worker__ = ShowroomHLSStreamWorker - - -class ShowroomHLSStream(HLSStream): - __reader__ = ShowroomHLSStreamReader - - @pluginmatcher(re.compile( r"https?://(?:\w+\.)?showroom-live\.com/" )) class Showroom(Plugin): + LIVE_STATUS = 2 + + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + self.session.set_option("hls-playlist-reload-time", "segment") + def _get_streams(self): - data = self.session.http.get( + re_room_id = re.compile(r"share_url:\"https:[^?]+?\?room_id=(?P<room_id>\d+)\"") + room_id = self.session.http.get( self.url, schema=validate.Schema( validate.parse_html(), - validate.xml_xpath_string(".//script[@id='js-live-data'][@data-json]/@data-json"), + validate.xml_xpath_string(".//script[contains(text(),'share_url:\"https:')][1]/text()"), validate.any(None, validate.all( - validate.parse_json(), - {"is_live": int, - "room_id": int, - validate.optional("room"): {"content_region_permission": int, "is_free": int}}, + validate.transform(re_room_id.search), + validate.any(None, validate.get("room_id")) )) ) ) - if not data: # URL without livestream + if not room_id: return - log.debug(f"{data!r}") - if data["is_live"] != 1: + live_status, self.title = self.session.http.get( + "https://www.showroom-live.com/api/live/live_info", + params={ + "room_id": room_id + }, + schema=validate.Schema( + validate.parse_json(), + { + "live_status": int, + "room_name": str, + }, + validate.union_get( + "live_status", + "room_name", + ) + ) + ) + if live_status != self.LIVE_STATUS: log.info("This stream is currently offline") return url = self.session.http.get( "https://www.showroom-live.com/api/live/streaming_url", - params={"room_id": data["room_id"], "abr_available": 1}, + params={ + "room_id": room_id, + "abr_available": 1, + }, schema=validate.Schema( validate.parse_json(), {"streaming_url_list": [{ @@ -67,11 +78,13 @@ def _get_streams(self): validate.get((0, "url")) ), ) + res = self.session.http.get(url, acceptable_status=(200, 403, 404)) if res.headers["Content-Type"] != "application/x-mpegURL": log.error("This stream is restricted") return - return ShowroomHLSStream.parse_variant_playlist(self.session, url) + + return HLSStream.parse_variant_playlist(self.session, url) __plugin__ = Showroom
plugins.showroom: The rules of room URLs changed recently and Streamlink cannot catch the stream ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest build from the master branch ### Description The (official certified) individual room URL of SHOWROOM was like: https://www.showroom-live.com/ROOM_NAME But recently (maybe just today) it has been changed to: https://www.showroom-live.com/r/ROOM_NAME Streamlink worked well on this site (until April 18th) before the URL rules changed, but now it cannot catch the stream using either of the URLs above, only showing: ![image](https://user-images.githubusercontent.com/48208459/163987392-1eeef378-1512-4bca-a1fb-af9c1bdb0376.png) ![image](https://user-images.githubusercontent.com/48208459/163987776-b310005f-72ca-4303-8985-9e8da01ad195.png) While at the same time, this room is indeed on streaming: ![image](https://user-images.githubusercontent.com/48208459/163987622-9fbeb464-7257-4563-a62e-36bec0928b76.png) ### Debug log ```text [cli][debug] OS: Windows 10 [cli][debug] Python: 3.9.10 [cli][debug] Streamlink: 3.2.0 [cli][debug] Requests(2.27.1), Socks(1.7.1), Websocket(1.2.3) [cli][debug] Arguments: [cli][debug] url=https://www.showroom-live.com/r/46_endosakura [cli][debug] stream=['best'] [cli][debug] --loglevel=debug [cli][debug] --locale=ja_JP [cli][debug] --player=C:\Program Files (x86)\Pure Codec\x64\PotPlayerMini64.exe [cli][debug] --player-no-close=True [cli][debug] --retry-streams=3.0 [cli][debug] --stream-segment-threads=10 [cli][debug] --stream-timeout=600.0 [cli][debug] --hls-live-restart=True [cli][debug] --hls-segment-threads=10 [cli][debug] --ffmpeg-ffmpeg=ffmpeg.exe [cli][debug] --http-proxy=http://127.0.0.1:7890 [cli][debug] --http-header=[('referer', '')] [cli][debug] --twitch-disable-ads=True [cli][info] Found matching plugin showroom for URL https://www.showroom-live.com/r/46_endosakura [cli][info] Waiting for streams, retrying every 3.0 second(s) ```
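The patch above addresses this by pulling the numeric `room_id` out of the `share_url` embedded in the page scripts and then checking the live status via the `live_info` API. A condensed sketch of those two steps, with the regex, endpoints, and `live_status == 2` check taken from the patch (error handling omitted):

```python
import re

import requests

page = requests.get("https://www.showroom-live.com/r/46_endosakura").text

# The share_url in the page scripts carries the numeric room_id
match = re.search(r'share_url:"https:[^?]+?\?room_id=(?P<room_id>\d+)"', page)
room_id = match.group("room_id")

info = requests.get(
    "https://www.showroom-live.com/api/live/live_info",
    params={"room_id": room_id},
).json()

# live_status == 2 means the room is currently live (LIVE_STATUS in the patch)
print(info["room_name"], "live" if info["live_status"] == 2 else "offline")
```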
2022-04-19T11:43:43
streamlink/streamlink
4,507
streamlink__streamlink-4507
[ "4501" ]
867b9b3b66aab57c0fcb3ab117a275f29a23b71a
diff --git a/src/streamlink/plugins/hiplayer.py b/src/streamlink/plugins/hiplayer.py new file mode 100644 --- /dev/null +++ b/src/streamlink/plugins/hiplayer.py @@ -0,0 +1,102 @@ +""" +$description United Arab Emirates CDN hosting live content for various websites in The Middle East. +$url cnbcarabia.com +$url media.gov.kw +$url rotana.net +$type live +$region various +""" + +import base64 +import logging +import re + +from streamlink.plugin import Plugin, pluginmatcher +from streamlink.plugin.api import validate +from streamlink.stream.hls import HLSStream + +log = logging.getLogger(__name__) + + +@pluginmatcher(re.compile(r""" + https?://(?:www\.)? + ( + cnbcarabia\.com + | + media\.gov\.kw + | + rotana\.net + ) +""", re.VERBOSE)) +class HiPlayer(Plugin): + DAI_URL = "https://pubads.g.doubleclick.net/ssai/event/{0}/streams" + js_url_re = re.compile(r"""['"](https://hiplayer.hibridcdn.net/l/[^'"]+)['"]""") + base64_data_re = re.compile(r"i\s*=\s*\[(.*)\]\.join") + + def _get_streams(self): + js_url = self.session.http.get( + self.url, + schema=validate.Schema( + validate.parse_html(), + validate.xml_xpath_string(".//script[contains(text(), 'https://hiplayer.hibridcdn.net/l/')]/text()"), + validate.any( + None, + validate.all( + validate.transform(self.js_url_re.search), + validate.any(None, validate.all(validate.get(1), validate.url())), + ), + ), + ), + ) + + if not js_url: + return + + log.debug(f"JS URL={js_url}") + + data = self.session.http.get( + js_url, + schema=validate.Schema( + validate.transform(self.base64_data_re.search), + validate.any( + None, + validate.all( + validate.get(1), + validate.transform(lambda s: re.sub(r"['\", ]", "", s)), + validate.transform(lambda s: base64.b64decode(s)), + validate.parse_json(), + validate.any( + None, + { + "daiEnabled": bool, + "daiAssetKey": str, + "daiApiKey": str, + "streamUrl": validate.any(validate.url(), ""), + }, + ), + ), + ), + ), + ) + + hls_url = data["streamUrl"] + + if data["daiEnabled"]: + log.debug("daiEnabled=true") + hls_url = self.session.http.post( + self.DAI_URL.format(data['daiAssetKey']), + data={"api-key": data["daiApiKey"]}, + schema=validate.Schema( + validate.parse_json(), + { + "stream_manifest": validate.url(), + }, + validate.get("stream_manifest"), + ), + ) + + if hls_url: + return HLSStream.parse_variant_playlist(self.session, hls_url) + + +__plugin__ = HiPlayer
diff --git a/tests/plugins/test_hiplayer.py b/tests/plugins/test_hiplayer.py new file mode 100644 --- /dev/null +++ b/tests/plugins/test_hiplayer.py @@ -0,0 +1,15 @@ +from streamlink.plugins.hiplayer import HiPlayer +from tests.plugins import PluginCanHandleUrl + + +class TestPluginCanHandleUrlHiPlayer(PluginCanHandleUrl): + __plugin__ = HiPlayer + + should_match = [ + "https://www.cnbcarabia.com/any", + "https://www.cnbcarabia.com/any/path", + "https://www.media.gov.kw/any", + "https://www.media.gov.kw/any/path", + "https://rotana.net/any", + "https://rotana.net/any/path", + ]
plugins.rotana: Plugin is broken ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description The plugin no longer extracts the m3u8 ### Debug log ```text ~$ streamlink https://rotana.net/live-clip best --loglevel debug [cli][debug] OS: macOS 12.3.0 [cli][debug] Python: 3.10.2 [cli][debug] Streamlink: 3.2.0 [cli][debug] Requests(2.27.1), Socks(1.7.1), Websocket(1.3.1) [cli][debug] Arguments: [cli][debug] url=https://rotana.net/live-clip [cli][debug] stream=['best'] [cli][debug] --loglevel=debug [cli][info] Found matching plugin rotana for URL https://rotana.net/live-clip [plugins.rotana][debug] No video URL found. error: No playable streams found on this URL: https://rotana.net/live-clip ~$ ```
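The accompanying patch fixes this by following the hiplayer.hibridcdn.net player script referenced by the page and decoding the base64 JSON config embedded in it. A condensed sketch of that decoding step, with the regexes taken from the patch; the script path below is a placeholder and would normally be scraped from the channel page first:

```python
import base64
import json
import re

import requests

js_url = "https://hiplayer.hibridcdn.net/l/some-channel"  # placeholder player-script URL
js = requests.get(js_url).text

# The script assembles its config as: i = ['<b64 chunk>', '<b64 chunk>', ...].join(...)
chunks = re.search(r"i\s*=\s*\[(.*)\]\.join", js).group(1)
encoded = re.sub(r"['\", ]", "", chunks)        # strip quotes, commas and spaces
config = json.loads(base64.b64decode(encoded))  # contains daiEnabled, daiAssetKey, streamUrl, ...

print(config.get("daiEnabled"), config.get("streamUrl"))
```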
2022-05-01T22:31:37
streamlink/streamlink
4,510
streamlink__streamlink-4510
[ "4453" ]
bcd624c1753404e7119837a4e050f785485818d3
diff --git a/src/streamlink/plugins/crunchyroll.py b/src/streamlink/plugins/crunchyroll.py --- a/src/streamlink/plugins/crunchyroll.py +++ b/src/streamlink/plugins/crunchyroll.py @@ -96,16 +96,14 @@ def __init__(self, msg, code): class CrunchyrollAPI: _api_url = "https://api.crunchyroll.com/{0}.0.json" _default_locale = "en_US" - _user_agent = "Dalvik/1.6.0 (Linux; U; Android 4.4.2; Android SDK built for x86 Build/KK)" - _version_code = 444 - _version_name = "2.1.10" - _access_token = "WveH9VkPLrXvuNm" - _access_type = "com.crunchyroll.crunchyroid" + _version_name = "1.3.1.0" + _access_token = "LNDJgOit5yaRIWN" + _access_type = "com.crunchyroll.windows.desktop" def __init__(self, cache, session, session_id=None, locale=_default_locale): """Abstract the API to access to Crunchyroll data. - Can take saved credentials to use on it's calls to the API. + Can take saved credentials to use on its calls to the API. """ self.cache = cache self.session = session @@ -116,17 +114,6 @@ def __init__(self, cache, session, session_id=None, locale=_default_locale): self.auth = cache.get("auth") self.device_id = cache.get("device_id") or self.generate_device_id() self.locale = locale - self.headers = { - "X-Android-Device-Is-GoogleTV": "0", - "X-Android-Device-Product": "google_sdk_x86", - "X-Android-Device-Model": "Android SDK built for x86", - "Using-Brightcove-Player": "1", - "X-Android-Release": "4.4.2", - "X-Android-SDK": "19", - "X-Android-Application-Version-Name": self._version_name, - "X-Android-Application-Version-Code": str(self._version_code), - 'User-Agent': self._user_agent - } def _api_call(self, entrypoint, params=None, schema=None): """Makes a call against the api. @@ -148,17 +135,17 @@ def _api_call(self, entrypoint, params=None, schema=None): "device_id": self.device_id, "device_type": self._access_type, "access_token": self._access_token, - "version": self._version_code }) params.update({ - "locale": self.locale.replace('_', ''), + "locale": self.locale.replace("_", ""), + "version": self._version_name, + "connectivity_type": "ethernet", }) if self.session_id: params["session_id"] = self.session_id - # The certificate used by Crunchyroll cannot be verified in some environments. - res = self.session.http.post(url, data=params, headers=self.headers, verify=False) + res = self.session.http.post(url, data=params) json_res = self.session.http.json(res, schema=_api_schema) if json_res["error"]: @@ -175,7 +162,7 @@ def _api_call(self, entrypoint, params=None, schema=None): def generate_device_id(self): device_id = str(uuid4()) # cache the device id - self.cache.set("device_id", 365 * 24 * 60 * 60) + self.cache.set("device_id", device_id, expires=365 * 24 * 60 * 60) log.debug("Device ID: {0}".format(device_id)) return device_id @@ -212,7 +199,7 @@ def authenticate(self): data = self._api_call("authenticate", {"auth": self.auth}, schema=_login_schema) except CrunchyrollAPIError: self.auth = None - self.cache.set("auth", None, expires_at=0) + self.cache.set("auth", None, expires=0) log.warning("Saved credentials have expired") return @@ -248,10 +235,17 @@ def get_info(self, media_id, fields=None, schema=None): (?: /(en-gb|es|es-es|pt-pt|pt-br|fr|de|ar|it|ru) )? - (?:/[^/&?]+)? - /[^/&?]+-(?P<media_id>\d+) + (?: + (?: + (?:/[^/&?]+)? 
+ /[^/&?]+-(?P<media_id>\d+) + ) + | + /watch/(?P<beta_id>\w+)/[\w-]+ + ) """, re.VERBOSE)) class Crunchyroll(Plugin): + arguments = PluginArguments( PluginArgument( "username", @@ -305,9 +299,34 @@ def stream_weight(cls, key): return Plugin.stream_weight(key) def _get_streams(self): - api = self._create_api() - media_id = int(self.match.group("media_id")) + beta_json_re = re.compile(r"window.__INITIAL_STATE__\s*=\s*({.*});") + + beta_id = self.match.group("beta_id") + if beta_id: + json = self.session.http.get(self.url, schema=validate.Schema( + validate.parse_html(), + validate.xml_xpath_string(".//script[contains(text(), 'window.__INITIAL_STATE__')]/text()"), + validate.any(None, validate.all( + validate.transform(beta_json_re.search), + validate.any(None, validate.all( + validate.get(1), + validate.parse_json(), + validate.any(None, validate.all( + {"content": {"byId": {str: {"external_id": validate.all( + validate.transform(lambda s: int(s.replace("EPI.", ""))), + )}}}}, + validate.get(("content", "byId")), + )), + )), + )), + )) + if not json or beta_id not in json: + return + media_id = json[beta_id]["external_id"] + else: + media_id = int(self.match.group("media_id")) + api = self._create_api() try: # the media.stream_data field is required, no stream data is returned otherwise info = api.get_info(media_id, fields=["media.name", "media.series_name", @@ -320,6 +339,7 @@ def _get_streams(self): streams = {} + self.id = media_id self.title = info.get("name") self.author = info.get("series_name") self.category = info.get("media_type") @@ -352,14 +372,13 @@ def _get_streams(self): return streams def _create_api(self): - """Creates a new CrunchyrollAPI object, initiates it's session and + """Creates a new CrunchyrollAPI object, initiates its session and tries to authenticate it either by using saved credentials or the user's username and password. """ if self.options.get("purge_credentials"): - self.cache.set("session_id", None, 0) - self.cache.set("auth", None, 0) - self.cache.set("session_id", None, 0) + self.cache.set("device_id", None, expires=0) + self.cache.set("auth", None, expires=0) # use the crunchyroll locale as an override, for backwards compatibility locale = self.get_option("locale") or self.session.localization.language_code
diff --git a/tests/plugins/test_crunchyroll.py b/tests/plugins/test_crunchyroll.py --- a/tests/plugins/test_crunchyroll.py +++ b/tests/plugins/test_crunchyroll.py @@ -5,18 +5,25 @@ class TestPluginCanHandleUrlCrunchyroll(PluginCanHandleUrl): __plugin__ = Crunchyroll - should_match = [ - "http://www.crunchyroll.com/idol-incidents/episode-1-why-become-a-dietwoman-728233", - "http://www.crunchyroll.com/ru/idol-incidents/episode-1-why-become-a-dietwoman-728233", - "http://www.crunchyroll.com/idol-incidents/media-728233", - "http://www.crunchyroll.com/fr/idol-incidents/media-728233", - "http://www.crunchyroll.com/media-728233", - "http://www.crunchyroll.com/de/media-728233", - "http://www.crunchyroll.fr/media-728233", - "http://www.crunchyroll.fr/es/media-728233" + should_match_groups = [ + ("http://www.crunchyroll.com/idol-incidents/episode-1-a-title-728233", {"media_id": "728233"}), + ("http://www.crunchyroll.com/ru/idol-incidents/episode-1-a-title-728233", {"media_id": "728233"}), + ("http://www.crunchyroll.com/idol-incidents/media-728233", {"media_id": "728233"}), + ("http://www.crunchyroll.com/fr/idol-incidents/media-728233", {"media_id": "728233"}), + ("http://www.crunchyroll.com/media-728233", {"media_id": "728233"}), + ("http://www.crunchyroll.com/de/media-728233", {"media_id": "728233"}), + ("http://www.crunchyroll.fr/media-728233", {"media_id": "728233"}), + ("http://www.crunchyroll.fr/es/media-728233", {"media_id": "728233"}), + ("https://beta.crunchyroll.com/watch/GRNQ5DDZR/Game-Over", {"beta_id": "GRNQ5DDZR"}), + ("https://beta.crunchyroll.com/watch/ValidID123/any/thing?x&y", {"beta_id": "ValidID123"}), ] should_not_match = [ "http://www.crunchyroll.com/gintama", "http://www.crunchyroll.es/gintama", + "http://beta.crunchyroll.com/", + "http://beta.crunchyroll.com/something", + "http://beta.crunchyroll.com/watch/", + "http://beta.crunchyroll.com/watch/not-a-valid-id", + "http://beta.crunchyroll.com/watch/not-a-valid-id/a-title", ]
plugins.crunchyroll: Unauthenticated request error, login system may have changed ? ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description The Crunchyroll plugin worked well last Sunday. Now I get an Unauthenticated request message, even though my username/password haven't changed. I provide the log for an episode I know for a fact played well this weekend. My best guess would be a change in the way Crunchyroll deals with logins, as I can now activate the CR app on my TV just by writing an activation code on my web browser session instead of writing the whole password, which was not an option weeks ago. ### Debug log ```text C:\Users\Stella>streamlink --loglevel debug crunchyroll.com/fr/spy-x-family/episode-1-operation-strix-842454 [cli][debug] OS: Windows 10 [cli][debug] Python: 3.9.10 [cli][debug] Streamlink: 3.2.0 [cli][debug] Requests(2.27.1), Socks(1.7.1), Websocket(1.2.3) [cli][debug] Arguments: [cli][debug] url=crunchyroll.com/fr/spy-x-family/episode-1-operation-strix-842454 [cli][debug] --loglevel=debug [cli][debug] --locale=en_US [cli][debug] --player="C:\Users\Stella\Documents\MPV\mpv.exe" [cli][debug] --player-args=--keep-open=yes --cache=yes [cli][debug] --player-no-close=True [cli][debug] --title={author} - {category} - {title} [cli][debug] --default-stream=['best'] [cli][debug] --ringbuffer-size=134217728 [cli][debug] --stream-segment-threads=3 [cli][debug] --hls-segment-threads=3 [cli][debug] --ffmpeg-ffmpeg=C:\Program Files (x86)\Streamlink\ffmpeg\ffmpeg.exe [cli][debug] --crunchyroll-username=******** [cli][debug] --crunchyroll-password=******** [cli][info] Found matching plugin crunchyroll for URL crunchyroll.com/fr/spy-x-family/episode-1-operation-strix-842454 [utils.l10n][debug] Language code: en_US [plugins.crunchyroll][debug] Creating session with locale: en_US Traceback (most recent call last): File "runpy.py", line 197, in _run_module_as_main File "runpy.py", line 87, in _run_code File "C:\Program Files (x86)\Streamlink\bin\streamlink.exe\__main__.py", line 18, in <module> File "C:\Program Files (x86)\Streamlink\pkgs\streamlink_cli\main.py", line 1061, in main handle_url() File "C:\Program Files (x86)\Streamlink\pkgs\streamlink_cli\main.py", line 570, in handle_url streams = fetch_streams(plugin) File "C:\Program Files (x86)\Streamlink\pkgs\streamlink_cli\main.py", line 464, in fetch_streams return plugin.streams(stream_types=args.stream_types, File "C:\Program Files (x86)\Streamlink\pkgs\streamlink\plugin\plugin.py", line 336, in streams ostreams = self._get_streams() File "C:\Program Files (x86)\Streamlink\pkgs\streamlink\plugins\crunchyroll.py", line 307, in _get_streams api = self._create_api() File "C:\Program Files (x86)\Streamlink\pkgs\streamlink\plugins\crunchyroll.py", line 372, in _create_api api.start_session() File "C:\Program Files (x86)\Streamlink\pkgs\streamlink\plugins\crunchyroll.py", line 190, in start_session self.session_id = self._api_call("start_session", params, schema=_session_schema) File "C:\Program Files 
(x86)\Streamlink\pkgs\streamlink\plugins\crunchyroll.py", line 166, in _api_call raise CrunchyrollAPIError(err_msg, err_code) streamlink.plugins.crunchyroll.CrunchyrollAPIError: Unauthenticated request ```
Yes, same error here, it seems that Crunchyroll changed the access token again, tested on stable and nightly build Same error, would be great to know how streamlink devs get that access token... > would be great to know how streamlink devs get that access token 1. https://github.com/streamlink/streamlink/blame/e4490ad3519ff53ea53b3c8fc4356cf519dc5d1f/src/streamlink/plugins/crunchyroll.py#L102 2. https://github.com/streamlink/streamlink/commit/09f46a33b0b1918ce7e0ad0534788e0fd786caea 3. https://github.com/streamlink/streamlink/pull/2788 4. https://github.com/streamlink/streamlink/issues/2785#issuecomment-581081985 > > would be great to know how streamlink devs get that access token > > 1. https://github.com/streamlink/streamlink/blame/e4490ad3519ff53ea53b3c8fc4356cf519dc5d1f/src/streamlink/plugins/crunchyroll.py#L102 > > 2. [09f46a3](https://github.com/streamlink/streamlink/commit/09f46a33b0b1918ce7e0ad0534788e0fd786caea) > > 3. [Update Crunchyroll access token. Fixes streamlink/streamlink issue #2785. #2788](https://github.com/streamlink/streamlink/pull/2788) > > 4. [Can't authenticate Crunchyroll, CrunchyrollAPIError: Unauthenticated request #2785 (comment)](https://github.com/streamlink/streamlink/issues/2785#issuecomment-581081985) yeah, I've check all stuff related to crunchyroll plugin, but haven't found where can I get the new token when they change it yet. Seems that in June this plugin will be gone (at least login logic), since crunchyroll is migrating to the new API. The access token comes from the Android app. You'll need to either install the app on a phone and run it from there or perhaps it could be run via an Android emulator. You'll need to configure a proxy and use something like mitmproxy to spy on the network requests. Hopefully you will find a POST request with `access_token` in the payload data. It's possible that things have changed now. The plugin seems to be based on app version 2.1.0, but the current Crunchyroll app version appears to be 3.18.0 (I think). It's possible they have not just changed the token value, but the whole API. This failure could therefore be because they've turned the old API off. > The access token comes from the Android app. You'll need to either install the app on a phone and run it from there or perhaps it could be run via an Android emulator. You'll need to configure a proxy and use something like mitmproxy to spy on the network requests. Hopefully you will find a POST request with `access_token` in the payload data. > > It's possible that things have changed now. The plugin seems to be based on app version 2.1.0, but the current Crunchyroll app version appears to be 3.18.0 (I think). It's possible they have not just changed the token value, but the whole API. This failure could therefore be because they've turned the old API off. Yeah, I've installed several versions on Crunchyroll app on android emulator, I run HttpToolkit to intercept that traffic, but the token seems encrypted `fgS4DSihTXaEj0HuWiaWlr:APA91bEjq73PQC327j0T1g5c41hzbo-bCW1Q3DCYzFepsczHDIBnmAMxNg1-eu56CLa5R6AINoL0TPavBtjB9ldm06uH_lI-m187ekOwehzMGxC6IvrHaaa5oVdwPtp1TF9ahWwRobYu` So this method wouldn't be a workaround this time. I installed the app on my phone and looked at the traffic. I saw a similarly long token, but that appears to be a bearer token, which is another thing. I don't have a Crunchyroll account anyway, so that doesn't help. My phone is 32 bit ARMv7, with Android 7.1. 
I tried using the last 2.x release I could find via APKPure (version 2.5.0), but it did not appear in the app menu, so I gave up with that one. It seems to me, having looked at both APKPure and APKMirror that the 3.x releases target Android 6.0+ while 2.5.0 targets Android 5.0+. If you can manage to get version 2.5.0 installed and running somewhere, it might yield the token you're looking for (if it works at all). I did manage to get version 2.3.0 from APK mirror installed and it ran, but then produced an error message. The network traffic did yield a token, but it was an old one, so no good. So, I'm not sure about this, but it could be that with an Android 5.1 based device you might get a 2.x version, which might have the token we're all looking for. Or it could be that the plugin needs a rewrite to do things the way they're done in the 3.x app releases. I'm not sure why the plugin is based on the Android app, perhaps the website uses DRM or did other stuff in JS that would have made the plugin implementation a lot harder... > I installed the app on my phone and looked at the traffic. I saw a similarly long token, but that appears to be a bearer token, which is another thing. I don't have a Crunchyroll account anyway, so that doesn't help. > > My phone is 32 bit ARMv7, with Android 7.1. I tried using the last 2.x release I could find via APKPure (version 2.5.0), but it did not appear in the app menu, so I gave up with that one. It seems to me, having looked at both APKPure and APKMirror that the 3.x releases target Android 6.0+ while 2.5.0 targets Android 5.0+. If you can manage to get version 2.5.0 installed and running somewhere, it might yield the token you're looking for (if it works at all). I did manage to get version 2.3.0 from APK mirror installed and it ran, but then produced an error message. The network traffic did yield a token, but it was an old one, so no good. > > So, I'm not sure about this, but it could be that with an Android 5.1 based device you might get a 2.x version, which might have the token we're all looking for. Or it could be that the plugin needs a rewrite to do things the way they're done in the 3.x app releases. I'm not sure why the plugin is based on the Android app, perhaps the website uses DRM or did other stuff in JS that would have made the plugin implementation a lot harder... Didn't know about the android versions, I'll try to run a 5.1 based emulator an see what happens. I had no luck , I tried v2.5.0 and v2.6.0 and both uses the old keys in the request, from v3.0.0 and newer releases points to the new API so can't get the token from this point. And the new login system seems to defer a lot. So a plugin rewrite is mandatory. That's disappointing to hear, but thanks for trying it out. There's one last chance to see if this works, seems that the 2.5.0 app version received the last update in 30th March, 2022. Just the day before this plugin stop working. I tried to download this version fom apkpure.com but this folks split the apk so I can't install it. If i could get the apk of this version and install it, maybe i'll be able to get the token. At least to make it work until June. That is likely the same one I tried on my phone and could not get to work (the one I was talking about in my post, above). Maybe you will have better luck with it. As I guess you may have worked out, you'll need to install the APKPure app to install the split APKs from that site. I'm confused about the dates shown - are they release dates or simply upload dates? 
I was not able determine which. One can use the token from Windows desktop client, instead of the android. Changing these lines https://github.com/streamlink/streamlink/blob/867b9b3b66aab57c0fcb3ab117a275f29a23b71a/src/streamlink/plugins/crunchyroll.py#L102-L103 to ``` python _access_token = "LNDJgOit5yaRIWN" _access_type = "com.crunchyroll.windows.desktop" ``` Seems to fix the issue. This token has not changed since at least 2020, so I think it's more stable. ### Debug log ``` [cli][debug] OS: Windows 8.1 [cli][debug] Python: 3.10.3 [cli][debug] Streamlink: 4.0.0 [cli][debug] Requests(2.27.1), Socks(1.7.1), Websocket(1.3.2) [cli][debug] Arguments: [cli][debug] url=https://www.crunchyroll.com/hunter-x-hunter/episode-148-past-x-and-x-future-654039 [cli][debug] --loglevel=debug [cli][debug] --player=C:\mpv\mpv.com [cli][debug] --player-args=--pause [cli][debug] --verbose-player=True [cli][debug] --player-no-close=True [cli][debug] --title={title} -!- {author} -!- {category} [cli][debug] --stream-sorting-excludes=['None', '>=1440p60'] [cli][debug] --ringbuffer-size=67108864 [cli][debug] --stream-segment-threads=4 [cli][debug] --mux-subtitles=True [cli][debug] --ffmpeg-ffmpeg=C:\Program Files\Streamlink\ffmpeg\ffmpeg.exe [cli][info] Found matching plugin crunchyroll for URL https://www.crunchyroll.com/hunter-x-hunter/episode-148-past-x-and-x-future-654039 [utils.l10n][debug] Language code: pt_BR [plugins.crunchyroll][debug] Creating session with locale: pt_BR [plugins.crunchyroll][debug] Session created with ID: a552879f769b72538fdb60edf2945efd [plugins.crunchyroll][warning] No authentication provided, you won't be able to access premium restricted content [plugins.crunchyroll][debug] Loading streams from adaptive playlist [utils.l10n][debug] Language code: pt_BR [utils.l10n][debug] Language code: pt_BR [utils.l10n][debug] Language code: pt_BR Available streams: 240p_alt (worst), 240p, 360p_alt, 360p, 480p_alt, 480p, 720p_alt, 720p, 1080p_alt, 1080p (best) ``` > ```python > LNDJgOit5yaRIWN > ``` You have saved us we are eternally grateful. I didn't know that there was a Windows app. Thanks for sharing. @kyldery, that's excellent, thanks very much. I did not know there was a Windows app either!
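For anyone who wants to sanity-check the Windows desktop token outside of Streamlink, here is a rough, hedged sketch of a `start_session` call. The endpoint template, parameter names and token values are the ones quoted from the plugin code above; the `requests` usage, locale value and error handling are illustrative assumptions, not the plugin's exact code.

```python
# Rough sketch only: verify the Windows desktop client token with a raw
# start_session call. Endpoint template, parameter names and token values are
# taken from the plugin code quoted above; everything else is an assumption.
from uuid import uuid4

import requests

API_URL = "https://api.crunchyroll.com/{0}.0.json"   # from the plugin's _api_url
ACCESS_TOKEN = "LNDJgOit5yaRIWN"                      # Windows desktop client token
ACCESS_TYPE = "com.crunchyroll.windows.desktop"


def start_session() -> dict:
    params = {
        "device_id": str(uuid4()),      # the plugin caches this value
        "device_type": ACCESS_TYPE,
        "access_token": ACCESS_TOKEN,
        "locale": "enUS",               # assumed locale; the plugin derives it from settings
    }
    res = requests.post(API_URL.format("start_session"), data=params, timeout=10)
    data = res.json()
    if data.get("error"):
        raise RuntimeError(f"{data.get('message')} ({data.get('code')})")
    return data


if __name__ == "__main__":
    print(start_session())
```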
2022-05-02T15:43:14
streamlink/streamlink
4,515
streamlink__streamlink-4515
[ "4513" ]
e8c7e0ff7ee8388f75e85bb47a5430528124751b
diff --git a/src/streamlink/plugins/youtube.py b/src/streamlink/plugins/youtube.py --- a/src/streamlink/plugins/youtube.py +++ b/src/streamlink/plugins/youtube.py @@ -1,5 +1,5 @@ """ -$description Global live streaming and video hosting social platform owned by Google. +$description Global live-streaming and video hosting social platform owned by Google. $url youtube.com $url youtu.be $type live, vod @@ -9,7 +9,6 @@ import json import logging import re -from html import unescape from urllib.parse import urlparse, urlunparse from streamlink.plugin import Plugin, PluginError, pluginmatcher @@ -115,7 +114,18 @@ def stream_weight(cls, stream): def _schema_consent(data): schema_consent = validate.Schema( validate.parse_html(), - validate.xml_findall(".//input[@type='hidden']") + validate.any( + validate.xml_find(".//form[@action='https://consent.youtube.com/s']"), + validate.all( + validate.xml_xpath(".//form[@action='https://consent.youtube.com/save']"), + validate.filter(lambda elem: elem.xpath(".//input[@type='hidden'][@name='set_ytc'][@value='true']")), + validate.get(0), + ) + ), + validate.union(( + validate.get("action"), + validate.xml_xpath(".//input[@type='hidden']"), + )), ) return schema_consent.validate(data) @@ -253,12 +263,14 @@ def _create_adaptive_streams(self, adaptive_formats): def _get_res(self, url): res = self.session.http.get(url) if urlparse(res.url).netloc == "consent.youtube.com": + target, elems = self._schema_consent(res.text) c_data = { - elem.attrib.get("name"): unescape(elem.attrib.get("value")) - for elem in self._schema_consent(res.text) + elem.attrib.get("name"): elem.attrib.get("value") + for elem in elems } - log.debug(f"c_data_keys: {', '.join(c_data.keys())}") - res = self.session.http.post("https://consent.youtube.com/s", data=c_data) + log.debug(f"consent target: {target}") + log.debug(f"consent data: {', '.join(c_data.keys())}") + res = self.session.http.post(target, data=c_data) return res @staticmethod
plugins.youtube: 400 Client Error: Bad request for url: https://consent.youtube.com/s ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description I randomly get this error on Windows 10 when fetching YouTube URL using channel id : > Unable to open URL: https://consent.youtube.com/s (400 Client Error: Bad Request for url: https://consent.youtube.com/s) This is random, sometimes stream fetching works, sometimes no. And this is related with using the following type of URL, using the channel id instead of a video id : https://www.youtube.com/c/TinyKittens/live https://www.youtube.com/channel/UCeL2LSl91k2VccR7XEh5IKg/live It always works fine with URLs containing the "watch" parameter : https://www.youtube.com/watch?v=Pss7fUAUqrs I'm doing the exact same test on Debian with Python 3.7 and it always works fine. ### Debug log ```text ===== Output when it fails : > streamlink https://www.youtube.com/c/TinyKittens/live --loglevel debug [cli][debug] OS: Windows 10 [cli][debug] Python: 3.10.1 [cli][debug] Streamlink: 4.0.1 [cli][debug] Requests(2.27.1), Socks(1.7.1), Websocket(1.3.2) [cli][debug] Arguments: [cli][debug] url=https://www.youtube.com/c/TinyKittens/live [cli][debug] --loglevel=debug [cli][debug] --ffmpeg-ffmpeg=C:\Program Files (x86)\Streamlink\ffmpeg\ffmpeg.exe [cli][info] Found matching plugin youtube for URL https://www.youtube.com/c/TinyKittens/live [plugins.youtube][debug] c_data_keys: gl, m, pc, continue, x, v, bl, hl, src, uxe, set_eom, set_ytc, set_apyt error: Unable to open URL: https://consent.youtube.com/s (400 Client Error: Bad Request for url: https://consent.youtube.com/s) ===== Output when it randomly works : > streamlink https://www.youtube.com/c/TinyKittens/live --loglevel debug [cli][debug] OS: Windows 10 [cli][debug] Python: 3.9.10 [cli][debug] Streamlink: 3.2.0 [cli][debug] Requests(2.27.1), Socks(1.7.1), Websocket(1.2.3) [cli][debug] Arguments: [cli][debug] url=https://www.youtube.com/c/TinyKittens/live [cli][debug] --loglevel=debug [cli][debug] --ffmpeg-ffmpeg=C:\Program Files (x86)\Streamlink\ffmpeg\ffmpeg.exe [cli][info] Found matching plugin youtube for URL https://www.youtube.com/c/TinyKittens/live [plugins.youtube][debug] c_data_keys: gl, m, pc, continue, ca, x, v, t, hl, src, uxe [plugins.youtube][debug] Using video ID: Pss7fUAUqrs [plugins.youtube][debug] This video is live. [utils.l10n][debug] Language code: fr_FR Available streams: 144p (worst), 240p, 360p, 480p, 720p (best) ```
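The fix above stops hard-coding `https://consent.youtube.com/s` and instead posts the consent form's hidden inputs back to the form's own `action` URL. As a standalone illustration of that idea (simplified: it just takes the first form, whereas the plugin picks the `/s` or `/save` form specifically, and `lxml`/`requests` are used here only for the sketch):

```python
# Simplified sketch of the consent handling from the patch above: if YouTube
# redirects to consent.youtube.com, collect the form's hidden inputs and POST
# them back to the form's own action URL instead of a hard-coded endpoint.
import requests
from lxml import html


def get_with_consent(session: requests.Session, url: str) -> requests.Response:
    res = session.get(url)
    if "consent.youtube.com" not in res.url:
        return res  # no consent interstitial was shown

    root = html.fromstring(res.text)
    form = root.xpath(".//form[@action]")[0]  # the plugin selects the /s or /save form explicitly
    data = {
        el.get("name"): el.get("value", "")
        for el in form.xpath(".//input[@type='hidden']")
    }
    return session.post(form.get("action"), data=data)
```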
2022-05-05T13:08:35
streamlink/streamlink
4,517
streamlink__streamlink-4517
[ "4516" ]
bcd624c1753404e7119837a4e050f785485818d3
diff --git a/src/streamlink/stream/segmented.py b/src/streamlink/stream/segmented.py --- a/src/streamlink/stream/segmented.py +++ b/src/streamlink/stream/segmented.py @@ -3,7 +3,7 @@ from concurrent import futures from concurrent.futures import Future, ThreadPoolExecutor from sys import version_info -from threading import Event, Thread +from threading import Event, Thread, current_thread from typing import Any, Optional from streamlink.buffers import RingBuffer @@ -228,6 +228,12 @@ def close(self): self.writer.close() self.buffer.close() + current = current_thread() + if current is not self.worker: # pragma: no branch + self.worker.join(timeout=self.timeout) + if current is not self.writer: # pragma: no branch + self.writer.join(timeout=self.timeout) + def read(self, size): return self.buffer.read( size,
SegmentedStreamWriter.close() does not reliably finish before CLI exits (race condition) ### Checklist - [X] This is a bug report and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed bug reports](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22bug%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest build from the master branch ### Description ### Background I was doing some work on the latest `master` branch, adding a feature to segmented/HLS streams that requires some cleanup, so I added it at the end of `SegmentedStreamWriter.close()`, like ```python self.closed = True self.reader.close() self.executor.shutdown(wait=True, cancel_futures=True) __my_extra_cleanup() ``` I found that my cleanup code is not always executed when HLS streams end normally. ### Problem When an HLS stream ends normally, the whole shutdown process is triggered by the last line of `SegmentedStreamWriter.run()`, `self.close()`. `SegmentedStreamWriter.close()` then calls `SegmentedStreamReader.close()`, the iteration loop of `stream_cli.main:read_stream()` reaches its end, and `main()` exits. The problem is that `SegmentedStreamWriter.run() -> SegmentedStreamWriter.close()` runs in a separate thread, which means `SegmentedStreamWriter.close()` cannot reliably finish its work before the main thread exits. ### To reproduce To reliably trigger the problem, add a sleep to the original `SegmentedStreamWriter.close()`, like ```python self.closed = True self.reader.close() self.executor.shutdown(wait=True, cancel_futures=True) time.sleep(3) log.debug("SegmentedStreamWriter.close() ends") ``` Then run the CLI with a short HLS stream; the `SegmentedStreamWriter.close() ends` message never appears. ### Debug log ```text ...... [cli][info] Stream ended [cli][info] Closing currently open stream... ``` ### Expected result ```text ...... [cli][info] Stream ended [cli][info] Closing currently open stream... [stream.segmented][debug] SegmentedStreamWriter.close() ends ```
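To make the race easier to see outside of Streamlink, here is a minimal, self-contained illustration (not Streamlink code): shutdown is triggered from a worker thread, the main thread returns from its read loop as soon as `close()` starts, and only an explicit `join()` (which is what the patch above adds to the reader's `close()`) guarantees the cleanup actually finishes before the process exits.

```python
# Minimal illustration of the race described above (not Streamlink code):
# shutdown runs in a worker thread, so without a join() the main thread can
# exit before the worker's cleanup finishes.
import time
from threading import Event, Thread, current_thread


class FakeStream:
    def __init__(self):
        self.ended = Event()
        self.worker = Thread(target=self.run, daemon=True)
        self.worker.start()

    def run(self):
        time.sleep(0.1)   # the "stream" finishes normally
        self.close()      # shutdown is triggered from the worker thread

    def close(self):
        self.ended.set()  # lets the main read loop return immediately
        time.sleep(0.5)   # extra cleanup that takes a while
        print("cleanup finished")

    def read(self):
        self.ended.wait()


stream = FakeStream()
stream.read()
# Comment out this join and "cleanup finished" is usually never printed,
# because the daemon worker gets killed when the main thread exits.
if current_thread() is not stream.worker:
    stream.worker.join(timeout=5)
```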
2022-05-07T20:45:26
streamlink/streamlink
4,531
streamlink__streamlink-4531
[ "4412" ]
f08bc5a8adbd9333ed262ed020e9c6c99d6b274f
diff --git a/src/streamlink/plugins/mildom.py b/src/streamlink/plugins/mildom.py --- a/src/streamlink/plugins/mildom.py +++ b/src/streamlink/plugins/mildom.py @@ -6,6 +6,7 @@ import logging import re +from time import time from uuid import uuid4 from streamlink.plugin import Plugin, pluginmatcher @@ -16,21 +17,56 @@ log = logging.getLogger(__name__) -@pluginmatcher(re.compile(r""" - https?://(?:www\.)?mildom\.com/ - (?: - playback/(\d+)(/(?P<video_id>(\d+)-(\w+))) - | - (?P<channel_id>\d+) - ) -""", re.VERBOSE)) -class Mildom(Plugin): - def _get_vod_streams(self, video_id): +class MildomHLSStream(HLSStream): + __shortname__ = "hls-mildom" + expiry_time = 60 * 120 + + def __init__(self, session_, api, server, token, quality, **args): + self.session = session_ + self.api = api + self.server = server + self.token = token + self.quality = quality + self._url = self.build_hls_url() + super().__init__(self.session, self._url, **args) + self.expiry = time() + + def build_hls_url(self): + if not self.server or not self.token: + raise ValueError("server and token must be set") + return url_concat(self.server, f"{self.api.channel_id}{self.quality}.m3u8?{self.token}") + + @property + def url(self): + if time() - self.expiry > MildomHLSStream.expiry_time: + self.expiry = time() + self.token = self.api.get_token() + self._url = self.build_hls_url() + log.debug("Updated HLS playlist URL query string") + return self._url + + +class MildomAPI: + def __init__(self, session, channel_id=None, video_id=None): + self.session = session + self.channel_id = channel_id + self.video_id = video_id + + def _is_api_error(self, data): + log.trace(f"{data!r}") + if data["code"] != 0: + log.debug(data.get("message", "Mildom API returned an error")) + return True + return False + + def get_vod_streams_data(self): + if not self.video_id: + return data = self.session.http.get( "https://cloudac.mildom.com/nonolive/videocontent/playback/getPlaybackDetail", params={ "__platform": "web", - "v_id": video_id, + "v_id": self.video_id, }, schema=validate.Schema(validate.parse_json(), { "code": int, @@ -42,88 +78,75 @@ def _get_vod_streams(self, video_id): }, }) ) - log.trace(f"{data!r}") - if data["code"] != 0: - log.debug(data.get("message", "Mildom API returned an error")) + if self._is_api_error(data): return - for stream in data["body"]["playback"]["video_link"]: - yield stream["name"], HLSStream(self.session, stream["url"]) + if data.get("body"): + return data["body"]["playback"]["video_link"] - def _get_live_streams(self, channel_id): - # Get quality info and check if user is live1 - data = self.session.http.get( - "https://cloudac.mildom.com/nonolive/gappserv/live/enterstudio", + def get_token(self): + if not self.channel_id: + return + data = self.session.http.post( + "https://cloudac.mildom.com/nonolive/gappserv/live/token", params={ "__platform": "web", - "user_id": channel_id, + "__guest_id": "pc-gp-{}".format(uuid4()), }, headers={"Accept-Language": "en"}, + json={"host_id": self.channel_id, "type": "hls"}, schema=validate.Schema( validate.parse_json(), { "code": int, validate.optional("message"): str, validate.optional("body"): { - validate.optional("status"): int, - "anchor_live": int, - validate.optional("live_type"): int, - "ext": { - "cmode_params": [{ - "cmode": str, - "name": str, - }], - validate.optional("live_mode"): int, - }, - }, - }, + "data": [ + {"token": str, } + ], + } + } ) ) - log.trace(f"{data!r}") - if data["code"] != 0: - log.debug(data.get("message", "Mildom API returned an error")) - return - if 
data["body"]["anchor_live"] != 11: - log.debug("User doesn't appear to be live") + if self._is_api_error(data): return - qualities = [] - for quality_info in data["body"]["ext"]["cmode_params"]: - qualities.append((quality_info["name"], "_" + quality_info["cmode"] if quality_info["cmode"] != "raw" else "")) + if data.get("body"): + return data["body"]["data"][0]["token"] - # Get token - data = self.session.http.post( - "https://cloudac.mildom.com/nonolive/gappserv/live/token", + def get_server(self): + if not self.channel_id: + return + data = self.session.http.get( + "https://cloudac.mildom.com/nonolive/gappserv/live/liveserver", params={ "__platform": "web", - "__guest_id": "pc-gp-{}".format(uuid4()), + "user_id": self.channel_id, + "live_server_type": "hls", }, headers={"Accept-Language": "en"}, - json={"host_id": channel_id, "type": "hls"}, schema=validate.Schema( validate.parse_json(), { "code": int, validate.optional("message"): str, validate.optional("body"): { - "data": [ - {"token": str, } - ], + "stream_server": validate.url(), } } ) ) - log.trace(f"{data!r}") - if data["code"] != 0: - log.debug(data.get("message", "Mildom API returned an error")) + if self._is_api_error(data): return - token = data["body"]["data"][0]["token"] + if data.get("body"): + return data["body"]["stream_server"] - # Create stream URLs + def get_live_streams_data(self): + if not self.channel_id: + return data = self.session.http.get( - "https://cloudac.mildom.com/nonolive/gappserv/live/liveserver", + "https://cloudac.mildom.com/nonolive/gappserv/live/enterstudio", params={ "__platform": "web", - "user_id": channel_id, - "live_server_type": "hls", + "user_id": self.channel_id, }, headers={"Accept-Language": "en"}, schema=validate.Schema( @@ -132,28 +155,61 @@ def _get_live_streams(self, channel_id): "code": int, validate.optional("message"): str, validate.optional("body"): { - "stream_server": validate.url(), - } - } + validate.optional("status"): int, + "anchor_live": int, + validate.optional("live_type"): int, + "ext": { + "cmode_params": [{ + "cmode": str, + "name": str, + }], + validate.optional("live_mode"): int, + }, + }, + }, ) ) - log.trace(f"{data!r}") - if data["code"] != 0: - log.debug(data.get("message", "Mildom API returned an error")) + if self._is_api_error(data): return - base_url = url_concat(data["body"]["stream_server"], f"{channel_id}{{}}.m3u8?{token}") - self.session.http.headers.update({"Referer": "https://www.mildom.com/"}) - for quality in qualities: - yield quality[0], HLSStream(self.session, base_url.format(quality[1])) + if data.get("body"): + return data["body"] + +@pluginmatcher(re.compile(r""" + https?://(?:www\.)?mildom\.com/ + (?: + playback/(\d+)(/(?P<video_id>(\d+)-(\w+))) + | + (?P<channel_id>\d+) + ) +""", re.VERBOSE)) +class Mildom(Plugin): def _get_streams(self): - channel_id = self.match.group("channel_id") - video_id = self.match.group("video_id") - if video_id: - return self._get_vod_streams(video_id) + api = MildomAPI(self.session, channel_id=self.match.group("channel_id"), video_id=self.match.group("video_id")) + + if api.video_id: + data = api.get_vod_streams_data() + if data: + for stream in data: + yield stream["name"], HLSStream(self.session, stream["url"]) else: - return self._get_live_streams(channel_id) - return + data = api.get_live_streams_data() + if not data: + return + + if data["anchor_live"] != 11: + log.debug("User doesn't appear to be live") + return + + qualities = [] + for quality_info in data["ext"]["cmode_params"]: + 
qualities.append((quality_info["name"], "_" + quality_info["cmode"] if quality_info["cmode"] != "raw" else "")) + + server = api.get_server() + token = api.get_token() + self.session.http.headers.update({"Referer": "https://www.mildom.com/"}) + for quality in qualities: + yield quality[0], MildomHLSStream(self.session, api, server, token, quality[1]) __plugin__ = Mildom
plugins.mildom: Download stopping abruptly and exiting ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest build from the master branch ### Description Lately Streamlink’s been stopping the download at around 2 hours. The most recent error I got was earlier when both my friend and I were downloading the same stream and it crashed, this is the log: ``` [stream.hls][warning] Failed to reload playlist: Unable to open URL: https://txlvb-play-hls.mildom.tv/live/11868304.m3u8?rid=11868304&uid=0&token=6da7b7e157b81f5f39bdc9a59fb1ec82&sign=1e7eeb3b98a4a194&time=1648562383&did=pc-gp-d3e84717-29d7-4b52-871a-ac9acecb89a5&cnt=1&expire=0 (403 Client Error: Forbidden for url: https://txlvb-play-hls.mildom.tv/live/11868304.m3u8?rid=11868304&uid=0&token=6da7b7e157b81f5f39bdc9a59fb1ec82&sign=1e7eeb3b98a4a194&time=1648562383&did=pc-gp-d3e84717-29d7-4b52-871a-ac9acecb89a5&cnt=1&expire=0) [stream.hls][warning] Failed to reload playlist: Unable to open URL: https://txlvb-play-hls.mildom.tv/live/11868304.m3u8?rid=11868304&uid=0&token=6da7b7e157b81f5f39bdc9a59fb1ec82&sign=1e7eeb3b98a4a194&time=1648562383&did=pc-gp-d3e84717-29d7-4b52-871a-ac9acecb89a5&cnt=1&expire=0 (403 Client Error: Forbidden for url: https://txlvb-play-hls.mildom.tv/live/11868304.m3u8?rid=11868304&uid=0&token=6da7b7e157b81f5f39bdc9a59fb1ec82&sign=1e7eeb3b98a4a194&time=1648562383&did=pc-gp-d3e84717-29d7-4b52-871a-ac9acecb89a5&cnt=1&expire=0) [stream.hls][warning] Failed to reload playlist: Unable to open URL: https://txlvb-play-hls.mildom.tv/live/11868304.m3u8?rid=11868304&uid=0&token=6da7b7e157b81f5f39bdc9a59fb1ec82&sign=1e7eeb3b98a4a194&time=1648562383&did=pc-gp-d3e84717-29d7-4b52-871a-ac9acecb89a5&cnt=1&expire=0 (403 Client Error: Forbidden for url: https://txlvb-play-hls.mildom.tv/live/11868304.m3u8?rid=11868304&uid=0&token=6da7b7e157b81f5f39bdc9a59fb1ec82&sign=1e7eeb3b98a4a194&time=1648562383&did=pc-gp-d3e84717-29d7-4b52-871a-ac9acecb89a5&cnt=1&expire=0) error: Error when reading from stream: Read timeout, exiting [cli][info] Stream ended [cli][info] Closing currently open stream... ``` The code that I’ve been using is: `streamlink --output C:\Users\USER\Videos\selly.ts https://www.mildom.com/11868304 best` ### Debug log ```text Microsoft Windows [Version 10.0.19042.1526] (c) Microsoft Corporation. All rights reserved. C:\Users\USER>streamlink --loglevel debug [cli][debug] OS: Windows 10 [cli][debug] Python: 3.9.1 [cli][debug] Streamlink: 3.2.0 [cli][debug] Requests(2.27.1), Socks(1.7.1), Websocket(1.2.3) [cli][debug] Arguments: [cli][debug] --loglevel=debug [cli][debug] --ffmpeg-ffmpeg=C:\Program Files (x86)\Streamlink\ffmpeg\ffmpeg.exe usage: streamlink [OPTIONS] <URL> [STREAM] Use -h/--help to see the available options or read the manual at https://streamlink.github.io C:\Users\USER> ```
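The 403s on playlist reloads are consistent with the playlist token expiring after roughly two hours. The patch above addresses this with a `MildomHLSStream` that rebuilds its URL with a fresh token once the old one is stale; a stripped-down sketch of that idea follows (the `api` object and its `get_token()`/`channel_id` members are stand-ins here, not the plugin's exact interface):

```python
# Stripped-down sketch of the token-refresh approach from the patch above:
# rebuild the playlist URL with a new token once the old one is older than the
# assumed ~2 hour lifetime, so reloads stop failing with 403.
from time import time


class ExpiringTokenStream:
    EXPIRY = 60 * 120  # assumed token lifetime, matching the ~2 hour failures

    def __init__(self, api, server: str, quality: str = ""):
        self.api = api              # stand-in: must provide channel_id and get_token()
        self.server = server
        self.quality = quality
        self.token = api.get_token()
        self.issued = time()

    @property
    def url(self) -> str:
        if time() - self.issued > self.EXPIRY:
            self.token = self.api.get_token()   # fetch a fresh token
            self.issued = time()
        return f"{self.server}/{self.api.channel_id}{self.quality}.m3u8?{self.token}"
```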
2022-05-15T17:56:12
streamlink/streamlink
4,536
streamlink__streamlink-4536
[ "4451" ]
db6252e3eceac4f1aac7efc99aa114907d695b7f
diff --git a/src/streamlink/plugins/useetv.py b/src/streamlink/plugins/useetv.py new file mode 100644 --- /dev/null +++ b/src/streamlink/plugins/useetv.py @@ -0,0 +1,47 @@ +""" +$description Live TV channels and video on-demand service from UseeTV, owned by Telkom Indonesia. +$url useetv.com +$type live, vod +""" + +import re + +from streamlink.plugin import Plugin, pluginmatcher +from streamlink.plugin.api import validate +from streamlink.stream.dash import DASHStream +from streamlink.stream.hls import HLSStream + + +@pluginmatcher(re.compile(r"https?://(?:www\.)?useetv\.com/")) +class UseeTV(Plugin): + def find_url(self): + url_re = re.compile(r"""['"](https://.*?/(?:[Pp]laylist\.m3u8|manifest\.mpd)[^'"]+)['"]""") + + return self.session.http.get(self.url, schema=validate.Schema( + validate.parse_html(), + validate.any( + validate.all( + validate.xml_xpath_string(""" + .//script[contains(text(), 'laylist.m3u8') or contains(text(), 'manifest.mpd')][1]/text() + """), + str, + validate.transform(url_re.search), + validate.any(None, validate.all(validate.get(1), validate.url())), + ), + validate.all( + validate.xml_xpath_string(".//video[@id='video-player']/source/@src"), + validate.any(None, validate.url()), + ), + ), + )) + + def _get_streams(self): + url = self.find_url() + + if url and ".m3u8" in url: + return HLSStream.parse_variant_playlist(self.session, url) + elif url and ".mpd" in url: + return DASHStream.parse_manifest(self.session, url) + + +__plugin__ = UseeTV
diff --git a/tests/plugins/test_useetv.py b/tests/plugins/test_useetv.py new file mode 100644 --- /dev/null +++ b/tests/plugins/test_useetv.py @@ -0,0 +1,17 @@ +from streamlink.plugins.useetv import UseeTV +from tests.plugins import PluginCanHandleUrl + + +class TestPluginCanHandleUrlUseeTV(PluginCanHandleUrl): + __plugin__ = UseeTV + + should_match = [ + "http://useetv.com/any", + "http://useetv.com/any/path", + "http://www.useetv.com/any", + "http://www.useetv.com/any/path", + "https://useetv.com/any", + "https://useetv.com/any/path", + "https://www.useetv.com/any", + "https://www.useetv.com/any/path", + ]
https://www.useetv.com/ ### Checklist - [X] This is a plugin request and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin requests](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+request%22) ### Description LIVETV OFFCIAL BY INDONESIA , GOT ALOT OF ENGLISH CHANNELS ### Input URLs https://www.useetv.com/livetv/seatoday https://cdn09jtedge.useetv.com/joss/133/seatoday/chunklist_w1268139421_b844100_sleng.m3u8?enc=9Q3OzUmDj1HhXf-juvVnLA&uid=&exp=1649876868
I don't know if this is worth writing a plugin for or not - the playlist URLs seem to be embedded in the JS of the page HTML and can easily be extracted by searching for `m3u8` and then played with the link copied out: ``` $ streamlink --http-no-ssl-verify 'https://streaming.useetv.com/joss/133/seatoday/playlist.m3u8?enc=x0KVJdJswyEo8fAVod3x1w&exp=1651028204&encp=__UhcXzshmaYYIPFuJ0KVw&expp=1651036904' 540p [cli][info] Found matching plugin hls for URL https://streaming.useetv.com/joss/133/seatoday/playlist.m3u8?enc=x0KVJdJswyEo8fAVod3x1w&exp=1651028204&encp=__UhcXzshmaYYIPFuJ0KVw&expp=1651036904 [cli][info] Available streams: 270p (worst), 360p, 540p, 720p, 1080p (best) [cli][info] Opening stream: 540p (hls) [cli][info] Starting player: mpv [cli][info] Player closed [cli][info] Stream ended [cli][info] Closing currently open stream... ```
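For reference, the manual extraction described above can be scripted in a few lines. This is just a hedged sketch: the regex and `requests` call are illustrative, while the actual plugin uses Streamlink's HTML/validation schema and also handles `manifest.mpd` pages.

```python
# Hedged sketch of the manual approach above: fetch the channel page and pull
# the first playlist.m3u8 / manifest.mpd URL out of the inline JavaScript.
import re
from typing import Optional

import requests


def find_stream_url(page_url: str) -> Optional[str]:
    page = requests.get(page_url, timeout=10).text
    match = re.search(
        r"""['"](https://[^'"]+?(?:[Pp]laylist\.m3u8|manifest\.mpd)[^'"]*)['"]""",
        page,
    )
    return match.group(1) if match else None


# Example: find_stream_url("https://www.useetv.com/livetv/seatoday")
```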
2022-05-19T21:12:31
streamlink/streamlink
4,548
streamlink__streamlink-4548
[ "4545" ]
ec93f520ac9ddeae39dac641ac7ba4460c1043ac
diff --git a/src/streamlink/plugins/blazetv.py b/src/streamlink/plugins/blazetv.py new file mode 100644 --- /dev/null +++ b/src/streamlink/plugins/blazetv.py @@ -0,0 +1,108 @@ +""" +$description British live TV channel and video on-demand service from Blaze, owned by A&E Networks UK. +$url blaze.tv +$type live, vod +$region United Kingdom +""" + +import logging +import re + +from streamlink.plugin import Plugin, pluginmatcher +from streamlink.plugin.api import validate +from streamlink.stream.hls import HLSStream + +log = logging.getLogger(__name__) + + +@pluginmatcher(re.compile( + r"https?://(?:watch\.)?blaze\.tv/(?:(?P<is_live>live)|watch/replay/\d+)" +)) +class BlazeTV(Plugin): + def _get_live_uvid(self, parsed_html): + return validate.validate(validate.Schema( + validate.xml_xpath_string(".//div[@id='live-player-root']/@data-player-uvid"), + str, + ), parsed_html) + + def _get_vod_uvid(self, parsed_html): + json_re = re.compile(r"window\.nowPlaying\.setData\(({.*?})\);") + return validate.validate(validate.Schema( + validate.xml_xpath_string(".//script[contains(text(), 'window.nowPlaying.setData')]"), + str, + validate.transform(json_re.search), + validate.any( + None, + validate.all( + validate.get(1), + validate.parse_json(), + { + "id": str, + "series_title": str, + "title": str, + "season": str, + "episode": str, + }, + ), + ), + ), parsed_html) + + def _get_tokenizer(self, type, uvid): + return self.session.http.get( + f"https://watch.blaze.tv/stream/{type}/widevine/{uvid}", + schema=validate.Schema( + validate.parse_json(), + { + "tokenizer": { + "url": validate.url(), + "uvid": str, + "expiry": int, + "token": str, + }, + }, + validate.get("tokenizer"), + ), + ) + + def _get_streams(self): + is_live = self.match.group("is_live") + parsed_html = self.session.http.get(self.url, schema=validate.Schema(validate.parse_html())) + + if is_live: + uvid = self._get_live_uvid(parsed_html) + if not uvid or not uvid.isdecimal(): + return + token_data = self._get_tokenizer("live", uvid) + self.id = uvid + self.author = "Blaze" + self.title = "Live TV" + self.category = "Live" + else: + data = self._get_vod_uvid(parsed_html) + if not data["id"] or not data["id"].isdecimal(): + return + token_data = self._get_tokenizer("replay", data["id"]) + self.id = data["id"] + self.author = data["series_title"] + self.title = data["title"] + self.category = f"S{data['season']}E{data['episode']}" + + log.trace(f"token_data={token_data!r}") + + hls_url = self.session.http.get( + token_data["url"], + headers={ + "Token": token_data["token"], + "Token-Expiry": str(token_data["expiry"]), + "Uvid": token_data["uvid"], + }, + schema=validate.Schema( + validate.parse_json(), + {"Streams": {"Adaptive": validate.url()}}, + validate.get(("Streams", "Adaptive")), + ), + ) + return HLSStream.parse_variant_playlist(self.session, hls_url) + + +__plugin__ = BlazeTV
diff --git a/tests/plugins/test_blazetv.py b/tests/plugins/test_blazetv.py new file mode 100644 --- /dev/null +++ b/tests/plugins/test_blazetv.py @@ -0,0 +1,18 @@ +from streamlink.plugins.blazetv import BlazeTV +from tests.plugins import PluginCanHandleUrl + + +class TestPluginCanHandleUrlBlazeTV(PluginCanHandleUrl): + __plugin__ = BlazeTV + + should_match_groups = [ + ("https://blaze.tv/live", {"is_live": "live"}), + ("https://watch.blaze.tv/live/", {"is_live": "live"}), + ("https://watch.blaze.tv/watch/replay/123456", {}), + ] + + should_not_match = [ + "https://blaze.tv/abc", + "https://watch.blaze.tv/watch/replay/", + "https://watch.blaze.tv/watch/replay/abc123", + ]
Blaze TV, a channel from the UK ### Checklist - [X] This is a plugin request and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin requests](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+request%22) ### Description Blaze TV is a TV channel from the UK, which mainly broadcasts entertainment shows, but sometimes airs documentaries too. It also has VOD content for shows that are currently on air or have previously aired, so viewers can watch them at their leisure. No sign-in is required to watch the channel, but you must be in the UK or use a UK VPN or proxy. The streaming URL is protected by a token, a signature, a policy and a pair id of some sort, but I'm not sure whether or not that would be classed as DRM. ### Input URLs 1. URL to watch the stream: http://watch.blaze.tv/live 2. Streaming API: https://live.blaze.tv/ 3. Blaze TV's VODs are on the homepage directly at https://blaze.tv, just click 'Watch Now' under the episode names. Or there is a link in the top menu. https://watch.blaze.tv/replay/553
2022-05-24T18:51:36
streamlink/streamlink
4,550
streamlink__streamlink-4550
[ "4547" ]
34e986a836caf9aadbec58b51f9a2dd3ef3572ec
diff --git a/src/streamlink/plugins/useetv.py b/src/streamlink/plugins/useetv.py --- a/src/streamlink/plugins/useetv.py +++ b/src/streamlink/plugins/useetv.py @@ -4,6 +4,7 @@ $type live, vod """ +import logging import re from streamlink.plugin import Plugin, pluginmatcher @@ -11,32 +12,46 @@ from streamlink.stream.dash import DASHStream from streamlink.stream.hls import HLSStream +log = logging.getLogger(__name__) + @pluginmatcher(re.compile(r"https?://(?:www\.)?useetv\.com/")) class UseeTV(Plugin): - def find_url(self): - url_re = re.compile(r"""['"](https://.*?/(?:[Pp]laylist\.m3u8|manifest\.mpd)[^'"]+)['"]""") + def _get_streams(self): + root = self.session.http.get(self.url, schema=validate.Schema(validate.parse_html())) + + for needle, errormsg in ( + ( + "This service is not available in your Country", + "The content is not available in your region", + ), + ( + "Silahkan login Menggunakan akun MyIndihome dan berlangganan minipack", + "The content is not available without a subscription", + ), + ): + if validate.Schema(validate.xml_xpath(f""".//script[contains(text(), '"{needle}"')]""")).validate(root): + log.error(errormsg) + return - return self.session.http.get(self.url, schema=validate.Schema( - validate.parse_html(), + url = validate.Schema( validate.any( validate.all( validate.xml_xpath_string(""" .//script[contains(text(), 'laylist.m3u8') or contains(text(), 'manifest.mpd')][1]/text() """), str, - validate.transform(url_re.search), - validate.any(None, validate.all(validate.get(1), validate.url())), + validate.transform( + re.compile(r"""(?P<q>['"])(?P<url>https://.*?/(?:[Pp]laylist\.m3u8|manifest\.mpd).+?)(?P=q)""").search + ), + validate.any(None, validate.all(validate.get("url"), validate.url())), ), validate.all( validate.xml_xpath_string(".//video[@id='video-player']/source/@src"), validate.any(None, validate.url()), ), - ), - )) - - def _get_streams(self): - url = self.find_url() + ) + ).validate(root) if url and ".m3u8" in url: return HLSStream.parse_variant_playlist(self.session, url)
plugins.useetv: log if no link has been found **Why this PR?** This PR adds a check for the case where no playable link is found. USeeTV doesn't provide all of its channels worldwide: some channels are restricted to viewers in Indonesia, and others require a subscription (see beIN Asia as an example). Channels like SeaToday work, but channels like this one: ![image](https://user-images.githubusercontent.com/30985701/170096616-4d22b9aa-9972-418e-8bc6-1c99be1c1e88.png) only show a geo-restriction message above the player, telling the end user that they have no access to the stream. This is also reflected inside the player, meaning no link can be scraped.
Thanks. You'll also need to add the following if this is going to be merged: ``` log = logging.getLogger(__name__) ``` Add it after the imports with one blank line between and another two blank lines after to pass the linting check. > Thanks. You'll also need to add the following if this is going to be merged: > > ``` > log = logging.getLogger(__name__) > ``` > > Add it after the imports with one blank line between and another two blank lines after to pass the linting check. Sure, going to add it.
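For illustration, the region/subscription check this PR introduces boils down to scanning the page's scripts for two known error strings before looking for a stream URL. A hedged `lxml`-based sketch of that logic follows; the plugin itself expresses it via Streamlink's validation schema.

```python
# Hedged sketch of the error check added by this PR: look for the known
# geo-restriction / subscription strings in the page's scripts and report a
# readable message instead of silently finding no stream.
from typing import Optional

from lxml import html

ERRORS = {
    "This service is not available in your Country":
        "The content is not available in your region",
    "Silahkan login Menggunakan akun MyIndihome dan berlangganan minipack":
        "The content is not available without a subscription",
}


def check_availability(page_html: str) -> Optional[str]:
    root = html.fromstring(page_html)
    for needle, message in ERRORS.items():
        if root.xpath(f".//script[contains(text(), '\"{needle}\"')]"):
            return message
    return None
```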
2022-05-25T12:04:49
streamlink/streamlink
4,553
streamlink__streamlink-4553
[ "4539" ]
d8f0e1ea76aa577563df9f97c2f84f6862d88eaa
diff --git a/src/streamlink/plugins/nicolive.py b/src/streamlink/plugins/nicolive.py --- a/src/streamlink/plugins/nicolive.py +++ b/src/streamlink/plugins/nicolive.py @@ -260,10 +260,16 @@ def niconico_web_login(self): self.LOGIN_URL, data={"mail_tel": email, "password": password}, params=self.LOGIN_URL_PARAMS, - schema=validate.Schema(validate.parse_html())) + schema=validate.Schema(validate.parse_html()), + ) + + if self.session.http.cookies.get("user_session"): + log.info("Logged in.") + self.save_cookies() + return input_with_value = {} - for elem in root.xpath(".//input"): + for elem in root.xpath(".//form[@action]//input"): if elem.attrib.get("value"): input_with_value[elem.attrib.get("name")] = elem.attrib.get("value") else: @@ -283,7 +289,8 @@ def niconico_web_login(self): root = self.session.http.post( urljoin("https://account.nicovideo.jp", root.xpath("string(.//form[@action]/@action)")), data=input_with_value, - schema=validate.Schema(validate.parse_html())) + schema=validate.Schema(validate.parse_html()), + ) log.debug(f"Cookies: {self.session.http.cookies.get_dict()}") if self.session.http.cookies.get("user_session") is None: error = root.xpath("string(//div[@class='formError']/div/text())")
plugins.nicolive: Niconico login broken (404) ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [ ] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest build from the master branch ### Description The niconico function using login is broken. This error is shown: 404 Client Error: Not Found for url: https://account.nicovideo.jp/ The plugin seems to be pointing to https://account.nicovideo.jp/login/redirector which does not work anymore. Pointing to https://account.nicovideo.jp/login instead should solve the issue. ### Debug log ```text Used syntax: streamlink -o "output.ts" https://live.nicovideo.jp/watch/[video-ID] best --niconico-email [mailaddress] --niconico-password [PW] ```
**Post the entire output when setting the `--loglevel=debug` parameter.** The nicolive email logins were fixed recently in #4380. This looks like you're not using the latest version. > **Post the entire output when setting the `--loglevel=debug` parameter.** > > The nicolive email logins were fixed recently in #4380. This looks like you're not using the latest version. Full log: ``` [cli][debug] OS: Windows 10 [cli][debug] Python: 3.8.10 [cli][debug] Streamlink: 4.0.1 [cli][debug] Requests(2.27.1), Socks(1.7.1), Websocket(1.3.2) [cli][debug] Arguments: [cli][debug] url=https://live.nicovideo.jp/watch/lv336832924 [cli][debug] stream=['best'] [cli][debug] --loglevel=debug [cli][debug] --output=output.ts [cli][debug] --ffmpeg-ffmpeg=C:\Users\xxx\AppData\Local\Streamlink\ffmpeg\ffmpeg.exe [cli][debug] --niconico-email=******** [cli][debug] --niconico-password=******** [cli][info] Found matching plugin nicolive for URL https://live.nicovideo.jp/watch/lv336832924 [plugins.nicolive][info] Logging in via provided email and password [plugins.nicolive][debug] unknown input: None [plugins.nicolive][debug] unknown input: None error: Unable to open URL: https://account.nicovideo.jp (404 Client Error: Not Found for url: https://account.nicovideo.jp/) ``` --- The video itself will only be available for a couple more hours, you might have to test with a different one on live.nicovideo.jp I tested with both, latest stable, as well as latest nightly. I can't reproduce this. Invalid logins get caught, valid logins via email that require a confirmation code are working fine, and so are cached session cookies after a successful login. Apart from that, the stream URL doesn't seem to require a login. > [plugins.nicolive][info] Logging in via provided email and password > [plugins.nicolive][debug] unknown input: None > [plugins.nicolive][debug] unknown input: None > error: Unable to open URL: https://account.nicovideo.jp/ (404 Client Error: Not Found for url: https://account.nicovideo.jp/) This looks like you're seeing a different page after trying to logging in which the plugin can't handle. This might be caused by regional differences. > I can't reproduce this. > > Invalid logins get caught, valid logins via email that require a confirmation code are working fine, and so are cached session cookies after a successful login. > > Apart from that, the stream URL doesn't seem to require a login. > > > [plugins.nicolive][info] Logging in via provided email and password > > [plugins.nicolive][debug] unknown input: None > > [plugins.nicolive][debug] unknown input: None > > error: Unable to open URL: https://account.nicovideo.jp/ (404 Client Error: Not Found for url: https://account.nicovideo.jp/) > > This looks like you're seeing a different page after trying to logging in which the plugin can't handle. This might be caused by regional differences. The video has a "members-only" part in the second half that is not shown if you are a guest/non-member user. This is a "timeshift" archive video of a live-stream. Regional should not be an issue, since I am accessing from Japan. Please see line 161 in this file: https://github.com/streamlink/streamlink/blob/master/src/streamlink/plugins/nicolive.py The link "https://account.nicovideo.jp/login/redirector" when accessed from browser is giving the error straight away. Like mentioned in OP, changing this to https://account.nicovideo.jp/login might solve the issue. 
(I cannot do commits, sorry) https://account.nicovideo.jp/login/redirector is the target of the login form of https://account.nicovideo.jp/login, and it requires a POST request with the `mail_tel` and `password` parameters. The plugin is doing this correctly, as you can see here: https://github.com/streamlink/streamlink/blob/473fda0a452745ea207dbcfec72bc83f7356bf99/src/streamlink/plugins/nicolive.py#L259-L263 As said, the issue is caused by the response of this request, which appears to be different in certain cases. It works with an IP from Europe but does not work with an IP from Japan. The plugin needs some changes.
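As a quick way to check that outside of Streamlink, here is a hedged sketch of the login request described above. The parameter names and URLs are the ones quoted in this thread, while the `requests` usage and error handling are assumptions; the real plugin also sends extra query parameters and can walk a confirmation-code form when one is returned.

```python
# Hedged sketch of the login flow discussed above: POST mail_tel/password to
# the redirector and check whether a user_session cookie was issued.
import requests


def nicovideo_login(email: str, password: str) -> requests.Session:
    session = requests.Session()
    session.post(
        "https://account.nicovideo.jp/login/redirector",
        data={"mail_tel": email, "password": password},
    )
    if not session.cookies.get("user_session"):
        # Depending on the account/region a confirmation-code page may be
        # returned instead; the plugin handles that form separately.
        raise RuntimeError("Login did not produce a user_session cookie")
    return session
```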
2022-05-27T15:44:11
streamlink/streamlink
4,566
streamlink__streamlink-4566
[ "4565" ]
ea7a243a11b33b6bbe4ae5163e94a8dffd4efd63
diff --git a/src/streamlink/plugins/tvtoya.py b/src/streamlink/plugins/tvtoya.py --- a/src/streamlink/plugins/tvtoya.py +++ b/src/streamlink/plugins/tvtoya.py @@ -4,30 +4,42 @@ $type live """ -import logging import re -from streamlink.plugin import Plugin, pluginmatcher +from streamlink.plugin import Plugin, PluginError, pluginmatcher +from streamlink.plugin.api import validate from streamlink.stream.hls import HLSStream -log = logging.getLogger(__name__) - @pluginmatcher(re.compile( - r"https?://(?:www\.)?tvtoya\.pl/live" + r"https?://(?:www\.)?tvtoya\.pl/player/live" )) class TVToya(Plugin): - _playlist_re = re.compile(r'<source src="([^"]+)" type="application/x-mpegURL">') - def _get_streams(self): - self.session.set_option('hls-live-edge', 10) - res = self.session.http.get(self.url) - playlist_m = self._playlist_re.search(res.text) - - if playlist_m: - return HLSStream.parse_variant_playlist(self.session, playlist_m.group(1)) - else: - log.debug("Could not find stream data") + try: + hls = self.session.http.get(self.url, schema=validate.Schema( + validate.parse_html(), + validate.xml_xpath_string(".//script[@type='application/json'][@id='__NEXT_DATA__']/text()"), + str, + validate.parse_json(), + { + "props": { + "pageProps": { + "type": "live", + "url": validate.all( + str, + validate.transform(lambda url: url.replace("https:////", "https://")), + validate.url(path=validate.endswith(".m3u8")), + ) + } + } + }, + validate.get(("props", "pageProps", "url")), + )) + except PluginError: + return + + return HLSStream.parse_variant_playlist(self.session, hls) __plugin__ = TVToya
diff --git a/tests/plugins/test_tvtoya.py b/tests/plugins/test_tvtoya.py --- a/tests/plugins/test_tvtoya.py +++ b/tests/plugins/test_tvtoya.py @@ -6,14 +6,15 @@ class TestPluginCanHandleUrlTVRPlus(PluginCanHandleUrl): __plugin__ = TVToya should_match = [ - "https://tvtoya.pl/live", - "http://tvtoya.pl/live", + "http://tvtoya.pl/player/live", + "https://tvtoya.pl/player/live", ] should_not_match = [ - "https://tvtoya.pl", "http://tvtoya.pl", - "http://tvtoya.pl/other-page", "http://tvtoya.pl/", + "http://tvtoya.pl/live", + "https://tvtoya.pl", "https://tvtoya.pl/", + "https://tvtoya.pl/live", ]
tvtoya.py ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description Can I ask for a fix for the URL https://tvtoya.pl/player/live? ### Debug log ```text root@zgemmah7:~# /usr/sbin/streamlinksrv manualstart debug [streamlinksrv][info] ####################mod j00zek##################### [streamlinksrv][info] Fri Jun 3 18:49:11 2022 Server (1.8.3 - 22.05.28.0954) started [streamlinksrv][info] Host: zgemmah7 [streamlinksrv][info] Port: 8088 [streamlinksrv][info] OS: Linux-4.10.12-armv7l-with-glibc2.35 [streamlinksrv][info] Python: 3.10.4 [streamlinksrv][info] Streamlink: 4.1.0+0.gea7a243a.dirty [streamlinksrv][info] Log level: debug [streamlinksrv][debug] Options Parser: N/A [streamlinksrv][debug] youtube-dl: 2021.12.17 [streamlinksrv][info] Requests(2.27.1), Socks(1.7.1), Websocket(1.3.2) [streamlinksrv][info] ################################################### [streamlinksrv][debug] Received URL: http://tvtoya.pl/live [streamlinksrv][info] Processing URL: http://tvtoya.pl/live [streamlinksrv][debug] Arguments: [streamlinksrv][debug] url=http://tvtoya.pl/live [streamlinksrv][debug] stream=['best'] [streamlinksrv][info] Found matching plugin tvtoya for URL http://tvtoya.pl/live [streamlinksrv][error] Plugin error: Unable to open URL: http://tvtoya.pl/live (404 Client Error: Not Found for url: https://tvtoya.pl/live) [streamlinksrv][debug] Send Offline clip: /usr/lib/enigma2/python/Plugins/Extensions/StreamlinkConfig/streams/PluginError.mp4 127.0.0.1 - - [03/Jun/2022 18:49:26] "GET /http://tvtoya.pl/live HTTP/1.1" 200 - [streamlinksrv][debug] finish >>> ---------------------------------------- Exception occurred during processing of request from ('127.0.0.1', 49746) Traceback (most recent call last): File "/usr/lib/python3.10/socketserver.py", line 683, in process_request_thread File "/usr/lib/python3.10/socketserver.py", line 360, in finish_request File "/usr/lib/python3.10/socketserver.py", line 747, in __init__ File "/usr/sbin/streamlinksrv", line 1156, in handle BaseHTTPRequestHandler.handle(self) File "/usr/lib/python3.10/http/server.py", line 425, in handle File "/usr/lib/python3.10/http/server.py", line 414, in handle_one_request ValueError: I/O operation on closed file. ---------------------------------------- ```
2022-06-03T17:25:06
streamlink/streamlink
4,572
streamlink__streamlink-4572
[ "4559" ]
7cb0ebe96f659f3eeb7b36e58eb545cd2cb9894c
diff --git a/src/streamlink/plugins/aloula.py b/src/streamlink/plugins/aloula.py new file mode 100644 --- /dev/null +++ b/src/streamlink/plugins/aloula.py @@ -0,0 +1,113 @@ +""" +$description Live TV channels and video on-demand service from the SBA, a Saudi, state-owned broadcaster. +$url aloula.sa +$type live, vod +""" + +import logging +import re + +from streamlink.plugin import Plugin, pluginmatcher +from streamlink.plugin.api import validate +from streamlink.stream.hls import HLSStream + +log = logging.getLogger(__name__) + + +@pluginmatcher(re.compile(r""" + https?://(?:www\.)?aloula\.sa/(?:\w{2}/)? + (?: + live/(?P<live_slug>[^/?&]+) + | + episode/(?P<vod_id>\d+) + ) +""", re.VERBOSE)) +class Aloula(Plugin): + def get_live(self, live_slug): + live_data = self.session.http.get( + "https://aloula.faulio.com/api/v1/channels", + schema=validate.Schema( + validate.parse_json(), + [{ + "id": int, + "url": str, + "title": str, + "has_live": bool, + "has_vod": bool, + "streams": { + "hls": validate.url(), + }, + }], + validate.filter(lambda k: k["url"] == live_slug), + ), + ) + if not live_data: + return + live_data = live_data[0] + log.trace(f"{live_data!r}") + + if not live_data["has_live"]: + log.error("Stream is not live") + return + + self.id = live_data["id"] + self.author = "SBA" + self.title = live_data["title"] + self.category = "Live" + return HLSStream.parse_variant_playlist(self.session, live_data["streams"]["hls"]) + + def get_vod(self, vod_id): + vod_data = self.session.http.get( + f"https://aloula.faulio.com/api/v1/video/{vod_id}", + acceptable_status=(200, 401), + schema=validate.Schema( + validate.parse_json(), + validate.any( + validate.all( + {"blocks": [{ + "id": str, + "program_title": str, + "title": str, + "season_number": int, + "episode": int, + }]}, + validate.get(("blocks", 0)), + ), + {"cms_error": str, "message": str}, + ), + ), + ) + + log.trace(f"{vod_data!r}") + if "cms_error" in vod_data and vod_data["cms_error"] == "auth": + log.error("This stream requires a logged-in session cookie to be supplied") + return + if "cms_error" in vod_data: + log.error(f"API error: {vod_data['cms_error']} ({vod_data['message']})") + return + self.id = vod_data["id"] + self.author = vod_data["program_title"] + self.title = vod_data["title"] + self.category = f"S{vod_data['season_number']}E{vod_data['episode']}" + + hls_url = self.session.http.get( + f"https://aloula.faulio.com/api/v1/video/{vod_id}/player", + schema=validate.Schema( + validate.parse_json(), + {"settings": {"protocols": {"hls": validate.url()}}}, + validate.get(("settings", "protocols", "hls")), + ), + ) + return HLSStream.parse_variant_playlist(self.session, hls_url) + + def _get_streams(self): + live_slug = self.match.group("live_slug") + vod_id = self.match.group("vod_id") + + if live_slug: + return self.get_live(live_slug) + elif vod_id: + return self.get_vod(vod_id) + + +__plugin__ = Aloula
diff --git a/tests/plugins/test_aloula.py b/tests/plugins/test_aloula.py new file mode 100644 --- /dev/null +++ b/tests/plugins/test_aloula.py @@ -0,0 +1,31 @@ +from streamlink.plugins.aloula import Aloula +from tests.plugins import PluginCanHandleUrl + + +class TestPluginCanHandleUrlAloula(PluginCanHandleUrl): + __plugin__ = Aloula + + should_match_groups = [ + ("https://www.aloula.sa/live/slug", {"live_slug": "slug"}), + ("https://www.aloula.sa/en/live/slug", {"live_slug": "slug"}), + ("https://www.aloula.sa/de/live/slug/abc", {"live_slug": "slug"}), + ("https://www.aloula.sa/episode/123", {"vod_id": "123"}), + ("https://www.aloula.sa/en/episode/123", {"vod_id": "123"}), + ("https://www.aloula.sa/episode/123abc/456", {"vod_id": "123"}), + ("https://www.aloula.sa/de/episode/123abc/456", {"vod_id": "123"}), + ("https://www.aloula.sa/episode/123?continue=8", {"vod_id": "123"}), + ("https://www.aloula.sa/xx/episode/123?continue=8", {"vod_id": "123"}), + ] + + should_not_match = [ + "https://www.aloula.sa/en/any", + "https://www.aloula.sa/de/any/path", + "https://www.aloula.sa/live/", + "https://www.aloula.sa/abc/live/slug", + "https://www.aloula.sa/en/live/", + "https://www.aloula.sa/episode/", + "https://www.aloula.sa/abc/episode/123", + "https://www.aloula.sa/en/episode/", + "https://www.aloula.sa/episode/abc", + "https://www.aloula.sa/de/episode/abc", + ]
Aloula ### Checklist - [X] This is a plugin request and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin requests](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+request%22) ### Description https://www.aloula.sa/ https://www.aloula.sa/en Aloula is a free video-on-demand and TV catch-up service in the Kingdom of Saudi Arabia and the Middle East. ### Input URLs https://www.aloula.sa/live/saudiatv https://www.aloula.sa/live/sbc-channel https://www.aloula.sa/live/thikrayat-tv https://www.aloula.sa/live/alekhbariya https://www.aloula.sa/live/ksa-sports1 https://www.aloula.sa/live/ksa-sports2 https://www.aloula.sa/live/qurantvsa https://www.aloula.sa/live/sunnatvsa https://www.aloula.sa/live/riyadhradio https://www.aloula.sa/live/jeddahradio https://www.aloula.sa/live/jeddahradio https://www.aloula.sa/live/nidaalislam https://www.aloula.sa/live/saudiaradio https://www.aloula.sa/en/live/saudiatv https://www.aloula.sa/en/live/sbc-channel https://www.aloula.sa/en/live/thikrayat-tv https://www.aloula.sa/en/live/alekhbariya https://www.aloula.sa/en/live/ksa-sports1 https://www.aloula.sa/en/live/ksa-sports2 https://www.aloula.sa/en/live/qurantvsa https://www.aloula.sa/en/live/sunnatvsa https://www.aloula.sa/en/live/riyadhradio https://www.aloula.sa/en/live/jeddahradio https://www.aloula.sa/en/live/jeddahradio https://www.aloula.sa/en/live/nidaalislam https://www.aloula.sa/en/live/saudiaradio It gets the channels by changing the number here: https://aloula.faulio.com/api/v1/channels/1 https://aloula.faulio.com/api/v1/channels/2 etc.
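The request points at the per-channel endpoints, while the plugin above queries the channel list endpoint and filters by slug. A hedged sketch of listing live channels and their HLS playlists; the field names (id, url, title, has_live, streams.hls) come from the patch, the rest is illustrative.

```py
# Hedged sketch: list Aloula channels via the API endpoint used by the plugin above.
# Field names are taken from the patch; timeout and output format are illustrative.
import requests

API = "https://aloula.faulio.com/api/v1/channels"

def list_live_channels() -> None:
    channels = requests.get(API, timeout=10).json()
    for ch in channels:
        if ch.get("has_live"):
            print(f'{ch["id"]:>4} {ch["url"]:<15} {ch["title"]} -> {ch["streams"]["hls"]}')

# list_live_channels()
```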
2022-06-04T21:31:11
streamlink/streamlink
4,576
streamlink__streamlink-4576
[ "4485" ]
c319aa445e7577134d61da587c6338730b82a4c8
diff --git a/src/streamlink/plugins/youtube.py b/src/streamlink/plugins/youtube.py --- a/src/streamlink/plugins/youtube.py +++ b/src/streamlink/plugins/youtube.py @@ -44,7 +44,7 @@ """, re.VERBOSE)) class YouTube(Plugin): _re_ytInitialData = re.compile(r"""var\s+ytInitialData\s*=\s*({.*?})\s*;\s*</script>""", re.DOTALL) - _re_ytInitialPlayerResponse = re.compile(r"""var\s+ytInitialPlayerResponse\s*=\s*({.*?});\s*var\s+meta\s*=""", re.DOTALL) + _re_ytInitialPlayerResponse = re.compile(r"""var\s+ytInitialPlayerResponse\s*=\s*({.*?});\s*var\s+\w+\s*=""", re.DOTALL) _re_mime_type = re.compile(r"""^(?P<type>\w+)/(?P<container>\w+); codecs="(?P<codecs>.+)"$""") _url_canonical = "https://www.youtube.com/watch?v={video_id}"
plugins.youtube: --http-cookie not working anymore on members content ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description I have been using streamlink for recording members-only livestreams for a while. Using Chrome I get my `--http-cookie` value by copying the cookie value in the Network tab in Dev Tools but recently this does not seem to work. I have been using the same method as before so I am unsure as to what the cause may be. I have tried other CLI utilities like yt-dlp to access locked content using the same cookie (but as a .txt file) and they perform fine. ### Debug log ```text [cli][info] Found matching plugin youtube for URL https://www.youtube.com/watch?v=cK_L718jsv4 [plugins.youtube][error] Could not get video info - UNPLAYABLE: Join this channel to get access to members-only content like this video, and other exclusive perks. [cli][info] Waiting for streams, retrying every 1.0 second(s) [plugins.youtube][error] Could not get video info - UNPLAYABLE: Join this channel to get access to members-only content like this video, and other exclusive perks. Interrupted! Exiting... [cli][debug] OS: Windows 10 [cli][debug] Python: 3.9.10 [cli][debug] Streamlink: 3.2.0 [cli][debug] Requests(2.27.1), Socks(1.7.1), Websocket(1.2.3) [cli][debug] Arguments: [cli][debug] url=https://www.youtube.com/watch?v=cK_L718jsv4 [cli][debug] stream=['best'] [cli][debug] --loglevel=debug [cli][debug] --logfile=D:\Downloads\streamlink temp\log.txt [cli][debug] --player=D:\Programs\VLC\vlc.exe [cli][debug] --output=D:\Downloads\streamlink temp\Unarchived_Stream 1.mp4 [cli][debug] --retry-streams=1.0 [cli][debug] --stream-segment-attempts=100 [cli][debug] --stream-segment-threads=3 [cli][debug] --stream-segment-timeout=600.0 [cli][debug] --stream-timeout=600.0 [cli][debug] --hls-segment-stream-data=True [cli][debug] --ffmpeg-ffmpeg=C:\Program Files (x86)\Streamlink\ffmpeg\ffmpeg.exe [cli][debug] --http-cookie=[('omitted')] [cli][info] Found matching plugin youtube for URL https://www.youtube.com/watch?v=cK_L718jsv4 [plugins.youtube][debug] Missing initial player response [plugins.youtube][error] Could not get video info - UNPLAYABLE: Join this channel to get access to members-only content like this video, and other exclusive perks. [cli][info] Waiting for streams, retrying every 1.0 second(s) [plugins.youtube][debug] Missing initial player response [plugins.youtube][error] Could not get video info - UNPLAYABLE: Join this channel to get access to members-only content like this video, and other exclusive perks. [plugins.youtube][debug] Missing initial player response [plugins.youtube][error] Could not get video info - UNPLAYABLE: Join this channel to get access to members-only content like this video, and other exclusive perks. [plugins.youtube][debug] Missing initial player response [plugins.youtube][error] Could not get video info - UNPLAYABLE: Join this channel to get access to members-only content like this video, and other exclusive perks. ```
The youtube plugin doesn't support any kind of authentication. If you're setting custom http headers or cookies, then you're on your own, and if something changes on the site which has been working for you previously and now stopped working, then you'll have to figure it out yourself. Omitting any useful data is also not great if you're looking for help. > [cli][debug] --http-cookie=[('omitted')] This log message doesn't make any sense btw. `--http-cookie` requires `key=value` argument values, and the data is stored as a list of key+value tuples, not as a list of one-element tuples. https://streamlink.github.io/latest/cli.html#cmdoption-http-cookie Apologies for omitting it. The log for that part is something like this: `[cli][debug] --http-cookie=[('VISITOR_INFO1_LIVE', 'value')]` The command I usually enter for `--http-cookie` is `--http-cookie "VISITOR_INFO1_LIVE=value1; NID=value2; ..."` and so on. In the past this was usually enough to start the recording process, but recently it is not. Looking closely at the debug log now, I see my mistake in entering multiple key=value pairs into one `--http-cookie`, as this would end up being stored as one tuple. But I am still confused as to why this incorrect method was working in previous releases. > But I am still confused as to why this incorrect method was working in previous releases. Because the `Cookie` HTTP header gets set in the `key1=value1; key2=value2; key3=value3` format, so technically, you can set `value1; key2=value2; key3=value3` as the value of `key1` and it won't make a difference when making a request with this specific header. It messes up the client side though, which also stores individual cookies and their values. I've used it correctly now. My debug log now outputs the cookies correctly but I still get the same response: ``` [cli][debug] --http-cookie=[('VISITER_INFO1_live,'value1'),('NID','value2')]...[('__Secure-3PSIDCC','valueN')] ... [plugins.youtube][debug] Missing initial player response [plugins.youtube][error] Could not get video info - UNPLAYABLE: Join this channel to get access to members-only content like this video, and other exclusive perks. ``` Same for me. From the debug-level logs I'm sure the HTTP cookies are loaded, but it still fails with the `Could not get video info - UNPLAYABLE: ...` error. OK, I found the root cause. The key part is here: [youtube.py](https://github.com/streamlink/streamlink/blob/master/src/streamlink/plugins/youtube.py#L356) ``` data = self._get_data_from_regex(res, self._re_ytInitialPlayerResponse, "initial player response") ``` and the regex is declared in the same file: ``` _re_ytInitialPlayerResponse = re.compile(r"""var\s+ytInitialPlayerResponse\s*=\s*({.*?});\s*var\s+meta\s*=""", re.DOTALL) ``` The problem is: **`ytInitialPlayerResponse` is not always followed by `var meta = ...`** In this case (members-only live video), the subsequent script is: ``` ;var head = document.getElementsByTagName('head')[0]; var meta = document.createElement('meta'); meta.name = 'referrer'; meta.content = 'origin'; head.appendChild(meta); var noindexMeta = document.createElement('meta'); noindexMeta.name = 'robots'; ... ``` So the regex match always fails, and then it runs the `_get_data_from_api` function as a fallback, which finally returns the last error `UNPLAYABLE:...` (maybe it does not recognize the h5 cookies?). The simplest fix is to just update the regex pattern; at least it works for me at the time of writing. Suggest adding the `bug` tag back here.
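The root-cause analysis above is that `ytInitialPlayerResponse` is not always followed by `var meta = ...`, so the old pattern never matches on these pages; the patch relaxes the tail to any variable name (`var\s+\w+\s*=`). A small self-contained check of both patterns against a snippet shaped like the one quoted above; the snippet is hand-written for illustration, not captured from YouTube.

```py
# Compare the old and the relaxed pattern from the patch above against a page
# snippet shaped like the one quoted in this discussion (illustrative data only).
import re

old = re.compile(r"""var\s+ytInitialPlayerResponse\s*=\s*({.*?});\s*var\s+meta\s*=""", re.DOTALL)
new = re.compile(r"""var\s+ytInitialPlayerResponse\s*=\s*({.*?});\s*var\s+\w+\s*=""", re.DOTALL)

page = (
    'var ytInitialPlayerResponse = {"videoDetails": {"videoId": "xyz"}};'
    "var head = document.getElementsByTagName('head')[0]; var meta = document.createElement('meta');"
)

print(bool(old.search(page)))  # False: the JSON is followed by "var head =", not "var meta ="
print(bool(new.search(page)))  # True: any following variable name is accepted
```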
2022-06-07T16:18:28
streamlink/streamlink
4,590
streamlink__streamlink-4590
[ "4589" ]
1b73229846ab725033acce675c15bda4cee8e936
diff --git a/src/streamlink_cli/main.py b/src/streamlink_cli/main.py --- a/src/streamlink_cli/main.py +++ b/src/streamlink_cli/main.py @@ -600,7 +600,7 @@ def handle_url(): except NoPluginError: console.exit(f"No plugin can handle URL: {args.url}") except PluginError as err: - console.exit(err) + console.exit(str(err)) if not streams: console.exit(f"No playable streams found on this URL: {args.url}")
diff --git a/tests/test_cli_main.py b/tests/test_cli_main.py --- a/tests/test_cli_main.py +++ b/tests/test_cli_main.py @@ -8,10 +8,11 @@ from unittest.mock import Mock, call, patch import freezegun +import pytest import streamlink_cli.main import tests.resources -from streamlink.exceptions import StreamError +from streamlink.exceptions import PluginError, StreamError from streamlink.session import Streamlink from streamlink.stream.stream import Stream from streamlink_cli.compat import DeprecatedPath, is_win32, stdout @@ -111,6 +112,20 @@ def test_format_valid_streams(self): ) +class TestCLIMainHandleUrl: + @pytest.mark.parametrize("side_effect,expected", [ + (NoPluginError("foo"), "No plugin can handle URL: fakeurl"), + (PluginError("bar"), "bar"), + ]) + def test_error(self, side_effect, expected): + with patch("streamlink_cli.main.args", Mock(url="fakeurl")), \ + patch("streamlink_cli.main.streamlink", resolve_url=Mock(side_effect=side_effect)), \ + patch("streamlink_cli.main.console", exit=Mock(side_effect=SystemExit)) as mock_console: + with pytest.raises(SystemExit): + handle_url() + assert mock_console.exit.mock_calls == [call(expected)] + + class TestCLIMainJsonAndStreamUrl(unittest.TestCase): @patch("streamlink_cli.main.args", json=True, stream_url=True, subprocess_cmdline=False) @patch("streamlink_cli.main.console")
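The patch above replaces `console.exit(err)` with `console.exit(str(err))` so the error can be emitted through the `--json` output. A tiny illustration of why that conversion is needed: `json.dumps` cannot serialize exception objects, only their string representation (the `PluginError` class below is a stand-in, not Streamlink's).

```py
# Why the str() conversion in the patch above matters: exception instances are
# not JSON serializable, while their string representation is.
import json

class PluginError(Exception):  # stand-in for streamlink.exceptions.PluginError
    pass

err = PluginError("Unable to open URL: https://account.bbc.com/signin (404 Client Error)")

try:
    json.dumps({"error": err})
except TypeError as exc:
    print(exc)  # Object of type PluginError is not JSON serializable

print(json.dumps({"error": str(err)}, indent=2))
```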
Using -j/--json option causes uncaught exception on 404 URLs ### Checklist - [X] This is a bug report and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed bug reports](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22bug%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest build from the master branch ### Description When using the `-j`/`--json` option on 404 URLs, there is an uncaught exception. ### Debug log ```text $ streamlink -l debug bbc.co.uk/iplayer/live/xxx [cli][debug] OS: Linux-5.4.0-110-generic-x86_64-with-glibc2.29 [cli][debug] Python: 3.8.10 [cli][debug] Streamlink: 4.1.0+13.g2a5b71fa [cli][debug] Dependencies: [cli][debug] isodate: 0.6.1 [cli][debug] lxml: 4.9.0 [cli][debug] pycountry: 22.3.5 [cli][debug] pycryptodome: 3.14.1 [cli][debug] PySocks: 1.7.1 [cli][debug] requests: 2.28.0 [cli][debug] websocket-client: 1.3.2 [cli][debug] Arguments: [cli][debug] url=bbc.co.uk/iplayer/live/xxx [cli][debug] --loglevel=debug [cli][debug] --player=mpv [cli][debug] [email protected] [cli][debug] --bbciplayer-password=******** [cli][info] Found matching plugin bbciplayer for URL bbc.co.uk/iplayer/live/xxx [plugins.bbciplayer][info] A TV License is required to watch BBC iPlayer streams, see the BBC website for more information: https://www.bbc.co.uk/iplayer/help/tvlicence error: Unable to open URL: https://account.bbc.com/signin (404 Client Error: Not Found for url: https://www.bbc.co.uk/iplayer/live/xxx) $ streamlink -j bbc.co.uk/iplayer/live/xxx Traceback (most recent call last): File "/home/user/slx/lib/python3.8/site-packages/streamlink/plugin/api/http_session.py", line 214, in request res.raise_for_status() File "/home/user/slx/lib/python3.8/site-packages/requests/models.py", line 1022, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://www.bbc.co.uk/iplayer/live/xxx During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/user/slx/lib/python3.8/site-packages/streamlink_cli/main.py", line 599, in handle_url streams = fetch_streams(plugin) File "/home/user/slx/lib/python3.8/site-packages/streamlink_cli/main.py", line 493, in fetch_streams return plugin.streams(stream_types=args.stream_types, File "/home/user/slx/lib/python3.8/site-packages/streamlink/plugin/plugin.py", line 362, in streams ostreams = list(ostreams) File "/home/user/slx/lib/python3.8/site-packages/streamlink/plugins/bbciplayer.py", line 197, in _get_streams if not self.login(self.url): File "/home/user/slx/lib/python3.8/site-packages/streamlink/plugins/bbciplayer.py", line 175, in login res = self.session.http.post( File "/home/user/slx/lib/python3.8/site-packages/requests/sessions.py", line 635, in post return self.request("POST", url, data=data, json=json, **kwargs) File "/home/user/slx/lib/python3.8/site-packages/streamlink/plugin/api/http_session.py", line 222, in request raise err streamlink.exceptions.PluginError: Unable to open URL: https://account.bbc.com/signin (404 Client Error: Not Found for url: https://www.bbc.co.uk/iplayer/live/xxx) During handling of the above exception, another exception occurred: Traceback (most recent call last): File 
"/home/user/slx/bin/streamlink", line 8, in <module> sys.exit(main()) File "/home/user/slx/lib/python3.8/site-packages/streamlink_cli/main.py", line 1101, in main handle_url() File "/home/user/slx/lib/python3.8/site-packages/streamlink_cli/main.py", line 603, in handle_url console.exit(err) File "/home/user/slx/lib/python3.8/site-packages/streamlink_cli/console.py", line 91, in exit self.msg_json(error=msg) File "/home/user/slx/lib/python3.8/site-packages/streamlink_cli/console.py", line 83, in msg_json msg = dumps(out, cls=JSONEncoder, indent=2) File "/usr/lib/python3.8/json/__init__.py", line 234, in dumps return cls( File "/usr/lib/python3.8/json/encoder.py", line 201, in encode chunks = list(chunks) File "/usr/lib/python3.8/json/encoder.py", line 431, in _iterencode yield from _iterencode_dict(o, _current_indent_level) File "/usr/lib/python3.8/json/encoder.py", line 405, in _iterencode_dict yield from chunks File "/usr/lib/python3.8/json/encoder.py", line 438, in _iterencode o = _default(o) File "/home/user/slx/lib/python3.8/site-packages/streamlink_cli/utils/__init__.py", line 24, in default return json.JSONEncoder.default(self, obj) File "/usr/lib/python3.8/json/encoder.py", line 179, in default raise TypeError(f'Object of type {o.__class__.__name__} ' TypeError: Object of type PluginError is not JSON serializable ```
2022-06-12T19:49:30
streamlink/streamlink
4,608
streamlink__streamlink-4608
[ "4604" ]
ac30353869e7ddd9342bcf4ad9bd8c2fdcb263bd
diff --git a/src/streamlink/plugin/api/websocket.py b/src/streamlink/plugin/api/websocket.py --- a/src/streamlink/plugin/api/websocket.py +++ b/src/streamlink/plugin/api/websocket.py @@ -14,6 +14,13 @@ class WebsocketClient(Thread): + OPCODE_CONT: int = ABNF.OPCODE_CONT + OPCODE_TEXT: int = ABNF.OPCODE_TEXT + OPCODE_BINARY: int = ABNF.OPCODE_BINARY + OPCODE_CLOSE: int = ABNF.OPCODE_CLOSE + OPCODE_PING: int = ABNF.OPCODE_PING + OPCODE_PONG: int = ABNF.OPCODE_PONG + _id: int = 0 ws: WebSocketApp @@ -157,17 +164,17 @@ def on_error(self, wsapp: WebSocketApp, error: Exception) -> None: def on_close(self, wsapp: WebSocketApp, status: int, message: str) -> None: log.debug(f"Closed: {wsapp.url}") # pragma: no cover - def on_ping(self, wsapp: WebSocketApp, data: str) -> None: + def on_ping(self, wsapp: WebSocketApp, data: bytes) -> None: pass # pragma: no cover - def on_pong(self, wsapp: WebSocketApp, data: str) -> None: + def on_pong(self, wsapp: WebSocketApp, data: bytes) -> None: pass # pragma: no cover def on_message(self, wsapp: WebSocketApp, data: str) -> None: pass # pragma: no cover - def on_cont_message(self, wsapp: WebSocketApp, data: str, cont: Any) -> None: + def on_cont_message(self, wsapp: WebSocketApp, data: bytes, cont: Any) -> None: pass # pragma: no cover - def on_data(self, wsapp: WebSocketApp, data: str, data_type: int, cont: Any) -> None: + def on_data(self, wsapp: WebSocketApp, data: Union[bytes, str], data_type: int, cont: Any) -> None: pass # pragma: no cover diff --git a/src/streamlink/plugins/twitcasting.py b/src/streamlink/plugins/twitcasting.py --- a/src/streamlink/plugins/twitcasting.py +++ b/src/streamlink/plugins/twitcasting.py @@ -102,7 +102,10 @@ def on_close(self, *args, **kwargs): super().on_close(*args, **kwargs) self.buffer.close() - def on_message(self, wsapp, data: str) -> None: + def on_data(self, wsapp, data, data_type, cont): + if data_type == self.OPCODE_TEXT: + data = bytes(data, "utf-8") + try: self.buffer.write(data) except Exception as err:
diff --git a/tests/test_api_websocket.py b/tests/test_api_websocket.py --- a/tests/test_api_websocket.py +++ b/tests/test_api_websocket.py @@ -2,6 +2,7 @@ from threading import Event from unittest.mock import Mock, call, patch +import pytest from websocket import ABNF, STATUS_NORMAL # type: ignore[import] from streamlink.logger import DEBUG, TRACE @@ -9,6 +10,18 @@ from streamlink.session import Streamlink [email protected]("name,value", [ + ("OPCODE_CONT", ABNF.OPCODE_CONT), + ("OPCODE_TEXT", ABNF.OPCODE_TEXT), + ("OPCODE_BINARY", ABNF.OPCODE_BINARY), + ("OPCODE_CLOSE", ABNF.OPCODE_CLOSE), + ("OPCODE_PING", ABNF.OPCODE_PING), + ("OPCODE_PONG", ABNF.OPCODE_PONG), +]) +def test_opcode_export(name, value): + assert getattr(WebsocketClient, name) == value + + class TestWebsocketClient(unittest.TestCase): def setUp(self): self.session = Streamlink()
plugins.twitcasting: Download ends abruptly with an error: string argument without an encoding ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description Sometimes a download would end abruptly with an error. I can't reliably reproduce it, so I don't have `--loglevel debug` logs, but I do have the error message. ### Debug log ```text [plugins.twitcasting][error] string argument without an encoding [plugin.api.websocket][error] cannot join current thread ```
I'm running it with `--loglevel debug` now, so when I encounter this bug again I will be able to provide the logs. > I'm running it with --loglevel debug now, so when I encounter this bug again I will be able to provide the logs. it will be the same as `string argument without an encoding`, which is not that useful --- if you could change this https://github.com/streamlink/streamlink/blob/9e4a6577fa1d7efa9b828b13de18b9508b9599de/src/streamlink/plugins/twitcasting.py#L105-L110= to ```py def on_message(self, wsapp, data: str) -> None: try: self.buffer.write(data) except Exception as err: log.error(err) print("---") print(data) print("---") self.close() ``` it would be more useful OK, I edited the file. Can I remove the `--loglevel debug` flag or is it required for this codepath to be hit? > Can I remove the `--loglevel debug` flag Debug logging is required for posting issues because, as explained in the issue template/form, it includes the version number and information about your OS, which is important for diagnosing issues. The remaining debug log should be irrelevant for this issue here. The issue seems to be related to string message data returned by `websocket-client`'s `on_message` callback. Streamlink requires binary `bytes` data in its buffer, but the websocket message data is a UTF-8 `string` which gets converted back to `bytes`, which requires an encoding parameter that's missing. From what it looks like, the plugin should implement the `on_data` callback instead of `on_message`, because `on_data` supports both binary data and text data, according to the op-code of the websocket response frame. It's also possible that there's an issue with incomplete responses, which are handled by `on_cont_message` and `on_data`. I will have a look at this later. Here are the version numbers: ``` [cli][debug] OS: Linux-5.10.0-14-amd64-x86_64-with-glibc2.31 [cli][debug] Python: 3.9.2 [cli][debug] Streamlink: 4.1.0 [cli][debug] Requests(2.26.0), Socks(1.7.1), Websocket(1.2.3) ``` Debian 11 x86_64, using Debian's python 3.9 with streamlink installed inside a virtualenv via pip.
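The reported error text is exactly what `bytes()` raises when given a `str` without an encoding, which lines up with the analysis above: the websocket delivers text frames as `str` while the ring buffer expects `bytes`. Below is a minimal reproduction plus a sketch of an opcode-aware `on_data` handler in the spirit of that analysis; the `chunks` list stands in for Streamlink's buffer, and logging the text frames instead of buffering them is only one possible way to handle them.

```py
# Minimal reproduction of the reported error, plus the opcode-aware handling idea:
# text frames arrive as str (auxiliary JSON messages), binary frames carry the
# actual stream data and can be written to the buffer unchanged.
from websocket import ABNF  # websocket-client

try:
    bytes('{"type":"status","code":504,"text":"End of Live"}')
except TypeError as exc:
    print(exc)  # string argument without an encoding

chunks = []  # stand-in for Streamlink's ring buffer

def on_data(wsapp, data, data_type, cont):
    if data_type == ABNF.OPCODE_TEXT:
        print("status message:", data)  # e.g. {"type":"status","code":403,...}
        return
    if data_type in (ABNF.OPCODE_BINARY, ABNF.OPCODE_CONT):
        chunks.append(data)

on_data(None, b"\x00\x00\x00\x1cftyp", ABNF.OPCODE_BINARY, True)
print(len(chunks), "binary chunk(s) buffered")
```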
2022-06-18T04:11:29
streamlink/streamlink
4,613
streamlink__streamlink-4613
[ "4612" ]
1c1dbf4f8842dbfe14872ef23c4bc000097828d5
diff --git a/src/streamlink/plugins/vk.py b/src/streamlink/plugins/vk.py --- a/src/streamlink/plugins/vk.py +++ b/src/streamlink/plugins/vk.py @@ -1,5 +1,5 @@ """ -$description Russian live streaming and video hosting social platform. +$description Russian live-streaming and video hosting social platform. $url vk.com $type live, vod """ @@ -67,10 +67,11 @@ def _get_streams(self): data = self.session.http.post( self.API_URL, params={ - "act": "show_inline", + "act": "show", "al": "1", "video": video_id, }, + headers={"Referer": self.url}, schema=validate.Schema( validate.transform(lambda text: re.sub(r"^\s*<!--\s*", "", text)), validate.parse_json(),
plugins.vk: Could not parse API response ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description Cannot open streams from vk.com. ### Debug log ```text [cli][debug] OS: Linux-5.18.5-zen1-1-zen-x86_64-with-glibc2.35 [cli][debug] Python: 3.10.5 [cli][debug] Streamlink: 4.1.0 [cli][debug] Requests(2.27.1), Socks(1.7.1), Websocket(1.3.2) [cli][debug] Arguments: [cli][debug] url=https://vk.com/video-211154316_456239169 [cli][debug] --loglevel=debug [cli][debug] --player=mpv [cli][debug] --default-stream=['best'] [cli][debug] --twitch-disable-ads=True [cli][info] Found matching plugin vk for URL https://vk.com/video-211154316_456239169 [plugins.vk][debug] video ID: -211154316_456239169 [plugins.vk][error] Could not parse API response error: No playable streams found on this URL: https://vk.com/video-211154316_456239169 ```
2022-06-20T00:39:16
streamlink/streamlink
4,625
streamlink__streamlink-4625
[ "4624" ]
c3806e71d9556ce417d3eeee8f4903ef6720bb43
diff --git a/src/streamlink/plugins/twitch.py b/src/streamlink/plugins/twitch.py --- a/src/streamlink/plugins/twitch.py +++ b/src/streamlink/plugins/twitch.py @@ -8,6 +8,7 @@ import json import logging import re +import sys from datetime import datetime from random import random from typing import List, NamedTuple, Optional @@ -528,6 +529,12 @@ class Twitch(Plugin): ) ) + @classmethod + def stream_weight(cls, stream): + if stream == "source": + return sys.maxsize, stream + return super().stream_weight(stream) + def __init__(self, url): super().__init__(url) match = self.match.groupdict()
diff --git a/tests/plugins/test_twitch.py b/tests/plugins/test_twitch.py --- a/tests/plugins/test_twitch.py +++ b/tests/plugins/test_twitch.py @@ -8,6 +8,7 @@ from streamlink.plugins.twitch import Twitch, TwitchHLSStream, TwitchHLSStreamReader, TwitchHLSStreamWriter from tests.mixins.stream_hls import EventedHLSStreamWriter, Playlist, Segment as _Segment, Tag, TestMixinStreamHLS from tests.plugins import PluginCanHandleUrl +from tests.resources import text class TestPluginCanHandleUrlTwitch(PluginCanHandleUrl): @@ -95,6 +96,25 @@ class _TwitchHLSStream(TwitchHLSStream): __reader__ = _TwitchHLSStreamReader +def test_stream_weight(): + session = Streamlink() + Twitch.bind(session, "tests.plugins.test_twitch") + plugin = Twitch("http://twitch.tv/foo") + + with text("hls/test_master_twitch_vod.m3u8") as fh: + playlist = fh.read() + with requests_mock.Mocker() as mocker: + mocker.register_uri(requests_mock.ANY, requests_mock.ANY, exc=requests_mock.exceptions.InvalidRequest) + mocker.request(method="GET", url="http://mocked/master.m3u8", text=playlist) + streams = TwitchHLSStream.parse_variant_playlist(session, "http://mocked/master.m3u8") + with patch.object(plugin, "_get_streams", return_value=streams): + data = plugin.streams() + + assert list(data.keys()) == ["audio", "160p30", "360p30", "480p30", "720p30", "720p60", "source", "worst", "best"] + assert data["best"] is data["source"] + assert data["worst"] is data["160p30"] + + @patch("streamlink.stream.hls.HLSStreamWorker.wait", MagicMock(return_value=True)) class TestTwitchHLSStream(TestMixinStreamHLS, unittest.TestCase): __stream__ = _TwitchHLSStream diff --git a/tests/resources/hls/test_master_twitch_vod.m3u8 b/tests/resources/hls/test_master_twitch_vod.m3u8 new file mode 100644 --- /dev/null +++ b/tests/resources/hls/test_master_twitch_vod.m3u8 @@ -0,0 +1,22 @@ +#EXTM3U +#EXT-X-MEDIA:TYPE=VIDEO,GROUP-ID="chunked",NAME="Source",AUTOSELECT=YES,DEFAULT=YES +#EXT-X-STREAM-INF:BANDWIDTH=2830316,CODECS="avc1.64002A,mp4a.40.2",RESOLUTION=1920x1080,VIDEO="chunked" +source.m3u8 +#EXT-X-MEDIA:TYPE=VIDEO,GROUP-ID="720p60",NAME="720p60",AUTOSELECT=YES,DEFAULT=YES +#EXT-X-STREAM-INF:BANDWIDTH=3070556,CODECS="avc1.4D401F,mp4a.40.2",RESOLUTION=1280x720,VIDEO="720p60" +720p60.m3u8 +#EXT-X-MEDIA:TYPE=VIDEO,GROUP-ID="720p30",NAME="720p30",AUTOSELECT=YES,DEFAULT=YES +#EXT-X-STREAM-INF:BANDWIDTH=2166929,CODECS="avc1.4D401F,mp4a.40.2",RESOLUTION=1280x720,VIDEO="720p30" +720p30.m3u8 +#EXT-X-MEDIA:TYPE=VIDEO,GROUP-ID="480p30",NAME="480p30",AUTOSELECT=YES,DEFAULT=YES +#EXT-X-STREAM-INF:BANDWIDTH=1417102,CODECS="avc1.4D401E,mp4a.40.2",RESOLUTION=852x480,VIDEO="480p30" +480p30.m3u8 +#EXT-X-MEDIA:TYPE=VIDEO,GROUP-ID="audio_only",NAME="Audio Only",AUTOSELECT=NO,DEFAULT=NO +#EXT-X-STREAM-INF:BANDWIDTH=216931,CODECS="mp4a.40.2",VIDEO="audio_only" +audio_only.m3u8 +#EXT-X-MEDIA:TYPE=VIDEO,GROUP-ID="360p30",NAME="360p30",AUTOSELECT=YES,DEFAULT=YES +#EXT-X-STREAM-INF:BANDWIDTH=694948,CODECS="avc1.4D401E,mp4a.40.2",RESOLUTION=640x360,VIDEO="360p30" +360p30.m3u8 +#EXT-X-MEDIA:TYPE=VIDEO,GROUP-ID="160p30",NAME="160p30",AUTOSELECT=YES,DEFAULT=YES +#EXT-X-STREAM-INF:BANDWIDTH=285241,CODECS="avc1.4D400C,mp4a.40.2",RESOLUTION=284x160,VIDEO="160p30" +160p30.m3u8
plugins.twitch: `source` not always considered `best` ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description It seems that in some cases Twitch is returning `Source` (as opposed to e.g. `Source (1080p60)`) as the stream quality name, and this is not being picked up by the synonym `best`. This can be seen also in the quality selection menu in twitch.tv itself. I know that Twitch used to do something like this a while ago, but these streams are from a few days ago. An example of the normal (correct) behaviour: ``` [~]$ streamlink https://www.twitch.tv/videos/1511444066 best [cli][info] Found matching plugin twitch for URL https://www.twitch.tv/videos/1511444066 [cli][info] Available streams: audio, 160p (worst), 360p, 480p, 720p60, 1080p60 (best) [cli][info] Opening stream: 1080p60 (hls) ``` An example of the "new" (incorrect) behaviour: ``` [~]$ streamlink https://www.twitch.tv/videos/1510768607 best [cli][info] Found matching plugin twitch for URL https://www.twitch.tv/videos/1510768607 [cli][info] Available streams: source, audio, 160p30 (worst), 360p30, 480p30, 720p30, 720p60 (best) [cli][info] Opening stream: 720p60 (hls) ``` (in this case `source` is actually 1080p60) I'm not sure how widespread this is on Twitch and whether it's an issue that's here to stay, but we could probably safely assume that any stream with "source" in the name is the best available, unless there's a nuance I'm missing. I found this to be the case with a version 3 of streamlink, but I've updated to 4.1.0 via homebrew and it's the same there. ### Debug log ```text [~]$ streamlink --loglevel debug https://www.twitch.tv/videos/1510768607 best [cli][debug] OS: macOS 12.4 [cli][debug] Python: 3.10.4 [cli][debug] Streamlink: 4.1.0 [cli][debug] Requests(2.27.1), Socks(1.7.1), Websocket(1.3.2) [cli][debug] Arguments: [cli][debug] url=https://www.twitch.tv/videos/1510768607 [cli][debug] stream=['best'] [cli][debug] --loglevel=debug [cli][debug] --player-passthrough=['hls'] [cli][debug] --default-stream=['best'] [cli][info] Found matching plugin twitch for URL https://www.twitch.tv/videos/1510768607 [plugins.twitch][debug] Getting HLS streams for video ID 1510768607 [utils.l10n][debug] Language code: en_GB [cli][info] Available streams: source, audio, 160p30 (worst), 360p30, 480p30, 720p30, 720p60 (best) [cli][info] Opening stream: 720p60 (hls) [cli][info] Starting player: /Applications/VLC.app/Contents/MacOS/VLC [cli.output][debug] Calling: /Applications/VLC.app/Contents/MacOS/VLC --input-title-format https://www.twitch.tv/videos/1510768607 https://d1ymi26ma8va5x.cloudfront.net/22d09dbe54b85b4fa0e7_sips__45632383932_1655889465/720p60/index-dvr.m3u8 ```
The reason for this is that streams are ranked by their "name", and not by the "resolution" attribute. For most streams on Twitch, the streams are named after their resolution, but here, the input stream is named "Source", and not after its resolution, namely "1920x1080". That's why it gets ignored when ranking the streams. The Twitch plugin needs to change its stream weighting method. ``` $ curl -sSL (streamlink --stream-url https://www.twitch.tv/videos/1510768607) #EXTM3U #EXT-X-TWITCH-INFO:ORIGIN="s3",B="false",REGION="EU",USER-IP="REDACTED",SERVING-ID="403b750570374f6d95c1359ba8b5a23a",CLUSTER="metro_vod",USER-COUNTRY="DE",MANIFEST-CLUSTER="metro_vod" #EXT-X-MEDIA:TYPE=VIDEO,GROUP-ID="chunked",NAME="Source",AUTOSELECT=YES,DEFAULT=YES #EXT-X-STREAM-INF:BANDWIDTH=2830316,CODECS="avc1.64002A,mp4a.40.2",RESOLUTION=1920x1080,VIDEO="chunked" https://d1ymi26ma8va5x.cloudfront.net/22d09dbe54b85b4fa0e7_sips__45632383932_1655889465/chunked/index-dvr.m3u8 #EXT-X-MEDIA:TYPE=VIDEO,GROUP-ID="720p60",NAME="720p60",AUTOSELECT=YES,DEFAULT=YES #EXT-X-STREAM-INF:BANDWIDTH=3070556,CODECS="avc1.4D401F,mp4a.40.2",RESOLUTION=1280x720,VIDEO="720p60" https://d1ymi26ma8va5x.cloudfront.net/22d09dbe54b85b4fa0e7_sips__45632383932_1655889465/720p60/index-dvr.m3u8 #EXT-X-MEDIA:TYPE=VIDEO,GROUP-ID="720p30",NAME="720p30",AUTOSELECT=YES,DEFAULT=YES #EXT-X-STREAM-INF:BANDWIDTH=2166929,CODECS="avc1.4D401F,mp4a.40.2",RESOLUTION=1280x720,VIDEO="720p30" https://d1ymi26ma8va5x.cloudfront.net/22d09dbe54b85b4fa0e7_sips__45632383932_1655889465/720p30/index-dvr.m3u8 #EXT-X-MEDIA:TYPE=VIDEO,GROUP-ID="480p30",NAME="480p30",AUTOSELECT=YES,DEFAULT=YES #EXT-X-STREAM-INF:BANDWIDTH=1417102,CODECS="avc1.4D401E,mp4a.40.2",RESOLUTION=852x480,VIDEO="480p30" https://d1ymi26ma8va5x.cloudfront.net/22d09dbe54b85b4fa0e7_sips__45632383932_1655889465/480p30/index-dvr.m3u8 #EXT-X-MEDIA:TYPE=VIDEO,GROUP-ID="audio_only",NAME="Audio Only",AUTOSELECT=NO,DEFAULT=NO #EXT-X-STREAM-INF:BANDWIDTH=216931,CODECS="mp4a.40.2",VIDEO="audio_only" https://d1ymi26ma8va5x.cloudfront.net/22d09dbe54b85b4fa0e7_sips__45632383932_1655889465/audio_only/index-dvr.m3u8 #EXT-X-MEDIA:TYPE=VIDEO,GROUP-ID="360p30",NAME="360p30",AUTOSELECT=YES,DEFAULT=YES #EXT-X-STREAM-INF:BANDWIDTH=694948,CODECS="avc1.4D401E,mp4a.40.2",RESOLUTION=640x360,VIDEO="360p30" https://d1ymi26ma8va5x.cloudfront.net/22d09dbe54b85b4fa0e7_sips__45632383932_1655889465/360p30/index-dvr.m3u8 #EXT-X-MEDIA:TYPE=VIDEO,GROUP-ID="160p30",NAME="160p30",AUTOSELECT=YES,DEFAULT=YES #EXT-X-STREAM-INF:BANDWIDTH=285241,CODECS="avc1.4D400C,mp4a.40.2",RESOLUTION=284x160,VIDEO="160p30" https://d1ymi26ma8va5x.cloudfront.net/22d09dbe54b85b4fa0e7_sips__45632383932_1655889465/160p30/index-dvr.m3u8 ``` You can always select multiple streams though, eg. `source,best`: ``` $ streamlink https://www.twitch.tv/videos/1510768607 source,best [cli][info] Found matching plugin twitch for URL https://www.twitch.tv/videos/1510768607 [cli][info] Available streams: source, audio, 160p30 (worst), 360p30, 480p30, 720p30, 720p60 (best) [cli][info] Opening stream: source (hls) ``` Another workaround is passing the resolved HLS URL to Streamlink again and setting the `name_key` HLS parameter for selecting a different method of naming the streams which they then can get properly ranked by: ``` $ streamlink "hls://$(streamlink --stream-url https://www.twitch.tv/videos/1510768607) name_key=pixels" [cli][info] Found matching plugin hls for URL ... 
name_key=pixels Available streams: audio, 160p (worst), 360p, 480p, 720p_alt, 720p, 1080p (best) ``` https://streamlink.github.io/cli/protocols.html#protocol-parameters ---- I'm not a fan of the whole stream ranking+selection implementation of Streamlink. This hasn't been changed since the Livestreamer fork and is in need of an update, which would be quite a big breaking change. The stream ranking works like this: when Streamlink calls a plugin's `Plugin.streams()` method, it gets a mapping of `names->streams` for ranking the streams after calling `self._get_streams()`. This is bad, because those names are either taken directly from the various `HLSStream.parse_variant_playlist()` calls (with default `name_key` parameters - unless explicitly changed by a plugin) or `DASHStream.parse_manifest()` calls, or they are custom names for simple `HTTPStreams` or custom stream implementations set by the Plugin implementation. This makes it impossible to change the ranking afterwards, and it also is really awkward in terms of removing/renaming stream duplicates. What should be done instead is having a proper stream metadata API and making plugins return simple list of streams. The ranking can then be done via the properly annotated metadata. ---- I'll have a look at changing the weighting of the Twitch streams later.
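The explanation above is that ranking happens on the stream names, so a stream literally named "Source" never outranks "720p60". A toy sketch of name-based weighting with a special case for "source", in the spirit of the `stream_weight` override in the patch above; the helper below is illustrative and not Streamlink's implementation.

```py
# Toy illustration of name-based stream ranking with a special case for "source",
# in the spirit of the stream_weight override in the patch above (not Streamlink's code).
import re
import sys

def weight(name: str) -> int:
    if name == "source":
        return sys.maxsize
    m = re.match(r"(\d+)p(\d+)?", name)
    if not m:
        return 0  # non-video variants like "audio" are left out of the ranking
    height, fps = int(m.group(1)), int(m.group(2) or 30)
    return height * 1000 + fps

names = ["source", "audio", "160p30", "360p30", "480p30", "720p30", "720p60"]
ranked = sorted((n for n in names if weight(n) > 0), key=weight)
print("worst:", ranked[0], "best:", ranked[-1])  # worst: 160p30, best: source
```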
2022-06-24T10:16:49
streamlink/streamlink
4,628
streamlink__streamlink-4628
[ "4627" ]
2c564dbe18a4d18ed048a2e832655617a01a508a
diff --git a/src/streamlink/plugins/twitcasting.py b/src/streamlink/plugins/twitcasting.py --- a/src/streamlink/plugins/twitcasting.py +++ b/src/streamlink/plugins/twitcasting.py @@ -104,7 +104,7 @@ def on_close(self, *args, **kwargs): def on_data(self, wsapp, data, data_type, cont): if data_type == self.OPCODE_TEXT: - data = bytes(data, "utf-8") + return try: self.buffer.write(data)
plugins.twitcasting: Writes JSON into video files when it shouldn't ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest build from the master branch ### Description https://github.com/streamlink/streamlink/pull/4608 introduced a bug of JSON being written to the output file. - When running streamlink on a channel that is live but only for members, using `-o out.mp4` flag to output to a file, it creates a video file containing just a single JSON line in it: ``` $ cat out.mp4 {"type":"status","code":403,"text":"Access Forbidden"} ``` The expected behavior is that it doesn't create the file in such situation, like it used to behave before https://github.com/streamlink/streamlink/pull/4608 fixes were made. - It also adds `{"type":"status","code":504,"text":"End of Live"}` at the end of video files when the stream ends: ``` $ xxd -s -128 -c 16 out.ts 24b5bee9: 5c75 7cc6 7e38 e099 55d9 6257 59d8 eb6e \u|.~8..U.bWY..n 24b5bef9: b7aa 49bb ef3a dd18 7767 8c77 7dc6 6ade ..I..:..wg.w}.j. 24b5bf09: 6d54 2175 2acf 0926 400f 0449 2bc6 a816 mT!u*..&@..I+... 24b5bf19: 3523 72e9 db4d 6c5a 5aba ec75 3c0a ad72 5#r..MlZZ..u<..r 24b5bf29: 2258 0b2f ebc2 b50a 7ed3 bbbd 8d30 c77b "X./....~....0.{ 24b5bf39: 2274 7970 6522 3a22 7374 6174 7573 222c "type":"status", 24b5bf49: 2263 6f64 6522 3a35 3034 2c22 7465 7874 "code":504,"text 24b5bf59: 223a 2245 6e64 206f 6620 4c69 7665 227d ":"End of Live"} ``` ![streamlink_json](https://user-images.githubusercontent.com/1855294/175794392-7bbaa204-60ac-4170-962b-3d6dac0be9ae.png) - Perhaps it shouldn't be writing any `response['type'] == 'status'` to the file? - While at it, maybe there is something else that it's writing to a video file that it shouldn't? As mentioned in https://github.com/streamlink/streamlink/issues/4604#issuecomment-1166177130, Twitcasting also sends `{"type":"event","code":100,"text":""}` sometimes. Would that get written into the video file too? Is that something that should be written into it? 
### Debug log ```text [cli][debug] OS: Linux-5.10.0-14-amd64-x86_64-with-glibc2.31 [cli][debug] Python: 3.9.2 [cli][debug] Streamlink: 4.1.0+37.g2c564dbe [cli][debug] Dependencies: [cli][debug] isodate: 0.6.0 [cli][debug] lxml: 4.7.1 [cli][debug] pycountry: 20.7.3 [cli][debug] pycryptodome: 3.10.1 [cli][debug] PySocks: 1.7.1 [cli][debug] requests: 2.28.0 [cli][debug] websocket-client: 1.2.3 [cli][debug] Arguments: [cli][debug] url=https://twitcasting.tv/[REDACTED] [cli][debug] stream=['best'] [cli][debug] --config=['../config'] [cli][debug] --loglevel=debug [cli][debug] --output=[REDACTED] [cli][debug] --retry-streams=1.0 [cli][debug] --retry-max=300 [cli][info] Found matching plugin twitcasting for URL https://twitcasting.tv/[REDACTED] [plugins.twitcasting][debug] Live stream info: {'movie': {'id': [REDACTED], 'live': True}, 'fmp4': {'host': '202-218-171-197.twitcasting.tv', 'proto': 'wss', 'source': False, 'mobilesource': False}} [plugins.twitcasting][debug] Real stream url: wss://202-218-171-197.twitcasting.tv/ws.app/stream/[REDACTED]/fmp4/bd/1/1500?mode=base [cli][info] Available streams: base (worst, best) [cli][info] Opening stream: base (stream) [cli][info] Writing output to [REDACTED] [cli][debug] Checking file output [plugin.api.websocket][debug] Connecting to: wss://202-218-171-197.twitcasting.tv/ws.app/stream/[REDACTED]/fmp4/bd/1/1500?mode=base [cli][debug] Pre-buffering 8192 bytes [plugin.api.websocket][debug] Connected: wss://202-218-171-197.twitcasting.tv/ws.app/stream/[REDACTED]/fmp4/bd/1/1500?mode=base [cli][debug] Writing stream to output [plugin.api.websocket][error] Connection to remote host was lost. [plugin.api.websocket][debug] Closed: wss://202-218-171-197.twitcasting.tv/ws.app/stream/[REDACTED]/fmp4/bd/1/1500?mode=base [cli][info] Stream ended [cli][info] Closing currently open stream... ```
As you can see in the diff of 67b9fca3b52b6e5651d42b58ad3177d15618a823, what I did in order to fix the exception from being raised when writing the content to Streamlink's RingBuffer, was turning the `OPCODE_TEXT` responses from the websocket stream into `bytes`, because it looked like twitcasting was accidentally sending the wrong opcode (#4604). From what it looks like though, they are sending two separate message types on the same websocket connection for some reason: binary stream data, and auxiliary text JSON data. Could you please apply the following diff which basically discards everything that's not `OPCODE_BINARY` or `OPCODE_CONT`, and see if that fixes the issue? ```diff diff --git a/src/streamlink/plugins/twitcasting.py b/src/streamlink/plugins/twitcasting.py index e2f8fb09..206845d4 100644 --- a/src/streamlink/plugins/twitcasting.py +++ b/src/streamlink/plugins/twitcasting.py @@ -104,7 +104,7 @@ class TwitCastingWsClient(WebsocketClient): def on_data(self, wsapp, data, data_type, cont): if data_type == self.OPCODE_TEXT: - data = bytes(data, "utf-8") + return try: self.buffer.write(data) ```
2022-06-26T11:18:08
streamlink/streamlink
4,630
streamlink__streamlink-4630
[ "4311" ]
1622010c87f2d53915e73d7341da12f4572800cf
diff --git a/src/streamlink/stream/dash.py b/src/streamlink/stream/dash.py --- a/src/streamlink/stream/dash.py +++ b/src/streamlink/stream/dash.py @@ -2,13 +2,15 @@ import datetime import itertools import logging -import os.path from collections import defaultdict +from contextlib import contextmanager +from pathlib import Path +from time import time from typing import Dict, Optional from urllib.parse import urlparse, urlunparse from streamlink import PluginError, StreamError -from streamlink.stream.dash_manifest import MPD, Representation, freeze_timeline, sleep_until, sleeper, utc +from streamlink.stream.dash_manifest import MPD, Representation, Segment, freeze_timeline, utc from streamlink.stream.ffmpegmux import FFMPEGMuxer from streamlink.stream.segmented import SegmentedStreamReader, SegmentedStreamWorker, SegmentedStreamWriter from streamlink.stream.stream import Stream @@ -19,6 +21,10 @@ class DASHStreamWriter(SegmentedStreamWriter): + @staticmethod + def _get_segment_name(segment: Segment) -> str: + return Path(urlparse(segment.url).path).resolve().name + def fetch(self, segment, retries=None): if self.closed or not retries: return @@ -29,9 +35,11 @@ def fetch(self, segment, retries=None): now = datetime.datetime.now(tz=utc) if segment.available_at > now: time_to_wait = (segment.available_at - now).total_seconds() - fname = os.path.basename(urlparse(segment.url).path) - log.debug("Waiting for segment: {fname} ({wait:.01f}s)".format(fname=fname, wait=time_to_wait)) - sleep_until(segment.available_at) + fname = self._get_segment_name(segment) + log.debug(f"Waiting for segment: {fname} ({time_to_wait:.01f}s)") + if not self.wait(time_to_wait): + log.debug(f"Waiting for segment: {fname} aborted") + return if segment.range: start, length = segment.range @@ -39,7 +47,7 @@ def fetch(self, segment, retries=None): end = start + length - 1 else: end = "" - headers["Range"] = "bytes={0}-{1}".format(start, end) + headers["Range"] = f"bytes={start}-{end}" return self.session.http.get(segment.url, timeout=self.timeout, @@ -51,14 +59,14 @@ def fetch(self, segment, retries=None): return self.fetch(segment, retries - 1) def write(self, segment, res, chunk_size=8192): + name = self._get_segment_name(segment) for chunk in res.iter_content(chunk_size): - if not self.closed: - self.reader.buffer.write(chunk) - else: - log.warning("Download of segment: {} aborted".format(segment.url)) + if self.closed: + log.warning(f"Download of segment: {name} aborted") return + self.reader.buffer.write(chunk) - log.debug("Download of segment: {} complete".format(segment.url)) + log.debug(f"Download of segment: {name} complete") class DASHStreamWorker(SegmentedStreamWorker): @@ -67,6 +75,17 @@ def __init__(self, *args, **kwargs): self.mpd = self.stream.mpd self.period = self.stream.period + @contextmanager + def sleeper(self, duration): + """ + Do something and then wait for a given duration minus the time it took doing something + """ + s = time() + yield + time_to_sleep = duration - (time() - s) + if time_to_sleep > 0: + self.wait(time_to_sleep) + @staticmethod def get_representation(mpd, representation_id, mime_type): for aset in mpd.periods[0].adaptationSets: @@ -80,28 +99,35 @@ def iter_segments(self): while not self.closed: # find the representation by ID representation = self.get_representation(self.mpd, self.reader.representation_id, self.reader.mime_type) - refresh_wait = max(self.mpd.minimumUpdatePeriod.total_seconds(), - self.mpd.periods[0].duration.total_seconds()) or 5 if self.mpd.type == "static": 
refresh_wait = 5 + else: + refresh_wait = max( + self.mpd.minimumUpdatePeriod.total_seconds(), + self.mpd.periods[0].duration.total_seconds(), + ) or 5 + + with self.sleeper(refresh_wait * back_off_factor): + if not representation: + continue + + for segment in representation.segments(init=init): + if self.closed: + break + yield segment + + # close worker if type is not dynamic (all segments were put into writer queue) + if self.mpd.type != "dynamic": + self.close() + return + + if not self.reload(): + back_off_factor = max(back_off_factor * 1.3, 10.0) + else: + back_off_factor = 1 - with sleeper(refresh_wait * back_off_factor): - if representation: - for segment in representation.segments(init=init): - if self.closed: - break - yield segment - # log.debug(f"Adding segment {segment.url} to queue") - - if self.mpd.type == "dynamic": - if not self.reload(): - back_off_factor = max(back_off_factor * 1.3, 10.0) - else: - back_off_factor = 1 - else: - return - init = False + init = False def reload(self): if self.closed: diff --git a/src/streamlink/stream/dash_manifest.py b/src/streamlink/stream/dash_manifest.py --- a/src/streamlink/stream/dash_manifest.py +++ b/src/streamlink/stream/dash_manifest.py @@ -3,7 +3,6 @@ import logging import math import re -import time from collections import defaultdict, namedtuple from contextlib import contextmanager from itertools import count, repeat @@ -45,22 +44,6 @@ def freeze_timeline(mpd): mpd.timelines = timelines -@contextmanager -def sleeper(duration): - s = time.time() - yield - time_to_sleep = duration - (time.time() - s) - if time_to_sleep > 0: - time.sleep(time_to_sleep) - - -def sleep_until(walltime): - c = datetime.datetime.now(tz=utc) - time_to_wait = (walltime - c).total_seconds() - if time_to_wait > 0: - time.sleep(time_to_wait) - - class MPDParsers: @staticmethod def bool_str(v): diff --git a/src/streamlink/stream/segmented.py b/src/streamlink/stream/segmented.py --- a/src/streamlink/stream/segmented.py +++ b/src/streamlink/stream/segmented.py @@ -36,7 +36,20 @@ def shutdown(self, wait=True, cancel_futures=False): # pragma: no cover t.join() -class SegmentedStreamWorker(Thread): +class AwaitableMixin: + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + self._wait = Event() + + def wait(self, time: float) -> bool: + """ + Pause the thread for a specified time. + Return False if interrupted by another thread and True if the time runs out normally. + """ + return not self._wait.wait(time) + + +class SegmentedStreamWorker(AwaitableMixin, Thread): """The general worker thread. This thread is responsible for queueing up segments in the @@ -44,16 +57,13 @@ class SegmentedStreamWorker(Thread): """ def __init__(self, reader, **kwargs): + super().__init__(daemon=True, name=f"Thread-{self.__class__.__name__}") self.closed = False self.reader = reader self.writer = reader.writer self.stream = reader.stream self.session = reader.session - self._wait = Event() - - super().__init__(daemon=True, name=f"Thread-{self.__class__.__name__}") - def close(self): """Shuts down the thread.""" if self.closed: # pragma: no cover @@ -64,14 +74,6 @@ def close(self): self.closed = True self._wait.set() - def wait(self, time): - """Pauses the thread for a specified time. - - Returns False if interrupted by another thread and True if the - time runs out normally. - """ - return not self._wait.wait(time) - def iter_segments(self): """The iterator that generates segments for the worker thread. 
@@ -91,7 +93,7 @@ def run(self): self.close() -class SegmentedStreamWriter(Thread): +class SegmentedStreamWriter(AwaitableMixin, Thread): """The writer thread. This thread is responsible for fetching segments, processing them @@ -99,6 +101,7 @@ class SegmentedStreamWriter(Thread): """ def __init__(self, reader, size=20, retries=None, threads=None, timeout=None): + super().__init__(daemon=True, name=f"Thread-{self.__class__.__name__}") self.closed = False self.reader = reader self.stream = reader.stream @@ -119,8 +122,6 @@ def __init__(self, reader, size=20, retries=None, threads=None, timeout=None): self.executor = CompatThreadPoolExecutor(max_workers=self.threads) self.futures = queue.Queue(size) - super().__init__(daemon=True, name=f"Thread-{self.__class__.__name__}") - def close(self): """Shuts down the thread, its executor and closes the reader (worker thread and buffer).""" if self.closed: # pragma: no cover @@ -131,6 +132,7 @@ def close(self): self.closed = True self.reader.close() self.executor.shutdown(wait=True, cancel_futures=True) + self._wait.set() def put(self, segment): """Adds a segment to the download pool and write queue."""
diff --git a/tests/stream/test_dash.py b/tests/stream/test_dash.py --- a/tests/stream/test_dash.py +++ b/tests/stream/test_dash.py @@ -1,6 +1,9 @@ import unittest +from typing import List from unittest.mock import ANY, MagicMock, Mock, call, patch +import pytest + from streamlink import PluginError from streamlink.stream.dash import DASHStream, DASHStreamWorker from streamlink.stream.dash_manifest import MPD @@ -300,124 +303,145 @@ def test_parse_manifest_with_duplicated_resolutions_sorted_bandwidth(self, mpdCl self.assertEqual(streams["1080p_alt2"].video_representation.bandwidth, 32.0) -class TestDASHStreamWorker(unittest.TestCase): - @patch("streamlink.stream.dash_manifest.time.sleep") - @patch('streamlink.stream.dash.MPD') - def test_dynamic_reload(self, mpdClass, sleep): - reader = MagicMock() +class TestDASHStreamWorker: + @pytest.fixture + def mock_time(self, monkeypatch: pytest.MonkeyPatch) -> Mock: + mock = Mock(return_value=1) + monkeypatch.setattr("streamlink.stream.dash.time", mock) + return mock + + @pytest.fixture(autouse=True) + def mock_wait(self, monkeypatch: pytest.MonkeyPatch) -> Mock: + mock = Mock(return_value=True) + monkeypatch.setattr("streamlink.stream.dash.DASHStreamWorker.wait", mock) + return mock + + @pytest.fixture + def representation(self) -> Mock: + return Mock(id=1, mimeType="video/mp4", height=720) + + @pytest.fixture + def segments(self) -> List[Mock]: + return [ + Mock(url="init_segment"), + Mock(url="first_segment"), + Mock(url="second_segment"), + ] + + @pytest.fixture + def mpd(self, representation) -> Mock: + return Mock( + publishTime=1, + minimumUpdatePeriod=Mock(total_seconds=Mock(return_value=0)), + periods=[ + Mock( + duration=Mock(total_seconds=Mock(return_value=0)), + adaptationSets=[ + Mock( + contentProtection=None, + representations=[representation], + ), + ], + ), + ], + ) + + @pytest.fixture + def worker(self, mpd): + reader = MagicMock(representation_id=1, mime_type="video/mp4") worker = DASHStreamWorker(reader) - reader.representation_id = 1 - reader.mime_type = "video/mp4" - - representation = Mock(id=1, mimeType="video/mp4", height=720) - segments = [Mock(url="init_segment"), Mock(url="first_segment"), Mock(url="second_segment")] - representation.segments.return_value = [segments[0]] - mpdClass.return_value = worker.mpd = Mock(dynamic=True, - publishTime=1, - periods=[ - Mock(adaptationSets=[ - Mock(contentProtection=None, - representations=[ - representation - ]) - ]) - ]) - worker.mpd.type = "dynamic" - worker.mpd.minimumUpdatePeriod.total_seconds.return_value = 0 - worker.mpd.periods[0].duration.total_seconds.return_value = 0 + worker.mpd = mpd + return worker + + def test_dynamic_reload( + self, + monkeypatch: pytest.MonkeyPatch, + worker: DASHStreamWorker, + representation: Mock, + segments: List[Mock], + mpd: Mock, + ): + mpd.dynamic = True + mpd.type = "dynamic" + monkeypatch.setattr("streamlink.stream.dash.MPD", lambda *args, **kwargs: mpd) segment_iter = worker.iter_segments() representation.segments.return_value = segments[:1] - self.assertEqual(next(segment_iter), segments[0]) - representation.segments.assert_called_with(init=True) + assert next(segment_iter) is segments[0] + assert representation.segments.call_args_list == [call(init=True)] + assert not worker._wait.is_set() + representation.segments.reset_mock() representation.segments.return_value = segments[1:] - self.assertSequenceEqual([next(segment_iter), next(segment_iter)], segments[1:]) - representation.segments.assert_called_with(init=False) - - 
@patch("streamlink.stream.dash_manifest.time.sleep") - def test_static(self, sleep): - reader = MagicMock() - worker = DASHStreamWorker(reader) - reader.representation_id = 1 - reader.mime_type = "video/mp4" - - representation = Mock(id=1, mimeType="video/mp4", height=720) - segments = [Mock(url="init_segment"), Mock(url="first_segment"), Mock(url="second_segment")] - representation.segments.return_value = [segments[0]] - worker.mpd = Mock(dynamic=False, - publishTime=1, - periods=[ - Mock(adaptationSets=[ - Mock(contentProtection=None, - representations=[ - representation - ]) - ]) - ]) - worker.mpd.type = "static" - worker.mpd.minimumUpdatePeriod.total_seconds.return_value = 0 - worker.mpd.periods[0].duration.total_seconds.return_value = 0 + assert [next(segment_iter), next(segment_iter)] == segments[1:] + assert representation.segments.call_args_list == [call(), call(init=False)] + assert not worker._wait.is_set() + + def test_static( + self, + worker: DASHStreamWorker, + representation: Mock, + segments: List[Mock], + mpd: Mock, + ): + mpd.dynamic = False + mpd.type = "static" representation.segments.return_value = segments - self.assertSequenceEqual(list(worker.iter_segments()), segments) - representation.segments.assert_called_with(init=True) - - @patch("streamlink.stream.dash_manifest.time.time") - @patch("streamlink.stream.dash_manifest.time.sleep") - def test_static_refresh_wait(self, sleep, time): + assert list(worker.iter_segments()) == segments + assert representation.segments.call_args_list == [call(init=True)] + assert worker._wait.is_set() + + @pytest.mark.parametrize("duration", [ + 0, + 204.32, + ]) + def test_static_refresh_wait( + self, + duration: float, + mock_wait: Mock, + mock_time: Mock, + worker: DASHStreamWorker, + representation: Mock, + segments: List[Mock], + mpd: Mock, + ): """ Verify the fix for https://github.com/streamlink/streamlink/issues/2873 """ - time.return_value = 1 - reader = MagicMock() - worker = DASHStreamWorker(reader) - reader.representation_id = 1 - reader.mime_type = "video/mp4" - - representation = Mock(id=1, mimeType="video/mp4", height=720) - segments = [Mock(url="init_segment"), Mock(url="first_segment"), Mock(url="second_segment")] - representation.segments.return_value = [segments[0]] - worker.mpd = Mock(dynamic=False, - publishTime=1, - periods=[ - Mock(adaptationSets=[ - Mock(contentProtection=None, - representations=[ - representation - ]) - ]) - ]) - worker.mpd.type = "static" - for duration in (0, 204.32): - worker.mpd.minimumUpdatePeriod.total_seconds.return_value = 0 - worker.mpd.periods[0].duration.total_seconds.return_value = duration - - representation.segments.return_value = segments - self.assertSequenceEqual(list(worker.iter_segments()), segments) - representation.segments.assert_called_with(init=True) - sleep.assert_called_with(5) - - @patch("streamlink.stream.dash_manifest.time.sleep") - def test_duplicate_rep_id(self, sleep): + mpd.dynamic = False + mpd.type = "static" + mpd.periods[0].duration.total_seconds.return_value = duration + + representation.segments.return_value = segments + assert list(worker.iter_segments()) == segments + assert representation.segments.call_args_list == [call(init=True)] + assert mock_wait.call_args_list == [call(5)] + assert worker._wait.is_set() + + def test_duplicate_rep_id(self): representation_vid = Mock(id=1, mimeType="video/mp4", height=720) - representation_aud = Mock(id=1, mimeType="audio/aac", lang='en') - - mpd = Mock(dynamic=False, - publishTime=1, - periods=[ - 
Mock(adaptationSets=[ - Mock(contentProtection=None, - representations=[ - representation_vid - ]), - Mock(contentProtection=None, - representations=[ - representation_aud - ]) - ]) - ]) - - self.assertEqual(representation_vid, DASHStreamWorker.get_representation(mpd, 1, "video/mp4")) - self.assertEqual(representation_aud, DASHStreamWorker.get_representation(mpd, 1, "audio/aac")) + representation_aud = Mock(id=1, mimeType="audio/aac", lang="en") + + mpd = Mock( + dynamic=False, + publishTime=1, + periods=[ + Mock( + adaptationSets=[ + Mock( + contentProtection=None, + representations=[representation_vid], + ), + Mock( + contentProtection=None, + representations=[representation_aud], + ), + ], + ), + ], + ) + + assert DASHStreamWorker.get_representation(mpd, 1, "video/mp4") is representation_vid + assert DASHStreamWorker.get_representation(mpd, 1, "audio/aac") is representation_aud
stream.dash (or stream.segmented): takes long time to finish dash manifest ### Checklist - [X] This is a bug report and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed bug reports](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22bug%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description When streamlink processes DASH manifest and makes an output, it takes a very long time in finishing. It took about 5 seconds in my test every time. I generated and tested the input files as follows. ``` $ mkdir /tmp/test $ ffmpeg -f lavfi -i testsrc \ -use_template 0 \ -init_seg_name 'init$RepresentationID$.$ext$' \ -media_seg_name 'seg$RepresentationID$-$Number%02d$.$ext$' \ -dash_segment_type mp4 \ -hls_playlist 1 -hls_master_name master.m3u8 \ -f dash -t 30 /tmp/test/manifest.mpd ``` Then ran streamlink. ``` $ streamlink -l trace dash://file:///tmp/test/manifest.mpd best -o /dev/null $ streamlink -l trace hls://file:///tmp/test/master.m3u8 best -o /dev/null ``` In the following log, please notice these lines for DASH manifest. For HLS playlist, streamlink finishes soon. ``` [00:59:47.924248][stream.dash][debug] Download of segment: file:///tmp/test/seg0-03.m4s complete [00:59:52.922646][stream.segmented][debug] Closing worker thread ``` ### Debug log ```text # log for DASH manifest [00:59:47.815443][cli][debug] OS: macOS 10.12.6 [00:59:47.815670][cli][debug] Python: 3.9.10 [00:59:47.815745][cli][debug] Streamlink: 3.1.0 [00:59:47.815865][cli][debug] Requests(2.27.1), Socks(1.7.1), Websocket(1.2.3) [00:59:47.816320][cli][debug] Arguments: [00:59:47.816447][cli][debug] url=dash://file:///tmp/test/manifest.mpd [00:59:47.816647][cli][debug] stream=['best'] [00:59:47.816800][cli][debug] --loglevel=trace [00:59:47.816950][cli][debug] --output=/dev/null [00:59:47.817352][cli][info] Found matching plugin dash for URL dash://file:///tmp/test/manifest.mpd [00:59:47.817557][plugins.dash][debug] Parsing MPD URL: file:///tmp/test/manifest.mpd [00:59:47.915656][utils.l10n][debug] Language code: en_US [00:59:47.915826][stream.dash][debug] Available languages for DASH audio streams: NONE (using: n/a) [00:59:47.917304][cli][info] Available streams: 240p (worst, best) [00:59:47.917486][cli][info] Opening stream: 240p (dash) [00:59:47.917880][cli][debug] Checking file output [00:59:47.918306][stream.dash][debug] Opening DASH reader for: 0 (video/mp4) [00:59:47.919071][cli][debug] Pre-buffering 8192 bytes [00:59:47.920896][stream.dash][debug] Download of segment: file:///tmp/test/init0.m4s complete [00:59:47.921416][cli][debug] Writing stream to output [00:59:47.922346][stream.dash][debug] Download of segment: file:///tmp/test/seg0-01.m4s complete [00:59:47.923426][stream.dash][debug] Download of segment: file:///tmp/test/seg0-02.m4s complete [00:59:47.924248][stream.dash][debug] Download of segment: file:///tmp/test/seg0-03.m4s complete [00:59:52.922646][stream.segmented][debug] Closing worker thread [00:59:52.923078][stream.segmented][debug] Closing writer thread [00:59:52.923513][cli][info] Stream ended [00:59:52.923871][cli][info] Closing currently open stream... 
# log for HLS playlist for comparison [01:00:01.271367][cli][debug] OS: macOS 10.12.6 [01:00:01.271601][cli][debug] Python: 3.9.10 [01:00:01.271713][cli][debug] Streamlink: 3.1.0 [01:00:01.271866][cli][debug] Requests(2.27.1), Socks(1.7.1), Websocket(1.2.3) [01:00:01.272300][cli][debug] Arguments: [01:00:01.272396][cli][debug] url=hls://file:///tmp/test/master.m3u8 [01:00:01.272525][cli][debug] stream=['best'] [01:00:01.272673][cli][debug] --loglevel=trace [01:00:01.272824][cli][debug] --output=/dev/null [01:00:01.273216][cli][info] Found matching plugin hls for URL hls://file:///tmp/test/master.m3u8 [01:00:01.273371][plugins.hls][debug] URL=file:///tmp/test/master.m3u8; params={} [01:00:01.359532][utils.l10n][debug] Language code: en_US [01:00:01.365674][cli][info] Available streams: 240p (worst, best) [01:00:01.365858][cli][info] Opening stream: 240p (hls) [01:00:01.366116][cli][debug] Checking file output [01:00:01.367035][stream.hls][debug] Reloading playlist [01:00:01.367991][cli][debug] Pre-buffering 8192 bytes [01:00:01.380139][stream.hls][debug] First Sequence: 1; Last Sequence: 3 [01:00:01.380287][stream.hls][debug] Start offset: 0; Duration: None; Start Sequence: 1; End Sequence: 3 [01:00:01.380366][stream.hls][debug] Adding segment 1 to queue [01:00:01.381053][stream.hls][debug] Adding segment 2 to queue [01:00:01.381717][stream.hls][debug] Adding segment 3 to queue [01:00:01.381872][stream.segmented][debug] Closing worker thread [01:00:01.382355][stream.hls][debug] Segment initialization 1 complete [01:00:01.383110][cli][debug] Writing stream to output [01:00:01.384028][stream.hls][debug] Segment 1 complete [01:00:01.384196][stream.hls][debug] Segment initialization 2 complete [01:00:01.385400][stream.hls][debug] Segment 2 complete [01:00:01.385591][stream.hls][debug] Segment initialization 3 complete [01:00:01.386479][stream.hls][debug] Segment 3 complete [01:00:01.386975][stream.segmented][debug] Closing writer thread [01:00:01.387420][cli][info] Stream ended [01:00:01.387602][cli][info] Closing currently open stream... ```
Duplicate of #3137 This doesn't seem to be a duplicate of that issue. I'm not using Windows and this report is not related to stream.ffmpegmux nor utils.named_pipe. I ran streamlink with options similar to that issue and had no delay in finishing. But changed stream 480p to worst (because of YouTube throttling) and vlc to mpv. (From the output of `youtube-dl -F`, both v 135 for 480p and v 160 for worst are mp4_dash containers so using worst shouldn't make any difference.) ``` $ streamlink -l trace https://www.youtube.com/watch?v=ET9HeSFYlEs worst --stream-type muxed-stream -p mpv -v --player-no-close ``` ### Debug log There is a gap in the time shown in the log, but when the playback is finished and mpv window is closed, streamlink also finishes immediately. '10:49:47' (start writing stream to output) plus '00:04:13' (duration of the stream) gets '10:54:00'. And, if I quit mpv with 'q' during playback, streamlink will finish immediately as expected. ``` [10:49:45.823986][cli][debug] OS: macOS 10.12.6 [10:49:45.824211][cli][debug] Python: 3.9.10 [10:49:45.824286][cli][debug] Streamlink: 3.1.0 [10:49:45.824424][cli][debug] Requests(2.27.1), Socks(1.7.1), Websocket(1.2.3) [10:49:45.824811][cli][debug] Arguments: [10:49:45.824898][cli][debug] url=https://www.youtube.com/watch?v=ET9HeSFYlEs [10:49:45.825062][cli][debug] stream=['worst'] [10:49:45.825215][cli][debug] --loglevel=trace [10:49:45.825351][cli][debug] --player=mpv [10:49:45.825487][cli][debug] --verbose-player=True [10:49:45.825620][cli][debug] --player-no-close=True [10:49:45.825761][cli][debug] --stream-types=['muxed-stream'] [10:49:45.826159][cli][info] Found matching plugin youtube for URL https://www.youtube.com/watch?v=ET9HeSFYlEs [10:49:46.309300][plugins.youtube][trace] videoDetails = ('ET9HeSFYlEs', 'Jump Cuts', 'Comedy', 'Ajith fans vs vijay fans - a conclusion', None) [10:49:46.309481][plugins.youtube][debug] Using video ID: ET9HeSFYlEs [10:49:46.313456][plugins.youtube][debug] MuxedStream: v 137 a 251 = 1080p [10:49:46.313610][plugins.youtube][debug] MuxedStream: v 135 a 251 = 480p [10:49:46.313710][plugins.youtube][debug] MuxedStream: v 133 a 251 = 240p [10:49:46.313872][plugins.youtube][debug] MuxedStream: v 160 a 251 = 144p [10:49:46.315920][cli][info] Available streams: 144p (worst), 240p, 480p, 1080p (best) [10:49:46.316080][cli][info] Opening stream: 144p (muxed-stream) [10:49:46.316220][cli][info] Starting player: mpv [10:49:46.316471][stream.ffmpegmux][debug] Opening http substream [10:49:46.351239][stream.ffmpegmux][debug] Opening http substream [10:49:46.401414][utils.named_pipe][info] Creating pipe streamlinkpipe-4405-1-3045 [10:49:46.403148][utils.named_pipe][info] Creating pipe streamlinkpipe-4405-2-3335 [10:49:46.403735][stream.ffmpegmux][debug] ffmpeg command: ffmpeg -nostats -y -i /tmp/streamlinkpipe-4405-1-3045 -i /tmp/streamlinkpipe-4405-2-3335 -c:v copy -c:a copy -map 0 -map 1 -f matroska pipe:1 [10:49:46.404102][stream.ffmpegmux][debug] Starting copy to pipe: /tmp/streamlinkpipe-4405-1-3045 [10:49:46.404495][stream.ffmpegmux][debug] Starting copy to pipe: /tmp/streamlinkpipe-4405-2-3335 [10:49:46.412891][cli][debug] Pre-buffering 8192 bytes [10:49:46.609740][cli.output][debug] Opening subprocess: mpv --force-media-title=https://www.youtube.com/watch?v=ET9HeSFYlEs - [file] Reading from stdin... 
[10:49:47.119582][cli][debug] Writing stream to output (+) Video --vid=1 (*) (h264 256x144 23.976fps) (+) Audio --aid=1 --alang=eng (*) (opus 2ch 48000Hz) [vo/gpu] opengl cocoa backend is deprecated, use vo=libmpv instead AO: [coreaudio] 48000Hz stereo 2ch floatp VO: [gpu] 256x144 yuv420p [10:53:44.688544][stream.ffmpegmux][debug] Pipe copy complete: /tmp/streamlinkpipe-4405-1-3045 [10:53:47.487416][stream.ffmpegmux][debug] Pipe copy complete: /tmp/streamlinkpipe-4405-2-3335 [10:53:54.922372][stream.ffmpegmux][debug] Closing ffmpeg thread [10:53:54.922803][stream.ffmpegmux][debug] Closed all the substreams [10:53:54.922967][cli][info] Stream ended AV: 00:04:13 / 00:04:13 (100%) A-V: 0.000 Exiting... (End of file) [10:54:00.553670][cli][info] Closing currently open stream... ``` Let me see... It's possible (or rather likely) that this is caused by the `sleep_until` method call in the `DASHStreamWriter` thread. - https://github.com/streamlink/streamlink/blame/3.1.1/src/streamlink/stream/dash_manifest.py#L70-L74 - https://github.com/streamlink/streamlink/blame/3.1.1/src/streamlink/stream/dash.py#L36 I haven't been involved with Streamlink's DASH implementation, but I recently rewrote the UStream plugin which was previously using this same method and its own writer thread was stalling because of that. The writer thread shouldn't call `time.sleep` and should instead wait for a thread-lock to time out which other threads (main thread or worker thread) can set to "cancel" the writer thread. That would solve the issue. The `DASHStreamWorker` thread has the same issue, which is probably the main reason here, because there are no "Waiting for segment" log messages in your log output. The worker basically does the same, but with the `sleeper` method. - https://github.com/streamlink/streamlink/blame/3.1.1/src/streamlink/stream/dash_manifest.py#L61-L67 - https://github.com/streamlink/streamlink/blame/3.1.1/src/streamlink/stream/dash.py#L91
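The fix sketched by the maintainer above — waiting on a thread event that other threads can set, rather than calling `time.sleep` — is the pattern the patch's `AwaitableMixin` introduces. A minimal, self-contained sketch of that pattern, with illustrative class and method names (not streamlink's actual ones):

```python
import threading
import time


class InterruptibleWorker(threading.Thread):
    """Worker that sleeps between iterations but can be woken up and stopped at any time."""

    def __init__(self):
        super().__init__(daemon=True)
        self._stopped = threading.Event()

    def wait(self, seconds: float) -> bool:
        # Event.wait() returns True when the event was set (i.e. we were interrupted),
        # so invert it: True here means the full duration elapsed normally.
        return not self._stopped.wait(seconds)

    def close(self):
        # Setting the event wakes up a pending wait() immediately.
        self._stopped.set()

    def run(self):
        while not self._stopped.is_set():
            print("doing one unit of work")
            if not self.wait(5.0):
                print("interrupted, shutting down")
                return


worker = InterruptibleWorker()
worker.start()
time.sleep(0.1)
worker.close()  # returns almost instantly instead of waiting out the remaining sleep
worker.join()
```

Because `Event.wait()` returns as soon as the event is set, `close()` interrupts a pending sleep immediately instead of letting the thread run out its full timer, which is exactly the shutdown delay described in this issue.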
2022-06-28T13:23:45
streamlink/streamlink
4,632
streamlink__streamlink-4632
[ "4622" ]
ed134af4ab354a4683963e5e684681a79d84e7af
diff --git a/src/streamlink/plugins/rtve.py b/src/streamlink/plugins/rtve.py --- a/src/streamlink/plugins/rtve.py +++ b/src/streamlink/plugins/rtve.py @@ -5,171 +5,190 @@ $region Spain """ -import base64 import logging import re +from base64 import b64decode +from io import BytesIO +from typing import Iterator, Sequence, Tuple from urllib.parse import urlparse -from Crypto.Cipher import Blowfish - from streamlink.plugin import Plugin, PluginArgument, PluginArguments, pluginmatcher from streamlink.plugin.api import validate from streamlink.stream.ffmpegmux import MuxedStream from streamlink.stream.hls import HLSStream from streamlink.stream.http import HTTPStream +from streamlink.utils.url import update_scheme log = logging.getLogger(__name__) -class ZTNRClient: - base_url = "https://ztnr.rtve.es/ztnr/res/" - block_size = 16 +class Base64Reader: + def __init__(self, data: str): + stream = BytesIO(b64decode(data)) - def __init__(self, key, session): - self.cipher = Blowfish.new(key, Blowfish.MODE_ECB) - self.session = session + def _iterate(): + while True: + chunk = stream.read(1) + if len(chunk) == 0: # pragma: no cover + return + yield ord(chunk) - @classmethod - def pad(cls, data): - n = cls.block_size - len(data) % cls.block_size - return data + bytes(chr(cls.block_size - len(data) % cls.block_size), "utf8") * n + self._iterator: Iterator[int] = _iterate() - @staticmethod - def unpad(data): - return data[0:-data[-1]] + def read(self, num: int) -> Sequence[int]: + res = [] + for _ in range(num): + item = next(self._iterator, None) + if item is None: # pragma: no cover + break + res.append(item) + return res + + def skip(self, num: int) -> None: + self.read(num) + + def read_chars(self, num: int) -> str: + return "".join(chr(item) for item in self.read(num)) - def encrypt(self, data): - return base64.b64encode(self.cipher.encrypt(self.pad(bytes(data, "utf-8"))), altchars=b"-_").decode("ascii") + def read_int(self) -> int: + a, b, c, d = self.read(4) + return a << 24 | b << 16 | c << 8 | d - def decrypt(self, data): - return self.unpad(self.cipher.decrypt(base64.b64decode(data, altchars=b"-_"))) + def read_chunk(self) -> Tuple[str, Sequence[int]]: + size = self.read_int() + chunktype = self.read_chars(4) + chunkdata = self.read(size) + if len(chunkdata) != size: # pragma: no cover + raise ValueError("Invalid chunk length") + self.skip(4) + return chunktype, chunkdata - def request(self, data, *args, **kwargs): - res = self.session.http.get(self.base_url + self.encrypt(data), *args, **kwargs) - return self.decrypt(res.content) - def get_cdn_list(self, vid, manager="apedemak", vtype="video", lang="es", schema=None): - data = self.request("{id}_{manager}_{type}_{lang}".format(id=vid, manager=manager, type=vtype, lang=lang)) - if schema: - return schema.validate(data) - else: - return data +class ZTNR: + @staticmethod + def _get_alphabet(text: str) -> str: + res = [] + j = 0 + k = 0 + for char in text: + if k > 0: + k -= 1 + else: + res.append(char) + j = (j + 1) % 4 + k = j + return "".join(res) + + @staticmethod + def _get_url(text: str, alphabet: str) -> str: + res = [] + j = 0 + n = 0 + k = 3 + cont = 0 + for char in text: + if j == 0: + n = int(char) * 10 + j = 1 + elif k > 0: + k -= 1 + else: + res.append(alphabet[n + int(char)]) + j = 0 + k = cont % 4 + cont += 1 + return "".join(res) + + @classmethod + def _get_source(cls, alphabet: str, data: str) -> str: + return cls._get_url(data, cls._get_alphabet(alphabet)) + + @classmethod + def translate(cls, data: str) -> Iterator[Tuple[str, 
str]]: + reader = Base64Reader(data.replace("\n", "")) + reader.skip(8) + chunk_type, chunk_data = reader.read_chunk() + while chunk_type != "IEND": + if chunk_type == "tEXt": + content = "".join(chr(item) for item in chunk_data if item > 0) + if "#" not in content or "%%" not in content: # pragma: no cover + continue + alphabet, content = content.split("#", 1) + quality, content = content.split("%%", 1) + yield quality, cls._get_source(alphabet, content) + chunk_type, chunk_data = reader.read_chunk() @pluginmatcher(re.compile( r"https?://(?:www\.)?rtve\.es/play/videos/.+" )) class Rtve(Plugin): - _re_idAsset = re.compile(r"\"idAsset\":\"(\d+)\"") - secret_key = base64.b64decode("eWVMJmRhRDM=") - cdn_schema = validate.Schema( - validate.parse_xml(invalid_char_entities=True), - validate.xml_findall(".//preset"), - [ - validate.union({ - "quality": validate.all(validate.getattr("attrib"), - validate.get("type")), - "urls": validate.all( - validate.xml_findall(".//url"), - [validate.getattr("text")] - ) - }) - ] - ) - subtitles_api = "https://www.rtve.es/api/videos/{id}/subtitulos.json" - subtitles_schema = validate.Schema({ - "page": { - "items": [{ - "src": validate.url(), - "lang": validate.text - }] - } - }, - validate.get("page"), - validate.get("items")) - video_api = "https://www.rtve.es/api/videos/{id}.json" - video_schema = validate.Schema({ - "page": { - "items": [{ - "qualities": [{ - "preset": validate.text, - "height": int - }] - }] - } - }, - validate.get("page"), - validate.get("items"), - validate.get(0)) - arguments = PluginArguments( - PluginArgument("mux-subtitles", is_global=True) + PluginArgument("mux-subtitles", is_global=True), ) - def __init__(self, url): - super().__init__(url) - self.zclient = ZTNRClient(self.secret_key, self.session) - - def _get_subtitles(self, content_id): - res = self.session.http.get(self.subtitles_api.format(id=content_id)) - return self.session.http.json(res, schema=self.subtitles_schema) - - def _get_quality_map(self, content_id): - res = self.session.http.get(self.video_api.format(id=content_id)) - data = self.session.http.json(res, schema=self.video_schema) - qmap = {} - for item in data["qualities"]: - qname = {"MED": "Media", "HIGH": "Alta", "ORIGINAL": "Original"}.get(item["preset"], item["preset"]) - qmap[qname] = f"{item['height']}p" - return qmap + URL_VIDEOS = "https://ztnr.rtve.es/ztnr/movil/thumbnail/rtveplayw/videos/{id}.png?q=v2" + URL_SUBTITLES = "https://www.rtve.es/api/videos/{id}/subtitulos.json" def _get_streams(self): - res = self.session.http.get(self.url) - m = self._re_idAsset.search(res.text) - if m: - content_id = m.group(1) - log.debug(f"Found content with id: {content_id}") - stream_data = self.zclient.get_cdn_list(content_id, schema=self.cdn_schema) - quality_map = None - - streams = [] - for stream in stream_data: - # only use one stream - _one_m3u8 = False - _one_mp4 = False - for url in stream["urls"]: - p_url = urlparse(url) - if p_url.path.endswith(".m3u8"): - if _one_m3u8: - continue - try: - streams.extend(HLSStream.parse_variant_playlist(self.session, url).items()) - _one_m3u8 = True - except OSError as err: - log.error(str(err)) - elif p_url.path.endswith(".mp4"): - if _one_mp4: - continue - if quality_map is None: # only make the request when it is necessary - quality_map = self._get_quality_map(content_id) - # rename the HTTP sources to match the HLS sources - quality = quality_map.get(stream["quality"], stream["quality"]) - streams.append((quality, HTTPStream(self.session, url))) - _one_mp4 = True - - 
subtitles = None - if self.get_option("mux_subtitles"): - subtitles = self._get_subtitles(content_id) - if subtitles: - substreams = {} - for i, subtitle in enumerate(subtitles): - substreams[subtitle["lang"]] = HTTPStream(self.session, subtitle["src"]) - - for q, s in streams: - yield q, MuxedStream(self.session, s, subtitles=substreams) - else: - for s in streams: - yield s + self.id = self.session.http.get(self.url, schema=validate.Schema( + validate.transform(re.compile(r"\bdata-setup='({.+?})'", re.DOTALL).search), + validate.any(None, validate.all( + validate.get(1), + validate.parse_json(), + { + "idAsset": validate.any(int, validate.all(str, validate.transform(int))), + }, + validate.get("idAsset") + )), + )) + if not self.id: + return + + urls = self.session.http.get( + self.URL_VIDEOS.format(id=self.id), + schema=validate.Schema( + validate.transform(ZTNR.translate), + validate.transform(list), + [(str, validate.url())], + ), + ) + + url = next((url for _, url in urls if urlparse(url).path.endswith(".m3u8")), None) + if not url: + url = next((url for _, url in urls if urlparse(url).path.endswith(".mp4")), None) + if url: + yield "vod", HTTPStream(self.session, url) + return + + streams = HLSStream.parse_variant_playlist(self.session, url).items() + + if self.options.get("mux-subtitles"): + subs = self.session.http.get( + self.URL_SUBTITLES.format(id=self.id), + schema=validate.Schema( + validate.parse_json(), + { + "page": { + "items": [{ + "lang": str, + "src": validate.url(), + }] + } + }, + validate.get(("page", "items")), + ), + ) + if subs: + subtitles = { + s["lang"]: HTTPStream(self.session, update_scheme("https://", s["src"], force=True)) + for s in subs + } + for quality, stream in streams: + yield quality, MuxedStream(self.session, stream, subtitles=subtitles) + return + + yield from streams __plugin__ = Rtve
diff --git a/tests/plugins/test_rtve.py b/tests/plugins/test_rtve.py --- a/tests/plugins/test_rtve.py +++ b/tests/plugins/test_rtve.py @@ -1,4 +1,4 @@ -from streamlink.plugins.rtve import Rtve +from streamlink.plugins.rtve import Rtve, ZTNR from tests.plugins import PluginCanHandleUrl @@ -6,16 +6,42 @@ class TestPluginCanHandleUrlRtve(PluginCanHandleUrl): __plugin__ = Rtve should_match = [ - 'https://www.rtve.es/play/videos/directo/la-1/', - 'https://www.rtve.es/play/videos/directo/canales-lineales/24h/', - 'https://www.rtve.es/play/videos/rebelion-en-el-reino-salvaje/mata-reyes/5803959/', + "https://www.rtve.es/play/videos/directo/la-1/", + "https://www.rtve.es/play/videos/directo/canales-lineales/24h/", + "https://www.rtve.es/play/videos/rebelion-en-el-reino-salvaje/mata-reyes/5803959/", ] should_not_match = [ - 'https://www.rtve.es', - 'http://www.rtve.es/directo/la-1', - 'http://www.rtve.es/directo/la-2/', - 'http://www.rtve.es/directo/teledeporte/', - 'http://www.rtve.es/directo/canal-24h/', - 'http://www.rtve.es/infantil/directo/', + "https://www.rtve.es", + "http://www.rtve.es/directo/la-1", + "http://www.rtve.es/directo/la-2/", + "http://www.rtve.es/directo/teledeporte/", + "http://www.rtve.es/directo/canal-24h/", + "http://www.rtve.es/infantil/directo/", + ] + + +def test_translate(): + # real payload with modified end (IEND chunk of size 0), to reduce test size + data = \ + "iVBORw0KGgoAAAANSUhEUgAAAVQAAAFUCAIAAAD08FPiAAACr3RFWHRXczlVSWdtM2ZPTGY4b2R4" \ + "dWo5aHZnRlRhOndvZEtxN3pLOG5oNGRpbT1vREBTWHhOMGtzUVomNndAWkV5cz1GOUlCSiYxdDcy" \ + "QmdDOFM2NGFVJmh1Nzk2bUpwOFVJOE1DJlpAY2lzdGcmbEUmRE5DZFV4SHpEOFgvLmppZ1l4b3M1" \ + "QU1lOnl3ZS04VlBwQkZvLlFMUWZHTy1vQjNVeHhfVDF1JkRSQTpPP2J4Wm0zbFlxS3IjAEhEX1JF" \ + "QURZJSUwNTYwNzI4Mjg4MzUyNjQyMzUxMTA0Mzg0NzI4NzY4NDEyODAzODU0ODMwMDQ3NzcwNDEx" \ + "MDAyODE1MzM3NDU3ODAxMDg3MjgxNTg1MzMzNDE3MTYxMTE4NzQ1MTU3MjYxOTUwNzI4NzEyNDgw" \ + "MzI4NTM1ODM1ODU3MzQyNzE0NjcyODE2NTgzNDI4NTE0NTg1MzIwMzgxODU3NDY0NzUwODI3OTQ0" \ + "ODg3NjEzMTUzNDMxMTUxNzYzNDU1NzE0MDA1MDUzNDIxODE0ODYyNDIzODM2MTczMzQ0NjAwNTIw" \ + "NTU2NDYyNDgxODYzNDA2MzA4MTE0ODUxMTQ2Mzg2MzYyMjQ4Mjc3MjIyMjUzNjMxMjI1MjEzMTU0" \ + "NjI1NjIyMjM3MTA4NjEwNjI0NTYyNTMxNTA2ODEyMjQ2MzYzNzE0MzY4MDU1MTgxNTQ2NTU3MTMx" \ + "NTI0NzU4MTU2NjAxMjY0MjA1MDU2MzcwMDM3NzcwMjA0MTYxMzE3MjQxMTI2NzYzMzUyNjY3NTQ1" \ + "NTA1MTUxNTc2NTEzMTUwNjcxNDcyMDI2MTQyMjczNTI4NzExNjA4NTU3NjIzMzMxMzU0NDM1Mzgw" \ + "MTI0MTQzMTU1MTMyNzc4ODI1MjcyMjUwMjY4MzYyMDUzMjQzNjA0MTYyMzkhB8fSAAAAAElFTkQAAAAACg==" + + assert list(ZTNR.translate(data)) == [ + ( + "HD_READY", + "https://rtvehlsvodlote7modo2.rtve.es/mediavodv2/resources/TE_NGVA/mp4/5/3/1656573649835.mp4/video.m3u8" + + "?hls_no_audio_only=true&idasset=6638770" + ), ]
plugins.rtve: live streams broken ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description Since 22 June 2022 the "live" channels have stopped working. VOD works perfectly. Example: streamlink --stdout --default-stream best --url https://www.rtve.es/play/videos/directo/canales-lineales/la-1 Error message: [cli][info] Found matching plugin rtve for URL https://www.rtve.es/play/videos/directo/canales-lineales/la-1 error: No playable streams found on this URL: https://www.rtve.es/play/videos/directo/canales-lineales/la-1 ### Debug log ```text [cli][debug] OS: Windows 10 [cli][debug] Python: 3.10.5 [cli][debug] Streamlink: 4.1.0 [cli][debug] Requests(2.27.1), Socks(1.7.1), Websocket(1.3.2) [cli][debug] Arguments: [cli][debug] url=https://www.rtve.es/play/videos/directo/canales-lineales/la-1 [cli][debug] --loglevel=debug [cli][debug] --stdout=True [cli][debug] --url=https://www.rtve.es/play/videos/directo/canales-lineales/la-1 [cli][debug] --default-stream=['best'] [cli][debug] --ffmpeg-ffmpeg=C:\Program Files\Streamlink\ffmpeg\ffmpeg.exe [cli][info] Found matching plugin rtve for URL https://www.rtve.es/play/videos/directo/canales-lineales/la-1 [plugins.rtve][debug] Found content with id: 1688877 error: No playable streams found on this URL: https://www.rtve.es/play/videos/directo/canales-lineales/la-1 ```
Fixing the live streams is pretty simple. I have already done that on my local branch. I haven't had a closer look at the VODs yet though. There seem to be two kinds of VODs, regular short videos and movies with subtitles, but they all appear to be working via the regular HLS stream URL format the live streams are using. The plugin currently implements lots of stuff which seems to be pointless now (ZTNRClient, CDN stuff, video API), but as said, I haven't had a closer look yet at the other kind of content. https://github.com/streamlink/streamlink/blob/cc455586c55ea9bdbcd3a4c58f62b9bf9a0dddd0/src/streamlink/plugins/rtve.py Great, thanks, I will wait until the modification is uploaded to the main branch, since the link provided only links to the original file and not the modified one. Thanks again. I had another look at this. As said, updating the live streams is fairly easy. The VODs and movies, however, are accessed differently now via their website: obfuscated HLS URLs are embedded in PNG files with custom base64 alphabets. The HLS URL obfuscation requires re-implementing lots of JS code in Python, from the `rtve/ztnrThumbnail` module of this JS file: https://js2.rtve.es/player/pf_video.js The old API which the plugin is currently using for getting VOD HTTP URLs seems to be still working for whatever reason, but it's likely that it won't stay available for long. I will submit a PR for updating the live streams later today or tomorrow. It's not worth the time implementing the de-obfuscation of VOD HLS URLs. Also no idea about the available movies and their subtitles.
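As background to the PNG trick mentioned above: the patch's `Base64Reader.read_chunk` walks the standard PNG chunk layout (an 8-byte signature, then repeated 4-byte big-endian length, 4-byte type, payload and 4-byte CRC) to pull out the `tEXt` chunks that carry the obfuscated quality/URL pairs. A standalone sketch of that chunk walk — the helper name and usage below are illustrative only, not plugin code:

```python
import struct
from io import BytesIO
from typing import Iterator, Tuple


def iter_png_chunks(data: bytes) -> Iterator[Tuple[str, bytes]]:
    """Yield (chunk_type, payload) pairs from a PNG byte string."""
    stream = BytesIO(data)
    stream.read(8)  # skip the 8-byte PNG signature
    while True:
        header = stream.read(8)
        if len(header) < 8:
            return
        size, raw_type = struct.unpack(">I4s", header)
        chunk_type = raw_type.decode("ascii")
        payload = stream.read(size)
        stream.read(4)  # skip the CRC
        yield chunk_type, payload
        if chunk_type == "IEND":
            return


# e.g. pick out the tEXt chunks that hold the obfuscated "alphabet#...%%..." strings:
# text_chunks = [payload for chunk_type, payload in iter_png_chunks(png_bytes) if chunk_type == "tEXt"]
```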
2022-06-30T22:15:41
streamlink/streamlink
4,634
streamlink__streamlink-4634
[ "3137" ]
ad8cf544236a420808111efeec6122d8a41467e5
diff --git a/src/streamlink/stream/ffmpegmux.py b/src/streamlink/stream/ffmpegmux.py --- a/src/streamlink/stream/ffmpegmux.py +++ b/src/streamlink/stream/ffmpegmux.py @@ -1,3 +1,4 @@ +import concurrent.futures import logging import subprocess import sys @@ -183,10 +184,13 @@ def close(self): self.process.stdout.close() # close the streams + futures = [] + executor = concurrent.futures.ThreadPoolExecutor() for stream in self.streams: if hasattr(stream, "close") and callable(stream.close): - stream.close() + futures.append(executor.submit(stream.close)) + concurrent.futures.wait(futures, return_when=concurrent.futures.ALL_COMPLETED) log.debug("Closed all the substreams") if self.close_errorlog:
Delay in closing of Muxed Streams <!-- Thanks for reporting a bug! USE THE TEMPLATE. Otherwise your bug report may be rejected. First, see the contribution guidelines: https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink Also check the list of open and closed bug reports: https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22bug%22 Please see the text preview to avoid unnecessary formatting errors. --> ## Bug Report <!-- Replace [ ] with [x] in order to check the box --> - [x] This is a bug report and I have read the contribution guidelines. ### Description <!-- Explain the bug as thoroughly as you can. Don't leave out information which is necessary for us to reproduce and debug this issue. --> ### Expected / Actual behavior **Expected:** Muxed Streams to close as fast as other streams. **Actual:** Muxed Streams take much longer to close than other stream-types. Muxed Streams takes around 9 seconds to close on average in my system, whereas other streams close almost instantly when the player is closed or when terminated from the command line. ### Reproduction steps / Explicit stream URLs to test <!-- How can we reproduce this? Please note the exact steps below using the list format supplied. If you need more steps please add them. --> 1. Stream a supported video using the --stream-type: muxed-stream parameter or simply play a 1080p Youtube video. 2. Close the stream using Command C. 3. Note how long the close operation takes. ### Log output <!-- TEXT LOG OUTPUT IS REQUIRED for a bug report! Use the `--loglevel debug` parameter and avoid using parameters which suppress log output. https://streamlink.github.io/cli.html#cmdoption-l Make sure to **remove usernames and passwords** You can copy the output to https://gist.github.com/ or paste it below. --> ``` > streamlink https://www.youtube.com/watch?v=ET9HeSFYlEs 480p --stream-type muxed-stream [cli][info] Found matching plugin youtube for URL https://www.youtube.com/watch?v=ET9HeSFYlEs [cli][info] Opening stream: 480p (muxed-stream) [cli][info] Starting player: "C:\Program Files (x86)\VideoLAN\VLC\vlc.exe" [cli][info] Stream ended Interrupted! Exiting... [cli][info] Closing currently open stream... ``` ### Additional comments, screenshots, etc. The lines ``` [cli][info] Stream ended Interrupted! Exiting... [cli][info] Closing currently open stream... ``` only appear after the delay period. I've figured this to be because of the need to close named pipes in muxed-streams. ```python def close(self): if self.pipe: windll.kernel32.DisconnectNamedPipe(self.pipe) else: self.fifo.close() os.unlink(self.path) ``` The above code is likely the cause of the delay. Can this delay be reduced? ### Environments Windows 10 v. 1909 Python 3.8.3 MSVSC 1.47.3
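The merged patch shown above shortens this shutdown by closing all substreams concurrently, so one slow named-pipe teardown no longer serializes the rest. A generic sketch of the same idea, assuming hypothetical `resources` objects that expose a `close()` method:

```python
import concurrent.futures


def close_all(resources):
    """Close every resource in parallel and block until all of them have finished."""
    with concurrent.futures.ThreadPoolExecutor() as executor:
        futures = [
            executor.submit(resource.close)
            for resource in resources
            if hasattr(resource, "close") and callable(resource.close)
        ]
        concurrent.futures.wait(futures, return_when=concurrent.futures.ALL_COMPLETED)
```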
2022-07-01T22:49:43
streamlink/streamlink
4,636
streamlink__streamlink-4636
[ "4635" ]
b277114d53ad07abe079283df738d6d6ae2746f8
diff --git a/src/streamlink/plugins/trovo.py b/src/streamlink/plugins/trovo.py --- a/src/streamlink/plugins/trovo.py +++ b/src/streamlink/plugins/trovo.py @@ -7,6 +7,7 @@ import logging import random import re +import sys from streamlink.plugin import Plugin, pluginmatcher from streamlink.plugin.api import validate @@ -17,16 +18,15 @@ @pluginmatcher(re.compile(r""" - https?://(?:www\.)?trovo\.live/ - (?: - (?: - (?:clip|video)/(?P<video_id>[^/?&]+) - ) - | - (?P<user>[^/?&]+) - ) + https?://(?:www\.)?trovo\.live/s/(?P<user>[^/?&]+)(?:/\d+\?vid=(?P<video_id>[^/?&]+))? """, re.VERBOSE)) class Trovo(Plugin): + @classmethod + def stream_weight(cls, stream): + if stream == "source": + return sys.maxsize, stream + return super().stream_weight(stream) + @staticmethod def generate_qid(): return f"{random.getrandbits(40):010x}".upper() @@ -96,7 +96,7 @@ def get_vod(self, video_id): for s in json["vodInfo"]["playInfos"]: q = s["desc"] if "(source)" in q: - q = f"source_{q.replace('(source)', '')}" + q = "source" yield q, HLSStream(self.session, update_scheme("https:", s["playUrl"])) def get_live(self, user):
diff --git a/tests/plugins/test_trovo.py b/tests/plugins/test_trovo.py --- a/tests/plugins/test_trovo.py +++ b/tests/plugins/test_trovo.py @@ -6,15 +6,21 @@ class TestPluginCanHandleUrlTrovo(PluginCanHandleUrl): __plugin__ = Trovo should_match_groups = [ - ("https://trovo.live/UserName", {"user": "UserName"}), - ("https://trovo.live/clip/clip_123", {"video_id": "clip_123"}), - ("https://trovo.live/video/video_456", {"video_id": "video_456"}), - ("https://www.trovo.live/UserName", {"user": "UserName"}), - ("https://www.trovo.live/clip/clip_123", {"video_id": "clip_123"}), - ("https://www.trovo.live/video/video_456", {"video_id": "video_456"}), + ("https://trovo.live/s/UserName", {"user": "UserName"}), + ("https://trovo.live/s/UserName/abc", {"user": "UserName"}), + ("https://trovo.live/s/UserName/123", {"user": "UserName"}), + ("https://trovo.live/s/UserName/123?vid=vc-456&adtag=", {"user": "UserName", "video_id": "vc-456"}), + ("https://trovo.live/s/UserName/123?vid=ltv-1_2_3&adtag=", {"user": "UserName", "video_id": "ltv-1_2_3"}), + ("https://www.trovo.live/s/UserName", {"user": "UserName"}), + ("https://www.trovo.live/s/UserName/abc", {"user": "UserName"}), + ("https://www.trovo.live/s/UserName/123", {"user": "UserName"}), + ("https://www.trovo.live/s/UserName/123?vid=vc-456&adtag=", {"user": "UserName", "video_id": "vc-456"}), + ("https://www.trovo.live/s/UserName/123?vid=ltv-1_2_3&adtag=", {"user": "UserName", "video_id": "ltv-1_2_3"}), ] should_not_match = [ "https://trovo.live/", "https://www.trovo.live/", + "https://www.trovo.live/s/", + "https://www.trovo.live/other/", ]
plugins.trovo: API error(s): {"ret":10505,"msg":"Data not exist"} when using plugin ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description When trying to use streamlink with any Trovo stream, I get a {"ret":10505,"msg":"Data not exist"} error. It seems they changed something when adding their Discord-like interface. Steps: 1. Get any stream URL from Trovo 2. Try streamlink https://trovo.live/s/TheALounge/221604214 (If it can help, on another GitHub project I found a similar case: https://github.com/yt-dlp/yt-dlp/issues/4135) ### Debug log ```text streamlink https://trovo.live/s/TheALounge/221604214 best --loglevel debug -o "F:\Live\Theta di Gon %DATE:~-4%-%DATE:~-7,2%-%DATE:~-10,2% %TIME::=-% Bis.mp4" --retry-streams 4 --hls-segment-threads 4 [cli][debug] OS: Windows 10 [cli][debug] Python: 3.10.5 [cli][debug] Streamlink: 4.1.0 [cli][debug] Requests(2.27.1), Socks(1.7.1), Websocket(1.3.2) [cli][debug] Arguments: [cli][debug] url=https://trovo.live/s/TheALounge/221604214 [cli][debug] stream=['best'] [cli][debug] --loglevel=debug [cli][debug] --output=F:\Live\2022-07-03 14-18-25,94 Bis.mp4 [cli][debug] --retry-streams=4.0 [cli][debug] --hls-segment-threads=4 [cli][debug] --ffmpeg-ffmpeg=F:\Program Files (x86)\Streamlink\ffmpeg\ffmpeg.exe [cli][info] Found matching plugin trovo for URL https://trovo.live/s/TheALounge/221604214 [plugins.trovo][error] API error(s): {"ret":10505,"msg":"Data not exist"} [cli][info] Waiting for streams, retrying every 4.0 second(s) ```
It looks like they changed the URL path a little bit for live streams. It is now prepended with `s/` - it wasn't originally. It's an easy fix. Can you test or provide links to some past streams and clips, please? It seems that Trovo no longer allows people to obtain the share URLs for these without logging in - and I do not have an account. Thanks.
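For reference, the matcher in the patch above now expects the `/s/<user>[/<number>?vid=<id>]` layout. A quick standalone check of how it captures the user name and video ID — the regex is copied from the patch, and the URLs come from the issue and the test patch:

```python
import re

matcher = re.compile(r"""
    https?://(?:www\.)?trovo\.live/s/(?P<user>[^/?&]+)(?:/\d+\?vid=(?P<video_id>[^/?&]+))?
""", re.VERBOSE)

live = matcher.match("https://trovo.live/s/TheALounge/221604214")
vod = matcher.match("https://trovo.live/s/UserName/123?vid=ltv-1_2_3&adtag=")

print(live.group("user"), live.group("video_id"))  # TheALounge None
print(vod.group("user"), vod.group("video_id"))    # UserName ltv-1_2_3
```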
2022-07-03T21:52:10
streamlink/streamlink
4,638
streamlink__streamlink-4638
[ "4637" ]
2632287f254cfcb10450d065ab2119b1c37eac18
diff --git a/src/streamlink/plugins/vk.py b/src/streamlink/plugins/vk.py --- a/src/streamlink/plugins/vk.py +++ b/src/streamlink/plugins/vk.py @@ -6,6 +6,7 @@ import logging import re +from hashlib import md5 from urllib.parse import parse_qsl, unquote, urlparse from streamlink.exceptions import NoStreamsError @@ -13,6 +14,7 @@ from streamlink.plugin.api import validate from streamlink.stream.dash import DASHStream from streamlink.stream.hls import HLSStream +from streamlink.utils.url import update_qsd log = logging.getLogger(__name__) @@ -25,6 +27,22 @@ )) class VK(Plugin): API_URL = "https://vk.com/al_video.php" + HASH_COOKIE = "hash429" + + def _get_cookies(self): + def on_response(res, **kwargs): + if res.headers.get("x-waf-redirect") == "1": + if not res.headers.get("X-WAF-Backend-Status"): + log.debug("Getting WAF cookie") + cookie = res.cookies.get(self.HASH_COOKIE) + key = md5(cookie.encode("utf-8")).hexdigest() + res.headers["Location"] = update_qsd(res.headers["Location"], qsd={"key": key}) + return res + elif res.headers.get("X-WAF-Backend-Status") == "challenge_success": + self.session.http.cookies.update(res.cookies) + return res + + self.session.http.get("https://vk.com/", hooks={"response": on_response}) def _has_video_id(self): return any(m for m in self.matches[:-1]) @@ -56,21 +74,19 @@ def follow_vk_redirect(self): raise NoStreamsError(self.url) def _get_streams(self): + self._get_cookies() self.follow_vk_redirect() video_id = self.match.group("video_id") if not video_id: return - log.debug(f"video ID: {video_id}") + log.debug(f"Video ID: {video_id}") try: data = self.session.http.post( self.API_URL, - params={ - "act": "show", - "al": "1", - "video": video_id, - }, + params={"act": "show"}, + data={"act": "show", "al": "1", "video": video_id}, headers={"Referer": self.url}, schema=validate.Schema( validate.transform(lambda text: re.sub(r"^\s*<!--\s*", "", text)),
plugins.vk: fixes required ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest build from the master branch ### Description Needs support for the web application firewall cookie. ~Seems necessary to add all parameters to the API POST data in `_get_streams()` now.~ ref: https://github.com/streamlink/streamlink/pull/4613#issuecomment-1173040359 ### Debug log ```text $ streamlink -l debug https://vk.com/video-211154316_456239169 [cli][debug] OS: Linux-4.9.0-18-amd64-x86_64-with-debian-9.13 [cli][debug] Python: 3.7.3 [cli][debug] Streamlink: 4.1.0+45.gb277114d [cli][debug] Dependencies: [cli][debug] isodate: 0.6.1 [cli][debug] lxml: 4.9.1 [cli][debug] pycountry: 22.3.5 [cli][debug] pycryptodome: 3.15.0 [cli][debug] PySocks: 1.7.1 [cli][debug] requests: 2.28.1 [cli][debug] websocket-client: 1.3.3 [cli][debug] importlib-metadata: 4.12.0 [cli][debug] Arguments: [cli][debug] url=https://vk.com/video-211154316_456239169 [cli][debug] --loglevel=debug [cli][info] Found matching plugin vk for URL https://vk.com/video-211154316_456239169 [plugins.vk][debug] video ID: -211154316_456239169 [plugins.vk][error] Could not parse API response error: No playable streams found on this URL: https://vk.com/video-211154316_456239169 ```
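The patch above deals with the WAF challenge through a `requests` response hook: when a response carries `x-waf-redirect: 1` and no backend status, it md5-hashes the `hash429` cookie and appends the digest as the `key` query parameter of the redirect target before the redirect is followed. A reduced sketch of that hook outside streamlink — it leaves out the `challenge_success` cookie handling from the patch, and `update_qsd` below is a simplified stand-in rather than the streamlink utility of the same name:

```python
from hashlib import md5
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

import requests


def update_qsd(url: str, qsd: dict) -> str:
    """Return the URL with the given query-string parameters added or replaced."""
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))
    query.update(qsd)
    return urlunsplit(parts._replace(query=urlencode(query)))


def waf_hook(response, **kwargs):
    if response.headers.get("x-waf-redirect") == "1" and not response.headers.get("X-WAF-Backend-Status"):
        cookie = response.cookies.get("hash429")
        if cookie:
            key = md5(cookie.encode("utf-8")).hexdigest()
            # requests dispatches hooks before resolving redirects, so rewriting
            # the Location header changes where the follow-up request goes
            response.headers["Location"] = update_qsd(response.headers["Location"], {"key": key})
    return response


session = requests.Session()
session.get("https://vk.com/", hooks={"response": waf_hook})
```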
2022-07-04T18:15:39
streamlink/streamlink
4,641
streamlink__streamlink-4641
[ "4640" ]
ec9f5a9156845c8b80ed7a206d4b0e1c9fb92ddf
diff --git a/src/streamlink/plugins/vidio.py b/src/streamlink/plugins/vidio.py --- a/src/streamlink/plugins/vidio.py +++ b/src/streamlink/plugins/vidio.py @@ -5,64 +5,75 @@ """ import logging import re +from urllib.parse import urlsplit, urlunsplit from streamlink.plugin import Plugin, pluginmatcher -from streamlink.plugin.api import useragents, validate +from streamlink.plugin.api import validate +from streamlink.stream.dash import DASHStream from streamlink.stream.hls import HLSStream log = logging.getLogger(__name__) @pluginmatcher(re.compile( - r"https?://(?:www\.)?vidio\.com/(?:en/)?(?P<type>live|watch)/(?P<id>\d+)-(?P<name>[^/?#&]+)" + r"https?://(?:www\.)?vidio\.com/" )) class Vidio(Plugin): - _playlist_re = re.compile(r'''hls-url=["'](?P<url>[^"']+)["']''') - _data_id_re = re.compile(r'''meta\s+data-id=["'](?P<id>[^"']+)["']''') - - csrf_tokens_url = "https://www.vidio.com/csrf_tokens" tokens_url = "https://www.vidio.com/live/{id}/tokens" - token_schema = validate.Schema(validate.parse_json(), - {"token": validate.text}, - validate.get("token")) - - def get_csrf_tokens(self): - return self.session.http.get( - self.csrf_tokens_url, - schema=self.token_schema - ) - def get_url_tokens(self, stream_id): - log.debug("Getting stream tokens") - csrf_token = self.get_csrf_tokens() + def _get_stream_token(self, stream_id, stream_type): + log.debug("Getting stream token") return self.session.http.post( self.tokens_url.format(id=stream_id), - files={"authenticity_token": (None, csrf_token)}, - headers={ - "User-Agent": useragents.CHROME, - "Referer": self.url - }, - schema=self.token_schema + params={"type": stream_type}, + headers={"Referer": self.url}, + schema=validate.Schema( + validate.parse_json(), + {"token": str}, + validate.get("token"), + ), ) def _get_streams(self): - res = self.session.http.get(self.url) + stream_id, has_token, hls_url, dash_url = self.session.http.get( + self.url, + schema=validate.Schema( + validate.parse_html(), + validate.xml_find(".//*[@data-video-id]"), + validate.union(( + validate.get("data-video-id"), + validate.all( + validate.get("data-video-has-token"), + validate.transform(lambda val: val and val != "false"), + ), + validate.get("data-vjs-clip-hls-url"), + validate.get("data-vjs-clip-dash-url"), + )), + ), + ) - plmatch = self._playlist_re.search(res.text) - idmatch = self._data_id_re.search(res.text) + if dash_url and has_token: + token = self._get_stream_token(stream_id, "dash") + parsed = urlsplit(dash_url) + dash_url = urlunsplit(parsed._replace(path=f"{token}{parsed.path}")) + return DASHStream.parse_manifest( + self.session, + dash_url, + headers={"Referer": self.url}, + ) - hls_url = plmatch and plmatch.group("url") - stream_id = idmatch and idmatch.group("id") + if not hls_url: + return - tokens = self.get_url_tokens(stream_id) + if has_token: + token = self._get_stream_token(stream_id, "hls") + hls_url = f"{hls_url}?{token}" - if hls_url: - log.debug("HLS URL: {0}".format(hls_url)) - log.debug("Tokens: {0}".format(tokens)) - return HLSStream.parse_variant_playlist(self.session, - hls_url + "?" + tokens, - headers={"User-Agent": useragents.CHROME, - "Referer": self.url}) + return HLSStream.parse_variant_playlist( + self.session, + hls_url, + headers={"Referer": self.url}, + ) __plugin__ = Vidio
plugins.vidio: streams broken ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description Hello Admin Could you guys take a look vidio plugin it is broken now. Unable to open url 403 Client Error Forbidden for url Here is the the link https://www.vidio.com/live/204-sctv (403 Client Error: Forbidden for url: https://www.vidio.com/live/204-sctv) https://www.vidio.com/ ### Debug log ```text Hello Admin Could you guys take a look vidio plugin it is broken now. Unable to open url 403 Client Error Forbidden for url Here is the the link https://www.vidio.com/live/204-sctv (403 Client Error: Forbidden for url: https://www.vidio.com/live/204-sctv) https://www.vidio.com/ ```
Please provide debug log output, as specified in the issue template, and update your post accordingly. I cannot reproduce this with the URL you have specified; the plugin is working fine for me.
2022-07-07T12:17:55
streamlink/streamlink
4,651
streamlink__streamlink-4651
[ "4648" ]
c387278fa23802d2ad9467b6ce71ee98243f6cd5
diff --git a/docs/ext_argparse.py b/docs/ext_argparse.py --- a/docs/ext_argparse.py +++ b/docs/ext_argparse.py @@ -131,21 +131,22 @@ def generate_group_rst(self, group): yield f" **Supported plugins:** {', '.join(action.plugins)}" yield "" - def generate_parser_rst(self, parser, depth=0): + def generate_parser_rst(self, parser, parent=None, depth=0): if depth >= len(self._headlines): return - for group in parser._action_groups: + for group in parser.NESTED_ARGUMENT_GROUPS[parent]: + is_parent = group in parser.NESTED_ARGUMENT_GROUPS # Exclude empty groups - if not group._group_actions and not group._action_groups: + if not group._group_actions and not is_parent: continue title = group.title yield "" yield title yield self._headlines[depth] * len(title) yield from self.generate_group_rst(group) - if group._action_groups: + if is_parent: yield "" - yield from self.generate_parser_rst(group, depth + 1) + yield from self.generate_parser_rst(parser, group, depth + 1) def run(self): module = self.options.get("module") diff --git a/src/streamlink_cli/argparser.py b/src/streamlink_cli/argparser.py --- a/src/streamlink_cli/argparser.py +++ b/src/streamlink_cli/argparser.py @@ -3,6 +3,7 @@ import re from string import printable from textwrap import dedent +from typing import Dict, List, Optional from streamlink import __version__ as streamlink_version, logger from streamlink.utils.args import ( @@ -26,6 +27,27 @@ class ArgumentParser(argparse.ArgumentParser): + # noinspection PyUnresolvedReferences,PyProtectedMember + NESTED_ARGUMENT_GROUPS: Dict[Optional[argparse._ArgumentGroup], List[argparse._ArgumentGroup]] + + def __init__(self, *args, **kwargs): + self.NESTED_ARGUMENT_GROUPS = {} + super().__init__(*args, **kwargs) + + # noinspection PyUnresolvedReferences,PyProtectedMember + def add_argument_group( + self, + *args, + parent: Optional[argparse._ArgumentGroup] = None, + **kwargs + ) -> argparse._ArgumentGroup: + group = super().add_argument_group(*args, **kwargs) + if parent not in self.NESTED_ARGUMENT_GROUPS: + self.NESTED_ARGUMENT_GROUPS[parent] = [group] + else: + self.NESTED_ARGUMENT_GROUPS[parent].append(group) + return group + def convert_arg_line_to_args(self, line): # Strip any non-printable characters that might be in the # beginning of the line (e.g. Unicode BOM marker). @@ -87,16 +109,18 @@ def format_help(self): # description formatter.add_text(self.description) - def format_group(group): + def format_group(parent): + if parent not in self.NESTED_ARGUMENT_GROUPS: + return # positionals, optionals and user-defined groups - for action_group in group._action_groups: + for action_group in self.NESTED_ARGUMENT_GROUPS[parent]: formatter.start_section(action_group.title) formatter.add_text(action_group.description) formatter.add_arguments(action_group._group_actions) format_group(action_group) formatter.end_section() - format_group(self) + format_group(None) # epilog formatter.add_text(self.epilog) @@ -118,11 +142,10 @@ class HelpFormatter(argparse.RawDescriptionHelpFormatter): def __init__(self, max_help_position=4, *args, **kwargs): # A smaller indent for args help. 
kwargs["max_help_position"] = max_help_position - argparse.RawDescriptionHelpFormatter.__init__(self, *args, **kwargs) + super().__init__(*args, **kwargs) def _split_lines(self, text, width): - text = dedent(text).strip() + "\n\n" - return text.splitlines() + return f"{dedent(text).strip()}\n\n".splitlines() def build_parser(): @@ -758,8 +781,8 @@ def build_parser(): ) transport = parser.add_argument_group("Stream transport options") - transport_hls = transport.add_argument_group("HLS options") - transport_ffmpeg = transport.add_argument_group("FFmpeg options") + transport_hls = parser.add_argument_group("HLS options", parent=transport) + transport_ffmpeg = parser.add_argument_group("FFmpeg options", parent=transport) transport.add_argument( "--ringbuffer-size", @@ -1167,4 +1190,4 @@ def build_parser(): return parser -__all__ = ["build_parser"] +__all__ = ["ArgumentParser", "build_parser"] diff --git a/src/streamlink_cli/main.py b/src/streamlink_cli/main.py --- a/src/streamlink_cli/main.py +++ b/src/streamlink_cli/main.py @@ -24,7 +24,7 @@ from streamlink.plugin import Plugin, PluginOptions from streamlink.stream.stream import Stream, StreamIO from streamlink.utils.named_pipe import NamedPipe -from streamlink_cli.argparser import build_parser +from streamlink_cli.argparser import ArgumentParser, build_parser from streamlink_cli.compat import DeprecatedPath, importlib_metadata, is_win32, stdout from streamlink_cli.console import ConsoleOutput, ConsoleUserInputRequester from streamlink_cli.constants import CONFIG_FILES, DEFAULT_STREAM_METADATA, LOG_DIR, PLUGIN_DIRS, STREAM_SYNONYMS @@ -855,13 +855,13 @@ def setup_options(): streamlink.set_option("locale", args.locale) -def setup_plugin_args(session, parser): +def setup_plugin_args(session: Streamlink, parser: ArgumentParser): """Sets Streamlink plugin options.""" plugin_args = parser.add_argument_group("Plugin options") for pname, plugin in session.plugins.items(): defaults = {} - group = plugin_args.add_argument_group(pname.capitalize()) + group = parser.add_argument_group(pname.capitalize(), parent=plugin_args) for parg in plugin.arguments: if not parg.is_global:
diff --git a/tests/plugins/test_funimationnow.py b/tests/plugins/test_funimationnow.py --- a/tests/plugins/test_funimationnow.py +++ b/tests/plugins/test_funimationnow.py @@ -22,7 +22,8 @@ def test_arguments(self): from streamlink_cli.main import setup_plugin_args session = Streamlink() parser = MagicMock() - group = parser.add_argument_group("Plugin Options").add_argument_group("FunimationNow") + plugins = parser.add_argument_group("Plugin Options") + group = parser.add_argument_group("FunimationNow", parent=plugins) session.plugins = { 'funimationnow': FunimationNow diff --git a/tests/plugins/test_ustreamtv.py b/tests/plugins/test_ustreamtv.py --- a/tests/plugins/test_ustreamtv.py +++ b/tests/plugins/test_ustreamtv.py @@ -25,7 +25,8 @@ def test_arguments(self): from streamlink_cli.main import setup_plugin_args session = Streamlink() parser = MagicMock() - group = parser.add_argument_group("Plugin Options").add_argument_group("UStreamTV") + plugins = parser.add_argument_group("Plugin Options") + group = parser.add_argument_group("UStreamTV", parent=plugins) session.plugins = { 'ustreamtv': UStreamTV diff --git a/tests/test_options.py b/tests/test_options.py --- a/tests/test_options.py +++ b/tests/test_options.py @@ -3,6 +3,7 @@ from unittest.mock import Mock, patch from streamlink.options import Argument, Arguments, Options +from streamlink_cli.argparser import ArgumentParser from streamlink_cli.main import setup_plugin_args, setup_plugin_options @@ -116,7 +117,7 @@ class TestSetupOptions(unittest.TestCase): def test_setup_plugin_args(self): session = Mock() plugin = Mock() - parser = argparse.ArgumentParser(add_help=False) + parser = ArgumentParser(add_help=False) parser.add_argument("--global-arg1", default=123) parser.add_argument("--global-arg2", default=456) @@ -131,9 +132,11 @@ def test_setup_plugin_args(self): setup_plugin_args(session, parser) group_plugins = next((grp for grp in parser._action_groups if grp.title == "Plugin options"), None) # pragma: no branch - self.assertIsNotNone(group_plugins, "Adds the 'Plugin options' arguments group") - group_plugin = next((grp for grp in group_plugins._action_groups if grp.title == "Mock"), None) # pragma: no branch - self.assertIsNotNone(group_plugin, "Adds the 'Mock' arguments group to the 'Plugin options' group") + assert group_plugins is not None, "Adds the 'Plugin options' arguments group" + assert group_plugins in parser.NESTED_ARGUMENT_GROUPS[None], "Adds the 'Plugin options' arguments group" + group_plugin = next((grp for grp in parser._action_groups if grp.title == "Mock"), None) # pragma: no branch + assert group_plugin is not None, "Adds the 'Mock' arguments group" + assert group_plugin in parser.NESTED_ARGUMENT_GROUPS[group_plugins], "Adds the 'Mock' arguments group" self.assertEqual( [item for action in group_plugin._group_actions for item in action.option_strings], ["--mock-test1", "--mock-test2", "--mock-test3"], @@ -154,7 +157,7 @@ def test_setup_plugin_args(self): def test_setup_plugin_options(self): session = Mock() plugin = Mock(module="plugin") - parser = argparse.ArgumentParser() + parser = ArgumentParser() parser.add_argument("--foo-foo", default=123) session.plugins = {"plugin": plugin}
cli.argparser: nested argparse groups will be deprecated in Python 3.11 See https://github.com/python/cpython/commit/30322c497e0b8d978f7a0de95985aac9c5daf1ac ---- streamlink_cli currently defines nested argparse groups (as well as nested plugin args) for better organization in the docs: - https://github.com/streamlink/streamlink/blob/4.2.0/src/streamlink_cli/argparser.py#L760-L762 - https://github.com/streamlink/streamlink/blob/4.2.0/src/streamlink_cli/main.py#L861-L864 - https://github.com/streamlink/streamlink/blob/4.2.0/docs/ext_argparse.py#L134-L148 - https://streamlink.github.io/cli.html#stream-transport-options This needs to be changed, and a different way of producing nested groups in the docs needs to be found.
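For illustration, here is a minimal, self-contained sketch of the approach the patch above takes: group nesting is recorded in a plain mapping on a custom parser instead of calling `add_argument_group()` on a group object, which is what Python 3.11 deprecates. The class and attribute names below are illustrative only, not the ones Streamlink uses.

```python
import argparse


class NestedGroupParser(argparse.ArgumentParser):
    """Toy parser that records which argument group was added under which parent."""

    def __init__(self, *args, **kwargs):
        # parent group (None for the top level) -> list of child groups
        self.nested_groups = {}
        super().__init__(*args, **kwargs)

    def add_argument_group(self, *args, parent=None, **kwargs):
        group = super().add_argument_group(*args, **kwargs)
        self.nested_groups.setdefault(parent, []).append(group)
        return group


parser = NestedGroupParser(prog="example")
transport = parser.add_argument_group("Stream transport options")
hls = parser.add_argument_group("HLS options", parent=transport)
hls.add_argument("--hls-live-edge", type=int, default=3)


def walk(parent=None, depth=0):
    # Recurse over the recorded nesting instead of argparse's internal
    # group-in-group structure, e.g. when generating docs headlines.
    for group in parser.nested_groups.get(parent, []):
        print("  " * depth + (group.title or ""))
        walk(group, depth + 1)


walk()
```

A docs generator can then recurse over this side mapping in the same way the patched `generate_parser_rst()` recurses over `NESTED_ARGUMENT_GROUPS`, without touching any argparse internals that are going away.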
2022-07-12T16:09:13
streamlink/streamlink
4,668
streamlink__streamlink-4668
[ "4664" ]
1741c59d9d2f0d05c083c23dd64038bfd0d846ce
diff --git a/src/streamlink/plugins/nbcnews.py b/src/streamlink/plugins/nbcnews.py --- a/src/streamlink/plugins/nbcnews.py +++ b/src/streamlink/plugins/nbcnews.py @@ -15,80 +15,74 @@ @pluginmatcher(re.compile( - r'https?://(?:www\.)?nbcnews\.com/now' + r"https?://(?:www\.)?nbcnews\.com/now" )) class NBCNews(Plugin): - json_data_re = re.compile( - r'<script type="application/ld\+json">({.*?})</script>' - ) - api_url = 'https://stream.nbcnews.com/data/live_sources_{}.json' - token_url = 'https://tokens.playmakerservices.com/' - - api_schema = validate.Schema( - validate.parse_json(), - { - 'videoSources': [{ - 'sourceUrl': validate.url(), - 'type': validate.text, - }], - }, - validate.get('videoSources'), - validate.get(0), - ) - - token_schema = validate.Schema( - validate.parse_json(), - {'akamai': [{ - 'tokenizedUrl': validate.url(), - }]}, - validate.get('akamai'), - validate.get(0), - validate.get('tokenizedUrl'), - ) - - json_data_schema = validate.Schema( - validate.transform(json_data_re.search), - validate.any(None, validate.all( - validate.get(1), - validate.parse_json(), - {"embedUrl": validate.url()}, - validate.get("embedUrl"), - validate.transform(lambda url: url.split("/")[-1]) - )) - ) + URL_API = "https://api-leap.nbcsports.com/feeds/assets/{}?application=NBCNews&format=nbc-player&platform=desktop" + URL_TOKEN = "https://tokens.playmakerservices.com/" title = "NBC News Now" def _get_streams(self): - video_id = self.session.http.get(self.url, schema=self.json_data_schema) - if video_id is None: + self.id = self.session.http.get( + self.url, + schema=validate.Schema( + validate.parse_html(), + validate.xml_xpath_string(".//script[@type='application/ld+json'][1]/text()"), + validate.any(None, validate.all( + str, + validate.parse_json(), + {"embedUrl": validate.url()}, + validate.get("embedUrl"), + validate.transform(lambda embed_url: embed_url.split("/")[-1]) + )) + ), + ) + if self.id is None: return - log.debug('API ID: {0}'.format(video_id)) + log.debug(f"API ID: {self.id}") - api_url = self.api_url.format(video_id) - stream = self.session.http.get(api_url, schema=self.api_schema) - log.trace('{0!r}'.format(stream)) - if stream['type'].lower() != 'live': - log.error('Invalid stream type "{0}"'.format(stream['type'])) - return + stream = self.session.http.get( + self.URL_API.format(self.id), + schema=validate.Schema( + validate.parse_json(), + { + "videoSources": [{ + "cdnSources": { + "primary": [{ + "sourceUrl": validate.url(path=validate.endswith(".m3u8")), + }], + }, + }], + }, + validate.get(("videoSources", 0, "cdnSources", "primary", 0, "sourceUrl")), + ), + ) - json_post_data = { - 'requestorId': 'nbcnews', - 'pid': video_id, - 'application': 'NBCSports', - 'version': 'v1', - 'platform': 'desktop', - 'token': '', - 'resourceId': '', - 'inPath': 'false', - 'authenticationType': 'unauth', - 'cdn': 'akamai', - 'url': stream['sourceUrl'], - } url = self.session.http.post( - self.token_url, - json=json_post_data, - schema=self.token_schema, + self.URL_TOKEN, + json={ + "requestorId": "nbcnews", + "pid": self.id, + "application": "NBCSports", + "version": "v1", + "platform": "desktop", + "token": "", + "resourceId": "", + "inPath": "false", + "authenticationType": "unauth", + "cdn": "akamai", + "url": stream, + }, + schema=validate.Schema( + validate.parse_json(), + { + "akamai": [{ + "tokenizedUrl": validate.url(), + }], + }, + validate.get(("akamai", 0, "tokenizedUrl")), + ), ) return HLSStream.parse_variant_playlist(self.session, url)
plugins.nbcnews: Plugin fix The stream.nbcnews.com API host used in the plugin recently stopped working. This fixes the plugin for the new API endpoint.
2022-07-20T22:16:21
streamlink/streamlink
4,679
streamlink__streamlink-4679
[ "4557" ]
68152fecdc4a7521221cf90f76f0e4a45acc8337
diff --git a/src/streamlink/plugins/livestream.py b/src/streamlink/plugins/livestream.py --- a/src/streamlink/plugins/livestream.py +++ b/src/streamlink/plugins/livestream.py @@ -1,52 +1,122 @@ """ -$description Global live streaming and video on-demand hosting platform. +$description Global live-streaming and video on-demand hosting platform. $url livestream.com $type live """ import logging import re +from operator import itemgetter from streamlink.plugin import Plugin, pluginmatcher from streamlink.plugin.api import validate from streamlink.stream.hls import HLSStream -from streamlink.utils.parse import parse_json log = logging.getLogger(__name__) @pluginmatcher(re.compile( - r"https?://(?:www\.)?livestream\.com/" + r""" + https?://(?P<subdomain>api\.new\.|www\.)?livestream\.com + /accounts/(?P<account>\d+) + (?: + /events/(?P<event>\d+) + | + /[^/]+ + )? + (?:/videos/(?P<video>\d+))? + """, + re.VERBOSE, )) class Livestream(Plugin): - _config_re = re.compile(r"window.config = ({.+})") - _stream_config_schema = validate.Schema(validate.any({ - "event": { - "stream_info": validate.any({ - "is_live": bool, - "secure_m3u8_url": validate.url(scheme="https"), - }, None), - } - }, {}), validate.get("event", {}), validate.get("stream_info", {})) + URL_API_EVENTS = "https://api.new.livestream.com/accounts/{account}/events" + URL_API_EVENTS_EVENT = "https://api.new.livestream.com/accounts/{account}/events/{event}" + URL_API_VIDEO = "https://api.new.livestream.com/accounts/{account}/events/{event}/videos/{video}" def _get_streams(self): - res = self.session.http.get(self.url) - m = self._config_re.search(res.text) - if not m: - log.debug("Unable to find _config_re") - return - - stream_info = parse_json(m.group(1), "config JSON", - schema=self._stream_config_schema) - - log.trace("stream_info: {0!r}".format(stream_info)) - if not (stream_info and stream_info["is_live"]): - log.debug("Stream might be Off Air") - return - - m3u8_url = stream_info.get("secure_m3u8_url") - if m3u8_url: - yield from HLSStream.parse_variant_playlist(self.session, m3u8_url).items() + subdomain, account, event, video = itemgetter("subdomain", "account", "event", "video")(self.match.groupdict()) + + if event is None: + if video is None or subdomain == "api.new.": + event = self.session.http.get( + self.URL_API_EVENTS.format(account=account), + schema=validate.Schema( + validate.parse_json(), + {"data": [dict]}, + validate.get(("data", 0)), + validate.any(None, validate.all( + {"id": int}, + validate.get("id"), + )), + ), + ) + else: + event = self.session.http.get( + self.url, + schema=validate.Schema( + validate.parse_html(), + validate.xml_xpath_string(".//script[contains(text(), 'window.config = ')][1]/text()"), + validate.any(None, validate.all( + str, + validate.transform(re.compile(r"^window\.config\s*=\s*(\{.+});?\s*$").match), + validate.any(None, validate.all( + validate.get(1), + validate.parse_json(), + {"event": {"id": int}}, + validate.get(("event", "id")), + )) + )), + ), + ) + if event is None: + log.error("Could not find event ID") + return + + if video is None: + self.id, self.title, is_live, m3u8_url = self.session.http.get( + self.URL_API_EVENTS_EVENT.format(account=account, event=event), + schema=validate.Schema( + validate.parse_json(), + { + "stream_info": { + "broadcast_id": int, + validate.optional("stream_title"): validate.any(None, str), + "is_live": bool, + "secure_m3u8_url": validate.url(path=validate.endswith(".m3u8")), + }, + }, + validate.get("stream_info"), + validate.union_get( + 
"broadcast_id", + "stream_title", + "is_live", + "secure_m3u8_url", + ), + ), + ) + if not is_live: + log.error("The stream is not available") + return + + else: + self.id, self.title, m3u8_url = self.session.http.get( + self.URL_API_VIDEO.format(account=account, event=event, video=video), + schema=validate.Schema( + validate.parse_json(), + { + "id": int, + validate.optional("description"): validate.any(None, str), + "secure_m3u8_url": validate.url(path=validate.endswith(".m3u8")), + }, + validate.union_get( + "id", + "description", + "secure_m3u8_url", + ), + ), + ) + + yield from HLSStream.parse_variant_playlist(self.session, m3u8_url).items() __plugin__ = Livestream
diff --git a/tests/plugins/test_livestream.py b/tests/plugins/test_livestream.py --- a/tests/plugins/test_livestream.py +++ b/tests/plugins/test_livestream.py @@ -5,8 +5,58 @@ class TestPluginCanHandleUrlLivestream(PluginCanHandleUrl): __plugin__ = Livestream - should_match = [ - 'https://livestream.com/', - 'https://www.livestream.com/', - 'https://livestream.com/accounts/22300508/events/6675945', + should_match_groups = [ + # no event/video + ( + "https://livestream.com/accounts/12182108/", + {"account": "12182108"}, + ), + ( + "https://livestream.com/accounts/1538473/eaglecam", + {"account": "1538473"}, + ), + ( + "https://www.livestream.com/accounts/12182108/", + {"subdomain": "www.", "account": "12182108"}, + ), + # no event/video via API URL + ( + "https://api.new.livestream.com/accounts/12182108/", + {"subdomain": "api.new.", "account": "12182108"}, + ), + # event + ( + "https://livestream.com/accounts/12182108/events/4004765", + {"account": "12182108", "event": "4004765"}, + ), + ( + "https://www.livestream.com/accounts/12182108/events/4004765", + {"subdomain": "www.", "account": "12182108", "event": "4004765"}, + ), + # event via API URL + ( + "https://api.new.livestream.com/accounts/12182108/events/4004765", + {"subdomain": "api.new.", "account": "12182108", "event": "4004765"}, + ), + # video without event + ( + "https://livestream.com/accounts/4175709/neelix/videos/119637915", + {"account": "4175709", "video": "119637915"}, + ), + # video with event + ( + "https://livestream.com/accounts/844142/events/5602516/videos/216545361", + {"account": "844142", "event": "5602516", "video": "216545361"}, + ), + # video with event via API URL + ( + "https://api.new.livestream.com/accounts/844142/events/5602516/videos/216545361", + {"subdomain": "api.new.", "account": "844142", "event": "5602516", "video": "216545361"}, + ), + ] + + should_not_match = [ + "https://livestream.com/", + "https://www.livestream.com/", + "https://api.new.livestream.com/", ]
plugins.livestream: query API for hidden channels ## Context In certain cases, the channel is hidden (see https://api.new.livestream.com/accounts/29565692/events/9321713/), and to make Streamlink work with those kinds of links, the data needs to be scraped from the API endpoints only. ## What this PR does This PR adds another way of scraping Livestream channels. ## Technical explanation Using the api.new.livestream.com base API endpoint makes it possible to resolve all kinds of links. One condition checks whether the stream_info response is available, and another checks the nature of the given URL: if the link is an api.new.livestream.com URL, fetch the JSON and verify that the stream_info part is available; otherwise, contact the API URL defined inside the code and get the stream from there. ## PR Classification This is an improvement based on the original plugin.
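As a rough sketch of the flow described above (and implemented in the patch), resolving an account to its live HLS URL goes through two API calls. The field names are taken from the validation schemas in the patch, and the account ID is only an example borrowed from the plugin tests; this is not the actual plugin code.

```python
import requests

API_EVENTS = "https://api.new.livestream.com/accounts/{account}/events"


def get_live_hls_url(account_id):
    """Return the secure HLS URL of the account's current live event, or None."""
    events = requests.get(API_EVENTS.format(account=account_id), timeout=10).json()
    if not events.get("data"):
        return None

    # Pick the first (most recent) event of the account
    event_id = events["data"][0]["id"]
    event = requests.get(
        f"{API_EVENTS.format(account=account_id)}/{event_id}",
        timeout=10,
    ).json()

    stream_info = event.get("stream_info") or {}
    if not stream_info.get("is_live"):
        return None
    return stream_info.get("secure_m3u8_url")


# Example account ID from the tests below; any public account resolves the same way.
print(get_live_hls_url(12182108))
```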
Putting it up as a draft for the moment. Will work on it this weekend, thank you for your patience. Sorry for the long time without any answer, I was busy with work and didn't have the time to do anything on this project. Now that I have the time to rewrite it, here's my new proposition. Using the API URL reduces the work needed to access a link, since the previous approach required a specific regex on the Livestream page to get the necessary stream_info data.
2022-07-25T23:21:39
streamlink/streamlink
4,685
streamlink__streamlink-4685
[ "4684" ]
b03b352602fc0de9da8cc3cf518f900362d3d166
diff --git a/src/streamlink/plugins/huya.py b/src/streamlink/plugins/huya.py --- a/src/streamlink/plugins/huya.py +++ b/src/streamlink/plugins/huya.py @@ -8,74 +8,96 @@ import logging import re from html import unescape as html_unescape -from typing import Any, Dict +from typing import Dict from streamlink.plugin import Plugin, pluginmatcher from streamlink.plugin.api import validate from streamlink.stream.http import HTTPStream -from streamlink.utils.parse import parse_json log = logging.getLogger(__name__) @pluginmatcher(re.compile( - r'https?://(?:www\.)?huya\.com/(?P<channel>[^/]+)' + r"https?://(?:www\.)?huya\.com/(?P<channel>[^/]+)" )) class Huya(Plugin): - _re_stream = re.compile(r'"stream"\s?:\s?"([^"]+)"') - _schema_data = validate.Schema( - { - # 'status': int, - # 'msg': validate.any(None, str), - 'data': [{ - 'gameStreamInfoList': [{ - 'sCdnType': str, - 'sStreamName': str, - 'sFlvUrl': str, - 'sFlvUrlSuffix': str, - 'sFlvAntiCode': validate.all(str, validate.transform(lambda v: html_unescape(v))), - # 'sHlsUrl': str, - # 'sHlsUrlSuffix': str, - # 'sHlsAntiCode': validate.all(str, validate.transform(lambda v: html_unescape(v))), - validate.optional('iIsMultiStream'): int, - 'iPCPriorityRate': int, - }] - }], - # 'vMultiStreamInfo': [{ - # 'sDisplayName': str, - # 'iBitRate': int, - # }], - }, - validate.get('data'), - validate.get(0), - validate.get('gameStreamInfoList'), - ) - QUALITY_WEIGHTS: Dict[str, Any] = {} + QUALITY_WEIGHTS: Dict[str, int] = {} @classmethod def stream_weight(cls, key): weight = cls.QUALITY_WEIGHTS.get(key) if weight: - return weight, 'huya' + return weight, "huya" - return Plugin.stream_weight(key) + return super().stream_weight(key) def _get_streams(self): - res = self.session.http.get(self.url) - data = self._re_stream.search(res.text) - + data = self.session.http.get(self.url, schema=validate.Schema( + validate.parse_html(), + validate.xml_xpath_string(".//script[contains(text(),'var hyPlayerConfig = {')][1]/text()"), + validate.any(None, validate.transform( + re.compile(r"""(?P<q>"?)stream(?P=q)\s*:\s*(?:"(?P<base64>.+?)"|(?P<json>\{.+?})\s*}\s*;)""").search, + )), + validate.any(None, validate.all( + validate.any( + validate.all( + validate.get("base64"), + str, + validate.transform(base64.b64decode), + ), + validate.all( + validate.get("json"), + str, + ), + ), + validate.parse_json(), + { + "data": [{ + "gameLiveInfo": { + "liveId": int, + "nick": str, + "roomName": str, + }, + "gameStreamInfoList": [validate.all( + { + "sCdnType": str, + "iPCPriorityRate": int, + "sStreamName": str, + "sFlvUrl": str, + "sFlvUrlSuffix": str, + "sFlvAntiCode": validate.all(str, validate.transform(lambda v: html_unescape(v))), + }, + validate.union_get( + "sCdnType", + "iPCPriorityRate", + "sStreamName", + "sFlvUrl", + "sFlvUrlSuffix", + "sFlvAntiCode", + )), + ], + }], + }, + validate.get(("data", 0)), + validate.union_get( + ("gameLiveInfo", "liveId"), + ("gameLiveInfo", "nick"), + ("gameLiveInfo", "roomName"), + "gameStreamInfoList", + ), + )), + )) if not data: return - data = parse_json(base64.b64decode(data.group(1)), schema=self._schema_data) - for info in data: - log.trace(f'{info!r}') - flv_url = f'{info["sFlvUrl"]}/{info["sStreamName"]}.{info["sFlvUrlSuffix"]}?{info["sFlvAntiCode"]}' - name = f'source_{info["sCdnType"].lower()}' - self.QUALITY_WEIGHTS[name] = info['iPCPriorityRate'] - yield name, HTTPStream(self.session, flv_url) + self.id, self.author, self.title, streamdata = data + + for cdntype, priority, streamname, flvurl, suffix, anticode in 
streamdata: + name = f"source_{cdntype.lower()}" + self.QUALITY_WEIGHTS[name] = priority + yield name, HTTPStream(self.session, f"{flvurl}/{streamname}.{suffix}?{anticode}") - log.debug(f'QUALITY_WEIGHTS: {self.QUALITY_WEIGHTS!r}') + log.debug(f"QUALITY_WEIGHTS: {self.QUALITY_WEIGHTS!r}") __plugin__ = Huya
diff --git a/tests/plugins/test_huya.py b/tests/plugins/test_huya.py --- a/tests/plugins/test_huya.py +++ b/tests/plugins/test_huya.py @@ -6,11 +6,13 @@ class TestPluginCanHandleUrlHuya(PluginCanHandleUrl): __plugin__ = Huya should_match = [ - 'http://www.huya.com/123123123', - 'http://www.huya.com/name', - 'https://www.huya.com/123123123', + "http://www.huya.com/123123123", + "http://www.huya.com/name", + "https://www.huya.com/123123123", + "https://www.huya.com/name", ] should_not_match = [ - 'http://www.huya.com', + "http://www.huya.com", + "https://www.huya.com", ]
huya live does not work ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description URLs with purely numeric channel IDs don't work. ### Debug log ```text X:\>streamlink https://www.huya.com/688062 [cli][info] Found matching plugin huya for URL https://www.huya.com/688062 error: No playable streams found on this URL: https://www.huya.com/688062 X:\>streamlink https://www.huya.com/dongerlol [cli][info] Found matching plugin huya for URL https://www.huya.com/dongerlol Available streams: source_tx (worst), source_hw, source_al, source_bd (best) X:\>streamlink https://www.huya.com/688062 [cli][info] Found matching plugin huya for URL https://www.huya.com/688062 error: No playable streams found on this URL: https://www.huya.com/688062 X:\>streamlink https://www.huya.com/dongerlol [cli][info] Found matching plugin huya for URL https://www.huya.com/dongerlol Available streams: source_tx (worst), source_hw, source_al, source_bd (best) X:\>streamlink https://www.huya.com/mengxing1221 [cli][info] Found matching plugin huya for URL https://www.huya.com/mengxing1221 Available streams: source_tx (worst), source_hw, source_al, source_bd (best) X:\>streamlink https://www.huya.com/2337957882 [cli][info] Found matching plugin huya for URL https://www.huya.com/2337957882 error: No playable streams found on this URL: https://www.huya.com/2337957882 ```
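The patch above extends the extraction regex so it handles both shapes in which the page can embed the stream data: a base64-encoded string (the case that already worked) and an inline JSON object (presumably the shape served for the failing numeric channel IDs, given that this is the branch the patch adds). A small sketch against two hypothetical page snippets, using the alternation from the patched plugin:

```python
import base64
import json
import re

# Alternation taken from the patched plugin: either a quoted base64 string
# or a plain JSON object follows the "stream" key.
pattern = re.compile(r"""(?P<q>"?)stream(?P=q)\s*:\s*(?:"(?P<base64>.+?)"|(?P<json>\{.+?})\s*}\s*;)""")

samples = [
    # hypothetical snippet with base64-encoded stream data
    '"stream": "' + base64.b64encode(b'{"data": []}').decode() + '"',
    # hypothetical snippet with the stream data embedded as plain JSON
    'var hyPlayerConfig = { stream: {"data": []} };',
]

for snippet in samples:
    match = pattern.search(snippet)
    if not match:
        continue
    # Decode the base64 branch if present, otherwise take the raw JSON branch
    raw = base64.b64decode(match["base64"]) if match["base64"] else match["json"]
    print(json.loads(raw))
```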
2022-07-29T10:54:17
streamlink/streamlink
4,703
streamlink__streamlink-4703
[ "4687" ]
cb56e063c96250f5f8090b2c298872c10edbc725
diff --git a/src/streamlink/plugins/ard_mediathek.py b/src/streamlink/plugins/ard_mediathek.py --- a/src/streamlink/plugins/ard_mediathek.py +++ b/src/streamlink/plugins/ard_mediathek.py @@ -19,29 +19,50 @@ @pluginmatcher(re.compile( - r"https?://(?:(\w+\.)?ardmediathek\.de/|mediathek\.daserste\.de/)" + r""" + https?://(\w+\.)?ardmediathek\.de/ + (?: + live/(?:[^/]+/)?(?P<id_live>\w+) + | + video/(?:[^/]+/[^/]+/[^/]+/)?(?P<id_video>\w+) + ) + (?:\?|$) + """, + re.VERBOSE, )) class ARDMediathek(Plugin): + _URL_API = "https://api.ardmediathek.de/page-gateway/pages/ard/item/{item}" _QUALITY_MAP = { 4: "1080p", 3: "720p", 2: "540p", 1: "360p", - 0: "270p" + 0: "270p", } def _get_streams(self): data_json = self.session.http.get(self.url, schema=validate.Schema( validate.parse_html(), - validate.xml_findtext(".//script[@id='fetchedContextValue'][@type='application/json']"), - validate.any(None, validate.all( + validate.xml_xpath_string(".//script[@type='application/json'][@id='fetchedContextValue2'][1]/text()"), + validate.none_or_all( validate.parse_json(), - {str: dict}, - validate.transform(lambda obj: list(obj.items())), + [validate.list(str, {"data": dict})], validate.filter(lambda item: item[0].startswith("https://api.ardmediathek.de/page-gateway/pages/")), - validate.any(validate.get((0, 1)), []) - )) + validate.any( + validate.get((0, 1, "data")), + [], + ), + ), )) + if not data_json: + data_json = self.session.http.get( + self._URL_API.format(item=self.match.group("id_live") or self.match.group("id_video")), + params={ + "devicetype": "pc", + "embedded": "false", + }, + schema=validate.Schema(validate.parse_json()), + ) if not data_json: return
diff --git a/tests/plugins/test_ard_mediathek.py b/tests/plugins/test_ard_mediathek.py --- a/tests/plugins/test_ard_mediathek.py +++ b/tests/plugins/test_ard_mediathek.py @@ -5,12 +5,35 @@ class TestPluginCanHandleUrlARDMediathek(PluginCanHandleUrl): __plugin__ = ARDMediathek - should_match = [ - 'http://mediathek.daserste.de/live', - 'http://www.ardmediathek.de/tv/Sportschau/' - ] - - should_not_match = [ - 'https://daserste.de/live/index.html', - 'https://www.daserste.de/live/index.html', + should_match_groups = [ + ( + "https://www.ardmediathek.de/live/" + + "Y3JpZDovL2Rhc2Vyc3RlLmRlL2xpdmUvY2xpcC9hYmNhMDdhMy0zNDc2LTQ4NTEtYjE2Mi1mZGU4ZjY0NmQ0YzQ", + {"id_live": "Y3JpZDovL2Rhc2Vyc3RlLmRlL2xpdmUvY2xpcC9hYmNhMDdhMy0zNDc2LTQ4NTEtYjE2Mi1mZGU4ZjY0NmQ0YzQ"}, + ), + ( + "https://www.ardmediathek.de/live/" + + "Y3JpZDovL2Rhc2Vyc3RlLmRlL2xpdmUvY2xpcC9hYmNhMDdhMy0zNDc2LTQ4NTEtYjE2Mi1mZGU4ZjY0NmQ0YzQ?toolbarType=default", + {"id_live": "Y3JpZDovL2Rhc2Vyc3RlLmRlL2xpdmUvY2xpcC9hYmNhMDdhMy0zNDc2LTQ4NTEtYjE2Mi1mZGU4ZjY0NmQ0YzQ"}, + ), + ( + "https://www.ardmediathek.de/live/tagesschau24/" + + "Y3JpZDovL2Rhc2Vyc3RlLmRlL3RhZ2Vzc2NoYXUvbGl2ZXN0cmVhbQ", + {"id_live": "Y3JpZDovL2Rhc2Vyc3RlLmRlL3RhZ2Vzc2NoYXUvbGl2ZXN0cmVhbQ"}, + ), + ( + "https://www.ardmediathek.de/video/" + + "Y3JpZDovL2Rhc2Vyc3RlLmRlL3RhZ2Vzc2NoYXUvOWE4NGIzODgtZDEzNS00ZWU0LWI4ODEtZDYyNTQzYjg3ZmJlLzE", + {"id_video": "Y3JpZDovL2Rhc2Vyc3RlLmRlL3RhZ2Vzc2NoYXUvOWE4NGIzODgtZDEzNS00ZWU0LWI4ODEtZDYyNTQzYjg3ZmJlLzE"}, + ), + ( + "https://www.ardmediathek.de/video/arte/blackfish-der-killerwal/arte/" + + "Y3JpZDovL2FydGUudHYvdmlkZW9zLzA1MDMyNy0wMDAtQQ", + {"id_video": "Y3JpZDovL2FydGUudHYvdmlkZW9zLzA1MDMyNy0wMDAtQQ"}, + ), + ( + "https://www.ardmediathek.de/video/expeditionen-ins-tierreich/die-revolte-der-schimpansen/ndr/" + + "Y3JpZDovL25kci5kZS9jY2E3M2MzZS00ZTljLTRhOWItODE3MC05MjhjM2MwNWEyMDM?toolbarType=default", + {"id_video": "Y3JpZDovL25kci5kZS9jY2E3M2MzZS00ZTljLTRhOWItODE3MC05MjhjM2MwNWEyMDM"}, + ), ]
plugins.ard_mediathek: No playable streams found on this URL ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest build from the master branch ### Description The ARD (German Public TV) seems not to be working at the moment with Streamlink. It gives a 'error: No playable streams found on this URL: https://www.ardmediathek.de/daserste/live/Y3JpZDovL2Rhc2Vyc3RlLmRlL0xpdmVzdHJlYW0tRGFzRXJzdGU' ### Debug log ```text [cli][debug] OS: Linux-5.10.0-15-amd64-x86_64-with-glibc2.31 [cli][debug] Python: 3.9.2 [cli][debug] Streamlink: 0.0.0+unknown [cli][debug] Dependencies: [cli][debug] PySocks: 1.7.1 [cli][debug] isodate: 0.6.1 [cli][debug] lxml: 4.8.0 [cli][debug] pycountry: 22.3.5 [cli][debug] pycryptodome: 3.14.1 [cli][debug] requests: 2.27.1 [cli][debug] websocket-client: 1.3.1 [cli][debug] importlib-metadata: 4.11.3 [cli][debug] Arguments: [cli][debug] url=https://www.ardmediathek.de/daserste/live/Y3JpZDovL2Rhc2Vyc3RlLmRlL0xpdmVzdHJlYW0tRGFzRXJzdGU [cli][debug] stream=['720p'] [cli][debug] --loglevel=debug [cli][debug] --output=blah.mp4 [cli][debug] --http-proxy=socks5h://[email protected]:8200 [cli][info] Found matching plugin ard_mediathek for URL https://www.ardmediathek.de/daserste/live/Y3JpZDovL2Rhc2Vyc3RlLmRlL0xpdmVzdHJlYW0tRGFzRXJzdGU error: No playable streams found on this URL: https://www.ardmediathek.de/daserste/live/Y3JpZDovL2Rhc2Vyc3RlLmRlL0xpdmVzdHJlYW0tRGFzRXJzdGU ```
The response of the initial HTTP request seems to be empty now, so the plugin can't find the stream URL. Not sure yet why. > ```html > <script id="fetchedContextValue" type="application/json"> > null > </script> > ``` The same request gets made by their site though and proper JSON data is included. That means that most likely some headers are missing. ---- A couple of live streams on `ard_mediathek` appear to be static now: - DasErste: https://mcdn.daserste.de/daserste/de/master.m3u8 - WDR: https://mcdn.wdr.de/wdr/wdrfs/de/master.m3u8 - NDR: https://mcdn.ndr.de/ndr/hls/ndr_fs/ndr_nds/master.m3u8 - BR: https://mcdn.br.de/br/fs/bfs_sued/hls/de/master.m3u8 - ArdOne: https://mcdn.one.ard.de/ardone/hls/master.m3u8 - Alpha: https://mcdn.br.de/br/fs/ard_alpha/hls/de/master.m3u8 Others are hosed on Akamai and are probably not static: - SWR: https://swrbwd-hls.akamaized.net/hls/live/2018672/swrbwd/master.m3u8 - HR: https://hrhlsde.akamaized.net/hls/live/2024526/hrhlsde/index.m3u8 - MDR: https://mdrtvsnhls.akamaized.net/hls/live/2016928/mdrtvsn/index.m3u8 - RadioBremen: https://rbhlslive.akamaized.net/hls/live/2020435/rbfs/master.m3u8 - RBB: https://rbb-hls-brandenburg.akamaized.net/hls/live/2017825/rbb_brandenburg/index.m3u8 - SR: https://srfs.akamaized.net/hls/live/689649/srfsgeo/index.m3u8 - Arte: https://arteliveext.akamaized.net/hls/live/2030993/artelive_de/index.m3u8 - KiKa: https://kikageohls.akamaized.net/hls/live/2022693-b/livetvkika_de/master.m3u8 - 3Sat: https://zdf-hls-18.akamaized.net/hls/live/2016501/dach/high/master.m3u8 - TagesSchau24: https://tagesschau.akamaized.net/hls/live/2020115/tagesschau/tagesschau_1/master.m3u8 - Phoenix: https://zdf-hls-19.akamaized.net/hls/live/2016502/de/high/master.m3u8 - DeutscheWelle: https://dwamdstream107.akamaized.net/hls/live/2017968/dwstream107/index.m3u8 Had a second look at the plugin and managed to fix the JSON data retrieval and validation, so that the correct stream URLs get found. That would be the diff (not submitting a PR yet): ```diff diff --git a/src/streamlink/plugins/ard_mediathek.py b/src/streamlink/plugins/ard_mediathek.py index c2c51c14..5cc1cdfd 100644 --- a/src/streamlink/plugins/ard_mediathek.py +++ b/src/streamlink/plugins/ard_mediathek.py @@ -33,14 +33,20 @@ class ARDMediathek(Plugin): def _get_streams(self): data_json = self.session.http.get(self.url, schema=validate.Schema( validate.parse_html(), - validate.xml_findtext(".//script[@id='fetchedContextValue'][@type='application/json']"), + validate.xml_findtext(".//script[@id='fetchedContextValue2'][@type='application/json']"), validate.any(None, validate.all( validate.parse_json(), - {str: dict}, - validate.transform(lambda obj: list(obj.items())), - validate.filter(lambda item: item[0].startswith("https://api.ardmediathek.de/page-gateway/pages/")), - validate.any(validate.get((0, 1)), []) - )) + [list], + validate.filter(lambda item: ( + len(item) == 2 + and type(item[0]) is str + and type(item[1]) is dict + and item[0].startswith("https://api.ardmediathek.de/page-gateway/pages/") + )), + validate.get((0, 1)), + {"data": dict}, + validate.get("data"), + )), )) if not data_json: return ``` However, all the static streams I've listed earlier have a separate HLS audio playlist, and instead of using MPEG-TS containers for the audio segments, ADTS containers are used for the AAC audio data, and somehow this causes problems when muxing the two streams into the output. From what it looks like, the audio stream alone is causing issues when trying to play it on the fly. 
Saving it and playing it back afterwards works though. @back-to, do you have any idea? Video: https://mcdn.daserste.de/daserste/de/master_1920p_5000.m3u8 Audio: https://mcdn.daserste.de/daserste/de/master_audio1_128.m3u8 https://datatracker.ietf.org/doc/html/rfc8216#section-3.4 https://github.com/streamlink/streamlink/issues/3534#issuecomment-778171025
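To see the video/audio split that the comment above describes, one can list the alternate audio renditions and video variants announced in one of those master playlists. This is a plain requests/regex sketch rather than anything Streamlink-specific; the playlist URL is the DasErste one quoted in the comment above.

```python
import re
import requests

# Master playlist URL quoted in the comment above (DasErste)
MASTER = "https://mcdn.daserste.de/daserste/de/master.m3u8"


def list_renditions(master_url):
    """Print the EXT-X-MEDIA audio renditions and EXT-X-STREAM-INF video variants."""
    text = requests.get(master_url, timeout=10).text
    for line in text.splitlines():
        if line.startswith("#EXT-X-MEDIA") and "TYPE=AUDIO" in line:
            uri = re.search(r'URI="([^"]+)"', line)
            print("audio rendition:", uri.group(1) if uri else "(no separate URI)")
        elif line.startswith("#EXT-X-STREAM-INF"):
            resolution = re.search(r"RESOLUTION=(\d+x\d+)", line)
            print("video variant:", resolution.group(1) if resolution else "(no resolution attribute)")


list_renditions(MASTER)
```

When the audio rendition points at a separate playlist, video and audio have to be remuxed on the fly, which is where the playback problem discussed above shows up.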
2022-08-05T06:30:14
streamlink/streamlink
4,729
streamlink__streamlink-4729
[ "4727" ]
d5818db058b02d1f2c4218eb527d81a23b1599c9
diff --git a/src/streamlink/plugins/picarto.py b/src/streamlink/plugins/picarto.py --- a/src/streamlink/plugins/picarto.py +++ b/src/streamlink/plugins/picarto.py @@ -33,40 +33,37 @@ class Picarto(Plugin): HLS_URL = "https://{netloc}/stream/hls/{file_name}/index.m3u8" def get_live(self, username): - netloc = self.session.http.get(self.url, schema=validate.Schema( - validate.parse_html(), - validate.xml_xpath_string(".//script[contains(@src,'/stream/player.js')][1]/@src"), - validate.any(None, validate.transform(lambda src: urlparse(src).netloc)) - )) - if not netloc: - log.error("Could not find server netloc") - return - - channel, multistreams = self.session.http.get(self.API_URL_LIVE.format(username=username), schema=validate.Schema( - validate.parse_json(), - { - "channel": validate.any(None, { - "stream_name": str, - "title": str, - "online": bool, - "private": bool, - "categories": [{"label": str}], - }), - "getMultiStreams": validate.any(None, { - "multistream": bool, - "streams": [{ - "name": str, + channel, multistreams, loadbalancer = self.session.http.get( + self.API_URL_LIVE.format(username=username), + schema=validate.Schema( + validate.parse_json(), + { + "channel": validate.any(None, { + "stream_name": str, + "title": str, "online": bool, - }], - }), - }, - validate.union_get("channel", "getMultiStreams") - )) - if not channel or not multistreams: + "private": bool, + "categories": [{"label": str}], + }), + "getMultiStreams": validate.any(None, { + "multistream": bool, + "streams": [{ + "name": str, + "online": bool, + }], + }), + "getLoadBalancerUrl": validate.any(None, { + "url": validate.any(None, validate.transform(lambda url: urlparse(url).netloc)) + }) + }, + validate.union_get("channel", "getMultiStreams", "getLoadBalancerUrl"), + ) + ) + if not channel or not multistreams or not loadbalancer: log.debug("Missing channel or streaming data") return - log.trace(f"netloc={netloc!r}") + log.trace(f"loadbalancer={loadbalancer!r}") log.trace(f"channel={channel!r}") log.trace(f"multistreams={multistreams!r}") @@ -83,7 +80,7 @@ def get_live(self, username): self.title = channel["title"] hls_url = self.HLS_URL.format( - netloc=netloc, + netloc=loadbalancer["url"], file_name=channel["stream_name"] ) @@ -110,7 +107,7 @@ def get_vod(self, vod_id): validate.parse_json(), {"data": { "video": validate.any(None, { - "id": str, + "id": int, "title": str, "file_name": str, "video_recording_image_url": str,
plugins.picarto: Could not find server netloc ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description Plugin suddenly stopped working today. Checked on multiple streams as well as on Linux and Windows 10 with the same result. I can still manually watch the streams on VLC with "https://1-edge1-eu-west.picarto.tv/stream/hls/golive%2bUSERNAME/index.m3u8" as URL source. ### Debug log ```text C:\PICARTO>streamlink https://picarto.tv/USERNAME best -l debug [cli][debug] OS: Windows 10 [cli][debug] Python: 3.10.5 [cli][debug] Streamlink: 4.2.0 [cli][debug] Dependencies: [cli][debug] isodate: 0.6.1 [cli][debug] lxml: 4.9.1 [cli][debug] pycountry: 22.3.5 [cli][debug] pycryptodome: 3.15.0 [cli][debug] PySocks: 1.7.1 [cli][debug] requests: 2.28.1 [cli][debug] websocket-client: 1.3.3 [cli][debug] Arguments: [cli][debug] url=https://picarto.tv/USERNAME [cli][debug] stream=['best'] [cli][debug] --loglevel=debug [cli][debug] --ffmpeg-ffmpeg=C:\Program Files\Streamlink\ffmpeg\ffmpeg.exe [cli][info] Found matching plugin picarto for URL https://picarto.tv/USERNAME [plugins.picarto][debug] Type=Live [plugins.picarto][error] Could not find server netloc error: No playable streams found on this URL: https://picarto.tv/USERNAME ```
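The fix above stops guessing the edge server from player.js and instead uses the load-balancer URL returned by the API. As a small sketch of the final step only: the load-balancer URL and stream name below are placeholders shaped like the reporter's manually working URL, while the real values come from the getLoadBalancerUrl and channel.stream_name fields of the API response.

```python
from urllib.parse import urlparse

# Format string from the plugin shown in the patch above
HLS_URL = "https://{netloc}/stream/hls/{file_name}/index.m3u8"


def build_hls_url(load_balancer_url, stream_name):
    """Combine the API's load-balancer URL with the channel's stream name."""
    netloc = urlparse(load_balancer_url).netloc
    return HLS_URL.format(netloc=netloc, file_name=stream_name)


# "golive+USERNAME" is the percent-decoded form of the "golive%2bUSERNAME" path
# segment in the manually working URL from the report above.
print(build_hls_url("https://1-edge1-eu-west.picarto.tv", "golive+USERNAME"))
```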
2022-08-10T17:27:09
streamlink/streamlink
4,739
streamlink__streamlink-4739
[ "4738" ]
038d9ecb0b653efa09afcfd429b3d84c79eb97d0
diff --git a/src/streamlink_cli/argparser.py b/src/streamlink_cli/argparser.py --- a/src/streamlink_cli/argparser.py +++ b/src/streamlink_cli/argparser.py @@ -482,14 +482,33 @@ def build_parser(): useful to allow external devices like smartphones or streaming boxes to watch streams they wouldn't be able to otherwise. - Behavior will be similar to the continuous HTTP option, but no player - program will be started, and the server will listen on all available + The default behavior is similar to the --player-continuous-http option, + but no player program will be started, and the server will listen on all available connections instead of just in the local (loopback) interface. + Optionally, the --player-external-http-continuous option allows for disabling + the continuous run-mode, so that Streamlink will stop when the stream ends. + The URLs that can be used to access the stream will be printed to the console, and the server can be interrupted using CTRL-C. """ ) + player.add_argument( + "--player-external-http-continuous", + type=boolean, + metavar="{yes,true,1,on,no,false,0,off}", + default=True, + help=""" + Set the run-mode of --player-external-http to continuous or non-continuous. + + In the continuous run-mode, Streamlink will keep running after the stream has ended + and will wait for the next HTTP request being made unless it gets shut down via CTRL-C. + + If set to non-continuous, Streamlink will stop once the stream has ended. + + Default is true. + """ + ) player.add_argument( "--player-external-http-port", metavar="PORT", diff --git a/src/streamlink_cli/main.py b/src/streamlink_cli/main.py --- a/src/streamlink_cli/main.py +++ b/src/streamlink_cli/main.py @@ -197,6 +197,7 @@ def output_stream_http( initial_streams: Dict[str, Stream], formatter: Formatter, external: bool = False, + continuous: bool = True, port: int = 0, ): """Continuously output the stream over HTTP.""" @@ -267,6 +268,9 @@ def output_stream_http( log.debug("Writing stream to player") read_stream(stream_fd, server, prebuffer, formatter) + if not continuous: + break + server.close(True) if player: @@ -474,8 +478,14 @@ def handle_stream(plugin: Plugin, streams: Dict[str, Stream], stream_name: str) log.info(f"Opening stream: {stream_name} ({stream_type})") success = output_stream_passthrough(stream, formatter) elif args.player_external_http: - return output_stream_http(plugin, streams, formatter, external=True, - port=args.player_external_http_port) + return output_stream_http( + plugin, + streams, + formatter, + external=True, + continuous=args.player_external_http_continuous, + port=args.player_external_http_port, + ) elif args.player_continuous_http and not file_output: return output_stream_http(plugin, streams, formatter) else:
cli: option for running --player-external-http in "non-continuous" mode ### Checklist - [X] This is a feature request and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin requests](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22feature+request%22) ### Description I frequently run streamlink with the `player-external-http` flags within a tmux session to have the ability to swap between devices. This works quite well, but I'll occasionally encounter errors where streamlink is still running idle off in a session somewhere long after the target stream has wrapped up. It would be nice if we could be able to customize behavior of the app when a target stream ends (or is down for X amount of time to allow for unintended disconnects on the source stream's end). The current behavior is largely fine for desktop use, but the ability to get more granular control would be appreciated for headless use cases.
2022-08-16T09:06:03
streamlink/streamlink
4,743
streamlink__streamlink-4743
[ "4742" ]
038d9ecb0b653efa09afcfd429b3d84c79eb97d0
diff --git a/src/streamlink/plugins/cbsnews.py b/src/streamlink/plugins/cbsnews.py --- a/src/streamlink/plugins/cbsnews.py +++ b/src/streamlink/plugins/cbsnews.py @@ -1,5 +1,5 @@ """ -$description 24-hour live streaming world news channel, based in the United States of America. +$description 24-hour live-streaming world news channel, based in the United States of America. $url cbsnews.com $type live """ @@ -12,32 +12,33 @@ @pluginmatcher(re.compile( - r"https?://www\.cbsnews\.com/live/" + r"https?://(?:www\.)?cbsnews\.com/(?:\w+/)?live/?" )) class CBSNews(Plugin): def _get_streams(self): - items = self.session.http.get(self.url, schema=validate.Schema( - re.compile(r"CBSNEWS.defaultPayload = (\{.*)"), + data = self.session.http.get(self.url, schema=validate.Schema( + re.compile(r"CBSNEWS\.defaultPayload\s*=\s*(\{.*?})\s*\n"), validate.none_or_all( validate.get(1), validate.parse_json(), { - "items": [ - validate.all( - { - "video": validate.url(), - "format": "application/x-mpegURL", - }, - validate.get("video"), - ), - ], + "items": [{ + "id": str, + "canonicalTitle": str, + "video": validate.url(), + "format": "application/x-mpegURL", + }], }, - validate.get("items"), + validate.get(("items", 0)), + validate.union_get("id", "canonicalTitle", "video"), ), )) - if items: - for hls_url in items: - yield from HLSStream.parse_variant_playlist(self.session, hls_url).items() + if not data: + return + + self.id, self.title, hls_url = data + + return HLSStream.parse_variant_playlist(self.session, hls_url) __plugin__ = CBSNews
diff --git a/tests/plugins/test_cbsnews.py b/tests/plugins/test_cbsnews.py --- a/tests/plugins/test_cbsnews.py +++ b/tests/plugins/test_cbsnews.py @@ -6,9 +6,18 @@ class TestPluginCanHandleUrlCBSNews(PluginCanHandleUrl): __plugin__ = CBSNews should_match = [ - "https://www.cbsnews.com/live/cbs-sports-hq/", - "https://www.cbsnews.com/live/cbsn-local-bay-area/", + "https://cbsnews.com/live", + "https://cbsnews.com/live/cbs-sports-hq", + "https://cbsnews.com/sanfrancisco/live", + "https://cbsnews.com/live/", + "https://cbsnews.com/live/cbs-sports-hq/", + "https://cbsnews.com/sanfrancisco/live/", "https://www.cbsnews.com/live/", + "https://www.cbsnews.com/live/cbs-sports-hq/", + "https://www.cbsnews.com/sanfrancisco/live/", + "https://www.cbsnews.com/live/#x", + "https://www.cbsnews.com/live/cbs-sports-hq/#x", + "https://www.cbsnews.com/sanfrancisco/live/#x", ] should_not_match = [
plugins.cbsnews: livestreams are not parsed for local stations ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description Livestreams for local CBSNews stations are not parsed by streamlink. Example: https://www.cbsnews.com/losangeles/live/ is a valid livestream (it plays fine in an incognito browser) but streamlink reports no valid plugin to handle the URL. ### Debug log ```text $streamlink https://www.cbsnews.com/losangeles/live --loglevel debug [cli][debug] OS: Windows 10 [cli][debug] Python: 3.10.6 [cli][debug] Streamlink: 4.3.0 [cli][debug] Dependencies: [cli][debug] isodate: 0.6.1 [cli][debug] lxml: 4.9.1 [cli][debug] pycountry: 22.3.5 [cli][debug] pycryptodome: 3.15.0 [cli][debug] PySocks: 1.7.1 [cli][debug] requests: 2.28.1 [cli][debug] websocket-client: 1.3.3 [cli][debug] Arguments: [cli][debug] url=https://www.cbsnews.com/losangeles/live [cli][debug] --loglevel=debug [cli][debug] --player="C:\Program Files\MPC-HC\mpc-hc64.exe" [cli][debug] --player-args=/new /play /close [cli][debug] --title={author} - {category} - {title} [cli][debug] --ffmpeg-ffmpeg=C:\Program Files\Streamlink\ffmpeg\ffmpeg.exe error: No plugin can handle URL: https://www.cbsnews.com/losangeles/live ```
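The report boils down to the plugin's URL matcher: local-station live pages have an extra path segment before /live/. A quick check of the updated pattern from the patch above against both URL shapes:

```python
import re

# Updated matcher from the patch above: an optional station segment may precede /live
matcher = re.compile(r"https?://(?:www\.)?cbsnews\.com/(?:\w+/)?live/?")

for url in (
    "https://www.cbsnews.com/live/",
    "https://www.cbsnews.com/live/cbs-sports-hq/",
    "https://www.cbsnews.com/losangeles/live/",  # the URL from the report
):
    print(url, "->", "match" if matcher.match(url) else "no match")
```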
2022-08-17T06:15:33
streamlink/streamlink
4,759
streamlink__streamlink-4759
[ "4755" ]
c1a14d6338a36794cb235e00e79f4faed1614776
diff --git a/src/streamlink/plugins/atresplayer.py b/src/streamlink/plugins/atresplayer.py --- a/src/streamlink/plugins/atresplayer.py +++ b/src/streamlink/plugins/atresplayer.py @@ -7,12 +7,12 @@ import logging import re +from urllib.parse import urlparse from streamlink.plugin import Plugin, pluginmatcher from streamlink.plugin.api import validate from streamlink.stream.dash import DASHStream from streamlink.stream.hls import HLSStream -from streamlink.utils.data import search_dict from streamlink.utils.url import update_scheme log = logging.getLogger(__name__) @@ -24,15 +24,15 @@ class AtresPlayer(Plugin): def _get_streams(self): self.url = update_scheme("https://", self.url) + path = urlparse(self.url).path api_url = self.session.http.get(self.url, schema=validate.Schema( re.compile(r"""window.__PRELOADED_STATE__\s*=\s*({.*?});""", re.DOTALL), validate.none_or_all( validate.get(1), validate.parse_json(), - validate.transform(search_dict, key="href"), - [validate.url()], - validate.get(0), + {"links": {path: {"href": validate.url()}}}, + validate.get(("links", path, "href")), ), )) if not api_url: @@ -41,37 +41,46 @@ def _get_streams(self): player_api_url = self.session.http.get(api_url, schema=validate.Schema( validate.parse_json(), - validate.transform(search_dict, key="urlVideo"), + {"urlVideo": validate.url()}, + validate.get("urlVideo"), )) - stream_schema = validate.Schema( + log.debug(f"Player API URL: {player_api_url}") + sources = self.session.http.get(player_api_url, acceptable_status=(200, 403), schema=validate.Schema( validate.parse_json(), - { - "sources": [ - validate.all( - { - "src": validate.url(), - validate.optional("type"): str, - }, - ), - ], - }, - validate.get("sources"), - ) + validate.any( + { + "error": str, + "error_description": str, + }, + { + "sources": [ + validate.all( + { + "src": validate.url(), + validate.optional("type"): str, + }, + validate.union_get("type", "src"), + ), + ], + }, + ), + )) + if "error" in sources: + log.error(f"Player API error: {sources['error']} - {sources['error_description']}") + return - for api_url in player_api_url: - log.debug(f"Player API URL: {api_url}") - for source in self.session.http.get(api_url, schema=stream_schema): - log.debug(f"Stream source: {source['src']} ({source.get('type', 'n/a')})") + for streamtype, streamsrc in sources.get("sources"): + log.debug(f"Stream source: {streamsrc} ({streamtype or 'n/a'})") - if "type" not in source or source["type"] == "application/vnd.apple.mpegurl": - streams = HLSStream.parse_variant_playlist(self.session, source["src"]) - if not streams: - yield "live", HLSStream(self.session, source["src"]) - else: - yield from streams.items() - elif source["type"] == "application/dash+xml": - yield from DASHStream.parse_manifest(self.session, source["src"]).items() + if streamtype == "application/vnd.apple.mpegurl": + streams = HLSStream.parse_variant_playlist(self.session, streamsrc) + if not streams: + yield "live", HLSStream(self.session, streamsrc) + else: + yield from streams.items() + elif streamtype == "application/dash+xml": + yield from DASHStream.parse_manifest(self.session, streamsrc).items() __plugin__ = AtresPlayer
plugins.atresplayer: Live streams are not working. ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description As of today, Atresplayer live streams are not working. ### Debug log ```text [cli][debug] OS: Windows 10 [cli][debug] Python: 3.10.6 [cli][debug] Streamlink: 4.3.0 [cli][debug] Dependencies: [cli][debug] isodate: 0.6.1 [cli][debug] lxml: 4.9.1 [cli][debug] pycountry: 22.3.5 [cli][debug] pycryptodome: 3.15.0 [cli][debug] PySocks: 1.7.1 [cli][debug] requests: 2.28.1 [cli][debug] websocket-client: 1.3.3 [cli][debug] Arguments: [cli][debug] url=https://www.atresplayer.com/directos/antena3/ [cli][debug] stream=['best'] [cli][debug] --loglevel=debug [cli][debug] --hls-live-edge=1 [cli][debug] --ffmpeg-ffmpeg=C:\Program Files\Streamlink\ffmpeg\ffmpeg.exe [cli][info] Found matching plugin atresplayer for URL https://www.atresplayer.com/directos/antena3/ error: Unable to validate response text: ValidationError(NoneOrAllSchema): ValidationError(type): Type of <generator object search_dict at 0x000002C64BA79930> should be list, but is generator ```
The validation schema is indeed broken. However, similar to the mitele plugin #4726, the site requires a paid subscription for all channels (I checked with a local German IP as well as a Spanish proxy/VPN). Submitting a fix for the validation schemas is thus pointless if I can't check whether the plugin is working correctly. @bastimeyer Neither atresplayer nor mitele requires a paid subscription as long as you have a spanish IP. I just loaded those two pages with a spanish VPN and it works perfectly: https://www.atresplayer.com/directos/antena3/ https://www.mitele.es/directo/telecinco/ The VPN I am using gets blocked and they are showing a requirement for a paid subscription, on their site as well as on the API. ``` $ curl -s 'https://api.atresplayer.com/player/v1/live/5a6a165a7ed1a834493ebf6a' {"error":"required_paid","error_description":"Required paid"} $ curl -s 'https://api.atresplayer.com/player/v1/live/5a6a165a7ed1a834493ebf6a' {"error":"vpn","error_description":"Using VPN","detail":{"ip":"redacted","time":"2022-08-20T20:07:14Z"}} ``` I can open a PR with my changes for you to test though.
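The curl output above shows the two error payloads the player API can return instead of stream sources. Here is a small requests sketch of the same check the patched plugin performs; the channel ID is the one from the curl commands and is only an example, and this is not the plugin's own code.

```python
import requests

# Player API URL taken from the curl commands above
PLAYER_API = "https://api.atresplayer.com/player/v1/live/5a6a165a7ed1a834493ebf6a"


def get_stream_sources():
    """Return the list of stream sources, or raise if the API answers with an error payload."""
    data = requests.get(PLAYER_API, timeout=10).json()
    if "error" in data:
        # e.g. {"error": "required_paid", ...} or {"error": "vpn", ...} as shown above
        raise RuntimeError(f"Player API error: {data['error']} - {data.get('error_description')}")
    return data.get("sources", [])


for source in get_stream_sources():
    print(source.get("type"), source.get("src"))
```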
2022-08-20T20:17:18
streamlink/streamlink
4,760
streamlink__streamlink-4760
[ "4726" ]
c1a14d6338a36794cb235e00e79f4faed1614776
diff --git a/src/streamlink/plugins/mitele.py b/src/streamlink/plugins/mitele.py --- a/src/streamlink/plugins/mitele.py +++ b/src/streamlink/plugins/mitele.py @@ -21,83 +21,91 @@ r"https?://(?:www\.)?mitele\.es/directo/(?P<channel>[\w-]+)" )) class Mitele(Plugin): - caronte_url = "https://caronte.mediaset.es/delivery/channel/mmc/{channel}/mtweb" - gbx_url = "https://mab.mediaset.es/1.0.0/get?oid=mtmw&eid=%2Fapi%2Fmtmw%2Fv2%2Fgbx%2Fmtweb%2Flive%2Fmmc%2F{channel}" + URL_CARONTE = "https://caronte.mediaset.es/delivery/channel/mmc/{channel}/mtweb" + URL_GBX = "https://mab.mediaset.es/1.0.0/get" - error_schema = validate.Schema({"code": int}) - caronte_schema = validate.Schema(validate.parse_json(), validate.any( - { - "cerbero": validate.url(), - "bbx": str, - "dls": [{ - "lid": validate.all(int, validate.transform(str)), - "format": validate.any("hls", "dash", "smooth"), - "stream": validate.url(), - validate.optional("assetKey"): str, - "drm": bool, - }], - }, - error_schema, - )) - gbx_schema = validate.Schema( - validate.parse_json(), - {"gbx": str}, - validate.get("gbx") - ) - cerbero_schema = validate.Schema( - validate.parse_json(), - validate.any( - validate.all( - {"tokens": {str: {"cdn": str}}}, - validate.get("tokens") - ), - error_schema, - ) - ) - token_errors = { - 4038: "User has no privileges" + TOKEN_ERRORS = { + 4038: "User has no privileges", } def _get_streams(self): channel = self.match.group("channel") - pdata = self.session.http.get(self.caronte_url.format(channel=channel), - acceptable_status=(200, 403, 404), - schema=self.caronte_schema) - gbx = self.session.http.get(self.gbx_url.format(channel=channel), - schema=self.gbx_schema) - + pdata = self.session.http.get( + self.URL_CARONTE.format(channel=channel), + acceptable_status=(200, 403, 404), + schema=validate.Schema( + validate.parse_json(), + validate.any( + {"code": int}, + { + "cerbero": validate.url(), + "bbx": str, + "dls": validate.all( + [{ + "drm": bool, + "format": str, + "stream": validate.url(), + "lid": validate.all(int, validate.transform(str)), + validate.optional("assetKey"): str, + }], + validate.filter(lambda obj: obj["format"] == "hls") + ), + }, + ), + ), + ) if "code" in pdata: - log.error("error getting pdata: {}".format(pdata["code"])) + log.error(f"Error getting pdata: {pdata['code']}") return - tokens = self.session.http.post(pdata["cerbero"], - acceptable_status=(200, 403, 404), - json={"bbx": pdata["bbx"], "gbx": gbx}, - headers={"origin": "https://www.mitele.es"}, - schema=self.cerbero_schema) + gbx = self.session.http.get( + self.URL_GBX, + params={ + "oid": "mtmw", + "eid": f"/api/mtmw/v2/gbx/mtweb/live/mmc/{channel}", + }, + schema=validate.Schema( + validate.parse_json(), + {"gbx": str}, + validate.get("gbx"), + ), + ) + tokens = self.session.http.post( + pdata["cerbero"], + acceptable_status=(200, 403, 404), + json={ + "bbx": pdata["bbx"], + "gbx": gbx, + }, + headers={"origin": "https://www.mitele.es"}, + schema=validate.Schema( + validate.parse_json(), + validate.any( + {"code": int}, + validate.all( + {"tokens": {str: {"cdn": str}}}, + validate.get("tokens") + ), + ), + ), + ) if "code" in tokens: - log.error("Could not get stream tokens: {} ({})".format(tokens["code"], - self.token_errors.get(tokens["code"], "unknown error"))) + tokenerrors = self.TOKEN_ERRORS.get(tokens["code"], "unknown error") + log.error(f"Could not get stream tokens: {tokens['code']} ({tokenerrors})") return - list_urls = [] + urls = set() for stream in pdata["dls"]: if stream["drm"]: log.warning("Stream may 
be protected by DRM") - else: - sformat = stream.get("format") - log.debug("Stream: {} ({})".format(stream["stream"], sformat or "n/a")) - cdn_token = tokens.get(stream["lid"], {}).get("cdn", "") - qsd = parse_qsd(cdn_token) - if sformat == "hls": - list_urls.append(update_qsd(stream["stream"], qsd)) - - if not list_urls: - return + continue + cdn_token = tokens.get(stream["lid"], {}).get("cdn", "") + qsd = parse_qsd(cdn_token) + urls.add(update_qsd(stream["stream"], qsd, quote_via=lambda string, *_, **__: string)) - for url in list(set(list_urls)): + for url in urls: yield from HLSStream.parse_variant_playlist(self.session, url, name_fmt="{pixels}_{bitrate}").items()
plugins.mitele: 403 Client Error: Forbidden token format ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description New changes on API ### Debug log ```text [cli][debug] OS: Linux-5.4.0-122-generic-x86_64-with-glibc2.31 [cli][debug] Python: 3.10.5 [cli][debug] Streamlink: 4.2.0 [cli][debug] Dependencies: [cli][debug] isodate: 0.6.1 [cli][debug] lxml: 4.9.1 [cli][debug] pycountry: 22.3.5 [cli][debug] pycryptodome: 3.15.0 [cli][debug] PySocks: 1.7.1 [cli][debug] requests: 2.28.1 [cli][debug] websocket-client: 1.3.3 [cli][debug] Arguments: [cli][debug] url=https://mitele.es/directo/telecinco [cli][debug] stream=['best'] [cli][debug] --loglevel=debug [cli][debug] --player=vlc [cli][info] Found matching plugin mitele for URL https://mitele.es/directo/telecinco [plugins.mitele][debug] Stream: https://directos.mitele.es/orilinear01/live/linear01/main/main.isml/web.m3u8 (hls) [plugins.mitele][debug] Stream: https://directos.mitele.es/orilinear17/live/linear17/playlist/playlist.isml/ctv.m3u8 (hls) [plugins.mitele][debug] Stream: https://livek.mediaset.es/orilinear01/live/linear01/main/main.isml/web.m3u8 (hls) [plugins.mitele][debug] Stream: https://livek.mediaset.es/orilinear17/live/linear17/playlist/playlist.isml/ctv.m3u8 (hls) [utils.l10n][debug] Language code: es_ES error: Unable to open URL: https://livek.mediaset.es/orilinear17/live/linear17/playlist/playlist.isml/ctv.m3u8?hdnts=st%3D1660114575~exp%3D1660200974~acl%3D%2F%2A~hmac%3D2e0d916d43994680b72c50d5dfecadb812c920ee14a4cb918789c3aa8d803f54 (403 Client Error: Forbidden token format for url: https://livek.mediaset.es/orilinear17/live/linear17/playlist/playlist.isml/ctv.m3u8?hdnts=st%3D1660114575~exp%3D1660200974~acl%3D%2F%2A~hmac%3D2e0d916d43994680b72c50d5dfecadb812c920ee14a4cb918789c3aa8d803f54) ```
> 403 Client Error: Forbidden token format

The site requires a monthly subscription to access that channel. Not sure if that's because of me trying to access it with a German IP, but other channels are showing a regular geo-protection message. Are other channels working fine?

> Are other channels working fine?

Yes, and this channel can be accessed for free too; it is geo-blocked and only available from Spain. The issue still continues: I am accessing from Spain, and it still gives the reported "Client Error: Forbidden token". The website itself, https://www.mitele.es/directo/telecinco/, works perfectly in Chrome without any subscription.
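For context on why the encoded token triggers the 403 above: the patch in this record stops percent-encoding the CDN token when it is appended to the stream URL (note the identity `quote_via` callable passed to `update_qsd`). Below is a minimal standard-library sketch of the same idea, with a made-up URL and token, assuming the CDN expects the `hdnts` value verbatim:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit


def append_token(url: str, token_qs: str) -> str:
    # Merge the token's query-string parameters into the stream URL
    # without percent-encoding them (the encoded "=" and "/" are what the CDN rejects).
    parts = urlsplit(url)
    params = dict(parse_qsl(parts.query))
    params.update(parse_qsl(token_qs))
    query = urlencode(params, quote_via=lambda string, *_, **__: string)
    return urlunsplit(parts._replace(query=query))


# illustrative values only, not a real stream URL or token
print(append_token(
    "https://example.com/playlist.m3u8",
    "hdnts=st=1660114575~exp=1660200974~acl=/*~hmac=abc123",
))
```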
2022-08-20T21:27:48
streamlink/streamlink
4,761
streamlink__streamlink-4761
[ "4753" ]
8c1430ea0579550fc24de9c7189e3df5f07466f4
diff --git a/src/streamlink/plugins/ustreamtv.py b/src/streamlink/plugins/ustreamtv.py --- a/src/streamlink/plugins/ustreamtv.py +++ b/src/streamlink/plugins/ustreamtv.py @@ -462,13 +462,15 @@ def open(self): @pluginmatcher(re.compile(r""" - https?://(?:(www\.)?ustream\.tv|video\.ibm\.com) - (?: - (/embed/|/channel/id/)(?P<channel_id>\d+) - )? - (?: - (/embed)?/recorded/(?P<video_id>\d+) - )? + https?://(?:(?:www\.)?ustream\.tv|video\.ibm\.com) + (?: + /combined-embed + /(?P<combined_channel_id>\d+) + (?:/video/(?P<combined_video_id>\d+))? + | + (?:(?:/embed/|/channel/(?:id/)?)(?P<channel_id>\d+))? + (?:(?:/embed)?/recorded/(?P<video_id>\d+))? + ) """, re.VERBOSE)) @pluginargument( "password", @@ -481,11 +483,11 @@ class UStreamTV(Plugin): STREAM_READY_TIMEOUT = 15 def _get_media_app(self): - video_id = self.match.group("video_id") + video_id = self.match.group("video_id") or self.match.group("combined_video_id") if video_id: return video_id, "recorded" - channel_id = self.match.group("channel_id") + channel_id = self.match.group("channel_id") or self.match.group("combined_channel_id") if not channel_id: channel_id = self.session.http.get( self.url,
diff --git a/tests/plugins/test_ustreamtv.py b/tests/plugins/test_ustreamtv.py --- a/tests/plugins/test_ustreamtv.py +++ b/tests/plugins/test_ustreamtv.py @@ -9,14 +9,47 @@ class TestPluginCanHandleUrlUStreamTV(PluginCanHandleUrl): __plugin__ = UStreamTV - should_match = [ - "http://www.ustream.tv/streamlink", - "http://www.ustream.tv/channel/id/1234", - "http://www.ustream.tv/embed/1234", - "http://www.ustream.tv/recorded/6543", - "http://www.ustream.tv/embed/recorded/6543", - "https://video.ibm.com/channel/H5rQLwmTGrW", - "https://video.ibm.com/recorded/124680279", + should_match_groups = [ + ( + "https://www.ustream.tv/nasahdtv", + {}, + ), + ( + "https://www.ustream.tv/channel/6540154", + {"channel_id": "6540154"}, + ), + ( + "https://www.ustream.tv/channel/id/6540154", + {"channel_id": "6540154"}, + ), + ( + "https://www.ustream.tv/embed/6540154", + {"channel_id": "6540154"}, + ), + ( + "https://www.ustream.tv/recorded/132041157", + {"video_id": "132041157"}, + ), + ( + "https://www.ustream.tv/embed/recorded/132041157", + {"video_id": "132041157"}, + ), + ( + "https://www.ustream.tv/combined-embed/6540154", + {"combined_channel_id": "6540154"}, + ), + ( + "https://www.ustream.tv/combined-embed/6540154/video/132041157", + {"combined_channel_id": "6540154", "combined_video_id": "132041157"}, + ), + ( + "https://video.ibm.com/nasahdtv", + {}, + ), + ( + "https://video.ibm.com/recorded/132041157", + {"video_id": "132041157"}, + ), ]
plugins.ustreamtv: Plug-in does not handle combined-embed video streams ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description Many videos on IBM's video streaming service (ustream.tv) are now recorded as chunked .m4v, which is not detected by the plug-in. The source only seems to handle .mp4 streams. Example URL: https://www.ustream.tv/combined-embed/11748282/video/132038074 ### Debug log ```text C:\Users\Chad\Desktop>streamlink -v https://www.ustream.tv/combined-embed/11748282/video/132038074 [cli][info] Found matching plugin ustreamtv for URL https://www.ustream.tv/combined-embed/11748282/video/132038074 error: No playable streams found on this URL: https://www.ustream.tv/combined-embed/11748282/video/132038074 Sample header: https://uhsakamai-a.akamaihd.net/sjc/omega/vod/us1-37ed7ef2-e250-4085-b0e2-9f5bbf7d69e8/10584000/53926414920000/plain/rfc/4/chunk_0_a0e49b82a5.m4vh Sample chunks: https://uhsakamai-a.akamaihd.net/sjc/omega/vod/us1-37ed7ef2-e250-4085-b0e2-9f5bbf7d69e8/10584000/53926414920000/plain/rfc/4/chunk_229_d777c5fb22.m4v https://uhsakamai-a.akamaihd.net/sjc/omega/vod/us1-37ed7ef2-e250-4085-b0e2-9f5bbf7d69e8/10584000/53926414920000/plain/rfc/2/chunk_230_1c8466259a.m4v ```
> are now recorded as chunked .m4v, which is not detected by the plug-in. The source only seems to handle .mp4 streams. That's not true. The plugin does support initialization segments, as you can see in the `UStreamTVStreamWriter` class: https://github.com/streamlink/streamlink/blob/2c0925335ebeb8dd5dda95dee79d6b39f36375c4/src/streamlink/plugins/ustreamtv.py#L340-L383 > error: No playable streams found on this URL: https://www.ustream.tv/combined-embed/11748282/video/132038074 The issue is the URL format which the plugin does not support. If you change it to the one it expects while keeping the channel ID and video ID, the stream works fine: ``` $ streamlink -l debug 'ustream.tv/channel/id/11748282/recorded/132038074' best [cli][debug] OS: Linux-5.19.2-1-git-x86_64-with-glibc2.36 [cli][debug] Python: 3.10.6 [cli][debug] Streamlink: 4.3.0+7.gb6166ad7 [cli][debug] Dependencies: [cli][debug] isodate: 0.6.1 [cli][debug] lxml: 4.9.0 [cli][debug] pycountry: 22.3.5 [cli][debug] pycryptodome: 3.14.1 [cli][debug] PySocks: 1.7.1 [cli][debug] requests: 2.28.1 [cli][debug] websocket-client: 1.3.2 [cli][debug] Arguments: [cli][debug] url=ustream.tv/channel/id/11748282/recorded/132038074 [cli][debug] stream=['best'] [cli][debug] --loglevel=debug [cli][debug] --player=mpv [cli][info] Found matching plugin ustreamtv for URL ustream.tv/channel/id/11748282/recorded/132038074 [plugins.ustreamtv][debug] Connecting to UStream API: media_id=132038074, application=recorded, referrer=https://ustream.tv/channel/id/11748282/recorded/132038074, cluster=live [plugin.api.websocket][debug] Connecting to: wss://r3563127-1-132038074-recorded-ws-live.ums.services.video.ibm.com/1/ustream [plugins.ustreamtv][debug] Waiting for stream data (for at most 15 seconds)... [plugins.ustreamtv][debug] Processing 'moduleInfo' - 'cdnConfig' [plugins.ustreamtv][debug] Processing 'moduleInfo' - 'stream' [cli][info] Available streams: 252p+a83k (worst), 252p+a188k, 360p+a83k, 360p+a188k, 486p+a83k, 486p+a188k, 720p+a83k, 720p+a188k, 1072p+a83k, 1072p+a188k (best) [cli][info] Opening stream: 1072p+a188k (muxed-stream) [cli][info] Starting player: mpv [stream.ffmpegmux][debug] Opening ustreamtv substream [plugins.ustreamtv][debug] Stream opened, keeping websocket connection alive [plugins.ustreamtv][debug] Adding video segment 0 to queue [stream.ffmpegmux][debug] Opening ustreamtv substream [plugins.ustreamtv][debug] Adding video segment 1 to queue [plugins.ustreamtv][debug] Adding video segment 2 to queue [plugins.ustreamtv][debug] Adding video segment 3 to queue [plugins.ustreamtv][debug] Adding video segment 4 to queue [plugins.ustreamtv][debug] Adding video segment 5 to queue [plugins.ustreamtv][debug] Adding video segment 6 to queue [plugins.ustreamtv][debug] Adding video segment 7 to queue [plugins.ustreamtv][debug] Adding video segment 8 to queue [plugins.ustreamtv][debug] Adding video segment 9 to queue [plugins.ustreamtv][debug] Adding video segment 10 to queue [plugins.ustreamtv][debug] Adding video segment 11 to queue [plugins.ustreamtv][debug] Adding video segment 12 to queue [plugins.ustreamtv][debug] Adding video segment 13 to queue [plugins.ustreamtv][debug] Adding video segment 14 to queue [plugins.ustreamtv][debug] Adding video segment 15 to queue [plugins.ustreamtv][debug] Adding video segment 16 to queue [plugins.ustreamtv][debug] Adding video segment 17 to queue [plugins.ustreamtv][debug] Adding video segment 18 to queue [plugins.ustreamtv][debug] Adding video segment 19 to queue [plugins.ustreamtv][debug] Adding video 
segment 20 to queue [plugins.ustreamtv][debug] Adding audio segment 0 to queue [utils.named_pipe][info] Creating pipe streamlinkpipe-87806-1-2459 [plugins.ustreamtv][debug] Adding audio segment 1 to queue [plugins.ustreamtv][debug] Adding audio segment 2 to queue [plugins.ustreamtv][debug] Adding audio segment 3 to queue [plugins.ustreamtv][debug] Adding audio segment 4 to queue [plugins.ustreamtv][debug] Adding audio segment 5 to queue [plugins.ustreamtv][debug] Adding audio segment 6 to queue [plugins.ustreamtv][debug] Adding audio segment 7 to queue [plugins.ustreamtv][debug] Adding audio segment 8 to queue [plugins.ustreamtv][debug] Adding audio segment 9 to queue [plugins.ustreamtv][debug] Adding audio segment 10 to queue [plugins.ustreamtv][debug] Adding audio segment 11 to queue [plugins.ustreamtv][debug] Adding audio segment 12 to queue [plugins.ustreamtv][debug] Adding audio segment 13 to queue [plugins.ustreamtv][debug] Adding audio segment 14 to queue [plugins.ustreamtv][debug] Adding audio segment 15 to queue [plugins.ustreamtv][debug] Adding audio segment 16 to queue [plugins.ustreamtv][debug] Adding audio segment 17 to queue [plugins.ustreamtv][debug] Adding audio segment 18 to queue [plugins.ustreamtv][debug] Adding audio segment 19 to queue [plugins.ustreamtv][debug] Adding audio segment 20 to queue [utils.named_pipe][info] Creating pipe streamlinkpipe-87806-2-3406 [stream.ffmpegmux][debug] ffmpeg command: /usr/bin/ffmpeg -nostats -y -i /tmp/streamlinkpipe-87806-1-2459 -i /tmp/streamlinkpipe-87806-2-3406 -c:v copy -c:a copy -map 0 -map 1 -f matroska pipe:1 [stream.ffmpegmux][debug] Starting copy to pipe: /tmp/streamlinkpipe-87806-1-2459 [stream.ffmpegmux][debug] Starting copy to pipe: /tmp/streamlinkpipe-87806-2-3406 [cli][debug] Pre-buffering 8192 bytes ``` This means that the plugin's URL matcher needs to be updated for these particular URL formats: `combined-embed/{channel_id}/video/{video_id}` https://github.com/streamlink/streamlink/blob/2c0925335ebeb8dd5dda95dee79d6b39f36375c4/src/streamlink/plugins/ustreamtv.py#L464-L472 Sorry about the misdiagnosis, but it was 4:00am and I got sloppy reviewing the source. Fortunately, this fix seems easier and there's also a simple workaround now that we know how to map the URLs. The title has been updated.
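As a stopgap until the matcher update lands, the URL mapping described above can be automated. A rough sketch follows (the regex is an assumption based on the URL formats quoted in this thread, not the plugin's actual matcher):

```python
import re


def map_combined_embed(url: str) -> str:
    # Rewrite "combined-embed" URLs into the channel/recorded form
    # that the current plugin already understands.
    match = re.search(r"/combined-embed/(?P<channel_id>\d+)/video/(?P<video_id>\d+)", url)
    if not match:
        return url
    return "https://www.ustream.tv/channel/id/{channel_id}/recorded/{video_id}".format(**match.groupdict())


print(map_combined_embed("https://www.ustream.tv/combined-embed/11748282/video/132038074"))
# https://www.ustream.tv/channel/id/11748282/recorded/132038074
```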
2022-08-21T09:00:45
streamlink/streamlink
4,763
streamlink__streamlink-4763
[ "4762" ]
8f24ad51acdbca5592dca5bf80c90d0d885e90df
diff --git a/src/streamlink/plugins/huya.py b/src/streamlink/plugins/huya.py --- a/src/streamlink/plugins/huya.py +++ b/src/streamlink/plugins/huya.py @@ -54,7 +54,7 @@ def _get_streams(self): { "data": [{ "gameLiveInfo": { - "liveId": int, + "liveId": str, "nick": str, "roomName": str, },
plugins.huya: As of today, Huya plugin has been broken ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description When I try to open any public Huya stream I get an error message. Assuming Huya has changed how live IDs are handled and switched to strings. ### Debug log ```text hina@Hinas-MacBook-Pro ~ % streamlink https://www.huya.com/660108 best --loglevel debug [cli][debug] OS: macOS 12.5 [cli][debug] Python: 3.10.6 [cli][debug] Streamlink: 4.3.0 [cli][debug] Dependencies: [cli][debug] isodate: 0.6.1 [cli][debug] lxml: 4.9.1 [cli][debug] pycountry: 22.3.5 [cli][debug] pycryptodome: 3.15.0 [cli][debug] PySocks: 1.7.1 [cli][debug] requests: 2.28.1 [cli][debug] websocket-client: 1.3.3 [cli][debug] Arguments: [cli][debug] url=https://www.huya.com/660108 [cli][debug] stream=['best'] [cli][debug] --loglevel=debug [cli][info] Found matching plugin huya for URL https://www.huya.com/660108 error: Unable to validate response text: ValidationError(NoneOrAllSchema): ValidationError(dict): Unable to validate value of key 'data' Context(AnySchema): ValidationError(dict): Unable to validate value of key 'gameLiveInfo' Context(dict): Unable to validate value of key 'liveId' Context(type): Type of '7134607205476108031' should be int, but is str hina@Hinas-MacBook-Pro ~ % ```
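The validation error in the log comes from the plugin pinning `liveId` to `int` while the API now returns it as a string. The patch in this record simply switches the expected type to `str`; a slightly more defensive variant (a sketch only, using streamlink's `validate` helpers the same way other plugins in this dump do) would accept either type and normalize it:

```python
from streamlink.plugin.api import validate

# Accept the field as either int or str and normalize it to str,
# so another type switch on the API side would not break validation.
schema = validate.Schema(
    {
        "liveId": validate.all(
            validate.any(int, str),
            validate.transform(str),
        ),
    },
    validate.get("liveId"),
)

print(schema.validate({"liveId": 7134607205476108031}))    # "7134607205476108031"
print(schema.validate({"liveId": "7134607205476108031"}))  # "7134607205476108031"
```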
2022-08-22T11:58:58
streamlink/streamlink
4,764
streamlink__streamlink-4764
[ "4752" ]
38b5c76dd147c665ecc13d48a05b0230032f25fc
diff --git a/src/streamlink_cli/main.py b/src/streamlink_cli/main.py --- a/src/streamlink_cli/main.py +++ b/src/streamlink_cli/main.py @@ -32,7 +32,6 @@ from streamlink_cli.output import FileOutput, PlayerOutput from streamlink_cli.utils import Formatter, HTTPServer, datetime, ignored from streamlink_cli.utils.progress import Progress -from streamlink_cli.utils.terminal import TerminalOutput ACCEPTABLE_ERRNO = (errno.EPIPE, errno.EINVAL, errno.ECONNRESET) @@ -398,7 +397,10 @@ def read_stream(stream, output, prebuffer, formatter: Formatter, chunk_size=8192 iter(partial(stream.read, chunk_size), b"") ) if show_progress or show_record_progress: - progress = Progress(output=TerminalOutput(sys.stderr)) + progress = Progress( + sys.stderr, + output.filename or output.record.filename, + ) stream_iterator = progress.iter(stream_iterator) try: diff --git a/src/streamlink_cli/utils/progress.py b/src/streamlink_cli/utils/progress.py --- a/src/streamlink_cli/utils/progress.py +++ b/src/streamlink_cli/utils/progress.py @@ -1,35 +1,166 @@ +import os from collections import deque from math import floor +from pathlib import PurePath +from shutil import get_terminal_size +from string import Formatter as StringFormatter from threading import Event, RLock, Thread from time import time -from typing import Deque, Iterable, Iterator, Optional, Tuple +from typing import Callable, Deque, Dict, Iterable, Iterator, List, Optional, TextIO, Tuple, Union -from streamlink_cli.utils.terminal import TerminalOutput +from streamlink.compat import is_win32 + + +_stringformatter = StringFormatter() +_TFormat = Iterable[Iterable[Tuple[str, Optional[str], Optional[str], Optional[str]]]] class ProgressFormatter: - FORMATS: Iterable[str] = ( + # Store formats as a tuple of lists of parsed format strings, + # so when iterating, we don't have to parse over and over again. + # Reserve at least 15 characters for the path, so it can be truncated with enough useful information. + FORMATS: _TFormat = tuple(list(_stringformatter.parse(fmt)) for fmt in ( + "[download] Written {written} to {path:15} ({elapsed} @ {speed})", "[download] Written {written} ({elapsed} @ {speed})", "[download] {written} ({elapsed} @ {speed})", "[download] {written} ({elapsed})", "[download] {written}", - ) - FORMATS_NOSPEED: Iterable[str] = ( + )) + FORMATS_NOSPEED: _TFormat = tuple(list(_stringformatter.parse(fmt)) for fmt in ( + "[download] Written {written} to {path:15} ({elapsed})", "[download] Written {written} ({elapsed})", "[download] {written} ({elapsed})", "[download] {written}", + )) + + # Use U+2026 (HORIZONTAL ELLIPSIS) to be able to distinguish between "." and ".." when truncating relative paths + ELLIPSIS: str = "…" + + # widths generated from + # https://www.unicode.org/Public/4.0-Update/EastAsianWidth-4.0.0.txt + # See https://github.com/streamlink/streamlink/pull/2032 + WIDTHS: Iterable[Tuple[int, int]] = ( + (13, 1), + (15, 0), + (126, 1), + (159, 0), + (687, 1), + (710, 0), + (711, 1), + (727, 0), + (733, 1), + (879, 0), + (1154, 1), + (1161, 0), + (4347, 1), + (4447, 2), + (7467, 1), + (7521, 0), + (8369, 1), + (8426, 0), + (9000, 1), + (9002, 2), + (11021, 1), + (12350, 2), + (12351, 1), + (12438, 2), + (12442, 0), + (19893, 2), + (19967, 1), + (55203, 2), + (63743, 1), + (64106, 2), + (65039, 1), + (65059, 0), + (65131, 2), + (65279, 1), + (65376, 2), + (65500, 1), + (65510, 2), + (120831, 1), + (262141, 2), + (1114109, 1), ) + # On Windows, we need one less space, or we overflow the line for some reason. 
+ gap = 1 if is_win32 else 0 + + @classmethod + def term_width(cls): + return get_terminal_size().columns - cls.gap + + @classmethod + def _get_width(cls, ordinal: int) -> int: + """Return the width of a specific unicode character when it would be displayed.""" + for unicode, width in cls.WIDTHS: # pragma: no branch + if ordinal <= unicode: + return width + return 1 # pragma: no cover + @classmethod - def format(cls, max_size: int, formats: Iterable[str] = FORMATS, **params) -> str: - output = "" + def width(cls, value: str): + """Return the overall width of a string when it would be displayed.""" + return sum(map(cls._get_width, map(ord, value))) + + @classmethod + def cut(cls, value: str, max_width: int) -> str: + """Cut off the beginning of a string until its display width fits into the output size.""" + current = value + for i in range(len(value)): # pragma: no branch + current = value[i:] + if cls.width(current) <= max_width: + break + return current + + @classmethod + def format(cls, formats: _TFormat, params: Dict[str, Union[str, Callable[[int], str]]]) -> str: + term_width = cls.term_width() + static: List[str] = [] + variable: List[Tuple[int, Callable[[int], str], int]] = [] for fmt in formats: - output = fmt.format(**params) - if len(output) <= max_size: + static.clear() + variable.clear() + length = 0 + # Get literal texts, static segments and variable segments from the parsed format + # and calculate the overall length of the literal texts and static segments after substituting them. + for literal_text, field_name, format_spec, conversion in fmt: + static.append(literal_text) + length += len(literal_text) + if field_name is None: + continue + if field_name not in params: + break + value_or_callable = params[field_name] + if not callable(value_or_callable): + static.append(value_or_callable) + length += len(value_or_callable) + else: + variable.append((len(static), value_or_callable, int(format_spec or 0))) + static.append("") + else: + # No variable segments? Just check if the resulting string fits into the size constraints. + if not variable: + if length > term_width: + continue + else: + break + + # Get the available space for each variable segment (share space equally and round down). + max_width = int((term_width - length) / len(variable)) + # If at least one variable segment doesn't fit, continue with the next format. + if max_width < 1 or any(max_width < min_width for _, __, min_width in variable): + continue + # All variable segments fit, so finally format them, but continue with the next format if there's an error. + # noinspection PyBroadException + try: + for idx, fn, _ in variable: + static[idx] = fn(max_width) + except Exception: + continue break - return output + return "".join(static) @staticmethod def _round(num: float, n: int = 2) -> float: @@ -69,19 +200,37 @@ def format_time(cls, elapsed: float) -> str: else: return f"{hours}{minutes}{int(elapsed % 60):1d}s" + @classmethod + def format_path(cls, path: PurePath, max_width: int) -> str: + # Quick check if the path fits + string = str(path) + width = cls.width(string) + if width <= max_width: + return string + + # Since the path doesn't fit, we always need to add an ellipsis. 
+ # On Windows, we also need to add the "drive" part (which is an empty string on PurePosixPath) + max_width -= cls.width(path.drive) + cls.width(cls.ELLIPSIS) + + # Ignore the path's first part, aka the "anchor" (drive + root) + parts = os.path.sep.join(path.parts[1:]) + truncated = cls.cut(parts, max_width) + + return f"{path.drive}{cls.ELLIPSIS}{truncated}" + class Progress(Thread): def __init__( self, - output: Optional[TerminalOutput] = None, - formatter: Optional[ProgressFormatter] = None, + stream: TextIO, + path: PurePath, interval: float = 0.25, history: int = 20, threshold: int = 2, ): """ - :param output: The output class - :param formatter: The formatter class + :param stream: The output stream + :param path: The path that's being written :param interval: Time in seconds between updates :param history: Number of seconds of how long download speed history is kept :param threshold: Number of seconds until download speed is shown @@ -91,14 +240,10 @@ def __init__( self._wait = Event() self._lock = RLock() - if output is None: - output = TerminalOutput() - if formatter is None: - formatter = ProgressFormatter() - - self.output: TerminalOutput = output - self.formatter: ProgressFormatter = formatter + self.formatter = ProgressFormatter() + self.stream: TextIO = stream + self.path: PurePath = path self.interval: float = interval self.history: Deque[Tuple[float, int]] = deque(maxlen=int(history / interval)) self.threshold: int = int(threshold / interval) @@ -131,7 +276,7 @@ def run(self): while not self._wait.wait(self.interval): self.update() finally: - self.output.end() + self.print_end() def update(self): with self._lock: @@ -150,12 +295,25 @@ def update(self): formats = formatter.FORMATS speed = formatter.format_filesize(sum(size for _, size in history) / (now - history[0][0]), "/s") - status = self.formatter.format( - self.output.term_width() - 1, - formats, + params = dict( written=formatter.format_filesize(self.overall), elapsed=formatter.format_time(now - self.started), speed=speed, + path=lambda max_width: formatter.format_path(self.path, max_width), ) - self.output.print_inplace(status) + status = formatter.format(formats, params) + + self.print_inplace(status) + + def print_inplace(self, msg: str): + """Clears the previous line and prints a new one.""" + term_width = self.formatter.term_width() + spacing = term_width - self.formatter.width(msg) + + self.stream.write(f"\r{msg}{' ' * max(0, spacing)}") + self.stream.flush() + + def print_end(self): + self.stream.write("\n") + self.stream.flush() diff --git a/src/streamlink_cli/utils/terminal.py b/src/streamlink_cli/utils/terminal.py deleted file mode 100644 --- a/src/streamlink_cli/utils/terminal.py +++ /dev/null @@ -1,100 +0,0 @@ -from shutil import get_terminal_size -from sys import stderr -from typing import TextIO - -from streamlink.compat import is_win32 - - -class TerminalOutput: - # widths generated from - # https://www.unicode.org/Public/4.0-Update/EastAsianWidth-4.0.0.txt - # See https://github.com/streamlink/streamlink/pull/2032 - WIDTHS = ( - (13, 1), - (15, 0), - (126, 1), - (159, 0), - (687, 1), - (710, 0), - (711, 1), - (727, 0), - (733, 1), - (879, 0), - (1154, 1), - (1161, 0), - (4347, 1), - (4447, 2), - (7467, 1), - (7521, 0), - (8369, 1), - (8426, 0), - (9000, 1), - (9002, 2), - (11021, 1), - (12350, 2), - (12351, 1), - (12438, 2), - (12442, 0), - (19893, 2), - (19967, 1), - (55203, 2), - (63743, 1), - (64106, 2), - (65039, 1), - (65059, 0), - (65131, 2), - (65279, 1), - (65376, 2), - (65500, 1), - (65510, 
2), - (120831, 1), - (262141, 2), - (1114109, 1), - ) - - def __init__(self, stream: TextIO = stderr): - self.stream = stream - - @classmethod - def _get_width(cls, ordinal: int) -> int: - """Returns the width of a specific unicode character when it would be displayed.""" - for unicode, width in cls.WIDTHS: # pragma: no branch - if ordinal <= unicode: - return width - return 1 # pragma: no cover - - @classmethod - def term_width(cls): - return get_terminal_size().columns - - @classmethod - def width(cls, value: str): - """Returns the overall width of a string when it would be displayed.""" - return sum(map(cls._get_width, map(ord, value))) - - @classmethod - def cut(cls, value: str, max_len: int) -> str: - """Cuts off the beginning of a string until its display width fits into the output size.""" - current = value - for i in range(len(value)): # pragma: no branch - current = value[i:] - if cls.width(current) <= max_len: - break - return current - - def print_inplace(self, msg: str): - """Clears the previous line and prints a new one.""" - term_width = self.term_width() - spacing = term_width - self.width(msg) - - # On Windows, we need one less space, or we overflow the line for some reason. - if is_win32: - spacing -= 1 - - self.stream.write(f"\r{msg}") - self.stream.write(" " * max(0, spacing)) - self.stream.flush() - - def end(self): - self.stream.write("\n") - self.stream.flush()
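For a quick feel of what the new `ProgressFormatter.format_path` helper does, here is a small usage sketch; the expected values are copied from the test patch that follows, and it assumes a POSIX system since the helper truncates using `os.path.sep`:

```python
from pathlib import PurePosixPath

from streamlink_cli.utils.progress import ProgressFormatter

path = PurePosixPath("/foobar/baz/some file name")

# Wide enough: the path is returned unchanged.
print(ProgressFormatter.format_path(path, 26))  # /foobar/baz/some file name
# Too narrow: the anchor is dropped and the start is replaced with an ellipsis.
print(ProgressFormatter.format_path(path, 25))  # …oobar/baz/some file name
print(ProgressFormatter.format_path(path, 15))  # …some file name
```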
diff --git a/tests/cli/utils/test_progress.py b/tests/cli/utils/test_progress.py --- a/tests/cli/utils/test_progress.py +++ b/tests/cli/utils/test_progress.py @@ -1,19 +1,29 @@ +from io import StringIO +from pathlib import PurePosixPath, PureWindowsPath from time import time +from unittest.mock import Mock, call, patch import freezegun import pytest from streamlink_cli.utils.progress import Progress, ProgressFormatter -from streamlink_cli.utils.terminal import TerminalOutput +from tests import posix_only, windows_only class TestProgressFormatter: @pytest.fixture(scope="class") def params(self): - return dict(written="WRITTEN", elapsed="ELAPSED", speed="SPEED") + return dict( + written="WRITTEN", + elapsed="ELAPSED", + speed="SPEED", + path=lambda *_: "PATH", + ) - @pytest.mark.parametrize("max_size,expected", [ - (99, "[download] Written WRITTEN (ELAPSED @ SPEED)"), + @pytest.mark.parametrize("term_width,expected", [ + (99, "[download] Written WRITTEN to PATH (ELAPSED @ SPEED)"), + (63, "[download] Written WRITTEN to PATH (ELAPSED @ SPEED)"), + (62, "[download] Written WRITTEN (ELAPSED @ SPEED)"), (44, "[download] Written WRITTEN (ELAPSED @ SPEED)"), (43, "[download] WRITTEN (ELAPSED @ SPEED)"), (36, "[download] WRITTEN (ELAPSED @ SPEED)"), @@ -22,19 +32,33 @@ def params(self): (27, "[download] WRITTEN"), (1, "[download] WRITTEN"), ]) - def test_format(self, max_size, params, expected): - assert ProgressFormatter.format(max_size, ProgressFormatter.FORMATS, **params) == expected - - @pytest.mark.parametrize("max_size,expected", [ - (99, "[download] Written WRITTEN (ELAPSED)"), + def test_format(self, params, term_width, expected): + with patch("streamlink_cli.utils.progress.ProgressFormatter.term_width", lambda: term_width): + assert ProgressFormatter.format(ProgressFormatter.FORMATS, params) == expected + + @pytest.mark.parametrize("term_width,expected", [ + (99, "[download] Written WRITTEN to PATH (ELAPSED)"), + (55, "[download] Written WRITTEN to PATH (ELAPSED)"), + (54, "[download] Written WRITTEN (ELAPSED)"), (36, "[download] Written WRITTEN (ELAPSED)"), (35, "[download] WRITTEN (ELAPSED)"), (28, "[download] WRITTEN (ELAPSED)"), (27, "[download] WRITTEN"), (1, "[download] WRITTEN"), ]) - def test_format_nospeed(self, max_size, params, expected): - assert ProgressFormatter.format(max_size, ProgressFormatter.FORMATS_NOSPEED, **params) == expected + def test_format_nospeed(self, params, term_width, expected): + with patch("streamlink_cli.utils.progress.ProgressFormatter.term_width", lambda: term_width): + assert ProgressFormatter.format(ProgressFormatter.FORMATS_NOSPEED, params) == expected + + def test_format_missing(self, params): + with patch("streamlink_cli.utils.progress.ProgressFormatter.term_width", lambda: 99): + assert ProgressFormatter.format(ProgressFormatter.FORMATS, {"written": "0"}) == "[download] 0" + + def test_format_error(self, params): + with patch("streamlink_cli.utils.progress.ProgressFormatter.term_width", lambda: 99): + params = dict(**params) + params["path"] = Mock(side_effect=ValueError("fail")) + assert ProgressFormatter.format(ProgressFormatter.FORMATS, params) == "[download] Written WRITTEN (ELAPSED @ SPEED)" @pytest.mark.parametrize("size,expected", [ (0, "0 bytes"), @@ -77,66 +101,151 @@ def test_format_filesize(self, size, expected): def test_format_time(self, elapsed, expected): assert ProgressFormatter.format_time(elapsed) == expected + _path_posix = PurePosixPath("/foobar/baz/some file name") + _path_windows = PureWindowsPath("C:\\foobar\\baz\\some file 
name") + _path_windows_unc = PureWindowsPath("\\\\?\\foobar\\baz\\some file name") + + @pytest.mark.parametrize("path,max_width,expected", [ + pytest.param(_path_posix, 26, "/foobar/baz/some file name", id="posix - full path"), + pytest.param(_path_posix, 25, "…oobar/baz/some file name", id="posix - truncated by 1"), + pytest.param(_path_posix, 24, "…obar/baz/some file name", id="posix - truncated by 2"), + pytest.param(_path_posix, 23, "…bar/baz/some file name", id="posix - truncated by 3"), + pytest.param(_path_posix, 22, "…ar/baz/some file name", id="posix - truncated by 4"), + pytest.param(_path_posix, 21, "…r/baz/some file name", id="posix - truncated by 5"), + pytest.param(_path_posix, 20, "…/baz/some file name", id="posix - truncated by 6"), + pytest.param(_path_posix, 19, "…baz/some file name", id="posix - truncated by 7 (cuts off separator)"), + pytest.param(_path_posix, 16, "…/some file name", id="posix - truncated (all parts except name)"), + pytest.param(_path_posix, 15, "…some file name", id="posix - truncated (name without separator)"), + pytest.param(_path_posix, 14, "…ome file name", id="posix - truncated name"), + pytest.param(_path_windows, 28, "C:\\foobar\\baz\\some file name", id="windows - full path"), + pytest.param(_path_windows, 27, "C:…oobar\\baz\\some file name", id="windows - truncated by 1"), + pytest.param(_path_windows, 26, "C:…obar\\baz\\some file name", id="windows - truncated by 2"), + pytest.param(_path_windows, 25, "C:…bar\\baz\\some file name", id="windows - truncated by 3"), + pytest.param(_path_windows, 24, "C:…ar\\baz\\some file name", id="windows - truncated by 4"), + pytest.param(_path_windows, 23, "C:…r\\baz\\some file name", id="windows - truncated by 5"), + pytest.param(_path_windows, 22, "C:…\\baz\\some file name", id="windows - truncated by 6"), + pytest.param(_path_windows, 21, "C:…baz\\some file name", id="windows - truncated by 7 (cuts off separator)"), + pytest.param(_path_windows, 18, "C:…\\some file name", id="windows - truncated (all parts except name)"), + pytest.param(_path_windows, 17, "C:…some file name", id="windows - truncated (name without separator)"), + pytest.param(_path_windows, 16, "C:…ome file name", id="windows - truncated name"), + pytest.param(_path_windows_unc, 29, "\\\\?\\foobar\\baz\\some file name", id="windows UNC - full path"), + pytest.param(_path_windows_unc, 28, "\\\\?\\…obar\\baz\\some file name", id="windows UNC - truncated by 1"), + pytest.param(_path_windows_unc, 20, "\\\\?\\…\\some file name", id="windows UNC - truncated (all parts except name)"), + pytest.param(_path_windows_unc, 19, "\\\\?\\…some file name", id="windows UNC - truncated (name without separator)"), + pytest.param(_path_windows_unc, 18, "\\\\?\\…ome file name", id="windows UNC - truncated name"), + ]) + def test_format_path(self, path, max_width, expected): + with patch("os.path.sep", "\\" if type(path) is PureWindowsPath else "/"): + assert ProgressFormatter.format_path(path, max_width) == expected -class FakeOutput(TerminalOutput): - def __init__(self): - super().__init__() - self.buffer = [] - - # noinspection PyMethodMayBeStatic - def term_width(self): - return 50 - def print_inplace(self, msg): - self.buffer.append(msg) +class TestWidth: + @pytest.mark.parametrize("chars,expected", [ + ("ABCDEFGHIJ", 10), + ("A你好世界こんにちは안녕하세요B", 30), + ("·「」『』【】-=!@#¥%……&×()", 30), + ]) + def test_width(self, chars, expected): + assert ProgressFormatter.width(chars) == expected - def end(self): # pragma: no cover - self.buffer.append("\n") + 
@pytest.mark.parametrize("prefix,max_len,expected", [ + ("你好世界こんにちは안녕하세요CD", 10, "녕하세요CD"), + ("你好世界こんにちは안녕하세요CD", 9, "하세요CD"), + ("你好世界こんにちは안녕하세요CD", 23, "こんにちは안녕하세요CD"), + ]) + def test_cut(self, prefix, max_len, expected): + assert ProgressFormatter.cut(prefix, max_len) == expected + + +class TestPrint: + @pytest.fixture(autouse=True) + def _terminal_size(self): + with patch("streamlink_cli.utils.progress.get_terminal_size") as mock_get_terminal_size: + mock_get_terminal_size.return_value = Mock(columns=10) + yield + + @pytest.fixture + def stream(self): + return StringIO() + + @pytest.fixture + def progress(self, stream: StringIO): + yield Progress(stream, Mock()) + + @posix_only + def test_print_posix(self, progress: Progress, stream: StringIO): + progress.print_inplace("foo") + progress.print_inplace("barbaz") + progress.print_inplace("0123456789") + progress.print_inplace("abcdefghijk") + progress.print_end() + assert stream.getvalue() == "\rfoo \rbarbaz \r0123456789\rabcdefghijk\n" + + @windows_only + def test_print_windows(self, progress: Progress, stream: StringIO): + progress.print_inplace("foo") + progress.print_inplace("barbaz") + progress.print_inplace("0123456789") + progress.print_inplace("abcdefghijk") + progress.print_end() + assert stream.getvalue() == "\rfoo \rbarbaz \r0123456789\rabcdefghijk\n" class TestProgress: def test_download_speed(self): kib = b"\x00" * 1024 - output = FakeOutput() + output_write = Mock() progress = Progress( - output=output, + Mock(write=output_write), + PurePosixPath("../../the/path/where/we/write/to"), interval=1, history=3, threshold=2, ) - with freezegun.freeze_time("2000-01-01T00:00:00Z") as frozen_time: + with freezegun.freeze_time("2000-01-01T00:00:00Z") as frozen_time, \ + patch("os.path.sep", "/"), \ + patch("streamlink_cli.utils.progress.ProgressFormatter.term_width", Mock(return_value=70)) as mock_width: progress.started = time() - assert not output.buffer + assert not output_write.call_args_list progress.update() - assert output.buffer[-1] == "[download] Written 0 bytes (0s)" + assert output_write.call_args_list[-1] \ + == call("\r[download] Written 0 bytes to ../../the/path/where/we/write/to (0s) ") frozen_time.tick() progress.put(kib * 1) progress.update() - assert output.buffer[-1] == "[download] Written 1.00 KiB (1s @ 1.00 KiB/s)" + assert output_write.call_args_list[-1] \ + == call("\r[download] Written 1.00 KiB to …th/where/we/write/to (1s @ 1.00 KiB/s)") frozen_time.tick() + mock_width.return_value = 65 progress.put(kib * 3) progress.update() - assert output.buffer[-1] == "[download] Written 4.00 KiB (2s @ 2.00 KiB/s)" + assert output_write.call_args_list[-1] \ + == call("\r[download] Written 4.00 KiB to …ere/we/write/to (2s @ 2.00 KiB/s)") frozen_time.tick() + mock_width.return_value = 60 progress.put(kib * 5) progress.update() - assert output.buffer[-1] == "[download] Written 9.00 KiB (3s @ 4.50 KiB/s)" + assert output_write.call_args_list[-1] \ + == call("\r[download] Written 9.00 KiB (3s @ 4.50 KiB/s) ") frozen_time.tick() progress.put(kib * 7) progress.update() - assert output.buffer[-1] == "[download] Written 16.00 KiB (4s @ 7.50 KiB/s)" + assert output_write.call_args_list[-1] \ + == call("\r[download] Written 16.00 KiB (4s @ 7.50 KiB/s) ") frozen_time.tick() progress.put(kib * 5) progress.update() - assert output.buffer[-1] == "[download] Written 21.00 KiB (5s @ 8.50 KiB/s)" + assert output_write.call_args_list[-1] \ + == call("\r[download] Written 21.00 KiB (5s @ 8.50 KiB/s) ") frozen_time.tick() progress.update() - 
assert output.buffer[-1] == "[download] Written 21.00 KiB (6s @ 6.00 KiB/s)" + assert output_write.call_args_list[-1] \ + == call("\r[download] Written 21.00 KiB (6s @ 6.00 KiB/s) ") diff --git a/tests/cli/utils/test_terminal.py b/tests/cli/utils/test_terminal.py deleted file mode 100644 --- a/tests/cli/utils/test_terminal.py +++ /dev/null @@ -1,60 +0,0 @@ -from io import StringIO - -import pytest - -from streamlink_cli.utils.terminal import TerminalOutput -from tests import posix_only, windows_only - - -class TestWidth: - @pytest.mark.parametrize("chars,expected", [ - ("ABCDEFGHIJ", 10), - ("A你好世界こんにちは안녕하세요B", 30), - ("·「」『』【】-=!@#¥%……&×()", 30), - ]) - def test_width(self, chars, expected): - assert TerminalOutput.width(chars) == expected - - @pytest.mark.parametrize("prefix,max_len,expected", [ - ("你好世界こんにちは안녕하세요CD", 10, "녕하세요CD"), - ("你好世界こんにちは안녕하세요CD", 9, "하세요CD"), - ("你好世界こんにちは안녕하세요CD", 23, "こんにちは안녕하세요CD"), - ]) - def test_cut(self, prefix, max_len, expected): - assert TerminalOutput.cut(prefix, max_len) == expected - - -class TestOutput: - @pytest.fixture(autouse=True) - def _terminal_size(self, monkeypatch: pytest.MonkeyPatch): - class TerminalSize: - columns = 10 - - terminalsize = TerminalSize() - monkeypatch.setattr("streamlink_cli.utils.terminal.get_terminal_size", lambda: terminalsize) - - @pytest.fixture - def stream(self): - return StringIO() - - @pytest.fixture - def output(self, stream: StringIO): - yield TerminalOutput(stream) - - @posix_only - def test_print_posix(self, output: TerminalOutput, stream: StringIO): - output.print_inplace("foo") - output.print_inplace("barbaz") - output.print_inplace("0123456789") - output.print_inplace("abcdefghijk") - output.end() - assert stream.getvalue() == "\rfoo \rbarbaz \r0123456789\rabcdefghijk\n" - - @windows_only - def test_print_windows(self, output: TerminalOutput, stream: StringIO): - output.print_inplace("foo") - output.print_inplace("barbaz") - output.print_inplace("0123456789") - output.print_inplace("abcdefghijk") - output.end() - assert stream.getvalue() == "\rfoo \rbarbaz \r0123456789\rabcdefghijk\n"
No filename displayed in HLS live stream downloads since 4.3.0 ### Checklist - [X] This is a bug report and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed bug reports](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22bug%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description In 4.2.0 and earlier versions, when downloading an HLS live stream, the destination filename was always displayed in the most recent progress line. But since 4.3.0, the filename is no longer displayed, only size/duration/speed. Like this: ![image](https://user-images.githubusercontent.com/48208459/185729234-dac6b527-ed89-4173-bd9e-1afe97b5a20c.png) For users who need to download multiple streams at the same time, this change might make it much harder to monitor the progress of different files. ### Debug log ```text N/A ```
This was removed intentionally, because it was showing a truncated file name instead of the full path which already gets logged. #4656 - cli: rewrite progress output (4.3.0) > Output format changes: > - Remove "prefix" from output format (full path already gets logged) #4336 - cli: log resolved file output path (3.2.0) > Streamlink currently doesn't log the full path when writing the output to a file. This is an issue when the output path is dynamically generated from the input argument via the stream's metadata, because the user doesn't know where the data was actually written to. The output path gets already logged if the file already exists, but that's not really helpful. > This was removed intentionally, because it was showing a truncated file name instead of the full path which already gets logged. > > #4656 - cli: rewrite progress output (4.3.0) > > > Output format changes: > > > > * Remove "prefix" from output format (full path already gets logged) > > #4336 - cli: log resolved file output path (3.2.0) > > > Streamlink currently doesn't log the full path when writing the output to a file. This is an issue when the output path is dynamically generated from the input argument via the stream's metadata, because the user doesn't know where the data was actually written to. The output path gets already logged if the file already exists, but that's not really helpful. Thank you for answering but I'm not sure I've fully understood what you mean - Would it be fixed or it's so far not considered as an issue to be adjusted? If not, how can I customize to let the filename displayed in the progress output? It's really important to me. :-[ > Would it be fixed or it's so far not considered as an issue to be adjusted? > how can I customize to let the filename displayed in the progress output You can't customize the progress output, at least not without modifying the code. Letting the user customize it via additional command line arguments - while possible and probably not even hard to implement due to the Formatter class that's already used for the `--output` stuff - would be a total overkill. The progress output is not meant as a stable interface for parsing, in case that's what you're asking for. And for humans, the old file name implementation was bad and was removed because of that (see below). Since you posted (a screenshot) of a debug log with download progress output, I just want to mention that the current implementation still has room for improvement, as it doesn't clear the progress output when the logger writes new lines, so lots of unnecessary progress output lines remain in between the log output. Both logging/output mechanisms work independently of each other while writing to the same stdout stream, which is the reason for this. ---- This is the output of 4.3.0, and as you can see, it's reduced to the actual useful information, while the full output path already gets logged (implemented in 3.2.0). This is important when the output path contains metadata variables or when the output file has a long name. ``` $ streamlink youtube.com/hospitalrecords best -o /dev/null [cli][info] Found matching plugin youtube for URL youtube.com/hospitalrecords [cli][info] Available streams: 144p (worst), 240p, 360p, 480p, 720p, 1080p (best) [cli][info] Opening stream: 1080p (hls) [cli][info] Writing output to /dev/null [download] Written 8.11 MiB (11s @ 720.81 KiB/s) ... 
$ streamlink youtube.com/hospitalrecords best -o '/tmp/records/{author} - {title}.ts' [cli][info] Found matching plugin youtube for URL youtube.com/hospitalrecords [cli][info] Available streams: 144p (worst), 240p, 360p, 480p, 720p, 1080p (best) [cli][info] Opening stream: 1080p (hls) [cli][info] Writing output to /tmp/records/Hospital Records - Drum & Bass Non-Stop Liquid - To Chill _ Relax Too.ts [download] Written 5.74 MiB (8s @ 733.96 KiB/s) ... ``` In the earlier versions, the same commands show this output, which is not useful at all: ``` $ streamlink youtube.com/hospitalrecords best -o /dev/null [cli][info] Found matching plugin youtube for URL youtube.com/hospitalrecords [cli][info] Available streams: 144p (worst), 240p, 360p, 480p, 720p, 1080p (best) [cli][info] Opening stream: 1080p (hls) [cli][info] Writing output to /dev/null [download][null] Written 4.5 MB (5s @ 772.5 KB/s) ... $ streamlink youtube.com/hospitalrecords best -o '/tmp/records/{author} - {title}.ts' [cli][info] Found matching plugin youtube for URL youtube.com/hospitalrecords [cli][info] Available streams: 144p (worst), 240p, 360p, 480p, 720p, 1080p (best) [cli][info] Opening stream: 1080p (hls) [cli][info] Writing output to /tmp/records/Hospital Records - Drum & Bass Non-Stop Liquid - To Chill _ Relax Too.ts [download][..To Chill _ Relax Too.ts] Written 4.4 MB (5s @ 764.2 KB/s) ... ``` > > Would it be fixed or it's so far not considered as an issue to be adjusted? > > how can I customize to let the filename displayed in the progress output > > You can't customize the progress output, at least not without modifying the code. Letting the user customize it via additional command line arguments - while possible and probably not even hard to implement due to the Formatter class that's already used for the `--output` stuff - would be a total overkill. The progress output is not meant as a stable interface for parsing, in case that's what you're asking for. And for humans, the old file name implementation was bad and was removed because of that (see below). > > Since you posted (a screenshot) of a debug log with download progress output, I just want to mention that the current implementation still has room for improvement, as it doesn't clear the progress output when the logger writes new lines, so lots of unnecessary progress output lines remain in between the log output. Both logging/output mechanisms work independently of each other while writing to the same stdout stream, which is the reason for this. > > This is the output of 4.3.0, and as you can see, it's reduced to the actual useful information, while the full output path already gets logged (implemented in 3.2.0). This is important when the output path contains metadata variables or when the output file has a long name. > > ``` > $ streamlink youtube.com/hospitalrecords best -o /dev/null > [cli][info] Found matching plugin youtube for URL youtube.com/hospitalrecords > [cli][info] Available streams: 144p (worst), 240p, 360p, 480p, 720p, 1080p (best) > [cli][info] Opening stream: 1080p (hls) > [cli][info] Writing output to > /dev/null > [download] Written 8.11 MiB (11s @ 720.81 KiB/s) > ... 
> > $ streamlink youtube.com/hospitalrecords best -o '/tmp/records/{author} - {title}.ts' > [cli][info] Found matching plugin youtube for URL youtube.com/hospitalrecords > [cli][info] Available streams: 144p (worst), 240p, 360p, 480p, 720p, 1080p (best) > [cli][info] Opening stream: 1080p (hls) > [cli][info] Writing output to > /tmp/records/Hospital Records - Drum & Bass Non-Stop Liquid - To Chill _ Relax Too.ts > [download] Written 5.74 MiB (8s @ 733.96 KiB/s) > ... > ``` > > In the earlier versions, the same commands show this output, which is not useful at all: > > ``` > $ streamlink youtube.com/hospitalrecords best -o /dev/null > [cli][info] Found matching plugin youtube for URL youtube.com/hospitalrecords > [cli][info] Available streams: 144p (worst), 240p, 360p, 480p, 720p, 1080p (best) > [cli][info] Opening stream: 1080p (hls) > [cli][info] Writing output to > /dev/null > [download][null] Written 4.5 MB (5s @ 772.5 KB/s) > ... > > $ streamlink youtube.com/hospitalrecords best -o '/tmp/records/{author} - {title}.ts' > [cli][info] Found matching plugin youtube for URL youtube.com/hospitalrecords > [cli][info] Available streams: 144p (worst), 240p, 360p, 480p, 720p, 1080p (best) > [cli][info] Opening stream: 1080p (hls) > [cli][info] Writing output to > /tmp/records/Hospital Records - Drum & Bass Non-Stop Liquid - To Chill _ Relax Too.ts > [download][..To Chill _ Relax Too.ts] Written 4.4 MB (5s @ 764.2 KB/s) > ... > ``` Again, thank you for your patient reply. However, as a long-time Streamlink user with little development experience, I would like to state some opinions based on my situation. As I said in the previous reply, displaying the file name in the progress output is very important for users who need to download multiple files at the same time. Once there is a network problem or some other, I can quickly find the source of the problem through the information. It may display in incomplete ways, but as long as it is distinguishable to the users from other concurrent files, it will always be useful, even it is not so in the eyes of the development team. Therefore, even if the development team thinks that it is not perfect at present, there are still users who need it. Second, while for open source community softwares like Streamlink, the development team is not obligated to meet the needs of all users, the deprecation of a feature that involves display/output might still should be considered a breaking update. Based on your previous reply, maybe this is just "throwing away a buggy and useless feature" for the development team. However, for users like me who were used to seeing it in the previous versions, I still felt very shocked when I didn't see this deprecation reminder in the Changelog, but found it abandoned during use. And, based on your last reply, maybe I can understand it this way: In Info-level output, the full filename record displayed before the download starts will not be flushed by the progress information in Debug-level, and it might possibly be a temporary solution to see filenames on the screen. But in contradiction to this, Streamlink is a software that is being developed at a high speed and with the help of community feedback. Users like me are used to enabling the Debug mode in default. It not only helps us to quickly locate problems, but also provides timely and useful information during feedback to you. So still, perhaps the ultimate and best solution for me, is to display filenames even in Debug mode. 
It might bother you to read such a long message, and I hope you will kindly excuse me for that, but please don't judge any of your work as "not useful at all" without listening to users first. We appreciate your team and your work as much as you do. > the deprecation of a feature that involves display/output might still should be considered a breaking update It is not, because it's not part of a stable interface. By that logic, we wouldn't be able to change or fix anything without bumping the major version in every release. The entire `streamlink_cli` module is probably not even considered stable apart from the available CLI arguments. But since I don't want to just decline your request and close the thread, I can take a look at appending the output path to the progress output, so that it can make use of all the available space (which it was probably not allowed to). > I can take a look at appending the output path to the progress output I appreciate your kindness and look forward to the upcoming updates. Thank you so much.
2022-08-22T12:34:12
streamlink/streamlink
4,766
streamlink__streamlink-4766
[ "4757" ]
38b5c76dd147c665ecc13d48a05b0230032f25fc
diff --git a/src/streamlink/plugins/rtve.py b/src/streamlink/plugins/rtve.py --- a/src/streamlink/plugins/rtve.py +++ b/src/streamlink/plugins/rtve.py @@ -127,6 +127,7 @@ def translate(cls, data: str) -> Iterator[Tuple[str, str]]: is_global=True, ) class Rtve(Plugin): + URL_M3U8 = "https://ztnr.rtve.es/ztnr/{id}.m3u8" URL_VIDEOS = "https://ztnr.rtve.es/ztnr/movil/thumbnail/rtveplayw/videos/{id}.png?q=v2" URL_SUBTITLES = "https://www.rtve.es/api/videos/{id}/subtitulos.json" @@ -145,6 +146,8 @@ def _get_streams(self): if not self.id: return + # check obfuscated stream URLs via self.URL_VIDEOS and ZTNR.translate() first + # self.URL_M3U8 appears to be valid for all streams, but doesn't provide any content in same cases urls = self.session.http.get( self.URL_VIDEOS.format(id=self.id), schema=validate.Schema( @@ -154,12 +157,16 @@ def _get_streams(self): ), ) - url = next((url for _, url in urls if urlparse(url).path.endswith(".m3u8")), None) - if not url: - url = next((url for _, url in urls if urlparse(url).path.endswith(".mp4")), None) - if url: - yield "vod", HTTPStream(self.session, url) - return + # then fall back to self.URL_M3U8 + if not urls: + url = self.URL_M3U8.format(id=self.id) + else: + url = next((url for _, url in urls if urlparse(url).path.endswith(".m3u8")), None) + if not url: + url = next((url for _, url in urls if urlparse(url).path.endswith(".mp4")), None) + if url: + yield "vod", HTTPStream(self.session, url) + return streams = HLSStream.parse_variant_playlist(self.session, url).items() @@ -173,8 +180,8 @@ def _get_streams(self): "items": [{ "lang": str, "src": validate.url(), - }] - } + }], + }, }, validate.get(("page", "items")), ),
plugins.rtve: Live streams not working. ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description As of today, live streams from RTVE are not working. ### Debug log ```text [cli][debug] OS: Windows 10 [cli][debug] Python: 3.10.6 [cli][debug] Streamlink: 4.3.0 [cli][debug] Dependencies: [cli][debug] isodate: 0.6.1 [cli][debug] lxml: 4.9.1 [cli][debug] pycountry: 22.3.5 [cli][debug] pycryptodome: 3.15.0 [cli][debug] PySocks: 1.7.1 [cli][debug] requests: 2.28.1 [cli][debug] websocket-client: 1.3.3 [cli][debug] Arguments: [cli][debug] url=https://www.rtve.es/play/videos/directo/la-1/ [cli][debug] stream=['best'] [cli][debug] --loglevel=debug [cli][debug] --ffmpeg-ffmpeg=C:\Program Files\Streamlink\ffmpeg\ffmpeg.exe [cli][info] Found matching plugin rtve for URL https://www.rtve.es/play/videos/directo/la-1/ error: No playable streams found on this URL: https://www.rtve.es/play/videos/directo/la-1/ ```
The plugin has just been rewritten recently in the 4.2.0 release (#4632) where it needed to parse stream URLs from a custom binary format in base64 encoded data disguised as a PNG image request. Now the site has completely changed again. From what it looks like, stream URLs can be requested via `https://ztnr.rtve.es/ztnr/{id}.m3u8` (the ID extraction of the current plugin implementation is still working) which then redirects to the actual stream URL. This yields the following static stream URLs for live streams (geo-blocked segment requests, even with Spanish proxy/VPN): - https://rtvelivestream.akamaized.net/segments/la1/la1_main.m3u8 - https://rtvelivestreamv6.akamaized.net/rtvesec/la2/la2_bkp.m3u8 - https://rtvelivestreamv6.akamaized.net/segments/24h/24h_main.m3u8 - https://rtvelivestreamv6.akamaized.net/segments/tdp/tdp_main.m3u8 - https://rtvelivestreamv6.akamaized.net/segments/clan/clan_main.m3u8 However, VOD content except for TV shows (I think) are still working fine with the current plugin implementation, eg: - https://www.rtve.es/play/videos/informe-semanal/la-semilla-de-la-guerra/6670279/
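A quick way to check the redirect behaviour described above is the probe sketch below (the content-ID argument is a placeholder that has to be replaced with an ID extracted from the page, and as noted, the segments themselves are geo-blocked):

```python
import requests


def resolve_rtve_playlist(content_id: str) -> str:
    # The ztnr endpoint redirects to the actual stream URL;
    # requests follows the redirect, so resp.url is the final playlist URL.
    resp = requests.get(
        f"https://ztnr.rtve.es/ztnr/{content_id}.m3u8",
        allow_redirects=True,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.url


# "<content-id>" is a placeholder, not a real ID
print(resolve_rtve_playlist("<content-id>"))
```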
2022-08-24T11:17:40
streamlink/streamlink
4,770
streamlink__streamlink-4770
[ "4769" ]
60e8724385023ea4ed59c103f1bed479fda4f84a
diff --git a/src/streamlink/plugins/albavision.py b/src/streamlink/plugins/albavision.py --- a/src/streamlink/plugins/albavision.py +++ b/src/streamlink/plugins/albavision.py @@ -93,9 +93,9 @@ def _get_live_url(self): schema = validate.Schema( validate.xml_xpath_string(".//script[contains(text(), 'LIVE_URL')]/text()"), validate.none_or_all( - re.compile(r"""LIVE_URL\s*=\s*(?P<q>['"])(.+?)(?P=q)"""), + re.compile(r"""LIVE_URL\s*=\s*(?P<q>['"])(?P<url>.+?)(?P=q)"""), validate.none_or_all( - validate.get(1), + validate.get("url"), validate.url(), ), ), @@ -108,9 +108,9 @@ def _get_token_req_url(self): schema = validate.Schema( validate.xml_xpath_string(".//script[contains(text(), 'LIVE_URL')]/text()"), validate.none_or_all( - re.compile(r"""jQuery\.get\s*\((?P<q>['"])(.+?)(?P=q)"""), + re.compile(r"""jQuery\.get\s*\((?P<q>['"])(?P<token>.+?)(?P=q)"""), validate.none_or_all( - validate.get(1), + validate.get("token"), validate.url(), ), ), @@ -121,8 +121,8 @@ def _get_token_req_url(self): schema = validate.Schema( validate.xml_xpath_string(".//script[contains(text(), 'LIVE_URL')]/text()"), validate.none_or_all( - re.compile(r"""Math\.floor\(Date\.now\(\)\s*/\s*3600000\),\s*(?P<q>['"])(.+?)(?P=q)"""), - validate.none_or_all(validate.get(1)), + re.compile(r"""Math\.floor\(Date\.now\(\)\s*/\s*3600000\),\s*(?P<q>['"])(?P<token>.+?)(?P=q)"""), + validate.none_or_all(validate.get("token")), ), ) token_req_str = schema.validate(self.page) diff --git a/src/streamlink/plugins/hiplayer.py b/src/streamlink/plugins/hiplayer.py --- a/src/streamlink/plugins/hiplayer.py +++ b/src/streamlink/plugins/hiplayer.py @@ -38,8 +38,11 @@ def _get_streams(self): validate.parse_html(), validate.xml_xpath_string(".//script[contains(text(), 'https://hiplayer.hibridcdn.net/l/')]/text()"), validate.none_or_all( - re.compile(r"""(?P<q>['"])(https://hiplayer.hibridcdn.net/l/.+?)(?P=q)"""), - validate.none_or_all(validate.get(1), validate.url()), + re.compile(r"""(?P<q>['"])(?P<url>https://hiplayer.hibridcdn.net/l/.+?)(?P=q)"""), + validate.none_or_all( + validate.get("url"), + validate.url(), + ), ), ), ) diff --git a/src/streamlink/plugins/htv.py b/src/streamlink/plugins/htv.py --- a/src/streamlink/plugins/htv.py +++ b/src/streamlink/plugins/htv.py @@ -80,8 +80,11 @@ def _get_streams(self): validate.parse_html(), validate.xml_xpath_string(".//script[contains(text(), 'playlist.m3u8')]/text()"), validate.none_or_all( - re.compile(r"""var\s+iosUrl\s*=\s*(?P<q>")(.+?)(?P=q)"""), - validate.none_or_all(validate.get(1), validate.url()), + re.compile(r"""var\s+iosUrl\s*=\s*(?P<q>")(?P<url>.+?)(?P=q)"""), + validate.none_or_all( + validate.get("url"), + validate.url(), + ), ), ), )
plugins.hiplayer not working ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description The plugin was added in https://github.com/streamlink/streamlink/pull/4507 and it has stopped working. Input URL are available on the pull request. Thanks ### Debug log ```text ~$ streamlink --version streamlink 4.3.0 ~$ ~$ streamlink 'https://rotana.net/live-clip' best --stream-url --loglevel debug error: Unable to validate response text: ValidationError(NoneOrAllSchema): ValidationError(NoneOrAllSchema): ValidationError(url): "'" is not a valid URL ~$ streamlink 'https://www.cnbcarabia.com/Pages/live' best --stream-url --loglevel debug error: Unable to validate response text: ValidationError(NoneOrAllSchema): ValidationError(NoneOrAllSchema): ValidationError(url): "'" is not a valid URL ~$ streamlink 'https://www.media.gov.kw/LiveTV.aspx?PanChannel=KTV1' best --stream-url --loglevel debug error: Unable to validate response text: ValidationError(NoneOrAllSchema): ValidationError(NoneOrAllSchema): ValidationError(url): "'" is not a valid URL ~$ ```
The issue got introduced by #4702 where some named capture groups were added to regular expressions, and then the indexed capture group getters were not updated respectively. The fix is simple, but I will have to take a look at the other plugins which were affected by this as well.
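To make the mismatch concrete, here is a minimal sketch with plain `re` (not the actual validate pipeline): once the named quote group sits in front, index `1` refers to the quote character instead of the URL, which is exactly the `"'" is not a valid URL` error shown above.

```python
import re

script = "var LIVE_URL = 'https://example.com/live.m3u8';"  # made-up page snippet

pattern = re.compile(r"""LIVE_URL\s*=\s*(?P<q>['"])(?P<url>.+?)(?P=q)""")
match = pattern.search(script)

print(match.group(1))      # '  <- the quote group, which validate.get(1) kept returning
print(match.group("url"))  # https://example.com/live.m3u8
```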
2022-08-26T12:49:33
streamlink/streamlink
4,779
streamlink__streamlink-4779
[ "4682" ]
a2876cbca74d0d53145aedfd47578126deae5d18
diff --git a/src/streamlink/plugins/okru.py b/src/streamlink/plugins/okru.py --- a/src/streamlink/plugins/okru.py +++ b/src/streamlink/plugins/okru.py @@ -1,12 +1,12 @@ """ -$description Russian live streaming and video hosting social platform. +$description Russian live-streaming and video hosting social platform. $url ok.ru $type live, vod """ import logging import re -from urllib.parse import unquote +from urllib.parse import unquote, urlparse, urlunparse from streamlink.plugin import Plugin, PluginError, pluginmatcher from streamlink.plugin.api import validate @@ -18,7 +18,7 @@ @pluginmatcher(re.compile( - r'https?://(?:www\.)?ok\.ru/' + r"https?://(?:\w+\.)?ok\.ru/" )) class OKru(Plugin): QUALITY_WEIGHTS = { @@ -42,6 +42,11 @@ def stream_weight(cls, key): return super().stream_weight(key) + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + parsed = urlparse(self.url) + self.url = urlunparse(parsed._replace(netloc=re.sub(r"^m(obile)?\.", "", parsed.netloc))) + def _get_streams(self): schema_metadata = validate.Schema( validate.parse_json(),
diff --git a/tests/plugins/test_okru.py b/tests/plugins/test_okru.py --- a/tests/plugins/test_okru.py +++ b/tests/plugins/test_okru.py @@ -1,3 +1,5 @@ +import pytest + from streamlink.plugins.okru import OKru from tests.plugins import PluginCanHandleUrl @@ -6,8 +8,20 @@ class TestPluginCanHandleUrlOKru(PluginCanHandleUrl): __plugin__ = OKru should_match = [ - 'https://ok.ru/live/12345', - 'http://ok.ru/live/12345', - 'http://www.ok.ru/live/12345', - 'https://ok.ru/video/266205792931', + "http://ok.ru/live/12345", + "https://ok.ru/live/12345", + "https://m.ok.ru/live/12345", + "https://mobile.ok.ru/live/12345", + "https://www.ok.ru/live/12345", + "https://ok.ru/video/266205792931", ] + + +class TestOKru: + @pytest.mark.parametrize("url,expected", [ + ("https://m.ok.ru/live/12345", "https://ok.ru/live/12345"), + ("https://mobile.ok.ru/live/12345", "https://ok.ru/live/12345"), + ]) + def test_url_mobile(self, url, expected): + plugin = OKru(url) + assert plugin.url == expected
mobile.ok.ru ### Checklist - [X] This is a plugin request and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin requests](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+request%22) ### Description it is the mobile version of ok.ru cuz it is banned here in Egypt ### Input URLs mobile.ok.ru
The video url is the one with data-objid="xxx" while xxx is the videoid You just need to remove "m." or "mobile." from the URL e.g ``` ~$ streamlink 'https://m.ok.ru/live/1366902775399' error: No plugin can handle URL: https://m.ok.ru/live/1366902775399 ~$ streamlink 'https://mobile.ok.ru/live/1366902775399' error: No plugin can handle URL: https://mobile.ok.ru/live/1366902775399 ~$ streamlink 'https://ok.ru/live/1366902775399' [cli][info] Found matching plugin okru for URL https://ok.ru/live/1366902775399 Available streams: 240p (worst), 360p, 480p, 720p (best) ``` @bastimeyer I tried updating the regex to allow the "m." or "mobile." urls to load with the okru plugin but it seems the extraction code needs to be updated for the mobile pages. Maybe the plugin can just remove the "m." or "mobile." from the URL? ``` ~$ streamlink 'https://mobile.ok.ru/live/1366902775399' [cli][info] Found matching plugin okru for URL https://mobile.ok.ru/live/1366902775399 [plugins.okru][error] Could not find metadata error: No playable streams found on this URL: https://mobile.ok.ru/live/1366902775399 ~$ streamlink 'https://m.ok.ru/live/1366902775399' [cli][info] Found matching plugin okru for URL https://m.ok.ru/live/1366902775399 [plugins.okru][error] Could not find metadata error: No playable streams found on this URL: https://m.ok.ru/live/1366902775399 ~$ ``` > but it seems the extraction code needs to be updated for the mobile pages > You just need to remove "m." or "mobile." from the URL Take a look at the YouTube plugin. There's logic for modifying the URL before the page gets accessed. https://github.com/streamlink/streamlink/blob/b7b0353b5168a713f4f265ceec432c734aee94cf/src/streamlink/plugins/youtube.py#L79-L91 That should do the trick. If you can confirm these changes, then I will submit a PR. ```diff diff --git a/src/streamlink/plugins/okru.py b/src/streamlink/plugins/okru.py index bcb1f699..39c90bd1 100644 --- a/src/streamlink/plugins/okru.py +++ b/src/streamlink/plugins/okru.py @@ -1,12 +1,12 @@ """ -$description Russian live streaming and video hosting social platform. +$description Russian live-streaming and video hosting social platform. 
$url ok.ru $type live, vod """ import logging import re -from urllib.parse import unquote +from urllib.parse import unquote, urlparse, urlunparse from streamlink.plugin import Plugin, PluginError, pluginmatcher from streamlink.plugin.api import validate @@ -18,7 +18,7 @@ log = logging.getLogger(__name__) @pluginmatcher(re.compile( - r'https?://(?:www\.)?ok\.ru/' + r"https?://(?:\w+\.)?ok\.ru/" )) class OKru(Plugin): QUALITY_WEIGHTS = { @@ -42,6 +42,11 @@ class OKru(Plugin): return super().stream_weight(key) + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + parsed = urlparse(self.url) + self.url = urlunparse(parsed._replace(netloc=re.sub(r"^m(obile)?\.", "", parsed.netloc))) + def _get_streams(self): schema_metadata = validate.Schema( validate.parse_json(), diff --git a/tests/plugins/test_okru.py b/tests/plugins/test_okru.py index f6aa00c1..acf47987 100644 --- a/tests/plugins/test_okru.py +++ b/tests/plugins/test_okru.py @@ -1,3 +1,5 @@ +import pytest + from streamlink.plugins.okru import OKru from tests.plugins import PluginCanHandleUrl @@ -6,8 +8,20 @@ class TestPluginCanHandleUrlOKru(PluginCanHandleUrl): __plugin__ = OKru should_match = [ - 'https://ok.ru/live/12345', - 'http://ok.ru/live/12345', - 'http://www.ok.ru/live/12345', - 'https://ok.ru/video/266205792931', + "http://ok.ru/live/12345", + "https://ok.ru/live/12345", + "https://m.ok.ru/live/12345", + "https://mobile.ok.ru/live/12345", + "https://www.ok.ru/live/12345", + "https://ok.ru/video/266205792931", ] + + +class TestOKru: + @pytest.mark.parametrize("url,expected", [ + ("https://m.ok.ru/live/12345", "https://ok.ru/live/12345"), + ("https://mobile.ok.ru/live/12345", "https://ok.ru/live/12345"), + ]) + def test_url_mobile(self, url, expected): + plugin = OKru(url) + assert plugin.url == expected ``` Thankyou @bastimeyer. It worked perfectly. ``` ~$ streamlink 'https://mobile.ok.ru/live/1366902775399' best [cli][info] Found matching plugin okru for URL https://mobile.ok.ru/live/1366902775399 [cli][info] Available streams: 240p (worst), 360p, 480p, 720p (best) [cli][info] Opening stream: 720p (hls) [cli][info] Starting player: /opt/homebrew/bin/VLC ^[[A[cli][info] Player closed [cli][info] Stream ended [cli][info] Closing currently open stream... ~$ streamlink 'https://m.ok.ru/live/1366902775399' best [cli][info] Found matching plugin okru for URL https://m.ok.ru/live/1366902775399 [cli][info] Available streams: 240p (worst), 360p, 480p, 720p (best) [cli][info] Opening stream: 720p (hls) [cli][info] Starting player: /opt/homebrew/bin/VLC ^[[A[cli][info] Player closed [cli][info] Stream ended [cli][info] Closing currently open stream... ~$ streamlink 'https://ok.ru/live/1366902775399' best [cli][info] Found matching plugin okru for URL https://ok.ru/live/1366902775399 [cli][info] Available streams: 240p (worst), 360p, 480p, 720p (best) [cli][info] Opening stream: 720p (hls) [cli][info] Starting player: /opt/homebrew/bin/VLC ^[[A[cli][info] Player closed [cli][info] Stream ended [cli][info] Closing currently open stream... ```
2022-08-27T17:53:33
streamlink/streamlink
4,780
streamlink__streamlink-4780
[ "4682" ]
9530604436730b2b850fcbefeee55cccc45a4d91
diff --git a/src/streamlink/plugins/okru.py b/src/streamlink/plugins/okru.py --- a/src/streamlink/plugins/okru.py +++ b/src/streamlink/plugins/okru.py @@ -1,14 +1,15 @@ """ $description Russian live-streaming and video hosting social platform. $url ok.ru +$url mobile.ok.ru $type live, vod """ import logging import re -from urllib.parse import unquote, urlparse, urlunparse +from urllib.parse import unquote, urlparse -from streamlink.plugin import Plugin, PluginError, pluginmatcher +from streamlink.plugin import Plugin, pluginmatcher from streamlink.plugin.api import validate from streamlink.stream.dash import DASHStream from streamlink.stream.hls import HLSStream @@ -17,9 +18,8 @@ log = logging.getLogger(__name__) -@pluginmatcher(re.compile( - r"https?://(?:\w+\.)?ok\.ru/" -)) +@pluginmatcher(re.compile(r"https?://(?:www\.)?ok\.ru/")) +@pluginmatcher(re.compile(r"https?://m(?:obile)?\.ok\.ru/")) class OKru(Plugin): QUALITY_WEIGHTS = { "full": 1080, @@ -42,12 +42,38 @@ def stream_weight(cls, key): return super().stream_weight(key) - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - parsed = urlparse(self.url) - self.url = urlunparse(parsed._replace(netloc=re.sub(r"^m(obile)?\.", "", parsed.netloc))) + def _get_streams_mobile(self): + data = self.session.http.get(self.url, schema=validate.Schema( + validate.parse_html(), + validate.xml_find(".//a[@data-video]"), + validate.get("data-video"), + validate.none_or_all( + str, + validate.parse_json(), + { + "videoName": str, + "videoSrc": validate.url(), + "movieId": str, + }, + validate.union_get("movieId", "videoName", "videoSrc"), + ), + )) + if not data: + return - def _get_streams(self): + self.id, self.title, url = data + + stream_url = self.session.http.head(url).headers.get("Location") + if not stream_url: + return + + return ( + HLSStream.parse_variant_playlist(self.session, stream_url) + if urlparse(stream_url).path.endswith(".m3u8") else + {"vod": HTTPStream(self.session, stream_url)} + ) + + def _get_streams_default(self): schema_metadata = validate.Schema( validate.parse_json(), { @@ -59,32 +85,28 @@ def _get_streams(self): validate.optional("videos"): [validate.all( { "name": str, - "url": validate.url() + "url": validate.url(), }, - validate.union_get("name", "url") - )] - } + validate.union_get("name", "url"), + )], + }, ) - try: - metadata, metadata_url = self.session.http.get(self.url, schema=validate.Schema( - validate.parse_html(), - validate.xml_find(".//*[@data-options]"), - validate.get("data-options"), - validate.parse_json(), - {"flashvars": { - validate.optional("metadata"): str, - validate.optional("metadataUrl"): validate.all( - validate.transform(unquote), - validate.url() - ) - }}, - validate.get("flashvars"), - validate.union_get("metadata", "metadataUrl") - )) - except PluginError: - log.error("Could not find metadata") - return + metadata, metadata_url = self.session.http.get(self.url, schema=validate.Schema( + validate.parse_html(), + validate.xml_find(".//*[@data-options]"), + validate.get("data-options"), + validate.parse_json(), + {"flashvars": { + validate.optional("metadata"): str, + validate.optional("metadataUrl"): validate.all( + validate.transform(unquote), + validate.url(), + ), + }}, + validate.get("flashvars"), + validate.union_get("metadata", "metadataUrl"), + )) self.session.http.headers.update({"Referer": self.url}) @@ -93,11 +115,7 @@ def _get_streams(self): log.trace(f"{metadata!r}") - try: - data = schema_metadata.validate(metadata) - except PluginError: - 
log.error("Could not parse metadata") - return + data = schema_metadata.validate(metadata) self.author = data.get("author") self.title = data.get("movie") @@ -114,5 +132,8 @@ def _get_streams(self): for name, url in data.get("videos", []) } + def _get_streams(self): + return self._get_streams_default() if self.matches[0] else self._get_streams_mobile() + __plugin__ = OKru
diff --git a/tests/plugins/test_okru.py b/tests/plugins/test_okru.py --- a/tests/plugins/test_okru.py +++ b/tests/plugins/test_okru.py @@ -1,5 +1,3 @@ -import pytest - from streamlink.plugins.okru import OKru from tests.plugins import PluginCanHandleUrl @@ -15,13 +13,3 @@ class TestPluginCanHandleUrlOKru(PluginCanHandleUrl): "https://www.ok.ru/live/12345", "https://ok.ru/video/266205792931", ] - - -class TestOKru: - @pytest.mark.parametrize("url,expected", [ - ("https://m.ok.ru/live/12345", "https://ok.ru/live/12345"), - ("https://mobile.ok.ru/live/12345", "https://ok.ru/live/12345"), - ]) - def test_url_mobile(self, url, expected): - plugin = OKru(url) - assert plugin.url == expected
mobile.ok.ru ### Checklist - [X] This is a plugin request and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin requests](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+request%22) ### Description it is the mobile version of ok.ru cuz it is banned here in Egypt ### Input URLs mobile.ok.ru
The video url is the one with data-objid="xxx" while xxx is the videoid You just need to remove "m." or "mobile." from the URL e.g ``` ~$ streamlink 'https://m.ok.ru/live/1366902775399' error: No plugin can handle URL: https://m.ok.ru/live/1366902775399 ~$ streamlink 'https://mobile.ok.ru/live/1366902775399' error: No plugin can handle URL: https://mobile.ok.ru/live/1366902775399 ~$ streamlink 'https://ok.ru/live/1366902775399' [cli][info] Found matching plugin okru for URL https://ok.ru/live/1366902775399 Available streams: 240p (worst), 360p, 480p, 720p (best) ``` @bastimeyer I tried updating the regex to allow the "m." or "mobile." urls to load with the okru plugin but it seems the extraction code needs to be updated for the mobile pages. Maybe the plugin can just remove the "m." or "mobile." from the URL? ``` ~$ streamlink 'https://mobile.ok.ru/live/1366902775399' [cli][info] Found matching plugin okru for URL https://mobile.ok.ru/live/1366902775399 [plugins.okru][error] Could not find metadata error: No playable streams found on this URL: https://mobile.ok.ru/live/1366902775399 ~$ streamlink 'https://m.ok.ru/live/1366902775399' [cli][info] Found matching plugin okru for URL https://m.ok.ru/live/1366902775399 [plugins.okru][error] Could not find metadata error: No playable streams found on this URL: https://m.ok.ru/live/1366902775399 ~$ ``` > but it seems the extraction code needs to be updated for the mobile pages > You just need to remove "m." or "mobile." from the URL Take a look at the YouTube plugin. There's logic for modifying the URL before the page gets accessed. https://github.com/streamlink/streamlink/blob/b7b0353b5168a713f4f265ceec432c734aee94cf/src/streamlink/plugins/youtube.py#L79-L91 That should do the trick. If you can confirm these changes, then I will submit a PR. ```diff diff --git a/src/streamlink/plugins/okru.py b/src/streamlink/plugins/okru.py index bcb1f699..39c90bd1 100644 --- a/src/streamlink/plugins/okru.py +++ b/src/streamlink/plugins/okru.py @@ -1,12 +1,12 @@ """ -$description Russian live streaming and video hosting social platform. +$description Russian live-streaming and video hosting social platform. 
$url ok.ru $type live, vod """ import logging import re -from urllib.parse import unquote +from urllib.parse import unquote, urlparse, urlunparse from streamlink.plugin import Plugin, PluginError, pluginmatcher from streamlink.plugin.api import validate @@ -18,7 +18,7 @@ log = logging.getLogger(__name__) @pluginmatcher(re.compile( - r'https?://(?:www\.)?ok\.ru/' + r"https?://(?:\w+\.)?ok\.ru/" )) class OKru(Plugin): QUALITY_WEIGHTS = { @@ -42,6 +42,11 @@ class OKru(Plugin): return super().stream_weight(key) + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + parsed = urlparse(self.url) + self.url = urlunparse(parsed._replace(netloc=re.sub(r"^m(obile)?\.", "", parsed.netloc))) + def _get_streams(self): schema_metadata = validate.Schema( validate.parse_json(), diff --git a/tests/plugins/test_okru.py b/tests/plugins/test_okru.py index f6aa00c1..acf47987 100644 --- a/tests/plugins/test_okru.py +++ b/tests/plugins/test_okru.py @@ -1,3 +1,5 @@ +import pytest + from streamlink.plugins.okru import OKru from tests.plugins import PluginCanHandleUrl @@ -6,8 +8,20 @@ class TestPluginCanHandleUrlOKru(PluginCanHandleUrl): __plugin__ = OKru should_match = [ - 'https://ok.ru/live/12345', - 'http://ok.ru/live/12345', - 'http://www.ok.ru/live/12345', - 'https://ok.ru/video/266205792931', + "http://ok.ru/live/12345", + "https://ok.ru/live/12345", + "https://m.ok.ru/live/12345", + "https://mobile.ok.ru/live/12345", + "https://www.ok.ru/live/12345", + "https://ok.ru/video/266205792931", ] + + +class TestOKru: + @pytest.mark.parametrize("url,expected", [ + ("https://m.ok.ru/live/12345", "https://ok.ru/live/12345"), + ("https://mobile.ok.ru/live/12345", "https://ok.ru/live/12345"), + ]) + def test_url_mobile(self, url, expected): + plugin = OKru(url) + assert plugin.url == expected ``` Thankyou @bastimeyer. It worked perfectly. ``` ~$ streamlink 'https://mobile.ok.ru/live/1366902775399' best [cli][info] Found matching plugin okru for URL https://mobile.ok.ru/live/1366902775399 [cli][info] Available streams: 240p (worst), 360p, 480p, 720p (best) [cli][info] Opening stream: 720p (hls) [cli][info] Starting player: /opt/homebrew/bin/VLC ^[[A[cli][info] Player closed [cli][info] Stream ended [cli][info] Closing currently open stream... ~$ streamlink 'https://m.ok.ru/live/1366902775399' best [cli][info] Found matching plugin okru for URL https://m.ok.ru/live/1366902775399 [cli][info] Available streams: 240p (worst), 360p, 480p, 720p (best) [cli][info] Opening stream: 720p (hls) [cli][info] Starting player: /opt/homebrew/bin/VLC ^[[A[cli][info] Player closed [cli][info] Stream ended [cli][info] Closing currently open stream... ~$ streamlink 'https://ok.ru/live/1366902775399' best [cli][info] Found matching plugin okru for URL https://ok.ru/live/1366902775399 [cli][info] Available streams: 240p (worst), 360p, 480p, 720p (best) [cli][info] Opening stream: 720p (hls) [cli][info] Starting player: /opt/homebrew/bin/VLC ^[[A[cli][info] Player closed [cli][info] Stream ended [cli][info] Closing currently open stream... ``` I don't think the proposed solution will do what the OP was asking for. I don't know how Egypt is enabling its blocking - whether via transparent proxy or DNS, but either way it seems likely that the proposed solution will still be blocked for the OP. True. I didn't check OP's comment... :/ I only have that one link posted in https://github.com/streamlink/streamlink/issues/4682#issuecomment-1229226006 though. The site requires having an account, and I don't want to create one. 
The HLS URL, video ID and video title can be found in the JSON data of this XPath query `string(.//a[@data-video][1]/@data-video)`, so implementing the mobile site is trivial if the same layout gets used everywhere else.
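As a rough standalone illustration of that lookup (plain lxml + json, with made-up placeholder markup; the JSON keys follow the schema used in the patch above):

```python
import json
from lxml.html import fromstring

# Placeholder for the mobile page HTML.
page = """<html><body>
<a data-video='{"movieId": "12345", "videoName": "Some stream", "videoSrc": "https://example.com/playlist.m3u8"}'>Watch</a>
</body></html>"""

doc = fromstring(page)
data_video = doc.xpath("string(.//a[@data-video][1]/@data-video)")
if data_video:
    data = json.loads(data_video)
    print(data["movieId"], data["videoName"], data["videoSrc"])
```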
2022-08-27T21:06:07
streamlink/streamlink
4,830
streamlink__streamlink-4830
[ "4829" ]
64e8be7addc58f77cf508549804ad018769a6a9b
diff --git a/src/streamlink_cli/utils/progress.py b/src/streamlink_cli/utils/progress.py --- a/src/streamlink_cli/utils/progress.py +++ b/src/streamlink_cli/utils/progress.py @@ -200,7 +200,7 @@ def format_path(cls, path: PurePath, max_width: int) -> str: max_width -= cls.width(path.drive) + cls.width(cls.ELLIPSIS) # Ignore the path's first part, aka the "anchor" (drive + root) - parts = os.path.sep.join(path.parts[1:]) + parts = os.path.sep.join(path.parts[1:] if path.drive else path.parts) truncated = cls.cut(parts, max_width) return f"{path.drive}{cls.ELLIPSIS}{truncated}"
diff --git a/tests/cli/utils/test_progress.py b/tests/cli/utils/test_progress.py --- a/tests/cli/utils/test_progress.py +++ b/tests/cli/utils/test_progress.py @@ -102,7 +102,8 @@ def test_format_time(self, elapsed, expected): assert ProgressFormatter.format_time(elapsed) == expected _path_posix = PurePosixPath("/foobar/baz/some file name") - _path_windows = PureWindowsPath("C:\\foobar\\baz\\some file name") + _path_windows_abs = PureWindowsPath("C:\\foobar\\baz\\some file name") + _path_windows_rel = PureWindowsPath("foobar\\baz\\some file name") _path_windows_unc = PureWindowsPath("\\\\?\\foobar\\baz\\some file name") @pytest.mark.parametrize("path,max_width,expected", [ @@ -117,17 +118,27 @@ def test_format_time(self, elapsed, expected): pytest.param(_path_posix, 16, "…/some file name", id="posix - truncated (all parts except name)"), pytest.param(_path_posix, 15, "…some file name", id="posix - truncated (name without separator)"), pytest.param(_path_posix, 14, "…ome file name", id="posix - truncated name"), - pytest.param(_path_windows, 28, "C:\\foobar\\baz\\some file name", id="windows - full path"), - pytest.param(_path_windows, 27, "C:…oobar\\baz\\some file name", id="windows - truncated by 1"), - pytest.param(_path_windows, 26, "C:…obar\\baz\\some file name", id="windows - truncated by 2"), - pytest.param(_path_windows, 25, "C:…bar\\baz\\some file name", id="windows - truncated by 3"), - pytest.param(_path_windows, 24, "C:…ar\\baz\\some file name", id="windows - truncated by 4"), - pytest.param(_path_windows, 23, "C:…r\\baz\\some file name", id="windows - truncated by 5"), - pytest.param(_path_windows, 22, "C:…\\baz\\some file name", id="windows - truncated by 6"), - pytest.param(_path_windows, 21, "C:…baz\\some file name", id="windows - truncated by 7 (cuts off separator)"), - pytest.param(_path_windows, 18, "C:…\\some file name", id="windows - truncated (all parts except name)"), - pytest.param(_path_windows, 17, "C:…some file name", id="windows - truncated (name without separator)"), - pytest.param(_path_windows, 16, "C:…ome file name", id="windows - truncated name"), + pytest.param(_path_windows_abs, 28, "C:\\foobar\\baz\\some file name", id="windows abs - full path"), + pytest.param(_path_windows_abs, 27, "C:…oobar\\baz\\some file name", id="windows abs - truncated by 1"), + pytest.param(_path_windows_abs, 26, "C:…obar\\baz\\some file name", id="windows abs - truncated by 2"), + pytest.param(_path_windows_abs, 25, "C:…bar\\baz\\some file name", id="windows abs - truncated by 3"), + pytest.param(_path_windows_abs, 24, "C:…ar\\baz\\some file name", id="windows abs - truncated by 4"), + pytest.param(_path_windows_abs, 23, "C:…r\\baz\\some file name", id="windows abs - truncated by 5"), + pytest.param(_path_windows_abs, 22, "C:…\\baz\\some file name", id="windows abs - truncated by 6"), + pytest.param(_path_windows_abs, 21, "C:…baz\\some file name", id="windows abs - truncated by 7 (cuts off separator)"), + pytest.param(_path_windows_abs, 18, "C:…\\some file name", id="windows abs - truncated (all parts except name)"), + pytest.param(_path_windows_abs, 17, "C:…some file name", id="windows abs - truncated (name without separator)"), + pytest.param(_path_windows_abs, 16, "C:…ome file name", id="windows abs - truncated name"), + pytest.param(_path_windows_rel, 25, "foobar\\baz\\some file name", id="windows rel - full path"), + pytest.param(_path_windows_rel, 24, "…obar\\baz\\some file name", id="windows rel - truncated by 1"), + pytest.param(_path_windows_rel, 23, "…bar\\baz\\some 
file name", id="windows rel - truncated by 2"), + pytest.param(_path_windows_rel, 22, "…ar\\baz\\some file name", id="windows rel - truncated by 3"), + pytest.param(_path_windows_rel, 21, "…r\\baz\\some file name", id="windows rel - truncated by 4"), + pytest.param(_path_windows_rel, 20, "…\\baz\\some file name", id="windows rel - truncated by 5"), + pytest.param(_path_windows_rel, 19, "…baz\\some file name", id="windows rel - truncated by 6 (cuts off separator)"), + pytest.param(_path_windows_rel, 16, "…\\some file name", id="windows rel - truncated (all parts except name)"), + pytest.param(_path_windows_rel, 15, "…some file name", id="windows rel - truncated (name without separator)"), + pytest.param(_path_windows_rel, 14, "…ome file name", id="windows rel - truncated name"), pytest.param(_path_windows_unc, 29, "\\\\?\\foobar\\baz\\some file name", id="windows UNC - full path"), pytest.param(_path_windows_unc, 28, "\\\\?\\…obar\\baz\\some file name", id="windows UNC - truncated by 1"), pytest.param(_path_windows_unc, 20, "\\\\?\\…\\some file name", id="windows UNC - truncated (all parts except name)"),
cli.utils.progress: relative paths on Windows don't get truncated correctly ### Checklist - [X] This is a bug report and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed bug reports](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22bug%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description Hello again, I'm the submitter of #4752 and thank you so much for improving it as #4764 said. But after I updated it to 5.0.0, I found that it doesn't behave as your screen record https://github.com/streamlink/streamlink/pull/4764#issuecomment-1223952285 showed. I recorded some clips in debug / default loglevel in PowerShell / CMD. Here are the clips: https://user-images.githubusercontent.com/48208459/190840790-f2053d51-9f83-4866-9f24-719364f88352.mp4 https://user-images.githubusercontent.com/48208459/190840793-f376856c-c3d1-4dc4-8399-09e55251035b.mp4 https://user-images.githubusercontent.com/48208459/190840796-1130d9d9-2bc9-4baf-ab16-ada8a905cfcd.mp4 https://user-images.githubusercontent.com/48208459/190840799-b7fb2fca-13ed-45d8-a620-ea5b2fb19e03.mp4 You can see that in Windows it shows either "..." or the full filename and can't show the last strings to fill the row as your record did. And when showing "..." it shows several times a second even in default loglevel. ### Debug log ```text Please refer to the screen records above. ```
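The likely cause can be illustrated directly with `pathlib` (minimal sketch, mirroring the paths from the test changes above): a relative Windows path has no drive, so its first `parts` entry is a real directory rather than an anchor, and unconditionally skipping it loses that directory during truncation.

```python
from pathlib import PureWindowsPath

absolute = PureWindowsPath("C:\\foobar\\baz\\some file name")
relative = PureWindowsPath("foobar\\baz\\some file name")

print(absolute.drive, absolute.parts)  # 'C:' ('C:\\', 'foobar', 'baz', 'some file name')
print(relative.drive, relative.parts)  # ''   ('foobar', 'baz', 'some file name')

# Skipping parts[0] is only safe when a drive anchor exists, hence the
# "parts[1:] if path.drive else parts" change in the patch.
```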
2022-09-17T10:45:02
streamlink/streamlink
4,839
streamlink__streamlink-4839
[ "4837" ]
e907538896397a1920e55682ea784cab3c1ef92f
diff --git a/src/streamlink/plugins/mitele.py b/src/streamlink/plugins/mitele.py --- a/src/streamlink/plugins/mitele.py +++ b/src/streamlink/plugins/mitele.py @@ -45,11 +45,17 @@ def _get_streams(self): [{ "drm": bool, "format": str, - "stream": validate.url(), - "lid": validate.all(int, validate.transform(str)), + "stream": validate.all( + validate.transform(str.strip), + validate.url(), + ), + "lid": validate.all( + int, + validate.transform(str), + ), validate.optional("assetKey"): str, }], - validate.filter(lambda obj: obj["format"] == "hls") + validate.filter(lambda obj: obj["format"] == "hls"), ), }, ), @@ -86,7 +92,7 @@ def _get_streams(self): {"code": int}, validate.all( {"tokens": {str: {"cdn": str}}}, - validate.get("tokens") + validate.get("tokens"), ), ), ),
plugins.mitele: Unable to validate value of key 'stream' ### Checklist - [X] This is a bug report and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed bug reports](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22bug%22) - [x] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description Only this error with one url (bemad). ### Debug log ```text [cli][debug] OS: Windows 10 [cli][debug] Python: 3.10.7 [cli][debug] Streamlink: 5.0.0 [cli][debug] Dependencies: [cli][debug] isodate: 0.6.1 [cli][debug] lxml: 4.9.1 [cli][debug] pycountry: 22.3.5 [cli][debug] pycryptodome: 3.15.0 [cli][debug] PySocks: 1.7.1 [cli][debug] requests: 2.28.1 [cli][debug] websocket-client: 1.4.1 [cli][debug] Arguments: [cli][debug] url=https://mitele.es/directo/bemad [cli][debug] stream=['best'] [cli][debug] --loglevel=debug [cli][debug] --ffmpeg-ffmpeg=C:\Program Files\Streamlink\ffmpeg\ffmpeg.exe [cli][info] Found matching plugin mitele for URL https://mitele.es/directo/bemad error: Unable to validate response text: ValidationError(AnySchema): ValidationError(dict): Key 'code' not found in <{'cerbero': 'https://cerbero.pro.3d99a4cb092e6d20ff56d8...> ValidationError(dict): Unable to validate value of key 'dls' Context(AnySchema): ValidationError(dict): Unable to validate value of key 'stream' Context(url): <'\u2002https://directos.mitele.es/orilinear02/live/line...> is not a valid URL ```
> `<'\u2002https://directos.mitele.es/orilinear02/live/line...)> is not a valid URL` Why is there a `U+2002` character at the beginning of one of the stream URLs?!?! weird... https://www.fileformat.info/info/unicode/char/2002/index.htm That diff should fix the validation error ```diff diff --git a/src/streamlink/plugins/mitele.py b/src/streamlink/plugins/mitele.py index 0a9f2f39..d59cfaa0 100644 --- a/src/streamlink/plugins/mitele.py +++ b/src/streamlink/plugins/mitele.py @@ -45,7 +45,11 @@ class Mitele(Plugin): [{ "drm": bool, "format": str, - "stream": validate.url(), + "stream": validate.all( + validate.transform(str.lstrip), + validate.url(), + ), "lid": validate.all(int, validate.transform(str)), validate.optional("assetKey"): str, }], ``` but the site is responding with error code `40313` when trying to acquire the streaming token afterwards. This might be an issue with geo-blocking. My VPN unfortunately gets blocked by the site, so if you could apply the diff and report back if it's working, that would be appreciated, so I can open a PR with the fix. > > `<'\u2002https://directos.mitele.es/orilinear02/live/line...)> is not a valid URL` > > Why is there a `U+2002` character at the beginning of one of the stream URLs?!?! weird... https://www.fileformat.info/info/unicode/char/2002/index.htm > > That diff should fix the validation error > > ```diff > diff --git a/src/streamlink/plugins/mitele.py b/src/streamlink/plugins/mitele.py > index 0a9f2f39..d59cfaa0 100644 > --- a/src/streamlink/plugins/mitele.py > +++ b/src/streamlink/plugins/mitele.py > @@ -45,7 +45,11 @@ class Mitele(Plugin): > [{ > "drm": bool, > "format": str, > - "stream": validate.url(), > + "stream": validate.all( > + validate.transform(str.lstrip), > + validate.url(), > + ), > "lid": validate.all(int, validate.transform(str)), > validate.optional("assetKey"): str, > }], > ``` > > but the site is responding with error code `40313` when trying to acquire the streaming token afterwards. This might be an issue with geo-blocking. My VPN unfortunately gets blocked by the site, so if you could apply the diff and report back if it's working, that would be appreciated, so I can open a PR with the fix. From Spain works correctly, thanks.
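For reference, the offending `U+2002` (EN SPACE) is regular Unicode whitespace as far as Python is concerned, so a simple strip-transform is enough before URL validation. A quick sketch with a placeholder URL:

```python
url = "\u2002https://example.com/playlist.m3u8"  # placeholder URL with a leading EN SPACE

print("\u2002".isspace())               # True  - it counts as whitespace
print(url.startswith("https"))          # False - the leading character breaks validation
print(url.strip().startswith("https"))  # True  - str.strip()/str.lstrip() removes it
```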
2022-09-20T14:48:00
streamlink/streamlink
4,840
streamlink__streamlink-4840
[ "4838" ]
72f83c74f067ff25b6abe1b66f7408c05376f423
diff --git a/src/streamlink/plugins/rtve.py b/src/streamlink/plugins/rtve.py --- a/src/streamlink/plugins/rtve.py +++ b/src/streamlink/plugins/rtve.py @@ -12,7 +12,7 @@ from typing import Iterator, Sequence, Tuple from urllib.parse import urlparse -from streamlink.plugin import Plugin, pluginargument, pluginmatcher +from streamlink.plugin import Plugin, PluginError, pluginargument, pluginmatcher from streamlink.plugin.api import validate from streamlink.stream.ffmpegmux import MuxedStream from streamlink.stream.hls import HLSStream @@ -29,7 +29,7 @@ def __init__(self, data: str): def _iterate(): while True: chunk = stream.read(1) - if len(chunk) == 0: # pragma: no cover + if len(chunk) == 0: return yield ord(chunk) @@ -39,7 +39,7 @@ def read(self, num: int) -> Sequence[int]: res = [] for _ in range(num): item = next(self._iterator, None) - if item is None: # pragma: no cover + if item is None: break res.append(item) return res @@ -63,6 +63,14 @@ def read_chunk(self) -> Tuple[str, Sequence[int]]: self.skip(4) return chunktype, chunkdata + def __iter__(self): + self.skip(8) + while True: + try: + yield self.read_chunk() + except ValueError: + return + class ZTNR: @staticmethod @@ -106,17 +114,16 @@ def _get_source(cls, alphabet: str, data: str) -> str: @classmethod def translate(cls, data: str) -> Iterator[Tuple[str, str]]: reader = Base64Reader(data.replace("\n", "")) - reader.skip(8) - chunk_type, chunk_data = reader.read_chunk() - while chunk_type != "IEND": + for chunk_type, chunk_data in reader: + if chunk_type == "IEND": + break if chunk_type == "tEXt": content = "".join(chr(item) for item in chunk_data if item > 0) - if "#" not in content or "%%" not in content: # pragma: no cover + if "#" not in content or "%%" not in content: continue alphabet, content = content.split("#", 1) quality, content = content.split("%%", 1) yield quality, cls._get_source(alphabet, content) - chunk_type, chunk_data = reader.read_chunk() @pluginmatcher(re.compile( @@ -147,18 +154,19 @@ def _get_streams(self): return # check obfuscated stream URLs via self.URL_VIDEOS and ZTNR.translate() first - # self.URL_M3U8 appears to be valid for all streams, but doesn't provide any content in same cases - urls = self.session.http.get( - self.URL_VIDEOS.format(id=self.id), - schema=validate.Schema( - validate.transform(ZTNR.translate), - validate.transform(list), - [(str, validate.url())], - ), - ) - - # then fall back to self.URL_M3U8 - if not urls: + # self.URL_M3U8 appears to be valid for all streams, but doesn't provide any content in some cases + try: + urls = self.session.http.get( + self.URL_VIDEOS.format(id=self.id), + schema=validate.Schema( + validate.transform(ZTNR.translate), + validate.transform(list), + [(str, validate.url())], + validate.length(1), + ), + ) + except PluginError: + # catch HTTP errors and validation errors, and fall back to generic HLS URL template url = self.URL_M3U8.format(id=self.id) else: url = next((url for _, url in urls if urlparse(url).path.endswith(".m3u8")), None)
diff --git a/tests/plugins/test_rtve.py b/tests/plugins/test_rtve.py --- a/tests/plugins/test_rtve.py +++ b/tests/plugins/test_rtve.py @@ -8,7 +8,7 @@ class TestPluginCanHandleUrlRtve(PluginCanHandleUrl): should_match = [ "https://www.rtve.es/play/videos/directo/la-1/", "https://www.rtve.es/play/videos/directo/canales-lineales/24h/", - "https://www.rtve.es/play/videos/rebelion-en-el-reino-salvaje/mata-reyes/5803959/", + "https://www.rtve.es/play/videos/informe-semanal/la-semilla-de-la-guerra/6670279/", ] should_not_match = [ @@ -21,7 +21,28 @@ class TestPluginCanHandleUrlRtve(PluginCanHandleUrl): ] -def test_translate(): +def test_translate_no_content(): + assert list(ZTNR.translate("")) == [] + + +def test_translate_no_streams(): + # real payload without any tEXt chunks that match the expected format + data = \ + "iVBORw0KGgoAAAANSUhEUgAAAsAAAAGMAQMAAADuk4YmAAAAA1BMVEX///+nxBvIAAAAAXRSTlMA" \ + "QObYZgAAADlJREFUeF7twDEBAAAAwiD7p7bGDlgYAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA" \ + "AAAAAAAAwAGJrAABgPqdWQAAAcp0RVh0ak9lNmRyNkUtV2hmeEE0dERMdS9FOTlCT2d3MF9HMDdG" \ + "RmxQNy1ZLTdFOFRac0MmbD93VEp5SENvUUlseVY1bjdrYmF2ZkhUUjc4aTBHAEBxY08zdk4yYldE" \ + "bm09TDVaNGMyVVpNdklVbS5LVUNCUTdZNVpfSUZMVmRMNlN0VE14TmFPLUFGaTF6ai9YenE6PVg9" \ + "dnJBb3BFU3BBJlpoWFViSER3MCZxbj9AS0d1Si5OSnAudiMwMTYxNDA2MDU2NjcyMDE3MzI4NTcw" \ + "ODcwMDc3MDI3NjIwNjczMTA0ODEyNDY3MzMwNzgxMDQwMTE4NzQ4MDYwMjIwODgxNTI0ODEzNjQ4" \ + "MjU0MTEyMzEyNjUxMzc2NTM3MTMzNzgwNTYwNDE0NjI4NDM1NjIzNTA1MTAxNjYwMDExNzE4MDQx" \ + "MTc3MDMxNTQ2MDEzNDUwMDQ2MTg4MDgwNzMxNDM3MjgwMDQ4NDA3Mzg0MzYxODA0NjU0NDYzMTY1" \ + "NDIxMzY4ODAzNTQ3MjMyMjYzODUwMzY5MTE3MTMwOTMzMjAwNDg1MDExNTE4MTgxMTgwMTAwNjU0" \ + "NTg1MzcxNDQ5MDM5MzY2ODMxNTc0MjUyNDVZsdrfAAAAAElFTkSuQmCC" + assert list(ZTNR.translate(data)) == [] + + +def test_translate_has_streams(): # real payload with modified end (IEND chunk of size 0), to reduce test size data = \ "iVBORw0KGgoAAAANSUhEUgAAAVQAAAFUCAIAAAD08FPiAAACr3RFWHRXczlVSWdtM2ZPTGY4b2R4" \
plugins.rtve: ZTNR.translate() runs endlessly ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description RTVE plugin is not resolving any streams and stuck after message "Found matching plugin rtve for URL". Upon debugging in Python the following while block seems to cause an endless loop: https://github.com/streamlink/streamlink/blob/master/src/streamlink/plugins/rtve.py#L111 Thanks for the good work! Cheers. ### Debug log ```text bin\streamlink.exe -l debug https://rtve.es/play/videos/directo/canales-lineales/24h [cli][debug] OS: Windows 10 [cli][debug] Python: 3.10.7 [cli][debug] Streamlink: 5.0.0 [cli][debug] Dependencies: [cli][debug] isodate: 0.6.1 [cli][debug] lxml: 4.9.1 [cli][debug] pycountry: 22.3.5 [cli][debug] pycryptodome: 3.15.0 [cli][debug] PySocks: 1.7.1 [cli][debug] requests: 2.28.1 [cli][debug] websocket-client: 1.4.1 [cli][debug] Arguments: [cli][debug] url=https://rtve.es/play/videos/directo/canales-lineales/24h [cli][debug] --loglevel=debug [cli][info] Found matching plugin rtve for URL https://rtve.es/play/videos/directo/canales-lineales/24h ```
Indeed, there's a bug in the while-loop. That while-loop should be turned into an iterator which yields tuples of chunk types and chunk data, so that the `continue` and `break` statements can be used in the loop body without having to call `read_chunks()` again. This is why there's an infinite loop, because the next data doesn't get read before executing the `continue` statement. However, the obfuscated data format which is being decoded here seems to have changed, too. As I mentioned in the last plugin issue thread, the only streams which were still relying on this data were some VODs, but live streams were not and a generic HLS URL template could be used instead: https://github.com/streamlink/streamlink/issues/4757#issuecomment-1221357795 But when looking at the response of the VOD I listed in the linked comment and comparing it to the response of the URL you've posted, something seems to have changed now. As said, if no streams are found in the obfuscated data, then the plugin continues with a generic HLS URL template, but it's possible that stream URLs are still present in the data. I'll first have to look at the site's JS implementation again before I can submit a PR with a fix for the while-loop. JS code seems to be unchanged https://js2.rtve.es/pages/app-player/0.0.79/js/pf_directo.js ```js function translate(img, callback) { var chunk, reader, arrSources = [], arrCalidades = [], calidad; try { reader = new Base64Reader(img); reader.skip(8); chunk = readChunk(reader); while (chunk.type !== 'IEND') { if (chunk.type === 'tEXt') { var data = chunk.data, a = '', i = 0; for (i = 0; i < data.length; ++i){ if (data[i] !== 0){ a += String.fromCharCode(data[i]); } } calidad = a.indexOf('%%') === -1 ? '' : a.split('#')[1].split('%%')[0]; a = a.indexOf('%%') === -1 ? a : a.split('#')[0]+'#'+a.split('#')[1].split('%%')[1]; arrCalidades.push(calidad); arrSources.push(getSource(a)); } chunk = readChunk(reader); } } catch (e){ console.error('RtvePlayer', 'ztnrThumbnail', 'No se ha podido procesar la img en modeTwo.'); } callback(arrSources, arrCalidades); } ``` Now I really don't know if this whole de-obfuscation implementation is actually still used and could actually be removed. Thanks for checking. I was about to ask if you need help reviewing the JS obfuscation but seems it is unchanged. On Web site, it's also the URL_M3U8 that are playing, so maybe they don't use the obfuscation anymore. But the obfuscated code is still there. Could be some change in the alfabet used? > Could be some change in the alfabet used? The `tEXt` chunks all have a different format. 
For example: ``` jOe6dr6E-WhfxA4tDLu/E99BOgw0_G07FFlP7-Y-7E8TZsC&l?wTJyHCoQIlyV5n7kbavfHTR78i0G@qcO3vN2bWDnm=L5Z4c2UZMvIUm.KUCBQ7Y5Z_IFLVdL6StTMxNaO-AFi1zj/Xzq:=X=vrAopESpA&ZhXUbHDw0&[email protected]#016140605667201732857087007702762067310481246733078104011874806022088152481364825411231265137653713378056041462843562350510166001171804117703154601345004618808073143728004840738436180465446316542136880354723226385036911713093320048501151818118010065458537144903936683157425245 ``` ``` Ws9UIgm3fOLf8odxuj9hvgFTa:wodKq7zK8nh4dim=oD@SXxN0ksQZ&6w@ZEys=F9IBJ&1t72BgC8S64aU&hu796mJp8UI8MC&Z@cistg&lE&DNCdUxHzD8X/.jigYxos5AMe:ywe-8VPpBFo.QLQfGO-oB3Uxx_T1u&DRA:O?bxZm3lYqKr#HD_READY%%056072828835264235110438472876841280385483004777041100281533745780108728158533341716111874515726195072871248032853583585734271467281658342851458532038185746475082794488761315343115176345571400505342181486242383617334460052055646248186340630811485114638636224827722225363122521315462562223710861062456253150681224636371436805518154655713152475815660126420505637003777020416131724112676335266754550515157651315067147202614227352871160855762333135443538012414315513277882527225026836205324360416239 ``` I don't know what this data represents and how it gets decoded, but since the JS code still uses the same logic for extracting the stream URLs as before when I implemented it (I've only posted the `translate` function and not the rest), I doubt it's a new format and just something else which we don't need, so basically junk data. As you said for yourself, the `URL_M3U8`-based HLS URLs are working fine, even for VODs. I'll keep the `ZNTR` implementation included, as it doesn't hurt having it, except for one useless HTTP request and some useless CPU cycles. It's possible that some streams still require it, and I don't want to unnecessarily break stuff.
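Regarding the loop fix mentioned earlier in this thread: the general pattern (simplified sketch, not the plugin code itself) is to move the read step into a generator, so that `continue`/`break` in the consumer can never skip the advance step and loop forever:

```python
def iter_chunks(reader):
    """Yield (chunk_type, chunk_data) tuples until the reader runs out of data."""
    while True:
        try:
            yield reader.read_chunk()
        except ValueError:  # raised once there's nothing left to read
            return

# Consumer side - continue/break freely without having to re-read manually:
# for chunk_type, chunk_data in iter_chunks(reader):
#     if chunk_type == "IEND":
#         break
#     if chunk_type != "tEXt":
#         continue
#     ...
```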
2022-09-20T16:09:56
streamlink/streamlink
4,850
streamlink__streamlink-4850
[ "4848" ]
facf193437b51d14093fa00d20c7fd14d28a4747
diff --git a/src/streamlink/plugins/vinhlongtv.py b/src/streamlink/plugins/vinhlongtv.py --- a/src/streamlink/plugins/vinhlongtv.py +++ b/src/streamlink/plugins/vinhlongtv.py @@ -7,39 +7,67 @@ import logging import re +from datetime import datetime +from hashlib import md5 + +from isodate import UTC # type: ignore[import] from streamlink.plugin import Plugin, pluginmatcher from streamlink.plugin.api import validate from streamlink.stream.hls import HLSStream + log = logging.getLogger(__name__) @pluginmatcher(re.compile( - r'https?://(?:www\.)?thvli\.vn/live/(?P<channel>[^/]+)' + r"https?://(?:www\.)?thvli\.vn/live/(?P<channel>[^/]+)" )) class VinhLongTV(Plugin): - api_url = 'http://api.thvli.vn/backend/cm/detail/{0}/' + _API_URL = "https://api.thvli.vn/backend/cm/get_detail/{channel}/" + _API_KEY_DATE = "Kh0ngDuLieu" + _API_KEY_TIME = "C0R0i" + _API_KEY_SECRET = "Kh0aAnT0an" + + def _get_headers(self): + now = datetime.now(tz=UTC) + date = now.strftime("%Y%m%d") + time = now.strftime("%H%M%S") + dtstr = f"{date}{time}" + dthash = md5(dtstr.encode()).hexdigest() + key_value = f"{dthash[:3]}{dthash[-3:]}" + key_access = f"{self._API_KEY_DATE}{date}{self._API_KEY_TIME}{time}{self._API_KEY_SECRET}{key_value}" - _data_schema = validate.Schema( - { - 'link_play': validate.text, - }, - validate.get('link_play') - ) + return { + "X-SFD-Date": dtstr, + "X-SFD-Key": md5(key_access.encode()).hexdigest(), + } def _get_streams(self): - channel = self.match.group('channel') + channel = self.match.group("channel") + params = {"timezone": "UTC"} + headers = self._get_headers() - res = self.session.http.get(self.api_url.format(channel)) - hls_url = self.session.http.json(res, schema=self._data_schema) - log.debug('URL={0}'.format(hls_url)) + self.id, self.title, hls_url = self.session.http.get( + self._API_URL.format(channel=channel), + params=params, + headers=headers, + schema=validate.Schema( + validate.parse_json(), + { + "id": str, + "title": str, + "link_play": str, + }, + validate.union_get( + "id", + "title", + "link_play", + ), + ), + ) - streams = HLSStream.parse_variant_playlist(self.session, hls_url) - if not streams: - return {'live': HLSStream(self.session, hls_url)} - else: - return streams + return HLSStream.parse_variant_playlist(self.session, hls_url) __plugin__ = VinhLongTV
diff --git a/tests/plugins/test_vinhlongtv.py b/tests/plugins/test_vinhlongtv.py --- a/tests/plugins/test_vinhlongtv.py +++ b/tests/plugins/test_vinhlongtv.py @@ -1,3 +1,7 @@ +from unittest.mock import Mock + +from freezegun import freeze_time + from streamlink.plugins.vinhlongtv import VinhLongTV from tests.plugins import PluginCanHandleUrl @@ -6,7 +10,17 @@ class TestPluginCanHandleUrlVinhLongTV(PluginCanHandleUrl): __plugin__ = VinhLongTV should_match = [ - 'http://thvli.vn/live/thvl1-hd/aab94d1f-44e1-4992-8633-6d46da08db42', - 'http://thvli.vn/live/thvl2-hd/bc60bddb-99ac-416e-be26-eb4d0852f5cc', - 'http://thvli.vn/live/phat-thanh/c87174ba-7aeb-4cb4-af95-d59de715464c', + "https://www.thvli.vn/live/thvl1-hd", + "https://www.thvli.vn/live/thvl2-hd", + "https://www.thvli.vn/live/thvl3-hd", + "https://www.thvli.vn/live/thvl4-hd", ] + + +@freeze_time("2022-09-25T00:04:45Z") +def test_headers(): + # noinspection PyUnresolvedReferences + assert VinhLongTV(Mock(), "")._get_headers() == { + "X-SFD-Date": "20220925000445", + "X-SFD-Key": "3507c190ae8befda3bfa8e2c00af3c7a", + }
plugins.vinhlongtv: update backend API URL

This is the new backend URL to get the links from Vinhlong TV. Slight change.

Proof picture:

![image](https://user-images.githubusercontent.com/30985701/192102124-2e4af4ca-08ee-4aff-8056-5636810d7dfd.png)
Thanks. Before I'm going to merge this, let me quickly fix some other issues in the plugin file, so I don't have to open another pull request. My intention was to quickly fix the validation schema definition and the plugin's code style, but apparently there's more to the API URL change. The site appears to be setting not only the timezone name for accessing streams outside of the target region, but it also sets two additional headers, `x-sfd-date` and `x-sfd-key`. If the timezone or additional headers are not set, then the API response is 403. ``` $ curl -IL 'https://api.thvli.vn/backend/cm/get_detail/thvl1-hd' HTTP/2 301 access-control-allow-origin: * content-type: text/html; charset=utf-8 location: https://api.thvli.vn/backend/cm/get_detail/thvl1-hd/ x-frame-options: SAMEORIGIN date: Sat, 24 Sep 2022 16:35:47 GMT HTTP/2 403 access-control-allow-origin: * allow: GET, HEAD, OPTIONS content-type: application/json x-frame-options: SAMEORIGIN date: Sat, 24 Sep 2022 16:35:47 GMT ``` ``` $ curl -IL 'https://api.thvli.vn/backend/cm/get_detail/thvl1-hd/?timezone=Europe/Paris' HTTP/2 403 access-control-allow-origin: * allow: GET, HEAD, OPTIONS content-type: application/json x-frame-options: SAMEORIGIN date: Sat, 24 Sep 2022 16:46:58 GMT ``` ``` $ curl -IL -H 'X-SFD-Date: 20220924184513' -H 'X-SFD-Key: d13957d58ca0ae35568214470629f0b5' 'https://api.thvli.vn/backend/cm/get_detail/thvl1-hd/?timezone=Europe/Paris' HTTP/2 200 access-control-allow-origin: * allow: GET, HEAD, OPTIONS cache-control: max-age=600 content-type: application/json expires: Sat, 24 Sep 2022 16:56:41 GMT last-modified: Sat, 24 Sep 2022 16:46:41 GMT x-frame-options: SAMEORIGIN date: Sat, 24 Sep 2022 16:46:41 GMT ``` This is the code responsible for generating the headers, without the signing key part (`I()(h).toString()`) ```js function(e, t, n) { var c = Date.now() , a = 0; localStorage.getItem("TIME_DIFF") && (a = parseInt(localStorage.getItem("TIME_DIFF"))), c -= a; var s = new Date(c) , i = s.getDate() < 10 ? "0" + s.getDate() : s.getDate().toString() , r = s.getMonth() + 1 < 10 ? "0" + (s.getMonth() + 1) : (s.getMonth() + 1).toString() , l = s.getFullYear().toString() + r + i , o = (s.getHours() < 10 ? "0" + s.getHours() : s.getHours().toString()) + (s.getMinutes() < 10 ? "0" + s.getMinutes() : s.getMinutes().toString()) + (s.getSeconds() < 10 ? "0" + s.getSeconds() : s.getSeconds().toString()) , d = I()(l + o).toString() , h = "Kh0ngDuLieu" + l + "C0R0i" + o + "Kh0aAnT0an" + (d.substring(0, 3) + d.substring(d.length - 3)); return E()({ headers: { "X-SFD-Key": I()(h).toString(), "X-SFD-Date": l + o }, method: e, url: "".concat("https://api.thvli.vn/backend/cm/").concat(t), data: n }) } ``` As you can see, this involves the current date, as well as hardcoded strings in the minified/obfuscated JS. And the resulting string is then "signed" (haven't checked in detail yet), so that the server can check it back and verify the client's signature. Since I don't have access to a VPN to Vietnam, I don't know if the plugin works from this region. It does not from a different region when the timezone and headers are unset, and the result is a 403 API response. If we can't get any confirmation that the plugin is working from within Vietnam (with the updated API URL), then it'll have to be removed. Otherwise, the whole `x-sdf-{date,key}` header logic would need to be re-implemented, in addition to the timezone names. 
The timezone names are another problem, because Python's standard library doesn't provide methods for getting the human-readable timezone names, unlike the "ECMAScript Internationalization API" in NodeJS or in the browser's DOM via `Intl.DateTimeFormat().resolvedOptions().timeZone`, so adding the `pytz` dependency would be required, which is not ideal. It is also the only endpoint that is verified by authentification. All the others do not seem to have it. So, may I ask, did you actually test the plugin yourself, or did you just compare the API URL with your browser's dev tools? Because it looks like the headers are always required. ---- I had another look at it, and apparently the site publishes JS source maps, so here's the non-minified code for making API calls: ```js const callApi = (method, endpoint, data) => { let currentTimestamp = Date.now() let timeDiff = 0 if (localStorage.getItem('TIME_DIFF')) { timeDiff = parseInt(localStorage.getItem('TIME_DIFF')) } currentTimestamp = currentTimestamp - timeDiff const date = new Date(currentTimestamp) const day = date.getDate() < 10 ? ('0' + date.getDate()) : (date.getDate()).toString() const month = (date.getMonth() + 1) < 10 ? ('0' + (date.getMonth() + 1)) : (date.getMonth() + 1).toString() const year = date.getFullYear().toString() const hour = date.getHours() < 10 ? ('0' + date.getHours()) : (date.getHours()).toString() const minute = date.getMinutes() < 10 ? ('0' + date.getMinutes()) : (date.getMinutes()).toString() const second = date.getSeconds() < 10 ? ('0' + date.getSeconds()) : (date.getSeconds()).toString() const dateValue = year + month + day const timeValue = hour + minute + second const md5Value = (MD5(dateValue + timeValue)).toString() const keyValue = md5Value.substring(0, 3) + md5Value.substring(md5Value.length - 3) const keyAccess = process.env.REACT_APP_ACCESS_KEY_DATE + dateValue + process.env.REACT_APP_ACCESS_KEY_TIME + timeValue + process.env.REACT_APP_ACCESS_KEY_SECRET + keyValue return axios({ headers: { 'X-SFD-Key': MD5(keyAccess).toString(), 'X-SFD-Date': dateValue + timeValue }, method: method, url: `${process.env.REACT_APP_API_URL}${endpoint}`, data: data }) } ``` with the following env vars set while they've built their production code: - `REACT_APP_ACCESS_KEY_DATE=Kh0ngDuLieu` - `REACT_APP_ACCESS_KEY_TIME=C0R0i` - `REACT_APP_ACCESS_KEY_SECRET=Kh0aAnT0an` - `REACT_APP_API_URL=https://api.thvli.vn/backend/cm/` The `MD5` function comes from here and works as expected with byte inputs: https://github.com/brix/crypto-js/blob/4.1.1/md5.js However, as said, in order to implement the right API calls, we need human-readable names of the user's timezone, and that requires adding another dependency to Streamlink. Since the timezone is most likely used for setting the right time on the server for checking the provided MD5 (as it depends on the user's current time), it's possible that generating a hash from the UTC timezone and setting the `Europe/London` timezone string will work regardless of the user's local time.
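For reference, here is a standalone Python translation of that routine (using the key constants from the build environment quoted above; whether hashing a UTC timestamp is accepted by the server is an assumption, as discussed):

```python
from datetime import datetime, timezone
from hashlib import md5

KEY_DATE, KEY_TIME, KEY_SECRET = "Kh0ngDuLieu", "C0R0i", "Kh0aAnT0an"

def sfd_headers(now=None):
    now = now or datetime.now(tz=timezone.utc)  # assumption: UTC instead of the user's local time
    date, time = now.strftime("%Y%m%d"), now.strftime("%H%M%S")
    digest = md5(f"{date}{time}".encode()).hexdigest()
    key_value = digest[:3] + digest[-3:]
    key_access = f"{KEY_DATE}{date}{KEY_TIME}{time}{KEY_SECRET}{key_value}"
    return {
        "X-SFD-Date": f"{date}{time}",
        "X-SFD-Key": md5(key_access.encode()).hexdigest(),
    }

print(sfd_headers())
```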
2022-09-25T00:17:52
streamlink/streamlink
4,851
streamlink__streamlink-4851
[ "4816" ]
ee230e47ca54545f65aba2af8c41df161b350032
diff --git a/src/streamlink/plugins/raiplay.py b/src/streamlink/plugins/raiplay.py --- a/src/streamlink/plugins/raiplay.py +++ b/src/streamlink/plugins/raiplay.py @@ -7,11 +7,12 @@ import logging import re -from urllib.parse import urlparse, urlunparse +from urllib.parse import parse_qsl, urlparse, urlunparse from streamlink.plugin import Plugin, pluginmatcher from streamlink.plugin.api import validate from streamlink.stream.hls import HLSStream +from streamlink.utils.url import update_qsd log = logging.getLogger(__name__) @@ -20,6 +21,8 @@ r"https?://(?:www\.)?raiplay\.it/dirette/(\w+)/?" )) class RaiPlay(Plugin): + _DEFAULT_MEDIAPOLIS_OUTPUT = "64" + def _get_streams(self): json_url = self.session.http.get(self.url, schema=validate.Schema( validate.parse_html(), @@ -31,19 +34,44 @@ def _get_streams(self): json_url = urlunparse(urlparse(self.url)._replace(path=json_url)) log.debug(f"Found JSON URL: {json_url}") - stream_url = self.session.http.get(json_url, schema=validate.Schema( + content_url = self.session.http.get(json_url, schema=validate.Schema( validate.parse_json(), {"video": {"content_url": validate.url()}}, validate.get(("video", "content_url")), )) - log.debug(f"Found stream URL: {stream_url}") + if not content_url: + log.error("Missing content URL") + return + + content_url = content_url.replace("/relinkerServlet.mp4", "/relinkerServlet.htm") + parsed = urlparse(content_url) + params = dict(parse_qsl(parsed.query)) + if not parsed.path.endswith(".xml") and not params.get("output"): + params["output"] = self._DEFAULT_MEDIAPOLIS_OUTPUT + content_url = update_qsd(urlunparse(parsed), params) - res = self.session.http.request("HEAD", stream_url) + res = self.session.http.head(content_url) # status code will be 200 even if geo-blocked, so check the returned content-type if not res or not res.headers or res.headers["Content-Type"] == "video/mp4": log.error("Geo-restricted content") return + stream_url = self.session.http.get( + content_url, + schema=validate.Schema( + validate.parse_xml(), + validate.xml_element(tag="Mediapolis"), + validate.xml_xpath_string("./url[@type='content']/text()"), + validate.none_or_all( + validate.transform(str.strip), + validate.url(path=validate.endswith(".m3u8")), + ), + ), + ) + if not stream_url: + log.error("Missing stream URL") + return + yield from HLSStream.parse_variant_playlist(self.session, stream_url).items()
plugins.raiplay: Malformed HLS Playlist ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest build from the master branch ### Description Plugin stopped working. ### Debug log ```text streamlink https://www.raiplay.it/dirette/rai1 best [cli][info] Found matching plugin raiplay for URL https://www.raiplay.it/dirette/rai1 [stream.hls_playlist][warning] Malformed HLS Playlist. Expected #EXTM3U, but got <Mediapolis> error: Failed to parse playlist: Missing #EXTM3U header ```
Post the entire **debug log**, as requested by the plugin issue form.

The site is geo-blocked and it also blocks popular VPN services, so I can't access its streams.

The plugin requests the correct API endpoint and then tries to access the resolved stream URL, which appears to be `https://mediapolis.rai.it/relinker/relinkerServlet.htm?cont=2606803` for the `rai1` channel.
https://github.com/streamlink/streamlink/blob/4.3.0/src/streamlink/plugins/raiplay.py#L39-L45

If it's geo-blocked, then the returned content is not an HLS playlist, but a video instead, which the plugin handles correctly.

```
$ curl -sIL "https://mediapolis.rai.it/relinker/relinkerServlet.htm?cont=2606803"
HTTP/1.1 302 Moved Temporarily
Date: Sat, 10 Sep 2022 14:37:54 GMT
Server: Apache
Access-Control-Allow-Credentials: true
Access-Control-Allow-Methods: GET, OPTIONS, HEAD
Vary: X-Forwarded-Proto
Location: https://download-rai-it.akamaized.net/video_no_available.mp4
Content-Type: text/html

HTTP/1.1 200 OK
Accept-Ranges: bytes
ETag: "51a83016a9c866c8213e52cdbcd19d3d:1415293655"
Last-Modified: Thu, 06 Nov 2014 17:07:35 GMT
Server: AkamaiNetStorage
Content-Length: 1687158
Cache-Control: max-age=29673477
Date: Sat, 10 Sep 2022 14:37:55 GMT
Connection: keep-alive
Akamai-Mon-Iucid-Del: 631717
Alt-Svc: h3-Q050=":443"; ma=93600,quic=":443"; ma=93600; v="46,43"
Content-Type: video/mp4
Access-Control-Max-Age: 86400
Access-Control-Allow-Credentials: true
Access-Control-Expose-Headers: Server,range,hdntl,hdnts,Akamai-Mon-Iucid-Ing,Akamai-Mon-Iucid-Del
Access-Control-Allow-Headers: origin,range,hdntl,hdnts
Access-Control-Allow-Methods: GET,POST,OPTIONS
Access-Control-Allow-Origin: *
```

**Please post the output of the `curl` command shown above.** (in addition to the debug log mentioned above)

> `Expected #EXTM3U, but got <Mediapolis>`

Btw, this looks like there's some XML or another custom HTTP response instead of the HLS playlist content. Either way, I can't access it, and I've checked several different VPN services.

Here are the debug logs for rai1, rai2 and the curl:

```text
streamlink -l debug https://www.raiplay.it/dirette/rai1 best
[cli][info] Found matching plugin raiplay for URL https://www.raiplay.it/dirette/rai1
[plugins.raiplay][debug] Found JSON URL: https://www.raiplay.it/dirette/rai1.json
[plugins.raiplay][debug] Found stream URL: https://mediapolis.rai.it/relinker/relinkerServlet.htm?cont=2606803
[utils.l10n][debug] Language code: en_US
[stream.hls_playlist][warning] Malformed HLS Playlist. Expected #EXTM3U, but got <Mediapolis>
error: Failed to parse playlist: Missing #EXTM3U header
```

```text
streamlink -l debug https://www.raiplay.it/dirette/rai2 best
[cli][info] Found matching plugin raiplay for URL https://www.raiplay.it/dirette/rai2
[plugins.raiplay][debug] Found JSON URL: https://www.raiplay.it/dirette/rai2.json
[plugins.raiplay][debug] Found stream URL: https://mediapolis.rai.it/relinker/relinkerServlet.htm?cont=308718
[utils.l10n][debug] Language code: en_US
[stream.hls_playlist][warning] Malformed HLS Playlist. Expected #EXTM3U, but got <Mediapolis>
error: Failed to parse playlist: Missing #EXTM3U header
```

```text
curl -sIL "https://mediapolis.rai.it/relinker/relinkerServlet.htm?cont=2606803"
HTTP/1.1 200 OK
HTTP/1.1 200 OK
Date: Sat, 10 Sep 2022 19:05:11 GMT
Server: Apache
Access-Control-Allow-Credentials: true
Access-Control-Allow-Methods: GET, OPTIONS, HEAD
Vary: X-Forwarded-Proto
X-Powered-By: JSP/2.2
Set-Cookie: JSESSIONID=vIbyMqx1bQc4ofepIrde1voL; Path=/relinker
Cache-Control: private
Expires: Sat, 10 Sep 2022 19:05:31 GMT
Content-Language: it-IT
Content-Length: 27
Content-Type: text/xml
```

If you want, you can post the XML output of

```
$ curl -sL "https://mediapolis.rai.it/relinker/relinkerServlet.htm?cont=2606803"
```

but I don't think that debugging it this way makes sense. As said, the result can be anything, but if I have to ask someone else for any server responses, then this is just stupid. Fixing and potentially re-implementing the plugin requires being able to see and understand all the HTTP responses.

Rai News24 is normally not geo blocked, if that is of any help.
https://www.raiplay.it/dirette/rainews24
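In case it helps with debugging: since the relinker now answers with a small `<Mediapolis>` XML document instead of a playlist, a minimal sketch of resolving it to the actual HLS URL could look like this. It's untested from here due to the geo-block; the `output=64` parameter and the `url[@type='content']` element are assumptions taken from the site's player and the fix in this record:

```py
import requests
from lxml import etree


def resolve_relinker(cont):
    # request the XML variant of the relinker response
    res = requests.get(
        "https://mediapolis.rai.it/relinker/relinkerServlet.htm",
        params={"cont": cont, "output": "64"},
    )
    root = etree.fromstring(res.content)  # root element is <Mediapolis>
    url = root.findtext("./url[@type='content']")
    return url.strip() if url else None
```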
2022-09-25T17:49:46
streamlink/streamlink
4858
streamlink__streamlink-4858
[ "4857" ]
4b6077d1dab47c71e1492ea63ed5fb094b35af48
diff --git a/src/streamlink/plugins/hiplayer.py b/src/streamlink/plugins/hiplayer.py --- a/src/streamlink/plugins/hiplayer.py +++ b/src/streamlink/plugins/hiplayer.py @@ -46,8 +46,8 @@ def _get_streams(self): ), ), ) - if not js_url: + log.error("Could not find JS URL") return log.debug(f"JS URL={js_url}") @@ -55,9 +55,9 @@ def _get_streams(self): data = self.session.http.get( js_url, schema=validate.Schema( - re.compile(r"i\s*=\s*\[(.*)]\.join"), + re.compile(r"var \w+\s*=\s*\[(?P<data>.+)]\.join\([\"']{2}\)"), validate.none_or_all( - validate.get(1), + validate.get("data"), validate.transform(lambda s: re.sub(r"['\", ]", "", s)), validate.transform(lambda s: base64.b64decode(s)), validate.parse_json(), @@ -73,13 +73,16 @@ def _get_streams(self): ), ), ) + if not data: + log.error("Could not find base64 encoded JSON data") + return hls_url = data["streamUrl"] if data["daiEnabled"]: log.debug("daiEnabled=true") hls_url = self.session.http.post( - self.DAI_URL.format(data['daiAssetKey']), + self.DAI_URL.format(data["daiAssetKey"]), data={"api-key": data["daiApiKey"]}, schema=validate.Schema( validate.parse_json(),
plugins.hiplayer: plugin broken ('NoneType' object is not subscriptable) ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest build from the master branch ### Description The hiplayer plugin has stopped working **Sample links** streamlink https://rotana.net/live-clip streamlink https://www.media.gov.kw/LiveTV.aspx?PanChannel=KTV1 ### Debug log ```text pi@raspberrypi:~ $ streamlink https://rotana.net/live-clip --loglevel debug [cli][debug] OS: Linux-5.10.103-v7l+-armv7l-with-glibc2.31 [cli][debug] Python: 3.9.2 [cli][debug] Streamlink: 5.0.1+6.g4b6077d1 [cli][debug] Dependencies: [cli][debug] isodate: 0.6.0 [cli][debug] lxml: 4.6.4 [cli][debug] pycountry: 20.7.3 [cli][debug] pycryptodome: 3.11.0 [cli][debug] PySocks: 1.7.1 [cli][debug] requests: 2.26.0 [cli][debug] websocket-client: 1.2.1 [cli][debug] importlib-metadata: 1.6.0 [cli][debug] Arguments: [cli][debug] url=https://rotana.net/live-clip [cli][debug] --loglevel=debug [cli][info] Found matching plugin hiplayer for URL https://rotana.net/live-clip [plugins.hiplayer][debug] JS URL=https://hiplayer.hibridcdn.net/l/rotana-clip Traceback (most recent call last): File "/home/pi/.local/bin/streamlink", line 8, in <module> sys.exit(main()) File "/home/pi/.local/lib/python3.9/site-packages/streamlink_cli/main.py", line 975, in main handle_url() File "/home/pi/.local/lib/python3.9/site-packages/streamlink_cli/main.py", line 606, in handle_url streams = fetch_streams(plugin) File "/home/pi/.local/lib/python3.9/site-packages/streamlink_cli/main.py", line 500, in fetch_streams return plugin.streams(stream_types=args.stream_types, File "/home/pi/.local/lib/python3.9/site-packages/streamlink/plugin/plugin.py", line 376, in streams ostreams = self._get_streams() File "/home/pi/.local/lib/python3.9/site-packages/streamlink/plugins/hiplayer.py", line 77, in _get_streams hls_url = data["streamUrl"] TypeError: 'NoneType' object is not subscriptable pi@raspberrypi:~ $ ``` ```
The site has made changes and it looks like the regex for getting the base64 encoded JSON data from the minified/obfuscated JS file is broken due to a different variable name from the JS minifier's mangling algorithm. https://github.com/streamlink/streamlink/blob/5.0.1/src/streamlink/plugins/hiplayer.py#L58 The plugin doesn't check for a `None` response when accessing a property, hence the `TypeError`, which should also get fixed. I'll open a PR with a fix in an hour or two.
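For reference, the extraction boils down to something like this (a sketch only; the helper name is made up, the regex mirrors the variable-name-agnostic pattern the fix uses, and the `None` check addresses the `TypeError`):

```py
import base64
import json
import re


def extract_player_data(js_text):
    # the payload is a base64 string split into a JS array of quoted chunks, joined with ""
    m = re.search(r"""var \w+\s*=\s*\[(?P<data>.+)]\.join\(["']{2}\)""", js_text)
    if not m:
        return None
    # strip quotes, commas and spaces, then decode the base64-encoded JSON
    b64 = re.sub(r"""['", ]""", "", m.group("data"))
    return json.loads(base64.b64decode(b64))
```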
2022-09-30T22:27:26
streamlink/streamlink
4,863
streamlink__streamlink-4863
[ "4862" ]
fe1604eacdd3d6c0263315b212adf6739c8e33ed
diff --git a/src/streamlink/logger.py b/src/streamlink/logger.py --- a/src/streamlink/logger.py +++ b/src/streamlink/logger.py @@ -3,6 +3,7 @@ from datetime import datetime from logging import CRITICAL, DEBUG, ERROR, INFO, WARNING from pathlib import Path +from sys import version_info from threading import Lock from typing import IO, List, Optional, TYPE_CHECKING, Union @@ -46,9 +47,18 @@ class StreamlinkLogger(_BaseLoggerClass): - def trace(self, message, *args, **kws): - if self.isEnabledFor(TRACE): - self._log(TRACE, message, args, **kws) + # fix module name that gets read from the call stack in the logging module + # https://github.com/python/cpython/commit/5ca6d7469be53960843df39bb900e9c3359f127f + if version_info >= (3, 11): + def trace(self, message, *args, **kws): + if self.isEnabledFor(TRACE): + # increase the stacklevel by one and skip the `trace()` call here + kws["stacklevel"] = 2 + self._log(TRACE, message, args, **kws) + else: + def trace(self, message, *args, **kws): + if self.isEnabledFor(TRACE): + self._log(TRACE, message, args, **kws) class StringFormatter(logging.Formatter):
diff --git a/tests/test_logger.py b/tests/test_logger.py --- a/tests/test_logger.py +++ b/tests/test_logger.py @@ -87,6 +87,17 @@ def test_trace_no_output(self, log: logging.Logger, output: StringIO): log.trace("test") # type: ignore[attr-defined] assert output.getvalue() == "" + # https://github.com/streamlink/streamlink/issues/4862 + def test_trace_module_name(self, caplog: pytest.LogCaptureFixture, log: logging.Logger): + caplog.set_level(1) + log = logging.getLogger(self.__class__.__module__) + log.trace("foo") # type: ignore[attr-defined] + log.log(logger.TRACE, "bar") + assert [(record.module, record.levelname, record.message) for record in caplog.records] == [ + ("test_logger", "trace", "foo"), + ("test_logger", "trace", "bar"), + ] + def test_debug_out_at_trace(self, log: logging.Logger, output: StringIO): log.setLevel("trace") log.debug("test")
logger: custom trace() log level has incorrect module name on Python 3.11

Python 3.11 has changed how the call stack gets read when creating a new log record:
https://github.com/python/cpython/commit/5ca6d7469be53960843df39bb900e9c3359f127f

This causes the log record's module name to be set to Streamlink's `logger` module where the custom `trace()` method is defined, which is incorrect.

Example:
https://github.com/streamlink/streamlink/actions/runs/3168863166/jobs/5160366442#step:5:285
https://github.com/streamlink/streamlink/pull/4861#issuecomment-1264654242

Is this worthy of a Python bug report, or do we need to change how the custom `trace()` method is defined?

- `Logger._log()` https://github.com/python/cpython/blob/v3.11.0rc2/Lib/logging/__init__.py#L1610-L1634
- `Logger.findCaller()` https://github.com/python/cpython/blob/v3.11.0rc2/Lib/logging/__init__.py#L1561-L1593

Since there's no way to pass a custom call stack to the `Logger._log()` method, the `stacklevel` needs to be increased from 1 to 2 on py311 from what it looks like, so that it skips the `logger` module at the beginning of the call stack:

```py
class StreamlinkLogger(_BaseLoggerClass):
    if sys.version_info < (3, 11):
        def trace(self, message, *args, **kws):
            if self.isEnabledFor(TRACE):
                self._log(TRACE, message, args, **kws)
    else:
        def trace(self, message, *args, **kws):
            if self.isEnabledFor(TRACE):
                kws["stacklevel"] = 2
                self._log(TRACE, message, args, **kws)
```
2022-10-02T15:22:13
streamlink/streamlink
4,876
streamlink__streamlink-4876
[ "4872" ]
204c5b565dce4143d76a28c85a793e4ed90bd4de
diff --git a/src/streamlink/plugins/goltelevision.py b/src/streamlink/plugins/goltelevision.py --- a/src/streamlink/plugins/goltelevision.py +++ b/src/streamlink/plugins/goltelevision.py @@ -17,12 +17,16 @@ )) class GOLTelevision(Plugin): def _get_streams(self): + self.session.http.headers.update({ + "Origin": "https://goltelevision.com", + "Referer": "https://goltelevision.com/", + }) url = self.session.http.get( "https://play.goltelevision.com/api/stream/live", schema=validate.Schema( validate.parse_json(), {"manifest": validate.url()}, - validate.get("manifest") + validate.get("manifest"), ) ) return HLSStream.parse_variant_playlist(self.session, url)
plugins.goltelevision: 401 Client Error: Unauthorized for url ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description Using the latest app image with Tvheadend with command: pipe:///usr/local/bin/streamlink -O http://www.goltelevision.com/en-directo best 2022-10-09 19:34:55.795 mpegts: GOL TV HD in Streams - tuning on IPTV #1 2022-10-09 19:34:55.840 spawn: Executing "/usr/local/bin/streamlink" 2022-10-09 19:34:55.840 subscription: 0109: "scan" subscribing to mux "GOL TV HD", weight: 6, adapter: "IPTV #1", network: "Streams", service: "Raw PID Subscription" 2022-10-09 19:34:56.216 spawn: [cli][info] Found matching plugin goltelevision for URL http://www.goltelevision.com/en-directo 2022-10-09 19:34:56.348 spawn: error: Unable to open URL: https://play.goltelevision.com/api/stream/live (401 Client Error: Unauthorized for url: https://play.goltelevision.com/api/stream/live) 2022-10-09 19:34:56.430 iptv: stdin pipe 88 unexpectedly closed: No data Seems like it´s expecting an URL starting with https but actually the website is using: http://www.goltelevision.com/en-directo Don´t know if this is the root cause of the issue. ### Debug log ```text nud@NUC:~/streamlink$ ./streamlink -l debug http://www.goltelevision.com/en-directo [cli][debug] OS: Linux-5.15.0-48-generic-x86_64-with-glibc2.31 [cli][debug] Python: 3.10.7 [cli][debug] Streamlink: 5.0.1 [cli][debug] Dependencies: [cli][debug] isodate: 0.6.1 [cli][debug] lxml: 4.9.1 [cli][debug] pycountry: 22.3.5 [cli][debug] pycryptodome: 3.15.0 [cli][debug] PySocks: 1.7.1 [cli][debug] requests: 2.28.1 [cli][debug] websocket-client: 1.4.1 [cli][debug] Arguments: [cli][debug] url=http://www.goltelevision.com/en-directo [cli][debug] --loglevel=debug [cli][info] Found matching plugin goltelevision for URL http://www.goltelevision.com/en-directo error: Unable to open URL: https://play.goltelevision.com/api/stream/live (401 Client Error: Unauthorized for url: https://play.goltelevision.com/api/stream/live) If I try this command: ./streamlink -l debug https://www.goltelevision.com/en-directo ./streamlink -l debug https://www.goltelevision.com/en-directo [cli][debug] OS: Linux-5.15.0-48-generic-x86_64-with-glibc2.31 [cli][debug] Python: 3.10.7 [cli][debug] Streamlink: 5.0.1 [cli][debug] Dependencies: [cli][debug] isodate: 0.6.1 [cli][debug] lxml: 4.9.1 [cli][debug] pycountry: 22.3.5 [cli][debug] pycryptodome: 3.15.0 [cli][debug] PySocks: 1.7.1 [cli][debug] requests: 2.28.1 [cli][debug] websocket-client: 1.4.1 [cli][debug] Arguments: [cli][debug] url=https://www.goltelevision.com/en-directo [cli][debug] --loglevel=debug [cli][info] Found matching plugin goltelevision for URL https://www.goltelevision.com/en-directo error: Unable to open URL: https://play.goltelevision.com/api/stream/live (401 Client Error: Unauthorized for url: https://play.goltelevision.com/api/stream/live) I have the same result. nuc@NUC:~/streamlink$ ./streamlink --version-check [cli][info] Your Streamlink version (5.0.1) is up to date! 
nuc@NUC:~/streamlink$ ./streamlink --version streamlink 5.0.1 nuc@NUC:~/streamlink$ ./streamlink --plugins Loaded plugins: abematv, adultswim, afreeca, albavision, aloula, app17, ard_live, ard_mediathek, artetv, atpchallenger, atresplayer, bbciplayer, bfmtv, bigo, bilibili, blazetv, bloomberg, booyah, brightcove, btv, cbsnews, cdnbg, ceskatelevize, cinergroup, clubbingtv, cmmedia, cnews, crunchyroll, dailymotion, dash, delfi, deutschewelle, dlive, dogan, dogus, drdk, earthcam, egame, euronews, facebook, filmon, foxtr, funimationnow, galatasaraytv, **goltelevision**, goodgame, googledrive, gulli, hiplayer, hls, http, htv, huajiao, huya, idf1, invintus, kugou, linelive, livestream, lnk, lrt, ltv_lsm_lv, mdstrm, mediaklikk, mediavitrina, mildom, mitele, mjunoon, mrtmk, n13tv, nbcnews, nhkworld, nicolive, nimotv, nos, nownews, nrk, ntv, okru, olympicchannel, oneplusone, onetv, openrectv, orf_tvthek, pandalive, picarto, piczel, pixiv, pluto, pluzz, qq, radiko, radionet, raiplay, reuters, rtbf, rtpa, rtpplay, rtve, rtvs, ruv, sbscokr, schoolism, showroom, sportal, sportschau, ssh101, stadium, steam, streamable, streann, stv, svtplay, swisstxt, telefe, tf1, trovo, turkuvaz, tv360, tv3cat, tv4play, tv5monde, tv8, tv999, tvibo, tviplayer, tvp, tvrby, tvrplus, tvtoya, twitcasting, twitch, useetv, ustreamtv, ustvnow, vidio, vimeo, vinhlongtv, vk, vlive, vtvgo, wasd, webtv, welt, wwenetwork, youtube, yupptv, zattoo, zdf_mediathek, zeenews, zengatv, zhanqi ```
The API endpoint where the stream URL is read from requires the `Referer` and `Origin` HTTP headers to be set. That's all. I'll submit a PR in a couple of minutes.
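For anyone who wants to verify it quickly, a sketch of the request with the required headers (the endpoint and the `manifest` key are the ones the plugin already uses):

```py
import requests

res = requests.get(
    "https://play.goltelevision.com/api/stream/live",
    headers={
        "Origin": "https://goltelevision.com",
        "Referer": "https://goltelevision.com/",
    },
)
# the JSON response contains the HLS manifest URL
print(res.json()["manifest"])
```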
2022-10-09T22:02:47
streamlink/streamlink
4877
streamlink__streamlink-4877
[ "4875" ]
204c5b565dce4143d76a28c85a793e4ed90bd4de
diff --git a/src/streamlink/plugins/atresplayer.py b/src/streamlink/plugins/atresplayer.py --- a/src/streamlink/plugins/atresplayer.py +++ b/src/streamlink/plugins/atresplayer.py @@ -22,10 +22,12 @@ r"https?://(?:www\.)?atresplayer\.com/" )) class AtresPlayer(Plugin): + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + self.url = update_scheme("https://", f"{self.url.rstrip('/')}/") + def _get_streams(self): - self.url = update_scheme("https://", self.url) path = urlparse(self.url).path - api_url = self.session.http.get(self.url, schema=validate.Schema( re.compile(r"""window.__PRELOADED_STATE__\s*=\s*({.*?});""", re.DOTALL), validate.none_or_all(
diff --git a/tests/plugins/test_atresplayer.py b/tests/plugins/test_atresplayer.py --- a/tests/plugins/test_atresplayer.py +++ b/tests/plugins/test_atresplayer.py @@ -1,3 +1,7 @@ +from unittest.mock import Mock + +import pytest + from streamlink.plugins.atresplayer import AtresPlayer from tests.plugins import PluginCanHandleUrl @@ -6,8 +10,20 @@ class TestPluginCanHandleUrlAtresPlayer(PluginCanHandleUrl): __plugin__ = AtresPlayer should_match = [ - 'http://www.atresplayer.com/directos/antena3/', - 'http://www.atresplayer.com/directos/lasexta/', - 'https://www.atresplayer.com/directos/antena3/', - 'https://www.atresplayer.com/flooxer/programas/unas/temporada-1/dario-eme-hache-sindy-takanashi-entrevista_123/', + "http://www.atresplayer.com/directos/antena3/", + "http://www.atresplayer.com/directos/lasexta/", + "https://www.atresplayer.com/directos/antena3/", + "https://www.atresplayer.com/flooxer/programas/unas/temporada-1/dario-eme-hache-sindy-takanashi-entrevista_123/", ] + + +class TestAtresPlayer: + @pytest.mark.parametrize("url,expected", [ + ("http://www.atresplayer.com/directos/antena3", "https://www.atresplayer.com/directos/antena3/"), + ("http://www.atresplayer.com/directos/antena3/", "https://www.atresplayer.com/directos/antena3/"), + ("https://www.atresplayer.com/directos/antena3", "https://www.atresplayer.com/directos/antena3/"), + ("https://www.atresplayer.com/directos/antena3/", "https://www.atresplayer.com/directos/antena3/"), + ]) + def test_url(self, url, expected): + plugin = AtresPlayer(Mock(), url) + assert plugin.url == expected
plugins.atresplayer: error: Unable to validate response text: ValidationError(NoneOrAllSchema) ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description Using the latest app image with Tvheadend with command: pipe:///usr/local/bin/streamlink -O https://www.atresplayer.com/directos/nova best 2022-10-09 23:21:29.885 mpegts: nova HD in Streams - tuning on IPTV #1 2022-10-09 23:21:29.927 subscription: 0121: "scan" subscribing to mux "nova HD", weight: 6, adapter: "IPTV #1", network: "Streams", service: "Raw PID Subscription" 2022-10-09 23:21:29.927 spawn: Executing "/usr/local/bin/streamlink" 2022-10-09 23:21:30.352 spawn: [cli][info] Found matching plugin atresplayer for URL https://www.atresplayer.com/directos/nova/ 2022-10-09 23:21:30.621 spawn: [cli][info] Available streams: 360p (worst), 480p, 720p, 1080p (best) 2022-10-09 23:21:30.621 spawn: [cli][info] Opening stream: 1080p (hls) 2022-10-09 23:21:44.927 mpegts: nova HD in Streams - scan no data, failed 2022-10-09 23:21:44.927 subscription: 0121: "scan" unsubscribing ### Debug log ```text nico@NUC:~/streamlink$ ./streamlink -l debug https://www.atresplayer.com/directos/nova [cli][debug] OS: Linux-5.15.0-48-generic-x86_64-with-glibc2.31 [cli][debug] Python: 3.10.7 [cli][debug] Streamlink: 5.0.1 [cli][debug] Dependencies: [cli][debug] isodate: 0.6.1 [cli][debug] lxml: 4.9.1 [cli][debug] pycountry: 22.3.5 [cli][debug] pycryptodome: 3.15.0 [cli][debug] PySocks: 1.7.1 [cli][debug] requests: 2.28.1 [cli][debug] websocket-client: 1.4.1 [cli][debug] Arguments: [cli][debug] url=https://www.atresplayer.com/directos/nova [cli][debug] --loglevel=debug [cli][info] Found matching plugin atresplayer for URL https://www.atresplayer.com/directos/nova error: Unable to validate response text: ValidationError(NoneOrAllSchema): ValidationError(dict): Unable to validate value of key 'links' Context(dict): Key '/directos/nova' not found in <{'/directos/nova/': {'url': '/directos/nova/', 'redirec...> nuc@NUC:~/streamlink$ ./streamlink --version-check [cli][info] Your Streamlink version (5.0.1) is up to date! 
nuc@NUC:~/streamlink$ ./streamlink --version streamlink 5.0.1 nuc@NUC:~/streamlink$ ./streamlink --plugins Loaded plugins: abematv, adultswim, afreeca, albavision, aloula, app17, ard_live, ard_mediathek, artetv, atpchallenger, atresplayer, bbciplayer, bfmtv, bigo, bilibili, blazetv, bloomberg, booyah, brightcove, btv, cbsnews, cdnbg, ceskatelevize, cinergroup, clubbingtv, cmmedia, cnews, crunchyroll, dailymotion, dash, delfi, deutschewelle, dlive, dogan, dogus, drdk, earthcam, egame, euronews, facebook, filmon, foxtr, funimationnow, galatasaraytv, **goltelevision**, goodgame, googledrive, gulli, hiplayer, hls, http, htv, huajiao, huya, idf1, invintus, kugou, linelive, livestream, lnk, lrt, ltv_lsm_lv, mdstrm, mediaklikk, mediavitrina, mildom, mitele, mjunoon, mrtmk, n13tv, nbcnews, nhkworld, nicolive, nimotv, nos, nownews, nrk, ntv, okru, olympicchannel, oneplusone, onetv, openrectv, orf_tvthek, pandalive, picarto, piczel, pixiv, pluto, pluzz, qq, radiko, radionet, raiplay, reuters, rtbf, rtpa, rtpplay, rtve, rtvs, ruv, sbscokr, schoolism, showroom, sportal, sportschau, ssh101, stadium, steam, streamable, streann, stv, svtplay, swisstxt, telefe, tf1, trovo, turkuvaz, tv360, tv3cat, tv4play, tv5monde, tv8, tv999, tvibo, tviplayer, tvp, tvrby, tvrplus, tvtoya, twitcasting, twitch, useetv, ustreamtv, ustvnow, vidio, vimeo, vinhlongtv, vk, vlive, vtvgo, wasd, webtv, welt, wwenetwork, youtube, yupptv, zattoo, zdf_mediathek, zeenews, zengatv, zhanqi ```
> `https://www.atresplayer.com/directos/nova` The URL passed to streamlink has to end with a `/` character. Otherwise the JSON payload contains an invalid key. This doesn't mean though that the plugin shouldn't receive a fix for that.
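A small sketch of the normalization idea, mirroring what the fix ends up doing (force https and ensure a single trailing slash):

```py
from streamlink.utils.url import update_scheme


def normalize_url(url):
    # force the https scheme and append exactly one trailing slash
    return update_scheme("https://", f"{url.rstrip('/')}/")
```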
2022-10-09T22:07:20
streamlink/streamlink
4,885
streamlink__streamlink-4885
[ "4884" ]
bfcd3725c5028b1e866e492a63fc6278b4b0522b
diff --git a/src/streamlink/plugins/btv.py b/src/streamlink/plugins/btv.py --- a/src/streamlink/plugins/btv.py +++ b/src/streamlink/plugins/btv.py @@ -44,14 +44,11 @@ def _get_streams(self): validate.parse_json(), { "status": "ok", - "config": str, + "info": { + "file": validate.url(path=validate.endswith(".m3u8")), + }, }, - validate.get("config"), - re.compile(r"src: \"(http.*?)\""), - validate.none_or_all( - validate.get(1), - validate.url(), - ), + validate.get(("info", "file")), ), ), ), @@ -63,7 +60,7 @@ def _get_streams(self): log.error("The content is not available in your region") return - return HLSStream.parse_variant_playlist(self.session, stream_url) + return {"live": HLSStream(self.session, stream_url)} __plugin__ = BTV
plugins.btv: No playable streams found ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description The plugin is not functional. I am attaching a log. ### Debug log ```text streamlink --loglevel debug "https://btvplus.bg/live/" best [cli][debug] OS: Linux-5.15.0-50-generic-x86_64-with-glibc2.29 [cli][debug] Python: 3.8.10 [cli][debug] Streamlink: 5.0.1 [cli][debug] Dependencies: [cli][debug] isodate: 0.6.0 [cli][debug] lxml: 4.6.4 [cli][debug] pycountry: 19.8.18 [cli][debug] pycryptodome: 3.9.9 [cli][debug] PySocks: 1.7.1 [cli][debug] requests: 2.26.0 [cli][debug] websocket-client: 1.2.1 [cli][debug] Arguments: [cli][debug] url=https://btvplus.bg/live/ [cli][debug] stream=['best'] [cli][debug] --loglevel=debug [cli][info] Found matching plugin btv for URL https://btvplus.bg/live/ [utils.l10n][debug] Language code: bg_BG error: No playable streams found on this URL: https://btvplus.bg/live/ ```
2022-10-13T05:32:41
streamlink/streamlink
4,887
streamlink__streamlink-4887
[ "4886" ]
bfcd3725c5028b1e866e492a63fc6278b4b0522b
diff --git a/src/streamlink/plugins/tv8.py b/src/streamlink/plugins/tv8.py --- a/src/streamlink/plugins/tv8.py +++ b/src/streamlink/plugins/tv8.py @@ -18,11 +18,9 @@ r"https?://www\.tv8\.com\.tr/canli-yayin" )) class TV8(Plugin): - title = "TV8" - def _get_streams(self): hls_url = self.session.http.get(self.url, schema=validate.Schema( - re.compile(r"""file\s*:\s*(?P<q>["'])(?P<hls_url>https?://.*?\.m3u8.*?)(?P=q)"""), + re.compile(r"""var\s+videoUrl\s*=\s*(?P<q>["'])(?P<hls_url>https?://.*?\.m3u8.*?)(?P=q)"""), validate.any(None, validate.get("hls_url")), )) if hls_url is not None:
plugins.tv8: Doesn't work ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest build from the master branch ### Description Getting 403 error ### Debug log ```text [cli][debug] OS: Windows 7 [cli][debug] Python: 3.8.14 [cli][debug] Streamlink: 5.0.1+17.gbfcd372 [cli][debug] Dependencies: [cli][debug] isodate: 0.6.1 [cli][debug] lxml: 4.9.1 [cli][debug] pycountry: 22.3.5 [cli][debug] pycryptodome: 3.15.0 [cli][debug] PySocks: 1.7.1 [cli][debug] requests: 2.28.1 [cli][debug] websocket-client: 1.4.1 [cli][debug] Arguments: [cli][debug] url=https://www.tv8.com.tr/canli-yayin [cli][debug] stream=['best'] [cli][debug] --loglevel=debug [cli][debug] --player="mpv.exe" [cli][debug] --ffmpeg-ffmpeg=ffmpeg.exe [cli][info] Found matching plugin tv8 for URL https://www.tv8.com.tr/canli-yayin [utils.l10n][debug] Language code: en_US error: Unable to open URL: https://tv8-tb-live.ercdn.net/tv8-geo/playlist.m3u8?st=xxxx&e=xxxx (403 Client Error: Forbidden for url: https://tv8-tb-live.ercdn.net/tv8-geo/playlist.m3u8?st=xxxx&e=xxxx) ```
2022-10-14T19:37:02
streamlink/streamlink
4,892
streamlink__streamlink-4892
[ "4793", "4889" ]
e4166bcc12f59857c9c69895bcc155b314e40ae0
diff --git a/src/streamlink/plugins/tf1.py b/src/streamlink/plugins/tf1.py --- a/src/streamlink/plugins/tf1.py +++ b/src/streamlink/plugins/tf1.py @@ -1,6 +1,7 @@ """ $description French live TV channels from TF1 Group, including LCI and TF1. $url tf1.fr +$url tf1info.fr $url lci.fr $type live $region France @@ -10,57 +11,88 @@ import re from streamlink.plugin import Plugin, PluginError, pluginmatcher -from streamlink.plugin.api import useragents -from streamlink.stream.dash import DASHStream +from streamlink.plugin.api import useragents, validate from streamlink.stream.hls import HLSStream log = logging.getLogger(__name__) -@pluginmatcher(re.compile( - r"https?://(?:www\.)?(?:tf1\.fr/([\w-]+)/direct|(lci)\.fr/direct)/?" -)) +@pluginmatcher(re.compile(r""" + https?://(?:www\.)? + (?: + tf1\.fr/(?: + (?P<live>[\w-]+)/direct/? + | + stream/(?P<stream>[\w-]+) + ) + | + (?P<lci>tf1info|lci)\.fr/direct/? + ) +""", re.VERBOSE)) class TF1(Plugin): - api_url = "https://mediainfo.tf1.fr/mediainfocombo/{}?context=MYTF1&pver=4001000" + _URL_API = "https://mediainfo.tf1.fr/mediainfocombo/{channel_id}" - def api_call(self, channel, useragent=useragents.CHROME): - url = self.api_url.format(f"L_{channel.upper()}") - req = self.session.http.get(url, - headers={"User-Agent": useragent}) - return self.session.http.json(req) + def _get_channel(self): + if self.match["live"]: + channel = self.match["live"] + channel_id = f"L_{channel.upper()}" + elif self.match["lci"]: + channel = "LCI" + channel_id = "L_LCI" + elif self.match["stream"]: + channel = self.match["stream"] + channel_id = f"L_FAST_v2l-{channel}" + else: # pragma: no cover + raise PluginError("Invalid channel") - def get_stream_urls(self, channel): - for useragent in [useragents.CHROME, useragents.IPHONE_6]: - data = self.api_call(channel, useragent) + return channel, channel_id - if 'delivery' not in data or 'url' not in data['delivery']: - continue - - log.debug("Got {format} stream {url}".format(**data['delivery'])) - yield data['delivery']['format'], data['delivery']['url'] + def _api_call(self, channel_id): + return self.session.http.get( + self._URL_API.format(channel_id=channel_id), + params={ + "context": "MYTF1", + "pver": "4001000", + }, + headers={ + # forces HLS streams + "User-Agent": useragents.IPHONE, + }, + schema=validate.Schema( + validate.parse_json(), + { + "delivery": validate.any( + validate.all( + { + "code": 200, + "format": "hls", + "url": validate.url(), + }, + validate.union_get("code", "url"), + ), + validate.all( + { + "code": int, + "error": str, + }, + validate.union_get("code", "error"), + ), + ), + }, + validate.get("delivery"), + ), + ) def _get_streams(self): - m = self.match - if m: - channel = m.group(1) or m.group(2) - log.debug("Found channel {0}".format(channel)) - for sformat, url in self.get_stream_urls(channel): - try: - if sformat == "dash": - yield from DASHStream.parse_manifest( - self.session, - url, - headers={"User-Agent": useragents.CHROME} - ).items() - if sformat == "hls": - yield from HLSStream.parse_variant_playlist( - self.session, - url, - headers={"User-Agent": useragents.IPHONE}, - ).items() - except PluginError as e: - log.error("Could not open {0} stream".format(sformat)) - log.debug("Failed with error: {0}".format(e)) + channel, channel_id = self._get_channel() + log.debug(f"Found channel {channel} ({channel_id})") + + code, data = self._api_call(channel_id) + if code != 200: + log.error(data) + return + + return HLSStream.parse_variant_playlist(self.session, data) __plugin__ = TF1
diff --git a/tests/plugins/test_tf1.py b/tests/plugins/test_tf1.py --- a/tests/plugins/test_tf1.py +++ b/tests/plugins/test_tf1.py @@ -5,14 +5,19 @@ class TestPluginCanHandleUrlTF1(PluginCanHandleUrl): __plugin__ = TF1 - should_match = [ - "http://tf1.fr/tf1/direct/", - "http://tf1.fr/tfx/direct/", - "http://tf1.fr/tf1-series-films/direct/", - "http://lci.fr/direct", - "http://www.lci.fr/direct", - "http://tf1.fr/tmc/direct", - "http://tf1.fr/lci/direct", + should_match_groups = [ + ("https://tf1.fr/tf1/direct", {"live": "tf1"}), + ("https://www.tf1.fr/tf1/direct", {"live": "tf1"}), + ("https://www.tf1.fr/tfx/direct", {"live": "tfx"}), + ("https://www.tf1.fr/tf1-series-films/direct", {"live": "tf1-series-films"}), + + ("https://lci.fr/direct", {"lci": "lci"}), + ("https://www.lci.fr/direct", {"lci": "lci"}), + ("https://tf1info.fr/direct/", {"lci": "tf1info"}), + ("https://www.tf1info.fr/direct/", {"lci": "tf1info"}), + + ("https://www.tf1.fr/stream/chante-69061019", {"stream": "chante-69061019"}), + ("https://www.tf1.fr/stream/arsene-lupin-39652174", {"stream": "arsene-lupin-39652174"}), ] should_not_match = [
plugins.tf1: add support for "stream" VODs ### Checklist - [X] This is a plugin request and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin requests](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+request%22) ### Description Hi Team, French channel provider TF1 launched a new AVOD service called "stream" Maybe should be possible to upddate current plugin to support them. ### Input URLs https://www.tf1.fr/stream/chante-69061019 https://www.tf1.fr/stream/thriller-fiction-89242722 https://www.tf1.fr/stream/classique-fiction-83644275 and a lot more.... plugins.tf1 : add 'Stream' channels <!-- Thanks for opening a pull request! Before you continue, please make sure that you have read and understood the contribution guidelines, otherwise your changes may be rejected: https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink If possible, run the tests, perform code linting and build the documentation locally on your system first to avoid unnecessary build failures: https://streamlink.github.io/latest/developing.html#validating-changes Also don't forget to add a meaningful description of your changes, so that the reviewing process is as simple as possible for the maintainers. Thank you very much! --> Allow 'Stream' channels. No account needed
The URL presents itself like this:
https://stream.tf1.fr/v1/v2l/prd-v2l-*show name*-*programID*/dash-hd.mpd?Expires=*time in epoch*&Signature=*most certainly a mixture of server data + account*&Key-Pair-Id=*linked to signature*

You can get the link from there:
https://mediainfo.tf1.fr/mediainfocombo/L_FAST_v2l-*show name*-*programID*?pver=4022002&stream_utc_time=*get the time in utc format*&context=MYTF1&topDomain=unknown&platform=web&device=desktop&os=windows&osVersion=10.0&playerVersion=4.22.2&productName=mytf1&productVersion=2.39.7&browser=*user agent*&browserVersion=*user agent major version*

dash url is invalid (only works on TF1 website)
hls url is valid
no account needed (don't understand this label)

in progress https://github.com/streamlink/streamlink/pull/4889

> no account needed (don't understand this label)

Are you using a French IP address? When I added the label it was because I'd visited the site and it reported: `Pour regarder l’ensemble des replay et le direct sur MYTF1, il vous suffit de vous connecter !` (`To watch all replays and live on MYTF1, all you have to do is log in!`), but that was from a UK IP address.

Checking your updated plugin from a UK IP address:

```
$ streamlink -l debug 'https://www.tf1.fr/stream/chante-69061019'
[session][debug] Plugin tf1 is being overridden by /home/user/.local/share/streamlink/plugins/tf1.py
[cli][debug] OS: Linux-5.4.0-125-generic-x86_64-with-glibc2.29
[cli][debug] Python: 3.8.10
[cli][debug] Streamlink: 5.0.1+17.gbfcd3725
[cli][debug] Dependencies:
[cli][debug] isodate: 0.6.1
[cli][debug] lxml: 4.9.0
[cli][debug] pycountry: 22.3.5
[cli][debug] pycryptodome: 3.14.1
[cli][debug] PySocks: 1.7.1
[cli][debug] requests: 2.27.1
[cli][debug] websocket-client: 1.3.2
[cli][debug] importlib-metadata: 4.11.4
[cli][debug] Arguments:
[cli][debug] url=https://www.tf1.fr/stream/chante-69061019
[cli][debug] --loglevel=debug
[cli][debug] --player=mpv
[cli][info] Found matching plugin tf1 for URL https://www.tf1.fr/stream/chante-69061019
[plugins.tf1][debug] Found channel chante-69061019
error: No playable streams found on this URL: https://www.tf1.fr/stream/chante-69061019
```

So I guess it's due to an IP geo-restriction, which may or may not still be enforced even when logged into an account, but I won't check that as I don't really want to create an account just for that purpose. I'll remove the label.
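For a quick manual check, a stripped-down request could look like this — a sketch only; the `L_FAST_v2l-` ID prefix, the MYTF1 query parameters and the iPhone user agent (which forces HLS delivery) come from the discussion above, everything else is an assumption:

```py
import requests

slug = "chante-69061019"  # the part after /stream/ in the page URL
res = requests.get(
    f"https://mediainfo.tf1.fr/mediainfocombo/L_FAST_v2l-{slug}",
    params={"context": "MYTF1", "pver": "4001000"},
    headers={
        # an iPhone user agent makes the API return an HLS URL instead of DASH
        "User-Agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 13_2_3 like Mac OS X) AppleWebKit/605.1.15",
    },
)
delivery = res.json()["delivery"]
# code 200 comes with "url", anything else comes with "error" (e.g. geo-block)
print(delivery.get("code"), delivery.get("url") or delivery.get("error"))
```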
2022-10-18T12:32:13
streamlink/streamlink
4,905
streamlink__streamlink-4905
[ "4904" ]
2aecef7d7fb2c1686a4461ecf4cdb08955309839
diff --git a/src/streamlink/plugins/tvp.py b/src/streamlink/plugins/tvp.py --- a/src/streamlink/plugins/tvp.py +++ b/src/streamlink/plugins/tvp.py @@ -1,57 +1,138 @@ """ -$description Live TV channels from TVP, a Polish public, state-owned broadcaster. -$url tvpstream.vod.tvp.pl -$type live -$region Poland +$description Live TV channels and VODs from TVP, a Polish public, state-owned broadcaster. +$url stream.tvp.pl +$type live, vod +$notes Some VODs may be geo-restricted. Authentication is not supported. """ import logging import re -from streamlink.plugin import Plugin, PluginError, pluginmatcher +from streamlink.plugin import Plugin, pluginmatcher +from streamlink.plugin.api import validate from streamlink.stream.hls import HLSStream -from streamlink.stream.http import HTTPStream log = logging.getLogger(__name__) @pluginmatcher(re.compile( - r'https?://tvpstream\.vod\.tvp\.pl' + r""" + https?:// + (?: + (?:tvpstream\.vod|stream)\.tvp\.pl(?:/(?:\?channel_id=(?P<video_id>\d+))?)?$ + | + vod\.tvp\.pl/[^/]+/.+,(?P<vod_id>\d+)$ + ) + """, + re.VERBOSE, )) class TVP(Plugin): - player_url = 'https://www.tvp.pl/sess/tvplayer.php?object_id={0}&autoplay=true' + _URL_PLAYER = "https://stream.tvp.pl/sess/TVPlayer2/embed.php" + _URL_VOD = "https://vod.tvp.pl/api/products/{vod_id}/videos/playlist" - _stream_re = re.compile(r'''src:["'](?P<url>[^"']+\.(?:m3u8|mp4))["']''') - _video_id_re = re.compile(r'''class=["']tvp_player["'][^>]+data-video-id=["'](?P<video_id>\d+)["']''') + def _get_video_id(self): + return self.session.http.get( + self.url, + headers={ + # required, otherwise the next request for retrieving the HLS URL will be aborted by the server + "Connection": "close", + }, + schema=validate.Schema( + re.compile(r"window\.__channels\s*=\s*(?P<json>\[.+?])\s*;", re.DOTALL), + validate.none_or_all( + validate.get("json"), + validate.parse_json(), + [{ + "items": validate.none_or_all( + [{ + "video_id": int, + }], + ), + }], + validate.get((0, "items", 0, "video_id")), + ), + ), + ) - def get_embed_url(self): - res = self.session.http.get(self.url) + def _get_live(self, video_id): + video_id = video_id or self._get_video_id() + if not video_id: + log.error("Could not find video ID") + return - m = self._video_id_re.search(res.text) - if not m: - raise PluginError('Unable to find a video id') + log.debug(f"video ID: {video_id}") - video_id = m.group('video_id') - log.debug('Found video id: {0}'.format(video_id)) - return self.player_url.format(video_id) + return self.session.http.get( + self._URL_PLAYER, + params={ + "ID": video_id, + "autoPlay": "without_audio", + }, + headers={ + "Referer": self.url, + }, + schema=validate.Schema( + re.compile(r"window\.__api__\s*=\s*(?P<json>\{.+?})\s*;", re.DOTALL), + validate.get("json"), + validate.parse_json(), + { + "result": { + "content": { + "files": validate.all( + [{ + "type": str, + "url": validate.url(), + }], + validate.filter(lambda item: item["type"] == "hls"), + ), + }, + }, + }, + validate.get(("result", "content", "files", 0, "url")), + ), + ) + + def _get_vod(self, vod_id): + data = self.session.http.get( + self._URL_VOD.format(vod_id=vod_id), + params={ + "platform": "BROWSER", + "videoType": "MOVIE", + }, + acceptable_status=(200, 403), + schema=validate.Schema( + validate.parse_json(), + validate.any( + {"code": "GEOIP_FILTER_FAILED"}, + validate.all( + { + "sources": { + validate.optional("HLS"): [{ + "src": validate.url(), + }], + }, + }, + validate.get("sources"), + ), + ), + ), + ) + + if data.get("code") == "GEOIP_FILTER_FAILED": + 
log.error("The content is not available in your region") + return + + if data.get("HLS"): + return data["HLS"][0]["src"] def _get_streams(self): - embed_url = self.get_embed_url() - res = self.session.http.get(embed_url) - m = self._stream_re.findall(res.text) - if not m: - raise PluginError('Unable to find a stream url') - - streams = [] - for url in m: - log.debug('URL={0}'.format(url)) - if url.endswith('.m3u8'): - streams.extend(HLSStream.parse_variant_playlist(self.session, url, name_fmt="{pixels}_{bitrate}").items()) - - elif url.endswith('.mp4'): - streams.append(('vod', HTTPStream(self.session, url))) - - return streams + if self.match["vod_id"]: + hls_url = self._get_vod(self.match["vod_id"]) + else: + hls_url = self._get_live(self.match["video_id"]) + + if hls_url: + return HLSStream.parse_variant_playlist(self.session, hls_url) __plugin__ = TVP
diff --git a/tests/plugins/test_tvp.py b/tests/plugins/test_tvp.py --- a/tests/plugins/test_tvp.py +++ b/tests/plugins/test_tvp.py @@ -5,12 +5,31 @@ class TestPluginCanHandleUrlTVP(PluginCanHandleUrl): __plugin__ = TVP - should_match = [ - 'http://tvpstream.vod.tvp.pl/?channel_id=14327511', - 'http://tvpstream.vod.tvp.pl/?channel_id=1455', + should_match_groups = [ + # live + ("https://stream.tvp.pl", {}), + ("https://stream.tvp.pl/", {}), + ("https://stream.tvp.pl/?channel_id=63759349", {"video_id": "63759349"}), + ("https://stream.tvp.pl/?channel_id=14812849", {"video_id": "14812849"}), + # old live URLs + ("https://tvpstream.vod.tvp.pl", {}), + ("https://tvpstream.vod.tvp.pl/", {}), + ("https://tvpstream.vod.tvp.pl/?channel_id=63759349", {"video_id": "63759349"}), + ("https://tvpstream.vod.tvp.pl/?channel_id=14812849", {"video_id": "14812849"}), + + # VOD + ( + "https://vod.tvp.pl/filmy-dokumentalne,163/krolowa-wladczyni-i-matka,284734", + {"vod_id": "284734"}, + ), + # VOD episode + ( + "https://vod.tvp.pl/programy,88/z-davidem-attenborough-dokola-swiata-odcinki,284703/odcinek-2,S01E02,319220", + {"vod_id": "319220"}, + ), ] should_not_match = [ - 'http://tvp.pl/', - 'http://vod.tvp.pl/', + "https://tvp.pl/", + "https://vod.tvp.pl/", ]
plugins.tvp: Adapt to new version of streaming portal ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description TVP has just today released a new version of their portal. The new URL is ```stream.tvp.pl```. ### Debug log ```text $ streamlink tvpstream.vod.tvp.pl --loglevel=debug [cli][debug] OS: Linux-5.15.0-48-generic-x86_64-with-glibc2.35 [cli][debug] Python: 3.10.6 [cli][debug] Streamlink: 5.0.1 [cli][debug] Dependencies: [cli][debug] isodate: 0.6.1 [cli][debug] lxml: 4.8.0 [cli][debug] pycountry: 20.7.3 [cli][debug] pycryptodome: 3.14.1 [cli][debug] PySocks: 1.7.1 [cli][debug] requests: 2.27.1 [cli][debug] websocket-client: 1.2.3 [cli][debug] importlib-metadata: 4.6.4 [cli][debug] Arguments: [cli][debug] url=tvpstream.vod.tvp.pl [cli][debug] stream=['720p_2400k'] [cli][debug] --loglevel=debug [cli][info] Found matching plugin tvp for URL tvpstream.vod.tvp.pl error: Unable to find a video id ```
2022-10-26T13:01:40
streamlink/streamlink
4,910
streamlink__streamlink-4910
[ "4909" ]
37dc0b20292d70ecb520783182598f97cd5f6de0
diff --git a/src/streamlink/plugins/dailymotion.py b/src/streamlink/plugins/dailymotion.py --- a/src/streamlink/plugins/dailymotion.py +++ b/src/streamlink/plugins/dailymotion.py @@ -45,7 +45,9 @@ def _get_streams_from_media(self, media_id): "error": {"title": str}, }, { - "owner.username": str, + "owner": { + "username": str, + }, "title": str, "qualities": { str: [{ @@ -67,7 +69,7 @@ def _get_streams_from_media(self, media_id): return self.id = media_id - self.author = media["owner.username"] + self.author = media["owner"]["username"] self.title = media["title"] for quality, streams in media["qualities"].items():
plugins.dailymotion: Stopped working, ValidationError ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest build from the master branch ### Description Dailymotion plugin stopped working some days ago. Now unable to open any video or live from the site. ### Debug log ```text streamlink -l debug https://www.dailymotion.com/video/x2lefik best [cli][debug] OS: Windows 10 [cli][debug] Python: 3.10.8 [cli][debug] Streamlink: 5.0.1+21.g2aecef7 [cli][debug] Dependencies: [cli][debug] isodate: 0.6.1 [cli][debug] lxml: 4.9.1 [cli][debug] pycountry: 22.3.5 [cli][debug] pycryptodome: 3.15.0 [cli][debug] PySocks: 1.7.1 [cli][debug] requests: 2.28.1 [cli][debug] websocket-client: 1.4.1 [cli][debug] Arguments: [cli][debug] url=https://www.dailymotion.com/video/x2lefik [cli][debug] stream=['best'] [cli][debug] --loglevel=debug [cli][debug] --ffmpeg-ffmpeg=ffmpeg.exe [cli][info] Found matching plugin dailymotion for URL https://www.dailymotion.com/video/x2lefik [plugins.dailymotion][debug] Found media ID: x2lefik error: Unable to validate response text: ValidationError(AnySchema): ValidationError(dict): Key 'error' not found in <{'url': 'https://www.dailymotion.com/video/x2lefik', 'p...> ValidationError(dict): Key 'error' not found in <{'url': 'https://www.dailymotion.com/video/x2lefik', 'p...> ValidationError(dict): Key 'owner.username' not found in <{'url': 'https://www.dailymotion.com/video/x2lefik', 'p...> ```
2022-10-28T09:22:45
streamlink/streamlink
4,919
streamlink__streamlink-4919
[ "4918" ]
76769e3f8733e542dbaf0591713d6de180687dc8
diff --git a/src/streamlink/plugins/bloomberg.py b/src/streamlink/plugins/bloomberg.py --- a/src/streamlink/plugins/bloomberg.py +++ b/src/streamlink/plugins/bloomberg.py @@ -102,17 +102,7 @@ def _get_vod_streams(self, data): return secureStreams or streams def _get_streams(self): - self.session.http.headers.update({ - "authority": "www.bloomberg.com", - "upgrade-insecure-requests": "1", - "dnt": "1", - "accept": ";".join([ - "text/html,application/xhtml+xml,application/xml", - "q=0.9,image/webp,image/apng,*/*", - "q=0.8,application/signed-exchange", - "v=b3" - ]) - }) + del self.session.http.headers["Accept-Encoding"] try: data = self.session.http.get(self.url, schema=validate.Schema(
plugins.bloomberg: Always get Could not Find JSON data error from today ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description Getting [plugins.bloomberg][error] Could not find JSON data. Invalid URL or bot protection... consistently. Used to be able to solve this buy trying VPN or a different network. This does not help now. My guess is that Bloomberg updated something on their side and the plugin is now broken? Or Bloomberg just update their "are you a robot" check? Tried changing User-Agent header manually and it did not help. Can someone point out what is the issue here? ### Debug log ```text PS C:\Users\***> streamlink -l debug https://www.bloomberg.com/live/us [cli][debug] OS: Windows 10 [cli][debug] Python: 3.10.8 [cli][debug] Streamlink: 5.0.1 [cli][debug] Dependencies: [cli][debug] isodate: 0.6.1 [cli][debug] lxml: 4.9.1 [cli][debug] pycountry: 22.3.5 [cli][debug] pycryptodome: 3.15.0 [cli][debug] PySocks: 1.7.1 [cli][debug] requests: 2.28.1 [cli][debug] websocket-client: 1.4.1 [cli][debug] Arguments: [cli][debug] url=https://www.bloomberg.com/live/us [cli][debug] --loglevel=debug [cli][info] Found matching plugin bloomberg for URL https://www.bloomberg.com/live/us [plugins.bloomberg][error] Could not find JSON data. Invalid URL or bot protection... error: No playable streams found on this URL: https://www.bloomberg.com/live/us ```
2022-11-02T19:56:50
streamlink/streamlink
4,942
streamlink__streamlink-4942
[ "4106" ]
ed88bcaac0809f321319532c9d4fd91aacaac164
diff --git a/src/streamlink/plugins/twitch.py b/src/streamlink/plugins/twitch.py --- a/src/streamlink/plugins/twitch.py +++ b/src/streamlink/plugins/twitch.py @@ -10,7 +10,7 @@ import logging import re import sys -from datetime import datetime +from datetime import datetime, timedelta from random import random from typing import List, NamedTuple, Optional from urllib.parse import urlparse @@ -19,7 +19,7 @@ from streamlink.plugin import Plugin, pluginargument, pluginmatcher from streamlink.plugin.api import validate from streamlink.stream.hls import HLSStream, HLSStreamReader, HLSStreamWorker, HLSStreamWriter -from streamlink.stream.hls_playlist import ByteRange, ExtInf, Key, M3U8, M3U8Parser, Map, load as load_hls_playlist +from streamlink.stream.hls_playlist import ByteRange, DateRange, ExtInf, Key, M3U8, M3U8Parser, Map, load as load_hls_playlist from streamlink.stream.http import HTTPStream from streamlink.utils.args import keyvalue from streamlink.utils.parse import parse_json, parse_qsd @@ -70,27 +70,31 @@ def parse_tag_ext_x_twitch_prefetch(self, value): # This is better than using the duration of the last segment when regular segment durations vary a lot. # In low latency mode, the playlist reload time is the duration of the last segment. duration = last.duration if last.prefetch else sum(segment.duration for segment in segments) / float(len(segments)) - segments.append(last._replace( + # Use the last duration for extrapolating the start time of the prefetch segment, which is needed for checking + # whether it is an ad segment and matches the parsed date ranges or not + date = last.date + timedelta(seconds=last.duration) + ad = self._is_segment_ad(date) + segment = last._replace( uri=self.uri(value), duration=duration, - prefetch=True - )) + title=None, + discontinuity=self.state.pop("discontinuity", False), + date=date, + ad=ad, + prefetch=True, + ) + segments.append(segment) def parse_tag_ext_x_daterange(self, value): super().parse_tag_ext_x_daterange(value) daterange = self.m3u8.dateranges[-1] - is_ad = ( - daterange.classname == "twitch-stitched-ad" - or str(daterange.id or "").startswith("stitched-ad-") - or any(attr_key.startswith("X-TV-TWITCH-AD-") for attr_key in daterange.x.keys()) - ) - if is_ad: + if self._is_daterange_ad(daterange): self.m3u8.dateranges_ads.append(daterange) def get_segment(self, uri: str) -> TwitchSegment: # type: ignore[override] extinf: ExtInf = self.state.pop("extinf", None) or ExtInf(0, None) date = self.state.pop("date", None) - ad = any(self.m3u8.is_date_in_daterange(date, daterange) for daterange in self.m3u8.dateranges_ads) + ad = self._is_segment_ad(date, extinf.title) return TwitchSegment( uri=uri, @@ -102,7 +106,21 @@ def get_segment(self, uri: str) -> TwitchSegment: # type: ignore[override] date=date, map=self.state.get("map"), ad=ad, - prefetch=False + prefetch=False, + ) + + def _is_segment_ad(self, date: datetime, title: Optional[str] = None) -> bool: + return ( + title is not None and "Amazon" in title + or any(self.m3u8.is_date_in_daterange(date, daterange) for daterange in self.m3u8.dateranges_ads) + ) + + @staticmethod + def _is_daterange_ad(daterange: DateRange) -> bool: + return ( + daterange.classname == "twitch-stitched-ad" + or str(daterange.id or "").startswith("stitched-ad-") + or any(attr_key.startswith("X-TV-TWITCH-AD-") for attr_key in daterange.x.keys()) )
diff --git a/tests/plugins/test_twitch.py b/tests/plugins/test_twitch.py --- a/tests/plugins/test_twitch.py +++ b/tests/plugins/test_twitch.py @@ -328,6 +328,37 @@ def test_hls_low_latency_has_prefetch_disable_ads_has_preroll(self, mock_log): call("Waiting for pre-roll ads to finish, be patient") ]) + @patch("streamlink.plugins.twitch.log") + def test_hls_low_latency_has_prefetch_disable_ads_no_preroll_with_prefetch_ads(self, mock_log): + # segments 3-6 are ads + ads = TagDateRangeAd(start=DATETIME_BASE + timedelta(seconds=3), duration=4) + thread, segments = self.subject([ + # regular stream data with prefetch segments + Playlist(0, [Segment(0), Segment(1), SegmentPrefetch(2), SegmentPrefetch(3)]), + # three prefetch segments, one regular (2) and two ads (3 and 4) + Playlist(1, [Segment(1), SegmentPrefetch(2), ads, SegmentPrefetch(3), SegmentPrefetch(4)]), + # all prefetch segments are gone once regular prefetch segments have shifted + Playlist(2, [Segment(2), ads, Segment(3), Segment(4), Segment(5)]), + # still no prefetch segments while ads are playing + Playlist(3, [ads, Segment(3), Segment(4), Segment(5), Segment(6)]), + # new prefetch segments on the first regular segment occurrence + Playlist(4, [ads, Segment(4), Segment(5), Segment(6), Segment(7), SegmentPrefetch(8), SegmentPrefetch(9)]), + Playlist(5, [ads, Segment(5), Segment(6), Segment(7), Segment(8), SegmentPrefetch(9), SegmentPrefetch(10)]), + Playlist(6, [ads, Segment(6), Segment(7), Segment(8), Segment(9), SegmentPrefetch(10), SegmentPrefetch(11)]), + Playlist(7, [Segment(7), Segment(8), Segment(9), Segment(10), SegmentPrefetch(11), SegmentPrefetch(12)], end=True), + ], disable_ads=True, low_latency=True) + + self.await_write(11) + content = self.await_read(read_all=True) + self.assertEqual( + content, + self.content(segments, cond=lambda s: 2 <= s.num <= 3 or 7 <= s.num) + ) + self.assertEqual(mock_log.info.mock_calls, [ + call("Will skip ad segments"), + call("Low latency streaming (HLS live edge: 2)"), + ]) + @patch("streamlink.plugins.twitch.log") def test_hls_low_latency_no_prefetch_disable_ads_has_preroll(self, mock_log): daterange = TagDateRangeAd(duration=4)
plugins.twitch: purple screen doesn't get filtered out correctly (embedded ads) ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest build from the master branch ### Description **Embedded ads meta-thread here: #3210** ---- Twitch has, as expected, made new changes to their embedded ads system after their source code has been leaked a few weeks ago. New access token request headers were added in #4086, but this, as expected as well, stopped working too, at least for non-preroll ads as far as I can tell. The `--twitch-disable-ads` parameter still seems to be able to filter out ads, but there's one HLS segment with the purple screen which doesn't get caught by it, so the purple screen appears just before the stream output stops for filtering out the ads. It's possible that some timestamps are set differently now or that they are using different values in the metadata for annotating the ad segments. To be able to fix this, we need to know the actual HLS playlist contents when the embedded ads start. ### Debug log ```text - ```
Not sure how the purple screen makes it into the output when filtering is enabled. I've added some debugging info for the parsed segments, and I can't see anything wrong in the ad detection itself. This is the relevant part of the playlist content when a midroll ad appears (some IDs and tokens replaced): ```m3u #EXT-X-DATERANGE:ID="SOME-ID",CLASS="twitch-assignment",START-DATE="2021-10-19T21:25:23.502Z",END-ON-NEXT=YES,X-TV-TWITCH-SERVING-ID="SOME-ID",X-TV-TWITCH-NODE="video-edge-c1d600.fra02",X-TV-TWITCH-CLUSTER="fra02" #EXT-X-PROGRAM-DATE-TIME:2021-10-19T21:25:23.502Z #EXTINF:2.000,live https://video-edge-c1d600.fra02.abs.hls.ttvnw.net/v1/segment/... #EXT-X-DATERANGE:ID="stitched-ad-0123456789-0123456789",CLASS="twitch-stitched-ad",START-DATE="2021-10-19T21:25:25.502Z",DURATION=29.134,X-TV-TWITCH-AD-COMMERCIAL-ID="SOME-ID",X-TV-TWITCH-AD-CLICK-TRACKING-URL="https://example.com",X-TV-TWITCH-AD-POD-LENGTH="1",X-TV-TWITCH-AD-AD-FORMAT="standard_video_ad",X-TV-TWITCH-AD-RADS-TOKEN="SOME-TOKEN",X-TV-TWITCH-AD-HIDE-AD-OVERLAY="1",X-TV-TWITCH-AD-URL="https://twitch.tv/CHANNEL",X-TV-TWITCH-AD-LOUDNESS="-10.980000",X-TV-TWITCH-AD-ROLL-TYPE="MIDROLL",X-TV-TWITCH-AD-AD-SESSION-ID="SOME-ID",X-TV-TWITCH-AD-CLICK-BEACON-ID="clickDropBeacon",X-TV-TWITCH-AD-CREATIVE-ID="2474283100405",X-TV-TWITCH-AD-LINE-ITEM-ID="SOME-ID",X-TV-TWITCH-AD-POD-POSITION="0",X-TV-TWITCH-AD-VLM="1" #EXT-X-DATERANGE:ID="source-0123456789",CLASS="twitch-stream-source",START-DATE="2021-10-19T21:25:25.502Z",END-ON-NEXT=YES,X-TV-TWITCH-STREAM-SOURCE="Amazon|2474283100405" #EXT-X-DATERANGE:ID="quartile-0123456789-0",CLASS="twitch-ad-quartile",START-DATE="2021-10-19T21:25:25.502Z",DURATION=2.000,X-TV-TWITCH-AD-QUARTILE="0" #EXT-X-DISCONTINUITY #EXT-X-PROGRAM-DATE-TIME:2021-10-19T21:25:25.502Z #EXTINF:2.000,Amazon|2474283100405 https://video-edge-c1d600.fra02.abs.hls.ttvnw.net/v1/segment/... #EXT-X-PROGRAM-DATE-TIME:2021-10-19T21:25:27.502Z #EXTINF:2.000,Amazon|2474283100405 https://video-edge-c1d600.fra02.abs.hls.ttvnw.net/v1/segment/... ``` `2021-10-19T21:25:25.502Z` is the time of the X-DATERANGE tag when the ad starts, which matches the first ad segment and its X-PROGRAM-DATE-TIME value. 
And this is the debug log output of the segment creation: ``` [stream.hls][debug] Reloading playlist [plugins.twitch][debug] Segment info: ad=false ExtInf(duration=2.0, title='live') [plugins.twitch][debug] Segment info: ad=false ExtInf(duration=2.0, title='live') [plugins.twitch][debug] Segment info: ad=false ExtInf(duration=2.0, title='live') [plugins.twitch][debug] Segment info: ad=false ExtInf(duration=2.0, title='live') [plugins.twitch][debug] Segment info: ad=false ExtInf(duration=2.0, title='live') [plugins.twitch][debug] Segment info: ad=false ExtInf(duration=2.0, title='live') [plugins.twitch][debug] Segment info: ad=false ExtInf(duration=2.0, title='live') [plugins.twitch][debug] Segment info: ad=false ExtInf(duration=2.0, title='live') [plugins.twitch][debug] Segment info: ad=false ExtInf(duration=2.0, title='live') [plugins.twitch][debug] Segment info: ad=false ExtInf(duration=2.0, title='live') [plugins.twitch][debug] Segment info: ad=false ExtInf(duration=2.0, title='live') [plugins.twitch][debug] Segment info: ad=false ExtInf(duration=2.0, title='live') [plugins.twitch][debug] Segment info: ad=false ExtInf(duration=2.0, title='live') [plugins.twitch][debug] Segment info: ad=false ExtInf(duration=2.0, title='live') [plugins.twitch][debug] Segment info: ad=false ExtInf(duration=2.0, title='live') [plugins.twitch][debug] Segment info: ad=true ExtInf(duration=2.0, title='Amazon|2474283100405') [plugins.twitch][debug] Segment info: ad=true ExtInf(duration=2.0, title='Amazon|2474283100405') [stream.hls][debug] Adding segment 329 to queue [stream.hls][info] Filtering out segments and pausing stream output ``` This shows that the segment's `ad` attribute gets correctly set to True for embedded ads. Filtering depends on that attribute, which is fully tested: - https://github.com/streamlink/streamlink/blob/765f969dfd6348a5d07c442c8980ea674bf6035f/src/streamlink/plugins/twitch.py#L112-L114 - https://github.com/streamlink/streamlink/blob/765f969dfd6348a5d07c442c8980ea674bf6035f/src/streamlink/stream/hls.py#L158-L171 - https://github.com/streamlink/streamlink/blob/765f969dfd6348a5d07c442c8980ea674bf6035f/tests/stream/test_hls_filtered.py Something else seems to have changed now and the midroll ads filtering doesn't stop and it keeps filtering indefinitely. This happened twice to me today now. I haven't had a look at this yet, but it's possible that something related to the daterange values has changed in Twitch's HLS playlists. The switch from playerType="embed" to "site" is the reason why new version is having issues but not older versions. Edit: Can confirm I get no midroll ads with "embed" (local hunk revert). This only affects the access token, which determines whether you're seeing ads or not. Both access token request parameters cause ads, but I feel like the switch from `embed` to `site` in #4156 is causing more ads now, so I think we should revert that. It also seems like `embed` doesn't cause any midroll ads. However, this thread is about ads filtering, and the current access token request parameter value shows that this isn't working 100% reliably anymore, so a fix is needed here as well. I've been observing HLS streams on Twitch for a bit but haven't been able to find anything yet. My gut feeling says that the issue with the stuck filtering is caused by an `END-ON-NEXT` daterange tag, which is currently unsupported. It might also be due to an invalid playlist reload time calculation. 
i fixed this by using a vpn > i fixed this by using a vpn This is completely unrelated to the topic of this thread. https://streamlink.github.io/latest/cli/plugins/twitch.html#embedded-ads I've had another look at the issue and it's now clear to me why Twitch's midroll ads (if they occur - I haven't seen one in a year or so) don't get filtered out properly. The issue is caused by the low-latency and segment prefetch implementation, which isn't solved in an ideal way. The prefetching works like this: there are regular HLS segments with time and duration metadata, and there are two prefetch URLs added to the playlist if the stream is a low latency stream. Streamlink currently clones the last regular segment and replaces its URL for each available prefetch URL and then extends the playlist by appending those prefetch segments. Since prefetch items don't include any metadata because it is unknown ahead of time, the metadata of the last regular segment gets re-used when appending prefetch segments. Streamlink already guesses the duration of the appended prefetch segments so that the playlist refresh times can be as low as possible, as the refresh time is determined by the duration of the last segment. However, it is currently not re-calculating the time of the prefetch segments, which means that the ad-calculation is wrong too, so those prefetch segments will never get filtered out unless the last regular segment was detected as an ad. But since Streamlink is extending the playlist with prefetch segments, the last regular segment is always a couple of seconds behind. Whether Twitch includes ads in the prefetch data is the big question here, and if they do, then this is the cause of this issue. A proper prefetch implementation would be not cloning the last regular segment and instead only downloading and caching the prefetch data, and using the cached data on the next playlist refresh for the regular segment that matches the sequence number and/or URL of the cached data. I have already successfully rewritten this locally and experimented with it a bit, but unfortunately it looks like it's adding a delay to the stream output, because it has to wait for the next playlist refresh first in order to use the cached prefetch data. This means that by using this prefetch implementation, there will always be a delay of the duration of the last regular segment, which is about two seconds. I will take a look at re-calculating the time and ad status of prefetch segments with the current implementation. This won't change the low latency stuff and will hopefully fix this issue.
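To make the extrapolation idea above concrete, here is a minimal, self-contained sketch (not the actual plugin code; `Segment`, `DateRange` and the helper names are illustrative assumptions): the prefetch segment's start time is derived from the last regular segment's date plus its duration, and only then is it checked against the parsed ad dateranges.

```python
from dataclasses import dataclass, replace
from datetime import datetime, timedelta, timezone
from typing import List


@dataclass(frozen=True)
class DateRange:
    # illustrative stand-in for an EXT-X-DATERANGE ad annotation
    start: datetime
    duration: float


@dataclass(frozen=True)
class Segment:
    uri: str
    date: datetime
    duration: float
    ad: bool = False
    prefetch: bool = False


def is_date_in_daterange(date: datetime, daterange: DateRange) -> bool:
    return daterange.start <= date < daterange.start + timedelta(seconds=daterange.duration)


def append_prefetch(segments: List[Segment], ad_dateranges: List[DateRange], uri: str) -> List[Segment]:
    """Clone the last regular segment for a prefetch URI, but extrapolate its
    start time instead of re-using the old one, so the ad check stays correct."""
    last = segments[-1]
    date = last.date + timedelta(seconds=last.duration)
    ad = any(is_date_in_daterange(date, dr) for dr in ad_dateranges)
    return segments + [replace(last, uri=uri, date=date, ad=ad, prefetch=True)]


if __name__ == "__main__":
    t0 = datetime(2021, 10, 19, 21, 25, 23, tzinfo=timezone.utc)
    segments = [Segment("live1.ts", t0, 2.0)]
    # an ad daterange starting right after the last regular segment ends
    ads = [DateRange(t0 + timedelta(seconds=2), 30.0)]
    segments = append_prefetch(segments, ads, "prefetch1.ts")
    print(segments[-1].ad)  # True: the extrapolated prefetch start falls inside the ad daterange
```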
2022-11-10T21:42:58
streamlink/streamlink
4950
streamlink__streamlink-4950
[ "4938" ]
ed88bcaac0809f321319532c9d4fd91aacaac164
diff --git a/src/streamlink/plugin/api/http_session.py b/src/streamlink/plugin/api/http_session.py --- a/src/streamlink/plugin/api/http_session.py +++ b/src/streamlink/plugin/api/http_session.py @@ -1,6 +1,6 @@ import ssl import time -from typing import Any, Callable, Dict, List, Pattern, Tuple +from typing import Any, Dict, Pattern, Tuple import requests.adapters # noinspection PyPackageRequirements @@ -13,9 +13,6 @@ from streamlink.utils.parse import parse_json, parse_xml -urllib3_version = tuple(map(int, urllib3.__version__.split(".")[:3])) - - class _HTTPResponse(urllib3.response.HTTPResponse): def __init__(self, *args, **kwargs): # Always enforce content length validation! @@ -42,8 +39,8 @@ def __init__(self, *args, **kwargs): requests.adapters.HTTPResponse = _HTTPResponse # type: ignore[misc] -# Never convert percent-encoded characters to uppercase in urllib3>=1.25.4. -# This is required for sites which compare request URLs byte for byte and return different responses depending on that. +# Never convert percent-encoded characters to uppercase in urllib3>=1.25.8. +# This is required for sites which compare request URLs byte by byte and return different responses depending on that. # Older versions of urllib3 are not compatible with this override and will always convert to uppercase characters. # # https://datatracker.ietf.org/doc/html/rfc3986#section-2.1 @@ -53,31 +50,17 @@ def __init__(self, *args, **kwargs): # > octets, they are equivalent. For consistency, URI producers and # > normalizers should use uppercase hexadecimal digits for all percent- # > encodings. -if urllib3_version >= (1, 25, 4): - class Urllib3UtilUrlPercentReOverride: - _re_percent_encoding: Pattern = urllib3.util.url.PERCENT_RE # type: ignore[attr-defined] - - @classmethod - def _num_percent_encodings(cls, string) -> int: - return len(cls._re_percent_encoding.findall(string)) - - # urllib3>=1.25.8 - # https://github.com/urllib3/urllib3/blame/1.25.8/src/urllib3/util/url.py#L219-L227 - @classmethod - def subn(cls, repl: Callable, string: str) -> Tuple[str, int]: - return string, cls._num_percent_encodings(string) - - # urllib3>=1.25.4,<1.25.8 - # https://github.com/urllib3/urllib3/blame/1.25.4/src/urllib3/util/url.py#L218-L228 - @classmethod - def findall(cls, string: str) -> List[Any]: - class _List(list): - def __len__(self) -> int: - return cls._num_percent_encodings(string) - - return _List() - - urllib3.util.url.PERCENT_RE = Urllib3UtilUrlPercentReOverride # type: ignore[attr-defined] +class Urllib3UtilUrlPercentReOverride: + _re_percent_encoding: Pattern = urllib3.util.url.PERCENT_RE # type: ignore[attr-defined] + + # urllib3>=1.25.8 + # https://github.com/urllib3/urllib3/blame/1.25.8/src/urllib3/util/url.py#L219-L227 + @classmethod + def subn(cls, repl: Any, string: str, count: Any = None) -> Tuple[str, int]: + return string, len(cls._re_percent_encoding.findall(string)) + + +urllib3.util.url.PERCENT_RE = Urllib3UtilUrlPercentReOverride # type: ignore[attr-defined] # requests.Request.__init__ keywords, except for "hooks"
diff --git a/tests/test_api_http_session.py b/tests/test_api_http_session.py --- a/tests/test_api_http_session.py +++ b/tests/test_api_http_session.py @@ -5,11 +5,10 @@ import requests from streamlink.exceptions import PluginError -from streamlink.plugin.api.http_session import HTTPSession, urllib3_version +from streamlink.plugin.api.http_session import HTTPSession from streamlink.plugin.api.useragents import FIREFOX [email protected](urllib3_version < (1, 25, 4), reason="test only applicable on urllib3 >=1.25.4") class TestUrllib3Overrides: @pytest.fixture(scope="class") def httpsession(self) -> HTTPSession:
stream.hls: Exception in HLSStreamWriter ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description This started happening in the last 24h on twitch streams and is probably related to the the other twitch issue posted. The problem here appears on most streams i try to play with **--twitch-disable-ads** set. Without it it works fine. (I removed the last part of the log, the part after _https://video-weaver_, because my IP was shown and maybe something more I don't know about. If that's not OK feel free to close this.) ### Debug log ```text $ streamlink https://www.twitch.tv/alanzoka 360p --twitch-disable-hosting --twitch-disable-ads --loglevel debug [cli][debug] OS: Linux-5.4.0-131-generic-x86_64-with-glibc2.29 [cli][debug] Python: 3.8.10 [cli][debug] Streamlink: 5.0.1 [cli][debug] Dependencies: [cli][debug] isodate: 0.6.0 [cli][debug] lxml: 4.6.4 [cli][debug] pycountry: 20.7.3 [cli][debug] pycryptodome: 3.10.1 [cli][debug] PySocks: 1.7.1 [cli][debug] requests: 2.26.0 [cli][debug] websocket-client: 1.2.1 [cli][debug] Arguments: [cli][debug] url=https://www.twitch.tv/alanzoka [cli][debug] stream=['360p'] [cli][debug] --loglevel=debug [cli][debug] --twitch-disable-ads=True [cli][debug] --twitch-disable-hosting=True [cli][info] Found matching plugin twitch for URL https://www.twitch.tv/alanzoka [plugins.twitch][debug] Getting live HLS streams for alanzoka [plugins.twitch][debug] {'adblock': False, 'geoblock_reason': '', 'hide_ads': False, 'server_ads': True, 'show_ads': True} [utils.l10n][debug] Language code: en_US [cli][info] Available streams: audio_only, 160p (worst), 360p, 480p, 720p60, 1080p60 (best) [cli][info] Opening stream: 360p (hls) [cli][info] Starting player: /usr/bin/vlc [plugins.twitch][info] Will skip ad segments [stream.hls][debug] Reloading playlist [cli][debug] Pre-buffering 8192 bytes [plugins.twitch][info] Waiting for pre-roll ads to finish, be patient [stream.hls][debug] First Sequence: 0; Last Sequence: 3 [stream.hls][debug] Start offset: 0; Duration: None; Start Sequence: 1; End Sequence: None [stream.hls][debug] Adding segment 1 to queue [stream.hls][debug] Adding segment 2 to queue [stream.hls][debug] Adding segment 3 to queue Exception in thread Thread-TwitchHLSStreamWriter: Traceback (most recent call last): File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner self.run() File "/home/user/.local/lib/python3.8/site-packages/streamlink/stream/segmented.py", line 209, in run self.write(segment, result, *data) File "/home/user/.local/lib/python3.8/site-packages/streamlink/stream/hls.py", line 217, in write result.raw.drain_conn() AttributeError: '_HTTPResponse' object has no attribute 'drain_conn' [stream.hls][debug] Reloading playlist [stream.hls][debug] Adding segment 4 to queue [stream.hls][debug] Adding segment 5 to queue [stream.hls][debug] Reloading playlist [stream.hls][debug] Adding segment 6 to queue [stream.hls][debug] Adding segment 7 to queue [stream.hls][debug] Reloading playlist [stream.hls][debug] Adding segment 8 to queue [stream.hls][debug] 
Adding segment 9 to queue [stream.hls][debug] Reloading playlist [stream.hls][debug] Adding segment 10 to queue [stream.hls][debug] Adding segment 11 to queue [stream.hls][debug] Reloading playlist [stream.hls][debug] Adding segment 12 to queue [stream.hls][debug] Adding segment 13 to queue [stream.hls][debug] Adding segment 14 to queue [stream.hls][debug] Reloading playlist [stream.hls][debug] Adding segment 15 to queue [stream.hls][debug] Adding segment 16 to queue [stream.hls][debug] Adding segment 17 to queue [stream.hls][debug] Reloading playlist [stream.hls][debug] Adding segment 18 to queue [stream.hls][debug] Adding segment 19 to queue [stream.hls][debug] Adding segment 20 to queue [stream.hls][debug] Reloading playlist [stream.hls][debug] Adding segment 21 to queue [stream.hls][debug] Reloading playlist [stream.hls][debug] Adding segment 22 to queue [stream.segmented][debug] Closing worker thread [stream.segmented][debug] Closing writer thread [cli][error] Try 1/1: Could not open stream <TwitchHLSStream ['hls', 'https://video-weaver... ------REDACTED------- tried 1 times, exiting [cli][info] Closing currently open stream... ```
> ``` > Exception in thread Thread-TwitchHLSStreamWriter: > Traceback (most recent call last): > File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner > self.run() > File "/home/user/.local/lib/python3.8/site-packages/streamlink/stream/segmented.py", line 209, in run > self.write(segment, result, *data) > File "/home/user/.local/lib/python3.8/site-packages/streamlink/stream/hls.py", line 217, in write > result.raw.drain_conn() > AttributeError: '_HTTPResponse' object has no attribute 'drain_conn' > ``` Which version of `urllib3` do you have installed? Streamlink requires `requests >=2.26.0,<3`, which itself requires `urllib3 >=1.21.1,<1.27` on all supported versions: https://github.com/streamlink/streamlink/blob/5.0.1/setup.cfg#L48 https://github.com/psf/requests/blob/v2.26.0/setup.py#L48 `urllib3.HTTPResponse.drain_conn()` was apparently only added in its `1.26.0` release: https://github.com/urllib3/urllib3/commit/29b214a129883301f91ae4a74fd7ef2958cbf7b0 Your install is missing this method, which means it's an outdated version. We either need to set a version requirement for urllib3 or backport the method body. My urllib3 is Version: 1.25.8. Is that something that comes installed with the python version of the OS install? I'm on Linux Mint 20.3 so that's what it comes installed with if that's the case. Any reason why it started happening now, because I've watched several different streams most days regularly and have never got it before? `--twitch-disable-ads` triggers a different code path in the `HLSStreamWriter` when writing or filtering out HLS segments. The writer was updated a while ago with a bugfix to clean up stalled HTTP requests in the HTTP connection pool when filtering. To do this, a method from `urllib3` gets called, which was only added in `1.26.0`. While `1.26.0` was released more than two years ago, it's still supported by the specified version ranges of Streamlink and the `requests` library, which defines `urllib3` as a transitive dependency. According to your log output, you have installed Streamlink via pip in your local user's python environment (via the `--user` flag, or implicitly installed there) at `~/.local/`. If `urllib3` is not part of that user environment, then it'll be looked up by the Python interpreter in your system's python environment at `/usr/share/`. Depending on your distro, this can be outdated. You should however be able to install or update it in your user environment. Just run `pip install --user --upgrade urllib3` or install all the latest dependencies of Streamlink via `pip install --user --upgrade --upgrade-strategy=eager streamlink`. However, it's always better to install python projects in separate environments via the `virtualenv` library, so you don't risk introducing dependency conflicts with other applications: https://streamlink.github.io/install.html#virtual-environment The reason why you're only seeing this issue now is that Twitch simply didn't embed any ads for Streamlink to filter out. Now that ads are embedded once again, the filtering code path is used.
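Purely to illustrate the version mismatch described above (this is not the upstream fix), a defensive caller could guard the `drain_conn()` call and approximate it on urllib3 releases older than 1.26.0, where the method does not exist:

```python
from urllib3.response import HTTPResponse


def drain_response(response: HTTPResponse) -> None:
    """Discard any unread body so the connection can be returned to the pool.

    drain_conn() only exists on urllib3 >= 1.26.0; on older releases, reading
    the remaining data and releasing the connection is a rough stand-in
    (assumption for illustration only).
    """
    if hasattr(response, "drain_conn"):
        response.drain_conn()
        return
    try:
        response.read(decode_content=False)
    finally:
        response.release_conn()
```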
2022-11-11T15:14:34
streamlink/streamlink
4954
streamlink__streamlink-4954
[ "4948" ]
ed88bcaac0809f321319532c9d4fd91aacaac164
diff --git a/src/streamlink/plugins/vtvgo.py b/src/streamlink/plugins/vtvgo.py --- a/src/streamlink/plugins/vtvgo.py +++ b/src/streamlink/plugins/vtvgo.py @@ -21,6 +21,9 @@ class VTVgo(Plugin): AJAX_URL = "https://vtvgo.vn/ajax-get-stream" def _get_streams(self): + # get cookies + self.session.http.get("https://vtvgo.vn/") + self.session.http.headers.update({ "Origin": "https://vtvgo.vn", "Referer": self.url,
plugins.vtvgo: '403 Client Error: Forbidden for url: ...' ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest build from the master branch ### Description The vtv plugin has stopped working, and I think vtv may have changed its website. The url it returns is not accessible. ### Debug log ```text streamlink "https://vtvgo.vn/xem-truc-tuyen-kenh-vtv3-3.html" --loglevel=debug [cli][info] streamlink is running as root! Be careful! [cli][debug] OS: Linux-4.15.0-189-generic-x86_64-with-glibc2.27 [cli][debug] Python: 3.9.13 [cli][debug] Streamlink: 5.0.1+33.ged88bcaa [cli][debug] Dependencies: [cli][debug] isodate: 0.6.0 [cli][debug] lxml: 4.8.0 [cli][debug] pycountry: 17.5.14 [cli][debug] pycryptodome: 3.14.1 [cli][debug] PySocks: 1.6.5 [cli][debug] requests: 2.27.1 [cli][debug] websocket-client: 1.3.2 [cli][debug] Arguments: [cli][debug] url=https://vtvgo.vn/xem-truc-tuyen-kenh-vtv3-3.html [cli][debug] --loglevel=debug [cli][info] Found matching plugin vtvgo for URL https://vtvgo.vn/xem-truc-tuyen-kenh-vtv3-3.html error: Unable to open URL: https://vtvgo.vn/xem-truc-tuyen-kenh-vtv3-3.html (403 Client Error: Forbidden for url: https://vtvgo.vn/xem-truc-tuyen-kenh-vtv3-3.html) ```
2022-11-11T17:29:23
streamlink/streamlink
4994
streamlink__streamlink-4994
[ "4992" ]
f51bff07f31796ccc42e22b1aadd7a45c7a80caf
diff --git a/src/streamlink/plugins/svtplay.py b/src/streamlink/plugins/svtplay.py --- a/src/streamlink/plugins/svtplay.py +++ b/src/streamlink/plugins/svtplay.py @@ -13,114 +13,145 @@ from streamlink.plugin.api import validate from streamlink.stream.dash import DASHStream from streamlink.stream.ffmpegmux import MuxedStream +from streamlink.stream.hls import HLSStream from streamlink.stream.http import HTTPStream log = logging.getLogger(__name__) @pluginmatcher(re.compile( - r'https?://(?:www\.)?svtplay\.se(/(kanaler/)?.*)' + r"https?://(?:www\.)?svtplay\.se/(?P<live>kanaler/)?" )) @pluginargument( "mux-subtitles", is_global=True, ) class SVTPlay(Plugin): - api_url = 'https://api.svt.se/videoplayer-api/video/{0}' - - latest_episode_url_re = re.compile(r''' - data-rt="top-area-play-button"\s+href="(?P<url>[^"]+)" - ''', re.VERBOSE) - - live_id_re = re.compile(r'.*/(?P<live_id>[^?]+)') - - _video_schema = validate.Schema({ - validate.optional('programTitle'): validate.text, - validate.optional('episodeTitle'): validate.text, - 'videoReferences': [{ - 'url': validate.url(), - 'format': validate.text, - }], - validate.optional('subtitleReferences'): [{ - 'url': validate.url(), - 'format': validate.text, - }], - }) - - def _set_metadata(self, data, category): - if 'programTitle' in data: - self.author = data['programTitle'] - - self.category = category - - if 'episodeTitle' in data: - self.title = data['episodeTitle'] - - def _get_live(self, path): - match = self.live_id_re.search(path) - if match is None: + _URL_API_VIDEO = "https://api.svt.se/videoplayer-api/video/{item}" + _MAP_CHANNEL_NAMES = { + "svt1": "ch-svt1", + "svt2": "ch-svt2", + "svtbarn": "ch-barnkanalen", + "kunskapskanalen": "ch-kunskapskanalen", + "svt24": "ch-svt24", + } + + def _api_call(self, item): + _schema_items = validate.all( + [ + validate.all( + { + "format": str, + "url": validate.url(), + }, + validate.union_get("format", "url"), + ), + ], + validate.transform(dict), + ) + + self.author, self.title, videos, subtitles = self.session.http.get( + self._URL_API_VIDEO.format(item=item), + schema=validate.Schema( + validate.parse_json(), + { + validate.optional("programTitle"): str, + validate.optional("episodeTitle"): str, + "videoReferences": _schema_items, + validate.optional("subtitleReferences"): _schema_items, + }, + validate.union_get( + "programTitle", + "episodeTitle", + "videoReferences", + "subtitleReferences", + ), + ), + ) + + return videos, subtitles + + def _get_live(self): + live_id = "/".join(urlparse(self.url).path.split("/")[2:]) + if not live_id: return - live_id = "ch-{0}".format(match.group('live_id')) - log.debug("Live ID={0}".format(live_id)) + live_id = self._MAP_CHANNEL_NAMES.get(live_id, f"ch-{live_id}") + log.debug(f"Live ID={live_id}") + self.category = "Live" + videos, subtitles = self._api_call(live_id) - res = self.session.http.get(self.api_url.format(live_id)) - api_data = self.session.http.json(res, schema=self._video_schema) - - self._set_metadata(api_data, 'Live') - - for playlist in api_data['videoReferences']: - if playlist['format'] == 'dashhbbtv': - yield from DASHStream.parse_manifest(self.session, playlist['url']).items() + return self._select_streams(videos, subtitles) def _get_vod(self): - vod_id = self._get_vod_id(self.url) + def get_vod_id(url): + return dict(parse_qsl(urlparse(url).query)).get("id") + + vod_id = get_vod_id(self.url) if vod_id is None: - res = self.session.http.get(self.url) - match = self.latest_episode_url_re.search(res.text) - if match is None: - return - 
vod_id = self._get_vod_id(match.group("url")) + vod_url = self.session.http.get(self.url, schema=validate.Schema( + validate.parse_html(), + validate.xml_xpath_string(".//*[@data-rt='top-area-play-button'][@href][1]/@href"), + )) + if vod_url: + vod_id = get_vod_id(vod_url) if vod_id is None: return - log.debug("VOD ID={0}".format(vod_id)) + log.debug(f"VOD ID={vod_id}") + self.category = "Live" + videos, subtitles = self._api_call(vod_id) - res = self.session.http.get(self.api_url.format(vod_id)) - api_data = self.session.http.json(res, schema=self._video_schema) + return self._select_streams(videos, subtitles) - self._set_metadata(api_data, 'VOD') + def _select_streams(self, videos, subtitles): + # the goal is to have streams with the widest range of qualities/substreams and highest bitrate at the top + stream_priorities = { + "dashhbbtv": DASHStream, # DASH AVC + "dash-hbbtv-avc": DASHStream, # DASH AVC + "dash-avc": DASHStream, # DASH AVC + "dash-full": DASHStream, # DASH AVC + "dash": DASHStream, # DASH AVC - substreams = {} - if 'subtitleReferences' in api_data: - for subtitle in api_data['subtitleReferences']: - if subtitle['format'] == 'webvtt': - log.debug("Subtitle={0}".format(subtitle['url'])) - substreams[subtitle['format']] = HTTPStream( - self.session, - subtitle['url'], - ) + "hlswebvtt": HLSStream, # HLS with subtitles + "hls-cmaf-live-vtt": HLSStream, # HLS with subtitles + "hls-ts-avc": HLSStream, # HLS with MPEG-TS + "hls-ts-full": HLSStream, # HLS with MPEG-TS + "hls": HLSStream, # HLS with MPEG-TS + "hls-cmaf-live": HLSStream, # HLS with fMP4 + "hls-cmaf-full": HLSStream, # HLS with fMP4 - for manifest in api_data['videoReferences']: - if manifest['format'] == 'dashhbbtv': - for q, s in DASHStream.parse_manifest(self.session, manifest['url']).items(): - if self.get_option('mux_subtitles') and substreams: - yield q, MuxedStream(self.session, s, subtitles=substreams) - else: - yield q, s + "dash-hbbtv-hevc": DASHStream, # DASH HEVC (low prio, because of potential user decoder issues) - def _get_vod_id(self, url): - qs = dict(parse_qsl(urlparse(url).query)) - return qs.get("id") + "hls-ts-lb-full": HLSStream, # low bitrate + "hls-cmaf-lb-full": HLSStream, # low bitrate + "dash-lb-full": DASHStream, # low bitrate + } - def _get_streams(self): - path, live = self.match.groups() - log.debug("Path={0}".format(path)) + for fmt, streamtype in stream_priorities.items(): + if fmt not in videos: + continue + + if streamtype is HLSStream: + return HLSStream.parse_variant_playlist(self.session, videos[fmt], name_fmt="{pixels}_{bitrate}") - if live: - return self._get_live(path) + if streamtype is DASHStream: + mux_subtitles = self.get_option("mux_subtitles") + subtitlestreams = {} + if mux_subtitles and "webvtt" in subtitles: + subtitlestreams["webvtt"] = HTTPStream(self.session, subtitles["webvtt"]) + + dash_streams = DASHStream.parse_manifest(self.session, videos[fmt]) + if not subtitlestreams: + return dash_streams + + return {q: MuxedStream(self.session, s, subtitles=subtitlestreams) for q, s in dash_streams.items()} + + def _get_streams(self): + if self.match["live"]: + return self._get_live() else: return self._get_vod()
plugins.svtplay: No streams found on svt1+svt2 ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description svt1 and svt2 don't work. Kunskapskanalen still works. ``` [plugins.svtplay][debug] Path=/kanaler/svt1 [plugins.svtplay][debug] Live ID=ch-svt1 error: No playable streams found on this URL: https://www.svtplay.se/kanaler/svt1 ``` One month ago it worked so something have changed since then. ### Debug log ```text streamlink -l debug https://www.svtplay.se/kanaler/svt1 [cli][debug] OS: Linux-5.15.56-v7l+-armv7l-with-glibc2.31 [cli][debug] Python: 3.9.2 [cli][debug] Streamlink: 5.0.1 [cli][debug] Dependencies: [cli][debug] PySocks: 1.7.1 [cli][debug] isodate: 0.6.0 [cli][debug] lxml: 4.6.3 [cli][debug] pycountry: 20.7.3 [cli][debug] pycryptodomex: 3.9.7 [cli][debug] requests: 2.25.1 [cli][debug] websocket-client: 0.57.0 [cli][debug] Arguments: [cli][debug] url=https://www.svtplay.se/kanaler/svt1 [cli][debug] --loglevel=debug [cli][info] Found matching plugin svtplay for URL https://www.svtplay.se/kanaler/svt1 [plugins.svtplay][debug] Path=/kanaler/svt1 [plugins.svtplay][debug] Live ID=ch-svt1 error: No playable streams found on this URL: https://www.svtplay.se/kanaler/svt1 Kunskapskanalen still works: streamlink -l debug https://www.svtplay.se/kanaler/kunskapskanalen [cli][debug] OS: Linux-5.15.56-v7l+-armv7l-with-glibc2.31 [cli][debug] Python: 3.9.2 [cli][debug] Streamlink: 5.0.1 [cli][debug] Dependencies: [cli][debug] PySocks: 1.7.1 [cli][debug] isodate: 0.6.0 [cli][debug] lxml: 4.6.3 [cli][debug] pycountry: 20.7.3 [cli][debug] pycryptodomex: 3.9.7 [cli][debug] requests: 2.25.1 [cli][debug] websocket-client: 0.57.0 [cli][debug] Arguments: [cli][debug] url=https://www.svtplay.se/kanaler/kunskapskanalen [cli][debug] --loglevel=debug [cli][info] Found matching plugin svtplay for URL https://www.svtplay.se/kanaler/kunskapskanalen [plugins.svtplay][debug] Path=/kanaler/kunskapskanalen [plugins.svtplay][debug] Live ID=ch-kunskapskanalen [utils.l10n][debug] Language code: en_GB [stream.dash][debug] Available languages for DASH audio streams: sv-x-tal, sv (using: sv) Available streams: 234p (worst), 360p, 540p, 720p_alt, 720p (best) ```
The plugin currently expects DASH streams for live content. The site's API however only returns HLS streams, with optional subtitles. This means that the plugin needs an update, and ideally a complete rewrite, because it's very much outdated. The content however is geo-restricted, which is why nobody has updated the plugin yet. I'm surprised I was able to access the API and see the JSON response. I can't access the HLS playlists though, so even if someone with a VPN takes a look at rewriting the plugin, a confirmation will be needed before merging. Btw, as a side note, you are not using the latest stable version, and the dependency versions of your currently used Streamlink version do not match Streamlink's requirements. For the SVT plugin though, this is not a problem. For others however and for other code part of Streamlink, it is. Not sure if you're using a system package of your distro's package manager, but if that's the case, then the packager should fix this. I seem to remember that when I did a rewrite/fix on this plugin (>2 years ago now), there were DASH and HLS streams. I used DASH at the time because there seemed to be (then) an issue with ffmpeg and muxing on HLS streams. Hopefully that's not the case now (or can be fixed if it is).
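As a rough sketch of the selection logic in the patch above — prefer a known DASH reference and otherwise fall back to the HLS references the API now returns — assuming `video_references` is the list of `{"format": ..., "url": ...}` dicts from the API response (the format names below are an illustrative subset, not the plugin's full priority table):

```python
from streamlink.stream.dash import DASHStream
from streamlink.stream.hls import HLSStream


def select_streams(session, video_references):
    # known formats mapped to the stream implementation that can play them,
    # ordered by preference
    priorities = {
        "dashhbbtv": DASHStream,
        "dash": DASHStream,
        "hls": HLSStream,
        "hls-cmaf-full": HLSStream,
    }
    urls = {ref["format"]: ref["url"] for ref in video_references}
    for fmt, streamtype in priorities.items():
        if fmt not in urls:
            continue
        if streamtype is DASHStream:
            return DASHStream.parse_manifest(session, urls[fmt])
        return HLSStream.parse_variant_playlist(session, urls[fmt])
    return {}
```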
2022-11-23T20:09:31
streamlink/streamlink
4997
streamlink__streamlink-4997
[ "4996" ]
29d1f3921e5421fd7cb2e1846350297ce96efb48
diff --git a/src/streamlink/plugins/tvp.py b/src/streamlink/plugins/tvp.py --- a/src/streamlink/plugins/tvp.py +++ b/src/streamlink/plugins/tvp.py @@ -7,10 +7,14 @@ import logging import re +from typing import List, Optional, Tuple +from urllib.parse import urlparse from streamlink.plugin import Plugin, pluginmatcher from streamlink.plugin.api import validate +from streamlink.stream.dash import DASHStream from streamlink.stream.hls import HLSStream +from streamlink.stream.http import HTTPStream log = logging.getLogger(__name__) @@ -19,7 +23,7 @@ r""" https?:// (?: - (?:tvpstream\.vod|stream)\.tvp\.pl(?:/(?:\?channel_id=(?P<video_id>\d+))?)?$ + (?:tvpstream\.vod|stream)\.tvp\.pl(?:/(?:\?channel_id=(?P<channel_id>\d+))?)?$ | vod\.tvp\.pl/[^/]+/.+,(?P<vod_id>\d+)$ ) @@ -30,8 +34,8 @@ class TVP(Plugin): _URL_PLAYER = "https://stream.tvp.pl/sess/TVPlayer2/embed.php" _URL_VOD = "https://vod.tvp.pl/api/products/{vod_id}/videos/playlist" - def _get_video_id(self): - return self.session.http.get( + def _get_video_id(self, channel_id: Optional[str]): + items: List[Tuple[int, int]] = self.session.http.get( self.url, headers={ # required, otherwise the next request for retrieving the HLS URL will be aborted by the server @@ -42,27 +46,42 @@ def _get_video_id(self): validate.none_or_all( validate.get("json"), validate.parse_json(), - [{ - "items": validate.none_or_all( - [{ - "video_id": int, - }], + [ + validate.all( + { + "id": int, + "items": validate.none_or_all( + [{ + "video_id": int, + }], + validate.get((0, "video_id")), + ), + }, + validate.union_get("id", "items"), ), - }], - validate.get((0, "items", 0, "video_id")), + ], ), ), ) - def _get_live(self, video_id): - video_id = video_id or self._get_video_id() + if channel_id is not None: + _channel_id = int(channel_id) + try: + return next(item[1] for item in items if item[0] == _channel_id) + except StopIteration: + pass + + return items[0][1] if items else None + + def _get_live(self, channel_id: Optional[str]): + video_id = self._get_video_id(channel_id) if not video_id: log.error("Could not find video ID") return log.debug(f"video ID: {video_id}") - return self.session.http.get( + streams: Optional[List[Tuple[str, str]]] = self.session.http.get( self._URL_PLAYER, params={ "ID": video_id, @@ -76,21 +95,46 @@ def _get_live(self, video_id): validate.get("json"), validate.parse_json(), { - "result": { - "content": { - "files": validate.all( - [{ - "type": str, - "url": validate.url(), - }], - validate.filter(lambda item: item["type"] == "hls"), - ), + "result": validate.none_or_all( + { + "content": { + "files": [ + validate.all( + { + "type": str, + "url": validate.url(), + }, + validate.union_get("type", "url"), + ), + ], + }, }, - }, + validate.get(("content", "files")), + ), }, - validate.get(("result", "content", "files", 0, "url")), + validate.get("result"), ), ) + if not streams: + return + + def get(items, condition): + return next((_url for _stype, _url in items if condition(_stype, urlparse(_url).path)), None) + + # prioritize HLSStream and get the first available stream + url = get(streams, lambda t, _: t == "hls") + if url: + return HLSStream.parse_variant_playlist(self.session, url) + + # fall back to DASHStream + url = get(streams, lambda t, p: t == "any_native" and p.endswith(".mpd")) + if url: + return DASHStream.parse_manifest(self.session, url) + + # fall back to HTTPStream + url = get(streams, lambda t, p: t == "any_native" and p.endswith(".mp4")) + if url: + return {"vod": HTTPStream(self.session, url)} def _get_vod(self, 
vod_id): data = self.session.http.get( @@ -123,16 +167,13 @@ def _get_vod(self, vod_id): return if data.get("HLS"): - return data["HLS"][0]["src"] + return HLSStream.parse_variant_playlist(self.session, data["HLS"][0]["src"]) def _get_streams(self): if self.match["vod_id"]: - hls_url = self._get_vod(self.match["vod_id"]) + return self._get_vod(self.match["vod_id"]) else: - hls_url = self._get_live(self.match["video_id"]) - - if hls_url: - return HLSStream.parse_variant_playlist(self.session, hls_url) + return self._get_live(self.match["channel_id"]) __plugin__ = TVP
diff --git a/tests/plugins/test_tvp.py b/tests/plugins/test_tvp.py --- a/tests/plugins/test_tvp.py +++ b/tests/plugins/test_tvp.py @@ -9,13 +9,13 @@ class TestPluginCanHandleUrlTVP(PluginCanHandleUrl): # live ("https://stream.tvp.pl", {}), ("https://stream.tvp.pl/", {}), - ("https://stream.tvp.pl/?channel_id=63759349", {"video_id": "63759349"}), - ("https://stream.tvp.pl/?channel_id=14812849", {"video_id": "14812849"}), + ("https://stream.tvp.pl/?channel_id=63759349", {"channel_id": "63759349"}), + ("https://stream.tvp.pl/?channel_id=14812849", {"channel_id": "14812849"}), # old live URLs ("https://tvpstream.vod.tvp.pl", {}), ("https://tvpstream.vod.tvp.pl/", {}), - ("https://tvpstream.vod.tvp.pl/?channel_id=63759349", {"video_id": "63759349"}), - ("https://tvpstream.vod.tvp.pl/?channel_id=14812849", {"video_id": "14812849"}), + ("https://tvpstream.vod.tvp.pl/?channel_id=63759349", {"channel_id": "63759349"}), + ("https://tvpstream.vod.tvp.pl/?channel_id=14812849", {"channel_id": "14812849"}), # VOD (
plugins.tvp: Unable to validate value of key 'result' ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest build from the master branch ### Description When trying TVP plugin an error occurs "error: Unable to validate response text: ValidationError(dict):" ### Debug log ```text [cli][info] streamlink is running as root! Be careful! [cli][debug] OS: Linux-5.4.0-132-generic-x86_64-with-glibc2.29 [cli][debug] Python: 3.8.10 [cli][debug] Streamlink: 5.1.1+2.g29d1f392 [cli][debug] Dependencies: [cli][debug] certifi: 2022.6.15 [cli][debug] isodate: 0.6.1 [cli][debug] lxml: 4.6.5 [cli][debug] pycountry: 22.3.5 [cli][debug] pycryptodome: 3.9.9 [cli][debug] PySocks: 1.6.8 [cli][debug] requests: 2.27.1 [cli][debug] urllib3: 1.26.12 [cli][debug] websocket-client: 1.3.2 [cli][debug] importlib-metadata: 1.5.0 [cli][debug] Arguments: [cli][debug] url=https://stream.tvp.pl/?channel_id=63759349 [cli][debug] stream=['best'] [cli][debug] --loglevel=debug [cli][info] Found matching plugin tvp for URL https://stream.tvp.pl/?channel_id=63759349 [plugins.tvp][debug] video ID: 63759349 error: Unable to validate response text: ValidationError(dict): Unable to validate value of key 'result' Context(type): Type of None should be dict, but is NoneType ```
> https://stream.tvp.pl/?channel_id=63759349 The player JSON data includes `null` for the requested ID. The ID that's being used by the plugin is the one from the URL. Their website however is not using this ID, it's using the first item of the channels JSON data that's embedded in the site of the input URL and this does only get read when no channel ID was provided by the user. When the plugin was rewritten by me last month in #4905, this issue did not come up, so I'm a bit surprised here. Let me have a look... Indeed I took the id from inside the html page (14812849) and it works, but from the url the 1455 does not work I changed the order of ```video_id = video_id or self._get_video_id()``` to ```video_id = self._get_video_id() or video_id``` to force it to retrieve the video id from the json and it works. EDIT : It gets always the first one so in reality it doesnt work The fix is to look up the ID in the channels JSON and return the first one if no ID could be found. I'll open a PR with a fix in a bit.
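A minimal sketch of that lookup, assuming `items` is a list of `(channel_id, video_id)` tuples parsed from the embedded channels JSON (the numbers in the usage lines are made up for illustration):

```python
from typing import List, Optional, Tuple


def pick_video_id(items: List[Tuple[int, int]], channel_id: Optional[str]) -> Optional[int]:
    if channel_id is not None:
        wanted = int(channel_id)
        for cid, video_id in items:
            if cid == wanted:
                return video_id
    # no match (or no channel ID given): fall back to the first channel, if any
    return items[0][1] if items else None


print(pick_video_id([(101, 2001), (102, 2002)], "102"))  # 2002
print(pick_video_id([(101, 2001), (102, 2002)], "999"))  # 2001 (fallback to first entry)
print(pick_video_id([], "999"))                          # None
```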
2022-11-24T02:03:42
streamlink/streamlink
5007
streamlink__streamlink-5007
[ "5006" ]
471f9efeb8db6385f9a1ba5aa08bfd21cb3778e6
diff --git a/src/streamlink/plugins/twitch.py b/src/streamlink/plugins/twitch.py --- a/src/streamlink/plugins/twitch.py +++ b/src/streamlink/plugins/twitch.py @@ -75,12 +75,15 @@ def parse_tag_ext_x_twitch_prefetch(self, value): # Use the last duration for extrapolating the start time of the prefetch segment, which is needed for checking # whether it is an ad segment and matches the parsed date ranges or not date = last.date + timedelta(seconds=last.duration) - ad = self._is_segment_ad(date) + # Don't pop() the discontinuity state in prefetch segments (at the bottom of the playlist) + discontinuity = self.state.get("discontinuity", False) + # Always treat prefetch segments after a discontinuity as ad segments + ad = discontinuity or self._is_segment_ad(date) segment = last._replace( uri=self.uri(value), duration=duration, title=None, - discontinuity=self.state.pop("discontinuity", False), + discontinuity=discontinuity, date=date, ad=ad, prefetch=True,
diff --git a/tests/plugins/test_twitch.py b/tests/plugins/test_twitch.py --- a/tests/plugins/test_twitch.py +++ b/tests/plugins/test_twitch.py @@ -330,34 +330,37 @@ def test_hls_low_latency_has_prefetch_disable_ads_has_preroll(self, mock_log): @patch("streamlink.plugins.twitch.log") def test_hls_low_latency_has_prefetch_disable_ads_no_preroll_with_prefetch_ads(self, mock_log): + # segment 1 has a shorter duration, to mess with the extrapolation of the prefetch start times # segments 3-6 are ads - ads = TagDateRangeAd(start=DATETIME_BASE + timedelta(seconds=3), duration=4) + Seg, Pre = Segment, SegmentPrefetch + ads = [ + Tag("EXT-X-DISCONTINUITY"), + TagDateRangeAd(start=DATETIME_BASE + timedelta(seconds=3), duration=4), + ] + # noinspection PyTypeChecker thread, segments = self.subject([ # regular stream data with prefetch segments - Playlist(0, [Segment(0), Segment(1), SegmentPrefetch(2), SegmentPrefetch(3)]), + Playlist(0, [Seg(0), Seg(1, duration=0.5), Pre(2), Pre(3)]), # three prefetch segments, one regular (2) and two ads (3 and 4) - Playlist(1, [Segment(1), SegmentPrefetch(2), ads, SegmentPrefetch(3), SegmentPrefetch(4)]), + Playlist(1, [Seg(1, duration=0.5), Pre(2)] + ads + [Pre(3), Pre(4)]), # all prefetch segments are gone once regular prefetch segments have shifted - Playlist(2, [Segment(2), ads, Segment(3), Segment(4), Segment(5)]), + Playlist(2, [Seg(2, duration=1.5)] + ads + [Seg(3), Seg(4), Seg(5)]), # still no prefetch segments while ads are playing - Playlist(3, [ads, Segment(3), Segment(4), Segment(5), Segment(6)]), + Playlist(3, ads + [Seg(3), Seg(4), Seg(5), Seg(6)]), # new prefetch segments on the first regular segment occurrence - Playlist(4, [ads, Segment(4), Segment(5), Segment(6), Segment(7), SegmentPrefetch(8), SegmentPrefetch(9)]), - Playlist(5, [ads, Segment(5), Segment(6), Segment(7), Segment(8), SegmentPrefetch(9), SegmentPrefetch(10)]), - Playlist(6, [ads, Segment(6), Segment(7), Segment(8), Segment(9), SegmentPrefetch(10), SegmentPrefetch(11)]), - Playlist(7, [Segment(7), Segment(8), Segment(9), Segment(10), SegmentPrefetch(11), SegmentPrefetch(12)], end=True), + Playlist(4, ads + [Seg(4), Seg(5), Seg(6), Seg(7), Pre(8), Pre(9)]), + Playlist(5, ads + [Seg(5), Seg(6), Seg(7), Seg(8), Pre(9), Pre(10)]), + Playlist(6, ads + [Seg(6), Seg(7), Seg(8), Seg(9), Pre(10), Pre(11)]), + Playlist(7, [Seg(7), Seg(8), Seg(9), Seg(10), Pre(11), Pre(12)], end=True), ], disable_ads=True, low_latency=True) self.await_write(11) content = self.await_read(read_all=True) - self.assertEqual( - content, - self.content(segments, cond=lambda s: 2 <= s.num <= 3 or 7 <= s.num) - ) - self.assertEqual(mock_log.info.mock_calls, [ + assert content == self.content(segments, cond=lambda s: 2 <= s.num <= 3 or 7 <= s.num) + assert mock_log.info.mock_calls == [ call("Will skip ad segments"), call("Low latency streaming (HLS live edge: 2)"), - ]) + ] @patch("streamlink.plugins.twitch.log") def test_hls_low_latency_no_prefetch_disable_ads_has_preroll(self, mock_log):
plugins.twitch: ad filtering via `--twitch-disable-ads` (partially) broken again (5.1.1) ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest build from the master branch ### Description There appears to be new change on Twitch's end which breaks the current implementation of the plugin's ad filtering via [`--twitch-disable-ads`](https://streamlink.github.io/cli.html#cmdoption-twitch-disable-ads), so the ads / "purple screen" gets incorrectly included in Streamlink's output. **Please read the meta thread before reading or commenting:** https://github.com/streamlink/streamlink/issues/4949 ---- This might be caused by an incorrect extrapolation of the start time of ads when there are more than two prefetch segments. I'll need to have a closer look and reproduce this in tests with the data from the actual HLS playlists. ### Debug log ```text [20:48:36.680641][plugins.twitch][info] Will skip ad segments [20:48:36.680696][plugins.twitch][info] Low latency streaming (HLS live edge: 2) [20:48:36.681417][stream.hls][debug] Reloading playlist [20:48:36.681535][cli][debug] Pre-buffering 8192 bytes [20:48:37.138190][stream.hls_playlist][all] #EXT-X-VERSION:3 [20:48:37.138292][stream.hls_playlist][all] #EXT-X-TARGETDURATION:6 [20:48:37.138344][stream.hls_playlist][all] #EXT-X-MEDIA-SEQUENCE:5760 [20:48:37.138386][stream.hls_playlist][all] #EXT-X-TWITCH-LIVE-SEQUENCE:5760 [20:48:37.138426][stream.hls_playlist][all] #EXT-X-TWITCH-ELAPSED-SECS:10040.433 [20:48:37.138463][stream.hls_playlist][all] #EXT-X-TWITCH-TOTAL-SECS:10074.084 [20:48:37.138500][stream.hls_playlist][all] #EXT-X-DATERANGE:ID="playlist-creation-1670010517",CLASS="timestamp",START-DATE="2022-12-02T11:48:37.128-08:00",END-ON-NEXT=YES,X-SERVER-TIME="1670010517.13" [20:48:37.142943][stream.hls_playlist][all] #EXT-X-DATERANGE:ID="playlist-session-1670010517",CLASS="twitch-session",START-DATE="2022-12-02T11:48:37.128-08:00",END-ON-NEXT=YES,X-TV-TWITCH-SESSIONID="8657890014260199662" [20:48:37.143064][stream.hls_playlist][all] #EXT-X-DATERANGE:ID="source-1670010487",CLASS="twitch-stream-source",START-DATE="2022-12-02T19:48:07.117Z",END-ON-NEXT=YES,X-TV-TWITCH-STREAM-SOURCE="live" [20:48:37.143167][stream.hls_playlist][all] #EXT-X-DATERANGE:ID="trigger-1670010487",CLASS="twitch-trigger",START-DATE="2022-12-02T19:48:07.117Z",END-ON-NEXT=YES,X-TV-TWITCH-TRIGGER-URL="https://video-weaver.fra05.hls.ttvnw.net/trigger/SOME-BASE64-TOKEN" [20:48:37.143278][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:07.117Z [20:48:37.143345][stream.hls_playlist][all] #EXTINF:2.000,live [20:48:37.143409][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:37.143489][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:09.117Z [20:48:37.143558][stream.hls_playlist][all] #EXTINF:1.750,live [20:48:37.143615][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:37.143689][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:10.867Z 
[20:48:37.143758][stream.hls_playlist][all] #EXTINF:2.000,live [20:48:37.143816][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:37.143888][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:12.867Z [20:48:37.143957][stream.hls_playlist][all] #EXTINF:2.000,live [20:48:37.144013][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:37.144079][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:14.867Z [20:48:37.144144][stream.hls_playlist][all] #EXTINF:2.000,live [20:48:37.144208][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:37.144275][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:16.867Z [20:48:37.144341][stream.hls_playlist][all] #EXTINF:2.000,live [20:48:37.144399][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:37.144466][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:18.867Z [20:48:37.144538][stream.hls_playlist][all] #EXTINF:1.800,live [20:48:37.144598][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:37.144666][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:20.667Z [20:48:37.144730][stream.hls_playlist][all] #EXTINF:1.283,live [20:48:37.144788][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:37.144868][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:21.950Z [20:48:37.144936][stream.hls_playlist][all] #EXTINF:2.350,live [20:48:37.144996][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:37.145063][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:24.300Z [20:48:37.145130][stream.hls_playlist][all] #EXTINF:1.184,live [20:48:37.145188][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:37.145257][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:25.484Z [20:48:37.145323][stream.hls_playlist][all] #EXTINF:2.000,live [20:48:37.145382][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:37.145450][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:27.484Z [20:48:37.145515][stream.hls_playlist][all] #EXTINF:2.000,live [20:48:37.145574][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:37.145641][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:29.484Z [20:48:37.145707][stream.hls_playlist][all] #EXTINF:2.000,live [20:48:37.145765][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:37.145838][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:31.484Z [20:48:37.145903][stream.hls_playlist][all] #EXTINF:1.033,live [20:48:37.145961][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:37.146026][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:32.517Z [20:48:37.146089][stream.hls_playlist][all] #EXTINF:1.600,live [20:48:37.146143][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:37.146212][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:34.117Z 
[20:48:37.146276][stream.hls_playlist][all] #EXTINF:2.217,live [20:48:37.146329][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:37.146403][stream.hls_playlist][all] #EXT-X-TWITCH-PREFETCH:https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:37.146500][stream.hls_playlist][all] #EXT-X-TWITCH-PREFETCH:https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:37.146632][stream.hls][debug] First Sequence: 5760; Last Sequence: 5777 [20:48:37.146681][stream.hls][debug] Start offset: 0; Duration: None; Start Sequence: 5776; End Sequence: None [20:48:37.146723][stream.hls][debug] Adding segment 5776 to queue [20:48:37.147975][stream.hls][debug] Adding segment 5777 to queue [20:48:37.303129][stream.hls][debug] Writing segment 5776 to output [20:48:37.305258][cli.output][debug] Opening subprocess: mpv [20:48:37.806454][cli][debug] Writing stream to output [20:48:38.087492][stream.hls][debug] Segment 5776 complete [20:48:38.133442][stream.hls][debug] Writing segment 5777 to output [20:48:38.975280][stream.hls][debug] Reloading playlist [20:48:39.026840][stream.hls_playlist][all] #EXT-X-VERSION:3 [20:48:39.026920][stream.hls_playlist][all] #EXT-X-TARGETDURATION:6 [20:48:39.026969][stream.hls_playlist][all] #EXT-X-MEDIA-SEQUENCE:5761 [20:48:39.027010][stream.hls_playlist][all] #EXT-X-TWITCH-LIVE-SEQUENCE:5761 [20:48:39.027049][stream.hls_playlist][all] #EXT-X-TWITCH-ELAPSED-SECS:10042.433 [20:48:39.027086][stream.hls_playlist][all] #EXT-X-TWITCH-TOTAL-SECS:10077.316 [20:48:39.027124][stream.hls_playlist][all] #EXT-X-DATERANGE:ID="playlist-creation-1670010517",CLASS="timestamp",START-DATE="2022-12-02T19:48:37.128Z",END-ON-NEXT=YES,X-SERVER-TIME="1670010517.13" [20:48:39.027235][stream.hls_playlist][all] #EXT-X-DATERANGE:ID="playlist-session-1670010517",CLASS="twitch-session",START-DATE="2022-12-02T19:48:37.128Z",END-ON-NEXT=YES,X-TV-TWITCH-SESSIONID="8657890014260199662" [20:48:39.027339][stream.hls_playlist][all] #EXT-X-DATERANGE:ID="source-1670010487",CLASS="twitch-stream-source",START-DATE="2022-12-02T19:48:07.117Z",END-ON-NEXT=YES,X-TV-TWITCH-STREAM-SOURCE="live" [20:48:39.027437][stream.hls_playlist][all] #EXT-X-DATERANGE:ID="trigger-1670010487",CLASS="twitch-trigger",START-DATE="2022-12-02T19:48:07.117Z",END-ON-NEXT=YES,X-TV-TWITCH-TRIGGER-URL="https://video-weaver.fra05.hls.ttvnw.net/trigger/SOME-BASE64-TOKEN" [20:48:39.027542][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:09.117Z [20:48:39.027608][stream.hls_playlist][all] #EXTINF:1.750,live [20:48:39.027669][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:39.027744][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:10.867Z [20:48:39.027810][stream.hls_playlist][all] #EXTINF:2.000,live [20:48:39.027874][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:39.027949][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:12.867Z [20:48:39.028015][stream.hls_playlist][all] #EXTINF:2.000,live [20:48:39.028069][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:39.028136][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:14.867Z [20:48:39.028207][stream.hls_playlist][all] #EXTINF:2.000,live [20:48:39.028262][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts 
[20:48:39.028328][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:16.867Z [20:48:39.028393][stream.hls_playlist][all] #EXTINF:2.000,live [20:48:39.028450][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:39.028521][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:18.867Z [20:48:39.028586][stream.hls_playlist][all] #EXTINF:1.800,live [20:48:39.028643][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:39.028710][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:20.667Z [20:48:39.028774][stream.hls_playlist][all] #EXTINF:1.283,live [20:48:39.028831][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:39.028890][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:21.950Z [20:48:39.028953][stream.hls_playlist][all] #EXTINF:2.350,live [20:48:39.029011][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:39.029070][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:24.300Z [20:48:39.029133][stream.hls_playlist][all] #EXTINF:1.184,live [20:48:39.029190][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:39.029249][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:25.484Z [20:48:39.029312][stream.hls_playlist][all] #EXTINF:2.000,live [20:48:39.029370][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:39.029429][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:27.484Z [20:48:39.029491][stream.hls_playlist][all] #EXTINF:2.000,live [20:48:39.029549][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:39.029609][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:29.484Z [20:48:39.029670][stream.hls_playlist][all] #EXTINF:2.000,live [20:48:39.029728][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:39.029788][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:31.484Z [20:48:39.029850][stream.hls_playlist][all] #EXTINF:1.033,live [20:48:39.029907][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:39.029967][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:32.517Z [20:48:39.030029][stream.hls_playlist][all] #EXTINF:1.600,live [20:48:39.030087][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:39.030147][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:34.117Z [20:48:39.030209][stream.hls_playlist][all] #EXTINF:2.217,live [20:48:39.030262][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:39.030326][stream.hls_playlist][all] #EXT-X-PROGRAM-DATE-TIME:2022-12-02T19:48:36.334Z [20:48:39.030390][stream.hls_playlist][all] #EXTINF:1.833,live [20:48:39.030445][stream.hls_playlist][all] https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:39.030513][stream.hls_playlist][all] #EXT-X-TWITCH-PREFETCH:https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:39.030596][stream.hls_playlist][all] 
#EXT-X-DATERANGE:ID="stitched-ad-1670010520-30168000000",CLASS="twitch-stitched-ad",START-DATE="2022-12-02T19:48:40.768Z",DURATION=30.168,X-TV-TWITCH-AD-CLICK-TRACKING-URL="https://example.com",X-TV-TWITCH-AD-LOUDNESS="-10.980000",X-TV-TWITCH-AD-POD-POSITION="0",X-TV-TWITCH-AD-ROLL-TYPE="MIDROLL",X-TV-TWITCH-AD-RADS-TOKEN="SOME-BASE64-TOKEN",X-TV-TWITCH-AD-CLICK-BEACON-ID="clickDropBeacon",X-TV-TWITCH-AD-CREATIVE-ID="SOME-ID",X-TV-TWITCH-AD-AD-FORMAT="standard_video_ad",X-TV-TWITCH-AD-COMMERCIAL-ID="SOME-COMMERCIAL-ID",X-TV-TWITCH-AD-POD-LENGTH="1",X-TV-TWITCH-AD-AD-SESSION-ID="SOME-ID",X-TV-TWITCH-AD-URL="https://help.twitch.tv/s/article/ad-experience-on-twitch",X-TV-TWITCH-AD-LINE-ITEM-ID="SOME-ID" [20:48:39.030723][stream.hls_playlist][all] #EXT-X-DATERANGE:ID="source-1670010520",CLASS="twitch-stream-source",START-DATE="2022-12-02T19:48:40.768Z",END-ON-NEXT=YES,X-TV-TWITCH-STREAM-SOURCE="Amazon|2474283100494" [20:48:39.030809][stream.hls_playlist][all] #EXT-X-DATERANGE:ID="quartile-1670010520-0",CLASS="twitch-ad-quartile",START-DATE="2022-12-02T19:48:40.768Z",DURATION=2.000,X-TV-TWITCH-AD-QUARTILE="0" [20:48:39.030893][stream.hls_playlist][all] #EXT-X-DISCONTINUITY [20:48:39.030950][stream.hls_playlist][all] #EXT-X-TWITCH-PREFETCH:https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:39.031039][stream.hls_playlist][all] #EXT-X-TWITCH-PREFETCH:https://video-edge-469c16.muc01.abs.hls.ttvnw.net/v1/segment/segment.ts [20:48:39.031171][stream.hls][debug] Adding segment 5778 to queue [20:48:39.031268][stream.hls][debug] Adding segment 5779 to queue [20:48:40.567511][stream.hls][debug] Segment 5777 complete [20:48:40.567671][stream.hls][debug] Writing segment 5778 to output [20:48:40.570485][stream.hls][debug] Segment 5778 complete [20:48:40.570615][stream.hls][debug] Discarding segment 5779 ```
The problem on this stream is that the regular segments have variable durations, so when prefetch segments are included, their durations are unknown and the start times of the following prefetch segments therefore have to be extrapolated. The ad segment is annotated, but due to the extrapolation, there can be an incorrect offset. So either the ad start time needs to be checked differently, or the included discontinuity tag needs to be used, which is currently not done by any of Streamlink's HLS implementations.
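As a rough illustration (not Streamlink's actual code), the drift can be reproduced in a few lines of Python; the timestamps and durations are sample values taken from the log above, and the assumed prefetch duration is a placeholder:

```python
from datetime import datetime, timedelta

# Last regular segment from the playlist above: EXT-X-PROGRAM-DATE-TIME + EXTINF
last_pdt = datetime(2022, 12, 2, 19, 48, 36, 334000)
last_duration = 1.833

# Prefetch segments carry no EXTINF, so a duration has to be assumed
# (for example the last known duration) to extrapolate their start times.
assumed_duration = last_duration
prefetch_starts = []
pdt = last_pdt + timedelta(seconds=last_duration)
for _ in range(2):  # two EXT-X-TWITCH-PREFETCH URIs follow
    prefetch_starts.append(pdt)
    pdt += timedelta(seconds=assumed_duration)

# The real durations in this playlist vary between roughly 1.0s and 2.4s, so the
# extrapolated start times drift, and comparing them against the ad daterange's
# START-DATE can tag the wrong segment as an ad or miss the ad segment entirely.
ad_start = datetime(2022, 12, 2, 19, 48, 40, 768000)
for start in prefetch_starts:
    print(start.isoformat(), "ad" if start >= ad_start else "content")
```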
2022-12-02T20:54:19
streamlink/streamlink
5,011
streamlink__streamlink-5011
[ "5009" ]
44c4a4c5dc63b7ca865294cd651ffea1f472fc00
diff --git a/src/streamlink/options.py b/src/streamlink/options.py --- a/src/streamlink/options.py +++ b/src/streamlink/options.py @@ -28,6 +28,9 @@ def __init__(self, defaults=None): def _normalise_dict(cls, src): return {_normalise_option_name(key): value for key, value in src.items()} + def clear(self): + self.options = self.defaults.copy() + def set(self, key, value): self.options[_normalise_option_name(key)] = value diff --git a/src/streamlink/plugins/twitch.py b/src/streamlink/plugins/twitch.py --- a/src/streamlink/plugins/twitch.py +++ b/src/streamlink/plugins/twitch.py @@ -8,7 +8,6 @@ """ import argparse -import json import logging import re import sys @@ -251,20 +250,23 @@ def video(self, video_id, **extra_params): class TwitchAPI: + CLIENT_ID = "kimne78kx3ncx6brgo4mv6wki5h1ko" + def __init__(self, session): self.session = session self.headers = { - "Client-ID": "kimne78kx3ncx6brgo4mv6wki5h1ko", + "Client-ID": self.CLIENT_ID, } self.headers.update(**dict(session.get_plugin_option("twitch", "api-header") or [])) self.access_token_params = dict(session.get_plugin_option("twitch", "access-token-param") or []) self.access_token_params.setdefault("playerType", "embed") - def call(self, data, schema=None): + def call(self, data, schema=None, **kwargs): res = self.session.http.post( "https://gql.twitch.tv/gql", - data=json.dumps(data), - headers=self.headers + json=data, + headers={**self.headers, **kwargs.pop("headers", {})}, + **kwargs, ) return self.session.http.json(res, schema=schema) @@ -421,18 +423,30 @@ def access_token(self, is_live, channel_or_vod): validate.union_get("signature", "value"), ) - return self.call(query, schema=validate.Schema( - {"data": validate.any( + return self.call(query, acceptable_status=(200, 400, 401, 403), schema=validate.Schema( + validate.any( validate.all( - {"streamPlaybackAccessToken": subschema}, - validate.get("streamPlaybackAccessToken") + {"error": str, "message": str}, + validate.union_get("error", "message"), + validate.transform(lambda data: ("error", *data)), ), validate.all( - {"videoPlaybackAccessToken": subschema}, - validate.get("videoPlaybackAccessToken") - ) - )}, - validate.get("data") + { + "data": validate.any( + validate.all( + {"streamPlaybackAccessToken": subschema}, + validate.get("streamPlaybackAccessToken"), + ), + validate.all( + {"videoPlaybackAccessToken": subschema}, + validate.get("videoPlaybackAccessToken"), + ), + ), + }, + validate.get("data"), + validate.transform(lambda data: ("token", *data)), + ), + ), )) def clips(self, clipname): @@ -616,7 +630,12 @@ def _get_metadata(self): def _access_token(self, is_live, channel_or_vod): try: - sig, token = self.api.access_token(is_live, channel_or_vod) + response, *data = self.api.access_token(is_live, channel_or_vod) + if response != "token": + error, message = data + log.error(f"{error or 'Error'}: {message or 'Unknown error'}") + raise PluginError + sig, token = data except (PluginError, TypeError): raise NoStreamsError(self.url)
diff --git a/tests/plugins/test_twitch.py b/tests/plugins/test_twitch.py --- a/tests/plugins/test_twitch.py +++ b/tests/plugins/test_twitch.py @@ -2,10 +2,12 @@ from datetime import datetime, timedelta from unittest.mock import MagicMock, call, patch +import pytest import requests_mock from streamlink import Streamlink -from streamlink.plugins.twitch import Twitch, TwitchHLSStream, TwitchHLSStreamReader, TwitchHLSStreamWriter +from streamlink.exceptions import NoStreamsError +from streamlink.plugins.twitch import Twitch, TwitchAPI, TwitchHLSStream, TwitchHLSStreamReader, TwitchHLSStreamWriter from tests.mixins.stream_hls import EventedHLSStreamWriter, Playlist, Segment as _Segment, Tag, TestMixinStreamHLS from tests.plugins import PluginCanHandleUrl from tests.resources import text @@ -389,6 +391,154 @@ def test_hls_low_latency_no_ads_reload_time(self): self.assertEqual(self.thread.reader.worker.playlist_reload_time, 23 / 3) +class TestTwitchAPIAccessToken: + @pytest.fixture + def plugin(self, request): + session = Streamlink() + for param in getattr(request, "param", {}): + session.set_plugin_option("twitch", *param) + yield Twitch(session, "https://twitch.tv/channelname") + Twitch.options.clear() + + @pytest.fixture + def mocker(self): + # The built-in requests_mock fixture is bad when trying to reference the following constants or classes + with requests_mock.Mocker() as mocker: + mocker.register_uri(requests_mock.ANY, requests_mock.ANY, exc=requests_mock.exceptions.InvalidRequest) + yield mocker + + @pytest.fixture + def mock(self, request, mocker: requests_mock.Mocker): + mock = mocker.post("https://gql.twitch.tv/gql", **getattr(request, "param", {"json": {}})) + yield mock + assert mock.called_once + payload = mock.last_request.json() # type: ignore[union-attr] + assert tuple(sorted(payload.keys())) == ("extensions", "operationName", "variables") + assert payload.get("operationName") == "PlaybackAccessToken" + assert payload.get("extensions") == { + "persistedQuery": { + "sha256Hash": "0828119ded1c13477966434e15800ff57ddacf13ba1911c129dc2200705b0712", + "version": 1, + }, + } + + @pytest.fixture + def assert_live(self, mock): + yield + assert mock.last_request.json().get("variables") == { # type: ignore[union-attr] + "isLive": True, + "isVod": False, + "login": "channelname", + "vodID": "", + "playerType": "embed", + } + + @pytest.fixture + def assert_vod(self, mock): + yield + assert mock.last_request.json().get("variables") == { # type: ignore[union-attr] + "isLive": False, + "isVod": True, + "login": "", + "vodID": "vodid", + "playerType": "embed", + } + + @pytest.mark.parametrize("plugin,exp_headers,exp_variables", [ + ( + [], + {"Client-ID": TwitchAPI.CLIENT_ID}, + { + "isLive": True, + "isVod": False, + "login": "channelname", + "vodID": "", + "playerType": "embed", + }, + ), + ( + [ + ("api-header", [ + ("Authorization", "invalid data"), + ("Authorization", "OAuth 0123456789abcdefghijklmnopqrst"), + ]), + ("access-token-param", [ + ("specialVariable", "specialValue"), + ("playerType", "frontpage"), + ]), + ], + { + "Client-ID": TwitchAPI.CLIENT_ID, + "Authorization": "OAuth 0123456789abcdefghijklmnopqrst", + }, + { + "isLive": True, + "isVod": False, + "login": "channelname", + "vodID": "", + "playerType": "frontpage", + "specialVariable": "specialValue", + }, + ), + ], indirect=["plugin"]) + def test_plugin_options(self, plugin: Twitch, mock: requests_mock.Mocker, exp_headers: dict, exp_variables: dict): + with pytest.raises(NoStreamsError): + plugin._access_token(True, 
"channelname") + requestheaders = dict(mock.last_request._request.headers) # type: ignore[union-attr] + for header in plugin.session.http.headers.keys(): + del requestheaders[header] + del requestheaders["Content-Type"] + del requestheaders["Content-Length"] + assert requestheaders == exp_headers + assert mock.last_request.json().get("variables") == exp_variables # type: ignore[union-attr] + + @pytest.mark.parametrize("mock", [{ + "json": {"data": {"streamPlaybackAccessToken": {"value": '{"channel":"foo"}', "signature": "sig"}}}, + }], indirect=True) + def test_live_success(self, plugin: Twitch, mock: requests_mock.Mocker, assert_live): + data = plugin._access_token(True, "channelname") + assert data == ("sig", '{"channel":"foo"}', []) + + @pytest.mark.parametrize("mock", [{ + "json": {"data": {"streamPlaybackAccessToken": None}}, + }], indirect=True) + def test_live_failure(self, plugin: Twitch, mock: requests_mock.Mocker, assert_live): + with pytest.raises(NoStreamsError): + plugin._access_token(True, "channelname") + + @pytest.mark.parametrize("mock", [{ + "json": {"data": {"videoPlaybackAccessToken": {"value": '{"channel":"foo"}', "signature": "sig"}}}, + }], indirect=True) + def test_vod_success(self, plugin: Twitch, mock: requests_mock.Mocker, assert_vod): + data = plugin._access_token(False, "vodid") + assert data == ("sig", '{"channel":"foo"}', []) + + @pytest.mark.parametrize("mock", [{ + "json": {"data": {"videoPlaybackAccessToken": None}}, + }], indirect=True) + def test_vod_failure(self, plugin: Twitch, mock: requests_mock.Mocker, assert_vod): + with pytest.raises(NoStreamsError): + plugin._access_token(False, "vodid") + + @pytest.mark.parametrize("plugin,mock", [ + ( + [("api-header", [("Authorization", "OAuth invalid-token")])], + { + "status_code": 401, + "json": {"error": "Unauthorized", "status": 401, "message": "The \"Authorization\" token is invalid."}, + }, + ), + ], indirect=True) + def test_auth_failure(self, caplog: pytest.LogCaptureFixture, plugin: Twitch, mock: requests_mock.Mocker, assert_live): + with pytest.raises(NoStreamsError) as cm: + plugin._access_token(True, "channelname") + assert str(cm.value) == "No streams found on this URL: https://twitch.tv/channelname" + assert mock.last_request._request.headers["Authorization"] == "OAuth invalid-token" # type: ignore[union-attr] + assert [(record.levelname, record.module, record.message) for record in caplog.records] == [ + ("error", "twitch", "Unauthorized: The \"Authorization\" token is invalid."), + ] + + class TestTwitchMetadata(unittest.TestCase): def setUp(self): self.mock = requests_mock.Mocker()
plugins.twitch: Show error message when the provided OAuth token is not valid ### Checklist - [X] This is a feature request and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin requests](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22feature+request%22) ### Description I'd imagine the playlist server returning some kind of 403 error or similar when the authorization fails, and if this is the case, could there be an error message related to tokens instead of "No playable streams found"? Would help with scripting a lot. Thanks.
Twitch authentication is not an officially supported feature, otherwise we'd add back the dedicated `--twitch-oauth-token` CLI parameter for that. This is because of how the GQL OAuth token is obtained by the user and how there's no proper integrated authentication flow, like there was on kraken or how there is on helix. Twitch's GQL API returns status code 200 if the `Authorization` header contains random garbage data when requesting a `PlaybackAccessToken`. If the header value however is in the format `OAuth token` and `token` is invalid, only then the response will be 401 with an appropriate JSON payload. The Twitch plugin would need to handle this separately and therefore implement partial authentication support, which I don't think is a good idea. If you want to automate unofficial authentication, then interact with Twitch's GQL API yourself and find a query where you can validate the OAuth token. ```sh $ curl \ -D - \ -H 'Client-ID: kimne78kx3ncx6brgo4mv6wki5h1ko' \ -H 'Authorization: invalid-data' \ -d '{"operationName": "PlaybackAccessToken", "extensions": {"persistedQuery": {"version": 1, "sha256Hash": "0828119ded1c13477966434e15800ff57ddacf13ba1911c129dc2200705b0712"}}, "variables": {"isLive": true, "login": "...", "isVod": false, "vodID": "", "playerType": "embed"}}' \ 'https://gql.twitch.tv/gql' HTTP/1.1 200 OK Connection: keep-alive Content-Length: 952 Content-Type: application/json Access-Control-Allow-Origin: * Date: Sat, 03 Dec 2022 13:07:38 GMT {"data":{"streamPlaybackAccessToken":{"value":"...","signature":"...","__typename":"PlaybackAccessToken"}},"extensions":{"durationMilliseconds":57,"operationName":"PlaybackAccessToken","requestID":"..."}} ``` ```sh $ curl \ -D - \ -H 'Client-ID: kimne78kx3ncx6brgo4mv6wki5h1ko' \ -H 'Authorization: OAuth invalid-token' \ -d '{"operationName": "PlaybackAccessToken", "extensions": {"persistedQuery": {"version": 1, "sha256Hash": "0828119ded1c13477966434e15800ff57ddacf13ba1911c129dc2200705b0712"}}, "variables": {"isLive": true, "login": "...", "isVod": false, "vodID": "", "playerType": "embed"}}' \ 'https://gql.twitch.tv/gql' HTTP/1.1 401 Unauthorized Connection: keep-alive Content-Length: 89 Content-Type: application/json Access-Control-Allow-Origin: * Date: Sat, 03 Dec 2022 13:07:53 GMT {"error":"Unauthorized","status":401,"message":"The \"Authorization\" token is invalid."} ``` That is a good idea, thank you. Btw, we can add generic access token failure handling. I'll submit a PR later. I have already implemented it with some added tests.
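For anyone scripting against this outside of Streamlink, the same distinction can be made in a few lines of Python. This is only a sketch based on the curl examples above; the channel name and token values are placeholders, and note that garbage Authorization data still yields a 200 with an anonymous token, as shown above:

```python
import requests

GQL_URL = "https://gql.twitch.tv/gql"
CLIENT_ID = "kimne78kx3ncx6brgo4mv6wki5h1ko"
PERSISTED_QUERY_HASH = "0828119ded1c13477966434e15800ff57ddacf13ba1911c129dc2200705b0712"


def playback_access_token(login, oauth_token=None):
    headers = {"Client-ID": CLIENT_ID}
    if oauth_token:
        headers["Authorization"] = f"OAuth {oauth_token}"
    query = {
        "operationName": "PlaybackAccessToken",
        "extensions": {"persistedQuery": {"version": 1, "sha256Hash": PERSISTED_QUERY_HASH}},
        "variables": {"isLive": True, "login": login, "isVod": False, "vodID": "", "playerType": "embed"},
    }
    data = requests.post(GQL_URL, json=query, headers=headers, timeout=10).json()
    if "error" in data:
        # An invalid "OAuth <token>" value yields 401 with {"error": ..., "status": ..., "message": ...}
        raise RuntimeError(f"{data['error']}: {data.get('message', 'Unknown error')}")
    return data["data"]["streamPlaybackAccessToken"]


# Example call (placeholder values):
# token = playback_access_token("somechannel", oauth_token="0123456789abcdefghijklmnopqrst")
```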
2022-12-03T23:46:25
streamlink/streamlink
5,023
streamlink__streamlink-5023
[ "5022" ]
68dad1059fa800c2064a474a2644d47bcbca06ce
diff --git a/src/streamlink/plugins/vtvgo.py b/src/streamlink/plugins/vtvgo.py --- a/src/streamlink/plugins/vtvgo.py +++ b/src/streamlink/plugins/vtvgo.py @@ -27,6 +27,7 @@ def _get_streams(self): self.session.http.headers.update({ "Origin": "https://vtvgo.vn", "Referer": self.url, + "Sec-Fetch-Site": "same-origin", "X-Requested-With": "XMLHttpRequest", })
plugins.vtvgo: '403 Client Error: Forbidden for url: ...' ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest build from the master branch ### Description Last month VtvGo added cookie requirements for the stream playlist, and now it seems that they added another security layer. The request to the website returns error 403. ### Debug log ```text streamlink https://vtvgo.vn/xem-truc-tuyen-kenh-vtv3-3.html --loglevel=debug [cli][debug] OS: Linux-5.15.0-53-generic-x86_64-with-glibc2.35 [cli][debug] Python: 3.10.6 [cli][debug] Streamlink: 5.1.2+4.g68dad105 [cli][debug] Dependencies: [cli][debug] certifi: 2022.9.24 [cli][debug] isodate: 0.6.1 [cli][debug] lxml: 4.9.1 [cli][debug] pycountry: 22.3.5 [cli][debug] pycryptodome: 3.15.0 [cli][debug] PySocks: 1.7.1 [cli][debug] requests: 2.28.1 [cli][debug] urllib3: 1.26.12 [cli][debug] websocket-client: 1.4.1 [cli][debug] importlib-metadata: 4.6.4 [cli][debug] Arguments: [cli][debug] url=https://vtvgo.vn/xem-truc-tuyen-kenh-vtv3-3.html [cli][debug] --loglevel=debug [cli][info] Found matching plugin vtvgo for URL https://vtvgo.vn/xem-truc-tuyen-kenh-vtv3-3.html error: Unable to open URL: https://vtvgo.vn/ajax-get-stream (403 Client Error: Forbidden for url: https://vtvgo.vn/ajax-get-stream) ```
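The fix above just sends an additional `Sec-Fetch-Site` header. A minimal standalone sketch of the request setup follows; the channel URL is the one from the report, cookie handling is simplified, and the parameters of the `ajax-get-stream` call itself are omitted:

```python
import requests

url = "https://vtvgo.vn/xem-truc-tuyen-kenh-vtv3-3.html"
session = requests.Session()
session.get(url)  # the page sets cookies that the stream request depends on

session.headers.update({
    "Origin": "https://vtvgo.vn",
    "Referer": url,
    "Sec-Fetch-Site": "same-origin",  # header added by the patch above
    "X-Requested-With": "XMLHttpRequest",
})
# The subsequent request to https://vtvgo.vn/ajax-get-stream should no longer return 403.
```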
2022-12-12T15:26:47
streamlink/streamlink
5,024
streamlink__streamlink-5024
[ "4642" ]
68dad1059fa800c2064a474a2644d47bcbca06ce
diff --git a/src/streamlink_cli/main.py b/src/streamlink_cli/main.py --- a/src/streamlink_cli/main.py +++ b/src/streamlink_cli/main.py @@ -1,5 +1,4 @@ import argparse -import errno import logging import os import platform @@ -7,16 +6,13 @@ import signal import sys from contextlib import closing, suppress -from functools import partial from gettext import gettext -from itertools import chain from pathlib import Path from time import sleep -from typing import Any, Dict, Iterator, List, Optional, Type, Union +from typing import Any, Dict, List, Optional, Type, Union import streamlink.logger as logger from streamlink import NoPluginError, PluginError, StreamError, Streamlink, __version__ as streamlink_version -from streamlink.compat import is_win32 from streamlink.exceptions import FatalPluginError from streamlink.plugin import Plugin, PluginOptions from streamlink.stream.stream import Stream, StreamIO @@ -26,17 +22,11 @@ from streamlink_cli.console import ConsoleOutput, ConsoleUserInputRequester from streamlink_cli.constants import CONFIG_FILES, DEFAULT_STREAM_METADATA, LOG_DIR, PLUGIN_DIRS, STREAM_SYNONYMS from streamlink_cli.output import FileOutput, PlayerOutput +from streamlink_cli.streamrunner import StreamRunner from streamlink_cli.utils import Formatter, HTTPServer, datetime -from streamlink_cli.utils.progress import Progress from streamlink_cli.utils.versioncheck import check_version -ACCEPTABLE_ERRNO = (errno.EPIPE, errno.EINVAL, errno.ECONNRESET) -try: - ACCEPTABLE_ERRNO += (errno.WSAECONNABORTED,) # type: ignore -except AttributeError: - pass # Not windows - QUIET_OPTIONS = ("json", "stream_url", "quiet") @@ -262,7 +252,12 @@ def output_stream_http( if stream_fd and prebuffer: log.debug("Writing stream to player") - read_stream(stream_fd, server, prebuffer, formatter) + stream_runner = StreamRunner(stream_fd, server) + try: + stream_runner.run(prebuffer) + except OSError as err: + # TODO: refactor all console.exit() calls + console.exit(str(err)) if not continuous: break @@ -364,72 +359,19 @@ def output_stream(stream, formatter: Formatter): console.exit(f"Failed to open output ({err}") return - with closing(output): - log.debug("Writing stream to output") - read_stream(stream_fd, output, prebuffer, formatter) - - return True - - -def read_stream(stream, output, prebuffer, formatter: Formatter, chunk_size=8192): - """Reads data from stream and then writes it to the output.""" - is_player = isinstance(output, PlayerOutput) - is_http = isinstance(output, HTTPServer) - is_fifo = is_player and output.namedpipe - show_progress = ( - isinstance(output, FileOutput) - and output.fd is not stdout - and (sys.stdout.isatty() or args.force_progress) - ) - show_record_progress = ( - hasattr(output, "record") - and isinstance(output.record, FileOutput) - and output.record.fd is not stdout - and (sys.stdout.isatty() or args.force_progress) - ) - - progress: Optional[Progress] = None - stream_iterator: Iterator = chain( - [prebuffer], - iter(partial(stream.read, chunk_size), b"") - ) - if show_progress or show_record_progress: - progress = Progress( - sys.stderr, - output.filename or output.record.filename, - ) - stream_iterator = progress.iter(stream_iterator) - try: - for data in stream_iterator: - # We need to check if the player process still exists when - # using named pipes on Windows since the named pipe is not - # automatically closed by the player. 
- if is_win32 and is_fifo: - output.player.poll() - - if output.player.returncode is not None: - log.info("Player closed") - break - - try: - output.write(data) - except OSError as err: - if is_player and err.errno in ACCEPTABLE_ERRNO: - log.info("Player closed") - elif is_http and err.errno in ACCEPTABLE_ERRNO: - log.info("HTTP connection closed") - else: - console.exit(f"Error when writing to output: {err}, exiting") - - break + with closing(output): + log.debug("Writing stream to output") + # TODO: finally clean up the global variable mess and refactor the streamlink_cli package + # noinspection PyUnboundLocalVariable + stream_runner = StreamRunner(stream_fd, output, args.force_progress) + # noinspection PyUnboundLocalVariable + stream_runner.run(prebuffer) except OSError as err: - console.exit(f"Error when reading from stream: {err}, exiting") - finally: - if progress: - progress.close() - stream.close() - log.info("Stream ended") + # TODO: refactor all console.exit() calls + console.exit(str(err)) + + return True def handle_stream(plugin: Plugin, streams: Dict[str, Stream], stream_name: str) -> None: diff --git a/src/streamlink_cli/streamrunner.py b/src/streamlink_cli/streamrunner.py new file mode 100644 --- /dev/null +++ b/src/streamlink_cli/streamrunner.py @@ -0,0 +1,154 @@ +import errno +import logging +import sys +from contextlib import suppress +from pathlib import Path +from threading import Event, Lock, Thread +from typing import Optional, Union + +from streamlink.stream.stream import StreamIO +from streamlink_cli.output import FileOutput, PlayerOutput +from streamlink_cli.utils.http_server import HTTPServer +from streamlink_cli.utils.progress import Progress + + +# Use the main Streamlink CLI module as logger +log = logging.getLogger("streamlink.cli") + + +ACCEPTABLE_ERRNO = errno.EPIPE, errno.EINVAL, errno.ECONNRESET +with suppress(AttributeError): + ACCEPTABLE_ERRNO += (errno.WSAECONNABORTED,) # type: ignore[assignment,attr-defined] + + +class _ReadError(BaseException): + pass + + +class PlayerPollThread(Thread): + """ + Poll the player process in a separate thread, to isolate it from the stream's read-loop in the main thread. + Reading the stream can stall indefinitely when filtering content. + """ + + POLLING_INTERVAL: float = 0.5 + + def __init__(self, stream: StreamIO, output: PlayerOutput): + super().__init__(daemon=True, name=self.__class__.__name__) + self._stream = stream + self._output = output + self._stop_polling = Event() + self._lock = Lock() + + def close(self): + self._stop_polling.set() + + def playerclosed(self): + # Ensure that "Player closed" does only get logged once, either when writing the read stream data has failed, + # or when the player process was terminated/killed before writing. 
+ with self._lock: + if self._stop_polling.is_set(): + return + self.close() + log.info("Player closed") + + def poll(self) -> bool: + return self._output.player.poll() is None + + def run(self) -> None: + while not self._stop_polling.wait(self.POLLING_INTERVAL): + if self.poll(): + continue + self.playerclosed() + # close stream as soon as the player was closed + self._stream.close() + break + + +class StreamRunner: + """Read data from a stream and write it to the output.""" + + playerpoller: Optional[PlayerPollThread] = None + progress: Optional[Progress] = None + + # TODO: refactor all output implementations + def __init__( + self, + stream: StreamIO, + output: Union[PlayerOutput, FileOutput, HTTPServer], + force_progress: bool = False, + ): + self.stream = stream + self.output = output + self.is_http = isinstance(output, HTTPServer) + + filename: Optional[Path] = None + + if isinstance(output, PlayerOutput): + self.playerpoller = PlayerPollThread(stream, output) + if output.record: + filename = output.record.filename + + elif isinstance(output, FileOutput): + if output.filename: + filename = output.filename + elif output.record: + filename = output.record.filename + + if filename and (sys.stdout.isatty() or force_progress): + self.progress = Progress(sys.stderr, filename) + + def run( + self, + prebuffer: bytes, + chunk_size: int = 8192, + ) -> None: + read = self.stream.read + write = self.output.write + progress = self.progress.write if self.progress else lambda _: None + + if self.playerpoller: + self.playerpoller.start() + if self.progress: + self.progress.start() + + # TODO: Fix error messages (s/when/while/) and only log "Stream ended" when it ended on its own (data == b""). + # These are considered breaking changes of the CLI output, which is parsed by 3rd party tools. 
+ try: + write(prebuffer) + progress(prebuffer) + del prebuffer + + # Don't check for stream.closed, so the buffer's contents can be fully read after the stream ended or was closed + while True: + try: + data = read(chunk_size) + if data == b"": + break + except OSError as err: + raise _ReadError() from err + + write(data) + progress(data) + + except _ReadError as err: + raise OSError(f"Error when reading from stream: {err.__context__}, exiting") from err.__context__ + + except OSError as err: + if self.playerpoller and err.errno in ACCEPTABLE_ERRNO: + self.playerpoller.playerclosed() + elif self.is_http and err.errno in ACCEPTABLE_ERRNO: + log.info("HTTP connection closed") + else: + raise OSError(f"Error when writing to output: {err}, exiting") from err + + finally: + if self.playerpoller: + self.playerpoller.close() + self.playerpoller.join() + if self.progress: + self.progress.close() + self.progress.join() + + self.stream.close() + log.info("Stream ended") diff --git a/src/streamlink_cli/utils/progress.py b/src/streamlink_cli/utils/progress.py --- a/src/streamlink_cli/utils/progress.py +++ b/src/streamlink_cli/utils/progress.py @@ -6,7 +6,7 @@ from string import Formatter as StringFormatter from threading import Event, RLock, Thread from time import time -from typing import Callable, Deque, Dict, Iterable, Iterator, List, Optional, TextIO, Tuple, Union +from typing import Callable, Deque, Dict, Iterable, List, Optional, TextIO, Tuple, Union from streamlink.compat import is_win32 @@ -242,25 +242,16 @@ def __init__( def close(self): self._wait.set() - def put(self, chunk: bytes): + def write(self, chunk: bytes): size = len(chunk) with self._lock: self.overall += size self.written += size - def iter(self, iterator: Iterator[bytes]) -> Iterator[bytes]: - self.start() - try: - for chunk in iterator: - self.put(chunk) - yield chunk - finally: - self.close() - def run(self): self.started = time() try: - while not self._wait.wait(self.interval): + while not self._wait.wait(self.interval): # pragma: no cover self.update() finally: self.print_end()
diff --git a/tests/cli/test_streamrunner.py b/tests/cli/test_streamrunner.py new file mode 100644 --- /dev/null +++ b/tests/cli/test_streamrunner.py @@ -0,0 +1,696 @@ +import asyncio +import errno +import sys +from collections import deque +from pathlib import Path +from threading import Thread +from typing import Callable, Deque, List, Union +from unittest.mock import Mock, patch + +import pytest + +from streamlink.stream.stream import StreamIO +from streamlink_cli.output import FileOutput, PlayerOutput +from streamlink_cli.streamrunner import PlayerPollThread, StreamRunner, log as streamrunnerlogger +from streamlink_cli.utils.http_server import HTTPServer +from streamlink_cli.utils.progress import Progress +from tests.testutils.handshake import Handshake + + +TIMEOUT_AWAIT_HANDSHAKE = 1 +TIMEOUT_AWAIT_THREADJOIN = 1 + + +class EventedPlayerPollThread(PlayerPollThread): + POLLING_INTERVAL = 0 + + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + self.handshake = Handshake() + + def poll(self): + with self.handshake(): + return super().poll() + + def close(self): + super().close() + # Let thread terminate on close() + self.handshake.go() + + +class FakeStream(StreamIO): + """Fake stream implementation, for feeding sample data to the stream runner and simulating read pauses and read errors""" + + def __init__(self) -> None: + super().__init__() + self.handshake = Handshake() + self.data: Deque[Union[bytes, Callable]] = deque() + + # noinspection PyUnusedLocal + def read(self, *args): + with self.handshake(): + if not self.data: + return b"" + data = self.data.popleft() + return data() if callable(data) else data + + +class FakeOutput: + """Common output/http-server/progress interface, for caching all write() calls and simulating write errors""" + + def __init__(self, *args, **kwargs) -> None: + super().__init__(*args, **kwargs) + self.handshake = Handshake() + self.data: List[bytes] = [] + + def write(self, data): + with self.handshake(): + return self._write(data) + + def _write(self, data): + self.data.append(data) + + +class FakePlayerOutput(FakeOutput, PlayerOutput): + pass + + +class FakeFileOutput(FakeOutput, FileOutput): + pass + + +class FakeHTTPServer(FakeOutput, HTTPServer): + def __init__(self, *args, **kwargs): + with patch("streamlink_cli.utils.http_server.socket"): + super().__init__(*args, **kwargs) + + +class FakeProgress(FakeOutput, Progress): + # we're not interested in any application logic of the Progress class + update = print_end = lambda *_, **__: None + + +class FakeStreamRunner(StreamRunner): + # override and remove optional typing annotations + playerpoller: EventedPlayerPollThread + progress: FakeProgress + + [email protected](autouse=True) +def _logging(caplog: pytest.LogCaptureFixture): + assert streamrunnerlogger.name == "streamlink.cli" + caplog.set_level(1, "streamlink") + + [email protected](autouse=True) +def isatty(request: pytest.FixtureRequest): + with patch("sys.stdout.isatty", return_value=getattr(request, "param", False)): + yield + + [email protected] +def stream(): + stream = FakeStream() + yield stream + assert stream.closed + + +# "stream_runner" fixture dependency declared in downstream scopes [email protected] +def runnerthread(request: pytest.FixtureRequest, stream_runner: StreamRunner): + class RunnerThread(Thread): + exception = None + + def run(self): + try: + super().run() + except BaseException as err: + self.exception = err + + thread = RunnerThread( + daemon=True, + name="Runner thread", + 
target=stream_runner.run, + args=(b"prebuffer",), + ) + yield thread + + assert_thread_termination(thread, "Runner thread has terminated") + + exception = getattr(request, "param", {}).get("exception", None) + assert isinstance(thread.exception, type(exception)) + assert str(thread.exception) == str(exception) + + +async def assert_handshake_steps(*items): + """ + Run handshake steps concurrently, to not be dependent too much on implementation details and the order of handshakes. + For example, concurrently await one read(), one write() and one progress() call. + """ + steps = asyncio.gather( + *(item.handshake.asyncstep(TIMEOUT_AWAIT_HANDSHAKE) for item in items), + return_exceptions=True, + ) + assert await steps == [True for _ in items] + + +def assert_thread_termination(thread: Thread, assertion: str): + thread.join(TIMEOUT_AWAIT_THREADJOIN) + assert not thread.is_alive(), assertion + + +class TestPlayerOutput: + @pytest.fixture + def player_process(self): + player_process = Mock() + player_process.poll = Mock(return_value=None) + + return player_process + + @pytest.fixture + def output(self, player_process: Mock): + with patch("subprocess.Popen") as mock_popen, \ + patch("streamlink_cli.output.sleep"): + mock_popen.return_value = player_process + output = FakePlayerOutput("mocked") + output.open() + yield output + + @pytest.fixture + def stream_runner(self, stream: FakeStream, output: FakePlayerOutput): + with patch("streamlink_cli.streamrunner.PlayerPollThread", EventedPlayerPollThread): + stream_runner = StreamRunner(stream, output) + assert isinstance(stream_runner.playerpoller, EventedPlayerPollThread) + assert not stream_runner.playerpoller.is_alive() + assert not stream_runner.is_http + assert not stream_runner.progress + yield stream_runner + assert not stream_runner.playerpoller.is_alive() + + @pytest.mark.asyncio + async def test_read_write( + self, + caplog: pytest.LogCaptureFixture, + runnerthread: Thread, + stream_runner: FakeStreamRunner, + stream: FakeStream, + output: FakePlayerOutput, + ): + stream.data.extend((b"foo", b"bar")) + + runnerthread.start() + assert output.data == [] + + # write prebuffer + await assert_handshake_steps(output) + assert output.data == [b"prebuffer"] + + # read and write next chunk + await assert_handshake_steps(stream, output) + assert output.data == [b"prebuffer", b"foo"] + + # poll player process + await assert_handshake_steps(stream_runner.playerpoller) + assert stream_runner.playerpoller.is_alive() + + # read and write next chunk + await assert_handshake_steps(stream, output) + assert output.data == [b"prebuffer", b"foo", b"bar"] + + assert not stream.closed, "Stream is not closed" + + # read stream end + await assert_handshake_steps(stream) + assert output.data == [b"prebuffer", b"foo", b"bar"] + + # wait for runner thread to terminate first before asserting log records + assert_thread_termination(runnerthread, "Runner thread has terminated") + assert [(record.module, record.levelname, record.message) for record in caplog.records] == [ + ("streamrunner", "info", "Stream ended"), + ] + + @pytest.mark.asyncio + async def test_paused( + self, + caplog: pytest.LogCaptureFixture, + runnerthread: Thread, + stream_runner: FakeStreamRunner, + stream: FakeStream, + output: FakePlayerOutput, + ): + delayed = Handshake() + + def item(): + with delayed(): + return b"delayed" + + stream.data.append(item) + + runnerthread.start() + assert output.data == [] + + # write prebuffer + await assert_handshake_steps(output) + assert output.data == 
[b"prebuffer"] + assert not delayed.wait_ready(0), "Delayed chunk has not been read yet" + + # attempt reading delayed chunk + stream.handshake.go() + assert delayed.wait_ready(TIMEOUT_AWAIT_HANDSHAKE), "read() call of delayed chunk is paused" + assert output.data == [b"prebuffer"] + + assert not stream.closed, "Stream is not closed" + + # poll player process + await assert_handshake_steps(stream_runner.playerpoller) + assert stream_runner.playerpoller.is_alive() + + # unpause delayed chunk + delayed.go() + assert stream.handshake.wait_done(TIMEOUT_AWAIT_HANDSHAKE), "Delayed chunk has successfully been read" + await assert_handshake_steps(output) + assert output.data == [b"prebuffer", b"delayed"] + + assert not stream.closed, "Stream is not closed" + + # read stream end + await assert_handshake_steps(stream) + assert output.data == [b"prebuffer", b"delayed"] + + # wait for runner thread to terminate first before asserting log records + assert_thread_termination(runnerthread, "Runner thread has terminated") + assert [(record.module, record.levelname, record.message) for record in caplog.records] == [ + ("streamrunner", "info", "Stream ended"), + ] + + @pytest.mark.asyncio + @pytest.mark.parametrize( + "writeerror,runnerthread", + [ + pytest.param( + OSError(errno.EPIPE, "Broken pipe"), + {}, + id="Acceptable error: EPIPE", + ), + pytest.param( + OSError(errno.EINVAL, "Invalid argument"), + {}, + id="Acceptable error: EINVAL", + ), + pytest.param( + OSError(errno.ECONNRESET, "Connection reset"), + {}, + id="Acceptable error: ECONNRESET", + ), + pytest.param( + OSError("Unknown error"), + {"exception": OSError("Error when writing to output: Unknown error, exiting")}, + id="Non-acceptable error", + ), + ], + indirect=["runnerthread"], + ) + async def test_player_close( + self, + caplog: pytest.LogCaptureFixture, + runnerthread: Thread, + stream_runner: FakeStreamRunner, + stream: FakeStream, + output: FakePlayerOutput, + player_process: Mock, + writeerror: Exception, + ): + stream.data.extend((b"foo", b"bar")) + + runnerthread.start() + assert output.data == [] + + # write prebuffer + await assert_handshake_steps(output) + assert output.data == [b"prebuffer"] + + # poll player process + await assert_handshake_steps(stream_runner.playerpoller) + assert stream_runner.playerpoller.is_alive() + + # read and write next chunk + await assert_handshake_steps(stream, output) + assert output.data == [b"prebuffer", b"foo"] + + assert not stream.closed, "Stream is not closed yet" + + # close player + with patch.object(output, "_write", side_effect=writeerror): + # let player process terminate with code 0 and poll process once + player_process.poll.return_value = 0 + await assert_handshake_steps(stream_runner.playerpoller) + assert_thread_termination(stream_runner.playerpoller, "Polling has stopped after player process terminated") + + assert stream.closed, "Stream got closed after the player was closed" + + # read and write next chunk (write will now also raise) + await assert_handshake_steps(stream, output) + assert output.data == [b"prebuffer", b"foo"] + + # wait for runner thread to terminate first before asserting log records + assert_thread_termination(runnerthread, "Runner thread has terminated") + assert [(record.module, record.levelname, record.message) for record in caplog.records] == [ + ("streamrunner", "info", "Player closed"), + ("streamrunner", "info", "Stream ended"), + ] + + @pytest.mark.asyncio + async def test_player_close_paused( + self, + caplog: pytest.LogCaptureFixture, + 
runnerthread: Thread, + stream_runner: FakeStreamRunner, + stream: FakeStream, + output: FakePlayerOutput, + player_process: Mock, + ): + delayed = Handshake() + + def item(): + with delayed(): + return b"" + + stream.data.append(item) + + runnerthread.start() + assert output.data == [] + + # write prebuffer + await assert_handshake_steps(output) + assert output.data == [b"prebuffer"] + assert not delayed.wait_ready(0), "Delayed chunk has not been read yet" + + # poll player process + await assert_handshake_steps(stream_runner.playerpoller) + assert stream_runner.playerpoller.is_alive() + + stream.handshake.go() + assert delayed.wait_ready(TIMEOUT_AWAIT_HANDSHAKE), "read() call of delayed chunk is paused" + assert output.data == [b"prebuffer"] + + assert not stream.closed, "Stream is not closed yet" + + # let player process terminate with code 0 and poll process once + player_process.poll.return_value = 0 + await assert_handshake_steps(stream_runner.playerpoller) + assert_thread_termination(stream_runner.playerpoller, "Polling has stopped after player process terminated") + + assert stream.closed, "Stream got closed after the player was closed, even if the stream was paused" + + # unpause delayed chunk (stream end) + delayed.go() + assert stream.handshake.wait_done(TIMEOUT_AWAIT_HANDSHAKE), "Delayed chunk has successfully been read" + assert output.data == [b"prebuffer"] + + # wait for runner thread to terminate first before asserting log records + assert_thread_termination(runnerthread, "Runner thread has terminated") + assert [(record.module, record.levelname, record.message) for record in caplog.records] == [ + ("streamrunner", "info", "Player closed"), + ("streamrunner", "info", "Stream ended"), + ] + + @pytest.mark.asyncio + @pytest.mark.parametrize( + "runnerthread", + [{"exception": OSError("Error when reading from stream: Read timeout, exiting")}], + indirect=["runnerthread"], + ) + async def test_readerror( + self, + caplog: pytest.LogCaptureFixture, + runnerthread: Thread, + stream_runner: FakeStreamRunner, + stream: FakeStream, + output: FakePlayerOutput, + ): + # make next read() call raise a read-timeout error + stream.data.append(Mock(side_effect=OSError("Read timeout"))) + + runnerthread.start() + assert output.data == [] + + # write prebuffer + await assert_handshake_steps(output) + assert output.data == [b"prebuffer"] + + # poll player process + await assert_handshake_steps(stream_runner.playerpoller) + assert stream_runner.playerpoller.is_alive() + + # read stream (will raise a read timeout) + await assert_handshake_steps(stream) + + # poll player process again + await assert_handshake_steps(stream_runner.playerpoller) + assert_thread_termination(stream_runner.playerpoller, "Polling has stopped on read error") + + # wait for runner thread to terminate first before asserting log records + assert_thread_termination(runnerthread, "Runner thread has terminated") + assert [(record.module, record.levelname, record.message) for record in caplog.records] == [ + ("streamrunner", "info", "Stream ended"), + ] + + +class TestHTTPServer: + @pytest.fixture + def output(self): + return FakeHTTPServer() + + @pytest.fixture + def stream_runner(self, stream: FakeStream, output: FakeHTTPServer): + stream_runner = StreamRunner(stream, output) + assert not stream_runner.playerpoller + assert not stream_runner.progress + assert stream_runner.is_http + yield stream_runner + + @pytest.mark.asyncio + async def test_read_write( + self, + caplog: pytest.LogCaptureFixture, + runnerthread: Thread, + 
stream_runner: FakeStreamRunner, + stream: FakeStream, + output: FakeHTTPServer, + ): + stream.data.extend((b"foo", b"bar")) + + runnerthread.start() + assert output.data == [] + + # write prebuffer + await assert_handshake_steps(output) + assert output.data == [b"prebuffer"] + + # read and write next chunk + await assert_handshake_steps(stream, output) + assert output.data == [b"prebuffer", b"foo"] + + # read and write next chunk + await assert_handshake_steps(stream, output) + assert output.data == [b"prebuffer", b"foo", b"bar"] + + assert not stream.closed, "Stream is not closed" + + # read stream end + await assert_handshake_steps(stream) + assert output.data == [b"prebuffer", b"foo", b"bar"] + + # wait for runner thread to terminate first before asserting log records + assert_thread_termination(runnerthread, "Runner thread has terminated") + assert [(record.module, record.levelname, record.message) for record in caplog.records] == [ + ("streamrunner", "info", "Stream ended"), + ] + + @pytest.mark.parametrize( + "writeerror,logs,runnerthread", + [ + pytest.param( + OSError(errno.EPIPE, "Broken pipe"), + True, + {}, + id="Acceptable error: EPIPE", + ), + pytest.param( + OSError(errno.EINVAL, "Invalid argument"), + True, + {}, + id="Acceptable error: EINVAL", + ), + pytest.param( + OSError(errno.ECONNRESET, "Connection reset"), + True, + {}, + id="Acceptable error: ECONNRESET", + ), + pytest.param( + OSError("Unknown error"), + False, + {"exception": OSError("Error when writing to output: Unknown error, exiting")}, + id="Non-acceptable error", + ), + ], + indirect=["runnerthread"], + ) + def test_writeerror( + self, + caplog: pytest.LogCaptureFixture, + runnerthread: Thread, + stream_runner: FakeStreamRunner, + stream: FakeStream, + output: FakePlayerOutput, + logs: bool, + writeerror: Exception, + ): + runnerthread.start() + + with patch.object(output, "_write", side_effect=writeerror): + assert output.handshake.step(TIMEOUT_AWAIT_HANDSHAKE) + assert output.data == [] + + # wait for runner thread to terminate first before asserting log records + assert_thread_termination(runnerthread, "Runner thread has terminated") + expectedlogs = ( + ([("streamrunner", "info", "HTTP connection closed")] if logs else []) + + [("streamrunner", "info", "Stream ended")] + ) + assert [(record.module, record.levelname, record.message) for record in caplog.records] == expectedlogs + + [email protected]( + "isatty,force_progress", + [ + pytest.param(False, True, id="No TTY, force"), + pytest.param(True, False, id="TTY, no force"), + ], + indirect=["isatty"], +) +class TestHasProgress: + @pytest.mark.parametrize( + "output", + [ + pytest.param( + FakePlayerOutput("mocked"), + id="Player output without record", + ), + pytest.param( + FakeFileOutput(fd=Mock()), + id="FileOutput with file descriptor", + ), + pytest.param( + FakeHTTPServer(), + id="HTTPServer", + ), + ], + ) + def test_no_progress( + self, + output: Union[FakePlayerOutput, FakeFileOutput, FakeHTTPServer], + isatty: bool, + force_progress: bool, + ): + stream_runner = FakeStreamRunner(StreamIO(), output, force_progress) + assert not stream_runner.progress + + @pytest.mark.parametrize( + "output,expected", + [ + pytest.param( + FakePlayerOutput("mocked", record=FakeFileOutput(Path("record"))), + Path("record"), + id="PlayerOutput with record", + ), + pytest.param( + FakeFileOutput(filename=Path("filename")), + Path("filename"), + id="FileOutput with file name", + ), + pytest.param( + FakeFileOutput(record=FakeFileOutput(filename=Path("record"))), + 
Path("record"), + id="FileOutput with record", + ), + pytest.param( + FakeFileOutput(filename=Path("filename"), record=FakeFileOutput(filename=Path("record"))), + Path("filename"), + id="FileOutput with file name and record", + ), + ], + ) + def test_has_progress( + self, + output: Union[FakePlayerOutput, FakeFileOutput], + isatty: bool, + force_progress: bool, + expected: Path, + ): + stream_runner = FakeStreamRunner(StreamIO(), output, force_progress) + assert stream_runner.progress + assert not stream_runner.progress.is_alive() + assert stream_runner.progress.stream is sys.stderr + assert stream_runner.progress.path == expected + + +class TestProgress: + @pytest.fixture + def output(self): + yield FakeFileOutput(Path("filename")) + + @pytest.fixture + def stream_runner(self, stream: FakeStream, output: FakeFileOutput): + with patch("streamlink_cli.streamrunner.Progress", FakeProgress): + stream_runner = FakeStreamRunner(stream, output, True) + assert not stream_runner.playerpoller + assert not stream_runner.is_http + assert isinstance(stream_runner.progress, FakeProgress) + assert stream_runner.progress.path == Path("filename") + assert not stream_runner.progress.is_alive() + yield stream_runner + assert not stream_runner.progress.is_alive() + + @pytest.mark.asyncio + async def test_read_write( + self, + caplog: pytest.LogCaptureFixture, + runnerthread: Thread, + stream_runner: FakeStreamRunner, + stream: FakeStream, + output: FakeFileOutput, + ): + stream.data.extend((b"foo", b"bar")) + + runnerthread.start() + assert output.data == [] + + # write prebuffer + await assert_handshake_steps(output, stream_runner.progress) + assert output.data == [b"prebuffer"] + assert stream_runner.progress.data == [b"prebuffer"] + + # read and write next chunk + await assert_handshake_steps(stream, output, stream_runner.progress) + assert output.data == [b"prebuffer", b"foo"] + assert stream_runner.progress.data == [b"prebuffer", b"foo"] + + # read and write next chunk + await assert_handshake_steps(stream, output, stream_runner.progress) + assert output.data == [b"prebuffer", b"foo", b"bar"] + assert stream_runner.progress.data == [b"prebuffer", b"foo", b"bar"] + + assert not stream.closed, "Stream is not closed" + + # read stream end + await assert_handshake_steps(stream) + assert output.data == [b"prebuffer", b"foo", b"bar"] + assert stream_runner.progress.data == [b"prebuffer", b"foo", b"bar"] + + # wait for runner thread to terminate first before asserting log records + assert_thread_termination(runnerthread, "Runner thread has terminated") + assert [(record.module, record.levelname, record.message) for record in caplog.records] == [ + ("streamrunner", "info", "Stream ended"), + ] diff --git a/tests/cli/utils/test_progress.py b/tests/cli/utils/test_progress.py --- a/tests/cli/utils/test_progress.py +++ b/tests/cli/utils/test_progress.py @@ -225,33 +225,33 @@ def test_download_speed(self): == call("\r[download] Written 0 bytes to ../../the/path/where/we/write/to (0s) ") frozen_time.tick() - progress.put(kib * 1) + progress.write(kib * 1) progress.update() assert output_write.call_args_list[-1] \ == call("\r[download] Written 1.00 KiB to …th/where/we/write/to (1s @ 1.00 KiB/s)") frozen_time.tick() mock_width.return_value = 65 - progress.put(kib * 3) + progress.write(kib * 3) progress.update() assert output_write.call_args_list[-1] \ == call("\r[download] Written 4.00 KiB to …ere/we/write/to (2s @ 2.00 KiB/s)") frozen_time.tick() mock_width.return_value = 60 - progress.put(kib * 5) + 
progress.write(kib * 5) progress.update() assert output_write.call_args_list[-1] \ == call("\r[download] Written 9.00 KiB (3s @ 4.50 KiB/s) ") frozen_time.tick() - progress.put(kib * 7) + progress.write(kib * 7) progress.update() assert output_write.call_args_list[-1] \ == call("\r[download] Written 16.00 KiB (4s @ 7.50 KiB/s) ") frozen_time.tick() - progress.put(kib * 5) + progress.write(kib * 5) progress.update() assert output_write.call_args_list[-1] \ == call("\r[download] Written 21.00 KiB (5s @ 8.50 KiB/s) ")
cli: threaded read_stream() for undelayed process termination [`streamlink_cli.main.read_stream()`](https://github.com/streamlink/streamlink/blame/4.1.0/src/streamlink_cli/main.py#L372-L431) is in need of an update. It's the heart of the `streamlink_cli` package and it's responsible for reading stream data and writing it to the output. The issue is that reading stream data is single-threaded, via a simple `stream.read()` iterator, so the polling of the player process for termination (or rather the detection of the end of the write-pipe) depends on the stream-buffer's thread locks while reading the stream data, which is pretty bad when read calls stall for several seconds due to an empty stream buffer while waiting for new data. What this means is that closing the player often doesn't get detected immediately, and ending the streamlink process takes an unnecessarily long time, as it's waiting for the next stream data first before it can continue exiting gracefully. This is especially noticeable when reading HLS streams with long segment durations and long playlist refresh times. The download-progress output is also tightly coupled with the current stream iterator, so its updates are inconsistently timed, which is bad. ---- What should be done in order to fix this: 1. Wrap the stream reader in a separate thread. This can already be done via `streamlink.stream.wrappers.StreamIOThreadWrapper`, but it'd add a secondary stream buffer, which is probably not ideal. I haven't had a closer look at this yet. 2. Poll the player process for termination in a separate thread as long as the stream reader thread is alive. Abort the stream reader when the player was closed. 3. Move the download progress output into a separate thread, so it can update in consistent time intervals instead of relying on the pace of the stream reader. I have already experimented with this, and apart from the progress output, which would need some code refactoring, streams now get closed immediately where there previously was a delay of several seconds. The challenge however is implementing this so that current functionality doesn't break, because there are no tests written for that (the sparse test coverage comes as a side effect of the mocked cmdline tests).
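A reduced sketch of point 2 (the polling thread), just to illustrate the idea; the actual implementation landed as `PlayerPollThread` in the patch above, and the `stream` and `player` objects here are placeholders:

```python
import subprocess
from threading import Event, Thread


class PlayerPoller(Thread):
    """Poll the player process so a closed player is detected even while stream.read() blocks."""

    def __init__(self, stream, player: subprocess.Popen, interval: float = 0.5):
        super().__init__(daemon=True)
        self._stream = stream
        self._player = player
        self._interval = interval
        self._stopped = Event()

    def stop(self):
        self._stopped.set()

    def run(self):
        while not self._stopped.wait(self._interval):
            if self._player.poll() is not None:  # player process has terminated
                self._stream.close()  # unblocks the read-loop in the main thread
                break
```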
I've already mentioned it yesterday in the thread linked above. Filtering out segments from HLS streams and pausing the output also affects the read-loop in the main CLI module, as explained in the OP. This means that if the player gets closed during a filtered-out ad break on Twitch, Streamlink won't recognize it until data gets written to the stream's buffer again once the ad break and the filtering ends. That's pretty bad. It sounds bad but it will only happen on twitch streams with ads being filtered. Now it's happening on every single stream no matter what site is being processed by streamlink, so for me this enhancement will be positive overall. > it will only happen on twitch streams with ads being filtered. There are several other plugins which also filter out HLS segments. And there's also the CLI parameter for filtering. > it's happening on every single stream Yes, as explained in the OP. A fix of the `read_stream()` method would mean that closing the player gets detected "immediately", with a short delay due to the process polling. The only thing that currently can't easily get fixed are ongoing HTTP requests, which means when closing the player, the stream's `close()` call will still wait until all remaining HTTP requests have finished and all thread pool threads have terminated (the writer thread blocks on close). For segmented streams with small segments (talking about size), this is no problem, but for streams with "future segments" where the HTTP request gets made ahead of time, this still is a problem. I believe we'd need to update the `iter_content` loop of the HLS stream implementation and check whether the stream has been closed on each individual chunk iteration, which is a bit wasteful. If this doesn't work, then `requests` would need to get replaced by `httpx`. ---- I already have a local fix here that doesn't involve using the `StreamIOThreadWrapper`. That namely would mean copying data from buffer to buffer to buffer, which is stupid, and it also doesn't work with paused streams which I wasn't aware of initially. I don't know yet though whether I want to write some tests. There are no tests for the `read_stream` method at all, and writing ones requires writing a lot of mocked code for the two additional threads, the player polling and the progress output. I also have a commit with a refactor and cleanup of the filtering logic which abstracts it from the HLS implementation, because it actually doesn't belong there. Also not sure yet if I want to include it in a PR for this fix. It should probably be submitted as a separate pull request.
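The per-chunk check mentioned above for the `iter_content` loop could look roughly like this; a sketch only, with `closed` and `write` standing in for the real stream attributes (the actual segment writer also deals with retries, byte ranges and decryption):

```python
def copy_segment(response, closed, write, chunk_size=8192):
    """Copy a segment's HTTP response into the stream buffer, bailing out early on close."""
    for chunk in response.iter_content(chunk_size):
        if closed():
            # Abandon the remaining chunks, so closing the stream doesn't have to wait
            # for a large or ahead-of-time requested segment to finish downloading.
            response.close()
            return
        write(chunk)
```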
2022-12-12T15:38:53
streamlink/streamlink
5,041
streamlink__streamlink-5041
[ "5039" ]
5639a065bf82c859c1c25c121b471c132b5175ea
diff --git a/src/streamlink/plugins/cdnbg.py b/src/streamlink/plugins/cdnbg.py --- a/src/streamlink/plugins/cdnbg.py +++ b/src/streamlink/plugins/cdnbg.py @@ -85,7 +85,7 @@ def _get_streams(self): schema=validate.Schema( validate.any( self._find_url( - re.compile(r"sdata\.src.*?=.*?(?P<q>[\"'])(?P<url>http.*?)(?P=q)") + re.compile(r"sdata\.src.*?=.*?(?P<q>[\"'])(?P<url>.*?)(?P=q)") ), self._find_url( re.compile(r"(src|file): (?P<q>[\"'])(?P<url>(https?:)?//.+?m3u8.*?)(?P=q)")
plugins.cdnbg: can't load tv.bnt.bg/bnt3 (Unable to validate response text)

### Checklist

- [X] This is a plugin issue and not a different kind of issue
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)

### Streamlink version

Latest build from the master branch

### Description

Can't load tv.bnt.bg/bnt3, nor the other channels tv.bnt.bg/bnt2 and tv.bnt.bg/bnt.

### Debug log

```text
streamlink --loglevel debug "http://tv.bnt.bg/bnt3" best
[cli][debug] OS: Linux-5.15.0-56-generic-x86_64-with-glibc2.29
[cli][debug] Python: 3.8.10
[cli][debug] Streamlink: 5.1.2+9.g5639a065
[cli][debug] Dependencies:
[cli][debug] certifi: 2020.12.5
[cli][debug] isodate: 0.6.1
[cli][debug] lxml: 4.9.1
[cli][debug] pycountry: 22.3.5
[cli][debug] pycryptodome: 3.15.0
[cli][debug] PySocks: 1.7.1
[cli][debug] requests: 2.28.1
[cli][debug] urllib3: 1.26.12
[cli][debug] websocket-client: 1.4.1
[cli][debug] Arguments:
[cli][debug] url=http://tv.bnt.bg/bnt3
[cli][debug] stream=['best']
[cli][debug] --loglevel=debug
[cli][info] Found matching plugin cdnbg for URL http://tv.bnt.bg/bnt3
[plugins.cdnbg][debug] Found iframe: https://i.cdn.bg/live/OQ70Ds9Lcp
error: Unable to validate response text: ValidationError(AnySchema):
  ValidationError(RegexSchema):
    Pattern 'sdata\\.src.*?=.*?(?P<q>[\\"\'])(?P<url>http.*?)(?P=q)' did not match <'<!doctype html>\n<html>\n\n<head>\n\n <meta charset...>
  ValidationError(RegexSchema):
    Pattern <'(src|file): (?P<q>[\\"\'])(?P<url>(https?:)?//.+?m3u8....> did not match <'<!doctype html>\n<html>\n\n<head>\n\n <meta charset...>
  ValidationError(RegexSchema):
    Pattern 'video src=(?P<url>http[^ ]+m3u8[^ ]*)' did not match <'<!doctype html>\n<html>\n\n<head>\n\n <meta charset...>
  ValidationError(RegexSchema):
    Pattern 'source src=\\"(?P<url>[^\\"]+m3u8[^\\"]*)\\"' did not match <'<!doctype html>\n<html>\n\n<head>\n\n <meta charset...>
  ValidationError(RegexSchema):
    Pattern '(?P<url>[^\\"]+geoblock[^\\"]+)' did not match <'<!doctype html>\n<html>\n\n<head>\n\n <meta charset...>
```
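For illustration, the difference between the old and the relaxed pattern from the patch above can be reproduced with a made-up markup string — the HTML below is hypothetical and not taken from the actual embed page:

```python
import re

# Old pattern required the captured URL to start with "http";
# the patched pattern captures any quoted value after "sdata.src".
old = re.compile(r"sdata\.src.*?=.*?(?P<q>[\"'])(?P<url>http.*?)(?P=q)")
new = re.compile(r"sdata\.src.*?=.*?(?P<q>[\"'])(?P<url>.*?)(?P=q)")

html = "sdata.src = '//i.cdn.bg/live/example.m3u8';"  # hypothetical scheme-less player URL

print(old.search(html))               # None - the URL does not start with "http"
print(new.search(html).group("url"))  # //i.cdn.bg/live/example.m3u8
```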
2022-12-19T22:11:43
streamlink/streamlink
5,053
streamlink__streamlink-5053
[ "5052" ]
d8225bbbff4e5725fd3eb93d237a02bdafb7c982
diff --git a/src/streamlink/plugins/dogan.py b/src/streamlink/plugins/dogan.py --- a/src/streamlink/plugins/dogan.py +++ b/src/streamlink/plugins/dogan.py @@ -12,124 +12,144 @@ import re from urllib.parse import urljoin -from streamlink.plugin import Plugin, pluginmatcher +from streamlink.plugin import Plugin, PluginError, pluginmatcher from streamlink.plugin.api import validate from streamlink.stream.hls import HLSStream log = logging.getLogger(__name__) -@pluginmatcher(re.compile(r""" - https?://(?:www\.)? - (?: - cnnturk\.com/(?:action/embedvideo/|canli-yayin|tv-cnn-turk|video/)| - dreamturk\.com\.tr/(?:canli|canli-yayin-izle|dream-turk-ozel/|programlar/)| - dreamtv\.com\.tr/dream-ozel/| - kanald\.com\.tr/| - teve2\.com\.tr/(?:canli-yayin|diziler/|embed/|filmler/|programlar/) - ) -""", re.VERBOSE)) +@pluginmatcher(re.compile(r"https?://(?:www\.)?cnnturk\.com/")) +@pluginmatcher(re.compile(r"https?://(?:www\.)?(dreamturk|dreamtv)\.com\.tr/")) +@pluginmatcher(re.compile(r"https?://(?:www\.)?teve2\.com\.tr/")) +@pluginmatcher(re.compile(r"https?://(?:www\.)?kanald\.com\.tr/")) class Dogan(Plugin): - playerctrl_re = re.compile(r'''<div\s+id="video-element".*?>''', re.DOTALL) - data_id_re = re.compile(r'''data-id=(?P<quote>["'])/?(?P<id>\w+)(?P=quote)''') - content_id_re = re.compile(r'"content[Ii]d",\s*"(\w+)"') - item_id_re = re.compile(r"_itemId\s+=\s+'(\w+)';") - content_api = "/actions/media?id={id}" - dream_api = "/actions/content/media/{id}" - new_content_api = "/action/media/{id}" - content_api_schema = validate.Schema( - { - "data": { - "id": str, - "media": { - "link": { - validate.optional("defaultServiceUrl"): validate.any(validate.url(), ""), - validate.optional("serviceUrl"): validate.any(validate.url(), ""), - "securePath": str, + # based on the order of matchers + API_URLS = [ + "/api/media?id={id}", + "/actions/content/media/{id}", + "/action/media/{id}", + ] + API_URL_OLD = "/actions/media?id={id}" + + def _get_content_id(self): + return self.session.http.get( + self.url, + schema=validate.Schema( + validate.parse_html(), + validate.any( + validate.all( + validate.xml_xpath_string(""" + .//div[@data-id][ + @data-live + or @id='video-element' + or @id='player-container' + or contains(@class, 'player-container') + ][1]/@data-id + """), + str, + ), + # xpath query needs to have a lower priority + validate.all( + validate.xml_xpath_string( + ".//body[@data-content-id][1]/@data-content-id", + ), + str, + ), + ), + ), + ) + + def _api_query_new(self, content_id, api_url): + url = urljoin(self.url, api_url.format(id=content_id)) + data = self.session.http.get( + url, + schema=validate.Schema( + validate.parse_json(), + validate.any( + validate.all( + str, + validate.parse_json(), + {"Error": str}, + validate.get("Error"), + ), + validate.all( + { + "Media": { + "Link": { + "ContentId": str, + validate.optional("DefaultServiceUrl"): validate.any(validate.url(), ""), + validate.optional("ServiceUrl"): validate.any(validate.url(), ""), + "SecurePath": str, + }, + }, + }, + validate.get(("Media", "Link")), + validate.union_get("ServiceUrl", "DefaultServiceUrl", "SecurePath", "ContentId"), + ), + ), + ), + ) + if type(data) is str: + log.error(data) + return + + service_url, default_service_url, secure_path, content_id = data + + if default_service_url == "https://www.kanald.com.tr": + self.url = default_service_url + return self._api_query_old(content_id) + + if re.match(r"^https?://", secure_path): + return secure_path + + return urljoin(service_url or default_service_url, 
secure_path) + + def _api_query_old(self, content_id): + url = urljoin(self.url, self.API_URL_OLD.format(id=content_id)) + service_url, default_service_url, secure_path = self.session.http.get( + url, + schema=validate.Schema( + validate.parse_json(), + { + "data": { + "id": str, + "media": { + "link": { + validate.optional("defaultServiceUrl"): validate.any(validate.url(), ""), + validate.optional("serviceUrl"): validate.any(validate.url(), ""), + "securePath": str, + }, + }, }, }, - }, - }, - validate.get("data"), - validate.get("media"), - validate.get("link"), - ) - new_content_api_schema = validate.Schema( - { - "Media": { - "Link": { - "ContentId": str, - validate.optional("DefaultServiceUrl"): validate.any(validate.url(), ""), - validate.optional("ServiceUrl"): validate.any(validate.url(), ""), - "SecurePath": str, - }, - }, - }, - validate.get("Media"), - validate.get("Link"), - ) + validate.get(("data", "media", "link")), + validate.union_get("serviceUrl", "defaultServiceUrl", "securePath"), + ), + ) - def _get_content_id(self): - res = self.session.http.get(self.url) - # find the contentId - content_id_m = self.content_id_re.search(res.text) - if content_id_m: - log.debug("Found contentId by contentId regex") - return content_id_m.group(1) - - # find the PlayerCtrl div - player_ctrl_m = self.playerctrl_re.search(res.text) - if player_ctrl_m: - # extract the content id from the player control data - player_ctrl_div = player_ctrl_m.group(0) - content_id_m = self.data_id_re.search(player_ctrl_div) - if content_id_m: - log.debug("Found contentId by player data-id regex") - return content_id_m.group("id") - - # find the itemId var - item_id_m = self.item_id_re.search(res.text) - if item_id_m: - log.debug("Found contentId by itemId regex") - return item_id_m.group(1) - - def _get_new_content_hls_url(self, content_id, api_url): - log.debug("Using new content API url") - d = self.session.http.get(urljoin(self.url, api_url.format(id=content_id))) - d = self.session.http.json(d, schema=self.new_content_api_schema) - - if d["DefaultServiceUrl"] == "https://www.kanald.com.tr": - self.url = d["DefaultServiceUrl"] - return self._get_content_hls_url(content_id) - else: - if d["SecurePath"].startswith("http"): - return d["SecurePath"] - else: - return urljoin((d["ServiceUrl"] or d["DefaultServiceUrl"]), d["SecurePath"]) - - def _get_content_hls_url(self, content_id): - d = self.session.http.get(urljoin(self.url, self.content_api.format(id=content_id))) - d = self.session.http.json(d, schema=self.content_api_schema) - - return urljoin((d["serviceUrl"] or d["defaultServiceUrl"]), d["securePath"]) + return urljoin(service_url or default_service_url, secure_path) def _get_hls_url(self, content_id): - # make the api url relative to the current domain - if "cnnturk.com" in self.url or "teve2.com.tr" in self.url: - return self._get_new_content_hls_url(content_id, self.new_content_api) - elif "dreamturk.com.tr" in self.url or "dreamtv.com.tr" in self.url: - return self._get_new_content_hls_url(content_id, self.dream_api) - else: - return self._get_content_hls_url(content_id) + for idx, match in enumerate(self.matches[:len(self.API_URLS)]): + if match: + return self._api_query_new(content_id, self.API_URLS[idx]) + + return self._api_query_old(content_id) def _get_streams(self): - content_id = self._get_content_id() - if content_id: - log.debug(f"Loading content: {content_id}") - hls_url = self._get_hls_url(content_id) - return HLSStream.parse_variant_playlist(self.session, hls_url) - else: - 
log.error("Could not find the contentId for this stream") + try: + content_id = self._get_content_id() + except PluginError: + log.error("Could not find the content ID for this stream") + return + + log.debug(f"Loading content: {content_id}") + hls_url = self._get_hls_url(content_id) + if not hls_url: + return + + return HLSStream.parse_variant_playlist(self.session, hls_url) __plugin__ = Dogan
diff --git a/tests/plugins/test_dogan.py b/tests/plugins/test_dogan.py --- a/tests/plugins/test_dogan.py +++ b/tests/plugins/test_dogan.py @@ -6,34 +6,23 @@ class TestPluginCanHandleUrlDogan(PluginCanHandleUrl): __plugin__ = Dogan should_match = [ - 'https://www.cnnturk.com/action/embedvideo/', - 'https://www.cnnturk.com/action/embedvideo/5fa56d065cf3b018a8dd0bbc', - 'https://www.cnnturk.com/canli-yayin', - 'https://www.cnnturk.com/tv-cnn-turk/', - 'https://www.cnnturk.com/tv-cnn-turk/belgeseller/bir-zamanlar/bir-zamanlar-90lar-belgeseli', - 'https://www.cnnturk.com/video/', - 'https://www.cnnturk.com/video/turkiye/polis-otomobiliyle-tur-atan-sahisla-ilgili-islem-baslatildi-video', - 'https://www.dreamturk.com.tr/canli', - 'https://www.dreamturk.com.tr/canli-yayin-izle', - 'https://www.dreamturk.com.tr/dream-turk-ozel/', - 'https://www.dreamturk.com.tr/dream-turk-ozel/radyo-d/ilyas-yalcintas-radyo-dnin-konugu-oldu', - 'https://www.dreamturk.com.tr/programlar/', - 'https://www.dreamturk.com.tr/programlar/t-rap/bolumler/t-rap-keisan-ozel', - 'https://www.dreamtv.com.tr/dream-ozel/', - 'https://www.dreamtv.com.tr/dream-ozel/konserler/acik-sahne-dream-ozel', - 'https://www.kanald.com.tr/canli-yayin', - 'https://www.kanald.com.tr/sadakatsiz/fragmanlar/sadakatsiz-10-bolum-fragmani', - 'https://www.teve2.com.tr/canli-yayin', - 'https://www.teve2.com.tr/diziler/', - 'https://www.teve2.com.tr/diziler/guncel/oyle-bir-gecer-zaman-ki/bolumler/oyle-bir-gecer-zaman-ki-1-bolum', - 'https://www.teve2.com.tr/embed/', - 'https://www.teve2.com.tr/embed/55f6d5b8402011f264ec7f64', - 'https://www.teve2.com.tr/filmler/', - 'https://www.teve2.com.tr/filmler/guncel/yasamak-guzel-sey', - 'https://www.teve2.com.tr/programlar/', - 'https://www.teve2.com.tr/programlar/guncel/kelime-oyunu/bolumler/kelime-oyunu-800-bolum-19-12-2020', - ] + "https://www.cnnturk.com/canli-yayin", + "https://www.cnnturk.com/action/embedvideo/5fa56d065cf3b018a8dd0bbc", + "https://www.cnnturk.com/tv-cnn-turk/belgeseller/bir-zamanlar/bir-zamanlar-90lar-belgeseli", + "https://www.cnnturk.com/video/turkiye/polis-otomobiliyle-tur-atan-sahisla-ilgili-islem-baslatildi-video", + + "https://www.dreamturk.com.tr/canli-yayin-izle", + "https://www.dreamturk.com.tr/dream-turk-ozel/radyo-d/ilyas-yalcintas-radyo-dnin-konugu-oldu", + "https://www.dreamturk.com.tr/programlar/dream-10", + "https://www.dreamtv.com.tr/dream-ozel/konserler/acik-sahne-dream-ozel", + + "https://www.teve2.com.tr/canli-yayin", + "https://www.teve2.com.tr/diziler/guncel/oyle-bir-gecer-zaman-ki/bolumler/oyle-bir-gecer-zaman-ki-1-bolum", + "https://www.teve2.com.tr/embed/55f6d5b8402011f264ec7f64", + "https://www.teve2.com.tr/filmler/guncel/yasamak-guzel-sey", + "https://www.teve2.com.tr/programlar/guncel/kelime-oyunu/bolumler/kelime-oyunu-800-bolum-19-12-2020", - should_not_match = [ - 'https://www.dreamtv.com.tr/canli-yayin', + "https://www.kanald.com.tr/canli-yayin", + "https://www.kanald.com.tr/embed/5fda41a8ebc8302048167df6", + "https://www.kanald.com.tr/sadakatsiz/fragmanlar/sadakatsiz-10-bolum-fragmani", ]
plugins.dogan: fix broken streams

Live streams other than kanald were broken. I tried to fix them. I checked all of the example links from https://github.com/streamlink/streamlink/issues/3416#issuecomment-748613571 and from the test file. All links are working again.
Please see #5053 instead.
2022-12-29T14:07:22
streamlink/streamlink
5,063
streamlink__streamlink-5063
[ "5055" ]
871f7f7b3a438d7c901e4ba43ba8c64b6657dbf0
diff --git a/src/streamlink/plugins/ceskatelevize.py b/src/streamlink/plugins/ceskatelevize.py --- a/src/streamlink/plugins/ceskatelevize.py +++ b/src/streamlink/plugins/ceskatelevize.py @@ -5,131 +5,206 @@ $region Czechia """ +import json import logging import re +from urllib.parse import urlparse -from streamlink.plugin import Plugin, PluginError, pluginmatcher +from streamlink.plugin import Plugin, pluginmatcher from streamlink.plugin.api import validate from streamlink.stream.dash import DASHStream log = logging.getLogger(__name__) -@pluginmatcher(re.compile( - r"https://(?:www\.)?ceskatelevize\.cz/zive/\w+" -)) +@pluginmatcher(re.compile(r"https?://ct24\.ceskatelevize\.cz/")) +@pluginmatcher(re.compile(r"https?://decko\.ceskatelevize\.cz/")) +@pluginmatcher(re.compile(r"https?://sport\.ceskatelevize\.cz/")) +@pluginmatcher(re.compile(r"https?://(?:www\.)?ceskatelevize\.cz/zive/\w+")) class Ceskatelevize(Plugin): - _re_playlist_info = re.compile(r"{\"type\":\"([a-z]+)\",\"id\":\"([0-9]+)\"") - - def _get_streams(self): - self.session.http.headers.update({"Referer": self.url}) - schema_data = validate.Schema( - validate.parse_html(), - validate.xml_xpath_string(".//script[@id='__NEXT_DATA__'][text()]/text()"), - str, - validate.parse_json(), - { - "props": { - "pageProps": { - "data": { - "liveBroadcast": { - # "id": str, - "current": validate.any(None, { - "channel": str, - "channelName": str, - "legacyEncoder": str, - }), - "next": validate.any(None, { - "channel": str, - "channelName": str, - "legacyEncoder": str, - }) - } - } - } - } + schema_playlist = { + "playlist": [{ + "streamUrls": { + "main": validate.url(), }, - validate.get(("props", "pageProps", "data", "liveBroadcast")), - validate.union_get("current", "next"), - ) + }], + } - try: - data_current, data_next = self.session.http.get( - self.url, schema=schema_data) - except PluginError: - return - - log.debug(f"current={data_current!r}") - log.debug(f"next={data_next!r}") - - data = data_current or data_next - video_id = data["legacyEncoder"] - self.title = data["channelName"] - - _hash = self.session.http.get( - "https://www.ceskatelevize.cz/v-api/iframe-hash/", - schema=validate.Schema(str)) - res = self.session.http.get( - "https://www.ceskatelevize.cz/ivysilani/embed/iFramePlayer.php", - params={ - "hash": _hash, - "origin": "iVysilani", - "autoStart": "true", - "videoID": video_id, - }, - ) - - m = self._re_playlist_info.search(res.text) - if not m: - return - _type, _id = m.groups() - - data = self.session.http.post( + def get_stream_url(self, video_id): + url = self.session.http.post( "https://www.ceskatelevize.cz/ivysilani/ajax/get-client-playlist/", data={ - "playlist[0][type]": _type, - "playlist[0][id]": _id, + "playlist[0][type]": "channel", + "playlist[0][id]": video_id, "requestUrl": "/ivysilani/embed/iFramePlayer.php", "requestSource": "iVysilani", "type": "html", "canPlayDRM": "false", }, - headers={ - "x-addr": "127.0.0.1", - }, schema=validate.Schema( validate.parse_json(), { - validate.optional("streamingProtocol"): str, - "url": validate.any( - validate.url(), - "Error", - "error_region" - ) - } + "url": validate.url(), + }, + validate.get("url"), ), ) - if data["url"] in ["Error", "error_region"]: - log.error("This stream is not available") - return + return self.session.http.get( + url, + schema=validate.Schema( + validate.parse_json(), + self.schema_playlist, + validate.get(("playlist", 0, "streamUrls", "main")), + ), + ) + + def get_ct24(self): + self.id = 24 + self.title = "ČT24" + return 
self.get_stream_url(self.id) - url = self.session.http.get( - data["url"], + def get_decko(self): + self.id = 5 + self.title = "Déčko" + return self.get_stream_url(self.id) + + def get_sport(self, content): + video_id, key, date = validate.Schema( + validate.parse_html(), + validate.xml_xpath_string(".//section[@id='live']/@data-ctcomp-data"), + str, + validate.parse_json(), + { + "items": [{ + "items": [{ + validate.optional("video"): { + "data": { + "source": { + "playlist": [{ + "id": int, + "key": str, + "date": str, + "noDrmData": { + "id": int, + "key": str, + "drm": int, + "quality": str, + "assetId": str, + }, + }], + }, + }, + }, + }], + }], + }, + validate.get(("items", 0, "items", 0, "video", "data", "source", "playlist", 0)), + validate.union_get(("noDrmData", "id"), ("noDrmData", "key"), "date"), + ).validate(content) + + self.id = video_id + self.title = "ČT sport" + + return self.session.http.post( + "https://playlist.ceskatelevize.cz/", + data={ + "data": json.dumps( + { + "contentType": "live", + "items": [{ + "id": video_id, + "key": f"{key}", + "assetId": "CT4DRM", + "playerType": "dash", + "date": f"{date}", + "requestSource": "front-sport", + "drm": 0, + "quality": "web", + }], + }, + separators=(',', ':'), + ), + }, schema=validate.Schema( validate.parse_json(), { - "playlist": [{ - validate.optional("type"): str, - "streamUrls": { - "main": validate.url(), - } - }] + "RESULT": self.schema_playlist, }, - validate.get(("playlist", 0, "streamUrls", "main")) - ) + validate.get(("RESULT", "playlist", 0, "streamUrls", "main")), + ), ) - return DASHStream.parse_manifest(self.session, url) + + def get_channel(self, content): + data = validate.Schema( + validate.parse_html(), + validate.xml_xpath_string(".//script[@id='__NEXT_DATA__'][text()]/text()"), + str, + validate.parse_json(), + { + "props": { + "pageProps": { + validate.optional("data"): { + "liveBroadcast": { + "id": str, + "current": validate.none_or_all( + { + "id": str, + "channelName": str, + "isPlayable": bool, + }, + ), + "next": validate.none_or_all( + { + "id": str, + "channelName": str, + "isPlayable": bool, + }, + ), + }, + }, + }, + }, + }, + validate.get(("props", "pageProps")), + ).validate(content) + + if not data: + return + + log.debug(f"data={data}") + data = data.get("data").get("liveBroadcast") + self.id = data.get("id") + + data = data.get("current") or data.get("next") + if not data: + return + + self.title = data.get("channelName") + + return self.get_stream_url(self.id) + + def _get_streams(self): + res = self.session.http.get(self.url) + + if "://ct24" in res.url: + url = self.get_ct24() + elif "://decko" in res.url: + url = self.get_decko() + elif "://sport" in res.url: + url = self.get_sport(res.content) + else: + url = self.get_channel(res.content) + + if url: + res = self.session.http.head(url, allow_redirects=True) + log.debug(f"res.url={res.url}") + p = urlparse(res.url).path + if not p.split("/")[-1].startswith(str(self.id)): + log.error("This stream is not available") + return + else: + return DASHStream.parse_manifest(self.session, res.url) __plugin__ = Ceskatelevize
diff --git a/tests/plugins/test_ceskatelevize.py b/tests/plugins/test_ceskatelevize.py --- a/tests/plugins/test_ceskatelevize.py +++ b/tests/plugins/test_ceskatelevize.py @@ -6,23 +6,17 @@ class TestPluginCanHandleUrlCeskatelevize(PluginCanHandleUrl): __plugin__ = Ceskatelevize should_match = [ - "https://www.ceskatelevize.cz/zive/ct1/", - "https://www.ceskatelevize.cz/zive/ct2/", - "https://www.ceskatelevize.cz/zive/ct24/", - "https://www.ceskatelevize.cz/zive/ct26/", - "https://www.ceskatelevize.cz/zive/ct27/", - "https://www.ceskatelevize.cz/zive/ct28/", - "https://www.ceskatelevize.cz/zive/ct31/", - "https://www.ceskatelevize.cz/zive/ct32/", - "https://www.ceskatelevize.cz/zive/decko/", - "https://www.ceskatelevize.cz/zive/sport/", + "https://ceskatelevize.cz/zive/any", + "https://www.ceskatelevize.cz/zive/any", + "https://ct24.ceskatelevize.cz/", + "https://ct24.ceskatelevize.cz/any", + "https://decko.ceskatelevize.cz/", + "https://decko.ceskatelevize.cz/any", + "https://sport.ceskatelevize.cz/", + "https://sport.ceskatelevize.cz/any", ] should_not_match = [ - "http://decko.ceskatelevize.cz/zive/", - "http://www.ceskatelevize.cz/art/zive/", - "http://www.ceskatelevize.cz/ct1/zive/", - "http://www.ceskatelevize.cz/ct2/zive/", - "http://www.ceskatelevize.cz/ct24/", - "http://www.ceskatelevize.cz/sport/zive-vysilani/", + "https://ceskatelevize.cz/", + "https://www.ceskatelevize.cz/zive/", ]
plugins.ceskatelevize: URL updates needed for CT24 and CT Sport

### Checklist

- [X] This is a plugin issue and not a different kind of issue
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)

### Streamlink version

Latest stable release

### Description

Ceska Televize offers five channels. For three of them, ceskatelevize.py still works fine (CT1, CT 2, CT :D/Art). However, the remaining two channels have recently changed their URLs, i.e.

- CT 24: formerly https://www.ceskatelevize.cz/zive/ct24/ , now redirects to https://ct24.ceskatelevize.cz/#live
- CT Sport: formerly https://www.ceskatelevize.cz/zive/sport/ , now redirects to https://sport.ceskatelevize.cz/#live

These two new URLs are not recognised by ceskatelevize.py anymore, as it basically looks for www.ceskatelevize.cz/zive/... It would be great if you could update ceskatelevize.py to enable it again for CT 24 and CT Sport.

Please note that CT Sport is geoblocked for the Czech Republic. This is not the case for CT 24.

### Debug log

```text
Debug for CT24 with its now outdated URL:

[streamlinksrv][debug] Arguments:
[streamlinksrv][debug] url=https://www.ceskatelevize.cz/zive/ct24/
[streamlinksrv][debug] stream=['best']
[streamlinksrv][info] Found matching plugin ceskatelevize for URL https://www.ceskatelevize.cz/zive/ct24/
[streamlinksrv][error] No playable streams found on this URL: https://www.ceskatelevize.cz/zive/ct24/
[streamlinksrv][debug] Send Offline clip
```
2023-01-02T16:55:21
streamlink/streamlink
5,064
streamlink__streamlink-5064
[ "5062" ]
c81ceff5711b6532e1664b15cff7e87ca391d583
diff --git a/src/streamlink/plugins/euronews.py b/src/streamlink/plugins/euronews.py --- a/src/streamlink/plugins/euronews.py +++ b/src/streamlink/plugins/euronews.py @@ -4,73 +4,110 @@ $type live """ +import logging import re -from urllib.parse import urlparse -from streamlink.plugin import Plugin, pluginmatcher +from streamlink.plugin import Plugin, PluginError, pluginmatcher from streamlink.plugin.api import validate from streamlink.stream.hls import HLSStream from streamlink.stream.http import HTTPStream -from streamlink.utils.url import update_scheme + + +log = logging.getLogger(__name__) @pluginmatcher(re.compile( - r'https?://(?:(?P<subdomain>\w+)\.)?euronews\.com/' + r"https?://(?:(?P<subdomain>\w+)\.)?euronews\.com/(?P<live>live$)?" )) class Euronews(Plugin): - API_URL = "https://{subdomain}.euronews.com/api/watchlive.json" - - def _get_vod_stream(self): - root = self.session.http.get(self.url, schema=validate.Schema( - validate.parse_html() - )) - - video_url = root.xpath("string(.//meta[@property='og:video'][1]/@content)") - if video_url: - return dict(vod=HTTPStream(self.session, video_url)) + API_URL = "https://{subdomain}.euronews.com/api/live/data" + + def _get_live(self): + if not self.match["live"] or not self.match["subdomain"]: + return + + try: + log.debug("Querying live API") + stream_url = self.session.http.get( + self.API_URL.format(subdomain=self.match["subdomain"]), + params={"locale": self.match["subdomain"]}, + schema=validate.Schema( + validate.parse_json(), + {"videoPrimaryUrl": validate.url(path=validate.endswith(".m3u8"))}, + validate.get("videoPrimaryUrl"), + ), + ) + except PluginError: + pass + else: + return HLSStream.parse_variant_playlist(self.session, stream_url) - video_id = root.xpath("string(.//div[@data-google-src]/@data-video-id)") - if video_id: + def _get_embed(self, root): + schema_video_id = validate.Schema( + validate.xml_xpath_string(".//div[@data-video][1]/@data-video"), + str, + validate.parse_json(), + { + "player": "pfp", + "videoId": str, + }, + validate.get("videoId"), + ) + try: + log.debug("Looking for YouTube video ID") + video_id = schema_video_id.validate(root) + except PluginError: + pass + else: return self.session.streams(f"https://www.youtube.com/watch?v={video_id}") - video_url = root.xpath("string(.//iframe[@id='pfpPlayer'][starts-with(@src,'https://www.youtube.com/')][1]/@src)") - if video_url: + schema_video_url = validate.Schema( + validate.xml_xpath_string(".//iframe[@id='pfpPlayer'][starts-with(@src,'https://www.youtube.com/')][1]/@src"), + str, + ) + try: + log.debug("Looking for embedded YouTube iframe") + video_url = schema_video_url.validate(root) + except PluginError: + pass + else: return self.session.streams(video_url) - def _get_live_streams(self): - video_id = self.session.http.get(self.url, schema=validate.Schema( - validate.parse_html(), - validate.xml_xpath_string(".//div[@data-google-src]/@data-video-id") - )) + def _get_vod(self, root): + schema_vod = validate.Schema( + validate.any( + validate.all( + validate.xml_xpath_string(".//meta[@property='og:video'][1]/@content"), + str, + ), + validate.all( + validate.xml_xpath_string(".//div[@data-video][1]/@data-video"), + str, + validate.parse_json(), + {"url": str}, + validate.get("url"), + ), + ), + validate.url(), + ) + try: + log.debug("Looking for VOD URL") + video_url = schema_vod.validate(root) + except PluginError: + pass + else: + return dict(vod=HTTPStream(self.session, video_url)) - if video_id: - return 
self.session.streams(f"https://www.youtube.com/watch?v={video_id}") + def _get_streams(self): + live = self._get_live() + if live: + return live - info_url = self.session.http.get(self.API_URL.format(subdomain=self.match.group("subdomain")), schema=validate.Schema( - validate.parse_json(), - {"url": validate.url()}, - validate.get("url"), - validate.transform(lambda url: update_scheme("https://", url)) - )) - hls_url = self.session.http.get(info_url, schema=validate.Schema( - validate.parse_json(), - { - "status": "ok", - "protocol": "hls", - "primary": validate.url() - }, - validate.get("primary") + root = self.session.http.get(self.url, schema=validate.Schema( + validate.parse_html(), )) - return HLSStream.parse_variant_playlist(self.session, hls_url) - - def _get_streams(self): - parsed = urlparse(self.url) - - if parsed.path == "/live": - return self._get_live_streams() - else: - return self._get_vod_stream() + return self._get_embed(root) or self._get_vod(root) __plugin__ = Euronews
diff --git a/tests/plugins/test_euronews.py b/tests/plugins/test_euronews.py --- a/tests/plugins/test_euronews.py +++ b/tests/plugins/test_euronews.py @@ -6,19 +6,19 @@ class TestPluginCanHandleUrlEuronews(PluginCanHandleUrl): __plugin__ = Euronews should_match = [ - "http://www.euronews.com/live", - "http://fr.euronews.com/live", - "http://de.euronews.com/live", - "http://it.euronews.com/live", - "http://es.euronews.com/live", - "http://pt.euronews.com/live", - "http://ru.euronews.com/live", - "http://ua.euronews.com/live", - "http://tr.euronews.com/live", - "http://gr.euronews.com/live", - "http://hu.euronews.com/live", - "http://fa.euronews.com/live", - "http://arabic.euronews.com/live", - "http://www.euronews.com/2017/05/10/peugeot-expects-more-opel-losses-this-year", - "http://fr.euronews.com/2017/05/10/l-ag-de-psa-approuve-le-rachat-d-opel" + "https://www.euronews.com/live", + "https://fr.euronews.com/live", + "https://de.euronews.com/live", + "https://it.euronews.com/live", + "https://es.euronews.com/live", + "https://pt.euronews.com/live", + "https://ru.euronews.com/live", + "https://ua.euronews.com/live", + "https://tr.euronews.com/live", + "https://gr.euronews.com/live", + "https://hu.euronews.com/live", + "https://fa.euronews.com/live", + "https://arabic.euronews.com/live", + "https://www.euronews.com/video", + "https://www.euronews.com/2023/01/02/giving-europe-a-voice-television-news-network-euronews-turns-30", ]
plugins.euronews: 404 Client Error

### Checklist

- [X] This is a plugin issue and not a different kind of issue
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)

### Streamlink version

Latest stable release

### Description

Hello, I'm facing an issue with euronews: opening the live stream fails with a 404 Client Error on the watchlive.json API endpoint (see the debug log below).

### Debug log

```text
/usr/local/bin/streamlink --loglevel debug "https://fr.euronews.com/live"
[cli][info] streamlink is running as root! Be careful!
[cli][debug] OS: Linux-5.15.0-56-generic-x86_64-with-glibc2.29
[cli][debug] Python: 3.8.10
[cli][debug] Streamlink: 5.1.2
[cli][debug] Dependencies:
[cli][debug] certifi: 2019.11.28
[cli][debug] isodate: 0.6.0
[cli][debug] lxml: 4.8.0
[cli][debug] pycountry: 22.3.5
[cli][debug] pycryptodome: 3.9.9
[cli][debug] PySocks: 1.7.1
[cli][debug] requests: 2.27.1
[cli][debug] urllib3: 1.26.13
[cli][debug] websocket-client: 1.3.2
[cli][debug] Arguments:
[cli][debug] url=https://fr.euronews.com/live
[cli][debug] --loglevel=debug
[cli][info] Found matching plugin euronews for URL https://fr.euronews.com/live
error: Unable to open URL: https://fr.euronews.com/api/watchlive.json (404 Client Error: Not Found for url: https://fr.euronews.com/api/watchlive.json)
```
Seems like it's broken on their website. It's not loading via a web browser either. Neither is `https://www.euronews.com/live`, for that matter.

Yes, https://www.euronews.com/live is actually loading for me? https://i.gyazo.com/e7cc397999ce1d5e4776cb28f025bd4c.mp4

Ah, one of my add-ons seemed to be stopping it.

Why does this plugin even exist? It's just a wrapper for YouTube live streams and HTTPStream VODs. The API stuff for non-embedded live streams doesn't seem to exist anymore. At least I can't find anything. This means that the plugin should be removed, even though a fix would be trivial for the current issue, where the YouTube stream ID has been moved to a different data- attribute with a small JSON string. Live streams can be found here: https://www.youtube.com/@euronews/streams

Apparently, some languages are not available on YT, and they are using a third-party streaming service with an HLS stream that's retrieved from a different API endpoint.
2023-01-02T16:57:47