[cleanup] Misc (#8598)

Authored by: bashonly, pukkandan, seproDev, Grub4K

Co-authored-by: bashonly <bashonly@protonmail.com>
Co-authored-by: pukkandan <pukkandan.ytdlp@gmail.com>
Co-authored-by: sepro <4618135+seproDev@users.noreply.github.com>
Committed by: Simon Sawicki (via GitHub) on 2023-12-30 22:27:36 +01:00
parent 5f009a094f
commit f9fb3ce86e
30 changed files with 77 additions and 82 deletions


@@ -36,8 +36,8 @@ jobs:
       fail-fast: false
       matrix:
         os: [ubuntu-latest]
-        # CPython 3.11 is in quick-test
-        python-version: ['3.8', '3.9', '3.10', '3.12', pypy-3.8, pypy-3.10]
+        # CPython 3.8 is in quick-test
+        python-version: ['3.9', '3.10', '3.11', '3.12', pypy-3.8, pypy-3.10]
         include:
         # atleast one of each CPython/PyPy tests must be in windows
         - os: windows-latest


@@ -10,10 +10,10 @@ jobs:
     runs-on: ubuntu-latest
     steps:
     - uses: actions/checkout@v4
-    - name: Set up Python 3.11
+    - name: Set up Python 3.8
       uses: actions/setup-python@v4
       with:
-        python-version: '3.11'
+        python-version: '3.8'
     - name: Install test requirements
       run: pip install pytest -r requirements.txt
     - name: Run tests


@@ -29,6 +29,7 @@ You can also find lists of all [contributors of yt-dlp](CONTRIBUTORS) and [autho
 [![gh-sponsor](https://img.shields.io/badge/_-Github-white.svg?logo=github&labelColor=555555&style=for-the-badge)](https://github.com/sponsors/coletdjnz)
 * Improved plugin architecture
+* Rewrote the networking infrastructure, implemented support for `requests`
 * YouTube improvements including: age-gate bypass, private playlists, multiple-clients (to avoid throttling) and a lot of under-the-hood improvements
 * Added support for new websites YoutubeWebArchive, MainStreaming, PRX, nzherald, Mediaklikk, StarTV etc
 * Improved/fixed support for Patreon, panopto, gfycat, itv, pbs, SouthParkDE etc
@@ -46,16 +47,17 @@ You can also find lists of all [contributors of yt-dlp](CONTRIBUTORS) and [autho
 ## [bashonly](https://github.com/bashonly)
-* `--update-to`, automated release, nightly builds
-* `--cookies-from-browser` support for Firefox containers
-* Added support for new websites Genius, Kick, NBCStations, Triller, VideoKen etc
-* Improved/fixed support for Anvato, Brightcove, Instagram, ParamountPlus, Reddit, SlidesLive, TikTok, Twitter, Vimeo etc
+* `--update-to`, self-updater rewrite, automated/nightly/master releases
+* `--cookies-from-browser` support for Firefox containers, external downloader cookie handling overhaul
+* Added support for new websites like Dacast, Kick, NBCStations, Triller, VideoKen, Weverse, WrestleUniverse etc
+* Improved/fixed support for Anvato, Brightcove, Reddit, SlidesLive, TikTok, Twitter, Vimeo etc

 ## [Grub4K](https://github.com/Grub4K)
-[![ko-fi](https://img.shields.io/badge/_-Ko--fi-red.svg?logo=kofi&labelColor=555555&style=for-the-badge)](https://ko-fi.com/Grub4K) [![gh-sponsor](https://img.shields.io/badge/_-Github-white.svg?logo=github&labelColor=555555&style=for-the-badge)](https://github.com/sponsors/Grub4K)
+[![gh-sponsor](https://img.shields.io/badge/_-Github-white.svg?logo=github&labelColor=555555&style=for-the-badge)](https://github.com/sponsors/Grub4K) [![ko-fi](https://img.shields.io/badge/_-Ko--fi-red.svg?logo=kofi&labelColor=555555&style=for-the-badge)](https://ko-fi.com/Grub4K)
-* `--update-to`, automated release, nightly builds
-* Rework internals like `traverse_obj`, various core refactors and bugs fixes
-* Helped fix crunchyroll, Twitter, wrestleuniverse, wistia, slideslive etc
+* `--update-to`, self-updater rewrite, automated/nightly/master releases
+* Reworked internals like `traverse_obj`, various core refactors and bugs fixes
+* Implemented proper progress reporting for parallel downloads
+* Improved/fixed/added Bundestag, crunchyroll, pr0gramm, Twitter, WrestleUniverse etc


@@ -159,6 +159,7 @@ Some of yt-dlp's default options are different from that of youtube-dl and youtu
 * yt-dlp versions between 2021.09.01 and 2023.01.02 applies `--match-filter` to nested playlists. This was an unintentional side-effect of [8f18ac](https://github.com/yt-dlp/yt-dlp/commit/8f18aca8717bb0dd49054555af8d386e5eda3a88) and is fixed in [d7b460](https://github.com/yt-dlp/yt-dlp/commit/d7b460d0e5fc710950582baed2e3fc616ed98a80). Use `--compat-options playlist-match-filter` to revert this
 * yt-dlp versions between 2021.11.10 and 2023.06.21 estimated `filesize_approx` values for fragmented/manifest formats. This was added for convenience in [f2fe69](https://github.com/yt-dlp/yt-dlp/commit/f2fe69c7b0d208bdb1f6292b4ae92bc1e1a7444a), but was reverted in [0dff8e](https://github.com/yt-dlp/yt-dlp/commit/0dff8e4d1e6e9fb938f4256ea9af7d81f42fd54f) due to the potentially extreme inaccuracy of the estimated values. Use `--compat-options manifest-filesize-approx` to keep extracting the estimated values
 * yt-dlp uses modern http client backends such as `requests`. Use `--compat-options prefer-legacy-http-handler` to prefer the legacy http handler (`urllib`) to be used for standard http requests.
+* The sub-module `swfinterp` is removed.

 For ease of use, a few more compat options are available:
@@ -299,7 +300,7 @@ While all the other dependencies are optional, `ffmpeg` and `ffprobe` are highly
 * [**pycryptodomex**](https://github.com/Legrandin/pycryptodome)\* - For decrypting AES-128 HLS streams and various other data. Licensed under [BSD-2-Clause](https://github.com/Legrandin/pycryptodome/blob/master/LICENSE.rst)
 * [**phantomjs**](https://github.com/ariya/phantomjs) - Used in extractors where javascript needs to be run. Licensed under [BSD-3-Clause](https://github.com/ariya/phantomjs/blob/master/LICENSE.BSD)
-* [**secretstorage**](https://github.com/mitya57/secretstorage) - For `--cookies-from-browser` to access the **Gnome** keyring while decrypting cookies of **Chromium**-based browsers on **Linux**. Licensed under [BSD-3-Clause](https://github.com/mitya57/secretstorage/blob/master/LICENSE)
+* [**secretstorage**](https://github.com/mitya57/secretstorage)\* - For `--cookies-from-browser` to access the **Gnome** keyring while decrypting cookies of **Chromium**-based browsers on **Linux**. Licensed under [BSD-3-Clause](https://github.com/mitya57/secretstorage/blob/master/LICENSE)
 * Any external downloader that you want to use with `--downloader`

 ### Deprecated


@@ -114,5 +114,11 @@
         "action": "add",
         "when": "f04b5bedad7b281bee9814686bba1762bae092eb",
         "short": "[priority] Security: [[CVE-2023-46121](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-46121)] Patch [Generic Extractor MITM Vulnerability via Arbitrary Proxy Injection](https://github.com/yt-dlp/yt-dlp/security/advisories/GHSA-3ch3-jhc6-5r8x)\n\t- Disallow smuggling of arbitrary `http_headers`; extractors now only use specific headers"
+    },
+    {
+        "action": "change",
+        "when": "15f22b4880b6b3f71f350c64d70976ae65b9f1ca",
+        "short": "[webvtt] Allow spaces before newlines for CueBlock (#7681)",
+        "authors": ["TSRBerry"]
     }
 ]


@@ -40,20 +40,6 @@ class CommitGroup(enum.Enum):
         return {
             name: group
             for group, names in {
-                cls.CORE: {
-                    'aes',
-                    'cache',
-                    'compat_utils',
-                    'compat',
-                    'cookies',
-                    'dependencies',
-                    'formats',
-                    'jsinterp',
-                    'outtmpl',
-                    'plugins',
-                    'update',
-                    'utils',
-                },
                 cls.MISC: {
                     'build',
                     'ci',
@@ -404,9 +390,9 @@ class CommitRange:
             if not group:
                 if self.EXTRACTOR_INDICATOR_RE.search(commit.short):
                     group = CommitGroup.EXTRACTOR
+                    logger.error(f'Assuming [ie] group for {commit.short!r}')
                 else:
-                    group = CommitGroup.POSTPROCESSOR
-                    logger.warning(f'Failed to map {commit.short!r}, selected {group.name.lower()}')
+                    group = CommitGroup.CORE

             commit_info = CommitInfo(
                 details, sub_details, message.strip(),


@@ -9,11 +9,7 @@ sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
 import re

-from devscripts.utils import (
-    get_filename_args,
-    read_file,
-    write_file,
-)
+from devscripts.utils import get_filename_args, read_file, write_file

 VERBOSE_TMPL = '''
   - type: checkboxes


@@ -1,6 +1,5 @@
 mutagen
 pycryptodomex
-websockets
 brotli; implementation_name=='cpython'
 brotlicffi; implementation_name!='cpython'
 certifi


@@ -730,7 +730,7 @@ class TestYoutubeDL(unittest.TestCase):
                 self.assertEqual(got_dict.get(info_field), expected, info_field)
             return True

-        test('%()j', (expect_same_infodict, str))
+        test('%()j', (expect_same_infodict, None))

         # NA placeholder
         NA_TEST_OUTTMPL = '%(uploader_date)s-%(width)d-%(x|def)s-%(id)s.%(ext)s'


@@ -9,7 +9,7 @@ sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
 from test.helper import FakeYDL, report_warning

-from yt_dlp.update import Updater, UpdateInfo
+from yt_dlp.update import UpdateInfo, Updater

 # XXX: Keep in sync with yt_dlp.update.UPDATE_SOURCES


@@ -2110,6 +2110,8 @@ Line 1
         self.assertEqual(traverse_obj(_TEST_DATA, (..., {str_or_none})),
                          [item for item in map(str_or_none, _TEST_DATA.values()) if item is not None],
                          msg='Function in set should be a transformation')
+        self.assertEqual(traverse_obj(_TEST_DATA, ('fail', {lambda _: 'const'})), 'const',
+                         msg='Function in set should always be called')
         if __debug__:
             with self.assertRaises(Exception, msg='Sets with length != 1 should raise in debug'):
                 traverse_obj(_TEST_DATA, set())
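The added assertion captures a subtlety of `traverse_obj` worth spelling out: a one-element set wraps a transformation function, and that function runs even when the preceding path matched nothing. A minimal sketch of both behaviours (assuming `traverse_obj` is importable from `yt_dlp.utils`):

```python
from yt_dlp.utils import traverse_obj

data = {'name': 'yt-dlp'}

# A function in a one-element set acts as a transformation on the result...
print(traverse_obj(data, ('name', {str.upper})))  # 'YT-DLP'

# ...and is called even if the path itself failed to match anything,
# which is exactly what the new test pins down.
print(traverse_obj(data, ('missing', {lambda _: 'const'})))  # 'const'
```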


@@ -1 +1 @@
-@py -bb -Werror -Xdev "%~dp0yt_dlp\__main__.py" %*
+@py -Werror -Xdev "%~dp0yt_dlp\__main__.py" %*


@@ -1,2 +1,2 @@
 #!/usr/bin/env sh
-exec "${PYTHON:-python3}" -bb -Werror -Xdev "$(dirname "$(realpath "$0")")/yt_dlp/__main__.py" "$@"
+exec "${PYTHON:-python3}" -Werror -Xdev "$(dirname "$(realpath "$0")")/yt_dlp/__main__.py" "$@"


@@ -60,7 +60,13 @@ from .postprocessor import (
     get_postprocessor,
 )
 from .postprocessor.ffmpeg import resolve_mapping as resolve_recode_mapping
-from .update import REPOSITORY, _get_system_deprecation, _make_label, current_git_head, detect_variant
+from .update import (
+    REPOSITORY,
+    _get_system_deprecation,
+    _make_label,
+    current_git_head,
+    detect_variant,
+)
 from .utils import (
     DEFAULT_OUTTMPL,
     IDENTITY,


@@ -152,7 +152,7 @@ class BanByeChannelIE(BanByeBaseIE):
             'sort': 'new',
             'limit': self._PAGE_SIZE,
             'offset': page_num * self._PAGE_SIZE,
-        }, note=f'Downloading page {page_num+1}')
+        }, note=f'Downloading page {page_num + 1}')

         return [
             self.url_result(f"{self._VIDEO_BASE}/{video['_id']}", BanByeIE)
             for video in data['items']


@@ -53,21 +53,6 @@ class DuoplayIE(InfoExtractor):
             'episode_id': 14,
             'release_year': 2010,
         },
-    }, {
-        'note': 'Movie',
-        'url': 'https://duoplay.ee/4325/naljamangud',
-        'md5': '2b0bcac4159a08b1844c2bfde06b1199',
-        'info_dict': {
-            'id': '4325',
-            'ext': 'mp4',
-            'title': 'Näljamängud',
-            'thumbnail': r're:https://.+\.jpg(?:\?c=\d+)?$',
-            'description': 'md5:fb35f5eb2ff46cdb82e4d5fbe7b49a13',
-            'cast': ['Jennifer Lawrence', 'Josh Hutcherson', 'Liam Hemsworth'],
-            'upload_date': '20231109',
-            'timestamp': 1699552800,
-            'release_year': 2012,
-        },
     }, {
         'note': 'Movie without expiry',
         'url': 'https://duoplay.ee/5501/pilvede-all.-neljas-ode',


@@ -173,8 +173,8 @@ class FloatplaneIE(InfoExtractor):
                 'formats': formats,
             })

-        uploader_url = format_field(traverse_obj(
-            post_data, 'creator'), 'urlname', 'https://www.floatplane.com/channel/%s/home', default=None)
+        uploader_url = format_field(
+            post_data, [('creator', 'urlname')], 'https://www.floatplane.com/channel/%s/home') or None
         channel_url = urljoin(f'{uploader_url}/', traverse_obj(post_data, ('channel', 'urlname')))

         post_info = {
@@ -248,7 +248,7 @@ class FloatplaneChannelIE(InfoExtractor):
         for post in page_data or []:
             yield self.url_result(
                 f'https://www.floatplane.com/post/{post["id"]}',
-                ie=FloatplaneIE, video_id=post['id'], video_title=post.get('title'),
+                FloatplaneIE, id=post['id'], title=post.get('title'),
                 release_timestamp=parse_iso8601(post.get('releaseDate')))

     def _real_extract(self, url):
@@ -264,5 +264,5 @@ class FloatplaneChannelIE(InfoExtractor):
         return self.playlist_result(OnDemandPagedList(functools.partial(
             self._fetch_page, display_id, creator_data['id'], channel_data.get('id')), self._PAGE_SIZE),
-            display_id, playlist_title=channel_data.get('title') or creator_data.get('title'),
-            playlist_description=channel_data.get('about') or creator_data.get('about'))
+            display_id, title=channel_data.get('title') or creator_data.get('title'),
+            description=channel_data.get('about') or creator_data.get('about'))
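The rewritten `uploader_url` call leans on the fact that `format_field`'s second argument is a `traverse_obj`-style path, so the nested lookup no longer needs a separate `traverse_obj` call. A minimal sketch with made-up data (assuming `format_field` from `yt_dlp.utils`):

```python
from yt_dlp.utils import format_field

post_data = {'creator': {'urlname': 'example-creator'}}  # hypothetical API payload

# Traverses ('creator', 'urlname') and interpolates the value into the
# template; a missing path yields the default '' (falsy), hence `or None`.
uploader_url = format_field(
    post_data, [('creator', 'urlname')],
    'https://www.floatplane.com/channel/%s/home') or None
print(uploader_url)  # https://www.floatplane.com/channel/example-creator/home
```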


@@ -35,8 +35,8 @@ from ..utils import (
     unified_timestamp,
     unsmuggle_url,
     update_url_query,
-    urlhandle_detect_ext,
     url_or_none,
+    urlhandle_detect_ext,
     urljoin,
     variadic,
     xpath_attr,


@@ -536,7 +536,7 @@ class PanoptoListIE(PanoptoBaseIE):
         }

         response = self._call_api(
-            base_url, '/Services/Data.svc/GetSessions', f'{display_id} page {page+1}',
+            base_url, '/Services/Data.svc/GetSessions', f'{display_id} page {page + 1}',
             data={'queryParameters': params}, fatal=False)

         for result in get_first(response, 'Results', default=[]):


@@ -264,7 +264,7 @@ class RadioFranceLiveIE(RadioFranceBaseIE):
     }

-class RadioFrancePlaylistBase(RadioFranceBaseIE):
+class RadioFrancePlaylistBaseIE(RadioFranceBaseIE):
     """Subclasses must set _METADATA_KEY"""

     def _call_api(self, content_id, cursor, page_num):
@@ -308,7 +308,7 @@ class RadioFrancePlaylistBase(RadioFranceBaseIE):
     })})

-class RadioFrancePodcastIE(RadioFrancePlaylistBase):
+class RadioFrancePodcastIE(RadioFrancePlaylistBaseIE):
     _VALID_URL = rf'''(?x)
         {RadioFranceBaseIE._VALID_URL_BASE}
         /(?:{RadioFranceBaseIE._STATIONS_RE})
@@ -369,7 +369,7 @@ class RadioFrancePodcastIE(RadioFrancePlaylistBase):
             note=f'Downloading page {page_num}', query={'pageCursor': cursor})

-class RadioFranceProfileIE(RadioFrancePlaylistBase):
+class RadioFranceProfileIE(RadioFrancePlaylistBaseIE):
     _VALID_URL = rf'{RadioFranceBaseIE._VALID_URL_BASE}/personnes/(?P<id>[\w-]+)'

     _TESTS = [{


@@ -70,7 +70,7 @@ class WordpressPlaylistEmbedIE(InfoExtractor):
             'height': int_or_none(traverse_obj(track, ('dimensions', 'original', 'height'))),
             'width': int_or_none(traverse_obj(track, ('dimensions', 'original', 'width'))),
         } for track in traverse_obj(playlist_json, ('tracks', ...), expected_type=dict)]
-        yield self.playlist_result(entries, self._generic_id(url) + f'-wp-playlist-{i+1}', 'Wordpress Playlist')
+        yield self.playlist_result(entries, self._generic_id(url) + f'-wp-playlist-{i + 1}', 'Wordpress Playlist')

 class WordpressMiniAudioPlayerEmbedIE(InfoExtractor):


@@ -5297,6 +5297,7 @@ class YoutubeTabBaseInfoExtractor(YoutubeBaseInfoExtractor):
                     # See: https://github.com/yt-dlp/yt-dlp/issues/116
                     if not traverse_obj(data, 'contents', 'currentVideoEndpoint', 'onResponseReceivedActions'):
                         retry.error = ExtractorError('Incomplete yt initial data received')
+                        data = None
                         continue

             return webpage, data
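The one-line `data = None` matters because the retry loop can exhaust its attempts: without the reset, the last incomplete payload would survive the loop and be returned as if it were valid. A simplified, self-contained sketch of the pattern (stand-in helpers, not the extractor's actual code):

```python
def attempts():  # hypothetical: every attempt yields an incomplete payload
    yield {'incomplete': True}
    yield {'incomplete': True}

data = None
for data in attempts():
    if data.get('incomplete'):
        data = None  # discard the bad payload before retrying
        continue
    break

print(data)  # None -- without the reset, the stale incomplete dict would leak out
```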


@@ -28,4 +28,3 @@ except ImportError:
     pass
 except Exception as e:
     warnings.warn(f'Failed to import "websockets" request handler: {e}' + bug_reports_message())
-


@@ -219,7 +219,7 @@ def _socket_connect(ip_addr, timeout, source_address):
             sock.bind(source_address)
         sock.connect(sa)
         return sock
-    except socket.error:
+    except OSError:
         sock.close()
         raise
@@ -237,7 +237,7 @@ def create_socks_proxy_socket(dest_addr, proxy_args, proxy_ip_addr, timeout, sou
             sock.bind(source_address)
         sock.connect(dest_addr)
         return sock
-    except socket.error:
+    except OSError:
         sock.close()
         raise
@@ -255,7 +255,7 @@ def create_connection(
     host, port = address
     ip_addrs = socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM)
     if not ip_addrs:
-        raise socket.error('getaddrinfo returns an empty list')
+        raise OSError('getaddrinfo returns an empty list')
     if source_address is not None:
         af = socket.AF_INET if ':' not in source_address[0] else socket.AF_INET6
         ip_addrs = [addr for addr in ip_addrs if addr[0] == af]
@@ -272,7 +272,7 @@ def create_connection(
             # https://bugs.python.org/issue36820
             err = None
             return sock
-        except socket.error as e:
+        except OSError as e:
             err = e

     try:
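These `except`/`raise` substitutions are behaviour-preserving: since Python 3.3 (PEP 3151), `socket.error` is merely an alias of `OSError`, so only the spelling is modernized. A one-line check:

```python
import socket

# `except OSError` catches exactly what `except socket.error` did.
assert socket.error is OSError
```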


@@ -188,6 +188,7 @@ class RequestsSession(requests.sessions.Session):
     """
     Ensure unified redirect method handling with our urllib redirect handler.
     """
+
     def rebuild_method(self, prepared_request, response):
         new_method = get_redirect_method(prepared_request.method, response.status_code)
@@ -218,6 +219,7 @@ class Urllib3LoggingFilter(logging.Filter):
 class Urllib3LoggingHandler(logging.Handler):
     """Redirect urllib3 logs to our logger"""
+
     def __init__(self, logger, *args, **kwargs):
         super().__init__(*args, **kwargs)
         self._logger = logger
@@ -367,7 +369,7 @@ class SocksHTTPConnection(urllib3.connection.HTTPConnection):
                 self, f'Connection to {self.host} timed out. (connect timeout={self.timeout})') from e
         except SocksProxyError as e:
             raise urllib3.exceptions.ProxyError(str(e), e) from e
-        except (OSError, socket.error) as e:
+        except OSError as e:
             raise urllib3.exceptions.NewConnectionError(
                 self, f'Failed to establish a new connection: {e}') from e


@@ -5,20 +5,26 @@ import logging
 import ssl
 import sys

-from ._helper import create_connection, select_proxy, make_socks_proxy_opts, create_socks_proxy_socket
-from .common import Response, register_rh, Features
+from ._helper import (
+    create_connection,
+    create_socks_proxy_socket,
+    make_socks_proxy_opts,
+    select_proxy,
+)
+from .common import Features, Response, register_rh
 from .exceptions import (
     CertificateVerifyError,
     HTTPError,
+    ProxyError,
     RequestError,
     SSLError,
-    TransportError, ProxyError,
+    TransportError,
 )
 from .websocket import WebSocketRequestHandler, WebSocketResponse
 from ..compat import functools
 from ..dependencies import websockets
-from ..utils import int_or_none
 from ..socks import ProxyError as SocksProxyError
+from ..utils import int_or_none

 if not websockets:
     raise ImportError('websockets is not installed')


@@ -2,7 +2,7 @@ from __future__ import annotations

 import abc

-from .common import Response, RequestHandler
+from .common import RequestHandler, Response

 class WebSocketResponse(Response):


@@ -49,7 +49,7 @@ class Socks5AddressType:
     ATYP_IPV6 = 0x04

-class ProxyError(socket.error):
+class ProxyError(OSError):
     ERR_SUCCESS = 0x00

     def __init__(self, code=None, msg=None):
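Rebasing `ProxyError` onto `OSError` is the same alias swap as above (its old base `socket.error` is `OSError`), and it keeps SOCKS failures catchable by generic `OSError` handlers. A small sketch (assuming `yt_dlp.socks` is importable):

```python
from yt_dlp.socks import ProxyError

try:
    raise ProxyError(msg='connection through SOCKS proxy failed')
except OSError as e:  # ProxyError is still an OSError subclass
    print(f'caught as OSError: {e}')
```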


@@ -558,7 +558,7 @@ class LenientJSONDecoder(json.JSONDecoder):
                 s = self._close_object(e)
                 if s is not None:
                     continue
-            raise type(e)(f'{e.msg} in {s[e.pos-10:e.pos+10]!r}', s, e.pos)
+            raise type(e)(f'{e.msg} in {s[e.pos - 10:e.pos + 10]!r}', s, e.pos)
         assert False, 'Too many attempts to decode JSON'
@@ -1885,6 +1885,7 @@ def setproctitle(title):
     buf = ctypes.create_string_buffer(len(title_bytes))
     buf.value = title_bytes
     try:
+        # PR_SET_NAME = 15      Ref: /usr/include/linux/prctl.h
         libc.prctl(15, buf, 0, 0, 0)
     except AttributeError:
         return  # Strange libc, just skip this
@@ -2260,6 +2261,9 @@ class PagedList:
             raise self.IndexError()
         return entries[0]

+    def __bool__(self):
+        return bool(self.getslice(0, 1))
+

 class OnDemandPagedList(PagedList):
     """Download pages until a page with less than maximum results"""
@@ -5070,7 +5074,7 @@ def truncate_string(s, left, right=0):
     assert left > 3 and right >= 0
     if s is None or len(s) <= left + right:
         return s
-    return f'{s[:left-3]}...{s[-right:] if right else ""}'
+    return f'{s[:left - 3]}...{s[-right:] if right else ""}'

 def orderedSet_from_options(options, alias_dict, *, use_regex=False, start=None):
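For reference, the spacing fix does not change what `truncate_string` returns (a sketch, assuming it is importable from `yt_dlp.utils`):

```python
from yt_dlp.utils import truncate_string

# Keeps `left - 3` leading characters plus '...', and optionally `right`
# trailing characters; strings that already fit pass through unchanged.
print(truncate_string('abcdefghijklmnop', left=8))           # 'abcde...'
print(truncate_string('abcdefghijklmnop', left=8, right=3))  # 'abcde...nop'
print(truncate_string('short', left=8))                      # 'short'
```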


@@ -23,7 +23,7 @@ def traverse_obj(
     >>> obj = [{}, {"key": "value"}]
     >>> traverse_obj(obj, (1, "key"))
-    "value"
+    'value'

     Each of the provided `paths` is tested and the first producing a valid result will be returned.
     The next path will also be tested if the path branched but no results could be found.