# CONTRIBUTING TO YT-DLP

- [OPENING AN ISSUE](#opening-an-issue)
    - [Is the description of the issue itself sufficient?](#is-the-description-of-the-issue-itself-sufficient)
    - [Are you using the latest version?](#are-you-using-the-latest-version)
    - [Is the issue already documented?](#is-the-issue-already-documented)
    - [Why are existing options not enough?](#why-are-existing-options-not-enough)
    - [Have you read and understood the changes between youtube-dl and yt-dlp?](#have-you-read-and-understood-the-changes-between-youtube-dl-and-yt-dlp)
    - [Is there enough context in your bug report?](#is-there-enough-context-in-your-bug-report)
    - [Does the issue involve one problem, and one problem only?](#does-the-issue-involve-one-problem-and-one-problem-only)
    - [Is anyone going to need the feature?](#is-anyone-going-to-need-the-feature)
    - [Is your question about yt-dlp?](#is-your-question-about-yt-dlp)
    - [Are you willing to share account details if needed?](#are-you-willing-to-share-account-details-if-needed)
    - [Is the website primarily used for piracy?](#is-the-website-primarily-used-for-piracy)
- [DEVELOPER INSTRUCTIONS](#developer-instructions)
    - [Adding new feature or making overarching changes](#adding-new-feature-or-making-overarching-changes)
    - [Adding support for a new site](#adding-support-for-a-new-site)
    - [yt-dlp coding conventions](#yt-dlp-coding-conventions)
        - [Mandatory and optional metafields](#mandatory-and-optional-metafields)
        - [Provide fallbacks](#provide-fallbacks)
        - [Regular expressions](#regular-expressions)
        - [Long lines policy](#long-lines-policy)
        - [Quotes](#quotes)
        - [Inline values](#inline-values)
        - [Collapse fallbacks](#collapse-fallbacks)
        - [Trailing parentheses](#trailing-parentheses)
        - [Use convenience conversion and parsing functions](#use-convenience-conversion-and-parsing-functions)
- [My pull request is labeled pending-fixes](#my-pull-request-is-labeled-pending-fixes)
- [EMBEDDING YT-DLP](README.md#embedding-yt-dlp)



# OPENING AN ISSUE

Bugs and suggestions should be reported at: [yt-dlp/yt-dlp/issues](https://github.com/yt-dlp/yt-dlp/issues). Unless you were prompted to or there is another pertinent reason (e.g. GitHub fails to accept the bug report), please do not send bug reports via personal email. For discussions, join us in our [discord server](https://discord.gg/H5MNcFW63r).

**Please include the full output of yt-dlp when run with `-vU`**, i.e. **add** the `-vU` flag to **your command line**, copy the **whole** output and post it in the issue body wrapped in \`\`\` for better formatting. It should look similar to this:
```
$ yt-dlp -vU <your command line>
[debug] Command-line config: ['-v', 'demo.com']
[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, pref UTF-8
[debug] yt-dlp version 2021.09.25 (zip)
[debug] Python version 3.8.10 (CPython 64bit) - Linux-5.4.0-74-generic-x86_64-with-glibc2.29
[debug] exe versions: ffmpeg 4.2.4, ffprobe 4.2.4
[debug] Proxy map: {}
Current Build Hash 25cc412d1d3c0725a1f2f5b7e4682f6fb40e6d15f7024e96f7afd572e9919535
yt-dlp is up to date (2021.09.25)
...
```
**Do not post screenshots of verbose logs; only plain text is acceptable.**

The output (including the first lines) contains important debugging information. Issues without the full output are often not reproducible and therefore will be closed as `incomplete`.

The templates provided for the issues should be completed and **not removed**; this helps aid the resolution of the issue.

Please re-read your issue once again to avoid a couple of common mistakes (you can and should use this as a checklist):

### Is the description of the issue itself sufficient?

We often get issue reports that we cannot really decipher. While in most cases we eventually get the required information after asking back multiple times, this poses an unnecessary drain on our resources.

So please elaborate on what feature you are requesting, or what bug you want to be fixed. Make sure that it's obvious

- What the problem is
- How it could be fixed
- What your proposed solution would look like

If your report is shorter than two lines, it is almost certainly missing some of these, which makes it hard for us to respond to it. We're often too polite to close the issue outright, but the missing info makes misinterpretation likely. We often get frustrated by these issues, since the only possible way for us to move forward on them is to ask for clarification over and over.

For bug reports, this means that your report should contain the **complete** output of yt-dlp when called with the `-vU` flag. The error message you get for (most) bugs even says so, but you would not believe how many of our bug reports do not contain this information.

If the error is `ERROR: Unable to extract ...` and you cannot reproduce it from multiple countries, add `--write-pages` and upload the `.dump` files you get [somewhere](https://gist.github.com).

**Site support requests must contain an example URL**. An example URL is a URL you might want to download, like `https://www.youtube.com/watch?v=BaW_jenozKc`. There should be an obvious video present. Except under very special circumstances, the main page of a video service (e.g. `https://www.youtube.com/`) is *not* an example URL.

### Are you using the latest version?

Before reporting any issue, type `yt-dlp -U`. This should report that you're up-to-date. This goes for feature requests as well.

### Is the issue already documented?

Make sure that someone has not already opened the issue you're trying to open. Search at the top of the window or browse the [GitHub Issues](https://github.com/yt-dlp/yt-dlp/search?type=Issues) of this repository. If there is an issue, subscribe to it to be notified when there is any progress. Unless you have something useful to add to the conversation, please refrain from commenting.

Additionally, it is also helpful to see if the issue has already been documented in the [youtube-dl issue tracker](https://github.com/ytdl-org/youtube-dl/issues). If similar issues have already been reported in youtube-dl (but not in our issue tracker), links to them can be included in your issue report here.

### Why are existing options not enough?

Before requesting a new feature, please have a quick peek at [the list of supported options](README.md#usage-and-options). Many feature requests are for features that actually exist already! Please, absolutely do show off your work in the issue report and detail how the existing similar options do *not* solve your problem.

### Have you read and understood the changes between youtube-dl and yt-dlp?

There are many changes between youtube-dl and yt-dlp [(changes to default behavior)](README.md#differences-in-default-behavior), and some of the options available have a different behaviour in yt-dlp, or have been removed altogether [(list of changes to options)](README.md#deprecated-options). Make sure you have read and understood the differences in the options and how this may impact your downloads before opening an issue.

### Is there enough context in your bug report?

People want to solve problems, and often think they do us a favor by breaking down their larger problems (e.g. wanting to skip already downloaded files) to a specific request (e.g. requesting us to look whether the file exists before downloading the info page). However, what often happens is that they break down the problem into two steps: one simple, and one impossible (or extremely complicated) one.

We are then presented with a very complicated request when the original problem could be solved far more easily, e.g. by recording the downloaded video IDs in a separate file. To avoid this, you must include the greater context where it is non-obvious. In particular, every feature request that does not consist of adding support for a new site should contain a use case scenario that explains in what situation the missing feature would be useful.

### Does the issue involve one problem, and one problem only?

Some of our users seem to think there is a limit of issues they can or should open. There is no limit of issues they can or should open. While it may seem appealing to be able to dump all your issues into one ticket, that means that someone who solves one of your issues cannot mark the issue as closed. Typically, reporting a bunch of issues leads to the ticket lingering since nobody wants to attack that behemoth, until someone mercifully splits the issue into multiple ones.

In particular, every site support request issue should only pertain to services at one site (generally under a common domain, but always using the same backend technology). Do not request support for vimeo user videos, White House podcasts, and Google Plus pages in the same issue. Also, make sure that you don't post bug reports alongside feature requests. As a rule of thumb, a feature request does not include outputs of yt-dlp that are not immediately related to the feature at hand. Do not post reports of a network error alongside the request for a new video service.

### Is anyone going to need the feature?

Only post features that you (or an incapacitated friend you can personally talk to) require. Do not post features because they seem like a good idea. If they are really useful, they will be requested by someone who requires them.

### Is your question about yt-dlp?

Some bug reports are completely unrelated to yt-dlp and relate to a different, or even the reporter's own, application. Please make sure that you are actually using yt-dlp. If you are using a UI for yt-dlp, report the bug to the maintainer of the actual application providing the UI. In general, if you are unable to provide the verbose log, you should not be opening the issue here.

If the issue is with `youtube-dl` (the upstream project that yt-dlp is forked from) and not with yt-dlp, the issue should be raised in the youtube-dl project.

### Are you willing to share account details if needed?

The maintainers and potential contributors of the project often do not have an account for the website you are asking support for. So any developer interested in solving your issue may ask you for account details. It is your personal discretion whether you are willing to share the account in order for the developer to try and solve your issue. However, if you are unwilling or unable to provide details, they obviously cannot work on the issue and it cannot be solved unless some developer who both has an account and is willing/able to contribute decides to solve it.

By sharing an account with anyone, you agree to bear all risks associated with it. The maintainers and yt-dlp can't be held responsible for any misuse of the credentials.

While these steps won't necessarily ensure that no misuse of the account takes place, these are still some good practices to follow.

- Look for people with the `Member` (maintainers of the project) or `Contributor` (people who have previously contributed code) tag on their messages.
- Change the password before sharing the account to something random (use [this](https://passwordsgenerator.net/) if you don't have a random password generator).
- Change the password after receiving the account back.

### Is the website primarily used for piracy?

We follow [youtube-dl's policy](https://github.com/ytdl-org/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free) to not support services that are primarily used for infringing copyright. Additionally, it has been decided not to support porn sites that specialize in fakes. We also cannot support any service that serves only [DRM protected content](https://en.wikipedia.org/wiki/Digital_rights_management).




# DEVELOPER INSTRUCTIONS

Most users do not need to build yt-dlp and can [download the builds](https://github.com/yt-dlp/yt-dlp/releases) or get them via [the other installation methods](README.md#installation).

To run yt-dlp as a developer, you don't need to build anything either. Simply execute

    python -m yt_dlp

To run the tests, simply invoke your favorite test runner, or execute a test file directly; any of the following work:

    python -m unittest discover
    python test/test_download.py
    nosetests
    pytest

See item 6 of the [new extractor tutorial](#adding-support-for-a-new-site) for how to run extractor-specific test cases.

If you want to create a build of yt-dlp yourself, you can follow the instructions [here](README.md#compile).


## Adding new feature or making overarching changes

Before you start writing code for implementing a new feature, open an issue explaining your feature request and at least one use case. This allows the maintainers to decide whether such a feature is desired for the project in the first place, and will provide an avenue to discuss some implementation details. If you open a pull request for a new feature without discussing it with us first, do not be surprised when we ask for large changes to the code, or even reject it outright.

The same applies to changes to the documentation, code style, or overarching changes to the architecture.


## Adding support for a new site

If you want to add support for a new site, first of all **make sure** this site is **not dedicated to [copyright infringement](#is-the-website-primarily-used-for-piracy)**. yt-dlp does **not support** such sites, thus pull requests adding support for them **will be rejected**.

After you have ensured this site is distributing its content legally, you can follow this quick list (assuming your service is called `yourextractor`):

1. [Fork this repository](https://github.com/yt-dlp/yt-dlp/fork)
1. Check out the source code with:

        git clone git@github.com:YOUR_GITHUB_USERNAME/yt-dlp.git

1. Start a new git branch with

        cd yt-dlp
        git checkout -b yourextractor

1. Start with this simple template and save it to `yt_dlp/extractor/yourextractor.py`:

    ```python
    from .common import InfoExtractor


    class YourExtractorIE(InfoExtractor):
        _VALID_URL = r'https?://(?:www\.)?yourextractor\.com/watch/(?P<id>[0-9]+)'
        _TESTS = [{
            'url': 'https://yourextractor.com/watch/42',
            'md5': 'TODO: md5 sum of the first 10241 bytes of the video file (use --test)',
            'info_dict': {
                'id': '42',
                'ext': 'mp4',
                'title': 'Video title goes here',
                'thumbnail': r're:^https?://.*\.jpg$',
                # TODO more properties, either as:
                # * A value
                # * MD5 checksum; start the string with md5:
                # * A regular expression; start the string with re:
                # * Any Python type, e.g. int or float
            }
        }]

        def _real_extract(self, url):
            video_id = self._match_id(url)
            webpage = self._download_webpage(url, video_id)

            # TODO more code goes here, for example ...
            title = self._html_search_regex(r'<h1>(.+?)</h1>', webpage, 'title')

            return {
                'id': video_id,
                'title': title,
                'description': self._og_search_description(webpage),
                'uploader': self._search_regex(r'<div[^>]+id="uploader"[^>]*>([^<]+)<', webpage, 'uploader', fatal=False),
                # TODO more properties (see yt_dlp/extractor/common.py)
            }
    ```
1. Add an import in [`yt_dlp/extractor/_extractors.py`](yt_dlp/extractor/_extractors.py). Note that the class name must end with `IE`.
1. Run `python test/test_download.py TestDownload.test_YourExtractor` (note that `YourExtractor` doesn't end with `IE`). This *should fail* at first, but you can continually re-run it until you're done. If you decide to add more than one test, the tests will then be named `TestDownload.test_YourExtractor`, `TestDownload.test_YourExtractor_1`, `TestDownload.test_YourExtractor_2`, etc. Note that tests with an `only_matching` key in the test's dict are not counted. You can also run all the tests in one go with `TestDownload.test_YourExtractor_all`.
1. Make sure you have at least one test for your extractor. Even if all videos covered by the extractor are expected to be inaccessible for automated testing, tests should still be added with a `skip` parameter indicating why the particular test is disabled from running.
1. Have a look at [`yt_dlp/extractor/common.py`](yt_dlp/extractor/common.py) for possible helper methods and a [detailed description of what your extractor should and may return](yt_dlp/extractor/common.py#L119-L440). Add tests and code for as many as you want.
1. Make sure your code follows [yt-dlp coding conventions](#yt-dlp-coding-conventions) and check the code with [flake8](https://flake8.pycqa.org/en/latest/index.html#quickstart):

        $ flake8 yt_dlp/extractor/yourextractor.py

1. Make sure your code works under all [Python](https://www.python.org/) versions supported by yt-dlp, namely CPython and PyPy for Python 3.8 and above. Backward compatibility is not required for even older versions of Python.
1. When the tests pass, [add](https://git-scm.com/docs/git-add) the new files, [commit](https://git-scm.com/docs/git-commit) them and [push](https://git-scm.com/docs/git-push) the result, like this:

        $ git add yt_dlp/extractor/_extractors.py
        $ git add yt_dlp/extractor/yourextractor.py
        $ git commit -m '[yourextractor] Add extractor'
        $ git push origin yourextractor

1. Finally, [create a pull request](https://help.github.com/articles/creating-a-pull-request). We'll then review and merge it.

In any case, thank you very much for your contributions!

**Tip:** To test extractors that require login information, create a file `test/local_parameters.json` and add `"usenetrc": true` or your username and password in it:
```json
{
    "username": "your user name",
    "password": "your password"
}
```

## yt-dlp coding conventions

This section introduces guidelines for writing idiomatic, robust and future-proof extractor code.

Extractors are very fragile by nature since they depend on the layout of the source data provided by 3rd party media hosters out of your control, and this layout tends to change. As an extractor implementer your task is not only to write code that will extract media links and metadata correctly, but also to minimize dependency on the source's layout and even to make the code foresee potential future changes and be ready for that. This is important because it will allow the extractor not to break on minor layout changes, thus keeping old yt-dlp versions working. Even though this breakage issue may be easily fixed by a new version of yt-dlp, this could take some time, during which the extractor will remain broken.


### Mandatory and optional metafields

For extraction to work yt-dlp relies on metadata your extractor extracts and provides to yt-dlp expressed by an [information dictionary](yt_dlp/extractor/common.py#L119-L440) or simply *info dict*. Only the following meta fields in the *info dict* are considered mandatory for a successful extraction process by yt-dlp:

- `id` (media identifier)
- `title` (media title)
- `url` (media download URL) or `formats`

The aforementioned metafields are the critical data without which extraction does not make any sense; if any of them fails to be extracted, the extractor is considered completely broken. While all extractors must return a `title`, they must also allow its extraction to be non-fatal.

For pornographic sites, an appropriate `age_limit` must also be returned.

The extractor is allowed to return the info dict without `url` or `formats` in some special cases if it allows the user to extract useful information with `--ignore-no-formats-error` - e.g. when the video is a live stream that has not started yet.
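
For example, a hedged sketch of how such a case might be handled (the `formats`, `video`, `title` and `video_id` variables are hypothetical; `raise_no_formats` and the `live_status` field are described in [`yt_dlp/extractor/common.py`](yt_dlp/extractor/common.py)):

```python
# Sketch only: with --ignore-no-formats-error, raise_no_formats degrades to a
# warning and the remaining metadata below is still returned to the user
if not formats and video.get('isUpcoming'):  # hypothetical source field
    self.raise_no_formats('This live stream has not started yet', expected=True)

return {
    'id': video_id,
    'title': title,
    'formats': formats,
    'live_status': 'is_upcoming' if video.get('isUpcoming') else None,
}
```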

[Any field](yt_dlp/extractor/common.py#L219-L426) apart from the aforementioned ones is considered **optional**. That means that extraction should be **tolerant** of situations when sources for these fields can potentially be unavailable (even if they are always available at the moment) and **future-proof** in order not to break the extraction of general purpose mandatory fields.

#### Example

Say you have some source dictionary `meta` that you've fetched as JSON with an HTTP request and it has a key `summary`:

```python
meta = self._download_json(url, video_id)
```

Assume at this point `meta`'s layout is:

```python
{
    "summary": "some fancy summary text",
    "user": {
        "name": "uploader name"
    },
    ...
}
```

Assume you want to extract `summary` and put it into the resulting info dict as `description`. Since `description` is an optional meta field you should be prepared for this key to be missing from the `meta` dict, so you should extract it like:

```python
description = meta.get('summary')  # correct
```

and not like:

```python
description = meta['summary']  # incorrect
```

The latter will break the extraction process with `KeyError` if `summary` disappears from `meta` at some later time but with the former approach extraction will just go ahead with `description` set to `None` which is perfectly fine (remember `None` is equivalent to the absence of data).


If the data is nested, do not use `.get` chains, but instead make use of `traverse_obj`.

Considering the above `meta` again, assume you want to extract `["user"]["name"]` and put it in the resulting info dict as `uploader`:

```python
uploader = traverse_obj(meta, ('user', 'name'))  # correct
```

and not like:

```python
uploader = meta['user']['name']  # incorrect
```
or
```python
uploader = meta.get('user', {}).get('name')  # incorrect
```
or
```python
uploader = try_get(meta, lambda x: x['user']['name'])  # old utility
```


Similarly, you should pass `fatal=False` when extracting optional data from a webpage with `_search_regex`, `_html_search_regex` or similar methods, for instance:

```python
description = self._search_regex(
    r'<span[^>]+id="title"[^>]*>([^<]+)<',
    webpage, 'description', fatal=False)
```

With `fatal` set to `False`, if `_search_regex` fails to extract `description` it will emit a warning and continue extraction.

You can also pass `default=<some fallback value>`, for example:

```python
description = self._search_regex(
    r'<span[^>]+id="title"[^>]*>([^<]+)<',
    webpage, 'description', default=None)
```

On failure this code will silently continue the extraction with `description` set to `None`. That is useful for metafields that may or may not be present.


Another thing to remember is not to try to iterate over `None`.

Say you extracted a list of thumbnails into `thumbnail_data` and want to iterate over them:

```python
thumbnail_data = data.get('thumbnails') or []
thumbnails = [{
    'url': item['url'],
    'height': item.get('h'),
} for item in thumbnail_data if item.get('url')]  # correct
```

and not like:

```python
thumbnail_data = data.get('thumbnails')
thumbnails = [{
    'url': item['url'],
    'height': item.get('h'),
} for item in thumbnail_data]  # incorrect
```

In this case, `thumbnail_data` will be `None` if the field was not found and this will cause the loop `for item in thumbnail_data` to raise a fatal error. Using `or []` avoids this error and results in setting an empty list in `thumbnails` instead.

Alternatively, this can be further simplified by using `traverse_obj`:

```python
thumbnails = [{
    'url': item['url'],
    'height': item.get('h'),
} for item in traverse_obj(data, ('thumbnails', lambda _, v: v['url']))]
```

or, even better,

```python
thumbnails = traverse_obj(data, ('thumbnails', ..., {'url': 'url', 'height': 'h'}))
```

### Provide fallbacks

When extracting metadata try to do so from multiple sources. For example, if `title` is present in several places, try extracting from at least some of them. This makes it more future-proof in case some of the sources become unavailable.


#### Example

Say `meta` from the previous example has a `title` and you are about to extract it like:

```python
title = meta.get('title')
```

If `title` disappears from `meta` in the future due to some changes on the hoster's side, the title extraction would fail.

Assume that you have another source you can extract `title` from, for example the `og:title` HTML meta tag of the `webpage`. In this case you can provide a fallback like:

```python
title = meta.get('title') or self._og_search_title(webpage)
```

This code will try to extract from `meta` first and if it fails it will try extracting `og:title` from the `webpage`, making the extractor more robust.


### Regular expressions

#### Don't capture groups you don't use

A capturing group must be an indication that it is used somewhere in the code. Any group that is not used must be non-capturing.

##### Example

Don't capture the id attribute name here, since you can't use it for anything anyway.

Correct:

```python
r'(?:id|ID)=(?P<id>\d+)'
```

Incorrect:
```python
r'(id|ID)=(?P<id>\d+)'
```

#### Make regular expressions relaxed and flexible

When using regular expressions try to write them fuzzy, relaxed and flexible, skipping insignificant parts that are more likely to change, allowing both single and double quotes for quoted values, and so on.

##### Example

Say you need to extract `title` from the following HTML code:

```html
<span style="position: absolute; left: 910px; width: 90px; float: right; z-index: 9999;" class="title">some fancy title</span>
```

The code for that task should look similar to:

```python
title = self._search_regex(  # correct
    r'<span[^>]+class="title"[^>]*>([^<]+)', webpage, 'title')
```

which tolerates potential changes in the `style` attribute's value. Or even better:

```python
title = self._search_regex(  # correct
    r'<span[^>]+class=(["\'])title\1[^>]*>(?P<title>[^<]+)',
    webpage, 'title', group='title')
```

which also handles single quotes in addition to double quotes.

The code definitely should not look like:

```python
title = self._search_regex(  # incorrect
    r'<span style="position: absolute; left: 910px; width: 90px; float: right; z-index: 9999;" class="title">(.*?)</span>',
    webpage, 'title', group='title')
```

or even

```python
title = self._search_regex(  # incorrect
    r'<span style=".*?" class="title">(.*?)</span>',
    webpage, 'title', group='title')
```

Here the presence or absence of other attributes including `style` is irrelevant for the data we need, and so the regex must not depend on it.


#### Keep the regular expressions as simple as possible, but no simpler

Since many extractors deal with unstructured data provided by websites, we will often need to use very complex regular expressions. You should try to use the *simplest* regex that can accomplish what you want. In other words, each part of the regex must have a reason for existing. If you can take out a symbol and the functionality does not change, the symbol should not be there.

##### Example

Correct:

```python
_VALID_URL = r'https?://(?:www\.)?website\.com/(?:[^/]+/){3,4}(?P<display_id>[^/]+)_(?P<id>\d+)'
```

Incorrect:

```python
_VALID_URL = r'https?:\/\/(?:www\.)?website\.com\/[^\/]+/[^\/]+/[^\/]+(?:\/[^\/]+)?\/(?P<display_id>[^\/]+)_(?P<id>\d+)'
```

#### Do not misuse `.` and use the correct quantifiers (`+*?`)

Avoid creating regexes that over-match because of wrong use of quantifiers. Also try to avoid non-greedy matching (`?`) where possible, since it can easily result in [catastrophic backtracking](https://www.regular-expressions.info/catastrophic.html).

Correct:

```python
title = self._search_regex(r'<span\b[^>]+class="title"[^>]*>([^<]+)', webpage, 'title')
```

Incorrect:

```python
title = self._search_regex(r'<span\b.*class="title".*>(.+?)<', webpage, 'title')
```


### Long lines policy

There is a soft limit to keep lines of code under 100 characters long. This means it should be respected if possible and if it does not make readability and code maintenance worse. Sometimes, it may be reasonable to go up to 120 characters and sometimes even 80 can be unreadable. Keep in mind that this is not a hard limit and is just one of many tools to make the code more readable.

For example, you should **never** split long string literals like URLs or some other often copied entities over multiple lines to fit this limit.

Conversely, don't unnecessarily split small lines further. As a rule of thumb, if removing the line split keeps the code under 80 characters, it should be a single line.

##### Examples

Correct:

```python
'https://www.youtube.com/watch?v=FqZTN594JQw&list=PLMYEtVRpaqY00V9W81Cwmzp6N6vZqfUKD4'
```

Incorrect:

```python
'https://www.youtube.com/watch?v=FqZTN594JQw&list='
'PLMYEtVRpaqY00V9W81Cwmzp6N6vZqfUKD4'
```

Correct:

```python
uploader = traverse_obj(info, ('uploader', 'name'), ('author', 'fullname'))
```

Incorrect:

```python
uploader = traverse_obj(
    info,
    ('uploader', 'name'),
    ('author', 'fullname'))
```

Correct:

```python
formats = self._extract_m3u8_formats(
    m3u8_url, video_id, 'mp4', 'm3u8_native', m3u8_id='hls',
    note='Downloading HD m3u8 information', errnote='Unable to download HD m3u8 information')
```

Incorrect:

```python
formats = self._extract_m3u8_formats(m3u8_url,
                                      video_id,
                                      'mp4',
                                      'm3u8_native',
                                      m3u8_id='hls',
                                      note='Downloading HD m3u8 information',
                                      errnote='Unable to download HD m3u8 information')
```


### Quotes

Always use single quotes for strings (even if the string has `'`) and double quotes for docstrings. Use `'''` only for multi-line strings. An exception can be made if a string has multiple single quotes in it and escaping makes it *significantly* harder to read. For f-strings, you can use double quotes on the inside. But avoid f-strings that have too many quotes inside.
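
A minimal sketch of these rules (the function and field names here are hypothetical, for illustration only):

```python
def describe_uploader(info):
    """Build a short uploader description (double quotes for the docstring)."""
    name = info.get('uploader') or 'unknown uploader'  # single quotes for strings
    note = 'It\'s fine to escape a single quote like this'
    # Double quotes inside the f-string avoid clashing with the outer single quotes
    return f'{name} ({info.get("uploader_id")}): {note}'
```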


### Inline values

Extracting variables is acceptable for reducing code duplication and improving readability of complex expressions. However, you should avoid extracting variables used only once and moving them to opposite parts of the extractor file, which makes reading the linear flow difficult.

#### Examples

Correct:

```python
return {
    'title': self._html_search_regex(r'<h1>([^<]+)</h1>', webpage, 'title'),
    # ...some lines of code...
}
```

Incorrect:

```python
TITLE_RE = r'<h1>([^<]+)</h1>'
# ...some lines of code...
title = self._html_search_regex(TITLE_RE, webpage, 'title')
# ...some lines of code...
return {
    'title': title,
    # ...some lines of code...
}
```


### Collapse fallbacks

Multiple fallback values can quickly become unwieldy. Collapse multiple fallback values into a single expression via a list of patterns.

#### Example

Good:

```python
description = self._html_search_meta(
    ['og:description', 'description', 'twitter:description'],
    webpage, 'description', default=None)
```

Unwieldy:

```python
description = (
    self._og_search_description(webpage, default=None)
    or self._html_search_meta('description', webpage, default=None)
    or self._html_search_meta('twitter:description', webpage, default=None))
```

Methods supporting a list of patterns are: `_search_regex`, `_html_search_regex`, `_og_search_property` and `_html_search_meta`.
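
For instance, `_search_regex` accepts a list of alternative regexes directly; a minimal sketch with hypothetical patterns:

```python
title = self._search_regex(
    [r'<h1[^>]*>([^<]+)</h1>', r'<title>([^<]+)</title>'],
    webpage, 'title')
```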


### Trailing parentheses

Always move trailing parentheses used for grouping/functions after the last argument. On the other hand, a multi-line literal list/tuple/dict/set should be closed in a new line. Generators and list/dict comprehensions may use either style.

#### Examples

Correct:

```python
url = traverse_obj(info, (
    'context', 'dispatcher', 'stores', 'VideoTitlePageStore', 'data', 'video', 0, 'VideoUrlSet', 'VideoUrl'), list)
```
Correct:

```python
url = traverse_obj(
    info,
    ('context', 'dispatcher', 'stores', 'VideoTitlePageStore', 'data', 'video', 0, 'VideoUrlSet', 'VideoUrl'),
    list)
```

Incorrect:

```python
url = traverse_obj(
    info,
    ('context', 'dispatcher', 'stores', 'VideoTitlePageStore', 'data', 'video', 0, 'VideoUrlSet', 'VideoUrl'),
    list
)
```

Correct:

```python
f = {
    'url': url,
    'format_id': format_id,
}
```

Incorrect:

```python
f = {'url': url,
     'format_id': format_id}
```

Correct:

```python
formats = [process_formats(f) for f in format_data
           if f.get('type') in ('hls', 'dash', 'direct') and f.get('downloadable')]
```

Correct:

```python
formats = [
    process_formats(f) for f in format_data
    if f.get('type') in ('hls', 'dash', 'direct') and f.get('downloadable')
]
```


### Use convenience conversion and parsing functions

Wrap all extracted numeric data into safe functions from [`yt_dlp/utils/`](yt_dlp/utils/): `int_or_none`, `float_or_none`. Use them for string-to-number conversions as well.

Use `url_or_none` for safe URL processing.

Use `traverse_obj` and `try_call` (which supersede `dict_get` and `try_get`) for safe metadata extraction from parsed JSON.

Use `unified_strdate` for uniform `upload_date` or any `YYYYMMDD` meta field extraction, `unified_timestamp` for uniform `timestamp` extraction, `parse_filesize` for `filesize` extraction, `parse_count` for count meta field extraction, `parse_resolution` for `resolution` extraction, `parse_duration` for `duration` extraction and `parse_age_limit` for `age_limit` extraction.

Explore [`yt_dlp/utils/`](yt_dlp/utils/) for more useful convenience functions.

#### Examples

```python
description = traverse_obj(response, ('result', 'video', 'summary'), expected_type=str)
thumbnails = traverse_obj(response, ('result', 'thumbnails', ..., 'url'), expected_type=url_or_none)
video = traverse_obj(response, ('result', 'video', 0), default={}, expected_type=dict)
duration = float_or_none(video.get('durationMs'), scale=1000)
view_count = int_or_none(video.get('views'))
```
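
A further hedged sketch of the date, duration and count helpers, assuming a hypothetical `video` dict whose fields hold strings like those shown in the comments:

```python
upload_date = unified_strdate(video.get('publishedAt'))  # e.g. 'December 31, 2021' -> '20211231'
timestamp = unified_timestamp(video.get('publishedAt'))  # full date/time string -> Unix timestamp
duration = parse_duration(video.get('length'))           # e.g. '1:02:30' -> 3750.0
filesize_approx = parse_filesize(video.get('size'))      # e.g. '5.4 MiB' -> size in bytes
like_count = parse_count(video.get('likes'))             # e.g. '1.2M' -> 1200000
age_limit = parse_age_limit(video.get('rating'))         # e.g. '18+' -> 18
thumbnail = url_or_none(video.get('thumbnailUrl'))       # None unless the value looks like a valid URL
uploader_id = try_call(lambda: video['channel']['id'])   # returns None instead of raising on missing keys
```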


# My pull request is labeled pending-fixes

The `pending-fixes` label is added when there are changes requested to a PR. When the necessary changes are made, the label should be removed. However, despite our best efforts, it may sometimes happen that the maintainer did not see the changes or forgot to remove the label. If your PR is still marked as `pending-fixes` a few days after all requested changes have been made, feel free to ping the maintainer who labeled your pull request and ask them to re-review and remove the label.




# EMBEDDING YT-DLP
See [README.md#embedding-yt-dlp](README.md#embedding-yt-dlp) for instructions on how to embed yt-dlp in another Python program.