Commit Graph

16 Commits

ff5542d8b0
Add sudo apt install nginx to README.md for hosting the website 2023-02-25 15:55:24 +01:00
572f9e121f
Specify in README.md in which folder each command has to be run 2023-02-23 23:48:40 +01:00
95e96d08e1
Advertise pip instead of apt in README.md to install the latest version of yt-dlp 2023-02-23 23:16:36 +01:00
df3af2780b
#19: Detail how to run the website and reference channels.txt on it 2023-02-23 23:12:18 +01:00
68cd27c263
Fix #19: Improve documentation and code comments 2023-02-23 22:50:30 +01:00
09f7675bf7
#31: Make search within captions not limited by line wrapping 2023-02-14 01:32:36 +01:00
04c59eb025
Fix #13: Add captions extraction
In addition, I was about to commit:

```c++
// Videos with automatically generated captions that are set to `Off` by default aren't retrieved with `--sub-langs '.*orig'`.
// My workaround is to first call the YouTube Data API v3 Captions: list endpoint with `part=snippet` and retrieve the language that has `"trackKind": "asr"` (automatic speech recognition) in `snippet`.
/*json data = getJson(threadId, "captions?part=snippet&videoId=" + videoId, true, channelToTreat),
     items = data["items"];
for(const auto& item : items)
{
    json snippet = item["snippet"];
    if(snippet["trackKind"] == "asr")
    {
        string language = snippet["language"];
        cmd = cmdCommonPrefix + "--write-auto-subs --sub-langs '" + language + "-orig' --sub-format ttml --convert-subs vtt" + cmdCommonPostfix;
        exec(threadId, cmd);
        // As there should be a single automatic speech recognized track, there is no need to go through all tracks.
        break;
    }
}*/
```

Instead of:

```c++
cmd = cmdCommonPrefix + "--write-auto-subs --sub-langs '.*orig' --sub-format ttml --convert-subs vtt" + cmdCommonPostfix;
exec(threadId, cmd);
```

But I realized, as stated in the GitHub comment I was about to add to https://github.com/yt-dlp/yt-dlp/issues/2655, that I was wrong:

> `yt-dlp --cookies cookies.txt --sub-langs 'en.*,.*orig' --write-auto-subs https://www.youtube.com/watch?v=tQqDBySHYlc` works as expected. Many thanks again.
>
> ```
> 'subtitleslangs': ['en.*','.*orig'],
> 'writeautomaticsub': True,
> ```
>
> Works as expected too. Thank you
>
> Very sorry for the video sample. I had not even watched it.

Thank you for this workaround. However, note that videos that have automatically generated subtitles set to `Off` by default aren't retrieved with your method (example of such a video: [`mozyXsZJnQ4`](https://www.youtube.com/watch?v=mozyXsZJnQ4)). My workaround is to first call the [YouTube Data API v3](https://developers.google.com/youtube/v3) [Captions: list](https://developers.google.com/youtube/v3/docs/captions/list) endpoint with [`part=snippet`](https://developers.google.com/youtube/v3/docs/captions/list#part) and retrieve the [`language`](https://developers.google.com/youtube/v3/docs/captions#snippet.language) that has [`"trackKind": "asr"`](https://developers.google.com/youtube/v3/docs/captions#snippet.trackKind) (automatic speech recognition) in [`snippet`](https://developers.google.com/youtube/v3/docs/captions#snippet).
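
For reference, here is a minimal Python sketch of that workaround (the API key is a placeholder, and the `yt-dlp` invocation simply mirrors the commented-out C++ above):

```py
import requests, subprocess

API_KEY = 'AIzaSy...'  # a YouTube Data API v3 key
videoId = 'mozyXsZJnQ4'

# Ask Captions: list which caption tracks the video has.
url = f'https://www.googleapis.com/youtube/v3/captions?part=snippet&videoId={videoId}&key={API_KEY}'
data = requests.get(url).json()

for item in data['items']:
    snippet = item['snippet']
    # `asr` identifies the automatic speech recognition track.
    if snippet['trackKind'] == 'asr':
        language = snippet['language']
        # Explicitly request the `<language>-orig` automatic subtitles.
        subprocess.run(['yt-dlp', '--skip-download', '--write-auto-subs',
                        '--sub-langs', f'{language}-orig', '--sub-format', 'ttml',
                        '--convert-subs', 'vtt',
                        f'https://www.youtube.com/watch?v={videoId}'])
        # There should be a single automatic speech recognized track.
        break
```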
2023-02-10 20:03:08 +01:00
05cd243abd
Add comment in README.md about the usage of --no-keys or generating a YouTube Data API v3 key 2023-01-22 15:41:13 +01:00
46ef8146f8
Add in README.md the fact that, as documented in #30, this algorithm is only known to be working fine on Linux 2023-01-21 22:20:45 +01:00
7456685f2b
#11: Add a first iteration for the CHANNELS retrieval 2023-01-15 02:19:31 +01:00
2ffc1d0e5d
Add main.cpp, Makefile and channelsToTreat.txt
Note that running this algorithm ends up with channel [`UC-99odscxh1xxTyxHyXuRrg`](https://www.youtube.com/channel/UC-99odscxh1xxTyxHyXuRrg), more precisely the video [`Tq5aPNzfYcg`](https://www.youtube.com/watch?v=Tq5aPNzfYcg), and more precisely the comment [`Ugx-TlSq6SNCbOX04mx4AaABAg`](https://www.youtube.com/watch?v=Tq5aPNzfYcg&lc=Ugx-TlSq6SNCbOX04mx4AaABAg) [which doesn't have any author](https://yt.lemnoslife.com/noKey/comments?part=snippet&id=Ugx-TlSq6SNCbOX04mx4AaABAg)...
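
A quick way to reproduce that observation (a sketch; the call lets you inspect exactly which author fields are missing or empty):

```py
import requests

# Query the comment through the no-key proxy linked above; an official
# YouTube Data API v3 `comments?part=snippet&id=...&key=...` call behaves the same.
url = 'https://yt.lemnoslife.com/noKey/comments?part=snippet&id=Ugx-TlSq6SNCbOX04mx4AaABAg'
snippet = requests.get(url).json()['items'][0]['snippet']

# For this comment the author fields are expected to be missing or empty.
print(snippet.get('authorDisplayName'), snippet.get('authorChannelId'))
```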
2022-12-22 05:20:32 +01:00
53acda6abe
Update README.md to remove the question about whether both methods return the same comments, as they do
More precisely, I used the following algorithm with these three channels:
channel id               | 1st method            | 2nd method
-------------------------|-----------------------|-----------
UCt5USYpzzMCYhkirVQGHwKQ | 16                    | 16
UCUo1RqYV8tGjV38sQ8S5p9A | 58,165                | 58,165
UCWIdqSQekeGmUWlSFeCiEnA | *error* (as expected) | 27

```py
"""
Algorithm comparing comment counts using:
1. CommentThreads: list with allThreadsRelatedToChannelId filter
2. PlaylistItems: list and CommentThreads: list
Note that the second approach isn't *atomic*, so counts will differ if some comments are posted while retrieving data.
"""

import requests, json

CHANNEL_ID = 'UC...'
API_KEY = 'AIzaSy...'

def getJSON(url, firstTry = True):
    if firstTry:
        url = 'https://www.googleapis.com/youtube/v3/' + url + f'&key={API_KEY}'
    try:
        content = requests.get(url).text
    except:
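        # On a network error, retry the same full URL (the key was already appended on the first try).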
        print('retry')
        return getJSON(url, False)
    data = json.loads(content)
    return data

items = []
pageToken = ''
while True:
    # Verified: using `allThreadsRelatedToChannelId` doesn't return comments of the `COMMUNITY` tab
    data = getJSON(f'commentThreads?part=id,snippet,replies&allThreadsRelatedToChannelId={CHANNEL_ID}&maxResults=100&pageToken={pageToken}')
    items += data['items']
    # In fact, once we have the top-level comments, if the replies *count* matches with both methods then we are fine, as both methods use the same Comments: list endpoint
    """for item in data['items']:
        if 'replies' in item:
            if len(item['replies']['comments']) >= 5:
                print('should consider replies too!')"""
    print(len(items))
    if 'nextPageToken' in data:
        pageToken = data['nextPageToken']
    else:
        break

print(len(items))

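# The uploads playlist id is the channel id with its `UC` prefix replaced by `UU`.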
PLAYLIST_ID = 'UU' + CHANNEL_ID[2:]

videoIds = []
pageToken = ''
while True:
    data = getJSON(f'playlistItems?part=snippet&playlistId={PLAYLIST_ID}&maxResults=50&pageToken={pageToken}')
    for item in data['items']:
        videoIds += [item['snippet']['resourceId']['videoId']]
    print(len(videoIds))
    if 'nextPageToken' in data:
        pageToken = data['nextPageToken']
    else:
        break

print(len(videoIds))
items = []

for videoIndex, videoId in enumerate(videoIds):
    pageToken = ''
    while True:
        data = getJSON(f'commentThreads?part=id,snippet,replies&videoId={videoId}&maxResults=100&pageToken={pageToken}')
        if 'items' in data:
            items += data['items']
            # Repeat the replies check, as it could trigger here and not above.
            """for item in data['items']:
                if 'replies' in item:
                    if len(item['replies']['comments']) >= 5:
                        print('should consider replies too!')"""
        print(videoIndex, len(videoIds), len(items))
        if 'nextPageToken' in data:
            pageToken = data['nextPageToken']
        else:
            break

print(len(items))
```
2022-12-22 03:18:25 +01:00
d776c09fec
Update README.md to clean notes concerning optimized approaches 2022-12-22 02:02:48 +01:00
daf14d4b5b
Update README.md to make clear that different strategies have to be used to optimize the process
Note that, as far as I know (and Stack Overflow ([1.](https://stackoverflow.com/q/63387215) and [2.](https://stackoverflow.com/q/67652250)) seems to agree), there is no workaround for the 20,000-item limit of PlaylistItems: list. This issue can be checked with:

```py
import requests, json

PLAYLIST_ID = 'UUf8w5m0YsRa8MHQ5bwSGmbw'
API_KEY = 'AIzaSy...'

items = []
pageToken = ''
while True:
    url = f'https://www.googleapis.com/youtube/v3/playlistItems?part=id&playlistId={PLAYLIST_ID}&maxResults=50&key={API_KEY}&pageToken={pageToken}'
    content = requests.get(url).text
    data = json.loads(content)
    items += data['items']
    print(len(items))
    if 'nextPageToken' in data:
        pageToken = data['nextPageToken']
    else:
        break

print(len(items))
```

This returns >= 19,000 items.

Note that this algorithm says that:
- [france24](https://www.youtube.com/@FRANCE24) has 6,086 videos while [SocialBlade states that it has 101,196 videos](https://socialblade.com/youtube/user/france24)
- [CNN](https://www.youtube.com/@CNN) has 19,289 videos while [SocialBlade states that it has 157,321 videos](https://socialblade.com/youtube/user/cnn)

Indeed, neither YouTube Data API v3 Search: list (I verified with the code below that https://github.com/Benjamin-Loison/YouTube-operational-API/issues/4 applies here) nor web-scraping the `VIDEOS` tab works (see the second SO link).

```py
import requests, json

CHANNEL_ID = 'UCf8w5m0YsRa8MHQ5bwSGmbw'
API_KEY = 'AIzaSy...'

items = []
pageToken = ''
while True:
    url = f'https://www.googleapis.com/youtube/v3/search?part=id&type=video&channelId={CHANNEL_ID}&maxResults=50&key={API_KEY}&pageToken={pageToken}'
    content = requests.get(url).text
    data = json.loads(content)
    items += data['items']
    print(len(items))
    if 'nextPageToken' in data:
        pageToken = data['nextPageToken']
    else:
        break

print(len(items))
```

It returned ~18,734 items.

Another attempt using Search: list with a date filter may make sense, as sketched below.
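
Such a date-sliced retrieval could look like the following sketch (the 30-day window and one-year horizon are arbitrary; `publishedAfter`/`publishedBefore` are documented Search: list parameters):

```py
import requests, json
from datetime import datetime, timedelta, timezone

CHANNEL_ID = 'UCf8w5m0YsRa8MHQ5bwSGmbw'
API_KEY = 'AIzaSy...'

def searchWindow(publishedAfter, publishedBefore):
    # Retrieve all video ids Search: list returns for one date window.
    ids = []
    pageToken = ''
    while True:
        url = (f'https://www.googleapis.com/youtube/v3/search?part=id&type=video&channelId={CHANNEL_ID}'
               f'&publishedAfter={publishedAfter}&publishedBefore={publishedBefore}'
               f'&maxResults=50&key={API_KEY}&pageToken={pageToken}')
        data = json.loads(requests.get(url).text)
        ids += [item['id']['videoId'] for item in data.get('items', [])]
        if 'nextPageToken' in data:
            pageToken = data['nextPageToken']
        else:
            break
    return ids

# Walk backwards in 30-day windows, each hopefully staying below the pagination cap.
videoIds = []
end = datetime.now(timezone.utc)
for _ in range(12):  # one year, as an example
    start = end - timedelta(days=30)
    videoIds += searchWindow(start.strftime('%Y-%m-%dT%H:%M:%SZ'), end.strftime('%Y-%m-%dT%H:%M:%SZ'))
    print(len(videoIds))
    end = start
```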

Note that according to SocialBlade:
- [asianetnews has 195,600 videos](https://socialblade.com/youtube/user/asianetnews)
- [RoelVandePaar has 2,025,566 videos](https://socialblade.com/youtube/c/roelvandepaar)
2022-12-22 01:54:57 +01:00
3aa9947f8e
Update README.md to remove possibility to proceed using YouTube Data API v3 CommentThreads: list endpoint with allThreadsRelatedToChannelId filter
As we want to retrieve as many comments as possible, we have to proceed video by video, as for instance [`3F8dFt8LsXY`](https://www.youtube.com/watch?v=3F8dFt8LsXY) has comments, but using the YouTube Data API v3 CommentThreads: list endpoint with the `allThreadsRelatedToChannelId` filter returns the following for `UCWIdqSQekeGmUWlSFeCiEnA`:
```json
{
  "error": {
    "code": 403,
    "message": "The video identified by the \u003ccode\u003e\u003ca href=\"/youtube/v3/docs/commentThreads/list#videoId\"\u003evideoId\u003c/a\u003e\u003c/code\u003e parameter has disabled comments.",
    "errors": [
      {
        "message": "The video identified by the \u003ccode\u003e\u003ca href=\"/youtube/v3/docs/commentThreads/list#videoId\"\u003evideoId\u003c/a\u003e\u003c/code\u003e parameter has disabled comments.",
        "domain": "youtube.commentThread",
        "reason": "commentsDisabled",
        "location": "videoId",
        "locationType": "parameter"
      }
    ]
  }
}
```
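
So the per-video approach has to expect that 403 and skip such videos; here is a sketch of a tolerant call (the helper name and paging style are mine, reusing the pattern of the snippets above):

```py
import requests, json

API_KEY = 'AIzaSy...'

def getVideoCommentThreads(videoId):
    # Page through CommentThreads: list for one video, treating `commentsDisabled` as "no comments".
    items = []
    pageToken = ''
    while True:
        url = (f'https://www.googleapis.com/youtube/v3/commentThreads?part=id,snippet,replies'
               f'&videoId={videoId}&maxResults=100&key={API_KEY}&pageToken={pageToken}')
        data = json.loads(requests.get(url).text)
        if 'error' in data:
            # The 403 shown above: comments are disabled on this video, so skip it.
            if any(e.get('reason') == 'commentsDisabled' for e in data['error'].get('errors', [])):
                return items
            raise Exception(data['error'])
        items += data['items']
        if 'nextPageToken' in data:
            pageToken = data['nextPageToken']
        else:
            break
    return items

print(len(getVideoCommentThreads('3F8dFt8LsXY')))
```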
2022-12-21 23:49:27 +01:00
c828b118d3
Add README.md with first sketching questions 2022-12-21 23:46:14 +01:00