Commit Graph

103 Commits

a116b29df9
Make websockets.php able to process blocking treatments 2023-02-07 01:22:26 +01:00
1fe92ec2d0
Make a WebSocket example work with crawler.yt.lemnoslife.com 2023-01-31 01:05:09 +01:00
411a3db465
Run php-cs-fixer fix --rules=@PSR12 websocket.php 2023-01-31 00:57:06 +01:00
08b465753d
Rename chat.php to websocket.php 2023-01-30 22:24:02 +01:00
45c5d8a940
Copy-pasted the quick example from ratchetphp/Ratchet's README.md
5012dc9545 (a-quick-example)
2023-01-30 22:19:04 +01:00
668aa608ed
Add static website/index.php 2023-01-30 22:14:05 +01:00
c746d43ddf
Correct typo: the channel tab is LIVE, not LIVES 2023-01-25 01:00:29 +01:00
05cd243abd
Add comment in README.md about the usage of --no-keys or generating a YouTube Data API v3 key 2023-01-22 15:41:13 +01:00
9d40fef429
Introduce {,MAIN_}EXIT_WITH_ERROR macros for exiting with an error 2023-01-22 15:17:14 +01:00
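A minimal sketch of what such a pair of macros could look like (the macro bodies and the usage below are assumptions, not the repository's actual definitions):

```cpp
#include <cstdlib>
#include <iostream>

// Assumed shapes: EXIT_WITH_ERROR for arbitrary functions (calls exit),
// MAIN_EXIT_WITH_ERROR for main, where returning the failure code suffices.
#define EXIT_WITH_ERROR(message) { std::cerr << message << std::endl; exit(EXIT_FAILURE); }
#define MAIN_EXIT_WITH_ERROR(message) { std::cerr << message << std::endl; return EXIT_FAILURE; }

int main(int argc, char* argv[])
{
    if (argc < 2)
        MAIN_EXIT_WITH_ERROR("An argument is required!")
    return EXIT_SUCCESS;
}
```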
d34fade0cd
#11: Add the discovery of channels having commented on ended livestreams 2023-01-22 15:15:27 +01:00
68b1f9a77f
#11: Add support for current livestreams to discover channels 2023-01-22 04:00:11 +01:00
c17a33d181
Instead of looping over items when we expect only one, just use items[0] 2023-01-22 02:19:26 +01:00
59dc5676cc
Make PRINT not require specifying threadId 2023-01-22 02:04:03 +01:00
548a797ee8
#11: Treat COMMUNITY post comments to discover channels 2023-01-22 01:37:32 +01:00
46ef8146f8
Add to README.md the fact that, as documented in #30, this algorithm is only known to work fine on Linux 2023-01-21 22:20:45 +01:00
4133faad41
#11: Update channel CHANNELS tab treatment following YouTube-operational-API/issues/121 closure 2023-01-21 02:24:42 +01:00
fced9e0a3a
#11: Add the treatment of channels' tabs, but postpone unlisted videos treatment for now 2023-01-15 14:56:44 +01:00
f114aac0cf
#7: Make commentsCount and requestsPerChannel compatible with multithreading 2023-01-15 14:31:55 +01:00
7456685f2b
#11: Add a first iteration for the CHANNELS retrieval 2023-01-15 02:19:31 +01:00
270c48da02
#11: Add --youtube-operational-api-instance-url parameter and use exit(EXIT_{SUCCESS, FAILURE}) instead of exit({0, 1}) 2023-01-15 00:49:32 +01:00
f6c11b54f3
Fix #26: Keep efficient search algorithm while keeping order (notably of the starting set) 2023-01-14 15:14:24 +01:00
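A common way to keep O(1) membership tests while preserving discovery order, notably of the starting set, is to pair a hash set with a FIFO queue. A sketch (identifiers are illustrative, not necessarily the repository's):

```cpp
#include <queue>
#include <string>
#include <unordered_set>

// The set answers "was this channel already seen?" in O(1), while the queue
// keeps the order in which channels were discovered, starting set first.
std::unordered_set<std::string> channelsSeen;
std::queue<std::string> channelsToTreat;

void addChannel(const std::string& channelId)
{
    // insert().second is true only if the channel wasn't already present.
    if (channelsSeen.insert(channelId).second)
        channelsToTreat.push(channelId);
}
```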
27cd5c3a64
Fix #24: Stop using macros for user inputs, notably to make releases possible 2023-01-08 18:26:20 +01:00
eb805f5ced
Fix #6: Add support for multiple keys to be resilient against exceeded quota errors 2023-01-08 17:59:08 +01:00
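A minimal sketch of the idea, assuming a simple round-robin over the configured keys whenever a request fails with an exceeded-quota error (names and placeholder keys are illustrative):

```cpp
#include <string>
#include <vector>

// Rotating to the next key on a quotaExceeded response means a single
// exhausted key no longer stops the whole retrieval.
std::vector<std::string> apiKeys = {"AIzaSy...", "AIzaSy..."};
size_t currentKeyIndex = 0;

const std::string& switchToNextKey()
{
    currentKeyIndex = (currentKeyIndex + 1) % apiKeys.size();
    return apiKeys[currentKeyIndex];
}
```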
d6f6b26361
Fix #23: YouTube Data API v3 PlaylistItems: list endpoint returns playlistNotFound error for regular uploads playlists 2023-01-08 16:31:57 +01:00
b3779fe49a
Fix #20: YouTube Data API v3 rarely but suddenly returns a commentsDisabled error, which involves an unwanted method switch
Also modified the compression command, as I got `sh: 1: zip: Argument list too long` when compressing the 248,868 JSON files of the most-subscribed French channel.
2023-01-08 15:43:27 +01:00
3ae0f4e924
Make all Python scripts executable and add findAlreadyTreatedCommentsCount.py to find how many comments were already treated 2023-01-07 15:45:31 +01:00
3758405f52
Add a note about the timing percentage of findLatestTreatedCommentsForChannelsBeingTreated.py going backward 2023-01-07 15:35:12 +01:00
71e4bd95a9
Fix #9: Make sure that in case of an error returned by the YouTube Data API v3, the algorithm treats it correctly
Note that in case of error, the algorithm used to skip the received content, as if there were just no `items` in it.
2023-01-06 20:55:32 +01:00
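A sketch of the kind of check involved, using nlohmann::json as the project does (the function name and exact handling are assumptions):

```cpp
#include <iostream>
#include <nlohmann/json.hpp>

using json = nlohmann::json;

// Instead of silently acting as if no `items` were returned, detect the
// `error` object that YouTube Data API v3 puts in failing responses.
bool isApiError(const json& data)
{
    if (data.contains("error"))
    {
        std::cerr << "API error: " << data["error"]["message"].get<std::string>() << std::endl;
        return true;
    }
    return false;
}
```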
34bbc216f6
Fix #15: Provide an algorithm to retrieve the list of the 100 French channels with the most subscribers (and provide the list too) 2023-01-06 18:06:00 +01:00
baec8fcb6c
#7: Remove remaining undefined behavior due to missing mutex use 2023-01-06 18:00:51 +01:00
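A minimal sketch of the principle, assuming a counter shared between worker threads:

```cpp
#include <mutex>

// Unsynchronized concurrent writes to a shared variable are undefined
// behavior; a lock_guard makes each increment exclusive.
std::mutex commentsCountMutex;
unsigned long long commentsCount = 0;

void incrementCommentsCount()
{
    std::lock_guard<std::mutex> lock(commentsCountMutex);
    commentsCount++;
}
```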
773f86c551
Fix #17: Add to stdout live statistics of the number of comments treated per second 2023-01-06 17:55:16 +01:00
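One way to implement such live statistics (a sketch under assumptions, not necessarily the repository's approach) is a dedicated thread printing the counter delta every second:

```cpp
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

std::atomic<unsigned long long> commentsCount{0}; // incremented by the workers

// Every second, print how many comments were treated since the last tick.
void commentsPerSecondPrinter()
{
    unsigned long long lastCommentsCount = 0;
    while (true)
    {
        std::this_thread::sleep_for(std::chrono::seconds(1));
        const unsigned long long current = commentsCount;
        std::cout << "comments treated per second: " << current - lastCommentsCount << std::endl;
        lastCommentsCount = current;
    }
}
```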
f436007836
Fix #16: Provide an algorithm to determine the progress of retrieving comments for huge YouTube channels 2023-01-06 17:51:00 +01:00
dfbf38b071
#1: Add GNU AGPLv3 license 2023-01-06 16:09:12 +01:00
292dd8919e
Add try/catch around json parser
As I got:
```
terminate called after throwing an instance of 'nlohmann::detail::parse_error'
terminate called recursively
  what():  [json.exception.parse_error.101] parse error at line 1, column 1: syntax error while parsing value - unexpected end of input; expected '[', '{', or a literal
terminate called recursively
```
2023-01-06 00:31:05 +01:00
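The corresponding defensive pattern with nlohmann::json looks roughly like this (the empty-object fallback is an assumption):

```cpp
#include <iostream>
#include <nlohmann/json.hpp>
#include <string>

using json = nlohmann::json;

// Wrap the parse so a truncated or empty payload no longer terminates the
// program with an uncaught nlohmann::detail::parse_error.
json parseOrEmptyObject(const std::string& content)
{
    try
    {
        return json::parse(content);
    }
    catch (const json::parse_error& error)
    {
        std::cerr << "JSON parse error: " << error.what() << std::endl;
        return json::object();
    }
}
```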
dab4c8ff69
Modify removeChannelsBeingTreated.py to be more resilient against non-existing files in the treatment process 2023-01-04 03:10:28 +01:00
9d5c9fde2a
#2: Add compression to channels/ folder
The following Python script can be used to compress an existing uncompressed
`channels/` folder.

```py
import os, shutil

path = 'channels/'

os.chdir(path)

# Iterate over the direct subdirectories of channels/, one per channel id.
channelIds = next(os.walk('.'))[1]
for channelIndex, channelId in enumerate(channelIds):
    print(f'{channelIndex} / {len(channelIds)}: {channelId}')
    # Zip the channel folder, then remove the uncompressed original.
    shutil.make_archive(channelId, 'zip', channelId)
    shutil.rmtree(channelId)
```
2023-01-04 03:06:33 +01:00
f201ae7a91
Make "#7: Add multi-threading" compatible with my Debian setup 2023-01-04 02:51:40 +01:00
4cae7e09d1
Add {removeChannelsBeingTreated, findTreatedChannelWithMost{Comments, Subscribers}}.py 2023-01-04 02:41:07 +01:00
e4b4ce21a2
Fix #7: Add multi-threading 2023-01-03 04:56:19 +01:00
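A skeleton of the multi-threaded structure (THREADS_NUMBER and treatChannels are illustrative names; each worker would pop channels from a shared, mutex-guarded structure):

```cpp
#include <iostream>
#include <thread>
#include <vector>

// Stubbed worker: the real one would treat channels until none remain.
void treatChannels(unsigned short threadId)
{
    std::cout << "thread " << threadId << " started" << std::endl;
}

int main()
{
    const unsigned short THREADS_NUMBER = 10;
    std::vector<std::thread> threads;
    for (unsigned short threadId = 0; threadId < THREADS_NUMBER; threadId++)
        threads.emplace_back(treatChannels, threadId);
    for (std::thread& thread : threads)
        thread.join();
    return 0;
}
```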
a2990c7699
Fix #8: Support comments-disabled channels
Tested with `UCWIdqSQekeGmUWlSFeCiEnA`, which correctly treated the 36 comments of its only comments-enabled video `3F8dFt8LsXY`.

Note that this commit doesn't support comments-disabled channels with more than 20,000 videos.
2023-01-03 02:56:07 +01:00
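The fallback method presumably lists the channel's uploads playlist and retrieves comment threads video by video, as the comparison script further down this log also does; the playlist id derives from the channel id:

```cpp
#include <string>

// "UU" + the channel id without its "UC" prefix is the uploads playlist id.
// PlaylistItems: list only serves up to 20,000 items, hence the limitation
// on channels with more than 20,000 videos mentioned above.
std::string uploadsPlaylistId(const std::string& channelId)
{
    return "UU" + channelId.substr(2);
}
```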
923c14a77b
#2: Add data logging 2023-01-02 19:46:32 +01:00
73a9dea023
Apply astyle formatting to main.cpp 2023-01-02 18:31:16 +01:00
938ae4b0fb
Fix #4: Provide a version relying on the no-key service of https://yt.lemnoslife.com 2023-01-02 18:30:18 +01:00
c50a82df1b
Make compatible with Debian
More precisely, make compatible with `gcc version 10.2.1 20210110 (Debian 10.2.1-6)`
2023-01-02 18:23:30 +01:00
36f1fb9e83
Add progression save and use spaces instead of tabs 2022-12-22 06:18:22 +01:00
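A minimal sketch of a progression save, assuming the channels still to treat are dumped to a file that a later run can reload (file name and format are assumptions):

```cpp
#include <fstream>
#include <queue>
#include <string>

// Write the remaining channels to disk so an interrupted run can resume
// instead of starting over; the queue is passed by value to drain a copy.
void saveProgression(std::queue<std::string> channelsToTreat)
{
    std::ofstream file("progression.txt");
    while (!channelsToTreat.empty())
    {
        file << channelsToTreat.front() << '\n';
        channelsToTreat.pop();
    }
}
```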
934954092a
Add time to logging 2022-12-22 05:47:16 +01:00
eaae954e1b
Add resilience to missing authorChannelId in main.cpp 2022-12-22 05:41:38 +01:00
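The resilience presumably amounts to guarding the field access, since some comments have no author at all (see the note in the initial commit below); a sketch with nlohmann::json:

```cpp
#include <nlohmann/json.hpp>
#include <string>

using json = nlohmann::json;

// authorChannelId can be absent from a comment's snippet, so don't assume
// the field is there before reading its value.
std::string authorChannelIdOrEmpty(const json& commentSnippet)
{
    if (commentSnippet.contains("authorChannelId"))
        return commentSnippet["authorChannelId"]["value"].get<std::string>();
    return "";
}
```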
2ffc1d0e5d
Add main.cpp, Makefile and channelsToTreat.txt
Note that running this algorithm ends up with channel [`UC-99odscxh1xxTyxHyXuRrg`](https://www.youtube.com/channel/UC-99odscxh1xxTyxHyXuRrg), more precisely the video [`Tq5aPNzfYcg`](https://www.youtube.com/watch?v=Tq5aPNzfYcg), and more precisely the comment [`Ugx-TlSq6SNCbOX04mx4AaABAg`](https://www.youtube.com/watch?v=Tq5aPNzfYcg&lc=Ugx-TlSq6SNCbOX04mx4AaABAg) [which doesn't have any author](https://yt.lemnoslife.com/noKey/comments?part=snippet&id=Ugx-TlSq6SNCbOX04mx4AaABAg)...
2022-12-22 05:20:32 +01:00
53acda6abe
Update README.md to remove the question about whether or not both methods return the same comments, as it is the case
More precisely, I used the following algorithm with these three channels:
channel id               | 1st method            | 2nd method
-------------------------|-----------------------|-----------
UCt5USYpzzMCYhkirVQGHwKQ | 16                    | 16
UCUo1RqYV8tGjV38sQ8S5p9A | 58,165                | 58,165
UCWIdqSQekeGmUWlSFeCiEnA | *error* (as expected) | 27

```py
"""
Algorithm comparing comments count using:
1. CommentThreads: list with allThreadsRelatedToChannelId filter
2. PlaylistItems: list and CommentThreads: list
Note that the second approach isn't *atomic*, so counts will differ if some comments are posted while retrieving data.
"""

import requests, json

CHANNEL_ID = 'UC...'
API_KEY = 'AIzaSy...'

def getJSON(url, firstTry = True):
    if firstTry:
        url = 'https://www.googleapis.com/youtube/v3/' + url + f'&key={API_KEY}'
    try:
        content = requests.get(url).text
    except requests.exceptions.RequestException:
        # Network error: retry with the already-keyed URL.
        print('retry')
        return getJSON(url, False)
    data = json.loads(content)
    return data

items = []
pageToken = ''
while True:
    # I verified that using `allThreadsRelatedToChannelId` doesn't return comments of the `COMMUNITY` tab
    data = getJSON(f'commentThreads?part=id,snippet,replies&allThreadsRelatedToChannelId={CHANNEL_ID}&maxResults=100&pageToken={pageToken}')
    items += data['items']
    # Once we have the top-level comments, both methods rely on the same Comments: list endpoint for replies, so if the replies *count* is correct, both are fine
    """for item in data['items']:
        if 'replies' in item:
            if len(item['replies']['comments']) >= 5:
                print('should consider replies too!')"""
    print(len(items))
    if 'nextPageToken' in data:
        pageToken = data['nextPageToken']
    else:
        break

print(len(items))

PLAYLIST_ID = 'UU' + CHANNEL_ID[2:]

videoIds = []
pageToken = ''
while True:
    data = getJSON(f'playlistItems?part=snippet&playlistId={PLAYLIST_ID}&maxResults=50&pageToken={pageToken}')
    for item in data['items']:
        videoIds += [item['snippet']['resourceId']['videoId']]
    print(len(videoIds))
    if 'nextPageToken' in data:
        pageToken = data['nextPageToken']
    else:
        break

print(len(videoIds))
items = []

for videoIndex, videoId in enumerate(videoIds):
    pageToken = ''
    while True:
        data = getJSON(f'commentThreads?part=id,snippet,replies&videoId={videoId}&maxResults=100&pageToken={pageToken}')
        if 'items' in data:
            items += data['items']
            # Repeat the replies check, as it could be the case here and not above.
            """for item in data['items']:
                if 'replies' in item:
                    if len(item['replies']['comments']) >= 5:
                        print('should consider replies too!')"""
        print(videoIndex, len(videoIds), len(items))
        if 'nextPageToken' in data:
            pageToken = data['nextPageToken']
        else:
            break

print(len(items))
```
2022-12-22 03:18:25 +01:00
d776c09fec
Update README.md to clean notes concerning optimized approaches 2022-12-22 02:02:48 +01:00