The algorithm:
To retrieve as many YouTube video ids as possible, and hence as many video captions as possible, we need to retrieve as many YouTube channels as possible. So we discover the YouTube channel graph with a breadth-first search, proceeding as follows:
1. Provide a starting set of channels.
2. Given a channel, retrieve other channels from its content by using the YouTube Data API v3 and the YouTube operational API, then repeat step 2 for each newly retrieved channel.
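The breadth-first traversal above can be sketched as follows. This is a minimal illustration, not the actual implementation (which lives in `main.cpp`): `get_related_channels` is a hypothetical stand-in for the YouTube Data API v3 and YouTube operational API calls that extract channel ids from a channel's content.

```python
from collections import deque

def get_related_channels(channel_id):
    # Hypothetical stand-in for the API calls that discover
    # channel ids from a channel's content (comments,
    # featured channels, video descriptions, ...).
    toy_graph = {"UCa": ["UCb", "UCc"], "UCb": ["UCc"], "UCc": []}
    return toy_graph.get(channel_id, [])

def discover_channels(starting_channels):
    """Breadth-first search over the YouTube channel graph."""
    treated = set(starting_channels)
    queue = deque(starting_channels)
    while queue:
        channel = queue.popleft()
        for neighbor in get_related_channels(channel):
            if neighbor not in treated:
                treated.add(neighbor)
                queue.append(neighbor)
    return treated
```

A queue-based BFS like this guarantees each channel is treated exactly once, even when channels reference each other.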
A ready-to-use website instance of this project is hosted at: https://crawler.yt.lemnoslife.com
See the Wiki for more details.
Running the YouTube graph discovery algorithm:
Because of the current compression mechanism, Linux is the only known OS able to run this algorithm.
To clone the repository, run:
git clone https://gitea.lemnoslife.com/Benjamin_Loison/YouTube_captions_search_engine
Move to the cloned repository by running:
cd YouTube_captions_search_engine/
To install this project's dependencies on an apt-based Linux distribution, make sure pip is installed, then run:
sudo apt install nlohmann-json3-dev
pip install yt-dlp
To compile the YouTube graph discovery algorithm, run:
make
To see the command line arguments of the algorithm, run:
./youtubeCaptionsSearchEngine -h
To run the YouTube graph discovery algorithm, run:
./youtubeCaptionsSearchEngine
Unless you provide the argument --youtube-operational-api-instance-url https://yt.lemnoslife.com
, you have to host your own instance of the YouTube operational API.
Unless you provide the argument --no-keys
, you have to provide at least one YouTube Data API v3 key in keys.txt
.
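For instance, to rely on the public YouTube operational API instance and skip YouTube Data API v3 keys entirely, the two arguments above can be combined (a usage sketch, assuming the binary was compiled as described):

```shell
./youtubeCaptionsSearchEngine \
    --youtube-operational-api-instance-url https://yt.lemnoslife.com \
    --no-keys
```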
Hosting the website enabling users to make requests:
Move to the website/
folder by running:
cd website/
To install its dependencies, make sure composer
is installed, then run:
composer install
pip install webvtt-py
Add the following configuration to your Nginx site configuration:
# Make `index.php` the default webpage of your website.
index index.php;
# Allow end-users to retrieve the content of a file within a channel zip.
location /channels {
rewrite ^(.*)\.zip$ /channels.php;
rewrite ^(.*)\.zip/(.*)\.json$ /channels.php;
rewrite ^(.*)\.zip/(.*)\.txt$ /channels.php;
rewrite ^(.*)\.zip/(.*)\.vtt$ /channels.php;
# Allow end-users to list `channels/` content.
autoindex on;
}
# Prevent end-users from accessing other end-users' requests.
location /users {
deny all;
}
# Configure the websocket endpoint.
location /websocket {
# switch off logging
access_log off;
# redirect all HTTP traffic to localhost
proxy_pass http://localhost:4430;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# WebSocket support (nginx 1.4)
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# timeout extension, possibly keep this short if using a ping strategy
proxy_read_timeout 99999s;
}
Start the websocket worker by running:
php websockets.php