lint README.md

This commit is contained in:
Akash Mahanty 2020-10-17 12:01:49 +05:30
parent 7aef50428f
commit 50e3154a4e


@@ -275,7 +275,7 @@ print(archive_count) # total_archives() returns an int
<sub>Try this out in your browser @ <https://repl.it/@akamhy/WaybackPyTotalArchivesExample></sub>
#### List of URLs that Wayback Machine knows and has archived for a domain name
1) If alive=True is set, waybackpy checks every known URL and returns only those that are still alive. Don't use this with popular websites like Google; checking every URL would take too long.
2) To include URLs from subdomains, set subdomain=True.
@@ -341,7 +341,7 @@ https://web.archive.org/web/20200606044708/https://en.wikipedia.org/wiki/YouTube
#### Get JSON data from the availability API
```bash
waybackpy --url "https://en.wikipedia.org/wiki/SpaceX" --user_agent "my-unique-user-agent" --json
```
@@ -382,7 +382,8 @@ waybackpy --url google.com --user_agent "my-unique-user-agent" --get save # Save
<sub>Try this out in your browser @ <https://repl.it/@akamhy/WaybackPyBashGet></sub>
#### Fetch all the URLs that the Wayback Machine knows for a domain
1) Add the `--alive` flag to fetch only alive links.
2) Add the `--subdomain` flag to include subdomains.
3) The `--alive` and `--subdomain` flags can be used simultaneously.
@@ -413,9 +414,11 @@ waybackpy --url akamhy.github.io --user_agent "my-user-agent" --known_urls --sub
<sub>Try this out in your browser @ <https://repl.it/@akamhy/WaybackpyKnownUrlsFromWaybackMachine#main.sh></sub>
## Tests
Tests live [here](https://github.com/akamhy/waybackpy/tree/master/tests). To run them locally:
```bash
pip install -U pytest
pip install codecov