Fork of https://github.com/akamhy/waybackpy: Wayback Machine API interface & a command-line tool.

waybackpy


Waybackpy is a Python package that interfaces with the Internet Archive's Wayback Machine API. Archive webpages and retrieve archived webpages easily.


Installation

Using pip:

pip install waybackpy

or directly from this repository using git:

pip install git+https://github.com/akamhy/waybackpy.git

Usage

As a Python package

Capturing aka saving a URL using save()

import waybackpy

url = "https://en.wikipedia.org/wiki/Multivariable_calculus"
user_agent = "Mozilla/5.0 (Windows NT 5.1; rv:40.0) Gecko/20100101 Firefox/40.0"

waybackpy_url_obj = waybackpy.Url(url, user_agent)
archive = waybackpy_url_obj.save()
print(archive)
https://web.archive.org/web/20201016171808/https://en.wikipedia.org/wiki/Multivariable_calculus

Try this out in your browser @ https://repl.it/@akamhy/WaybackPySaveExample
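
save() performs a network request, so it can fail if the Wayback Machine is busy or unreachable. A minimal defensive sketch (catching the broad Exception here, since the exact exception class raised may vary between waybackpy versions):

import waybackpy

url = "https://en.wikipedia.org/wiki/Multivariable_calculus"
user_agent = "Mozilla/5.0 (Windows NT 5.1; rv:40.0) Gecko/20100101 Firefox/40.0"

try:
    # save() returns the URL of the freshly created snapshot.
    archive = waybackpy.Url(url, user_agent).save()
    print(archive)
except Exception as error:  # narrow this to waybackpy's own exception class if preferred
    print("Archiving failed:", error)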

Retrieving the archive for a URL using archive_url

import waybackpy

url = "https://www.google.com/"
user_agent = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:40.0) Gecko/20100101 Firefox/40.0"

waybackpy_url_obj = waybackpy.Url(url, user_agent)
archive_url = waybackpy_url_obj.archive_url
print(archive_url)
https://web.archive.org/web/20201016153320/https://www.google.com/

Try this out in your browser @ https://repl.it/@akamhy/WaybackPyArchiveUrl

Retrieving the oldest archive for a URL using oldest()

import waybackpy

url = "https://www.google.com/"
user_agent = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:40.0) Gecko/20100101 Firefox/40.0"

waybackpy_url_obj = waybackpy.Url(url, user_agent)
oldest_archive_url = waybackpy_url_obj.oldest()
print(oldest_archive_url)
http://web.archive.org/web/19981111184551/http://google.com:80/

Try this out in your browser @ https://repl.it/@akamhy/WaybackPyOldestExample

Retrieving the newest archive for a URL using newest()

import waybackpy

url = "https://www.facebook.com/"
user_agent = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:39.0) Gecko/20100101 Firefox/39.0"

waybackpy_url_obj = waybackpy.Url(url, user_agent)
newest_archive_url = waybackpy_url_obj.newest()
print(newest_archive_url)
https://web.archive.org/web/20201016150543/https://www.facebook.com/

Try this out in your browser @ https://repl.it/@akamhy/WaybackPyNewestExample

Retrieving the JSON response for the availability API request

import waybackpy

url = "https://www.facebook.com/"
user_agent = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:39.0) Gecko/20100101 Firefox/39.0"

waybackpy_url_obj = waybackpy.Url(url, user_agent)
json_dict = waybackpy_url_obj.JSON
print(json_dict)
{'url': 'https://www.facebook.com/', 'archived_snapshots': {'closest': {'available': True, 'url': 'http://web.archive.org/web/20201016150543/https://www.facebook.com/', 'timestamp': '20201016150543', 'status': '200'}}}

Try this out in your browser @ https://repl.it/@akamhy/WaybackPyJSON
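
The JSON attribute returns a plain Python dict, so individual fields can be read with ordinary dict access. A minimal sketch, continuing the example above and using the archived_snapshots/closest keys shown in the output:

# Pull the closest snapshot out of the availability response.
closest = json_dict.get("archived_snapshots", {}).get("closest", {})
if closest.get("available"):
    print("Snapshot URL:", closest["url"])
    print("Timestamp:", closest["timestamp"])
else:
    print("No snapshot available for this URL.")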

Retrieving an archive close to a specified year, month, day, hour, and minute using near()

from waybackpy import Url

user_agent = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:38.0) Gecko/20100101 Firefox/38.0"
url = "https://github.com/"

waybackpy_url_obj = Url(url, user_agent)

# Do not zero-pad the year, month, day, hour, or minute arguments, e.g. for January use month=1, not month=01.
github_archive_near_2010 = waybackpy_url_obj.near(year=2010)
print(github_archive_near_2010)
https://web.archive.org/web/20101018053604/http://github.com:80/
github_archive_near_2011_may = waybackpy_url_obj.near(year=2011, month=5)
print(github_archive_near_2011_may)
https://web.archive.org/web/20110518233639/https://github.com/
github_archive_near_2015_january_26 = waybackpy_url_obj.near(year=2015, month=1, day=26)
print(github_archive_near_2015_january_26)
https://web.archive.org/web/20150125102636/https://github.com/
github_archive_near_2018_4_july_9_2_am = waybackpy_url_obj.near(year=2018, month=7, day=4, hour=9, minute=2)
print(github_archive_near_2018_4_july_9_2_am)
https://web.archive.org/web/20180704090245/https://github.com/

The package doesn't support a seconds argument yet. You are encouraged to create a PR ;)

Try this out in your browser @ https://repl.it/@akamhy/WaybackPyNearExample
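
Because near() accepts a year on its own, it is easy to sample a page's history over time. A small sketch continuing the example above (each call makes a network request, so keep the range modest):

# One snapshot per year for github.com, 2010 through 2014.
for year in range(2010, 2015):
    print(year, waybackpy_url_obj.near(year=year))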

Get the content of a webpage using get()

import waybackpy

google_url = "https://www.google.com/"

user_agent = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.85 Safari/537.36"

waybackpy_url_object = waybackpy.Url(google_url, user_agent)


# If no argument is passed to get(), it returns the source of the URL used to create the object.
current_google_url_source = waybackpy_url_object.get()
print(current_google_url_source)


# Force a new archive of google.com and get the source of the archived page.
# save() returns the archive URL as a string.
google_newest_archive_source = waybackpy_url_object.get(waybackpy_url_object.save())
print(google_newest_archive_source)


# oldest() also returns a str: the URL of the oldest archive of google.com.
google_oldest_archive_source = waybackpy_url_object.get(waybackpy_url_object.oldest())
print(google_oldest_archive_source)

Try this out in your browser @ https://repl.it/@akamhy/WaybackPyGetExample#main.py
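
Since get() returns the page source as a string, persisting it is plain file I/O. A minimal sketch continuing the example above (the filename is arbitrary):

# Write the oldest archived source of google.com to disk.
with open("google_oldest_archive.html", "w", encoding="utf-8") as fp:
    fp.write(google_oldest_archive_source)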

Count total archives for a URL using total_archives()

import waybackpy

URL = "https://en.wikipedia.org/wiki/Python (programming language)"
UA = "Mozilla/5.0 (iPad; CPU OS 8_1_1 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12B435 Safari/600.1.4"

waybackpy_url_object = waybackpy.Url(url=URL, user_agent=UA)

archive_count = waybackpy_url_object.total_archives()

print(archive_count) # total_archives() returns an int
2516

Try this out in your browser @ https://repl.it/@akamhy/WaybackPyTotalArchivesExample

List the URLs that the Wayback Machine knows and has archived for a domain name

  1. If alive=True is set, waybackpy will check every URL and return only those that are still alive. Don't use this with popular websites like Google, or it will take too long.
  2. To include URLs from subdomains, set subdomain=True.

import waybackpy

URL = "akamhy.github.io"
UA = "Mozilla/5.0 (iPad; CPU OS 8_1_1 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12B435 Safari/600.1.4"

waybackpy_url_object = waybackpy.Url(url=URL, user_agent=UA)
known_urls = waybackpy_url_object.known_urls(alive=True, subdomain=False) # alive and subdomain are optional.
print(known_urls) # known_urls() returns a list of URLs
['http://akamhy.github.io',
'https://akamhy.github.io/waybackpy/',
'https://akamhy.github.io/waybackpy/assets/css/style.css?v=a418a4e4641a1dbaad8f3bfbf293fad21a75ff11',
'https://akamhy.github.io/waybackpy/assets/css/style.css?v=f881705d00bf47b5bf0c58808efe29eecba2226c']

Try this out in your browser @ https://repl.it/@akamhy/WaybackPyKnownURLsToWayBackMachineExample#main.py
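
known_urls() returns an ordinary Python list, so saving the results for later processing is straightforward. A minimal sketch continuing the example above (the filename is arbitrary):

# Save the discovered URLs, one per line.
with open("akamhy.github.io-known-urls.txt", "w") as fp:
    fp.write("\n".join(known_urls))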

With the Command-line interface

Save

$ waybackpy --url "https://en.wikipedia.org/wiki/Social_media" --user_agent "my-unique-user-agent" --save
https://web.archive.org/web/20200719062108/https://en.wikipedia.org/wiki/Social_media

Try this out in your browser @ https://repl.it/@akamhy/WaybackPyBashSave

Get archive URL

$ waybackpy --url "https://en.wikipedia.org/wiki/SpaceX" --user_agent "my-unique-user-agent" --archive_url
https://web.archive.org/web/20201007132458/https://en.wikipedia.org/wiki/SpaceX

Try this out in your browser @ https://repl.it/@akamhy/WaybackPyBashArchiveUrl

Oldest archive

$ waybackpy --url "https://en.wikipedia.org/wiki/SpaceX" --user_agent "my-unique-user-agent" --oldest
https://web.archive.org/web/20040803000845/http://en.wikipedia.org:80/wiki/SpaceX

Try this out in your browser @ https://repl.it/@akamhy/WaybackPyBashOldest

Newest archive

$ waybackpy --url "https://en.wikipedia.org/wiki/YouTube" --user_agent "my-unique-user-agent" --newest
https://web.archive.org/web/20200606044708/https://en.wikipedia.org/wiki/YouTube

Try this out in your browser @ https://repl.it/@akamhy/WaybackPyBashNewest

Get JSON data from the availability API

$ waybackpy --url "https://en.wikipedia.org/wiki/SpaceX" --user_agent "my-unique-user-agent" --json

{'archived_snapshots': {'closest': {'timestamp': '20201007132458', 'status': '200', 'available': True, 'url': 'http://web.archive.org/web/20201007132458/https://en.wikipedia.org/wiki/SpaceX'}}, 'url': 'https://en.wikipedia.org/wiki/SpaceX'}

Try this out in your browser @ https://repl.it/@akamhy/WaybackPyBashJSON

Total number of archives

$ waybackpy --url "https://en.wikipedia.org/wiki/Linux_kernel" --user_agent "my-unique-user-agent" --total
853

Try this out in your browser @ https://repl.it/@akamhy/WaybackPyBashTotal

Archive near time

$ waybackpy --url facebook.com --user_agent "my-unique-user-agent" --near --year 2012 --month 5 --day 12
https://web.archive.org/web/20120512142515/https://www.facebook.com/

Try this out in your browser @ https://repl.it/@akamhy/WaybackPyBashNear

Get the source code

waybackpy --url google.com --user_agent "my-unique-user-agent" --get url # Prints the source code of the url
waybackpy --url google.com --user_agent "my-unique-user-agent" --get oldest # Prints the source code of the oldest archive
waybackpy --url google.com --user_agent "my-unique-user-agent" --get newest # Prints the source code of the newest archive
waybackpy --url google.com --user_agent "my-unique-user-agent" --get save # Save a new archive on wayback machine then print the source code of this archive.

Try this out in your browser @ https://repl.it/@akamhy/WaybackPyBashGet

Fetch all the URLs that the Wayback Machine knows for a domain

  1. You can add the '--alive' flag to fetch only alive links.
  2. You can add the '--subdomain' flag to include subdomains.
  3. The '--alive' and '--subdomain' flags can be used simultaneously.
  4. All links will be saved in a file created in the current working directory.

waybackpy --url akamhy.github.io --user_agent "my-user-agent" --known_urls
# Prints all known URLs under akamhy.github.io


waybackpy --url akamhy.github.io --user_agent "my-user-agent" --known_urls --alive
# Prints all known URLs under akamhy.github.io that are still alive (not dead links).


waybackpy --url akamhy.github.io --user_agent "my-user-agent" --known_urls --subdomain
# Prints all known URLs under akamhy.github.io, including subdomains.


waybackpy --url akamhy.github.io --user_agent "my-user-agent" --known_urls --subdomain --alive
# Prints all known URLs under akamhy.github.io, including subdomains, that are still alive (not dead links).

Try this out in your browser @ https://repl.it/@akamhy/WaybackpyKnownUrlsFromWaybackMachine#main.sh

Tests

Test cases are located in the tests directory of this repository.

To run tests locally:

pip install -U pytest pytest-cov codecov
cd tests
pytest --cov=../waybackpy
python -m codecov  # for reporting coverage on Codecov

Packaging

  1. Increment the version.

  2. Build the package: python setup.py sdist bdist_wheel.

  3. Sign and upload the package: twine upload -s dist/*.

License

Released under the MIT License. See LICENSE for details.