Fork of https://github.com/akamhy/waybackpy Wayback Machine API interface & a command-line tool

waybackpy


Internet Archive Wayback Machine

Waybackpy is a Python library that interfaces with the Internet Archive's Wayback Machine API. Archive pages and retrieve archived pages easily.


Installation

Using pip:

pip install waybackpy

Usage

Capturing aka saving a URL using save()

import waybackpy
# Capture a new archive of the URL on the Wayback Machine.
target_url = waybackpy.Url("https://github.com/akamhy/waybackpy", user_agent="My-cool-user-agent")
archived_url = target_url.save()
print(archived_url)

This should print a URL similar to the following archived URL:

https://web.archive.org/web/20200504141153/https://github.com/akamhy/waybackpy
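The 14-digit segment in the archived URL is the snapshot's UTC timestamp in YYYYMMDDhhmmss format. As a quick illustration (standard library only, no waybackpy required), it can be extracted and parsed like this:

```python
# Illustration only: pull the 14-digit Wayback Machine timestamp out of
# an archived URL and parse it into a datetime object.
from datetime import datetime
import re

archived_url = "https://web.archive.org/web/20200504141153/https://github.com/akamhy/waybackpy"

# The timestamp is the 14 digits that follow "/web/".
match = re.search(r"/web/(\d{14})/", archived_url)
snapshot_time = datetime.strptime(match.group(1), "%Y%m%d%H%M%S")
print(snapshot_time)  # 2020-05-04 14:11:53
```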

Receiving the oldest archive for a URL using oldest()

import waybackpy
# Retrieve the oldest archive of the URL from the Wayback Machine.
target_url = waybackpy.Url("https://www.google.com/", "My-cool-user-agent")
oldest_archive = target_url.oldest()
print(oldest_archive)

This should print the oldest available archive for https://www.google.com/:

http://web.archive.org/web/19981111184551/http://google.com:80/

Receiving the newest archive for a URL using newest()

import waybackpy
# Retrieve the newest/latest archive of the URL from the Wayback Machine.
target_url = waybackpy.Url(url="https://www.microsoft.com/en-us", user_agent="My-cool-user-agent")
newest_archive = target_url.newest()
print(newest_archive)

This should print the newest available archive for https://www.microsoft.com/en-us, something like this:

http://web.archive.org/web/20200429033402/https://www.microsoft.com/en-us/

Receiving an archive close to a specified year, month, day, hour, and minute using near()

import waybackpy
# Retrieve the archive closest to a specified year.
# Supported arguments: year, month, day, hour, and minute.
target_url = waybackpy.Url("https://www.facebook.com/", "Any-User-Agent")
archive_near_year = target_url.near(year=2010)
print(archive_near_year)

This returns: http://web.archive.org/web/20100504071154/http://www.facebook.com/

Please note that if you specify only the year, the current month and day are used as defaults for month and day. Passing just the year does not return the archive closest to January, but the one closest to the month in which you run the package. To target a specific month, set month to 1 for January, 2 for February, and so on.

Do not zero-pad the year, month, day, hour, and minute arguments. For example, for January set month = 1, not month = 01.
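Zero-padding is unnecessary because the Wayback Machine's padded timestamp strings can be built from plain ints. This is not waybackpy's internal code, just a sketch of how an unpadded year, month, day, hour, and minute assemble into a 14-digit-style timestamp (and in Python 3, a literal like 01 is a SyntaxError anyway):

```python
# Sketch: string formatting zero-pads plain ints for you, so the
# arguments themselves never need leading zeros.
year, month, day, hour, minute = 2010, 1, 5, 7, 4

timestamp = "{:04d}{:02d}{:02d}{:02d}{:02d}".format(year, month, day, hour, minute)
print(timestamp)  # 201001050704
```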

Get the content of a webpage using get()

import waybackpy
# Retrieve the source of a webpage from any URL, including archived URLs.
# No other libraries needed :)
# Supported arguments: encoding and user_agent.
target = waybackpy.Url("google.com", "any-user-agent")
oldest_url = target.oldest()
webpage = target.get(oldest_url)  # Fetch the source of the oldest archive of google.com.
print(webpage)

This should print the source code of the oldest archive of google.com. If no URL is passed to get(), it retrieves the source code of google.com itself, not an archive.

Count total archives for a URL using total_archives()

from waybackpy import Url
# Count the total number of archives of a URL on the Wayback Machine.
count = Url("https://en.wikipedia.org/wiki/Python (programming language)", "User-Agent").total_archives()
print(count)

This should print an integer (int): the total number of archives of the URL on archive.org.
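A URL containing spaces or parentheses, like the Wikipedia title above, is normally percent-encoded before being sent to a web API. As an aside (an illustration with the standard library, not part of waybackpy's API), this is how such a URL gets encoded:

```python
# Illustration: percent-encode a URL with spaces and parentheses,
# keeping the scheme and path separators intact.
from urllib.parse import quote

raw = "https://en.wikipedia.org/wiki/Python (programming language)"
encoded = quote(raw, safe=":/")  # keep ":" and "/" unescaped
print(encoded)
# https://en.wikipedia.org/wiki/Python%20%28programming%20language%29
```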

Tests

Dependency

  • None, just the Python standard library (re, json, urllib, and datetime). Both Python 2 and Python 3 are supported :)

License

MIT License