Compare commits

34 Commits

SHA1: 55d8687566, 0fa28527af, 68259fd2d9, e7086a89d3, e39467227c, ba840404cf, 8fbd2d9e55, eebf6043de, 3d3b09d6d8, ef15b5863c, 256c0cdb6b, 12c72a8294, 0ad27f5ecc, 700b60b5f8, 11032596c8, 9727f92168, d2893fec13, f1353b2129, c76a95ef90, 62d88359ce, 9942c474c9, dfb736e794, 84d1766917, 9d3cdfafb3, 20a16bfa45, f2112c73f6, 9860527d96, 9ac1e877c8, f881705d00, f015c3f4f3, 42ac399362, e9d010c793, 58a6409528, 7ca2029158
README.md (138 changed lines)
@@ -1,5 +1,6 @@

# waybackpy

[Build Status](https://travis-ci.org/akamhy/waybackpy)
[Downloads](https://pypistats.org/packages/waybackpy)
[Release](https://github.com/akamhy/waybackpy/releases)
[Codacy Badge](https://www.codacy.com/manual/akamhy/waybackpy?utm_source=github.com&utm_medium=referral&utm_content=akamhy/waybackpy&utm_campaign=Badge_Grade)

@@ -10,167 +11,132 @@

[Maintenance](https://github.com/akamhy/waybackpy/graphs/commit-activity)
[codecov](https://codecov.io/gh/akamhy/waybackpy)

-The waybackpy is a python wrapper for [Internet Archive](https://en.wikipedia.org/wiki/Internet_Archive)'s [Wayback Machine](https://en.wikipedia.org/wiki/Wayback_Machine).
+Waybackpy is a Python library that interfaces with the [Internet Archive](https://en.wikipedia.org/wiki/Internet_Archive)'s [Wayback Machine](https://en.wikipedia.org/wiki/Wayback_Machine) API. Archive pages and retrieve archived pages easily.
Table of contents
=================
<!--ts-->

-* [Installation](https://github.com/akamhy/waybackpy#installation)
+* [Installation](#installation)

-* [Usage](https://github.com/akamhy/waybackpy#usage)
-  * [Saving an url using save()](https://github.com/akamhy/waybackpy#capturing-aka-saving-an-url-using-save)
-  * [Receiving the oldest archive for an URL Using oldest()](https://github.com/akamhy/waybackpy#receiving-the-oldest-archive-for-an-url-using-oldest)
-  * [Receiving the recent most/newest archive for an URL using newest()](https://github.com/akamhy/waybackpy#receiving-the-newest-archive-for-an-url-using-newest)
-  * [Receiving archive close to a specified year, month, day, hour, and minute using near()](https://github.com/akamhy/waybackpy#receiving-archive-close-to-a-specified-year-month-day-hour-and-minute-using-near)
-  * [Get the content of webpage using get()](https://github.com/akamhy/waybackpy#get-the-content-of-webpage-using-get)
-  * [Count total archives for an URL using total_archives()](https://github.com/akamhy/waybackpy#count-total-archives-for-an-url-using-total_archives)
+* [Usage](#usage)
+  * [Saving an url using save()](#capturing-aka-saving-an-url-using-save)
+  * [Receiving the oldest archive for an URL Using oldest()](#receiving-the-oldest-archive-for-an-url-using-oldest)
+  * [Receiving the recent most/newest archive for an URL using newest()](#receiving-the-newest-archive-for-an-url-using-newest)
+  * [Receiving archive close to a specified year, month, day, hour, and minute using near()](#receiving-archive-close-to-a-specified-year-month-day-hour-and-minute-using-near)
+  * [Get the content of webpage using get()](#get-the-content-of-webpage-using-get)
+  * [Count total archives for an URL using total_archives()](#count-total-archives-for-an-url-using-total_archives)

-* [Tests](https://github.com/akamhy/waybackpy#tests)
+* [Tests](#tests)

-* [Dependency](https://github.com/akamhy/waybackpy#dependency)
+* [Dependency](#dependency)

-* [License](https://github.com/akamhy/waybackpy#license)
+* [License](#license)

<!--te-->

## Installation

Using [pip](https://en.wikipedia.org/wiki/Pip_(package_manager)):

**pip install waybackpy**

```bash
pip install waybackpy
```
## Usage

#### Capturing aka Saving an url Using save()

```diff
+ waybackpy.save(url, UA=user_agent)
```

> url is mandatory. UA is not, but highly recommended.

```python
import waybackpy
# Capturing a new archive on Wayback machine.
# Default user-agent (UA) is "waybackpy python package", if not specified in the call.
-archived_url = waybackpy.save("https://github.com/akamhy/waybackpy", UA = "Any-User-Agent")
+target_url = waybackpy.Url("https://github.com/akamhy/waybackpy", user_agent="My-cool-user-agent")
+archived_url = target_url.save()
print(archived_url)
```

-This should print something similar to the following archived URL:
+This should print an URL similar to the following archived URL:

-> <https://web.archive.org/web/20200504141153/https://github.com/akamhy/waybackpy>
+<https://web.archive.org/web/20200504141153/https://github.com/akamhy/waybackpy>
#### Receiving the oldest archive for an URL Using oldest()

```diff
+ waybackpy.oldest(url, UA=user_agent)
```

> url is mandatory. UA is not, but highly recommended.

```python
import waybackpy
# retrieving the oldest archive on Wayback machine.
# Default user-agent (UA) is "waybackpy python package", if not specified in the call.
-oldest_archive = waybackpy.oldest("https://www.google.com/", UA = "Any-User-Agent")
+target_url = waybackpy.Url("https://www.google.com/", "My-cool-user-agent")
+oldest_archive = target_url.oldest()
print(oldest_archive)
```

-This returns the oldest available archive for <https://google.com>.
+This should print the oldest available archive for <https://google.com>.

-> <http://web.archive.org/web/19981111184551/http://google.com:80/>
+<http://web.archive.org/web/19981111184551/http://google.com:80/>
#### Receiving the newest archive for an URL using newest()

```diff
+ waybackpy.newest(url, UA=user_agent)
```

> url is mandatory. UA is not, but highly recommended.

```python
import waybackpy
-# retrieving the newest archive on Wayback machine.
-# Default user-agent (UA) is "waybackpy python package", if not specified in the call.
-newest_archive = waybackpy.newest("https://www.microsoft.com/en-us", UA = "Any-User-Agent")
+# retrieving the newest/latest archive on Wayback machine.
+# Default user-agent (UA) is "waybackpy python package", if not specified in the call.
+target_url = waybackpy.Url(url="https://www.google.com/", user_agent="My-cool-user-agent")
+newest_archive = target_url.newest()
print(newest_archive)
```

-This returns the newest available archive for <https://www.microsoft.com/en-us>, something just like this:
+This prints the newest available archive for <https://www.microsoft.com/en-us>, something just like this:

-> <http://web.archive.org/web/20200429033402/https://www.microsoft.com/en-us/>
+<http://web.archive.org/web/20200429033402/https://www.microsoft.com/en-us/>
#### Receiving archive close to a specified year, month, day, hour, and minute using near()

```diff
+ waybackpy.near(url, year=2020, month=1, day=1, hour=1, minute=1, UA=user_agent)
```

> url is mandatory. year, month, day, hour and minute are optional arguments. UA is not mandatory, but highly recommended.

```python
import waybackpy
# retrieving the closest archive from a specified year.
# Default user-agent (UA) is "waybackpy python package", if not specified in the call.
# supported arguments are year, month, day, hour and minute
-archive_near_year = waybackpy.near("https://www.facebook.com/", year=2010, UA ="Any-User-Agent")
+target_url = waybackpy.Url("https://www.facebook.com/", "Any-User-Agent")
+archive_near_year = target_url.near(year=2010)
print(archive_near_year)
```

returns: <http://web.archive.org/web/20100504071154/http://www.facebook.com/>

```waybackpy.near("https://www.facebook.com/", year=2010, month=1, UA ="Any-User-Agent")``` returns: <http://web.archive.org/web/20101111173430/http://www.facebook.com//>

```waybackpy.near("https://www.oracle.com/index.html", year=2019, month=1, day=5, UA ="Any-User-Agent")``` returns: <http://web.archive.org/web/20190105054437/https://www.oracle.com/index.html>

-> Please note that if you only specify the year, the current month and day are the default arguments for month and day respectively. Do not expect that just passing the year will return the archive closest to January; it returns the archive closest to the current month of whatever date you run the package on. If you are using it in July 2018 and you call ```waybackpy.near("https://www.facebook.com/", year=2011, UA ="Any-User-Agent")```, you are returned the archive nearest to July 2011, not January 2011. You need to specify the month "1" for January.
+> Please note that if you only specify the year, the current month and day are the default arguments for month and day respectively. Just passing the year will not return the archive closest to January but the one closest to the current month you are using the package in. You need to specify month 1 for January, 2 for February and so on.

> Do not pad (don't use zeros in the month, year, day, minute, and hour arguments). e.g. For January, set month = 1 and not month = 01.
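For example, to pin the returned archive to January rather than the current month, pass month explicitly. A minimal sketch using the Url API introduced above (the user agent string is a placeholder):

```python
import waybackpy

# Ask for the archive closest to January 2011 by passing month=1
# explicitly instead of relying on the current-month default.
target_url = waybackpy.Url("https://www.facebook.com/", "My-cool-user-agent")
january_archive = target_url.near(year=2011, month=1)
print(january_archive)
```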
#### Get the content of webpage using get()

```diff
+ waybackpy.get(url, encoding="UTF-8", UA=user_agent)
```

> url is mandatory. UA is not, but highly recommended. encoding is detected automatically, don't specify unless necessary.

```python
-from waybackpy import get
+import waybackpy
# retrieving the webpage from any url including the archived urls. No need to import other libraries :)
# Default user-agent (UA) is "waybackpy python package", if not specified in the call.
-# supported arguments are url, encoding and UA
-webpage = get("https://example.com/", UA="User-Agent")
+# supported arguments are encoding and user_agent
+target = waybackpy.Url("google.com", "any-user_agent")
+oldest_url = target.oldest()
+webpage = target.get(oldest_url) # We are getting the source of the oldest archive of google.com.
print(webpage)
```

-> This should print the source code for <https://example.com/>.
+> This should print the source code of the oldest archive of google.com. If no URL is passed to get(), it retrieves the source code of google.com itself, not any archive.
#### Count total archives for an URL using total_archives()

```diff
+ waybackpy.total_archives(url, UA=user_agent)
```

> url is mandatory. UA is not, but highly recommended.

```python
-from waybackpy import total_archives
-# counting the total number of archives for a webpage.
-# Default user-agent (UA) is "waybackpy python package", if not specified in the call.
-# supported arguments are url and UA
-count = total_archives("https://en.wikipedia.org/wiki/Python (programming language)", UA="User-Agent")
+from waybackpy import Url
+# counting the total number of archives for a webpage.
+count = Url("https://en.wikipedia.org/wiki/Python (programming language)", "User-Agent").total_archives()
print(count)
```

> This should print an integer (int), which is the number of total archives on archive.org
## Tests

* [Here](https://github.com/akamhy/waybackpy/tree/master/tests)

## Dependency

-* None, just python standard libraries (json, urllib and datetime). Both python 2 and 3 are supported :)
+* None, just python standard libraries (re, json, urllib and datetime). Both python 2 and 3 are supported :)

## License

[MIT License](https://github.com/akamhy/waybackpy/blob/master/LICENSE)
index.rst (191 changed lines)
@@ -3,9 +3,9 @@ waybackpy

|Build Status| |Downloads| |Release| |Codacy Badge| |License: MIT|
-|Maintainability| |CodeFactor| |made-with-python| |pypi| |PyPI - Python Version| |Maintenance|
+|Maintainability| |CodeFactor| |made-with-python| |pypi| |PyPI - Python Version| |Maintenance| |codecov| |image1| |contributions welcome|

-.. |Build Status| image:: https://travis-ci.org/akamhy/waybackpy.svg?branch=master
+.. |Build Status| image:: https://img.shields.io/travis/akamhy/waybackpy.svg?label=Travis%20CI&logo=travis&style=flat-square
   :target: https://travis-ci.org/akamhy/waybackpy
.. |Downloads| image:: https://img.shields.io/pypi/dm/waybackpy.svg
   :target: https://pypistats.org/packages/waybackpy

@@ -25,11 +25,17 @@ Version| |Maintenance|

.. |PyPI - Python Version| image:: https://img.shields.io/pypi/pyversions/waybackpy?style=flat-square
.. |Maintenance| image:: https://img.shields.io/badge/Maintained%3F-yes-green.svg
   :target: https://github.com/akamhy/waybackpy/graphs/commit-activity

+.. |codecov| image:: https://codecov.io/gh/akamhy/waybackpy/branch/master/graph/badge.svg
+   :target: https://codecov.io/gh/akamhy/waybackpy
+.. |image1| image:: https://img.shields.io/github/repo-size/akamhy/waybackpy.svg?label=Repo%20size&style=flat-square
+.. |contributions welcome| image:: https://img.shields.io/static/v1.svg?label=Contributions&message=Welcome&color=0059b3&style=flat-square

|Internet Archive| |Wayback Machine|

-The waybackpy is a python wrapper for `Internet Archive`_\ ’s `Wayback Machine`_.
+Waybackpy is a Python library that interfaces with the `Internet Archive`_\ ’s `Wayback Machine`_ API. Archive pages and retrieve archived pages easily.

.. _Internet Archive: https://en.wikipedia.org/wiki/Internet_Archive
.. _Wayback Machine: https://en.wikipedia.org/wiki/Wayback_Machine

@@ -37,127 +43,130 @@ Machine`_.

.. |Internet Archive| image:: https://upload.wikimedia.org/wikipedia/commons/thumb/8/84/Internet_Archive_logo_and_wordmark.svg/84px-Internet_Archive_logo_and_wordmark.svg.png
.. |Wayback Machine| image:: https://upload.wikimedia.org/wikipedia/commons/thumb/0/01/Wayback_Machine_logo_2010.svg/284px-Wayback_Machine_logo_2010.svg.png
Table of contents
=================

.. raw:: html

   <!--ts-->

- `Installation`_

- `Usage`_

   - `Saving an url using save()`_
   - `Receiving the oldest archive for an URL Using oldest()`_
   - `Receiving the recent most/newest archive for an URL using newest()`_
   - `Receiving archive close to a specified year, month, day, hour, and minute using near()`_
   - `Get the content of webpage using get()`_
   - `Count total archives for an URL using total_archives()`_

- `Tests`_

- `Dependency`_

- `License`_

.. raw:: html

   <!--te-->

.. _Installation: #installation
.. _Usage: #usage
.. _Saving an url using save(): #capturing-aka-saving-an-url-using-save
.. _Receiving the oldest archive for an URL Using oldest(): #receiving-the-oldest-archive-for-an-url-using-oldest
.. _Receiving the recent most/newest archive for an URL using newest(): #receiving-the-newest-archive-for-an-url-using-newest
.. _Receiving archive close to a specified year, month, day, hour, and minute using near(): #receiving-archive-close-to-a-specified-year-month-day-hour-and-minute-using-near
.. _Get the content of webpage using get(): #get-the-content-of-webpage-using-get
.. _Count total archives for an URL using total_archives(): #count-total-archives-for-an-url-using-total_archives
.. _Tests: #tests
.. _Dependency: #dependency
.. _License: #license
Installation
------------

Using `pip`_:

-**pip install waybackpy**
+.. code:: bash

+   pip install waybackpy

-.. _pip: https://en.wikipedia.org/wiki/Pip_(package_manager)

Usage
-----

-Archiving aka Saving an url Using save()
+Capturing aka Saving an url Using save()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code:: diff

   + waybackpy.save(url, UA=user_agent)

..

   url is mandatory. UA is not, but highly recommended.

.. code:: python

   import waybackpy
   # Capturing a new archive on Wayback machine.
   # Default user-agent (UA) is "waybackpy python package", if not specified in the call.
-   archived_url = waybackpy.save("https://github.com/akamhy/waybackpy", UA = "Any-User-Agent")
+   target_url = waybackpy.Url("https://github.com/akamhy/waybackpy", user_agent="My-cool-user-agent")
+   archived_url = target_url.save()
   print(archived_url)

-This should print something similar to the following archived URL:
+This should print an URL similar to the following archived URL:

https://web.archive.org/web/20200504141153/https://github.com/akamhy/waybackpy

+.. _pip: https://en.wikipedia.org/wiki/Pip_(package_manager)
Receiving the oldest archive for an URL Using oldest()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code:: diff

   + waybackpy.oldest(url, UA=user_agent)

..

   url is mandatory. UA is not, but highly recommended.

.. code:: python

   import waybackpy
   # retrieving the oldest archive on Wayback machine.
   # Default user-agent (UA) is "waybackpy python package", if not specified in the call.
-   oldest_archive = waybackpy.oldest("https://www.google.com/", UA = "Any-User-Agent")
+   target_url = waybackpy.Url("https://www.google.com/", "My-cool-user-agent")
+   oldest_archive = target_url.oldest()
   print(oldest_archive)

-This returns the oldest available archive for https://google.com.
+This should print the oldest available archive for https://google.com.

http://web.archive.org/web/19981111184551/http://google.com:80/
Receiving the newest archive for an URL using newest()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code:: diff

   + waybackpy.newest(url, UA=user_agent)

..

   url is mandatory. UA is not, but highly recommended.

.. code:: python

   import waybackpy
-   # retrieving the newest archive on Wayback machine.
-   # Default user-agent (UA) is "waybackpy python package", if not specified in the call.
-   newest_archive = waybackpy.newest("https://www.microsoft.com/en-us", UA = "Any-User-Agent")
+   # retrieving the newest/latest archive on Wayback machine.
+   # Default user-agent (UA) is "waybackpy python package", if not specified in the call.
+   target_url = waybackpy.Url(url="https://www.google.com/", user_agent="My-cool-user-agent")
+   newest_archive = target_url.newest()
   print(newest_archive)

-This returns the newest available archive for https://www.microsoft.com/en-us, something just like this:
+This prints the newest available archive for https://www.microsoft.com/en-us, something just like this:

http://web.archive.org/web/20200429033402/https://www.microsoft.com/en-us/
Receiving archive close to a specified year, month, day, hour, and minute using near()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code:: diff

   + waybackpy.near(url, year=2020, month=1, day=1, hour=1, minute=1, UA=user_agent)

..

   url is mandatory. year, month, day, hour and minute are optional arguments. UA is not mandatory, but highly recommended.

.. code:: python

   import waybackpy
   # retrieving the closest archive from a specified year.
   # Default user-agent (UA) is "waybackpy python package", if not specified in the call.
   # supported arguments are year, month, day, hour and minute
-   archive_near_year = waybackpy.near("https://www.facebook.com/", year=2010, UA ="Any-User-Agent")
+   target_url = waybackpy.Url("https://www.facebook.com/", "Any-User-Agent")
+   archive_near_year = target_url.near(year=2010)
   print(archive_near_year)

returns: http://web.archive.org/web/20100504071154/http://www.facebook.com/

``waybackpy.near("https://www.facebook.com/", year=2010, month=1, UA ="Any-User-Agent")`` returns: http://web.archive.org/web/20101111173430/http://www.facebook.com//

+Please note that if you only specify the year, the current month and day are the default arguments for month and day respectively. Just passing the year will not return the archive closest to January but the one closest to the current month you are using the package in. You need to specify month 1 for January, 2 for February and so on.

``waybackpy.near("https://www.oracle.com/index.html", year=2019, month=1, day=5, UA ="Any-User-Agent")`` returns: http://web.archive.org/web/20190105054437/https://www.oracle.com/index.html

-   Please note that if you only specify the year, the current month and day are the default arguments for month and day respectively. Do not expect that just passing the year will return the archive closest to January; it returns the archive closest to the current month of whatever date you run the package on. If you are using it in July 2018 and you call ``waybackpy.near("https://www.facebook.com/", year=2011, UA ="Any-User-Agent")``, you are returned the archive nearest to July 2011, not January 2011. You need to specify the month "1" for January.

..

   Do not pad (don't use zeros in the month, year, day, minute, and hour arguments). e.g. For January, set month = 1 and not month = 01.
@@ -165,46 +174,30 @@ January 2011. You need to specify the month "1" for January.

Get the content of webpage using get()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code:: diff

   + waybackpy.get(url, encoding="UTF-8", UA=user_agent)

..

   url is mandatory. UA is not, but highly recommended. encoding is detected automatically, don't specify unless necessary.

.. code:: python

-   from waybackpy import get
+   import waybackpy
   # retrieving the webpage from any url including the archived urls. No need to import other libraries :)
   # Default user-agent (UA) is "waybackpy python package", if not specified in the call.
-   # supported arguments are url, encoding and UA
-   webpage = get("https://example.com/", UA="User-Agent")
+   # supported arguments are encoding and user_agent
+   target = waybackpy.Url("google.com", "any-user_agent")
+   oldest_url = target.oldest()
+   webpage = target.get(oldest_url) # We are getting the source of the oldest archive of google.com.
   print(webpage)

..

-   This should print the source code for https://example.com/.
+   This should print the source code of the oldest archive of google.com. If no URL is passed to get(), it retrieves the source code of google.com itself, not any archive.

Count total archives for an URL using total_archives()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code:: diff

   + waybackpy.total_archives(url, UA=user_agent)

..

   url is mandatory. UA is not, but highly recommended.

.. code:: python

-   from waybackpy import total_archives
-   # counting the total number of archives for a webpage.
-   # Default user-agent (UA) is "waybackpy python package", if not specified in the call.
-   # supported arguments are url and UA
-   count = total_archives("https://en.wikipedia.org/wiki/Python (programming language)", UA="User-Agent")
+   from waybackpy import Url
+   # counting the total number of archives for a webpage.
+   count = Url("https://en.wikipedia.org/wiki/Python (programming language)", "User-Agent").total_archives()
   print(count)

..
@@ -220,7 +213,7 @@ Tests

Dependency
----------

-- None, just python standard libraries (json, urllib and datetime). Both python 2 and 3 are supported :)
+- None, just python standard libraries (re, json, urllib and datetime). Both python 2 and 3 are supported :)

License
setup.py (2 changed lines)
@@ -19,7 +19,7 @@ setup(
    author = about['__author__'],
    author_email = about['__author_email__'],
    url = about['__url__'],
-    download_url = 'https://github.com/akamhy/waybackpy/archive/v1.5.tar.gz',
+    download_url = 'https://github.com/akamhy/waybackpy/archive/v1.6.tar.gz',
    keywords = ['wayback', 'archive', 'archive website', 'wayback machine', 'Internet Archive'],
    install_requires=[],
    python_requires= ">=2.7",
@@ -1,98 +1,116 @@
# -*- coding: utf-8 -*-
import sys
sys.path.append("..")
import waybackpy
import pytest
+import random

user_agent = "Mozilla/5.0 (Windows NT 6.2; rv:20.0) Gecko/20121202 Firefox/20.0"

def test_clean_url():
    test_url = " https://en.wikipedia.org/wiki/Network security "
    answer = "https://en.wikipedia.org/wiki/Network_security"
-    test_result = waybackpy.clean_url(test_url)
+    target = waybackpy.Url(test_url, user_agent)
+    test_result = target.clean_url()
    assert answer == test_result

def test_url_check():
-    InvalidUrl = "http://wwwgooglecom/"
+    broken_url = "http://wwwgooglecom/"
    with pytest.raises(Exception) as e_info:
-        waybackpy.url_check(InvalidUrl)
+        waybackpy.Url(broken_url, user_agent)

def test_save():
    # Test for urls that exist and can be archived.
-    url1 = "https://github.com/akamhy/waybackpy"
-    archived_url1 = waybackpy.save(url1, UA=user_agent)
+    url_list = [
+        "en.wikipedia.org",
+        "www.wikidata.org",
+        "commons.wikimedia.org",
+        "www.wiktionary.org",
+        "www.w3schools.com",
+        "www.youtube.com"
+    ]
+    x = random.randint(0, len(url_list)-1)
+    url1 = url_list[x]
+    target = waybackpy.Url(url1, "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1944.0 Safari/537.36")
+    archived_url1 = target.save()
    assert url1 in archived_url1

    # Test for urls that are incorrect.
    with pytest.raises(Exception) as e_info:
        url2 = "ha ha ha ha"
-        waybackpy.save(url2, UA=user_agent)
+        waybackpy.Url(url2, user_agent)

    # Test for urls not allowed to be archived by robots.txt.
    with pytest.raises(Exception) as e_info:
        url3 = "http://www.archive.is/faq.html"
-        waybackpy.save(url3, UA=user_agent)
+        target = waybackpy.Url(url3, "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:25.0) Gecko/20100101 Firefox/25.0")
+        target.save()

    # Non existent urls, test
    with pytest.raises(Exception) as e_info:
        url4 = "https://githfgdhshajagjstgeths537agajaajgsagudadhuss8762346887adsiugujsdgahub.us"
-        archived_url4 = waybackpy.save(url4, UA=user_agent)
+        target = waybackpy.Url(url3, "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US) AppleWebKit/533.20.25 (KHTML, like Gecko) Version/5.0.4 Safari/533.20.27")
+        target.save()

def test_near():
    url = "google.com"
-    archive_near_year = waybackpy.near(url, year=2010, UA=user_agent)
+    target = waybackpy.Url(url, "Mozilla/5.0 (Windows; U; Windows NT 6.0; de-DE) AppleWebKit/533.20.25 (KHTML, like Gecko) Version/5.0.3 Safari/533.19.4")
+    archive_near_year = target.near(year=2010)
    assert "2010" in archive_near_year

-    archive_near_month_year = waybackpy.near(url, year=2015, month=2, UA=user_agent)
+    archive_near_month_year = target.near(year=2015, month=2)
    assert ("201502" in archive_near_month_year) or ("201501" in archive_near_month_year) or ("201503" in archive_near_month_year)

-    archive_near_day_month_year = waybackpy.near(url, year=2006, month=11, day=15, UA=user_agent)
+    archive_near_day_month_year = target.near(year=2006, month=11, day=15)
    assert ("20061114" in archive_near_day_month_year) or ("20061115" in archive_near_day_month_year) or ("20061116" in archive_near_day_month_year)

-    archive_near_hour_day_month_year = waybackpy.near("www.python.org", year=2008, month=5, day=9, hour=15, UA=user_agent)
+    target = waybackpy.Url("www.python.org", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36 Edge/12.246")
+    archive_near_hour_day_month_year = target.near(year=2008, month=5, day=9, hour=15)
    assert ("2008050915" in archive_near_hour_day_month_year) or ("2008050914" in archive_near_hour_day_month_year) or ("2008050913" in archive_near_hour_day_month_year)

    with pytest.raises(Exception) as e_info:
        NeverArchivedUrl = "https://ee_3n.wrihkeipef4edia.org/rwti5r_ki/Nertr6w_rork_rse7c_urity"
-        waybackpy.near(NeverArchivedUrl, year=2010, UA=user_agent)
+        target = waybackpy.Url(NeverArchivedUrl, user_agent)
+        target.near(year=2010)

def test_oldest():
    url = "github.com/akamhy/waybackpy"
-    archive_oldest = waybackpy.oldest(url, UA=user_agent)
-    assert "20200504141153" in archive_oldest
+    target = waybackpy.Url(url, user_agent)
+    assert "20200504141153" in target.oldest()

def test_newest():
    url = "github.com/akamhy/waybackpy"
-    archive_newest = waybackpy.newest(url, UA=user_agent)
-    assert url in archive_newest
+    target = waybackpy.Url(url, user_agent)
+    assert url in target.newest()

def test_get():
-    oldest_google_archive = waybackpy.oldest("google.com", UA=user_agent)
-    oldest_google_page_text = waybackpy.get(oldest_google_archive, UA=user_agent)
-    assert "Welcome to Google" in oldest_google_page_text
+    target = waybackpy.Url("google.com", user_agent)
+    assert "Welcome to Google" in target.get(target.oldest())

def test_total_archives():
-    count1 = waybackpy.total_archives("https://en.wikipedia.org/wiki/Python (programming language)", UA=user_agent)
-    assert count1 > 2000
+    target = waybackpy.Url(" https://google.com ", user_agent)
+    assert target.total_archives() > 500000

-    count2 = waybackpy.total_archives("https://gaha.e4i3n.m5iai3kip6ied.cima/gahh2718gs/ahkst63t7gad8", UA=user_agent)
-    assert count2 == 0
+    target = waybackpy.Url(" https://gaha.e4i3n.m5iai3kip6ied.cima/gahh2718gs/ahkst63t7gad8 ", user_agent)
+    assert target.total_archives() == 0

if __name__ == "__main__":
    test_clean_url()
-    print(".")
+    print(".") #1
    test_url_check()
-    print(".")
+    print(".") #2
    test_get()
-    print(".")
+    print(".") #3
    test_near()
-    print(".")
+    print(".") #4
    test_newest()
-    print(".")
+    print(".") #5
    test_save()
-    print(".")
+    print(".") #6
    test_oldest()
-    print(".")
+    print(".") #7
    test_total_archives()
-    print(".")
+    print(".") #8
    print("OK")
@@ -10,13 +10,15 @@
# ━━━━━━━━━━━┗━━┛━━━━━━━━━━━━━━━━━━━━━━━━┗━━┛━

"""
-A python wrapper for Internet Archive's Wayback Machine API.
+Waybackpy is a Python library that interfaces with the Internet Archive's Wayback Machine API.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Archive pages and retrieve archived pages easily.

Usage:
>>> import waybackpy
->>> new_archive = waybackpy.save('https://www.python.org')
+>>> target_url = waybackpy.Url('https://www.python.org', 'Your-apps-cool-user-agent')
+>>> new_archive = target_url.save()
>>> print(new_archive)
https://web.archive.org/web/20200502170312/https://www.python.org/

@@ -25,6 +27,6 @@ Full documentation @ <https://akamhy.github.io/waybackpy/>.
:license: MIT
"""

-from .wrapper import save, near, oldest, newest, get, clean_url, url_check, total_archives
+from .wrapper import Url
from .__version__ import __title__, __description__, __url__, __version__
from .__version__ import __author__, __author_email__, __license__, __copyright__
@@ -1,7 +1,9 @@
# -*- coding: utf-8 -*-

__title__ = "waybackpy"
-__description__ = "A python wrapper for Internet Archive's Wayback Machine API. Archive pages and retrieve archived pages easily."
+__description__ = "A Python library that interfaces with the Internet Archive's Wayback Machine API. Archive pages and retrieve archived pages easily."
__url__ = "https://akamhy.github.io/waybackpy/"
-__version__ = "v1.5"
+__version__ = "2.0.0"
__author__ = "akamhy"
__author_email__ = "akash3pro@gmail.com"
__license__ = "MIT"
@@ -1,43 +1,6 @@
# -*- coding: utf-8 -*-

-class TooManyArchivingRequests(Exception):
-    """Error when a single url is requested for archiving too many times in a short timespan.
-    Wayback machine doesn't support archiving any url too many times in a short period of time.
-    """

-class ArchivingNotAllowed(Exception):
-    """Files like robots.txt are set to deny robot archiving.
-    Wayback machine respects these files, will not archive.
-    """

-class PageNotSaved(Exception):
-    """
-    When unable to save a webpage.
-    """

-class ArchiveNotFound(Exception):
-    """
-    When a page was never archived but the client asks for an old archive.
-    """

-class UrlNotFound(Exception):
-    """
-    Raised when 404 UrlNotFound.
-    """

-class BadGateWay(Exception):
-    """
-    Raised when 502 bad gateway.
-    """

-class WaybackUnavailable(Exception):
-    """
-    Raised when 503 API Service Temporarily Unavailable.
-    """

-class InvalidUrl(Exception):
-    """
-    Raised when url doesn't follow the standard url format.
-    """
+class WaybackError(Exception):
+    """
+    Raised on Wayback Machine API service errors.
+    """
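With the specific exception classes collapsed into a single class, callers that previously caught, say, TooManyArchivingRequests or BadGateWay would now catch WaybackError. A minimal sketch of the new-style handling (URL and user agent are placeholders):

```python
import waybackpy
from waybackpy.exceptions import WaybackError

target = waybackpy.Url("https://example.com", "my-user-agent")
try:
    print(target.save())
except WaybackError as error:
    # Any Wayback Machine API failure now surfaces as WaybackError.
    print("Archiving failed:", error)
```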
@@ -1,143 +1,153 @@
# -*- coding: utf-8 -*-

+import re
+import sys
import json
from datetime import datetime
-from waybackpy.exceptions import TooManyArchivingRequests, ArchivingNotAllowed, PageNotSaved, ArchiveNotFound, UrlNotFound, BadGateWay, InvalidUrl, WaybackUnavailable
-try:
+from waybackpy.exceptions import WaybackError
+if sys.version_info >= (3, 0):  # If the python ver >= 3
    from urllib.request import Request, urlopen
    from urllib.error import HTTPError, URLError
-except ImportError:
+else:  # For python2.x
    from urllib2 import Request, urlopen, HTTPError, URLError

-default_UA = "waybackpy python package - https://github.com/akamhy/waybackpy"
+default_UA = "waybackpy python package"

-def url_check(url):
-    if "." not in url:
-        raise InvalidUrl("'%s' is not a valid url." % url)

-def clean_url(url):
-    return str(url).strip().replace(" ","_")

-def wayback_timestamp(**kwargs):
-    return (
-        str(kwargs["year"])
-        +
-        str(kwargs["month"]).zfill(2)
-        +
-        str(kwargs["day"]).zfill(2)
-        +
-        str(kwargs["hour"]).zfill(2)
-        +
-        str(kwargs["minute"]).zfill(2)
-    )

-def handle_HTTPError(e):
-    if e.code == 502:
-        raise BadGateWay(e)
-    elif e.code == 503:
-        raise WaybackUnavailable(e)
-    elif e.code == 429:
-        raise TooManyArchivingRequests(e)
-    elif e.code == 404:
-        raise UrlNotFound(e)

-def save(url, UA=default_UA):
-    url_check(url)
-    request_url = ("https://web.archive.org/save/" + clean_url(url))
-    hdr = { 'User-Agent' : '%s' % UA } #nosec
-    req = Request(request_url, headers=hdr) #nosec
-    try:
-        response = urlopen(req) #nosec
-    except HTTPError as e:
-        if handle_HTTPError(e) is None:
-            raise PageNotSaved(e)
-    except URLError:
-        try:
-            response = urlopen(req) #nosec
-        except URLError as e:
-            raise UrlNotFound(e)

+class Url():
+    """waybackpy Url object"""

+    def __init__(self, url, user_agent=default_UA):
+        self.url = url
+        self.user_agent = user_agent
+        self.url_check()  # checks url validity on init.

+    def __repr__(self):
+        """Representation of the object."""
+        return "waybackpy.Url(url=%s, user_agent=%s)" % (self.url, self.user_agent)

+    def __str__(self):
+        """String representation of the object."""
+        return "%s" % self.clean_url()

+    def __len__(self):
+        """Length of the URL."""
+        return len(self.clean_url())

+    def url_check(self):
+        """Check for common URL problems."""
+        if "." not in self.url:
+            raise URLError("'%s' is not a valid url." % self.url)
+        return True

+    def clean_url(self):
+        """Fix the URL, if possible."""
+        return str(self.url).strip().replace(" ","_")

+    def wayback_timestamp(self, **kwargs):
+        """Return the formatted timestamp."""
+        return (
+            str(kwargs["year"])
+            +
+            str(kwargs["month"]).zfill(2)
+            +
+            str(kwargs["day"]).zfill(2)
+            +
+            str(kwargs["hour"]).zfill(2)
+            +
+            str(kwargs["minute"]).zfill(2)
+        )
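The timestamp helper concatenates the zero-padded components into the 12-digit prefix that the availability API expects. A quick standalone illustration (not part of the change itself):

```python
# wayback_timestamp(year=2020, month=1, day=2, hour=3, minute=4)
# builds the string below: year, then zero-padded month/day/hour/minute.
ts = str(2020) + str(1).zfill(2) + str(2).zfill(2) + str(3).zfill(2) + str(4).zfill(2)
assert ts == "202001020304"
```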
+    def handle_HTTPError(self, e):
+        """Handle some common HTTPErrors."""
+        if e.code == 404:
+            raise HTTPError(e)
+        if e.code >= 400:
+            raise WaybackError(e)

+    def save(self):
+        """Create a new archive for an URL on the Wayback Machine."""
+        request_url = ("https://web.archive.org/save/" + self.clean_url())
+        hdr = { 'User-Agent' : '%s' % self.user_agent } #nosec
+        req = Request(request_url, headers=hdr) #nosec
+        try:
+            response = urlopen(req, timeout=30) #nosec
+        except Exception:
+            try:
+                response = urlopen(req, timeout=300) #nosec
+            except Exception as e:
+                raise WaybackError(e)
+        header = response.headers
+        try:
+            arch = re.search(r"rel=\"memento.*?web\.archive\.org(/web/[0-9]{14}/.*?)>", str(header)).group(1)
+        except KeyError as e:
+            raise WaybackError(e)
+        return "https://web.archive.org" + arch
-    header = response.headers

-    if "exclusion.robots.policy" in str(header):
-        raise ArchivingNotAllowed("Can not archive %s. Disabled by site owner." % (url))

-    return "https://web.archive.org" + header['Content-Location']

-def get(url, encoding=None, UA=default_UA):
-    url_check(url)
-    hdr = { 'User-Agent' : '%s' % UA }
-    req = Request(clean_url(url), headers=hdr) #nosec
-    try:
-        resp=urlopen(req) #nosec
-    except URLError:
-        try:
-            resp=urlopen(req) #nosec
-        except URLError as e:
-            raise UrlNotFound(e)
-    if encoding is None:
-        try:
-            encoding= resp.headers['content-type'].split('charset=')[-1]
-        except AttributeError:
-            encoding = "UTF-8"
-    return resp.read().decode(encoding.replace("text/html", "UTF-8", 1))

+    def get(self, url=None, user_agent=None, encoding=None):
+        """Return the source code of the supplied URL. Auto detects the encoding if not supplied."""
+        if not url:
+            url = self.clean_url()
+        if not user_agent:
+            user_agent = self.user_agent
+        hdr = { 'User-Agent' : '%s' % user_agent }
+        req = Request(url, headers=hdr) #nosec
+        try:
+            resp=urlopen(req) #nosec
+        except URLError:
+            try:
+                resp=urlopen(req) #nosec
+            except URLError as e:
+                raise HTTPError(e)
+        if not encoding:
+            try:
+                encoding= resp.headers['content-type'].split('charset=')[-1]
+            except AttributeError:
+                encoding = "UTF-8"
+        return resp.read().decode(encoding.replace("text/html", "UTF-8", 1))
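A quick illustration of the charset detection above, and of why the decode call replaces "text/html" (when no charset parameter is present, the split returns the whole header value):

```python
# Header with an explicit charset: the split yields the encoding name.
assert "text/html; charset=ISO-8859-1".split('charset=')[-1] == "ISO-8859-1"

# Header without a charset: the split returns the header unchanged,
# which the decode call then maps to UTF-8 via str.replace.
assert "text/html".split('charset=')[-1] == "text/html"
assert "text/html".replace("text/html", "UTF-8", 1) == "UTF-8"
```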
+    def near(self, **kwargs):
+        """Return the archive from Wayback Machine for an URL closest to the time supplied.
+        Supported params are year, month, day, hour and minute.
+        The parameters not supplied default to the current runtime time.
+        """
+        year=kwargs.get("year", datetime.utcnow().strftime('%Y'))
+        month=kwargs.get("month", datetime.utcnow().strftime('%m'))
+        day=kwargs.get("day", datetime.utcnow().strftime('%d'))
+        hour=kwargs.get("hour", datetime.utcnow().strftime('%H'))
+        minute=kwargs.get("minute", datetime.utcnow().strftime('%M'))
+        timestamp = self.wayback_timestamp(year=year,month=month,day=day,hour=hour,minute=minute)
+        request_url = "https://archive.org/wayback/available?url=%s&timestamp=%s" % (self.clean_url(), str(timestamp))
+        hdr = { 'User-Agent' : '%s' % self.user_agent }
+        req = Request(request_url, headers=hdr) # nosec
+        try:
+            response = urlopen(req) #nosec
+        except Exception as e:
+            self.handle_HTTPError(e)
+        data = json.loads(response.read().decode("UTF-8"))
+        if not data["archived_snapshots"]:
+            raise WaybackError("'%s' is not yet archived." % self.url)
+        archive_url = (data["archived_snapshots"]["closest"]["url"])
+        # The wayback machine sometimes returns http URLs; https is supported, so upgrade the scheme.
+        archive_url = archive_url.replace("http://web.archive.org/web/","https://web.archive.org/web/",1)
+        return archive_url
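near() is backed by the Wayback availability endpoint queried above. A standalone sketch of the same request (URL and timestamp are example values, and the response shape follows the lookups in the code):

```python
import json
from urllib.request import Request, urlopen

request_url = ("https://archive.org/wayback/available"
               "?url=google.com&timestamp=201001010101")
req = Request(request_url, headers={'User-Agent': "waybackpy python package"})
data = json.loads(urlopen(req).read().decode("UTF-8"))
# When a snapshot exists, its URL sits at:
# data["archived_snapshots"]["closest"]["url"]
print(data.get("archived_snapshots"))
```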
+    def oldest(self, year=1994):
+        """Return the oldest archive from Wayback Machine for an URL."""
+        return self.near(year=year)

+    def newest(self):
+        """Return the newest archive on Wayback Machine for an URL; sometimes you may not get the newest archive because of Wayback Machine DB lag."""
+        return self.near()

-def near(url, **kwargs):
-    try:
-        url = kwargs["url"]
-    except KeyError:
-        url = url
-    year=kwargs.get("year", datetime.utcnow().strftime('%Y'))
-    month=kwargs.get("month", datetime.utcnow().strftime('%m'))
-    day=kwargs.get("day", datetime.utcnow().strftime('%d'))
-    hour=kwargs.get("hour", datetime.utcnow().strftime('%H'))
-    minute=kwargs.get("minute", datetime.utcnow().strftime('%M'))
-    UA=kwargs.get("UA", default_UA)

-    url_check(url)
-    timestamp = wayback_timestamp(year=year,month=month,day=day,hour=hour,minute=minute)
-    request_url = "https://archive.org/wayback/available?url=%s&timestamp=%s" % (clean_url(url), str(timestamp))
-    hdr = { 'User-Agent' : '%s' % UA }
-    req = Request(request_url, headers=hdr) # nosec

-    try:
-        response = urlopen(req) #nosec
-    except HTTPError as e:
-        handle_HTTPError(e)

-    data = json.loads(response.read().decode("UTF-8"))
-    if not data["archived_snapshots"]:
-        raise ArchiveNotFound("'%s' is not yet archived." % url)

-    archive_url = (data["archived_snapshots"]["closest"]["url"])
-    # The wayback machine sometimes returns http URLs; https is supported, so upgrade the scheme.
-    archive_url = archive_url.replace("http://web.archive.org/web/","https://web.archive.org/web/",1)
-    return archive_url

-def oldest(url, UA=default_UA, year=1994):
-    return near(url, year=year, UA=UA)

-def newest(url, UA=default_UA):
-    return near(url, UA=UA)
-def total_archives(url, UA=default_UA):
-    url_check(url)
-    hdr = { 'User-Agent' : '%s' % UA }
-    request_url = "https://web.archive.org/cdx/search/cdx?url=%s&output=json" % clean_url(url)
-    req = Request(request_url, headers=hdr) # nosec

-    try:
-        response = urlopen(req) #nosec
-    except HTTPError as e:
-        handle_HTTPError(e)

-    return (len(json.loads(response.read())))

+    def total_archives(self):
+        """Return the total number of archives on Wayback Machine for an URL."""
+        hdr = { 'User-Agent' : '%s' % self.user_agent }
+        request_url = "https://web.archive.org/cdx/search/cdx?url=%s&output=json&fl=statuscode" % self.clean_url()
+        req = Request(request_url, headers=hdr) # nosec
+        try:
+            response = urlopen(req) #nosec
+        except Exception as e:
+            self.handle_HTTPError(e)
+        return str(response.read()).count(",") # Most efficient method to count number of archives (yet)
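The comma count works because, with fl=statuscode requested, every CDX row carries a single field, so commas appear only between rows, and the header row offsets the fencepost. A toy illustration of the assumed response shape:

```python
# Assumed shape of the CDX JSON for two snapshots (plus the header row):
two_snapshots = '[["statuscode"], ["200"], ["404"]]'
assert two_snapshots.count(",") == 2  # one comma per snapshot row

# A never-archived URL returns an empty array.
assert "[]".count(",") == 0
```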