Commit 8364c3c

Merge pull request #123 from scrapinghub/more-docs
[docs] minor README.rst cleanup
2 parents: a868966 + fdc1948

4 files changed: 27 additions, 6 deletions
.bumpversion.cfg

Lines changed: 1 addition & 2 deletions
@@ -1,8 +1,7 @@
 [bumpversion]
-current_version = 0.11.0
+current_version = 0.12.0
 commit = True
 tag = True
 tag_name = {new_version}
 
 [bumpversion:file:scrapyrt/VERSION]
-

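For context on the version bump above: this file configures the bumpversion tool, and a release bump like 0.11.0 -> 0.12.0 is typically produced with a single command. A minimal sketch, assuming the standard bumpversion CLI (the exact command used for this release is not recorded in the commit):

    # Bumps the minor part of current_version in .bumpversion.cfg and in every
    # configured file section (here scrapyrt/VERSION), then commits and creates
    # a git tag named after the new version (tag_name = {new_version}).
    bumpversion minor    # 0.11.0 -> 0.12.0, tag "0.12.0"
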
README.rst

Lines changed: 3 additions & 3 deletions
@@ -24,7 +24,7 @@ Scrapyrt (Scrapy realtime)
 Introduction
 ============
 
-HTTP server which provides API for scheduling Scrapy spiders and
+HTTP server which provides API for scheduling `Scrapy <https://scrapy.org/>`_ spiders and
 making requests with spiders.
 
 Features
@@ -56,9 +56,9 @@ and will raise error if it won't find one. Note that you need to have all
 your project requirements installed.
 
 Scrapyrt supports endpoint ``/crawl.json`` that can be requested
-with two methods.
+with two methods: GET and POST.
 
-To run sample `toscrape-css spider`_ from `Scrapy educational quotesbot project`_
+To run sample toscrape-css spider from `Quotesbot <https://github.com/scrapy/quotesbot>`_
 parsing page about famous quotes::
 
     curl "http://localhost:9080/crawl.json?spider_name=toscrape-css&url=http://quotes.toscrape.com/"

docs/source/api.rst

Lines changed: 22 additions & 0 deletions
@@ -130,6 +130,16 @@ start_requests
 behavior. If this argument is present API will execute start_requests
 Spider method.
 
+crawl_args
+    - type: urlencoded JSON string
+    - optional
+
+Optional arguments for the spider. This is the same as the -a argument
+used when running a spider from the command line, for example if you run
+a spider like this: "scrapy crawl spider -a zipcode=14100" you can
+send crawl_args={"zipcode":"14100"} (urlencoded: crawl_args=%7B%22zipcode%22%3A%2214100%22%7D)
+and the spider will get the zipcode argument.
+
 If required parameters are missing api will return 400 Bad Request
 with hopefully helpful error message.
 
@@ -558,6 +568,18 @@ But if you still want to save all stdout to some file - you can create custom
 approach described in `Python Logging HOWTO`_ or redirect stdout to a file using
 `bash redirection syntax`_, `supervisord logging`_ etc.
 
+Releases
+========
+ScrapyRT 0.12 (2021-03-08)
+--------------------------
+- added crawl arguments for API
+- removed Python 2 support
+- added Python 3.9 support
+- docs clean up
+- removed superfluous requirements (demjson, six)
+- fixed API crash when spider returns bytes in items output
+- updated unit tests
+- development improvements, moved from Travis to Github Workflows
 
 .. _toscrape-css spider: https://github.com/scrapy/quotesbot/blob/master/quotesbot/spiders/toscrape-css.py
 .. _Scrapy educational quotesbot project: https://github.com/scrapy/quotesbot
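
The crawl_args parameter added above takes a urlencoded JSON object; curl can do the encoding itself, which avoids hand-writing %7B...%7D escapes. A minimal sketch (zipcode is the hypothetical spider argument from the diff, not one the toscrape-css spider actually accepts):

    # -G turns --data-urlencode pairs into GET query parameters,
    # encoding {"zipcode":"14100"} to %7B%22zipcode%22%3A%2214100%22%7D.
    curl -G "http://localhost:9080/crawl.json" \
         --data-urlencode 'spider_name=toscrape-css' \
         --data-urlencode 'url=http://quotes.toscrape.com/' \
         --data-urlencode 'crawl_args={"zipcode":"14100"}'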

scrapyrt/VERSION

Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
-0.11.0
+0.12.0
