
Commit 92900c1

Author: Madison Bahmer

Merge pull request #167 from istresearch/hotfix-1.2.1: Hotfix 1.2.1

2 parents 7712929 + 5db560b, commit 92900c1

File tree

13 files changed (+38, -29 lines)

.travis.yml

Lines changed: 2 additions & 2 deletions
@@ -16,10 +16,10 @@ env:
       run_opts: ""
     - docker: 1
       dockerfile_name: Dockerfile
-      docker_tag_suffix: 1.2
+      docker_tag_suffix: 1.2.1
    - docker: 1
      dockerfile_name: Dockerfile.py2alpine
-      docker_tag_suffix: 1.2-alpine
+      docker_tag_suffix: 1.2.1-alpine

 install: true

README.md

Lines changed: 2 additions & 2 deletions
@@ -51,10 +51,10 @@ To set up a pre-canned Scrapy Cluster test environment, make sure you have the l

 ## Documentation

-Please check out the official [Scrapy Cluster 1.2 documentation](http://scrapy-cluster.readthedocs.org/en/latest/) for more information on how everything works!
+Please check out the official [Scrapy Cluster 1.2.1 documentation](http://scrapy-cluster.readthedocs.org/en/latest/) for more information on how everything works!

 ## Branches

-The `master` branch of this repository contains the latest stable release code for `Scrapy Cluster 1.2`.
+The `master` branch of this repository contains the latest stable release code for `Scrapy Cluster 1.2.1`.

 The `dev` branch contains bleeding edge code and is currently working towards [Scrapy Cluster 1.3](https://github.com/istresearch/scrapy-cluster/milestone/3). Please note that not everything may be documented, finished, tested, or finalized but we are happy to help guide those who are interested.

crawler/tests/online.py

Lines changed: 3 additions & 3 deletions
@@ -35,10 +35,10 @@ class CustomSpider(LinkSpider):
 class TestLinkSpider(TestCase):

     example_feed = "{\"allowed_domains\":null,\"allow_regex\":null,\""\
-                   "crawlid\":\"abc12345\",\"url\":\"istresearch.com\",\"expires\":0,\""\
+                   "crawlid\":\"abc12345\",\"url\":\"http://dmoztools.net/\",\"expires\":0,\""\
                    "ts\":1461549923.7956631184,\"priority\":1,\"deny_regex\":null,\""\
                    "cookie\":null,\"attrs\":null,\"appid\":\"test\",\"spiderid\":\""\
-                   "link\",\"useragent\":null,\"deny_extensions\":null,\"maxdepth\":0}"
+                   "test-link\",\"useragent\":null,\"deny_extensions\":null,\"maxdepth\":0}"

     def setUp(self):
         self.settings = get_project_settings()
@@ -75,7 +75,7 @@ def test_crawler_process(self):
         d = runner.crawl(CustomSpider)
         d.addBoth(lambda _: reactor.stop())
         # add crawl to redis
-        key = "test-spider:istresearch.com:queue"
+        key = "test-spider:dmoztools.net:queue"
         self.redis_conn.zadd(key, self.example_feed, -99)

         # run the spider, give 20 seconds to see the url, crawl it,
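A side note on the second hunk: `self.redis_conn.zadd(key, self.example_feed, -99)` uses the positional `(key, value, score)` signature from redis-py 2.x; redis-py 3.x changed `zadd` to accept a mapping of members to scores. A minimal sketch of seeding the same test queue by hand, assuming a Redis instance on localhost and using a shortened feed payload for illustration:

    import redis

    # Assumes Redis is reachable locally; adjust host/port to match your cluster.
    redis_conn = redis.StrictRedis(host='localhost', port=6379)

    key = "test-spider:dmoztools.net:queue"
    feed = '{"url": "http://dmoztools.net/", "crawlid": "abc12345", "appid": "test", "spiderid": "test-link"}'

    # redis-py 2.x positional form, matching the test above:
    redis_conn.zadd(key, feed, -99)

    # redis-py 3.x equivalent, should your environment pin the newer client:
    # redis_conn.zadd(key, {feed: -99})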

docker-compose.yml

Lines changed: 4 additions & 4 deletions
@@ -2,27 +2,27 @@ version: '2'

 services:
   kafka_monitor:
-    image: istresearch/scrapy-cluster:kafka-monitor-1.2
+    image: istresearch/scrapy-cluster:kafka-monitor-1.2.1
     depends_on:
       - kafka
       - redis
     restart: always
   redis_monitor:
-    image: istresearch/scrapy-cluster:redis-monitor-1.2
+    image: istresearch/scrapy-cluster:redis-monitor-1.2.1
     depends_on:
       - kafka
       - redis
       - zookeeper
     restart: always
   crawler:
-    image: istresearch/scrapy-cluster:crawler-1.2
+    image: istresearch/scrapy-cluster:crawler-1.2.1
     depends_on:
       - kafka
       - redis
       - zookeeper
     restart: always
   rest:
-    image: istresearch/scrapy-cluster:rest-1.2
+    image: istresearch/scrapy-cluster:rest-1.2.1
     depends_on:
       - kafka
       - redis
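Since the compose file now points at the retagged 1.2.1 images, an already-running stack needs to fetch them before restarting. A minimal sketch using standard docker-compose commands, assuming they are run from the repository root:

    docker-compose pull
    docker-compose up -d

The `pull` grabs the new 1.2.1 tags, and `up -d` recreates only the containers whose images changed.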

docs/conf.py

Lines changed: 2 additions & 2 deletions
@@ -56,9 +56,9 @@
 # built documents.
 #
 # The short X.Y version.
-version = '1.2'
+version = '1.2.1'
 # The full version, including alpha/beta/rc tags.
-release = '1.2'
+release = '1.2.1'

 # The language for content autogenerated by Sphinx. Refer to documentation
 # for a list of supported languages.

docs/topics/advanced/docker.rst

Lines changed: 1 addition & 1 deletion
@@ -77,7 +77,7 @@ It is recommended you use docker compose to orchestrate your cluster with all of

 ::

-    image: istresearch/scrapy-cluster:kafka-monitor-1.2
+    image: istresearch/scrapy-cluster:kafka-monitor-1.2.1
     build:
       context: .
       dockerfile: docker/kafka-monitor/Dockerfile

docs/topics/advanced/rediskeys.rst

Lines changed: 1 addition & 1 deletion
@@ -69,6 +69,6 @@ If you run the integration tests, there may be temporary Redis keys created that

 - **cluster:test** - Used when testing the Kafka Monitor can act and set a key in Redis

-- **test-spider:istresearch.com:queue** - Used when testing the crawler installation can interact with Redis and Kafka
+- **test-spider:dmoztools.net:queue** - Used when testing the crawler installation can interact with Redis and Kafka

 - **stats:crawler:<hostname>:test-spider:<window>** - Automatically created and destroyed during crawler testing by the stats collection mechanism settings.

docs/topics/changelog.rst

Lines changed: 9 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -5,6 +5,15 @@ Change Log
55

66
This page serves to document any changes made between releases.
77

8+
Scrapy Cluster 1.2.1
9+
--------------------
10+
11+
Date: 01/19/2018
12+
13+
- Fixes unit test syntax for link spider
14+
15+
- Fixes docker version upgrade on Travis for continuous integration tests
16+
817
Scrapy Cluster 1.2
918
------------------
1019

docs/topics/introduction/quickstart.rst

Lines changed: 3 additions & 3 deletions
@@ -431,15 +431,15 @@ Whichever setup you chose, every process within should stay running for the rem

 ::

-    python kafka_monitor.py feed '{"url": "http://istresearch.com", "appid":"testapp", "crawlid":"abc123"}'
+    python kafka_monitor.py feed '{"url": "http://dmoztools.net", "appid":"testapp", "crawlid":"abc123"}'

 You will see the following output on the command line for that successful request:

 ::

     2015-12-22 15:45:37,457 [kafka-monitor] INFO: Feeding JSON into demo.incoming
     {
-        "url": "http://istresearch.com",
+        "url": "http://dmoztools.net",
         "crawlid": "abc123",
         "appid": "testapp"
     }
@@ -460,7 +460,7 @@ Crawl Request:

 ::

-    python kafka_monitor.py feed '{"url": "http://dmoz.org", "appid":"testapp", "crawlid":"abc1234", "maxdepth":1}'
+    python kafka_monitor.py feed '{"url": "http://dmoztools.net", "appid":"testapp", "crawlid":"abc1234", "maxdepth":1}'

 Now send an ``info`` action request to see what is going on with the
 crawl:
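The ``info`` request mentioned in the last two context lines goes through the same feed utility; a sketch of what that action request could look like, assuming the action API fields from the Scrapy Cluster docs (the ``uuid`` value here is an arbitrary placeholder):

    python kafka_monitor.py feed '{"action": "info", "appid":"testapp", "uuid":"someuuid", "crawlid":"abc1234", "spiderid":"link"}'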

docs/topics/kafka-monitor/quickstart.rst

Lines changed: 5 additions & 5 deletions
@@ -33,7 +33,7 @@ JSON Object feeder into your desired Kafka Topic. This takes a valid JSON object

 ::

-    $ python kafka_monitor.py feed '{"url": "http://istresearch.com", "appid":"testapp", "crawlid":"ABC123"}'
+    $ python kafka_monitor.py feed '{"url": "http://dmoztools.net", "appid":"testapp", "crawlid":"ABC123"}'

 The command line feed is very slow and should not be used in production. Instead, you should write your own continuously running application to feed Kafka the desired API requests that you require.

@@ -89,10 +89,10 @@ Feed an item

 ::

-    $ python kafka_monitor.py feed '{"url": "http://istresearch.com", "appid":"testapp", "crawlid":"ABC123"}'
+    $ python kafka_monitor.py feed '{"url": "http://dmoztools.net", "appid":"testapp", "crawlid":"ABC123"}'
     2016-01-05 15:14:44,829 [kafka-monitor] INFO: Feeding JSON into demo.incoming
     {
-        "url": "http://istresearch.com",
+        "url": "http://dmoztools.net",
         "crawlid": "ABC123",
         "appid": "testapp"
     }
@@ -116,8 +116,8 @@ If you have a :ref:`Crawler <crawler>` running, you should see the html come thr
     "response_headers": {
         <headers omitted>
     },
-    "response_url": "http://istresearch.com",
-    "url": "http://istresearch.com",
+    "response_url": "http://dmoztools.net",
+    "url": "http://dmoztools.net",
     "status_code": 200,
     "status_msg": "OK",
     "appid": "testapp",
