Update README.md for running the CoreNLP server when set up through Docker. #32

10 changes: 10 additions & 0 deletions README.md
@@ -71,6 +71,16 @@ $ docker run -v {path to wiki_bigrams.bin}:/sent2vec/pretrained_model.bin -it keyphrase-extraction
You have to specify the path to your sent2vec model using the `-v` argument.
If, for example, you choose not to use the *wiki_bigrams.bin* model, adjust your path accordingly (and, of course, remember to remove the curly brackets).
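
For example, if your model lives at `/data/wiki_bigrams.bin` on the host (a hypothetical path used here only for illustration), the command would look like:
```
$ docker run -v /data/wiki_bigrams.bin:/sent2vec/pretrained_model.bin -it keyphrase-extraction
```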


To start the CoreNLP server and run your code through your own Python script:
```
$ docker run -p 9000:9000 -v {path to wiki_bigrams.bin}:/sent2vec/pretrained_model.bin -it keyphrase-extraction
# Start the CoreNLP server inside the container (the trailing & keeps it running in the background)
/app # cd /stanford-corenlp
/stanford-corenlp # java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -preload tokenize,ssplit,pos -status_port 9000 -port 9000 -timeout 15000 &
```
The server will report the IP (typically 0.0.0.0) and the port (9000) it is listening on. Add those details to the config.ini file (as indicated in step 6 above).
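
If you want to sanity-check the connection from your own Python script, a minimal sketch along the lines below should work. It talks to the CoreNLP server's REST endpoint directly via `requests`; the URL, annotator list, and sample text are assumptions based on the command above, not code from this repository.
```
# Minimal sketch (not part of this repo): query the CoreNLP server started above.
import json
import requests

CORENLP_URL = "http://0.0.0.0:9000"  # host and port reported by the server

def annotate(text):
    """POST raw text to the CoreNLP server and return its JSON annotation."""
    properties = {"annotators": "tokenize,ssplit,pos", "outputFormat": "json"}
    response = requests.post(
        CORENLP_URL,
        params={"properties": json.dumps(properties)},
        data=text.encode("utf-8"),
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    result = annotate("CoreNLP is running inside the container.")
    for sentence in result["sentences"]:
        # Print (word, part-of-speech) pairs for each token in the sentence
        print([(token["word"], token["pos"]) for token in sentence["tokens"]])
```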

# Usage

Once the CoreNLP server is running