Suggestions to improve Dockerfile - reduce image size #4986

Closed · barakbd opened this issue Aug 18, 2018 · 50 comments

barakbd commented Aug 18, 2018

Is your feature request related to a problem? Please describe.

  1. Use the node:carbon-alpine base image to reduce the image size (unless you really need the full-size node image)

  2. Volumes:
    No need for mkdir + VOLUME; VOLUME creates the directories automatically. Also, it is possible to specify multiple volumes in one line:
    https://docs.docker.com/engine/reference/builder/#volume

  3. Non-root user:
    https://github.com/nodejs/docker-node/blob/master/docs/BestPractices.md#non-root-user

  4. Entrypoint - use node and not npm
    https://github.com/nodejs/docker-node/blob/master/docs/BestPractices.md#docker-run

  5. Multi-stage Dockerfile - reduce image size by building the final image without devDependencies:
    Docker multi-stage builds - https://docs.docker.com/develop/develop-images/multistage-build/
    TravisCI supported - Upgrade Docker for multi-stage build support travis-ci/travis-ci#8181

Suggested Dockerfile:

FROM node:carbon-alpine

# After this you can just use ./ to refer to the cwd
WORKDIR /parse-server

# Specify multiple volumes in one line - reuse layers during build
VOLUME ["/parse-server/config", "/parse-server/cloud"]

# Copy into /parse-server, which is the current working directory
COPY ./ ./

RUN npm install && \
    npm run build

ENV PORT=1337

EXPOSE $PORT

# non-root user
USER node

# start with node, not npm
ENTRYPOINT ["node", "./bin/parse-server", "--"]
flovilmart (Contributor) commented:

The main issue we have with the Docker setup is that it isn't that easy to get started, IMHO.

As the cloud code, config, and all env vars are provided a posteriori, possibly with more packages to install from npm/yarn, the base image as it is isn't mega useful.

Would you have an idea on how to solve that?

i.e., making a base image useful enough that you can start quickly AND fully enjoy it when you have complex config/cloud code.

flovilmart (Contributor) commented:

I'm also thinking about use with k8s: can we provide an easy-to-use pod template or Helm chart that 'scales'?

Otherwise, it's easy to package a Node app into a Docker image, so we should encourage people to build a Node app and then use Docker for their deployment.

barakbd (Author) commented Aug 18, 2018

I just started using the docker image, but I do have some experience with Docker/K8s, so if I can understand the problem better, I could help.
What do you mean "cloud, config and all env are provided a posteriori"?

flovilmart (Contributor) commented:

I mean that there are two main ways to use parse-server with Docker.

  1. One is to use the auto-generated image, mount the volumes, and pass the env variables / config at 'docker run' time. In this scenario, the user doesn't write any Dockerfile. When using it with k8s, there is no custom image and no need to customize further.

  2. The other option is for the user to use parse-server as an npm dependency (like in parse-server-example) and treat everything as a Node.js app. The Dockerfile is then just "FROM node".

The benefit of 1 is that the whole thing is Docker oriented. No need to know about Node runtimes, etc.

The benefit of 2 is that it's Node.js oriented and easy to run locally.

Do you see what I mean?

My 'issue' at this time is that both options are totally valid. I tend to go with option 2 so Docker is not 'required' to run the projects locally. Because both are valid, it's 'hard' for someone to pick the right one, to properly explain it, or to make an image that is very, very useful for everyone.
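
For illustration, option 1 might look roughly like this (a sketch, not an official recipe: the host paths, credentials, and database name are hypothetical; the PARSE_SERVER_* names follow the parse-server CLI's environment variable conventions):

docker run -d \
  -v /host/config:/parse-server/config \
  -v /host/cloud:/parse-server/cloud \
  -e PARSE_SERVER_APPLICATION_ID=myAppId \
  -e PARSE_SERVER_MASTER_KEY=myMasterKey \
  -e PARSE_SERVER_DATABASE_URI=mongodb://mongo:27017/dev \
  -p 1337:1337 \
  parseplatform/parse-server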

acinader (Contributor) commented:

I haven't been able to get option 1 to work with our cloud code. I haven't put too much effort into it, but I did try just dockerizing the full repo. As I recall, I wasn't able to get my cloud code running. So far I have gone with option 2, as @flovilmart suggests.

acinader (Contributor) commented:

I am interested in figuring out a Kubernetes recipe for parse-server deployment.

barakbd (Author) commented Aug 19, 2018

I'm actually having trouble with npm install: bcrypt needs to build for some reason, and I have Python 3.x instead of 2.7. Once I solved that, it says I'm missing the Xcode build tools...

This is exactly why Docker is great - no need to worry about environment setup.
Has anyone been able to get cloud code working with the docker image?
I will be able to try this week.
Once I get the basic image working, I can talk about K8s.

barakbd (Author) commented Aug 19, 2018

I can help with multi-stage Docker builds, docker-compose, and CI, though my experience is with Jenkins, not TravisCI.

flovilmart (Contributor) commented:

@barakbd you'll likely need Xcode and the command line tools installed. (They can be installed through https://github.com/KrauseFx/xcode-install/blob/master/README.md)

acinader (Contributor) commented Aug 21, 2018

I think that for our on-boarding documentation, the Docker solution should be a workflow that uses the prebuilt parseplatform/parse-server image.

I played around with this a bit over the last few days and made more progress than I have to date. I set up three containers: mongo, parse-server, and parse-dashboard. For the dashboard, I mounted a volume with the dashboard config in it. For parse-server, I created a volume with the cloud code in it and passed the rest of the config via command-line arguments to docker run.

I put a package.json in the cloud directory with lodash as a dependency and used lodash.
I used a parse-server object (the LoggerController).

Here's the cloud code which all worked as expected (I could verify the lodash use in the dashboard and I could see the 'hi mom' in the docker logs):

// AppCache exposes parse-server internals such as the LoggerController
const { AppCache } = require('/parse-server/lib/cache');
const _ = require('lodash');

const unsorted = [10, 7, 4, 6, 9, 3, 8, 2, 1];

Parse.Cloud.beforeSave('GameScore', (request) => {
  // fetch parse-server's logger and write to the docker logs
  const { loggerController: logger } = AppCache.get(Parse.applicationId);
  logger.info('hi mom');
  // exercise the lodash dependency installed from the cloud dir's package.json
  const sample = _.sample(unsorted);
  const { object: gameScore } = request;
  gameScore.set({ sample });
  return gameScore;
});

So to me this is a good proof of concept that the docker image approach can work and is reasonable.

The next step for me is to try and put it all together in k8s to see how easy it is to get running locally in minikube.

@barakbd, do you want to open a PR with your Dockerfile changes? I haven't tested them yet, but I will.

barakbd (Author) commented Aug 22, 2018

I want to test first. Do you know if the docker image is built from an Express app that imports parse-server, or is it pure parse-server?
I am a bit busy this week, but hopefully by next week I will be able to run the container and see the contents (directories/files) of the image.
Then I will build and compare sizes.

flovilmart (Contributor) commented:

@barakbd the docker image that we auto-build uses the parse-server CLI, so it creates its own Express server.

See: https://github.com/parse-community/parse-server/blob/master/src/cli/parse-server.js

Which is basically:

  1. parsing the configuration / arguments / environment variables with the help of commander
  2. calling ParseServer.start() with the result of step 1.
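
For illustration, invoking the CLI directly looks roughly like this (a sketch; the flags are standard parse-server CLI options, and the values are placeholders):

node ./bin/parse-server \
  --appId myAppId \
  --masterKey myMasterKey \
  --databaseURI mongodb://localhost:27017/dev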

barakbd (Author) commented Aug 22, 2018

By the way, this is a cleaner way to write a gitignore/dockerignore - https://stackoverflow.com/questions/987142/make-gitignore-ignore-everything-except-a-few-files

# Ignore everything
*

# But not these files...
!.gitignore
!script.pl
!template.latex
# etc...

# ...even if they are in subdirectories
!*/

# if the files to be tracked are in subdirectories
!*/a/b/file1.txt
!*/a/b/c/*

barakbd (Author) commented Aug 22, 2018

Just to confirm why I would like to reduce the image size:
The Git repo size is around 6 MB (https://api.github.com/repos/parse-community/parse-server):

"size": 6102

The Docker image size is 897 MB (from docker images):

parseplatform/parse-server         latest              e3c6c9eea775        5 days ago          897MB

Am I correct?

flovilmart (Contributor) commented:

By the way, this is a cleaner way for gitignore/dockerignore -

Do we really need to rewrite that now? What's the benefit?

Just to confirm why I would like to reduce the image size:

This makes sense as we're building from a 'fat source', BUT it also has some dependencies. If the dev packages are inadvertently copied into the docker image, that would explain why it's so big.

barakbd (Author) commented Aug 22, 2018

Also, the base image is plain node. We should use "FROM node:carbon", which is smaller.
Also, we need to add the tini package (https://github.com/krallin/tini#alpine-linux-package).
Here is a great article explaining why (old, but it explains the problem well) - https://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/
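
For reference, wiring tini into an Alpine-based image would look roughly like this (a sketch; the Alpine package installs tini at /sbin/tini):

# tini runs as PID 1, reaps zombies, and forwards signals to its child
RUN apk add --no-cache tini
ENTRYPOINT ["/sbin/tini", "--", "node", "./bin/parse-server"]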

I just forked and will try to build locally with an updated Dockerfile.
If I run into issues, I will let you know.

flovilmart (Contributor) commented:

I'm not convinced of the utility of tini / the PID 1 reaping problem. The server doesn't spawn any child processes, and should not spawn any. I don't expect the process to restart on crashes; this should be handled externally, through health checks.

barakbd (Author) commented Aug 23, 2018

If your entrypoint is calling "node" directly, then there is no need. But if you are starting "node" through "npm", which we should not do, then you should have tini.

The improvement to the ignore file is not urgent, but good to note.

So for building locally I need to fork the project, clone master locally, make changes to Dockerfile, and run docker build?

flovilmart (Contributor) commented Aug 23, 2018

There are other issues with using npm as the entrypoint, such as improper forwarding of certain signals to the node process. So we should always use node directly, and then tini is not needed.
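
For context, the contrast between the two entrypoints (a sketch):

# npm runs node as a child process and does not reliably forward signals such as SIGTERM
ENTRYPOINT ["npm", "start"]

# node as PID 1 receives signals directly (the app itself should handle SIGTERM)
ENTRYPOINT ["node", "./bin/parse-server"]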

barakbd (Author) commented Aug 30, 2018

It doesn't seem that TravisCI is building and pushing to Docker Hub. Is this correct? Are you doing this manually?
@flovilmart - let me know when you are available for a quick call. I have a multi-stage Dockerfile ready; I just need to know if I am doing it correctly.

flovilmart (Contributor) commented:

Everything is done automatically with Docker Hub, yes.

I have a multi-stage Dockerfile ready; I just need to know if I am doing it correctly.

Well, you should know that. What does your Dockerfile look like?

barakbd (Author) commented Sep 2, 2018

I successfully built a smaller image using the multi-stage Dockerfile below.

Without adding python, make, and g++, which are required for bcrypt 3.x, the image size is 167 MB.
With them added, the image size is 367 MB.

Stage 3 (test) is commented out because tests were failing, specifically the pretest script, which runs lint.
I could not see where to correct the error (events.js).
Who is familiar with the tests and can help?

# https://blog.hasura.io/an-exhaustive-guide-to-writing-dockerfiles-for-node-js-web-apps-bbee6bd2f3c4
# https://codefresh.io/docker-tutorial/node_docker_multistage/


# ------- Stage 1 - Base ---------
FROM node:carbon-alpine as base
# Apk - https://www.cyberciti.biz/faq/10-alpine-linux-apk-command-examples/
RUN apk update && apk upgrade && \
    apk add --no-cache bash git 
# Add python, make and g++ here if building bcrypt 3.x

# ENV, WORKDIR and COPY run as USER root.
# If you want the server files and node_modules to be owned by USER node,
# you need to chown to USER node after every COPY.
# After this you can use ./ to refer to WORKDIR
WORKDIR /parse-server

# Specify multiple volumes in one line - reuse layers during build
VOLUME ["/parse-server/config", "/parse-server/cloud"]

# Copy all package.json-related files
COPY package*.json ./

#
# ------- Stage 2 - Dependencies ---------
# This makes sure that npm i only runs once for each package - shorter build time.
FROM base AS dependencies
# set npm configs 
RUN npm set progress=false && npm config set depth 0
# Install production packages only and copy to another dir
RUN npm install --only=production 
RUN cp -R node_modules prod_node_modules
# install ALL node_modules, including 'devDependencies'
RUN npm install

#
# # ------- Stage 3 - Test ---------
# # run linters, setup and tests
# FROM dependencies AS test
# # Copy everything into WORKDIR (/parse-server) excluding items in .dockerignore
# COPY . .
# # Maybe need to build before test?
# # If test
# RUN npm run test


# ------- Stage 4 - Release ---------
FROM base AS release
# copy production node_modules
COPY --from=dependencies /parse-server/prod_node_modules ./node_modules
COPY . .
#capture git_commit in label
ARG GIT_COMMIT
LABEL git_commit=$GIT_COMMIT

# run as non-root. USER node is provided with node images
# https://github.com/nodejs/docker-node/blob/master/docs/BestPractices.md#non-root-user
USER node

# EXPOSE - informational only
EXPOSE 4000

# https://www.ctl.io/developers/blog/post/dockerfile-entrypoint-vs-cmd/
# start with node, not npm
ENTRYPOINT ["node", "./bin/parse-server", "--"]

# Target a certain step in the build process
# sudo docker build --target release -t username/parse-server:test .

flovilmart (Contributor) commented Sep 2, 2018

Stage 3 (test) is commented out because tests were failing, specifically the pretest script, which runs lint.
I could not see where to correct the error (events.js).
Who is familiar with the tests and can help?

Do you have more info?

Yes, it's likely you'll need to copy the built JS and not work with the raw JS from the src folder, as no node runtime understands the source.

As for a target user, how would they use it?

How is it better/easier than what we have now? It seems very complex, and the last thing I want to add is complexity, in both maintenance and usage.

barakbd (Author) commented Sep 2, 2018

> npm run lint


> [email protected] lint /parse-server
> flow && eslint --cache ./

events.js:183
      throw er; // Unhandled 'error' event
      ^

Error: spawn /parse-server/node_modules/flow-bin/flow-linux64-v0.79.1/flow ENOENT
    at _errnoException (util.js:992:11)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:190:19)
    at onErrorNT (internal/child_process.js:372:16)
    at _combinedTickCallback (internal/process/next_tick.js:138:11)
    at process._tickCallback (internal/process/next_tick.js:180:9)
    at Function.Module.runMain (module.js:695:11)
    at startup (bootstrap_node.js:191:16)
    at bootstrap_node.js:612:3
npm ERR! code ELIFECYCLE

This is standard now, as multi-stage builds ensure unit tests pass before the image is built. See the first 2 links I put at the top of the Dockerfile.
I did not see in the .travis.yml or the package.json that we need to build before test. Can you tell me exactly which folders/files are needed for the prod image?
Is it:
package.json
/lib
/node_modules?
Anything else?

flovilmart (Contributor) commented:

Well, you need the dev dependencies to run the tests, as (1) you need to build the lib folder and (2) you need Jasmine et al.

barakbd (Author) commented Sep 2, 2018

Stage 3, which runs the tests, has all those. Please see the Dockerfile: stage 3 starts FROM dependencies, so the base image for stage 3 has all those.

barakbd (Author) commented Sep 2, 2018

The target user doesn't need the Dockerfile; it is just for building a smaller, more reliable image, which the target user can enjoy.
Reducing an almost 1 GB image to ~100 MB is a big difference.

flovilmart (Contributor) commented:

Dependencies is npm install --only=production

barakbd (Author) commented Sep 2, 2018

Look at the last line: it runs npm install again, without the production flag. The logic for this is explained in the Codefresh article.

barakbd (Author) commented Sep 2, 2018

Can you show me where you build before testing in the .travis.yml?
Which files/folders are mandatory for the production image?
It will help me reduce the image size and optimize it.

flovilmart (Contributor) commented:

Well, hahaha :) just because someone writes something in an article doesn't mean it works. I find it odd to run 2 npm installs. The toolchain and environment are completely different between tests and production. If anything, the test image should mount the lib folder from the prod image, not the opposite.

flovilmart (Contributor) commented Sep 2, 2018

The build of the lib folder is done on npm install.

barakbd (Author) commented Sep 2, 2018

Obviously. That is why I am trying to understand where you build before testing and which files/folders are needed for prod.
I will try adding npm run build to pretest in package.json; that should work, right?
The reason to run 2 npm installs is to reduce build time. First run installs only prod modules and copies to a folder for future use.
2nd run installs devDependencies, required for testing, but since the prod modules are already installed, install time is shorter.

barakbd (Author) commented Sep 2, 2018

I don't see it in the postinstall script or postinstall.js. What am I missing?

flovilmart (Contributor) commented:

I will try adding npm run build to pretest in package.json; that should work, right?

That's a bad idea, because you're twisting the node project in order to make it run in docker.

The reason to run 2 npm installs is to reduce build time. First run installs only prod modules and copies to a folder for future use.
2nd run installs devDependencies, required for testing, but since the prod modules are already installed, install time is shorter.

Why do you care? It's built on Docker Hub. This seems like a premature optimization, which leads to bad things.

Also: I thought the goal was to:

  • Reduce the image size
  • Make it easier to use parse-server with things like k8s and docker-compose, being able to reference the official image and provide just volume mounts, etc.

It seems that the current solution you're exploring, while perhaps heading there, isn't bringing us any closer to that goal.

flovilmart (Contributor) commented Sep 2, 2018

There are prepare and postinstall:

https://github.com/parse-community/parse-server/blob/master/package.json#L84

And perhaps the package.json docs may help you as well: https://docs.npmjs.com/files/package.json#devdependencies

barakbd (Author) commented Sep 3, 2018

  1. The first goal is to get the image size smaller.
  2. Regarding ease of use, I did not claim to make it easier. I also don't think it is too complicated; mounting volumes in Docker is common practice. Once I am able to get the image smaller, I can start testing with a docker-compose file and post it as an example. My hope is to get a working example with a .env file, a Postman collection, and a docker-compose file.

Have you looked at the error that I shared from the test stage? The test stage does have the /lib directory, as a full npm install is run previously (stage 2).

I tested npm install, and since there is a build step, 2 npm installs are required. The devDependencies add about 300 MB. When npm install --production is run, the lib directory is not created (obviously).
So here is the updated multi-stage file. I got rid of the test stage, as the tests are run outside of Docker.
I just need to know if I am copying the necessary folders/files in stage 4.

# https://blog.hasura.io/an-exhaustive-guide-to-writing-dockerfiles-for-node-js-web-apps-bbee6bd2f3c4
# https://codefresh.io/docker-tutorial/node_docker_multistage/


# ------- Stage 1 - Base ---------
FROM node:carbon-alpine as base
# Apk - https://www.cyberciti.biz/faq/10-alpine-linux-apk-command-examples/
RUN apk update && apk upgrade && \
    apk add --no-cache git bash
# Add python, make and g++ here if building bcrypt 3.x

# ENV, WORKDIR and COPY run as USER root.
# If you want the server files and node_modules to be owned by USER node,
# you need to chown to USER node after every COPY.
# After this you can use ./ to refer to WORKDIR
WORKDIR /parse-server

# Specify multiple volumes in one line - reuse layers during build
VOLUME ["/parse-server/config", "/parse-server/cloud"]

# Copy all package.json-related files
COPY package*.json ./

#
# ------- Stage 2 - Dependencies ---------
# This makes sure that npm i only runs once for each package - shorter build time.
FROM base AS dependencies
# set npm configs 
RUN npm set progress=false && npm config set depth 0
# Install production packages only and copy to another dir
RUN npm install --production 
# RUN cp -R node_modules prod_node_modules
# install ALL node_modules, including 'devDependencies'
# RUN npm install


# ------- Stage 3 - Build ---------
# Build the lib folder (requires devDependencies)
FROM dependencies AS build
# Copy everything into WORKDIR (/parse-server), excluding items in .dockerignore
COPY . .
# Full npm install adds devDependencies and triggers the build of ./lib
RUN npm install


# ------- Stage 4 - Release ---------
FROM dependencies AS release
# copy production node_modules
# COPY --from=dependencies /parse-server/prod_node_modules ./node_modules
COPY --from=build /parse-server/lib ./lib
COPY --from=build /parse-server/bin ./bin

#capture git_commit in label
ARG GIT_COMMIT
LABEL git_commit=$GIT_COMMIT

# run as non-root. USER node is provided with node images
# https://github.com/nodejs/docker-node/blob/master/docs/BestPractices.md#non-root-user
USER node

# EXPOSE - informational only
EXPOSE 4000

# https://www.ctl.io/developers/blog/post/dockerfile-entrypoint-vs-cmd/
# start with node, not npm
ENTRYPOINT ["node", "./bin/parse-server", "--"]

# Target a certain step in the build process
# sudo docker build --target release -t username/parse-server:test .

flovilmart (Contributor) commented:

You are probably missing many files; have a look at package.json and the files directive, which declares the files that need to be copied over when distributing.
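
For context, a files directive in package.json looks like this (an illustrative sketch only; check parse-server's package.json for the actual list):

"files": [
  "bin/",
  "lib/",
  "public_html/",
  "views/"
]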

Now, the order of operations doesn't make any sense, as no tests are running.

It would make more sense to run install --production in the base image,
use a temporary image for creating the lib folder,
and copy the necessary files into the final image.

What I really don't like with this approach of copying things over is that it requires manual maintenance if we add more things.

flovilmart (Contributor) commented Sep 3, 2018

Also, in the sense of 'all articles may not be right': moving the node_modules folder is unnecessary; you can use npm prune --production to remove installed dev dependencies:
https://docs.npmjs.com/cli/prune
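
For example, the simpler flow would be roughly (a sketch):

# install everything, build ./lib, then strip devDependencies from node_modules
RUN npm install && \
    npm run build && \
    npm prune --production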

barakbd (Author) commented Sep 6, 2018

I found the error with the test stage. Apparently flow-bin from npm doesn't work on the Alpine variant. The solution is here (posted 7 days ago): facebook/flow#3649.
Adding this line works:

RUN apk add --no-cache --repository https://nl.alpinelinux.org/alpine/edge/testing flow

However, I cannot run npm test, since the pretest script runs flow-bin from npm, and that exits with an error. When I remove flow from the pretest script, all tests pass. It can be run manually as a separate script. I only suggest this as a solution for adding testing inside Docker.
Also, you need to manually run RUN npm run prepare && npm run postinstall, as Docker builds run as root, and npm will not run lifecycle scripts automatically as root.
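
For reference, npm skips lifecycle scripts when running as root unless told otherwise; an alternative sketch using npm's --unsafe-perm flag:

# --unsafe-perm lets npm run lifecycle scripts (prepare/postinstall) even as root
RUN npm install --unsafe-perm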

Now, the order of operations doesn't make any sense, as no tests are running.

It would make more sense to run install --production in the base image,
use a temporary image for creating the lib folder,
and copy the necessary files into the final image.

This is exactly what the Dockerfile is doing (see above).

Also, in the sense of 'all articles may not be right': moving the node_modules folder is unnecessary; you can use npm prune --production to remove installed dev dependencies:

Yes, but then you would also have to rm -rf src/ and other items to reduce the image size. In any case, it is a manual process.

I am testing an updated Dockerfile. Do we want to build it with bcrypt 3.x? According to npm (https://www.npmjs.com/package/bcrypt), that would require Node 10. Is there any issue with using Node 10 as the base image?

flovilmart (Contributor) commented:

This seems overcomplicated / overengineered to me at this point.
I'm not sure why any of this is required, as the build system works as is.
Why not build on a 'fat image' and only copy the relevant artifacts to a 'slim' one?

Also, npm should run prepare etc.; if it doesn't, that doesn't fall into the 'expected behaviours'. Isn't the system supposed to finally run the process as an unprivileged user? Do you have any docs supporting the claim that npm run as 'root' won't run the normal flow?

Node 10 is not supported nor recommended for production, and I don't want to maintain / support many docker images. Ideally I want to minimize the amount of support.

Feel free to open a PR when ready.

flovilmart (Contributor) commented:

Yes, but then you would also have to rm -rf src/ and other items to reduce the image size. In any case, it is a manual process.

The src folder is 1.2 MB... which pales in comparison to the node_modules folder...

barakbd (Author) commented Sep 6, 2018

flovilmart (Contributor) commented:

This is probably why the current docker image uses:

npm install && npm run build

I've built the image based upon our current one:

-FROM node:carbon
+FROM node:carbon-alpine
+RUN echo "@edge http://nl.alpinelinux.org/alpine/edge/main" >> /etc/apk/repositories
+RUN apk update
+RUN apk add git
 
 RUN mkdir -p /parse-server
 COPY ./ /parse-server/
 
-RUN mkdir -p /parse-server/config
-VOLUME /parse-server/config
-
-RUN mkdir -p /parse-server/cloud
-VOLUME /parse-server/cloud
+VOLUME ["/parse-server/config", "/parse-server/cloud"]
 
 WORKDIR /parse-server
 
 RUN npm install && \
     npm run build
 
+RUN npm prune --production
+
 ENV PORT=1337
 
 EXPOSE $PORT
 
+USER node
+
 ENTRYPOINT ["npm", "start", "--"]

With minimal changes:

  • use alpine image
  • use npm prune to remove extraneous packages

The final size is 378 MB, which is not bad and actually very simple to maintain, understand, and run.

This still doesn't help with usability in the context of compose or k8s, which for me is the main issue.

As you originally suggested, a slimmed-down image should be used (:YAY:). Can we move forward with that and not keep nitpicking megabytes?

Closing the issue, as all required elements are available for a PR.

barakbd (Author) commented Sep 6, 2018

OK. I will create a PR soon.

barakbd (Author) commented Sep 13, 2018

@acinader Can you have a look at the following repo?
https://github.com/barakbd/parse-server-docker-compose

It is a docker-compose file that runs mongo and parse-server.
I get an error that the user cannot be authenticated.

mongo-3.6       | 2018-09-12T23:45:00.583+0000 I ACCESS   [conn1] SCRAM-SHA-1 authentication failed for admin on undefined from client 172.25.0.3:45108 ; UserNotFound: Could not find user admin@undefined
mongo-3.6       | 2018-09-12T23:45:00.593+0000 I NETWORK  [conn1] end connection 172.25.0.3:45108 (0 connections now open)
parse-server    | warn: Unable to ensure uniqueness for usernames:  MongoError: Authentication failed.
parse-server    |     at /parse-server/node_modules/mongodb-core/lib/connection/pool.js:581:63
parse-server    |     at authenticateStragglers (/parse-server/node_modules/mongodb-core/lib/connection/pool.js:504:16)
parse-server    |     at Connection.messageHandler (/parse-server/node_modules/mongodb-core/lib/connection/pool.js:540:5)
parse-server    |     at emitMessageHandler (/parse-server/node_modules/mongodb-core/lib/connection/connection.js:310:10)
parse-server    |     at Socket.<anonymous> (/parse-server/node_modules/mongodb-core/lib/connection/connection.js:453:17)
parse-server    |     at emitOne (events.js:116:13)
parse-server    |     at Socket.emit (events.js:211:7)
parse-server    |     at addChunk (_stream_readable.js:263:12)
parse-server    |     at readableAddChunk (_stream_readable.js:250:11)
parse-server    |     at Socket.Readable.push (_stream_readable.js:208:10)

Here is my repo with the updated dockerfile to build parse-server using multi-stage:
https://github.com/barakbd/parse-server

flovilmart (Contributor) commented:

In your env file it seems that the MongoDB URL is malformed, unless 'mongo' is a valid host.

barakbd (Author) commented Sep 13, 2018

In the docker-compose file, "mongo" is the name of the service, so it resolves as a hostname. Actually, there is no need to even specify a port, so it could even be:
mongodb://admin:admin2@mongo

The only issue with using the mongo container is figuring out how to create a db on docker run.
There is an env var called MONGO_INITDB_DATABASE (https://hub.docker.com/_/mongo/); however, if I understand correctly, I also need a script to actually create the database.
Do I HAVE to specify a database when connecting from parse-server to mongodb?

I am able to connect to the db using a mongo client from the host.
I assume there should be no problem using mongo 3.6?

flovilmart (Contributor) commented:

Yes, you need a specific database to connect to, as all the examples in the tests, CI, and documentation show.
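
For example, a URI with an explicit database would look like this (a sketch; 'parse' is a hypothetical database name, and authSource=admin is needed because the mongo image creates the root user in the admin database):

mongodb://admin:admin2@mongo:27017/parse?authSource=admin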

acinader (Contributor) commented:

@barakbd I have no experience with docker-compose, so I took a look, but I'm not sure what I am looking at. Just FYI.

As @flovilmart points out, your error is indicative of not being able to connect to the database.
