How to use as a base image? #16

Closed
suan opened this issue Aug 24, 2014 · 13 comments

@suan

suan commented Aug 24, 2014

I'm trying to use this as a base image and set up different databases and users in my Dockerfile, but I'm having trouble accessing the files mounted as a VOLUME. Here's what I have:

Dockerfile

FROM postgres
VOLUME /var/lib/postgresql/data
ADD setup_db.sh /code/
RUN /code/setup_db.sh

setup_db.sh

#!/bin/bash

gosu postgres postgres
createdb mydb
# createuser myrole
# postgres stop

But when I try to build my image I get:

$ docker build --tag="pgdock" .
Sending build context to Docker daemon 17.92 kB
Sending build context to Docker daemon
Step 0 : FROM postgres
 ---> 2f007d4a5aa5
Step 1 : VOLUME /var/lib/postgresql/data
 ---> Using cache
 ---> ccfab417f4c2
Step 2 : RUN ls -l /var/lib/postgresql/data/
 ---> Using cache
 ---> c1194d442777
Step 3 : ADD setup_db.sh /code/
 ---> Using cache
 ---> d50b814fb7f9
Step 4 : RUN /code/setup_db.sh
 ---> Running in 233e6b4fe77b
postgres cannot access the server configuration file "/var/lib/postgresql/data/postgresql.conf": No such file or directory
createdb: could not connect to database template1: could not connect to server: No such file or directory
    Is the server running locally and accepting
    connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
2014/08/24 15:39:35 The command [/bin/sh -c /code/setup_db.sh] returned a non-zero code: 1

Any advice?

@yosifkit
Member

I think the problem is that the database does not exist until the entrypoint script runs. We do this because a user may bind mount their own volume for the data, and that volume is not accessible until runtime.

A possible solution is to copy the current entrypoint into your build directory, extend it with the options you need (createdb, createuser), and ADD ./docker-entrypoint.sh / to replace the regular entrypoint. You should probably set ENTRYPOINT as well, in case it changes in later releases of the image.
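
A rough sketch of that approach, assuming the extended entrypoint is saved next to the Dockerfile as docker-entrypoint.sh (the file names and the createdb/createuser additions are illustrative, not the image's actual contents):

Dockerfile

FROM postgres
# docker-entrypoint.sh is a copy of the image's entrypoint, extended with the
# extra createdb/createuser steps after the database cluster is initialized
ADD ./docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["postgres"]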

@md5
Contributor

md5 commented Oct 7, 2014

Take a look at how I've extended the docker-entrypoint.sh initialization here: https://github.com/md5/docker-postgis/blob/master/docker-entrypoint.sh#L10-17

@tianon
Member

tianon commented Oct 9, 2014

Yeah, this should be solved by #23 which just merged a little bit ago. 😄

@tianon
Member

tianon commented Oct 9, 2014

See also docker-library/docs#71 for the documentation of it! ❤️

@md5
Contributor

md5 commented Oct 9, 2014

@suan check out how I've used this new functionality in a derived image for PostGIS:
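
For context, the functionality from #23 is a /docker-entrypoint-initdb.d directory: shell scripts placed there are run by the entrypoint during first-run initialization, right after initdb sets up the data directory. A generic sketch of a derived image using it (this is not md5's actual PostGIS file; the script name is made up):

Dockerfile

FROM postgres
# any *.sh placed here is picked up by the entrypoint the first time the container starts
ADD setup-mydb.sh /docker-entrypoint-initdb.d/

What the script itself may assume (for example, whether a server is already listening) depends on the entrypoint version in use.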

@watsonkp

I ran into some unfortunate confusion with this. The documentation change from docker-library/docs#71 appears at https://registry.hub.docker.com/_/postgres/, but the images pulled from there do not yet have #23. It works fine when the image is built from this repo, but there was some head scratching before I noticed the discrepancy.

@yosifkit
Member

Sorry, @watsonkp. We will get the new images up in just a bit.

@watsonkp

Looks good, thanks.

@mhubig

mhubig commented Nov 25, 2014

Hi, I'm looking for an example of how to import a pg_dump file on the first run by putting a script into docker-entrypoint-initdb.d. I tried something like:

gosu postgres postgres --single stash -j < /tmp/stash.dump

but it didn't work ...

@mhubig

mhubig commented Nov 25, 2014

OK, I found a solution ... but it's kinda awkward :-(

if [ -r '/tmp/db.dump' ]; then
    echo "**IMPORTING DATABASE BACKUP**"
    # start the server in the background so psql can reach it over the local socket
    gosu postgres postgres &
    PID=$!
    sleep 2
    gosu postgres psql db < /tmp/db.dump
    # stop the background server again
    kill $PID
    sleep 2
    echo "**DATABASE BACKUP IMPORTED**"
fi
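
A somewhat less fragile variant of the same idea would be to let pg_ctl wait for the server instead of sleeping for a fixed two seconds. This is only a sketch; it assumes pg_ctl is on the PATH and PGDATA is set, as in the official image, and it reuses the dump path and database name from the snippet above:

if [ -r '/tmp/db.dump' ]; then
    echo "**IMPORTING DATABASE BACKUP**"
    # -w blocks until the server actually accepts connections
    gosu postgres pg_ctl -D "$PGDATA" -w start
    gosu postgres psql db < /tmp/db.dump
    # -m fast disconnects remaining clients and shuts down cleanly
    gosu postgres pg_ctl -D "$PGDATA" -m fast -w stop
    echo "**DATABASE BACKUP IMPORTED**"
fi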

@yosifkit
Member

The less awkward solution is to docker run postgres and then run a second container, linked to the postgres container, that has the db dump file and the postgres client (psql). This extra container could just be another instance of the postgres image: docker run -it --link some-postgres:postgres --rm postgres sh -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres' (or something similar).
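
Concretely, importing a dump that way could look something like this (container names and paths are illustrative, and it assumes a plain-format SQL dump that psql can replay; a custom-format dump would need pg_restore instead):

docker run --name some-postgres -d postgres
docker run -it --link some-postgres:postgres --rm -v "$PWD":/backup postgres \
    sh -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres < /backup/db.dump'

If the dump targets a specific database, that database may need to be created first.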

@mhubig

mhubig commented Nov 25, 2014

Yes, I used to do it this way, but now I have this image as part of a fig orchestration. The problem is that the other containers I start are unusable without the db content, so I have to start the orchestration, replay the database, and then restart everything ... look at https://github.com/mhubig/atlassian for reference.

@yosifkit
Member

The original issue here has been fixed. (There could be documentation improvements for something like importing a pg_dump file, but that would be a separate issue.)
