I have two Docker images: one containing `pandoc` (a utility that converts documents between many formats), and another containing `pdflatex` (from `texlive`, to convert `.tex` files into PDF). My goal here is to convert documents from `md` to `pdf`.
I can run each image separately:
# call pandoc inside my-pandoc-image (md -> tex)
docker run --rm \
    -v $(pwd):/pandoc \
    my-pandoc-image \
    pandoc -s test.md -o test.tex

# call pdflatex inside my-texlive-image (tex -> pdf)
docker run --rm \
    -v $(pwd):/texlive \
    my-texlive-image \
    pdflatex test.tex  # generates test.pdf
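The two steps above can be chained in one small script, e.g. saved as `convert.sh` (a sketch only, assuming the image names and mount points shown above; `convert.sh` is a name I made up):

```shell
# convert.sh - chain the two containers: md -> tex -> pdf
cat > convert.sh <<'EOF'
#!/bin/sh
set -e
doc="${1%.md}"               # usage: ./convert.sh test.md
docker run --rm \
    -v "$(pwd)":/pandoc \
    my-pandoc-image \
    pandoc -s "$doc.md" -o "$doc.tex"
docker run --rm \
    -v "$(pwd)":/texlive \
    my-texlive-image \
    pdflatex "$doc.tex"      # produces $doc.pdf
EOF
chmod +x convert.sh
```

This works, but it is exactly the two-step dance I would like to avoid by having `pandoc` drive `pdflatex` itself.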
But, in fact, what I want is to call `pandoc` (from its container) directly to convert `md` into `pdf`, like this:
docker run --rm \
    -v $(pwd):/pandoc \
    my-pandoc-image \
    pandoc -s test.md --latex-engine pdflatex -o test.pdf
This command does not work, because `pandoc` inside the container tries to call `pdflatex` (which must be in `$PATH`) to generate the PDF, but `pdflatex` does not exist there: it is not installed in `my-pandoc-image`. In my case, `pdflatex` is installed in the image `my-texlive-image`.
So, from this example, my question is: can a container A call an executable located in another container B?
I am pretty sure this is possible, because if I install `pandoc` on my host (without `pdflatex`), I can run `pandoc -s test.md --latex-engine=pdflatex -o test.pdf` by simply aliasing the `pdflatex` command with:
pdflatex() {
    docker run --rm \
        -v $(pwd):/texlive \
        my-texlive-image \
        pdflatex "$@"
}
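A variant of the same idea that is also visible to programs which spawn `pdflatex` as a child process (rather than through an interactive shell, where a shell function is defined) is a small wrapper script on `$PATH`. A sketch, assuming `~/bin` is on `$PATH` and `my-texlive-image` exists:

```shell
# Install a pdflatex wrapper script that forwards all arguments
# into the texlive container.
mkdir -p "$HOME/bin"
cat > "$HOME/bin/pdflatex" <<'EOF'
#!/bin/sh
exec docker run --rm \
    -v "$(pwd)":/texlive \
    my-texlive-image \
    pdflatex "$@"
EOF
chmod +x "$HOME/bin/pdflatex"
```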
Thus, when `pdflatex` is called by `pandoc`, a container starts and does the conversion. But when using the two containers, how could I alias the `pdflatex` command to simulate its existence in the container having only `pandoc`?
I took a look at `docker-compose`, since I have already used it to make two containers communicate (an app talking to a database). I even thought about `ssh`-ing from container A to container B to call the `pdflatex` command, but this is definitely not the right solution.
Finally, I also built an image containing both `pandoc` and `pdflatex` (it worked, because the two executables were in the same image), but I really want to keep the two images separate, since they could be used independently by other images.
Edit: A similar question is asked here; as I understand it, the provided answer requires Docker to be installed in container A, and the Docker socket (`/var/run/docker.sock`) to be bind-mounted from the host into container A. I don't think this is best practice; it seems like a hack that can create security issues.