These rules used to be docker_build, docker_push, etc., and aliases for
these (mostly) legacy names still exist, largely for backwards compatibility. We
also have early-stage oci_image, oci_push, etc. aliases for folks who
prefer a consistent rule prefix. The only place the
format-specific names currently do more than alias things is foo_push,
where they also specify the format in which to publish the image.
Overview
This repository contains a set of rules for pulling down base images, augmenting
them with build artifacts and assets, and publishing those images.
These rules do not require / use Docker for pulling, building, or pushing
images. This means:
They can be used to develop Docker containers on OSX without
boot2docker or docker-machine installed. Note use of these rules on Windows
is currently not supported.
They do not require root access on your workstation.
Also, unlike traditional container builds (e.g. Dockerfile), the Docker images
produced by container_image are deterministic / reproducible.
To get started with building Docker images, check out the
examples
that build the same images using both rules_docker and a Dockerfile.
Note that cc_image, go_image, rust_image, and d_image
also allow you to specify an external binary target.
Docker Rules
This repo now includes rules that provide additional functionality
to install packages and run commands inside docker containers. These
rules, however, require a docker binary is present and properly
configured. These rules include:
Docker run rules: rules to run commands inside docker
containers.
Overview
In addition to low-level rules for building containers, this repository
provides a set of higher-level rules for containerizing applications. The idea
behind these rules is to make containerizing an application built via a
lang_binary rule as simple as changing it to lang_image.
By default these higher level rules make use of the distroless language runtimes, but these
can be overridden via the base="..." attribute (e.g. with a container_pull
or container_image target).
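As a concrete sketch of this rename, consider a Python binary (the file name `app.py` here is a hypothetical placeholder; attributes are otherwise unchanged between the two rules):

```python
load("@io_bazel_rules_docker//python:image.bzl", "py_image")

# Before: an ordinary binary target.
# py_binary(
#     name = "app",
#     srcs = ["app.py"],
# )

# After: the same attributes, but the target now produces a container image.
py_image(
    name = "app",
    srcs = ["app.py"],
)
```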
Note also that these rules do not expose any docker-related attributes. If you
need to add a custom env var or symlink to a lang_image, use a
container_image target for this purpose. Specifically, use as the base for your
lang_image target a container_image target that adds, e.g., the custom env var or symlink.
Please see go_image (custom base) for an example.
Setup
Add the following to your WORKSPACE file to add the external repositories:
```python
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    # Get copy paste instructions for the http_archive attributes from the
    # release notes at https://github.com/bazelbuild/rules_docker/releases
)

# OPTIONAL: Call this to override the default docker toolchain configuration.
# This call should be placed BEFORE the call to "container_repositories" below
# to actually override the default toolchain configuration.
# Note this is only required if you actually want to call
# docker_toolchain_configure with a custom attr; please read the toolchains
# docs in /toolchains/docker/ before blindly adding this to your WORKSPACE.
# BEGIN OPTIONAL segment:
load(
    "@io_bazel_rules_docker//toolchains/docker:toolchain.bzl",
    docker_toolchain_configure = "toolchain_configure",
)

docker_toolchain_configure(
    name = "docker_config",
    # OPTIONAL: Bazel target for the build_tar tool, must be compatible with build_tar.py
    build_tar_target = "<enter absolute path (i.e., must start with repo name @...//:...) to an executable build_tar target>",
    # OPTIONAL: Path to a directory which has a custom docker client config.json.
    # See https://docs.docker.com/engine/reference/commandline/cli/#configuration-files
    # for more details.
    client_config = "<enter Bazel label to your docker config.json here>",
    # OPTIONAL: Path to the docker binary.
    # Should be set explicitly for remote execution.
    docker_path = "<enter absolute path to the docker binary (in the remote exec env) here>",
    # OPTIONAL: Path to the gzip binary.
    gzip_path = "<enter absolute path to the gzip binary (in the remote exec env) here>",
    # OPTIONAL: Bazel target for the gzip tool.
    gzip_target = "<enter absolute path (i.e., must start with repo name @...//:...) to an executable gzip target>",
    # OPTIONAL: Path to the xz binary.
    # Should be set explicitly for remote execution.
    xz_path = "<enter absolute path to the xz binary (in the remote exec env) here>",
    # OPTIONAL: Bazel target for the xz tool.
    # Either xz_path or xz_target should be set explicitly for remote execution.
    xz_target = "<enter absolute path (i.e., must start with repo name @...//:...) to an executable xz target>",
    # OPTIONAL: List of additional flags to pass to the docker command.
    docker_flags = [
        "--tls",
        "--log-level=info",
    ],
)
# End of OPTIONAL segment.

load(
    "@io_bazel_rules_docker//repositories:repositories.bzl",
    container_repositories = "repositories",
)

container_repositories()

load("@io_bazel_rules_docker//repositories:deps.bzl", container_deps = "deps")

container_deps()

load(
    "@io_bazel_rules_docker//container:container.bzl",
    "container_pull",
)

container_pull(
    name = "java_base",
    registry = "gcr.io",
    repository = "distroless/java",
    # 'tag' is also supported, but digest is encouraged for reproducibility.
    digest = "sha256:deadbeef",
)
```
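A repository pulled with container_pull can then serve as the base of a container_image target. A minimal sketch (the `app_deploy.jar` file and target names here are hypothetical):

```python
load("@io_bazel_rules_docker//container:container.bzl", "container_image")

container_image(
    name = "app_image",
    # "//image" is the image target exposed by a container_pull repository.
    base = "@java_base//image",
    # Hypothetical deploy jar built elsewhere in this workspace.
    files = [":app_deploy.jar"],
)
```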
Known Issues
Bazel does not deal well with diamond dependencies.
If the repositories imported by container_repositories() have already been
imported (at a different version) by other rules called earlier in your
WORKSPACE, above the call to container_repositories(), arbitrary errors might
occur. If you get errors related to external repositories, you will likely
not be able to use container_repositories() and will instead have to import
all the required dependencies directly in your WORKSPACE (see the most
up-to-date implementation of container_repositories() for details).
ImportError: No module named moves.urllib.parse
This is an example of an error due to a diamond dependency. If you get this
error, make sure to import rules_docker before other libraries, so that
six can be patched properly.
Ensure your project has a BUILD or BUILD.bazel file at the top level. This
can be a blank file if necessary. Otherwise you might see an error that looks
like:
Unable to load package for //:WORKSPACE: BUILD file not found in any of the following directories.
rules_docker uses transitions to build your containers using toolchains for the correct
architecture and operating system. If you run into issues with toolchain resolution,
this behaviour can be disabled with a flag in your .bazelrc.
Suppose you have a container_image target //my/image:helloworld:
```python
container_image(
    name = "helloworld",
    ...
)
```
You can load this into your local Docker client by running:
bazel run my/image:helloworld.
For the lang_image targets, this will also run the
container using docker run to maximize compatibility with lang_binary rules.
Arguments to this command are forwarded to docker, meaning the command
bazel run my/image:helloworld -- -p 8080:80 -- arg0
performs the following steps:
load the my/image:helloworld target into your local Docker client
start a container using this image where arg0 is passed to the image entrypoint
port forward 8080 on the host to port 80 on the container, as per docker run documentation
You can suppress this behavior by passing the single flag: bazel run :foo -- --norun
Alternatively, you can build a docker load compatible bundle with:
bazel build my/image:helloworld.tar. This will produce the file:
bazel-bin/my/image/helloworld.tar, which you can load into
your local Docker client by running:
docker load -i bazel-bin/my/image/helloworld.tar. Building
this target can be expensive for large images.
These work with container_image, container_bundle, and the
lang_image rules. For everything except
container_bundle, the image name will be bazel/my/image:helloworld.
For container_bundle, it will apply the tags you have specified.
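For reference, a container_bundle target maps the tags it should apply to image targets via the images attribute; a minimal sketch (registry path and target names are hypothetical):

```python
load("@io_bazel_rules_docker//container:container.bzl", "container_bundle")

container_bundle(
    name = "bundle",
    images = {
        # The tag on the left is applied to the image target on the right.
        "gcr.io/my-project/helloworld:staging": "//my/image:helloworld",
    },
)
```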
Authentication
You can use these rules to access private images using standard Docker
authentication methods, e.g. to utilize the Google Container Registry. See
here for authentication methods.
Once you've set up your docker client configuration, see here
for an example of how to use container_pull with custom docker authentication credentials,
and here for an example of how
to use container_push with custom docker authentication credentials.
Varying image names
A common request from folks using
container_push, container_bundle, or container_image is to
be able to vary the tag that is pushed or embedded. There are two options
at present for doing this.
Stamping
The first option is to use stamping.
Stamping is enabled when bazel is run with --stamp.
This enables replacements in stamp-aware attributes.
A python format placeholder (e.g. {BUILD_USER})
is replaced by the value of the corresponding workspace-status variable.
```python
# A common pattern when users want to avoid trampling
# on each other's images during development.
container_push(
    name = "publish",
    format = "Docker",
    # Any of these components may have variables.
    registry = "gcr.io",
    repository = "my-project/my-image",
    # This will be replaced with the current user when built with --stamp.
    tag = "{BUILD_USER}",
)
```
Rules that are sensitive to stamping can also be forced to stamp or non-stamp mode
irrespective of the --stamp flag to Bazel. Use the build_context_data rule
to make a target that provides StampSettingInfo, and pass this to the
build_context_data attribute.
The next natural question is: "Well what variables can I use?" This
option consumes the workspace-status variables Bazel defines in
bazel-out/stable-status.txt and bazel-out/volatile-status.txt.
Note that changes to the stable-status file
cause a rebuild of the action, while volatile-status does not.
You can add more stamp variables via --workspace_status_command,
see the bazel docs.
A common example is to provide the current git SHA, with
--workspace_status_command="echo STABLE_GIT_SHA $(git rev-parse HEAD)"
That flag is typically passed in the .bazelrc file, see for example .bazelrc in kubernetes.
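Putting the pieces together, a .bazelrc sketch that stamps builds and records the current git SHA (the script contents are an assumption; any command printing `KEY value` lines works):

```
# .bazelrc: enable stamping and record the current git SHA as a
# STABLE_ workspace-status variable (STABLE_ keys go to stable-status.txt).
build --stamp
build --workspace_status_command="echo STABLE_GIT_SHA $(git rev-parse HEAD)"
```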
Make variables
The second option is to employ Makefile-style variables.
By default the lang_image rules use the distroless base runtime images,
which are optimized to be the minimal set of things your application needs
at runtime. That can make debugging these containers difficult because they
lack even a basic shell for exploring the filesystem.
To address this, we publish variants of the distroless runtime images tagged
:debug, which are the exact same images, but with additions such as busybox
to make debugging easier.
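One way to use a :debug variant is to pull it with container_pull and pass it via the base attribute of a lang_image rule; a sketch for WORKSPACE (the repository name `java_debug_base` is a hypothetical choice):

```python
container_pull(
    name = "java_debug_base",
    registry = "gcr.io",
    repository = "distroless/java",
    # The :debug variant includes busybox for interactive debugging.
    tag = "debug",
)
```

A lang_image target can then set `base = "@java_debug_base//image"` to get a shell inside the running container.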
Hint: if you want to put files in specific directories inside the image,
use the pkg_tar rule
to create the desired directory structure and pass that to container_image via
the tars attribute. Note you might need to set strip_prefix = "." or strip_prefix = "{some directory}"
in your rule so that the files are not flattened.
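A sketch of that pattern, assuming rules_pkg is already set up in your WORKSPACE (the file `app.conf` and target names are hypothetical):

```python
load("@rules_pkg//:pkg.bzl", "pkg_tar")

pkg_tar(
    name = "app_config",
    srcs = ["app.conf"],         # hypothetical config file
    package_dir = "/etc/myapp",  # directory the files land in inside the image
    strip_prefix = ".",          # preserve paths instead of flattening
)

container_image(
    name = "image_with_config",
    base = "@java_base//image",
    tars = [":app_config"],
)
```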
See Bazel upstream issue 2176 and
rules_docker issue 317
for more details.
If you need to customize the container produced by
cc_image (e.g., env, symlink), see the note above in
Language Rules Overview about how to do this,
and see the go_image (custom base) example below.
If you need to customize the container produced by
py_image (e.g., env, symlink), see the note above in
Language Rules Overview about how to do this,
and see the go_image (custom base) example below.
If you are using py_image with a custom base that has python tools installed
in a different location from the default base, please see
Python tools.
py_image (fine layering)
For Python and Java's lang_image rules, you can factor
dependencies that don't change into their own layers by overriding the
layers=[] attribute. Consider this sample from the rules_k8s repository:
```python
py_image(
    name = "server",
    srcs = ["server.py"],
    # "layers" is just like "deps", but it also moves the dependencies each into
    # their own layer, which can dramatically improve developer cycle time. For
    # example here, the grpcio layer is ~40MB, but the rest of the app is only
    # ~400KB. By partitioning things this way, the large grpcio layer remains
    # unchanging and we can reduce the amount of image data we repush by ~99%!
    layers = [
        requirement("grpcio"),
        "//examples/hellogrpc/proto:py",
    ],
    main = "server.py",
)
```
You can also implement more complex fine layering strategies by using the
py_layer rule and its filter attribute. For example:
```python
# Suppose that we are synthesizing an image that depends on a complex set
# of libraries that we want to break into layers.
LIBS = [
    "//pkg/complex_library",
    # ...
]

# First, we extract all transitive dependencies of LIBS that are under //pkg/common.
py_layer(
    name = "common_deps",
    deps = LIBS,
    filter = "//pkg/common",
)

# Then, we further extract all external dependencies of the deps under //pkg/common.
py_layer(
    name = "common_external_deps",
    deps = [":common_deps"],
    filter = "@",
)

# We also extract all external dependencies of LIBS, which is a superset of
# ":common_external_deps".
py_layer(
    name = "external_deps",
    deps = LIBS,
    filter = "@",
)

# Finally, we create the image, stacking the above filtered layers on top of one
# another in the "layers" attribute. The layers are applied in order, and any
# dependencies already added to the image will not be added again. Therefore,
# ":external_deps" will only add the external dependencies not present in
# ":common_external_deps".
py_image(
    name = "image",
    deps = LIBS,
    layers = [
        ":common_external_deps",
        ":common_deps",
        ":external_deps",
    ],
    # ...
)
```
py3_image
To use a Python 3 runtime instead of the default Python 2 runtime, use py3_image
instead of py_image. The other semantics are identical.
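A minimal sketch of the swap (the file name `server.py` is a hypothetical placeholder):

```python
load("@io_bazel_rules_docker//python3:image.bzl", "py3_image")

py3_image(
    name = "server",
    srcs = ["server.py"],
    main = "server.py",
)
```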
If you need to customize the container produced by
py3_image (e.g., env, symlink), see the note above in
Language Rules Overview about how to do this,
and see the go_image (custom base) example below.
If you are using py3_image with a custom base that has python tools installed
in a different location from the default base, please see
Python tools.
nodejs_image
Note that unlike the other image rules, nodejs_image does not
currently use the gcr.io/distroless/nodejs image, for a handful of reasons.
This is a switch we plan to make when we can manage it. We currently
use the gcr.io/google-appengine/debian9 image as our base.
To use nodejs_image, add the following to WORKSPACE:
```python
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "build_bazel_rules_nodejs",
    # Replace with a real SHA256 checksum
    sha256 = "{SHA256}",
    # Replace with a real release version
    urls = ["https://github.com/bazelbuild/rules_nodejs/releases/download/{VERSION}/rules_nodejs-{VERSION}.tar.gz"],
)

load("@build_bazel_rules_nodejs//:index.bzl", "npm_install")

# Install your declared Node.js dependencies
npm_install(
    name = "npm",
    package_json = "//:package.json",
    yarn_lock = "//:yarn.lock",
)

load(
    "@io_bazel_rules_docker//repositories:repositories.bzl",
    container_repositories = "repositories",
)

container_repositories()

load(
    "@io_bazel_rules_docker//nodejs:image.bzl",
    _nodejs_image_repos = "repositories",
)

_nodejs_image_repos()
```
Note: See note about diamond dependencies in setup
if you run into issues related to external repos after adding these
lines to your WORKSPACE.
Then in your BUILD file, simply rewrite nodejs_binary to nodejs_image with
the following import:
```python
load("@io_bazel_rules_docker//nodejs:image.bzl", "nodejs_image")

nodejs_image(
    name = "nodejs_image",
    entry_point = "@your_workspace//path/to:file.js",
    # npm deps will be put into their own layer
    data = [":file.js", "@npm//some-npm-dep"],
    ...
)
```
nodejs_image also supports the launcher and launcher_args attributes which are passed to container_image and used to prefix the image's entry_point.
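A sketch of those attributes in use (the wrapper target `:launcher_bin` and its flag are hypothetical; the launcher and its args are prepended to the image's entry_point):

```python
nodejs_image(
    name = "app_image",
    entry_point = "@your_workspace//path/to:file.js",
    # Hypothetical wrapper binary that runs before the entry point.
    launcher = ":launcher_bin",
    launcher_args = ["--verbose"],
)
```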
If you need to customize the container produced by
nodejs_image (e.g., env, symlink), see the note above in
Language Rules Overview about how to do this,
and see the go_image (custom base) example below.