Manually
In order to run tests with Emcee you have to deploy container instances of the queue, workers (at least one) and Artifactory. Below are guides on how to achieve this manually or with the help of Docker Compose. For production deployment refer to this doc.
Run containers manually#
In order to launch Emcee without Kubernetes you have to create a Docker network and run all necessary containers manually.
Create a network for Docker containers with an arbitrary name, for instance emcee-network.
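For example, using the standard Docker command with that example name:
docker network create emcee-network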
Pull and run the Emcee queue container, specifying the network it should operate in. Additionally, the --publish parameter should be provided to map the host port to the container port, thus opening external access to the queue server:
docker run --detach --rm --name emcee-queue-service --network emcee-network --publish 41000:41000 avitotech/emcee-queue:latest
Pull and run the Emcee worker container. You may run as many such containers as you want, providing a distinct name for each one:
docker run --detach --rm --name queue-worker1 --network emcee-network --device /dev/kvm avitotech/emcee-worker:latest
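For instance, a second worker can be started with the same command, differing only in its container name:
docker run --detach --rm --name queue-worker2 --network emcee-network --device /dev/kvm avitotech/emcee-worker:latest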
Pull and run the Artifactory container. Setting up Artifactory will be covered later.
docker run --detach --network emcee-network --publish 8081:8081 --publish 8082:8082 --name emcee-artifactory-service --volume emcee_artifactory:/var/opt/jfrog/artifactory docker.bintray.io/jfrog/artifactory-oss:7.63.11
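Artifactory may take a while to start. One optional way to verify it is up is to call its system ping endpoint (the exact port and path may differ depending on your Artifactory setup):
curl http://localhost:8081/artifactory/api/system/ping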
To check that the Emcee queue server is operating correctly, connect to the container and check its logs:
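One possible way to do this is via docker logs, assuming the container name used above:
docker logs --follow emcee-queue-service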
Run containers with Docker Compose#
The manual deployment steps described above may be simplified with Docker Compose.
First, create a configuration file docker-compose.yml:
version: '3'
services:
  emcee-queue-service:
    image: avitotech/emcee-queue:latest
    container_name: emcee-queue-service
    ports:
      - 41000:41000
  queue-worker:
    image: avitotech/emcee-worker:latest
    env_file:
      - emcee-worker.env
    depends_on:
      - emcee-queue-service
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: "2.2" # Resource limits can be tuned per worker depending on which features you use
          memory: "6g" # For example, screen recording is CPU-intensive, and a full HD screen with a high API level requires more CPU/memory
    devices:
      - "/dev/kvm:/dev/kvm"
  emcee-artifactory:
    image: docker.bintray.io/jfrog/artifactory-oss:7.63.11
    container_name: emcee-artifactory
    ports:
      - 8081:8081
      - 8082:8082
    volumes:
      - emcee_artifactory:/var/opt/jfrog/artifactory
volumes:
  emcee_artifactory:
Create an environment file emcee-worker.env next to docker-compose.yml, in which you may configure environment variables for the worker, or leave it empty. Learn more in worker configuration.
A possible emcee-worker.env:
EMCEE_WORKER_QUEUE_URL=http://emcee-queue-service:41000
EMCEE_WORKER_LOG_LEVEL=info
# Other worker parameters can be configured here, see the doc for more info: https://docs.emcee.cloud/on-premise/deployment/worker_configuration/
# Note: if your Artifactory requires authorization, set the Artifactory username and password here as well
Then run docker compose up from the directory containing docker-compose.yml. It will start the queue server, 3 workers and Artifactory.
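For example, to start everything in the background and then check container status (standard Docker Compose commands):
docker compose up --detach
docker compose ps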
Obtaining network address of queue and Artifactory#
If you deploy Emcee manually or with Docker Compose, the local network addresses will look like:
http://localhost:41000/ for the queue
http://localhost:8081/ for Artifactory
where the ports match those specified above when running the containers: 41000 for the queue and 8081, 8082 for Artifactory.
If Emcee is deployed on a remote machine, access to it should be aligned with your network configuration.