I deployed this state and everything in it is installed. I do not want to remove all of it, only php5. How can I do this with Salt while leaving the other tools intact?
webserver_stuff:
  pkg:
    - installed
    - pkgs:
      - apache2
      - php5
      - php5-mysql
You probably need pkg.purged.
This makes sure the package is not installed anymore. Remove php5 from the installed list (otherwise the two states would contradict each other) and add an extra state that purges it:
webserver_stuff:
  pkg.installed:
    - pkgs:
      - apache2
      - php5-mysql

php5:
  pkg.purged: []
I have a multi-container Django app. One container is the database, another is the main webapp with Django installed for handling the front- and backend. I want to add a third container which provides the main functionality/tool we want to offer via the webapp. It has some complex dependencies, which is why I would like to have it as a separate container as well. Its functionality is wrapped as a CLI tool, and currently we build the image and run it as needed, passing the arguments for the CLI tool.
Currently, this is the docker-compose.yml file:
version: '3'

services:
  db:
    image: mysql:8.0.30
    environment:
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - TZ=${TZ}
    volumes:
      - db:/var/lib/mysql
      - db-logs:/var/log/mysql
    networks:
      - net
    restart: unless-stopped
    command: --default-authentication-plugin=mysql_native_password

  app:
    build:
      context: .
      dockerfile: ./Dockerfile.webapp
    environment:
      - MYSQL_NAME=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
    ports:
      - "8000:8000"
    networks:
      - net
    volumes:
      - ./app/webapp:/app
      - data:/data
    depends_on:
      - db
    restart: unless-stopped
    command: >
      sh -c "python manage.py runserver 0.0.0.0:8000"

  tool:
    build:
      context: .
      dockerfile: ./Dockerfile.tool
    volumes:
      - data:/data

networks:
  net:
    driver: bridge

volumes:
  db:
  db-logs:
  data:
In the end, the user should be able to set the parameters via the web UI and run the tool container. Multiple processes should be managed by a job scheduler. I had hoped that running one container from within a multi-container app would be straightforward, but as far as I know it is only possible by mounting the Docker socket, which should be avoided for security reasons.
So my question is: what are the possibilities for achieving my desired goal?
Things I considered:
Multi-stage build: its main purpose is to reduce image size, but is there a hack to use the CLI tool along with its build environment in the final image of a multi-stage build?
API: build an API for the tool so the other containers can communicate with it via the Docker network. This seems cumbersome.
The service "app" (the main Django app) is built on top of the official python image, which I would like to keep that way. Nevertheless, there is the possibility of building one large image based on Ubuntu which includes the tool along with its dependencies and the main Django app. This would probably increase the image size considerably and might lead to dependency issues.
Has anybody run into similar issues? Which direction would you point me to? I'm also looking for some buzzwords that speed up my research.
You should build both parts into a single unified image, and then you can use the Python subprocess module as normal to invoke the tool.
The standard Docker Hub python image is already built on Debian, which is very closely related to Ubuntu. So you should be able to do something like
FROM python:3.10
# Install OS-level dependencies for both the main application and
# the support tool
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install --no-install-recommends --assume-yes \
another-dependency \
some-dependency \
third-dependency
# Install the support tool
ADD http://repository.example.com/the-tool/the-tool /usr/local/bin/the-tool
RUN chmod +x /usr/local/bin/the-tool
# Copy and install Python-level dependencies
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt
# Copy in the main application
COPY ./ ./
# Metadata on how to run the application
EXPOSE 8000
# USER someuser
CMD ["./the-app.py"]
You've already noted the key challenges in having the tool in a separate container. You can't normally "run commands" in a container; a container is a wrapper around some single process, and it requires unrestricted root-level access to the host to be able to manipulate the container in any way (including using the docker exec debugging tool). You'd also need unrestricted root-level access to the host to be able to launch a temporary container per request.
Putting some sort of API or job queue around the tool would be the "most Dockery" way to do it, but that can also be significant development effort. In this setup as you've described it, the support tool is mostly an implementation detail of the main process, so you're not really breaking the "one container does one thing" rule by making it available for a normal Unix subprocess invocation inside the same container.
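For illustration, here is a minimal sketch of what that subprocess call could look like from the Django side. It is an assumption-laden example, not the poster's code: the tool path and the --input/--output flags are placeholders carried over from the Dockerfile sketch above.
# Sketch only: invoke the bundled CLI tool from the Django app with the
# standard-library subprocess module. The tool path and the --input/--output
# flags are hypothetical placeholders, not a real interface.
import subprocess

def run_tool(input_path, output_dir):
    """Run the CLI tool synchronously and return its standard output."""
    result = subprocess.run(
        ["/usr/local/bin/the-tool", "--input", input_path, "--output", output_dir],
        capture_output=True,  # collect stdout/stderr instead of inheriting them
        text=True,            # decode the output as text rather than bytes
        check=True,           # raise CalledProcessError on a non-zero exit code
    )
    return result.stdout
If the runs turn out to be long, the same call can later be moved behind whatever job queue you choose, as mentioned above.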
Very similar to this question, I cannot connect to my local docker-compose container from my browser (Firefox) on Windows 10 and have been troubleshooting for some time, but I cannot seem to find the issue.
Here is my docker-compose.yml:
version: "3"
services:
frontend:
container_name: frontend
build: ./frontend
ports:
- "3000:3000"
working_dir: /home/node/app/
environment:
DEVELOPMENT: "yes"
stdin_open: true
volumes:
- ./frontend:/home/node/app/
command: bash -c "npm start & npm run build"
my_app_django:
container_name: my_app_django
build: ./backend/
environment:
SECRET_KEY: "... not included ..."
command: ["./rundjango.sh"]
volumes:
- ./backend:/code
- media_volume:/code/media
- static_volume:/code/static
expose:
- "443"
my_app_nginx:
container_name: my_app_nginx
image: nginx:1.17.2-alpine
volumes:
- ./nginx/nginx.dev.conf:/etc/nginx/conf.d/default.conf
- static_volume:/home/app/web/staticfiles
- media_volume:/home/app/web/mediafiles
- ./frontend:/home/app/frontend/
ports:
- "80:80"
depends_on:
- my_app_django
volumes:
static_volume:
media_volume:
I can start the containers with docker-compose -f docker-compose.yml up -d and there are no errors when I check the logs with docker logs my_app_django or docker logs my_app_nginx. Additionally, doing docker ps shows all the containers running as they should.
The odd part about this issue is that on Linux everything runs fine and I can reach my app on localhost at port 80. The only thing I do differently on Windows is run dos2unix on my .sh files to ensure that they run properly. If I omit this step, I get many errors, which leads me to believe that it is necessary.
If anyone could give guidance/advice as to what may I be doing incorrectly or missing altogether, I would be truly grateful. I am also happy to provide more details, just let me know. Thank you!
EDIT #1: As timur suggested, I did a docker run -p 80:80 -d nginx and here was the output:
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
bf5952930446: Pull complete
ba755a256dfe: Pull complete
c57dd87d0b93: Pull complete
d7fbf29df889: Pull complete
1f1070938ccd: Pull complete
Digest: sha256:36b74457bccb56fbf8b05f79c85569501b721d4db813b684391d63e02287c0b2
Status: Downloaded newer image for nginx:latest
19b56a66955145e4f59eefff57340b4affe5f7e0d82ad013742a60b479687c40
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: driver failed programming external connectivity on endpoint naughty_hoover (8c7b2fa4aef964899c366e1897e38727bb7e4c38431875c5cb8456567005f368): Bind for 0.0.0.0:80 failed: port is already allocated.
This might be the cause of the error but I don't really understand what needs to be done at this point.
EDIT #2: As requested, here are my Dockerfiles (one for backend, one for frontend)
Backend Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN apt-get update && apt-get install -y imagemagick libxmlsec1-dev pkg-config
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code
Frontend Dockerfile:
FROM node
WORKDIR /home/node/app/
COPY . /home/node/app/
RUN npm install -g react-scripts
RUN npm install
EDIT #3: When I do docker ps, this is what I get:
CONTAINER ID   IMAGE                 COMMAND                  CREATED             STATUS             PORTS                    NAMES
0da02ad8d746   nginx:1.17.2-alpine   "nginx -g 'daemon of…"   About an hour ago   Up About an hour   0.0.0.0:80->80/tcp       my_app_nginx
070291de8362   my_app_frontend       "docker-entrypoint.s…"   About an hour ago   Up About an hour   0.0.0.0:3000->3000/tcp   frontend
2fcf551ce3fa   my_app_django         "./rundjango.sh"         12 days ago         Up About an hour   443/tcp                  my_app_django
As we established, you use Docker Toolbox, which is backed by VirtualBox rather than the default Hyper-V-based Docker for Windows. In this case you might think of it as a VirtualBox VM that actually runs Docker, so all volume mounts and port mappings apply to the docker-machine VM, not your host. The management tools (i.e. the Docker terminal and docker-compose) actually run on your host OS through MinGW.
Because of this, you don't get ports bound on localhost by default (but you can achieve this by manually editing the VM properties in VirtualBox if you so desire - I just googled the second link for some picture tutorials). Surprisingly, the official documentation on this particular topic is pretty scarce - you can get a hint by looking at their examples, though.
So in your case, the correct URL should be http://192.168.99.100
Another thing that differs between these two solutions is volume mounts. Again, the documentation sort of hints at what it should be, but I can't point you to a more explicit source. As you have probably noticed, the terminal you use for all your Docker interactions encodes paths a bit differently (I presume because of the MinGW layer), and the converted paths get sent off to docker-machine - because it runs Linux and would not handle Windows-style paths anyway.
From here I see a couple of avenues for you to explore:
Run your project from C:\Users\...\MyProject
As the documentation states, you get c:\Users mounted into /c/Users by default. So theoretically, if you run docker-compose from your user home folder, paths should automagically align - but since you are having this issue, you are probably running it from somewhere else.
Create another share
You can also create your own mount point in VirtualBox. Run pwd in your terminal and note where the project root is. Then use the VirtualBox UI to create a share that aligns with your directory tree (for example, D:\MyProject\ should become /d/MyProject).
Hopefully this will not require you to change your docker-compose.yml either.
Alternatively, switch to the Hyper-V-based Docker Desktop, and these particular issues will go away.
Bear in mind that Hyper-V will not coexist with VirtualBox, so this option might not be available to you if you need VirtualBox for something else.
I don't think that my GAE Python app uses the flexible environment, because I created it many years ago. Now I want to try to create a module that has a runtime other than Python and keep the Python app running alongside the new runtime, custom or otherwise - maybe mix PHP and Python, or something similar. I don't need it, but I want to learn and explore the possibilities. I'm also interested in learning Erlang and deploying Erlang code on App Engine. I see there are already questions about it:
erlang on google app engine?
And issue 125 in the tracker.
But how should we actually do it, if we make our own runtime - provided that is allowed?
My app.yaml looks like this:
application: montaoproject
version: newsearch
runtime: python27
api_version: 1
threadsafe: true
module: default
instance_class: F1

automatic_scaling:
  min_idle_instances: 5
  max_idle_instances: automatic
  min_pending_latency: automatic
  max_pending_latency: 30ms
  max_concurrent_requests: 50

default_expiration: "14d 5h"

env_variables:
  GAE_USE_MONTAO: 'anyvalue'
  KOOL_VERSION: '17a'

includes:
- br.yaml # Brazil
- in.yaml # India
- us.yaml # USA
- pk.yaml
- search.yaml # search pages
- admin.yaml # admin pages
- providers.yaml # auth providers
- statics.yaml # static content

handlers:
- url: /(business|ai|newindia|insert-ad.html)
  script: montao.app
- url: /blobview.*
  script: kool_update.app
  login: admin
- url: /market.*
  script: main.app
- url: /
  script: montao.app
- url: /(index.html|sign-up.html|login.html)
  script: montao.app
- url: /(login.*|login|googlogin|googlogout|create/)
  script: login.app
- url: /(customer_service.htm|contactfileupload|support.html|faq.html)
  script: customer_service.app
- url: /stats.*
  script: google.appengine.ext.appstats.ui.app
# All other URLs use main.app
- url: /.*
  script: main.app

inbound_services:
- mail

builtins:
- remote_api: on
- deferred: on
#- appstats: on

error_handlers:
- file: default_error.html

libraries:
- name: webapp2
  version: latest
- name: jinja2
  version: latest
- name: setuptools
  version: latest
- name: markupsafe
  version: latest
- name: django
  version: latest
- name: PIL
  version: latest
- name: webob
  version: latest
- name: lxml
  version: latest
- name: ssl
  version: latest
Yes, your app.yaml file is a standard env one (it doesn't have vm:true or env:flex in it).
Yes, it's possible to mix and match services/modules in different languages and with different environments inside the same app. You can even switch the language and environment of the same module in a different version of that module. That's because modules offer complete code isolation, see Comparison of service isolation and project isolation. Related post: Upload a Java and node.js project to Google AppEngine at once
I always try to structure a multi-service/module app with each service in its own subdir, as described in Can a default service/module in a Google App Engine app be a sibling of a non-default one in terms of folder structure?
So first I'd create a default subdirectory of your app dir and move all your existing default module specific files into it, with the exception of the app-level config, which I'd keep at the top level and symlink inside the default dir as described in that post. Then I'd verify that the default module still works as expected.
Then I'd create a new subdirectory for every new module I need to add and add the code for it as needed.
Side note: sharing code via symlinks as described in the post mentioned above works for standard env modules, but it probably doesn't work with flexible ones, see Sharing code between modules in a GAE project
I just made a fresh checkout of my Rails Engine project, and when (after bundle install) I invoke rails -v in its root directory, I'm getting a LoadError in which Rails appears to be looking for the very gem/engine I'm trying to build:
my-engine dmoles$ rails -v
/Users/dmoles/.rvm/rubies/ruby-2.2.1/lib/ruby/site_ruby/2.2.0/rubygems/dependency.rb:315:in `to_specs': Could not find 'my-engine' (>= 0) among 117 total gem(s) (Gem::LoadError)
Checked in 'GEM_PATH=/Users/dmoles/.rvm/gems/ruby-2.2.1:/Users/dmoles/.rvm/gems/ruby-2.2.1@global', execute `gem env` for more information
from /Users/dmoles/.rvm/rubies/ruby-2.2.1/lib/ruby/site_ruby/2.2.0/rubygems/dependency.rb:324:in `to_spec'
from /Users/dmoles/.rvm/rubies/ruby-2.2.1/lib/ruby/site_ruby/2.2.0/rubygems/core_ext/kernel_gem.rb:64:in `gem'
from /Users/dmoles/.rvm/gems/ruby-2.2.1/bin/rails:22:in `<main>'
Running gem env as suggested provides no more enlightenment:
my-engine dmoles$ gem env
RubyGems Environment:
  - RUBYGEMS VERSION: 2.4.6
  - RUBY VERSION: 2.2.1 (2015-02-26 patchlevel 85) [x86_64-darwin14]
  - INSTALLATION DIRECTORY: /Users/dmoles/.rvm/gems/ruby-2.2.1
  - RUBY EXECUTABLE: /Users/dmoles/.rvm/rubies/ruby-2.2.1/bin/ruby
  - EXECUTABLE DIRECTORY: /Users/dmoles/.rvm/gems/ruby-2.2.1/bin
  - SPEC CACHE DIRECTORY: /Users/dmoles/.gem/specs
  - SYSTEM CONFIGURATION DIRECTORY: /Users/dmoles/.rvm/rubies/ruby-2.2.1/etc
  - RUBYGEMS PLATFORMS:
    - ruby
    - x86_64-darwin-14
  - GEM PATHS:
    - /Users/dmoles/.rvm/gems/ruby-2.2.1
    - /Users/dmoles/.rvm/gems/ruby-2.2.1@global
  - GEM CONFIGURATION:
    - :update_sources => true
    - :verbose => true
    - :backtrace => false
    - :bulk_threshold => 1000
  - REMOTE SOURCES:
    - https://rubygems.org/
  - SHELL PATH:
    - /Users/dmoles/.rvm/gems/ruby-2.2.1/bin
    - /Users/dmoles/.rvm/gems/ruby-2.2.1@global/bin
    - /Users/dmoles/.rvm/rubies/ruby-2.2.1/bin
    - /Users/dmoles/.rvm/bin
    - /Users/dmoles/bin
    - /usr/local/bin
    - /usr/bin
    - /bin
    - /usr/sbin
    - /sbin
    - /usr/local/git/bin
I can build/install the gem fine with
gem build my-engine.gemspec
gem install my-engine-0.0.1.gem
after which rails -v starts working.* It doesn't seem like this should be necessary, though, and it makes me worry that Rails may be using the built/installed version of the code instead of the live source. What am I doing wrong?
* That is, it runs, even though it complains "Bundler is using a binstub that was created for a different gem". Possibly because it doesn't like the fact that the gem is named my-engine but the ENGINE_PATH has my/engine?
It doesn't make sense to call rails -v within the engine root, since an engine is, after all, a gem that needs to be loaded in a Rails application - unless you have it structured like a normal Rails app instead of an engine gem.
So basically, when you run rails -v, it will try to load your engine as a gem, not as a normal Rails application, and thus it will complain if it's not installed in your environment.
It looks like the local gem setup on this machine was somehow corrupted -- even if I created a brand-new, unrelated engine, it still complained about not being able to find my-engine. Running rm -r ~/.rvm/gems/ruby-2.2.1* followed by gem install rails fixed the problem (though at the price of having to reinstall a bunch of gems).
I am trying to connect our CircleCI setup with BrowserStack and run our integration_test and unit tests not only on PhantomJS but also on real Firefox and Internet Explorer, using the BrowserStack service.
I have tried to configure browserstack-cli. I can run the tests from CircleCI via the tunnel on BrowserStack, but the results are never reported back to the CircleCI server.
Could you please share your experience if you have already played with this stack? Thank you very much!
The solution is to use the BrowserStackLocal and browserstack-cli tools together. The 64-bit Linux version of BrowserStackLocal builds the tunnel from the CircleCI server to the BrowserStack server. After that we can use browserstack-cli to launch browsers and run the tests from testem.
Download BrowserStackLocal
and put it in a .browserstack folder in your project.
The 64-bit Linux version of BrowserStackLocal is available at http://www.browserstack.com/local-testing (Binaries).
Create a script,
which will run and create the settings for browserstack-cli. You have to set up environment variables in CircleCI, so you can keep your access details secret. Let's call this file runthis.sh and save it in the .browserstack folder. This script will run your BrowserStackLocal binary as well, so the tunnel will exist.
#!/bin/bash
echo "{\"username\":\"`echo $BS_USER`\", \"password\":\"`echo $BS_PASSWORD`\", \"privateKey\": \"`echo $BS_KEY`\", \"apiKey\":\"`echo $BS_KEY`\"}" >> ~/.browserstack/browserstack.json
./.browserstack/BrowserStackLocal $BS_KEY &
CircleCI config
The (circle.yml) file mainly depends on your project. We have to move the .browserstack folder to the home folder and install bower, browserstack-cli and testem.
An example:
machine:
  timezone:
    Pacific/Auckland
  node:
    version: v0.10.28

dependencies:
  pre:
    - mv ./.browserstack ~/
    - sh ~/.browserstack/runthis.sh
  post:
    - bower install
    - npm install browserstack-cli -g
    - npm install testem -g

test:
  override:
    - PATH=$PATH:bin grunt integration_tests_cli; testem ci
    - PATH=$PATH:bin grunt tests_cli; testem ci
Testem config:
testem.yml - most of it depends on your project. What matters in our case is the launchers section.
framework: "qunit"
test_page: "tmp/index.html"
src_files:
  - "tmp/assets/application.js"
  - "tmp/tests.js"
  - "tmp/integration_tests.js"
launchers:
  bs_chrome:
    command: browserstack launch chrome --attach
    protocol: browser
    timeout: 300
launch_in_ci:
  - "PhantomJS"
  - "bs_chrome"
launch_in_dev:
  - "Chrome"
  - "Firefox"
  - "PhantomJS"
parallel: 2
So, if you update your project on GitHub, CircleCI will launch your tests, connect to BrowserStack and use the browsers there...