I created an ASP.NET Core console application that references a C++ binary (DLL file). I am able to build the Docker image for the application on Linux, but when I run the container it throws a FileNotFoundException while executing a function from the C++ binary (the C++ DLL, or one of its dependencies, cannot be loaded). Can someone help me resolve this issue?
Try:
FROM microsoft/aspnetcore
RUN mkdir -p /app
COPY . /app
ARG files=./bin/Debug/netcoreapp2.0/publish
# use an absolute path so the published output ends up under /app/appcode, matching the WORKDIR below
RUN mkdir -p /app/appcode
COPY $files /app/appcode
WORKDIR /app/appcode
ENTRYPOINT ["dotnet", "aspcoreapp.dll"]
Also
Add a reference to the Visual C++ Runtime Package:
Under your project in Solution Explorer
Right-click on References
Select Add Reference
Go to Extensions
Check Visual C++ Runtime Package
Or
Right-click on the project in Solution Explorer and choose Add Reference
Click the Windows tab and then Extensions sub-tab
Check the checkboxes for the new Extension SDKs.
Click OK.
Finally, I resolved the issue by building the C++ binary in a Linux environment. But now I am facing another issue: I can't ping my running container's IP from my host, even though it is in the same subnet.
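For anyone hitting the same FileNotFoundException: a rough, untested sketch of what "building the C++ binary in a Linux environment" can look like as a multi-stage Dockerfile. The file and project names (native.cpp, libnative.so, aspcoreapp) are placeholders for illustration, not the actual project layout:

FROM gcc:8 AS native-build
WORKDIR /src
COPY native/ .
# build the C++ code as a Linux shared object instead of a Windows DLL
RUN g++ -shared -fPIC -o libnative.so native.cpp

FROM microsoft/aspnetcore
WORKDIR /app
COPY ./bin/Debug/netcoreapp2.0/publish .
# the .so must sit next to the managed assemblies so DllImport can resolve it
COPY --from=native-build /src/libnative.so .
ENTRYPOINT ["dotnet", "aspcoreapp.dll"]

On Linux, .NET Core resolves [DllImport("native")] to libnative.so in the application directory, so the Windows-specific runtime reference is not needed there.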
Background
I was using a laptop with openSUSE Leap 15.1 to develop a Qt app. I upgraded to openSUSE Tumbleweed, and now I realize that the library versions my app depends on are not available for Tumbleweed. I have these options:
Reinstall openSUSE Leap 15.1 (or maybe 15.2?)
Set up a development environment with some Docker images
Set up a development environment with a virtual machine
Unavailable dependencies: grab their binary packages directly and install them manually on openSUSE Tumbleweed
...?
Question
About the 2nd option, i.e. Docker.
It's known how to use Docker to deploy the app. You set up the development container with all the dependencies and run some deployment scripts with it.
However, I don't know:
Is it possible to set up Docker containers in a way that Qt Creator debugger can be used for development? If I use Docker, would I be able to step through the code with Qt Creator debugger?
Is this scenario possible:
Pull an openSUSE Leap 15.1 Docker image
Set up a bindmount volume that links the /usr/lib64/ directory from inside the container to the ~/leaplib directory on the host machine. It means ~/leaplib:/usr/lib64/
Do the same for development headers i.e. ~/leapinclude:/usr/include/
Bindmount procedure is explained here
Install all the Qt project dependencies on the openSUSE Leap 15.1 container
Therefore, all dependency libraries and header files would be installed inside the container bindmount volumes
Inside Qt Creator project on the host machine, add ~/leaplib to library path
Inside Qt Creator project on the host machine, add ~/leapinclude to include path
The Qt project source code repository is of course on the host machine
Use Qt Creator to open project repository source code
You should be able to develop and debug the code with Qt Creator debugger, right?
The above plan is not tested yet. Not sure if it would work. Any idea? Am I missing something?
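For what it's worth, a minimal untested sketch of the plan above on the command line (the Qt package name is just an example; note also that mounting empty host directories over /usr/lib64 and /usr/include will initially hide the files the base image ships there):

docker run -it --name leap-deps \
  -v ~/leaplib:/usr/lib64 \
  -v ~/leapinclude:/usr/include \
  opensuse/leap:15.1 bash
# inside the container, install the Qt project dependencies, e.g.:
zypper install libqt5-qtbase-devel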
Another scenario:
Make use of docker-compose and a Dockerfile as suggested by @DavidMaze
Create a docker-compose.yml file defining custom bindmount volumes to be able to share data between container and the host
Create a Dockerfile starting with FROM opensuse/leap:15.1
Install all the dependency packages inside the Dockerfile with zypper --root /usr/local/
Needed container data would be inside /usr/local/lib64/, /usr/local/lib/ and /usr/local/include/
Share needed container data with the host by copying data to custom bindmount volumes defined inside docker-compose.yml file
Add bindmount volumes to Qt Creator library path and include path
Use Qt Creator to debug the source code in the host machine
Have I missed something?
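A hedged sketch of what the second scenario's files might look like (the package name, the /host/lib and /host/include target directories, and the service name are illustrative assumptions, and repositories may need to be added to the /usr/local/ root before zypper can install into it):

# Dockerfile
FROM opensuse/leap:15.1
# install the project dependencies into /usr/local/ as described above
RUN zypper --root /usr/local/ --non-interactive install libqt5-qtbase-devel

# docker-compose.yml
version: "3"
services:
  leap-deps:
    build: .
    volumes:
      - ~/leaplib:/host/lib            # bind mounts shared with the host
      - ~/leapinclude:/host/include
    command: >
      sh -c "cp -a /usr/local/lib64/. /host/lib/ &&
             cp -a /usr/local/lib/. /host/lib/ &&
             cp -a /usr/local/include/. /host/include/"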
I am trying to test my local Docker build before I deploy to AWS. My app has dependencies to AWSSDK.Core via NuGet and I am using the following Docker file:
FROM microsoft/dotnet:2.2.0-aspnetcore-runtime AS runtime
WORKDIR /My.App
COPY bin/Release/netcoreapp2.2 .
ENV ASPNETCORE_URLS http://+:5000
EXPOSE 5000
ENTRYPOINT ["dotnet", "My.App.dll"]
To build the image, I run
docker build -t myapp .
However, when I try to run it with
docker run -it --rm --name my_app myapp
I get the error
Error:
An assembly specified in the application dependencies manifest (My.App.deps.json) was not found:
package: 'AWSSDK.Core', version: '3.3.106.17'
path: 'lib/netstandard2.0/AWSSDK.Core.dll'
As far as I can tell, I should be adding a RUN command to install the AWSSDK in my Docker image but I cannot find it. So, my question would be: Am I doing something wrong? If not, is there some kind of reference as to the locations of packages to use in Docker?
After digging around some, I found a few possible answers. First I found this blog post, which demonstrates how to use Docker with .NET on a very simple project. The solution to my problem was to use the dotnet publish command, which gathers all the dependencies and puts them in a single directory.
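Concretely, a minimal sketch of that approach applied to the Dockerfile above (the out/ directory name is arbitrary, and the publish is assumed to run on the host before docker build):

# on the host
dotnet publish -c Release -o out

# Dockerfile
FROM microsoft/dotnet:2.2.0-aspnetcore-runtime AS runtime
WORKDIR /My.App
# the publish output contains AWSSDK.Core.dll and the other NuGet assemblies
COPY out/ .
ENV ASPNETCORE_URLS http://+:5000
EXPOSE 5000
ENTRYPOINT ["dotnet", "My.App.dll"]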
However, after digging some more, I found Microsoft's Container Tools in Visual Studio, which offer to add Docker Support by right-clicking on the project; this is more elaborate than the first solution.
I would like to run VSCode on my host machine, but (using its features / extensions) fire up tools from within the dev-env living inside my Docker container.
I have set up a docker image as a development environment for C++. Let's call it dev-env.
It is linux-based and contains required libraries, crosscompilation toolchains and various tools we use for building and testing our software (cmake, ninja, cppcheck, clang-tidy etc.)
I have a Git repository on the host machine, which I mount inside the Docker container.
So my usual workflow would be to run docker:
host$
host$ docker run -v path/to/my/codebase/on/host:path/inside/docker -h dev-env --rm -it image_name bash
docker#
docker# cd build; cmake ..
etc...
And as such, I can build, test and run my tools inside my unified development environment inside the docker.
Now, the goal is to take it out of the terminal to the world of IDE.
I happen to use VS Code.
On host machine, I open my codebase folder in VSCode. Since it's mapped inside the docker, any changes I make locally will be available inside dev-env as well.
But if I now run anything from VSCode (CMake configure, build, etc.), it will of course call the tools from my host machine, which will not work and is not what I want.
With tasks defined in tasks.json, I could probably manage by having them run something like docker exec CONTAINER my_command
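For example, a rough sketch of such a task (the container name dev-env-container and the build path are assumptions, and the container is assumed to be already running):

{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "CMake build inside dev-env",
      "type": "shell",
      // run the tool inside the running container rather than on the host
      "command": "docker",
      "args": ["exec", "dev-env-container", "cmake", "--build", "path/inside/docker/build"],
      "group": "build"
    }
  ]
}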
It gets more complicated with extensions:
What I would like is to have the e.g. VSCode CMake Tools extension configured in such a way, that when I run Cmake Configure (in a VSCode running on my host machine), it will actually run cmake commands from within Docker container, using cmake installed inside Docker, not from my host machine.
Temporary solution: Forwarding display through X / VNC
That is, installing VSCode inside the Docker container, running an X/VNC server inside the container, exposing the port, and connecting to it from the host machine.
Yes, it is possible, I have it running here. But it has many limitations and problems, of which the most painful is the lag/delay.
This is a bad solution in general, so I would strongly advise against it.
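For completeness, on a Linux host the X-forwarding variant of that setup typically looks roughly like this (standard X11 socket path; adjust to your environment):

xhost +local:          # allow local containers to talk to the host X server (not secure)
docker run --rm -it \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v path/to/my/codebase/on/host:path/inside/docker \
  image_name bash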
Another solution that I can think of:
VSCode instance running as a server inside the docker.
VSCode instance on your host connecting to the server instance.
You do all the work inside your host VSCode, but anytime you run a command, it is executed by a server instance, which runs everything inside Docker.
I guess this would require support from VSCode (or maybe an extension).
The VSCode Live Share extension is not made exactly for that, but its functionality might do the job. I have not tested it yet.
I would like to use Google Container OS as my cloud development environment. How would I run the docker command from the toolbox? Do I need to add the docker.sock as a bind mount? I need to be able to run docker (and docker-compose) to run my development environment.
Google Container OS images come with docker already installed and configured, so you will be able to use the docker command from the command line without any prior configuration if you create a virtual machine from one of these images, and SSH into the machine.
As for docker-compose, this doesn't come pre-installed. However, you can install this, and other relevant tools/programs you require, by making use of the toolbox you mentioned, which provides a shell (including a package manager) in a Debian chroot-like environment (where you automatically gain root privileges).
You can install docker-compose by following these steps:
1) If you haven't already, enter the toolbox environment by running /usr/bin/toolbox
2) Check the latest version of docker-compose here.
3) You can run the following to retrieve docker-compose and make it executable on the machine (substitute the docker-compose version number for the latest version you retrieved in step 2):
curl -L https://github.com/docker/compose/releases/download/1.18.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
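A quick sanity check from within the toolbox (the reported version will match whatever release you downloaded):

docker-compose --version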
You've probably found at this point that although you can now run the freshly installed docker-compose command within the toolbox, you can't run the docker command. This is because, by default, the toolbox environment doesn't have access to all paths within the rootfs, and the available filesystem doesn't correspond between the two environments.
It may be possible to remedy this by exiting the toolbox shell and then editing the /etc/default/toolbox file, which allows you to configure the toolbox settings. This would allow you to provide access to the docker binary file in the standard environment by following these steps:
1) Ensure you are no longer in the toolbox shell, then run the command which docker. You will see something similar to /usr/bin/docker.
2) Open file /etc/default/toolbox
3) The TOOLBOX_BIND line specifies the paths from rootfs to be made available inside the toolbox environment. To ensure docker is available inside the toolbox environment, you could try adding an entry to the TOOLBOX_BIND section, for example --bind=/usr/bin/docker:/usr/bin/docker.
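For illustration only (the existing binds in your file will differ; keep them and just append the new one):

# /etc/default/toolbox
TOOLBOX_BIND="<existing binds> --bind=/usr/bin/docker:/usr/bin/docker"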
However, I've found that even though it's possible to edit /etc/default/toolbox to make the docker binary file available in the toolbox environment, certain docker commands still generate additional errors when run there, because the docker version that comes pre-installed on the machine is configured to use particular configuration files and directories. Although it may be possible to edit /etc/default/toolbox to make all of the required locations accessible from within the toolbox environment, it may be simpler to install docker within the toolbox by following the instructions for installing docker on Debian found here.
You would then be able to issue both the docker and docker-compose commands from within the toolbox.
Alternatively, it's possible to simply install docker and docker-compose on a standard VM (i.e. without necessarily using a Google Container OS machine type) although the suitability of this depends on your use case.
I'm new to Hyperledger and am studying it by following the tutorials on http://hyperledger-fabric.readthedocs.io. I am trying to build the first network using "first-network" in the fabric-samples. The ./byfn -m generate step is OK, but after typing ./byfn -m up, I get
/bin/bash: ./scripts/script.sh: No such file or directory
error and the process hangs.
What is going wrong?
PS: The OS is Windows 10.
Check to see if you have a local firewall enabled. Depending on your Docker configuration, a firewall may prohibit the Docker daemon from accessing shared drives as specified in the Docker setup (Windows).
Restart the Docker daemon after applying local firewall changes.
I was facing the same issue and was able to resolve it.
The shared network drive needs to be working for any directory on the local machine to be identified from the container.
Docker, for example, has a "Shared Drives" setting (usually C:\) under which all your byfn.sh paths must be present. The second condition is that you need to run the byfn.sh script as the same user who was authenticated to share the drives with the container. A password change in the Windows environment can break the already existing shared drives with the containers, hence creating problems in starting them.
Follow these steps :
In your docker terminal check the path $HOME. Type the command echo $HOME.
Make sure that your fabric-samples folder is under the path given by the $HOME variable (see the example after these steps).
Follow the steps for generating your first network.
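For example, from your Docker terminal (paths are illustrative):

echo $HOME                                # e.g. /c/Users/<your-user>
ls "$HOME/fabric-samples/first-network"   # the sample should live under $HOME
cd "$HOME/fabric-samples/first-network"
./byfn.sh -m generate
./byfn.sh -m up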
or try the below solution.
Follow these steps :
Go to the Docker settings.
Click on Reset credentials.
Now check whether the shared drives include the required drives or not.
If not, include them, apply your changes, and restart Docker and the bash session where you were trying to start your network.
I know the question is old, but I faced a similar issue, so I did the following:
./byfn.sh -m generate
./byfn.sh -m up
I was missing .sh in both commands.