I am trying to install Laravel 5 on Windows, but when I run the homestead up command it shows a "Timed out while waiting for the machine to boot" error. Below is my Homestead.yaml:
ip: "192.168.10.10"
memory: 2048
cpus: 1
provider: virtualbox
authorize: C:\Users\DB\.ssh\id_rsa.pub
keys:
- C:\Users\DB\.ssh\id_rsa
folders:
- map: C:\bin\tmp\yahavi
to: /home/vagrant/Code
sites:
- map: homestead.app
to: /home/vagrant/Code/Laravel/public
databases:
- homestead
variables:
- key: APP_ENV
value: local
My project folder directory is C:\bin\tmp\project
The most likely explanation is that you do not have virtualisation enabled in your machine's BIOS. I appreciate that this sounds unlikely, but I hit the same problem with a newly built desktop last week. On an Intel machine, enabling VT-x in the BIOS (or the AMD-V equivalent on an AMD machine) fixes this issue.
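If you want to verify the firmware setting before rebooting into the BIOS, Windows can report it directly. A quick check from cmd (assuming a Windows 8+ host; note that if Hyper-V is already active, systeminfo prints "A hypervisor has been detected" instead of the individual lines):

systeminfo | findstr /i "Virtualization"

"Virtualization Enabled In Firmware: Yes" means VT-x/AMD-V is on; "No" means it still needs to be enabled in the BIOS.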
Hello All! :)
I'm using a mono-repo with a multi-application architecture.
- foo
  - dev
  - alp
  - prd
- bar
  - dev
  - alp
  - prd
- argocd
  - dev
    - foo-application (Argo CD app, target revision: master, destination cluster: dev, path: foo/dev)
    - bar-application (Argo CD app, target revision: master, destination cluster: dev, path: bar/dev)
  - alp
    - foo-application (Argo CD app, target revision: master, destination cluster: alp, path: foo/alp)
    - bar-application (Argo CD app, target revision: master, destination cluster: alp, path: bar/alp)
  - ...
Recently I found out that merging to the master branch triggers a sync of the other applications as well, despite there being no change in their target path directories.
So whenever one application is modified and merged into master, multiple applications repeatedly go Out-Of-Sync -> Syncing -> Synced. :(
In my opinion, if there is no code change in the target path, the app should stay Synced even if the git SHA of the branch changes.
But it doesn't: when the git SHA of the target branch changes, Argo CD is unconditionally triggered because the cache key changes.
One solution would be to create a separate manifest repository for each application, but that seems wasteful.
While looking for a solution, I came across this feature:
webhook-and-manifest-paths-annotation
However, according to the documentation, this seems to work only when used with a GitHub webhook.
Currently our Argo CD polls the repository every 3 minutes. Does this annotation not work in that case?
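For reference, that annotation is set on the Application resource itself. A minimal sketch for the foo dev app (the repoURL, destination server, and namespace are placeholders, not real values):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: foo-application
  annotations:
    # a value starting with "." is resolved relative to spec.source.path,
    # so only commits touching foo/dev should mark this app Out-Of-Sync
    argocd.argoproj.io/manifest-generate-paths: .
spec:
  source:
    repoURL: https://git.example.com/mono-repo.git   # placeholder
    targetRevision: master
    path: foo/dev
  destination:
    server: https://dev-cluster.example.com   # placeholder
    namespace: foo                            # placeholder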
I've been following this tutorial (https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.DownloadingAndRunning.html) on how to set up a downloadable DynamoDB on my computer, but I keep hitting an issue when I try to connect to localhost.
I have checked my hosts file and everything seems to be OK...
I am using Windows 10 cmd and these are the outputs on my command line:
C:\Users\Desktop\dynamodb_local_latest>java -D"java.library.path=./DynamoDBLocal_lib" -jar DynamoDBLocal.jar
Initializing DynamoDB Local with the following configuration:
Port: 8000
InMemory: false
DbPath: null
SharedDb: false
shouldDelayTransientStatuses: false
CorsParams: *
C:\Users\Desktop\dynamodb_local_latest>aws dynamodb list-tables --endpoint-url http://localhost:8000
Could not connect to the endpoint URL: "http://localhost:8000/"
C:\Users\Desktop\dynamodb_local_latest>
Any help will be greatly appreciated!
You must run 'aws configure' and set the required parameters (even if you're only using a local DynamoDB emulator, the access/secret keys can be anything, but they and a region must be set).
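A minimal session looks like this (the dummy values are arbitrary; DynamoDB Local never validates them, but the CLI refuses to send a request without credentials and a region):

aws configure
AWS Access Key ID [None]: dummy
AWS Secret Access Key [None]: dummy
Default region name [None]: us-west-2
Default output format [None]: json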
In addition to running aws configure as mentioned in #J.S.'s answer, you will need to ensure DynamoDB Local is actually running. I recently had this error when the service had shut down and I didn't realize it. If this is your case, restart it by going to the folder it is installed in and running:
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb &
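If you're not sure whether it's up, check that something is listening on port 8000 before retrying the CLI (Windows syntax shown; DynamoDB Local may also be bound to another port via its -port option):

netstat -an | findstr ":8000"

No output means nothing is listening, and the local instance needs to be started first.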
I am attempting to tune hyperparameters (HPs) for my model using ml-engine on a local server. The model trains a single pass, but no HP trials are performed. Is this a configuration issue, or is HP optimization simply not supported in local mode?
My local command:
gcloud ml-engine local train --package-path $PWD --module-name example.train --configuration example/hpconfig.yaml -- --param1 16 --param2 2
My config file:
trainingInput:
  workerCount: 1
  hyperparameters:
    goal: MINIMIZE
    hyperparameterMetricTag: val_loss
    maxTrials: 10
    maxParallelTrials: 1
    enableTrialEarlyStopping: True
    params:
      - parameterName: param1
        type: INTEGER
        minValue: 4
        maxValue: 128
        scaleType: UNIT_LINEAR_SCALE
      - parameterName: param2
        type: INTEGER
        minValue: 1
        maxValue: 4
        scaleType: UNIT_LINEAR_SCALE
Unfortunately, HP tuning cannot be run in local mode. I would recommend a workflow like so:
1. Run locally with small data, etc., to ensure everything is working (I recommend using GCS paths).
2. Run a small test on cloud (a single job) to ensure dependencies are correct and data files properly point to GCS instead of local paths.
3. Run an HP tuning job (a sketch of the submit command is below).
Once 1 and 2 are working, 3 generally will, too.
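For step 3, the same config file goes to the cloud job via --config; a sketch, where the job name, region, and staging bucket are placeholders you would substitute:

gcloud ml-engine jobs submit training example_hp_job_1 \
    --package-path $PWD \
    --module-name example.train \
    --region us-central1 \
    --staging-bucket gs://your-bucket \
    --config example/hpconfig.yaml \
    -- \
    --param1 16 --param2 2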
Also, as a side note: Kubeflow supports Katib for running HP tuning jobs on any Kubernetes deployment, including Minikube (for local development).
I have the following docker containers that I have set up to test my web application:
Jenkins
Apache 1 (serving a laravel app)
Apache 2 (serving a legacy codeigniter app)
MySQL (accessed by both Apache 1 and Apache 2)
Selenium HUB
Selenium Node (ChromeDriver)
The Jenkins job runs a behat command on Apache 1, which in turn connects to the Selenium Hub, which has a ChromeDriver node to actually hit the two apps: Apache 1 and Apache 2.
The whole system is running on an EC2 t2.small instance (1 core, 2GB RAM) with AWS linux.
The problem
The issue I am having is that the first few pipeline runs go just fine (the behat stage takes about 20s), but from the third run onwards the behat stage starts slowing down (taking 1m30s) and then failing after 3m, 10m, or whenever I lose patience.
If I restart the docker containers, it works again, but only for another 2-4 runs.
Clues
Monitoring docker stats each time I run the Jenkins pipeline, I noticed that the Block I/O, and specifically the 'I' (input), was growing exponentially after the first few runs.
For example: [docker stats screenshots after run 1, run 2, run 3, and run 4]
The Block I/O for the chromedriver container is 21GB and the driver hangs. While I might expect the Block I/O to grow, I wouldn't expect it to grow exponentially as it seems to be doing. It's like something is... exploding.
The same docker configuration (using docker-compose) runs flawlessly every time on my personal MacBook Pro. Block I/O does not 'explode'. I constrain Docker to only use 1 core and 2GB of RAM.
What I've tried
This situation has sent me down the path of learning a lot more about docker, filesystems and memory management, but I'm still not resolving the issue. Some of the things I have tried:
Memory
I set mem_limit options on all containers and tuned them so that during any given run the memory would not reach 100%. Memory usage now seems fairly stable and never 'blows up' (an illustrative excerpt is below).
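For illustration, the limits look roughly like this (service names and values are placeholders, not my exact file):

# docker-compose.yml excerpt (v2 syntax)
version: '2'
services:
  chromedriver:
    mem_limit: 512m
  hub:
    mem_limit: 256m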
Storage Driver
The default storage driver for Docker on AWS Linux is devicemapper in loop-lvm mode. After reading this doc:
https://docs.docker.com/engine/userguide/storagedriver/device-mapper-driver/#configure-docker-with-devicemapper
I switched to the suggested direct-lvm mode.
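For reference, direct-lvm means pointing Docker at an LVM thin pool via /etc/docker/daemon.json, roughly like this (a sketch; the pool device name follows the linked guide and matches the docker info output below):

{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.thinpooldev=/dev/mapper/docker-thinpool",
    "dm.use_deferred_removal=true"
  ]
}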
docker-compose restart
This does indeed 'reset' the issue, allowing me to get a few more runs in, but it doesn't last. After 2-4 runs, things seize up and the tests start failing.
iotop
Running iotop on the host shows that reads are going through the roof.
My Question...
What is happening that causes the block I/O to grow exponentially? I'm not clear whether it's Docker, Jenkins, Selenium or ChromeDriver that is causing the problem. My first guess is ChromeDriver, although the other containers are also showing signs of 'exploding'.
What is a good approach to tuning a system like this with multiple moving parts?
Additional Info
My chromedriver container has the following environment set in docker-compose:
- SE_OPTS=-maxSession 6 -browser browserName=chrome,maxInstances=3
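In context, that sits in the service definition roughly like this (the image name and hub link are illustrative, not copied from my file):

chromedriver:
  image: selenium/node-chrome   # illustrative image/tag
  links:
    - hub
  environment:
    - SE_OPTS=-maxSession 6 -browser browserName=chrome,maxInstances=3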
docker info:
$ docker info
Containers: 6
 Running: 6
 Paused: 0
 Stopped: 0
Images: 5
Server Version: 1.12.6
Storage Driver: devicemapper
 Pool Name: docker-thinpool
 Pool Blocksize: 524.3 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: xfs
 Data file:
 Metadata file:
 Data Space Used: 4.862 GB
 Data Space Total: 20.4 GB
 Data Space Available: 15.53 GB
 Metadata Space Used: 2.54 MB
 Metadata Space Total: 213.9 MB
 Metadata Space Available: 211.4 MB
 Thin Pool Minimum Free Space: 2.039 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Library Version: 1.02.135-RHEL7 (2016-11-16)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: overlay null host bridge
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options:
Kernel Version: 4.4.51-40.60.amzn1.x86_64
Operating System: Amazon Linux AMI 2017.03
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.956 GiB
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Are there any .bat scripts (for Windows Server) to handle the Elastic Load Balancer when deploying through CodeDeploy? I've found only scripts for Linux:
https://github.com/awslabs/aws-codedeploy-samples/tree/master/load-balancing/elb
Unfortunately, the docs don't even mention Windows Server support:
http://docs.aws.amazon.com/codedeploy/latest/userguide/elastic-load-balancing-integ.html
The official answer from Amazon linked back to this topic; they said "someone" is using Cygwin and that I should try it too...
Unfortunately, having no other option, I installed Cygwin, and in appspec.yml I put:
version: 0.0
os: windows
files:
  - source: \xxx\
    destination: C:\xxx\
hooks:
  ApplicationStop:
    - location: \deregister_from_elb.bat
      timeout: 900
    <next steps here>
  ApplicationStart:
    - location: \register_with_elb.bat
      timeout: 900
In the deregister_from_elb.bat file, I run the .sh file with Cygwin, as follows:
@echo off
REM %~dp0 expands to the drive and path of this script, e.g. C:\some\path\
SET mypath=%~dp0
REM strip the leading "C:\" so the path can be prefixed with /cygdrive/c/
SET mypath=%mypath:~3%
C:\cygwin64\bin\bash.exe -l -c "'/cygdrive/c/%mypath%deregister_from_elb.sh'"
You can imagine what register_with_elb.bat looks like; for completeness, a sketch is below.
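Assuming it simply mirrors the deregister wrapper:

@echo off
SET mypath=%~dp0
SET mypath=%mypath:~3%
C:\cygwin64\bin\bash.exe -l -c "'/cygdrive/c/%mypath%register_with_elb.sh'"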
This solution has now been working in production for about 6 months, without any major problems.