Deploy multiple services inside the same Service Fabric application type

Is it possible to create multiple services under a single application type name?
I have two Service Fabric application packages, both with the same application type name.
When I tried to deploy the second application after deploying the first, I got the error 'application type and version already exists'.
Is there a way I can deploy both packages under a single application type name?

You can't deploy two application types with the same name to a cluster.
If you want to share services between applications, you can create two (or more) Service Fabric application projects with different names and add references to the shared services (in Visual Studio, right-click the References node in the application project and click Add Service).
Somewhat related to your question - you can also (using PowerShell):
Create multiple instances of the same application type with different parameters after it's been deployed (New-ServiceFabricApplication)
Create more instances of the same service type within an existing application (New-ServiceFabricService)
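A minimal PowerShell sketch of both commands (the cluster endpoint, application type name, version, and service names below are placeholders, not values from your packages):

    # Connect to the cluster first (endpoint is a placeholder)
    Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster:19000"

    # Create a second instance of an already-registered application type
    New-ServiceFabricApplication -ApplicationName "fabric:/MyApp2" `
        -ApplicationTypeName "MyAppType" `
        -ApplicationTypeVersion "1.0.0"

    # Create another instance of a service type inside an existing application
    New-ServiceFabricService -Stateless `
        -ApplicationName "fabric:/MyApp2" `
        -ServiceName "fabric:/MyApp2/MyService2" `
        -ServiceTypeName "MyServiceType" `
        -PartitionSchemeSingleton `
        -InstanceCount 1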

Related

How to create a Windows EC2 instance with domain join and preinstalled applications

I am creating Windows EC2 instances at work and joining them to a domain, then installing 10 third-party applications; it takes me almost 2 to 3 hours to get a server up and running.
And I am repeating the same task for each project.
Instead of repeating this, I would like an automated way to create the instances, join the domain, and install applications that are centrally located.
Creating a golden image with all apps installed will not be a perfect solution, as my apps keep changing for different stacks.
Kindly suggest a suitable solution.
I would suggest you first launch the instances and seamlessly domain-join them to your Active Directory.
Also, make sure all the instances are managed instances, so that you can use AWS Systems Manager to manage multiple instances at once. You can tag the instances according to their role and perform an action on all tagged instances at once, or you can simply select the instances when using Systems Manager.
For example, you can use Distributor to install the necessary packages on multiple instances simultaneously (see the command sketch after the links below).
[1] Systems Manager: https://aws.amazon.com/systems-manager/
[2] Distributor: https://docs.aws.amazon.com/systems-manager/latest/userguide/distributor.html
[3] Registering instances with Systems Manager: https://www.youtube.com/watch?v=DQ619NSwoGg
[4] Install or update packages: https://docs.aws.amazon.com/systems-manager/latest/userguide/distributor-working-with-packages-deploy.html
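As a rough sketch, installing a Distributor package on every instance carrying a given tag could look like the following (the package name and tag key/value are placeholders, and it assumes the instances are already registered with Systems Manager):

    # Install a Distributor package on all instances tagged Role=AppServer
    aws ssm send-command \
        --document-name "AWS-ConfigureAWSPackage" \
        --targets "Key=tag:Role,Values=AppServer" \
        --parameters '{"action":["Install"],"name":["MyThirdPartyAppPackage"]}'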

Turn off the v0.1 and v1beta1 endpoints via GCP console

I have a Flutter + Firebase app, and received an email about "Legacy GAE and GCF Metadata Server endpoints will be turned down on April 30, 2020". I updated it to v1 or whatever, and at the end of the email it suggests to turn off the endpoints completely. I'm using Google Cloud Functions and the email says
If you are using App Engine Standard or Cloud Functions, set the following environment variable: DISABLE_LEGACY_METADATA_SERVER_ENDPOINTS=true.
Upon further research this can be done through the console (https://cloud.google.com/compute/docs/storing-retrieving-metadata#custom). It says to add it as custom metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#disable-legacy-endpoints) but I'm not sure if I'm doing this right.
For additional info, the email was triggered by a few Cloud Functions I have, where I used the Firebase Admin SDK to send push notifications (via Cloud Messaging).
The custom metadata feature you mention is meant to be used with Compute Engine; it allows you to pass arbitrary values to your project or instances and to set startup and shutdown scripts. It's a handy way to pass common environment variables to all the GCE VMs in your project. You can also use custom metadata in App Engine flexible environment instances, because those are actually Compute Engine VMs running your App Engine code in your project.
Cloud Functions and App Engine Standard are fundamentally different in that they don't run in your project but in a Google-owned project. This makes your project-wide custom metadata unreachable to them.
For this reason, for Cloud Functions you'll need to set a CF-specific environment variable by either:
using the --set-env-vars flag when deploying your Function with the gcloud functions deploy command
adding it to the environment variable section of your Function when creating it via the Developer Console
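For example, redeploying an existing HTTP-triggered function with that variable set might look like this (the function name and runtime are placeholders for your own):

    # Redeploy the function with the legacy metadata endpoints disabled
    gcloud functions deploy myFunction \
        --runtime nodejs10 \
        --trigger-http \
        --set-env-vars DISABLE_LEGACY_METADATA_SERVER_ENDPOINTS=true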

VSTS - cannot deploy to an on-premises web server

I am new to VSTS build and deploy and I am struggling with it.
I have a solution that contains a .NET Core Web API and an ASP.NET web project.
I have done my build and now I want to deploy it to an on-premises web server.
When I look at my artifacts, everything looks OK.
So when I set up a release definition, I start with an empty environment and add a task. Given that I want to move the artifacts to an on-premises web server, it looks like I should use the Windows Machine File Copy task. But when I do, I find that I do not have access to the drop folder. How do I fix this (and have I selected the correct task)?
You're using the hosted agent. The hosted agent can't deploy to an on-prem server -- it has no network route.
You can either use Deployment Groups (the agent is installed on your target machine and talks directly to VSTS), or you can install your own on-prem build/release server, then push the bits to the target machine using the Windows Machine File Copy task.

Environment-specific application.properties in a Spring Boot application

I'm trying to automate the process of deploying code using GitHub and a Jenkins job to deploy my Spring Boot application on AWS.
I want to know where I should place the application.properties file, given that I'm deploying a WAR file on Tomcat and don't want this file to be pushed to GitHub, as it may contain database credentials that should not be exposed.
Should I put a separate application-prod.properties file in Tomcat (AWS) so that my WAR file will be independent of these properties?
See my answer here.
In a nutshell, you externalise the properties and then pass one or more profiles that will activate one or more Spring configuration classes. Each configuration class loads one or more property files. In your case, if you only have one environment, you can just create a configuration file for one profile.
Then, on your AWS instance, you deploy the configuration file separately. At runtime, you pass the active profile(s) to your Spring Boot application, for example via the VM argument -Dspring.profiles.active=[your-profile].
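Since the question involves a WAR on Tomcat, one place to put that VM argument is Tomcat's setenv.sh; a sketch, assuming Spring Boot 2.x and an external config directory of your choosing:

    # $CATALINA_BASE/bin/setenv.sh
    # Activate the 'prod' profile and point Spring Boot at an external config directory
    # (/opt/config/ is a placeholder path kept outside the WAR and outside Git)
    export CATALINA_OPTS="$CATALINA_OPTS -Dspring.profiles.active=prod -Dspring.config.additional-location=file:/opt/config/"

With the prod profile active, Spring Boot picks up an application-prod.properties placed in that external directory, so the WAR itself stays free of credentials.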
How about using spring-cloud-starter-config instead of local properties?
With spring-cloud-starter-config, all configuration is loaded from your config server instead of being read locally.
Even if you have multiple different environments, spring-cloud-starter-config can handle them with different profiles.
What's more, spring-cloud-starter-config can use local environment variables too.
By the way, when using spring-cloud-starter-config, the only local resource you need is bootstrap.yml.
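A minimal bootstrap.yml sketch (the application name and config server URI are placeholders):

    # bootstrap.yml - the only configuration shipped with the app
    spring:
      application:
        name: my-app                       # used to look up config on the server
      cloud:
        config:
          uri: http://config-server:8888   # placeholder config server address

With this in place, the active profile (e.g. prod) selects the matching my-app-prod properties on the config server.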
Hope this helps!

Run one app on multiple elastic beanstalk instances

I have one Flask app which handles a number of things common to several Elastic Beanstalk applications: logging, database/ORM, and error handling are all handled by Flask and are similar across Elastic Beanstalk instances.
I have four EB applications, which each do different jobs, demand different Docker images, and so on.
One approach would be to have each EB app target its own unique endpoint on the Flask app and follow its own unique code path, while sharing common resources such as the ORM and error handling.
Is this possible? The big limitation seems to be one Dockerfile per project, which has a fixed name and sets the image. I would rather be able to specify the Dockerfile path at deploy time.
Is this even a reasonable approach to take?
You've got three options:
Run multiple services in one container: treat containers like VMs; run both your Flask app and your other services in one container. You could build the Flask app as a base container image and build your other four applications off that base (see the sketch after this list).
Run an internal service on another instance: Put the Flask app on a 5th EB machine, one that's internal-facing, and direct the other 4 to talk to it.
Don't use Elastic Beanstalk: Provision your own instance and run it the way you like.
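For the first option, a sketch of what one of the four application images might look like, assuming the shared Flask app has been published as a base image (all names here are placeholders):

    # Dockerfile for one of the four EB applications
    # Hypothetical base image containing the shared Flask app
    FROM myorg/flask-base:latest
    # Layer this service's unique code on top of the shared base
    COPY app_one/ /app/app_one/
    # Placeholder entry point for this particular service
    CMD ["python", "/app/app_one/main.py"]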
Of these, I'd strongly consider the last. Once you find yourself trying to work around the limitations of EB, you've probably outgrown it.