Specify multiple provisioning profiles for Expo local credentials

Expo EAS has a way to use local credentials.
The problem is that I have two environments, production and development, and two provisioning profiles accordingly. Unfortunately, I can't find a way to define two different provisioning profiles in credentials.json.
{
  "android": {
    "keystore": {
      "keystorePath": "android/keystores/release.keystore",
      "keystorePassword": "paofohlooZ9e",
      "keyAlias": "keyalias",
      "keyPassword": "aew1Geuthoev"
    }
  },
  "ios": {
    "provisioningProfilePath": "ios/certs/profile.mobileprovision",
    "distributionCertificate": {
      "path": "ios/certs/dist-cert.p12",
      "password": "iex3shi9Lohl"
    }
  }
}
The only field we have here is provisioningProfilePath.
When I try to build an internal distribution with a production profile I get, as expected, the error: You must use an adhoc provisioning profile (target 'TARGET NAME') for internal distribution.
Any suggestions?
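One possible workaround, since credentials.json does not seem to support more than one profile: keep one credentials file per environment and copy the right one over credentials.json before each build. This is not an official multi-profile feature as far as I know, just a sketch; it assumes two build profiles in eas.json (development with internal distribution, and production), both using local credentials:
{
  "build": {
    "development": {
      "distribution": "internal",
      "credentialsSource": "local"
    },
    "production": {
      "credentialsSource": "local"
    }
  }
}
With files like credentials.development.json and credentials.production.json kept side by side, copying the development one (which points at the ad hoc provisioning profile) over credentials.json before eas build --profile development avoids the error above, but it is a manual step rather than real multi-profile support.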

Related

AWS amplify running datastore with typescript/javascript freeze in the console

I am using the aws-amplify API to manipulate DataStore. Everything was going fine and all queries were running successfully, until it suddenly stopped working.
In my case I am building an npm package that wraps aws-amplify functionality with Node and TypeScript, and another developer is using the package to build a native app with React Native.
So when I implement new functions I test them locally with ts-node (something like DataStore.query or DataStore.save, etc.), and the other developer tests with Expo after installing the latest package release I have published.
At some point we hit a problem saying:
[WARN] 04:14.549 DataStore, Object {
  "cause": Object {
    "error": Object {
      "errors": Array [
        Object {
          "message": "Connection failed: {\"errors\":{\"errorType\":\"MaxSubscriptionsReachedError\",\"message\":\"Max number of 100 subscriptions reached\"}}",
        },
      ],
    },
When this happened, I tried to run queries locally and they worked fine, with a warning:
[WARN] 33:35.743 DataStore - Realtime disabled when in a server-side environment
So we thought it was a cache problem or something. But now nothing works at all in DataStore. If I try to run code locally with ts-node, the console freezes and never comes back.
For example, if I run a simple DataStore.query call, the console freezes after printing the warning message above.
We tried to fix AppSync and the subscriptions, but it is not working at all.
The Cognito user pool works fine, S3 is also fine; only DataStore is sad :(
// How we configure amplify
this.awsExports = Amplify.configure({ ...awsConfig });
// How we import DataStore
import {DataStore} from "aws-amplify/";
// Our dependencies
"dependencies": {
"#aws-amplify/core": "^4.6.0",
"#aws-amplify/datastore": "^3.12.4",
"#react-native-async-storage/async-storage": "^1.17.4",
"#react-native-community/netinfo": "^8.3.0",
"#types/amplify": "^1.1.25",
"algoliasearch": "^4.14.1",
"aws-amplify": "^4.3.29",
"aws-amplify-react-native": "^6.0.5",
"aws-sdk": "^2.1142.0",
"aws-sdk-mock": "^5.7.0",
"eslint-plugin-jsdoc": "^39.2.9",
"mustache": "^4.2.0"
}
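For reference, a minimal ts-node script of the kind described above (the Post model and the ./models and ./aws-exports imports are placeholders, not the actual package's code):
// Hypothetical repro; model name and config paths are placeholders.
import Amplify from "aws-amplify";
import { DataStore } from "aws-amplify";
import awsConfig from "./aws-exports";
import { Post } from "./models";

async function main(): Promise<void> {
  Amplify.configure({ ...awsConfig });

  // Previously this logged "Realtime disabled when in a server-side environment"
  // and returned results; now it hangs here and never resolves.
  const posts = await DataStore.query(Post);
  console.log(`Fetched ${posts.length} records`);
}

main().catch((err) => console.error(err));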
Can anyone please help?

Hashicorp Vault - Django query from docker container

Good afternoon,
I have two Docker containers, one running a Django app and the other running HashiCorp Vault, as I am starting to play with Vault in a dev environment.
I am using hvac from a Django view to write a secret (entered by a user) to Vault, in order to set up an integration with a REST API for a data pull.
When I run the following from my host machine, it writes just fine.
client_write = hvac.Client(url='http://127.0.0.1:8200', token='MY_TOKEN')
client_write.is_authenticated()
When I run the same from the Django container, it fails with:
requests.exceptions.ConnectionError:
HTTPConnectionPool(host='127.0.0.1', port=8200): Max retries exceeded
with url: /v1/auth/token/lookup-self (Caused by
NewConnectionError('<urllib3.connection.HTTPConnection object at
0x7f2a21990610>: Failed to establish a new connection: [Errno 111]
Connection refused'))
The Django container is running on localhost:8000 and Vault on localhost:8200. I also have a front end written in Vue.js running on localhost:8080 that has no trouble communicating back and forth with the Django REST API (django-rest-framework).
Is there something in Vault where I need to list where the queries can come from?
EDIT: Also, I have used both my purpose-built tokens, with policies that allow writing the secrets in question, along with the following permissions (per https://github.com/hashicorp/vault/issues/781 ):
path "auth/token/lookup-self" {
capabilities = ["read"]
}
path "auth/token/renew-self" {
capabilities = ["update"]
}
Furthermore, the same behavior occurs when testing with the root token, and the purpose-built tokens work from the host system.
Vault Config:
{
  "listener": {
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": "true"
    }
  },
  "backend": {
    "file": {
      "path": "/vault/file"
    }
  },
  "default_lease_ttl": "240h",
  "max_lease_ttl": "720h",
  "ui": true,
  "api_addr": "http://0.0.0.0:8200"
}
Thank you, I am very new to Vault and am struggling through it a bit.
BCBB
OK, so I neglected to provide enough relevant information in my first post because I didn't yet understand the problem. Thanks to the reference to networking in Compose in the comment above, I started down a path.
I realize now that I have each element in a different docker-compose project: project_ui/docker-compose for the Vue.js front end, project_api/ for Django & Postgres, and project_vault for the HashiCorp Vault container.
To enable these to talk, I followed the guidance here: Communication between multiple docker-compose projects
I created a network in the Django app's compose project, and then linked the other containers to it as described in that answer.
Thanks.
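For anyone who ends up here, this is roughly what the result looks like (a sketch; the service and network names web, vault and vault_net are illustrative, not the real project's). The Django code then points hvac at http://vault:8200, the Compose service name, instead of 127.0.0.1:
# project_api/docker-compose.yml (sketch)
services:
  web:
    build: .
    ports:
      - "8000:8000"
    networks:
      - vault_net
networks:
  vault_net:
    name: vault_net            # created here, joined by the other projects

# project_vault/docker-compose.yml (sketch)
services:
  vault:
    image: vault
    ports:
      - "8200:8200"
    networks:
      - vault_net
networks:
  vault_net:
    external: true             # reuse the network created above
    name: vault_net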

Google.Cloud.Diagnostics.AspNetCore 3.0.0-beta13 Not working on GKE

Purpose
Use Google Cloud Diagnostics on a .NET Core 2.2 REST API for Logging, Tracing and Error Reporting in two possible scenarios:
Local execution on Visual Studio 2017
Deployed on a Docker Container and Running on a GCP Kubernetes Engine
Environment details
.NET version: 2.2.0
Package name and version: Google.Cloud.Diagnostics.AspNetCore 3.0.0-beta13
Description
For configuring Google Cloud Diagnostics, two documentation sources were used:
Google.Cloud.Diagnostics.AspNetCore
Enable configuration support for Google.Cloud.Diagnostics.AspNetCore #2435
Based on the above documentation, the UseGoogleDiagnostics extension method on IWebHostBuilder was used, as this configures the Logging, Tracing and Error Reporting middleware.
According to link 2), the following is the information needed when using the UseGoogleDiagnostics method:
For local execution => project_id, module_id and version_id are needed
For GKE => module_id and version_id
The .NET Core configuration files were used to provide the above information for each deployment:
appsettings.json
{
  "GCP": {
    "ServiceID": "my-service",
    "VersionID": "v1"
  }
}
appsettings.Development.json
{
  "GCP": {
    "ID": "my-id"
  }
}
Basically, the above will render the following configuration:
On local execution:
return WebHost.CreateDefaultBuilder(args)
    .UseGoogleDiagnostics("my-id", "my-service", "v1")
    .UseStartup<Startup>();
On GKE:
return WebHost.CreateDefaultBuilder(args)
    .UseGoogleDiagnostics(null, "my-service", "v1")
    .UseStartup<Startup>();
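For context, the wiring between the appsettings files above and those two calls is roughly the following (a sketch rather than the project's exact Program.cs; the GCP:ID, GCP:ServiceID and GCP:VersionID keys come from the files shown above):
public static IWebHostBuilder CreateWebHostBuilder(string[] args)
{
    // Load the same appsettings files shown above, honouring ASPNETCORE_ENVIRONMENT.
    var environment = Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT") ?? "Production";
    var config = new ConfigurationBuilder()
        .SetBasePath(Directory.GetCurrentDirectory())
        .AddJsonFile("appsettings.json", optional: false)
        .AddJsonFile($"appsettings.{environment}.json", optional: true)
        .Build();

    return WebHost.CreateDefaultBuilder(args)
        .UseGoogleDiagnostics(
            config["GCP:ID"],        // present only in Development, so this is null on GKE
            config["GCP:ServiceID"],
            config["GCP:VersionID"])
        .UseStartup<Startup>();
}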
To guarantee I'm using the correct information, I used two places in the GCP UI to verify:
On the Endpoints listing, I checked the service details:
Service name: my-service Active version: v1
I checked the endpoint logs for a specific API POST endpoint:
{
  insertId: "e6a63a28-1451-4132-ad44-a4447c33a4ac#a1"
  jsonPayload: {…}
  logName: "projects/xxx%2Fendpoints_log"
  receiveTimestamp: "2019-07-11T21:03:34.851569606Z"
  resource: {
    labels: {
      location: "us-central1-a"
      method: "v1.xxx.ApiOCRPost"
      project_id: "my-id"
      service: "my-service"
      version: "v1"
    }
    type: "api"
  }
  severity: "INFO"
  timestamp: "2019-07-11T21:03:27.397632588Z"
}
Am I doing anything wrong, or is this a bug in Google.Cloud.Diagnostics.AspNetCore 3.0.0-beta13?
When executing the service endpoints, Google Cloud Diagnostics behaves differently for each deployment:
On local execution (VS2017) => Logging, Tracing and Error Reporting work as expected; everything shows up in the GCP UI
On GKE deployment => Logging, Tracing and Error Reporting DO NOT work; nothing shows up in the GCP UI
I've tried several variations, hardcoding the values directly in the code, etc., but no matter what I do, Google Cloud Diagnostics is not working when deployed on GKE:
Hardcoding the values directly:
return WebHost.CreateDefaultBuilder(args)
    .UseGoogleDiagnostics(null, "my-service", "v1")
    .UseStartup<Startup>();
Without the v in the version:
return WebHost.CreateDefaultBuilder(args)
    .UseGoogleDiagnostics(null, "my-service", "1")
    .UseStartup<Startup>();

Disabling development environment in .NET Core 2.0 app in AWS

I have a .NET Core MVC app that I deployed to AWS Elastic Beanstalk. But when I go to the app I get an error with the message:
Development environment should not be enabled in deployed applications
In the launchSettings.json file I have set ASPNETCORE_ENVIRONMENT's value to Production.
When I deploy the app using Visual Studio (AWS Toolkit) I set the project build configuration to Release.
I have also created an environment variable with the name ASPNETCORE_ENVIRONMENT and value Production in EB Software Configuration.
But I am still getting the same error. Any idea what the fix would be?
My launchSettings.json file looks like this:
{
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://DUMMY.us-west-2.elasticbeanstalk.com/",
      "sslPort": 0
    }
  },
  "profiles": {
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "launchUrl": "http://DUMMY.us-west-2.elasticbeanstalk.com/",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    "AlacsWeb": {
      "commandName": "Project",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      },
      "applicationUrl": "http://DUMMY.us-west-2.elasticbeanstalk.com/"
    }
  }
}
And the Startup.cs file:
public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    // This method gets called by the runtime. Use this method to add services to the container.
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddDbContext<ApplicationDbContext>(options =>
            options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));

        services.AddIdentity<ApplicationUser, IdentityRole>()
            .AddEntityFrameworkStores<ApplicationDbContext>()
            .AddDefaultTokenProviders();

        // Add application services.
        services.AddTransient<IEmailSender, EmailSender>();

        services.AddMvc();

        // Add http context
        services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
    }

    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseBrowserLink();
            app.UseDeveloperExceptionPage();
            app.UseDatabaseErrorPage();
        }
        else
        {
            app.UseExceptionHandler("/Home/Error");
        }

        app.UseStaticFiles();
        app.UseAuthentication();

        app.UseMvc(routes =>
        {
            routes.MapRoute(
                name: "default",
                template: "{controller=Home}/{action=Index}/{id?}");
        });
    }
}
Thanks for posting your code. I'm answering here because there's more to say than the comment form will allow.
TLDR
Your environment variables are ignored in EBS because your startup isn't picking them up.
I don't know what influence launchSettings.json has in EBS but given that the environment variables in that file are ignored I suspect the answer is none whatsoever.
You can save environment variables in EBS.
launchSettings.json
I don't use these myself, so what follows is research and trivial testing.
This MS article claims that this file only kicks in for running in Visual Studio
When using Visual Studio, environment variables may be set in the launchSettings.json file.
However I know from a simple test that these are also picked up by dotnet run in the project directory. Also, I know that VS Code ignores it in favour of .vscode/launch.json.
What I do not know is whether IIS pays any attention to it. (IIS in an EBS instance I mean, as opposed to IIS Express on your dev box).
Environment variables
I think I can see why environment variables are being ignored.
Startup.cs has an alternative constructor which lets you build the configuration object from environment variables, configuration files, and so on. It accepts an IHostingEnvironment instance.
public class Startup
{
    public Startup(IHostingEnvironment env)
    {
        var builder = new ConfigurationBuilder()
            .AddEnvironmentVariables(); // <--- This picks up env variables
        Configuration = builder.Build();
    }

    public IConfigurationRoot Configuration { get; }

    // etc ....
}
EB environment variables
As I explained in a comment, EB + dotnet core 2 are in a right mess over environment variables. Our solution is not to parse the infernal file per my earlier answer but to Dockerise our dotnet apps.
That said, you can save environment variables in EB. As you say, Software Configuration is the correct place to enter them. Then, click on your environment (the green/grey/whatever card as it appears in EB), go to the Actions menu, and Save Configuration.
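If you would rather keep that setting with the application than click it into the console, one documented way is an .ebextensions config file in the deployment bundle (the file name below is arbitrary):
# .ebextensions/environment.config
option_settings:
  aws:elasticbeanstalk:application:environment:
    ASPNETCORE_ENVIRONMENT: Production
Whether your Startup actually sees the variable still depends on the configuration builder discussed above.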
launchSettings.json is Visual Studio specific. It doesn't impact deployment.
The following question provided some insight on the issue:
AWS Elastic Beanstalk environment variables in ASP.NET Core 1.0
A crazy solution:
replace:
if (env.IsDevelopment())
{
    app.UseBrowserLink();
    app.UseDeveloperExceptionPage();
    app.UseDatabaseErrorPage();
}
else
{
    app.UseExceptionHandler("/Home/Error");
}
to
app.UseBrowserLink();
app.UseDeveloperExceptionPage();
app.UseDatabaseErrorPage();

AWS CloudFormation - Add New Certificate to an existing listener

What I have is one platform stack and possibly multiple web application stacks (each representing one web application). The platform stack deploys an ECS platform that allows hosting multiple web applications, but doesn't actually contain any web applications; it's just a platform. Then each web application stack represents a web application.
One of the HTTPS listeners I have in my platform stack template is this. Basically I have an HTTPS listener on port 443 carrying one default certificate (by requirement you need at least one certificate to create an HTTPS listener):
"BsAlbListenerHttps": {
"Type": "AWS::ElasticLoadBalancingV2::Listener",
"Properties": {
"Certificates": [{
"CertificateArn": {
"Ref": "BsCertificate1"
}
}],
...
"Port": "443",
"Protocol": "HTTPS"
}
},
...
Now, let's say I want to create a new web application (e.g. www.example.com). I deploy the web application stack, specify some parameters, and obviously I'll have to create a bunch of new resources. But at the same time, I will have to modify the current BsAlbListenerHttps.
I'm able to import the current listener (using Imports and Exports) into my web application stack. But what I also want to do is add a new certificate for www.example.com to the listener.
I've tried looking around but failed to find any answer.
Does anyone know how to do this? Your help is appreciated. Thank you!
What I do in similar cases is use only one certificate for the entire region, and add domains to it as I add apps/listeners on different domains. I also do this per environment, so I have a staging cert and a production cert in two different templates. For each one you define a standalone cert stack, called for example certificate-production.json, but use the stack name 'certificate' so that, regardless of the environment, the stack reference is consistent:
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "SSL certificates for production V2",
  "Resources" : {
    "Certificate" : {
      "Type": "AWS::CertificateManager::Certificate",
      "Properties": {
        "DomainName": "*.example.com",
        "SubjectAlternativeNames" : [ "*.example2.com", "*.someotherdomain.com" ]
      }
    }
  },
  "Outputs": {
    "CertificateId" : {
      "Value" : { "Ref": "Certificate" },
      "Description" : "Certificate ID",
      "Export" : { "Name" : { "Fn::Sub": "${AWS::StackName}-CertificateId" } }
    }
  }
}
As you can see, by using the SubjectAlternativeNames property this certificate will serve three wildcard domains. This way I can update the domains as I add services and rerun the stack. The dependent listeners are not changed in any way; they always refer to the single app certificate in the region.
One caveat: when you update a cert in CloudFormation, it will email all the host administrators on the given domains (hostmaster@example.com etc.). Each domain will get a confirmation email, and each email has to be confirmed again. If all the domains are not confirmed in this way, the stack will fail to create/update.
Using this technique, I can manage SSL for all my apps without any trouble, while making it easy to add new SSL endpoints for new domains.
I create the certificate stack right after the main VPC stack, so all later stacks can refer to the certificate id defined here via an export.
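For example, a listener in the platform or web application stack can then pick the certificate up through that export (a sketch; BsAlb and BsTargetGroup are placeholder resource names, and the import name assumes the certificate stack is literally named certificate):
"BsAlbListenerHttps": {
  "Type": "AWS::ElasticLoadBalancingV2::Listener",
  "Properties": {
    "Certificates": [{
      "CertificateArn": { "Fn::ImportValue": "certificate-CertificateId" }
    }],
    "LoadBalancerArn": { "Ref": "BsAlb" },
    "DefaultActions": [{
      "Type": "forward",
      "TargetGroupArn": { "Ref": "BsTargetGroup" }
    }],
    "Port": "443",
    "Protocol": "HTTPS"
  }
}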