Expo/Metro/Minify: how to exclude specific variable names from getting mangled?

I'm using Expo managed workflow.
Is there a way to exclude certain variable names from being mangled when Expo bundles and minifies my app in production mode?
I tried adding the following "reserved" array to the minifierConfig, but it had no effect:
path: node_modules/metro-config/src/defaults/index.js
...
minifierConfig: {
  mangle: {
    toplevel: false,
    reserved: ["myVariableName"]
  },
...
Thanks
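One route worth trying instead of patching node_modules (a sketch, not verified against every SDK; on older SDKs the import is '@expo/metro-config' rather than 'expo/metro-config'): define the minifier options in a project-level metro.config.js, which Expo picks up when bundling. Edits inside node_modules are easily lost on reinstall and may be ignored when the defaults are reassembled.

// metro.config.js
const { getDefaultConfig } = require('expo/metro-config');

const config = getDefaultConfig(__dirname);

// Merge a `reserved` list into the minifier's mangle options so these
// identifiers survive production minification (a standard terser/uglify option).
config.transformer.minifierConfig = {
  ...config.transformer.minifierConfig,
  mangle: {
    ...(config.transformer.minifierConfig?.mangle ?? {}),
    toplevel: false,
    reserved: ['myVariableName'],
  },
};

module.exports = config;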

Related

karma config fails to add external project or file in the current project

I have added unit tests in some frontend projects using Karma. I have multiple projects in my Git folder. If I run them individually, they work fine. However, if one project depends on another project, Karma fails to include it (failed to load JavaScript resource:).
If I run the tests using the HTML file directly, it runs the tests normally and even loads the external projects without any error. Following are my resource roots in my unitTest.qunit.html file:
data-sap-ui-resourceroots='{
    "x.y.projectmain": "../../",
    "test.unit": "./",
    "x.y.project2": "../../../../project2/WebContent"
}'
If I try to include the project the same way in my Karma.conf.js, it gives an error:
"Failed to resolve dependencies of 'x/y/projectmain/test/unit/AllTests.js' -> 'x/y/projectmain/test/unit/myUnitTest.js' -> 'x.y.project2/util/myfile.js': failed to load 'x.y.project2/util/myfile.js' from ./../../project2/WebContent/util/myfile.js: script load error"
Following are some of my Karma.conf.js settings:
ui5: {
  type: "library",
  paths: {
    src: "projectmain/WebContent",
    test: "projectmain/WebContent/test"
  },
  url: "https://openui5.hana.ondemand.com",
  mode: "script",
  config: {
    async: true,
    bindingSyntax: "complex",
    compatVersion: "edge",
    resourceRoots: {
      "x.y.projectmain": "./base/projectmain/WebContent",
      // "x.y.project2": path.resolve('../project2/WebContent')
      "x.y.project2": "./../../projet2/WebContent"
      // "x.y.project2": "./base/projectmain/WebContent/test/resources/project2/WebContent"
      // "x.y.project2.util": "./base/project2/WebContent/util"
    }
  },
  tests: [
    "x.y.projectmain/test/unit/AllTests"
  ]
},
files: [
  'Utils.js',
  { pattern: "../public/Project2/WebContent/utils/myfile.js", included: false, served: true, watched: false, nocache: true },
  { pattern: '../Project2/WebContent/**/*', watched: true, served: true, included: false }
],
// proxies: {
//   '/project2/': path.resolve('../../project2/WebContent')
// },
proxies: {
  '/x.y.project2/': '/absolute/' + path.resolve('../project2/WebContent'),
  '/myfile.js/': '../public/project2/WebContent/util/myfile.js'
},
I have tried many things here. It even refers to the exact file in that external project, but it just can't load the file. If I try to load the file manually in the browser, it opens fine. But with Karma it gives an error.
My ultimate goal is to add one project as a dependency inside another project. I did check it by copying the whole WebContent folder from Project 2 inside the 'ProjectMain/WebContent/test/Resources/' directory. It does work, but that's not an appropriate way to include it.
There must be some way to register or include one project in another, either as a resource root or via proxies.
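For reference, the generic Karma mechanism for files that live outside basePath is to list them in files with included: false so Karma serves them, and then proxy the URL prefix the tests request to Karma's /absolute/ root. A minimal sketch (the paths are assumptions about the layout described above):

// karma.conf.js (sketch)
const path = require("path");

module.exports = function (config) {
  config.set({
    basePath: "projectmain/WebContent",
    files: [
      // Serve project2's files without loading them into the test page.
      {
        pattern: path.resolve("../project2/WebContent") + "/**/*",
        included: false,
        served: true,
        watched: false
      }
    ],
    proxies: {
      // Karma exposes files outside basePath under /absolute/<full path>,
      // so point the resource-root prefix there.
      "/x.y.project2/": "/absolute/" + path.resolve("../project2/WebContent") + "/"
    }
  });
};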

How to create a file or template owned by a user that does not exist on the host with ansible?

I'm experimenting with podman rootless.
Users in containers get assigned a subuid / subgid space from the host.
Files created or updated by a user in the container environment belong to that subuid space, which doesn't exist on the host.
That's where I'm currently stuck. I can calculate the subuid with Ansible and ease access to the container-owned files with ACLs, but I can't get Ansible to write out a Jinja template and chown it to a user that doesn't exist on the host.
I also don't want to work around this by creating a dummy user with a matching UID on the host, since that would probably undermine the security advantages / the rootless concept.
Here is the task:
- name: copy hass main config to storage
  become: yes
  template:
    src: configuration.yaml.j2
    dest: "{{ hass_data_dir }}/configuration.yaml"
    owner: "{{ stat_container_base_dir }}.uid"
    group: "{{ stat_container_base_dir }}.gid"
    mode: 0640
And here is the error message when running the task:
TASK [server/smarthome/homeassistant/podman : copy hass main config to storage] ************************************************************************************************************************
fatal: [odroid]: FAILED! =>
changed: false
checksum: 20c59b4a12d4ebe52a3dd191a80a5091d8e6dc0c
gid: 0
group: root
mode: '0640'
msg: 'chown failed: failed to look up user {''changed'': False, ''stat'': {''exists'':
True, ''path'': ''/home/homeassistant/container'', ''mode'': ''0770'', ''isdir'':
True, ''ischr'': False, ''isblk'': False, ''isreg'': False, ''isfifo'': False,
''islnk'': False, ''issock'': False, ''uid'': 363147, ''gid'': 362143, ''size'':
4096, ''inode'': 4328211, ''dev'': 45826, ''nlink'': 3, ''atime'': 1669416005.068732,
I tried to find help in the module's documentation at: https://docs.ansible.com/ansible/latest/collections/ansible/builtin/template_module.html
My ansible version is: ansible [core 2.13.1]
As you can see in the error message, ansible is missing a user with UID 363147 on the host.
Is there any way to circumvent the check whether a user exists in ansible.builtin.template and similar modules that allow user assignment with owner: and group:?
The only workaround I found was using command, but since I need templates, complexity would increase if I had to render Jinja templates without the Ansible template module.
I would appreciate a hint if I missed an existing option; otherwise I would like to create a pull request for an option like:
ignore_usercheck: true or validate_user: false
Hope you can help me out here :)
It turned out this was only a misleading error message, not a missing feature in Ansible.
I tested with the debug module and found out that the values of stat have to be accessed from inside the curly brackets.
- name: debug
  debug:
    msg: "{{ stat_container_base_dir.stat.uid }}"
What Ansible got was the whole stat structure rendered as a string, not just the UID.
User IDs that don't exist on the host can be assigned.
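Putting it together, a fixed version of the task from the question simply moves the field access inside the Jinja expression:

- name: copy hass main config to storage
  become: yes
  template:
    src: configuration.yaml.j2
    dest: "{{ hass_data_dir }}/configuration.yaml"
    # Access the stat fields inside the expression so Ansible receives the
    # numeric subuid/subgid instead of a stringified dict.
    owner: "{{ stat_container_base_dir.stat.uid }}"
    group: "{{ stat_container_base_dir.stat.gid }}"
    mode: "0640"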

Error HH8: There's one or more errors in your config file: * Invalid value undefined for HardhatConfig.networks.rinkeby.url - Expected a value of type string

I get this error when I try to run my script on the Rinkeby network:
Error HH8: There's one or more errors in your config file:
Invalid value undefined for HardhatConfig.networks.rinkeby.url - Expected a value of type string.
require('@nomiclabs/hardhat-waffle');
require('dotenv').config();

module.exports = {
  solidity: '0.8.1',
  networks: {
    rinkeby: {
      url: process.env.STAGING_ALCHEMY_KEY,
      accounts: process.env.PRIVATE_KEY,
    },
  },
};
.env File
process.env.STAGING_ALCHEMY_KEY=https://eth-rinkeby.dotdotdot
process.env.PRIVATE_KEY=PRIVATE_KEY
Please, what could possibly be the problem?
Could be a few things here, but if you're using create-react-app your .env variables need to be prefixed with REACT_APP. So, for example, your env variable STAGING_ALCHEMY_KEY should be REACT_APP_STAGING_ALCHEMY_KEY. If you use webpack you might need to make some modifications there as well. Hope this helps.
Make sure both .env and hardhat.config.js are in the hardhat-tutorial folder.
Check whether you are in the root folder or not; if not, go to the root of the folder and try again.
Add goerli after networks and before url:
module.exports = {
  solidity: "0.8.4",
  networks: {
    goerli: {
      url: process.env.STAGING_ALCHEMY_KEY,
      accounts: [process.env.PRIVATE_KEY],
    }
  },
};
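One more thing worth checking, beyond the answers above: dotenv parses plain KEY=value lines, so a .env file written with a process.env. prefix (as in the question) would leave both variables undefined. A sketch of the format dotenv expects, reusing the question's placeholder values:

# .env
STAGING_ALCHEMY_KEY=https://eth-rinkeby.dotdotdot
PRIVATE_KEY=PRIVATE_KEY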

DM create bigquery view then authorize it on dataset

Using Google Deployment Manager, has anybody found a way to first create a view in BigQuery, then authorize one or more datasets used by the view, where those datasets are sometimes in different projects and were not created/managed by Deployment Manager? Creating a dataset with a view wasn't too challenging. Here is the Jinja template, named inventoryServices_bigquery_territory_views.jinja:
resources:
- name: territory-{{properties["OU"]}}
  type: gcp-types/bigquery-v2:datasets
  properties:
    datasetReference:
      datasetId: territory_{{properties["OU"]}}
- name: files
  type: gcp-types/bigquery-v2:tables
  properties:
    datasetId: $(ref.territory-{{properties["OU"]}}.datasetReference.datasetId)
    tableReference:
      tableId: files
    view:
      query: >
        SELECT DATE(DAY) DAY, ou, email, name, mimeType
        FROM `{{properties["files_table_id"]}}`
        WHERE LOWER(SPLIT(ou, "/")[SAFE_OFFSET(1)]) = "{{properties["OU"]}}"
      useLegacySql: false
The deployment configuration references the above template like this:
imports:
- path: inventoryServices_bigquery_territory_views.jinja

resources:
- name: inventoryServices_bigquery_territory_views
  type: inventoryServices_bigquery_territory_views.jinja
In the example above, files_table_id is the project.dataset.table that needs the newly created view authorized.
I have seen some examples of managing IAM at the project/folder/org level, but my need is at the dataset level, not the project level. Looking at the resource representation of a dataset, it seems like I can update access.view with the newly created view, but I am a bit lost on how I would do that without removing existing access levels, and for datasets in projects other than the one the new view is created in. Any help appreciated.
Edit:
I tried adding the dataset which needs the view authorized like so, then deploying in preview mode just to see how it interprets the config:
-name: files-source
  type: gcp-types/bigquery-v2:datasets
  properties:
    datasetReference:
      datasetId: {{properties["files_table_id"]}}
    access:
      view:
        projectId: {{env['project']}}
        datasetId: $(ref.territory-{{properties["OU"]}}.datasetReference.datasetId)
        tableId: $(ref.territory_files.tableReference.tableId)
But when I deploy in preview mode it throws this error:
errors:
- code: MANIFEST_EXPANSION_USER_ERROR
  location: /deployments/inventoryservices-bigquery-territory-views-us/manifests/manifest-1582283242420
  message: |-
    Manifest expansion encountered the following errors: mapping values are not allowed here
      in "<unicode string>", line 26, column 7:
            type: gcp-types/bigquery-v2:datasets
            ^ Resource: config
Strange to me; it's hard to make much sense of that error, since the line/column it points to is formatted exactly the same as the other dataset in the config, except that maybe it doesn't like that the files-source dataset already exists and was created outside of Deployment Manager.
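For what it's worth, one plausible reading of that error from the snippet as posted: YAML requires a space after a sequence dash, so -name: is not parsed as a list item. Instead, -name becomes a plain scalar, the indented type: line below it is folded into that scalar, and the colon there is exactly where a parser reports "mapping values are not allowed here". A minimal illustration:

# Fails to parse: no space after the dash.
-name: files-source
  type: gcp-types/bigquery-v2:datasets

# Parses as intended:
- name: files-source
  type: gcp-types/bigquery-v2:datasets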

How to ingest variable data like passwords into compute instance when deploying from template

We are trying to figure out how we can create a Compute Engine template and set some information like passwords with the help of variables at the moment the final instance is generated by Deployment Manager, not in the base image.
When deploying something from the Marketplace, you can see that passwords are generated by "password.py" and stored as metadata in the VM's template. But I can't find the code that writes this data into the VM's disk image.
Could someone explain how this can be achieved?
Edit:
I found out that startup scripts are able to read the instance's metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata). Is this how they do it in Marketplace click-to-deploy scripts like https://console.cloud.google.com/marketplace/details/click-to-deploy-images/wordpress ? Or is there an even better way to accomplish this?
The best way is to use the metadata server.
In a startup script, use this to recover all the attributes of your VM:
curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/attributes/"
Then do what you want with them.
Don't forget to delete secrets from metadata after their use, or change them on the instance. Secrets must stay secret.
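As an illustration of that flow, here is a sketch of a startup script (the attribute name db-password is an example, not something from the original answer):

#!/bin/bash
# Read one attribute that the deployment stored in instance metadata.
PASSWORD=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/db-password")

# ... configure the application with "$PASSWORD" ...

# Remove the secret from metadata afterwards; the instance's service account
# needs permission to set instance metadata for this call to succeed.
ZONE=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/zone" | awk -F/ '{print $NF}')
gcloud compute instances remove-metadata "$(hostname)" --zone "$ZONE" --keys db-password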
By the way, I'd recommend you have a look at another thing: Berglas. Berglas is made by a Google Developer Advocate specialized in security, Seth Vargo. In summary, the principle:
Bootstrap a bucket with Berglas
Create a secret in this bucket with Berglas
Pass the reference to this secret in your compute metadata (berglas://<my_bucket>/<my secret name>)
Use Berglas in the startup script to resolve the secret.
All these actions are possible on the command line, so integration in a script is possible; a sketch follows.
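A command-line sketch of those steps (project, bucket, and secret names are placeholders; the KMS key shown is the one berglas bootstrap creates by default):

# Bootstrap a bucket for Berglas-managed secrets.
berglas bootstrap --project my-project --bucket my-secrets-bucket

# Create a secret in that bucket.
berglas create my-secrets-bucket/db-password "s3cr3t" \
  --key projects/my-project/locations/global/keyRings/berglas/cryptoKeys/berglas-key

# Reference it in the VM metadata as: berglas://my-secrets-bucket/db-password
# Then resolve it in the startup script:
DB_PASSWORD=$(berglas access my-secrets-bucket/db-password)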
You can use Python templates; this gives you more flexibility. In your YAML you can call the Python script to fill in the necessary information. From the documentation:
imports:
- path: vm-template.py

resources:
- name: vm-1
  type: vm-template.py
- name: a-new-network
  type: compute.v1.network
  properties:
    routingConfig:
      routingMode: REGIONAL
    autoCreateSubnetworks: true
where vm-template.py is a Python script:
"""Creates the virtual machine."""
COMPUTE_URL_BASE = 'https://www.googleapis.com/compute/v1/'
def GenerateConfig(unused_context):
"""Creates the first virtual machine."""
resources = [{
'name': 'the-first-vm',
'type': 'compute.v1.instance',
'properties': {
'zone': 'us-central1-f',
'machineType': ''.join([COMPUTE_URL_BASE, 'projects/[MY_PROJECT]',
'/zones/us-central1-f/',
'machineTypes/f1-micro']),
'disks': [{
'deviceName': 'boot',
'type': 'PERSISTENT',
'boot': True,
'autoDelete': True,
'initializeParams': {
'sourceImage': ''.join([COMPUTE_URL_BASE, 'projects/',
'debian-cloud/global/',
'images/family/debian-9'])
}
}],
'networkInterfaces': [{
'network': '$(ref.a-new-network.selfLink)',
'accessConfigs': [{
'name': 'External NAT',
'type': 'ONE_TO_ONE_NAT'
}]
}]
}
}]
return {'resources': resources}
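Tying this back to the password question: GenerateConfig can take the deployment-time context, whose properties come from the YAML, so a generated value can be placed into the instance's metadata and then read by a startup script as shown earlier. A sketch (the dbPassword property and db-password key are illustrative, not from the documentation):

def GenerateConfig(context):
  """Sketch: inject a deployment-time value into instance metadata."""
  return {'resources': [{
      'name': 'the-first-vm',
      'type': 'compute.v1.instance',
      'properties': {
          # ... zone, machineType, disks, networkInterfaces as above ...
          'metadata': {
              'items': [{
                  'key': 'db-password',
                  'value': context.properties['dbPassword'],
              }],
          },
      },
  }]}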
Now, for the password, it depends on whether the VM is Windows or Linux.
On Linux you can add a startup script which injects an SSH public key.
On Windows you can first prepare the proper key; see this: Automate password generation.