Every time I download a "power portal" with the pac CLI command:
pac paportal download -id <guid> --path ./ --overwrite true
Many of the files seem to be regenerated with new short GUIDs on the end, even though they haven't changed, and the sitesettings.yml file gets re-ordered so it shows a bunch of changes.
Below I made one change to a site setting, and I have 134 changes.
Can this be avoided? It makes it frustrating to track actual changes in source control.
If you have multiple records with the same name, a short GUID is appended because files/folders cannot share the same name. If you avoid creating records with exactly the same name (whether active or inactive), you should not face this issue.
This may be a basic question, but I cannot figure out the answer. I have a simple Postman collection that is run through Newman:
newman run testPostman.json -r htmlextra
That generates a nice dynamic HTML report of the test run.
How can I then share that with someone else, e.g. via email? The HTML report is just served from a local URL, and I can't figure out how to save it so it stays in its dynamic state. Right-clicking and saving as .html saves the file, but you lose the ability to click around in it.
I realize that I can change the export path so it saves to some shared drive somewhere, but aside from that is there any other way?
It has already been saved to newman/ in the current working directory, so there is no need to 'Save As' one more time. You can zip it and send it via email.
If you want to change the location of the generated report, check this.
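For example, a minimal sketch assuming the htmlextra reporter's export option (the report path and file name are placeholders):
newman run testPostman.json -r htmlextra --reporter-htmlextra-export ./reports/testPostman-report.html
This writes the same dynamic HTML report to a path of your choosing, which you can then zip and share.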
I have Mautic marketing automation installed on my server (I am a beginner).
However, I reproduced this issue when configuring the GeoLite2-City IP lookup:
Automatically fetching the IP lookup data failed. Download http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.mmdb.gz, extract if necessary, and upload to /home/ol*****/public_html/mautic4/app/cache/prod/../ip_data/GeoLite2-City.mmdb.
What I attempted:
I FTPed into the /home/ol****/public_html/mautic4/app/cache/prod/../ip_data/ directory
and uploaded the file (the original GeoLite2-City.mmdb is 0 bytes, while the newly added file is about 6000 KB).
However, once I go back into Mautic to configure the lookup, the newly added file reverts back to 0 bytes and I still can't get the IP lookup working.
I have also changed the file permissions to 0744, but the issue persists.
Did you disable the cron job which looks for the file? If not, or if you clicked the button again in the dashboard, it will overwrite the file you manually placed there.
As a side note, the 2.16 release addresses this issue, please take a look at https://www.mautic.org/blog/community/announcing-mautic-2-16/.
Please ensure you take a full backup (files and database) and, where possible, run the update at the command line to avoid browser timeouts :)
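If you do place the database by hand, a rough sketch of the shell steps (the URL is the one from the error message above, which may no longer work without a MaxMind account, and the target path is abbreviated exactly as in the question):
# Fetch and unpack the database.
wget http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.mmdb.gz
gunzip GeoLite2-City.mmdb.gz
# Copy it over the empty placeholder file and make sure the web server can read it.
cp GeoLite2-City.mmdb /home/ol****/public_html/mautic4/app/cache/prod/../ip_data/GeoLite2-City.mmdb
chmod 644 /home/ol****/public_html/mautic4/app/cache/prod/../ip_data/GeoLite2-City.mmdb
Remember to keep the cron job / dashboard fetch disabled afterwards, or it will overwrite the file again.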
I'm using Chrome (rather than the Cloud SDK / command line) to repeatedly replace a file in a bucket, dragging and dropping a file to overwrite the existing one, and/or deleting it first and putting it back (changed).
At a certain point the file stops updating and remains in a persistent state, even if I literally rm -r its parent folder.
i.e., I could have /bucket/css/file.css and rm -r /bucket/css and the file will still be available to the public.
From your second answer, it seems that your bucket has the option of “Object Versioning” enabled.
When Object Versioning is enabled for a bucket, Cloud Storage creates an archived version of an object each time the live version of the object is overwritten or deleted.
To verify that “Object Versioning” is enabled on your bucket you can use the following command:
gsutil versioning get gs://[BUCKET_NAME]
The response looks like the following if Object Versioning is enabled:
gs://[BUCKET_NAME]: Enabled
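If it is enabled, you can also list every generation of an object to see the archived copies (a sketch; the bucket and object names are placeholders based on your example path):
gsutil ls -a gs://[BUCKET_NAME]/css/file.css
The -a flag includes noncurrent (archived) generations, each shown with a #generation suffix.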
Note that, according to the official documentation, there is no limit to the number of older versions of an object that will be created if you continue to upload to the same object in a versioning-enabled bucket.
Having said that, I tried to reproduce your case in my own bucket. The steps I followed are:
1. Enable Object Versioning for my bucket.
2. Upload a file to the bucket with the name “example.png”, using the GCP Console.
3. Drag and drop another file with the same name (“example.png”), but with different content.
4. Check the option “Replace existing object”.
5. Check if the file has been updated. It had.
6. Repeated the process 50 times (since you said you had about 40 archived versions of your file), uploading different files one after the other and overriding the previous one each time. Each time I uploaded a file with different content, a new archived version of the object was created and the live file was updated accordingly without any problems.
Please review the steps I followed and let me know if there is any additional action from your side.
Since you are able to delete the files via the gsutil command, deletion is working fine and you have all the permissions required. Would you be able to clear your web browser's cookies and try deleting it again? You can also try using an incognito window to check whether it works.
Furthermore, if Object Versioning is on, you can disable it and try deleting the object again. Note that object deletion cannot be undone; once you delete the object it will be completely removed.
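For example (a sketch; the bucket and object names are placeholders):
# Suspend Object Versioning so no further archived versions are created.
gsutil versioning set off gs://[BUCKET_NAME]
# Delete every generation of the object, live and archived.
gsutil rm -a gs://[BUCKET_NAME]/css/file.css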
Additionally, a good practice suggested along with Object Versioning is to create an Object Lifecycle rule for the bucket that would delete all the objects that have been stored for more than a specific amount of time. You can use this as a workaround for deleting either live or archived versions of your object (if Object Versioning is actually enabled) and to accomplish it, you can follow this link.
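As an illustration (a sketch; the 30-day threshold is arbitrary, and isLive: false limits the rule to archived versions), you could save a rule like this as lifecycle.json and apply it with gsutil:
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 30, "isLive": false}
    }
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://[BUCKET_NAME]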
Generally, you can review Deleting data best practices here.
Note that, according to the Cloud Storage Object Limits, a single object can only be updated or overwritten up to once per second. For more information, check here.
I used gsutil to delete it and it worked... temporarily. It seems there were around 40 cached versions of the file with hash-like IDs.
At some point it stops updating / deleting the file. :(
gsutil rm -r gs://bucket/path/to/folder/
I've set up a LocalStack install based on the article How to fake AWS locally with LocalStack. I've tested copying a file up to the mocked S3 service and it works great.
I started looking for the test file I uploaded. I see there's an encoded version of the file I uploaded inside .localstack/data/s3_api_calls.json, but I can't find it anywhere else.
Given DATA_DIR=/tmp/localstack/data, I was expecting to find it there, but it's not.
It's not critical that I have access to it directly on the file system, but it would be nice.
My question is: Is there anywhere/way to see files that are uploaded to the localstack's mock S3 service?
After the latest update, there is now only one port, which is 4566.
Yes, you can see your file.
Open http://localhost:4566/your-funny-bucket-name/your-weird-file-name in Chrome.
You should be able to see the content of your file now.
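If you prefer the command line, the same URL can be fetched with curl (bucket and key are the placeholders from above, assuming LocalStack is not enforcing credentials):
curl http://localhost:4566/your-funny-bucket-name/your-weird-file-name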
I went back and re-read the original article which states:
"Once we start uploading, we won't see new files appear in this directory. Instead, our uploads will be recorded in this file (s3_api_calls.json) as raw data."
So, it appears there isn't a direct way to see the files.
However, the Commandeer app provides a view into localstack that includes a directory listing of the mocked S3 buckets. There isn't currently a way to see the contents of the files, but the directory structure is enough for what I'm doing. UPDATE: According to #WallMobile it's now possible to see the contents of files too.
You could use the following command:
aws --endpoint-url=http://localhost:4572 s3 ls s3://<your-bucket-name>
In order to list a specific folder in the S3 bucket, you could use this command:
aws --endpoint-url=http://localhost:4566 s3 ls s3://<bucket-name>/<folder-in-bucket>/
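To pull a file out of the mocked bucket and inspect it locally, a copy along the same lines should work (a sketch; bucket, folder and file names are placeholders):
aws --endpoint-url=http://localhost:4566 s3 cp s3://<bucket-name>/<folder-in-bucket>/<file-name> ./<file-name>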
The image is saved as a base64-encoded string in the file recorded_api_calls.json.
I have passed DATA_DIR=/tmp/localstack/data
and the file is saved at /tmp/localstack/data/recorded_api_calls.json
Open the file and copy the data field (d) from any API call that looks like this:
"a": "s3", "m": "PUT", "p": "/bucket-name/foo.png"
and extract the data using base64 decoding.
You can use this script to extract data from localstack s3
https://github.com/nkalra0123/extract-data-from-localstack-s3/blob/main/script.sh
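If you'd rather not use the script, a rough equivalent in the same spirit (the a/m/p/d field names come from the answer above; the assumption that the file holds one JSON record per API call, and the /bucket-name/foo.png key, are just for illustration):
# Pull the base64 payload of the PUT for /bucket-name/foo.png and decode it back into a file.
jq -r 'select(.a == "s3" and .m == "PUT" and .p == "/bucket-name/foo.png") | .d' \
  /tmp/localstack/data/recorded_api_calls.json | base64 --decode > foo.png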
From my understanding, LocalStack saves the data in memory by default; this is what happens unless you specify a data directory. Obviously, if the data is only in memory, you won't see any files anywhere.
To create a data directory you can run a command such as:
mkdir ~/.localstack
Then you have to instruct localstack to save the data at that location. You can do so by adding the DATA_DIR=... path and a volume in your docker-compose.yml file like so:
localstack:
  image: localstack/localstack:latest
  ports:
    - 4566:4566
    - 8055:8080
  environment:
    - SERVICES=s3
    - DATA_DIR=/tmp/localstack/data
    - DOCKER_HOST=unix:///var/run/docker.sock
  volumes:
    - /home/alexis/.localstack:/tmp/localstack
Then rebuild and restart the container.
Once the LocalStack process has started, you'll see a JSON database under ~/.localstack/data/....
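For instance, with the volume mapping above you should see something like this (file name taken from the answers above):
ls ~/.localstack/data/
# recorded_api_calls.json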
WARNING: if you are dealing with very large files (GB), then it is going to be DEAD SLOW. The issue is that all the data is going to be saved inside that one JSON file in base64. In other words, it's going to generate a file much bigger than what you sent, and re-reading it is also going to take a very long time. It may be possible to fix this issue by setting the legacy storage mechanism to false:
LEGACY_PERSISTENCE=false
I have not tested that flag (yet).
I'm new to Alexa Skill development and I'm sure this issue is process/environmental due to lack of experience.
Whenever I try to use a sample from an official Alexa tutorial, I can never get the skill to pass the first TEST - always getting an error :(
In this case I am trying to run and fiddle with this tutorial:
https://developer.amazon.com/blogs/post/TxHGKH09BL2VA1/New-Alexa-Skills-Kit-Template-Step-by-Step-Guide-to-Build-a-Decision-Tree-Skill
What is happening / What I've done:
I download the Node SDK from the Git link, and I also download the sample from the Git link. I then create a new ZIP that contains the sample code, with the Node SDK included at the path /src/alexa-sdk/.
I go to AWS and create a new function, not using a blueprint. I 'author from scratch' and create a function with the Skills Kit as a trigger. I name the function and use Node 6.10 runtime.
I upload my ZIP file and leave all boxes default, for Role I choose Custom Role then pick Basic Execution from the Role screen.
I leave the rest blank, go to NEXT and CREATE.
The function is created okay, but I do see this error 'This function contains external libraries. Uploading a new file will override these libraries.'
Here's the problem - this is the point of failure on all tutorials I've tried so far. I go to Configure Test Event, I choose ALEXA START SESSION as the template and click Save And Test...
EXECUTION RESULT FAILED:
{
  "errorMessage": "Cannot find module '/var/task/index'",
  "errorType": "Error",
  "stackTrace": [
    "require (internal/module.js:20:19)"
  ]
}
Here's something from the associated error logs; I'm unsure if it's useful:
Unable to import module 'index': Error
at Module.require (module.js:497:17)
at require (internal/module.js:20:19)
I have noticed two things that I suspect may be an issue:
1) When I go to the CODE tab for this function, I see this message:
Your Lambda function "testprojectx" cannot be edited inline since the file name specified in the handler does not match a file name in your deployment package.
2) When I look at the code that's inserted into the test when I choose ALEXA SESSION START, I see many instances of 'unique value here':
amzn1.echo-api.session.[unique-value-here]
However, there is no mention of this in the tutorial link I am referencing.
I'm really downhearted about it now, as this is the third tutorial's code I've tried to configure. Can anybody with experience follow the steps I've taken and point me in the right direction?
Thank you SO MUCH in advance if so.
EDIT: Absolute Clarification on how I am creating the ZIP file
I'm using Windows 10 and Chrome to download the files from GitHub.
I download the skill-sample-nodejs-decision-tree-master ZIP file from GitHub,
I do not know how to use NPM so I do this simply via downloading to desktop.
I then download the alexa-skills-kit-sdk-for-nodejs-master.ZIP file to desktop.
I unzip the contents of decision-tree-master into a folder on the desktop also called alexa-skills-kit-sdk-for-nodejs-master.
Within this folder, I navigate to /src/ and create a new folder called 'node_modules' within /src/.
Within /src/node_modules/ I now create another new folder called 'alexa-sdk'.
I unzip the contents of alexa-skills-kit-sdk-for-nodejs-master.zip into /src/node_modules/alexa-sdk/.
I have tried two approaches from here - both fail:
1) I ZIP only the contents of /src/ (not including the /src/ folder itself) and upload to Amazon.
2) I ZIP the entire 'decision-tree-master' folder and upload to Amazon.
I must be missing something. As I said, this is just one of many Alexa tutorials I've tried to get working, and this always happens :( So disheartened now.
This is a common issue I have seen in many posts. In most cases it is the way the files are zipped that causes the problem. Instead of zipping the folder, you have to select all the files inside it and zip those, so that index.js (the file named in your handler) sits at the root of the archive.
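From the command line, that looks roughly like this (a sketch; the folder name is taken from the question, and I'm assuming index.js sits directly under /src/ and the Lambda handler is set to index.handler):
# Zip the *contents* of the source folder, not the folder itself,
# so that index.js ends up at the top level of the archive.
cd skill-sample-nodejs-decision-tree-master/src
zip -r ../skill.zip .
# Verify: index.js and node_modules/ should be listed with no leading folder.
unzip -l ../skill.zip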