I have created a questionnaire with Django, and in my views.py I have the following code as part of a function:
if text is not None:
    for answer in datas:
        f = open('/Users/arsenios/Desktop/data.txt', 'a')
        f.write(answer + ",")
    f.write("\n")
    f.close()
This works fine locally. It creates a text file on the desktop and appends the data of each person who completes the questionnaire. When I run the code on OpenShift I get:
"[Errno 2] No such file or directory: '/Users/arsenios/Desktop/data.txt'".
I have seen some people asking and mentioning "OPENSHIFT_DATA_DIR" but I feel like there are steps they haven't included. I don't know what changes I should make to settings.py and views.py.
Any help would be appreciated.
OPENSHIFT_DATA_DIR is an OpenShift 2 environment variable and is not set in OpenShift 3.
The bigger question is whether that is a temporary file or whether it needs to persist across restarts of the application container. If it is only temporary, use a path under the /tmp directory. If it needs to be persistent, then you need to look at mounting a persistent volume to save the data in, or look at using a separate database with its own persistent storage.
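For example, a minimal sketch of the view code, assuming you pick the directory from an environment variable (DATA_FILE_DIR is a made-up name here) that you point at the persistent volume's mount path, falling back to /tmp for throwaway data:

import os

# DATA_FILE_DIR is a hypothetical variable: set it in the deployment config to
# the mount path of a persistent volume, or leave it unset to fall back to /tmp.
DATA_DIR = os.environ.get("DATA_FILE_DIR", "/tmp")
DATA_FILE = os.path.join(DATA_DIR, "data.txt")

def save_answers(datas):
    # Append one comma-separated line per completed questionnaire.
    with open(DATA_FILE, "a") as f:
        f.write(",".join(datas) + "\n")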
For an explanation of some of the fundamentals of using OpenShift 3, I suggest you look at the free eBook at:
https://www.openshift.com/deploying-to-openshift/
I managed to solve it. It turns out the data was getting saved in data.txt on OpenShift, and the command I had to use was oc rsync pod:/opt/app-root/src/data.txt /path/to/directory. This command downloads the data.txt file from OpenShift to the local directory I specify, so in my case I had to use oc rsync save-4-tb2dm:/opt/app-root/src/data.txt /Users/arsenios/Desktop.
I recently moved my project from Heroku to Google Cloud. It's written in Flask and basically does some text summarization (nothing fancy) of an uploaded .docx file. I was able to work with local files on Heroku thanks to its ephemeral file system.
With Google Cloud, I'm finding myself lost trying to use an uploaded file and run Python functions on it.
The error I'm getting is:
with open(self.file, 'rb') as file: FileNotFoundError: [Errno 2] No such file or directory: 'http://storage.googleapis.com/...'
I've edited the specifics out for now, but when I open the link in a browser it brings up the download window. I know the file gets there, since when I go to Google Cloud everything is in the proper bucket.
Also, is there a way to delete the file from the bucket immediately after Python goes through the document? I currently have the lifecycle set to a day, but I only need the data temporarily.
I'm sorry if these are silly questions. Very new to this and trying to learn.
Thanks
Oh, and here's the current code:
from flask import request
from google.cloud import storage
from werkzeug.utils import secure_filename

gcs = storage.Client()
bucket = gcs.bucket(CLOUD_STORAGE_BUCKET)  # CLOUD_STORAGE_BUCKET is the bucket name, set elsewhere in the config
user_file = request.files['file']
local = secure_filename(user_file.filename)
blob = bucket.blob(local)
blob.upload_from_string(user_file.read(), content_type=user_file.content_type)
this_file = f"http://storage.googleapis.com/{CLOUD_STORAGE_BUCKET}/{local}"
Then a function is supposed to open this_file, which is returned as a public URL to the file to be processed and used:
def open_file(self):
    url = self.file
    file = BytesIO(requests.get(url).content)
    return docx.Document(file)
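One way around the FileNotFoundError would be to read the object back through the google-cloud-storage client instead of handing the public URL to a file-opening call; here is a minimal sketch, where summarize_blob is a made-up helper name (download_as_bytes() and delete() are standard Blob methods):

from io import BytesIO

import docx
from google.cloud import storage

def summarize_blob(bucket_name, blob_name):
    # Fetch the object straight from Cloud Storage rather than over HTTP.
    gcs = storage.Client()
    blob = gcs.bucket(bucket_name).blob(blob_name)
    document = docx.Document(BytesIO(blob.download_as_bytes()))
    # Delete the object as soon as it has been processed, instead of
    # waiting for the one-day lifecycle rule.
    blob.delete()
    return document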
I have Mautic marketing automation installed on my server (I am a beginner).
I ran into this issue when configuring the GeoLite2-City IP lookup:
Automatically fetching the IP lookup data failed. Download http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.mmdb.gz, extract if necessary, and upload to /home/ol*****/public_html/mautic4/app/cache/prod/../ip_data/GeoLite2-City.mmdb.
What I attempted:
I FTPed into the /home/ol****/public_html/mautic4/app/cache/prod/../ip_data/ directory
and uploaded the file (the original GeoLite2-City.mmdb is 0 bytes, while the newly added file is about 6000 KB).
However, once I go back into Mautic to set up the lookup, the newly added file reverts back to 0 bytes and I still can't get the IP lookup configured.
I have also changed the file permission to 0744, but the issue still persists.
Did you disable the cron job which looks for the file? If not, or if you clicked the button again in the dashboard, it will overwrite the file you manually placed there.
As a side note, the 2.16 release addresses this issue, please take a look at https://www.mautic.org/blog/community/announcing-mautic-2-16/.
Please ensure you take a full backup (files and database) and where possible, run the update at command line to avoid browser timeouts :)
I've set up a LocalStack install based on the article How to fake AWS locally with LocalStack. I've tested copying a file up to the mocked S3 service and it works great.
I started looking for the test file I uploaded. I see there's an encoded version of the file I uploaded inside .localstack/data/s3_api_calls.json, but I can't find it anywhere else.
Given DATA_DIR=/tmp/localstack/data, I was expecting to find it there, but it's not.
It's not critical that I have access to it directly on the file system, but it would be nice.
My question is: Is there anywhere/way to see files that are uploaded to the localstack's mock S3 service?
After the latest update, there is now only one port, which is 4566.
Yes, you can see your file.
Open http://localhost:4566/your-funny-bucket-name/your-weird-file-name in Chrome.
You should be able to see the content of your file now.
I went back and re-read the original article which states:
"Once we start uploading, we won't see new files appear in this directory. Instead, our uploads will be recorded in this file (s3_api_calls.json) as raw data."
So, it appears there isn't a direct way to see the files.
However, the Commandeer app provides a view into localstack that includes a directory listing of the mocked S3 buckets. There isn't currently a way to see the contents of the files, but the directory structure is enough for what I'm doing. UPDATE: According to #WallMobile it's now possible to see the contents of files too.
You could use the following command:
aws --endpoint-url=http://localhost:4572 s3 ls s3://<your-bucket-name>
In order to list a specific folder in the S3 bucket, you could use this command:
aws --endpoint-url=http://localhost:4566 s3 ls s3://<bucket-name>/<folder-in-bucket>/
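If you'd rather do the same from Python, here is a minimal boto3 sketch (the bucket and key names are placeholders, and it assumes the single edge port 4566 mentioned above):

import boto3

# Point the client at LocalStack's edge port instead of real AWS;
# LocalStack accepts dummy credentials.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",
    aws_access_key_id="test",
    aws_secret_access_key="test",
    region_name="us-east-1",
)

# List the objects in the bucket, then download one of them.
for obj in s3.list_objects_v2(Bucket="your-bucket-name").get("Contents", []):
    print(obj["Key"])

s3.download_file("your-bucket-name", "your-file-name", "local-copy")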
The image is saved as a base64-encoded string in the file recorded_api_calls.json.
I have passed DATA_DIR=/tmp/localstack/data
and the file is saved at /tmp/localstack/data/recorded_api_calls.json
Open the file and copy the data (d) from any API call that looks like this
"a": "s3", "m": "PUT", "p": "/bucket-name/foo.png"
Extract this data using base64 decoding.
You can use this script to extract data from localstack s3
https://github.com/nkalra0123/extract-data-from-localstack-s3/blob/main/script.sh
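If you don't want to use the shell script, here is a rough Python equivalent, assuming (as described above) that recorded_api_calls.json holds one JSON record per line with keys a, m, p and d:

import base64
import json

RECORD_FILE = "/tmp/localstack/data/recorded_api_calls.json"

with open(RECORD_FILE) as records:
    for line in records:
        call = json.loads(line)
        # Only S3 PUT calls carry uploaded object data in "d".
        if call.get("a") == "s3" and call.get("m") == "PUT" and call.get("d"):
            # "p" is the request path, e.g. "/bucket-name/foo.png".
            filename = call["p"].rsplit("/", 1)[-1]
            with open(filename, "wb") as out:
                out.write(base64.b64decode(call["d"]))
            print(f"wrote {filename}")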
From my understanding, LocalStack saves the data in memory by default; this is what happens unless you specify a data directory. Obviously, if it's in memory, you won't see any files anywhere.
To create a data directory you can run a command such as:
mkdir ~/.localstack
Then you have to instruct localstack to save the data at that location. You can do so by adding the DATA_DIR=... path and a volume in your docker-compose.yml file like so:
localstack:
  image: localstack/localstack:latest
  ports:
    - 4566:4566
    - 8055:8080
  environment:
    - SERVICES=s3
    - DATA_DIR=/tmp/localstack/data
    - DOCKER_HOST=unix:///var/run/docker.sock
  volumes:
    - /home/alexis/.localstack:/tmp/localstack
Then rebuild and start the container.
Once the LocalStack process has started, you'll see a JSON database under ~/.localstack/data/....
WARNING: if you are dealing with very large files (GB), then it is going to be DEAD SLOW. The issue is that all the data is saved inside that one JSON file, base64-encoded. In other words, it generates a file much bigger than what you sent, and re-reading it is also going to be very slow. It may be possible to fix this issue by setting the legacy storage mechanism to false:
LEGACY_PERSISTENCE=false
I have not tested that flag (yet).
On my local machine, RStudio + Shiny works properly.
Now I have Shiny Server installed on Linux, but I do not know how to get the data generated in RStudio onto the server.
How can I get Shiny Server to read it?
I don't even know what keywords to search for.
Thanks
Importing data on the server
As I see it, there are two ways to supply data in this situation.
The first one is to upload the data to the server where your shiny-apps are hosted. This can be done via ssh (wget) or something like FileZilla. You can put your data in the same folder as the app and then access them with relative paths. For example if you have
- app-folder
  - app.R
  - data.rds
  - more_data.csv
You can use readRDS("data.rds") or readr::read_csv2("more_data.csv") in app.R to use the data in the app.
The second option is to use fileInput inside your app. This will give you the option to upload data from your local machine in the GUI. This data will then be put onto the server temporarily. See ?shiny::fileInput.
Exporting data from RStudio
There are numerous ways to do this. You can use save to write your whole workspace to disk. If you just want to save single objects, saveRDS is quite handy. If you want to save datasets (for example data.frames) you can also use readr::write_csv or similar functions.
I have uploaded a simple 10-row CSV file (from S3) into the AWS ML website. It keeps giving me the error:
"We cannot find any valid records for this datasource."
There are records there, and the Y variable is continuous (not binary). I am pretty much stuck at this point because there is only one button to move forward to build the machine learning model. Does anyone know what I should do to fix it? Thanks!
The only way I have been able to upload .csv files created on my own to S3 is by downloading an existing .csv file from my S3 bucket, modifying the data, uploading it, then changing the name in the S3 console.
Could you post the first few lines of the .csv file? I am able to upload my own .csv file along with a schema that I have created, and it is working. However, I did have issues in that Amazon ML was unable to create the schema for me.
Also, did you try saving the data in something like Sublime, Notepad++, etc. in order to get a different format? On my Mac with Microsoft Excel the CSV did not work, but when I tried LibreOffice on my Windows machine, the same file worked perfectly.