I have a problem with Postman and FastAPI.
I wrote this code for FastAPI. The first endpoint is for a single image, the second for multiple images. In the FastAPI Swagger UI it all works fine, but when I send POST requests with files or images from Postman, they fail. I get this:
{
    "detail": [
        {
            "loc": [
                "body",
                "file"
            ],
            "msg": "field required",
            "type": "value_error.missing"
        }
    ]
}
And here is the code:
import shutil
from typing import List

from fastapi import FastAPI, File, UploadFile

app = FastAPI()

# single file upload
@app.post("/upload")
async def single(file: UploadFile = File(...)):
    with open(f'{file.filename}', "wb") as buffer:
        shutil.copyfileobj(file.file, buffer)
    return {'index.html': file.filename}

# multiple file upload
@app.post("/img")
async def upload_images(files: List[UploadFile] = File(...)):
    for file in files:
        with open(f'{file.filename}', "wb") as buffer:
            shutil.copyfileobj(file.file, buffer)
    return {"file_name": file.filename}
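For reference, requests built outside Swagger must send the file as multipart/form-data with a field key that exactly matches the parameter name (file for /upload, files for /img); a mismatched key is the usual cause of this "field required" error. A minimal client sketch with the requests library, assuming the app runs on localhost:8000 and the image filenames are hypothetical:

import requests

# the multipart field key must match the endpoint's parameter name ("file")
with open("photo.jpg", "rb") as f:
    resp = requests.post("http://localhost:8000/upload", files={"file": f})
print(resp.json())

# for the multiple-file endpoint, the key "files" is repeated once per file
resp = requests.post(
    "http://localhost:8000/img",
    files=[("files", open("a.jpg", "rb")), ("files", open("b.jpg", "rb"))],
)
print(resp.json())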
I set up a Google Cloud Scheduler job that triggers a Cloud Function over HTTP. I can be sure that the Cloud Function is triggered and runs successfully - it has produced the expected outcome.
However, the scheduler job still shows "failed", and the log entry looks like this:
{
    "insertId": "8ca551232347v49",
    "jsonPayload": {
        "jobName": "projects/john/locations/asia-southeast2/jobs/Get_food",
        "status": "UNKNOWN",
        "url": "https://asia-southeast2-john.cloudfunctions.net/Get_food",
        "#type": "type.googleapis.com/google.cloud.scheduler.logging.AttemptFinished",
        "targetType": "HTTP"
    },
    "httpRequest": {},
    "resource": {
        "type": "cloud_scheduler_job",
        "labels": {
            "job_id": "Get_food",
            "location": "asia-southeast2",
            "project_id": "john"
        }
    },
    "timestamp": "2020-10-22T04:08:24.521610728Z",
    "severity": "ERROR",
    "logName": "projects/john/logs/cloudscheduler.googleapis.com%2Fexecutions",
    "receiveTimestamp": "2020-10-22T04:08:24.521610728Z"
}
I have pasted the cloud function code below with edits necessary to remove sensitive information:
import requests
import pymysql
from pymysql.constants import CLIENT
from google.cloud import storage
import os
import time
from DingBot import DING_BOT
from decouple import config
import datetime

BUCKET_NAME = 'john-test-dataset'
FOLDER_IN_BUCKET = 'compressed_data'
LOCAL_PATH = '/tmp/'
TIMEOUT_TIME = 500

def run(request):
    """Responds to any HTTP request.
    Args:
        request (flask.Request): HTTP request object.
    Returns:
        The response text or any set of values that can be turned into a
        Response object using
        `make_response <http://flask.pocoo.org/docs/1.0/api/#flask.Flask.make_response>`.
    """
    while True:
        # some code that will break the loop in about 200 seconds
        ...
    DING_BOT.send_text(msg)
    return 'ok'
What I can be sure of is that the line right before the end of the function, DING_BOT.send_text(msg), executed successfully. I have received the text message.
What could be wrong here?
This is a common problem caused by the partial UI of the Google Cloud Console. So, my hypothesis is that you set up your scheduler with the console only.
You need to create the job, or update it, with the command line (gcloud) or the API (gcloud is easier) in order to add the "attempt-deadline" parameter.
In fact, Cloud Scheduler also has a timeout (60s by default), and if the URL doesn't answer within this timeframe, the call is considered failed.
Increase this parameter to 250s, and it should be OK.
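For example, a sketch of the update command, using the job name and region from your log (the 250s value mirrors the suggestion above):

gcloud scheduler jobs update http Get_food \
    --location=asia-southeast2 \
    --attempt-deadline=250s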
Note: you can also set retry policies with the CLI; it could be interesting if you need it!
I've been trying to set up a React component inside Django with the help of Webpack 4.
To get me started, I went through and read:
Using Webpack transparently with Django + hot reloading React components as a bonus
Tutorial: Django REST with React (Django 2.0 and a sprinkle of testing)
Both these walkthroughs are great. At last, I got it almost working by following the second link, even though I use Django 1.11.
The problem I had after following the second link was that hot reloading does not work when using a webpack-dev-server. The issue is that Django cannot read the output file of the webpack-dev-server (it gives a 404 error), while the main.js can be read. I've read that the dev-server files only live in memory by default.
To overcome the 404 error on the hot-reload files, I installed the package write-file-webpack-plugin to write out the file on each reload. Then I changed webpack.config.js to (I deleted some lines to keep it shorter...):
var path = require('path');
// webpack is not needed since I removed it from plugins
// const webpack = require('webpack');
var BundleTracker = require('webpack-bundle-tracker');
var WriteFilePlugin = require('write-file-webpack-plugin');

module.exports = {
    module: {
        rules: [
            {
                test: /\.js$/,
                exclude: /node_modules/,
                use: {
                    loader: "babel-loader"
                }
            },
        ]
    },
    entry: [
        './frontend/src/index',
    ],
    output: {
        path: path.join(__dirname, 'frontend/static/frontend'),
        // Changed the name from "[name]-[hash].js" to avoid thousands of files in the static folder.
        filename: 'hotreloadfile.js'
    },
    plugins: [
        // This line writes the file on each hot reload
        new WriteFilePlugin(),
        // This can be removed.
        // new webpack.HotModuleReplacementPlugin(),
        new BundleTracker({filename: './webpack-stats.json'})
    ],
    mode: 'development',
};
In my package.json I have the following line in the scripts section:
"start": "webpack-dev-server --config ./webpack.config.js",
And in Django I installed webpack-loader with the following lines in settings.py:
STATIC_URL = '/static/'

WEBPACK_LOADER = {
    'DEFAULT': {
        'BUNDLE_DIR_NAME': 'frontend/',
        'STATS_FILE': os.path.join(BASE_DIR, 'webpack-stats.json')
    }
}
Finally, in my root component called index.js, I do not need the module.hot.accept(); line
Do you see any drawbacks to this approach, other than having to install another package?
Why didn't I get it to work with new webpack.HotModuleReplacementPlugin()?
Here is another approach if you develop the frontend in React and the backend in Django.
I have the Django server running on port 8000 and the React dev server running on port 3000.
If I add the "proxy": "http://localhost:8000" line to the React app's package.json, localhost:3000 will do hot reloading while API calls go to localhost:8000.
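A minimal sketch of where that line goes, assuming a standard create-react-app package.json ("name" and "scripts" are illustrative):

{
  "name": "frontend",
  "proxy": "http://localhost:8000",
  "scripts": {
    "start": "react-scripts start"
  }
}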
I have a Lambda function, which is a basic Python GET call to an API. It works fine locally; however, when I upload it to Lambda (along with the requests library), it will not return the JSON response from the API call. I simply want it to return the entire JSON object to the caller. Am I doing something fundamentally wrong here? I stumbled across a couple of articles saying that returning JSON from a Python Lambda function is not supported.
Here is the code:
import requests
import json

url = "http://url/api/projects/"
headers = {
    'content-type': "application/json",
    'x-octopus-apikey': "redacted",
    'cache-control': "no-cache"
}

def lambda_handler(event, context):
    response = requests.request("GET", url, headers=headers)
    return response
My package contains the requests library and dist, and the json library (I don't think it needs the latter, though). The error message returned is:
{
    "stackTrace": [
        [
            "/usr/lib64/python2.7/json/__init__.py",
            251,
            "dumps",
            "sort_keys=sort_keys, **kw).encode(obj)"
        ],
        [
            "/usr/lib64/python2.7/json/encoder.py",
            207,
            "encode",
            "chunks = self.iterencode(o, _one_shot=True)"
        ],
        [
            "/usr/lib64/python2.7/json/encoder.py",
            270,
            "iterencode",
            "return _iterencode(o, 0)"
        ],
        [
            "/var/runtime/awslambda/bootstrap.py",
            41,
            "decimal_serializer",
            "raise TypeError(repr(o) + \" is not JSON serializable\")"
        ]
    ],
    "errorType": "TypeError",
    "errorMessage": "<Response [200]> is not JSON serializable"
}
I've resolved this - the problem with my Python code was that it was trying to return the entire response object rather than just the JSON body (my local version prints response.text). In addition, I have ensured that the response is JSON formatted rather than raw text. Updated code:
import requests
import json

url = "http://url/api/projects/"
headers = {
    'content-type': "application/json",
    'x-octopus-apikey': "redacted",
    'cache-control': "no-cache"
}

def lambda_handler(event, context):
    response = requests.request("GET", url, headers=headers)
    try:
        output = response.json()
    except ValueError:
        output = response.text
    return output
I was also getting the same error, and after a while I was able to solve it by changing the response code in the Lambda (Python 3.6):
Change: response['Body'].read() to response['Body'].read().decode()
This way you will get JSON, though in my case I got / characters everywhere, which I removed later.
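For context, response['Body'] in this snippet is presumably a boto3 StreamingBody (e.g. from s3.get_object), whose read() returns bytes rather than str; a hypothetical sketch of the fix:

import json
import boto3

s3 = boto3.client('s3')

# hypothetical bucket and key; get_object returns a StreamingBody under 'Body',
# and .read() yields bytes, so .decode() is needed before json.loads()
response = s3.get_object(Bucket='my-bucket', Key='data.json')
output = json.loads(response['Body'].read().decode())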
I have a function right now that runs youtube-dl to convert a video.
def start_audio_extraction(url, audio_filename):
    localfile = 'music/%s.mp3' % audio_filename
    temp_filepath = os.environ.get(s3.Object(bucketname, localfile))
    ydl_opts = {
        'format': 'bestaudio/best',  # choice of quality
        'extractaudio': True,        # only keep the audio
        'outtmpl': temp_filepath,    # name the location
        'noplaylist': True,          # only download single song, not playlist
        'prefer-ffmpeg': True,
        # 'verbose': True,
        'postprocessors': [{
            'key': 'FFmpegMetadata'
        },
        {
            'key': 'FFmpegExtractAudio',
            'preferredcodec': 'mp3',
            'preferredquality': '192',
        }],
        'logger': MyLogger(),
        'progress_hooks': [my_hook],
    }
    with youtube_dl.YoutubeDL(ydl_opts) as ydl:
        result = ydl.download([url])
        return result
But the problem is that when I run this, I end up getting this error:
File "/home/john/.virtualenvs/yout/local/lib/python2.7/site-packages/youtube_dl/YoutubeDL.py", line 578, in prepare_filename
    tmpl = compat_expanduser(outtmpl)
File "/home/john/.virtualenvs/yout/local/lib/python2.7/site-packages/youtube_dl/compat.py", line 353, in compat_expanduser
    if not path.startswith('~'):
AttributeError: 'NoneType' object has no attribute 'startswith'
I tried asking in the youtube-dl repository and was told that outtmpl must be a string.
Since I believe the S3 object is the problem here, is my only solution to move hosting over to Amazon?
You can use something like goofys to redirect youtube-dl's output to S3.
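Alternatively (my own sketch, separate from the goofys approach): since outtmpl must be a plain string, you could download to local disk first and copy the finished file into the bucket with boto3. The bucket name and paths below are illustrative:

import boto3
import youtube_dl

s3 = boto3.client('s3')

def extract_and_upload(url, audio_filename):
    # outtmpl must be a plain string path; let youtube-dl fill in the extension
    local_template = '/tmp/%s.%%(ext)s' % audio_filename
    ydl_opts = {
        'format': 'bestaudio/best',
        'outtmpl': local_template,
        'postprocessors': [{
            'key': 'FFmpegExtractAudio',
            'preferredcodec': 'mp3',
            'preferredquality': '192',
        }],
    }
    with youtube_dl.YoutubeDL(ydl_opts) as ydl:
        ydl.download([url])
    # after FFmpegExtractAudio runs, the result is an .mp3 at the template path
    s3.upload_file('/tmp/%s.mp3' % audio_filename, 'my-bucket',
                   'music/%s.mp3' % audio_filename)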
I am new to Django, and I am using django-multiuploader 0.2.40 for my project.
What I want is to upload files in the .pcd format (from the PCL library), but when I use this plugin app, I get
[IOError: cannot identify image file]
so I added some entries to the MULTIUPLOADER_FORMS_SETTINGS, like:
'default': {
    'FILE_TYPES': ['pcd', 'jpg', 'jpeg', ...],
    'CONTENT_TYPES': [
        'text/pcd',
        'image/jpeg',
        'image/png',
        ...
    ],
},
'images': {
    'FILE_TYPES': ['pcd', 'jpg', ...],
    'CONTENT_TYPES': [
        'image/pcd',
        'image/gif',
        ...
    ],
},
And this makes no difference; I still get the IOError.
But the strange thing is that the .pcd file is still saved successfully to my database.
Did I do something wrong?
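"cannot identify image file" is the message PIL/Pillow's Image.open raises when a file is not a recognizable image, so my assumption is that the uploader tries to open every upload as an image (for example, to build a preview), which a PCL point cloud will always fail; the file itself is still saved, which would match what you see. A quick way to reproduce the error outside Django (scan.pcd stands in for a hypothetical point-cloud file):

from PIL import Image

# a .pcd point-cloud file is not an image, so PIL cannot identify it
try:
    Image.open('scan.pcd')
except IOError as exc:
    print(exc)  # "cannot identify image file ..."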