I am using stream-django to create a newsfeed in my DRF project, but when I try to add an activity I get this error:
StreamApiException: GetStreamAPI404
I have set STREAM_API_KEY and STREAM_API_SECRET correctly.
Could you please help me? I am using stream-django for the first time.
stream-django uses the stream-python library, which looks for an environment variable called LOCAL; if it finds it, it assumes the Stream server is a local one. Unfortunately, I had set a LOCAL variable myself for local development purposes. While digging inside the library I found this and removed the variable. Since then it has worked as expected.
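As a sanity check, you can make sure the variable is gone before the client is created. This is only a sketch, assuming the standard stream-python connect() entry point and placeholder credentials:

import os
import stream

# stream-python falls back to a local Stream server when a LOCAL environment
# variable is present, so remove it before creating the client.
os.environ.pop("LOCAL", None)

client = stream.connect("YOUR_STREAM_API_KEY", "YOUR_STREAM_API_SECRET")
client.feed("user", "1").add_activity({"actor": 1, "verb": "post", "object": 1})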
I'm writing a dApp using the web3Modal and web3js libraries.
I have an issue regarding the WalletConnect provider.
Upon choosing to connect using WalletConnect, the QR code doesn't show up and I'm immediately connected to the previous wallet (an old local test network).
I tried looking for an option in the WalletConnectProvider API and the web3js libraries, without success.
Opening the website in incognito mode DOES work, but loading the page without the cache (Ctrl+Shift+F5 in Chrome) does not, nor does disabling caching via HTML headers.
I'm not sure what I'm missing, as the connection is clearly saved somewhere, but not in the cache.
First you need to use connector.killSession() and then clear local storage.
Solution: clear local storage using localStorage.clear().
More granularity should be possible if need be.
I've created a docker container (ubuntu:focal) with a C++ application that is using boost::filesystem (v1.76.0) to create some directories while processing data. It works if I run the container locally, but it fails when deployed to Cloud Run.
A simple statement like
boost::filesystem::exists(boost::filesystem::current_path())
fails with "Invalid argument '/current/path/here'". It doesn't work in this C++ application, but from a Python app running equivalent statements, it does work.
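The Python counterparts are roughly along these lines (a sketch with assumed paths), and they run without errors on Cloud Run:

import os

# Rough Python equivalents of the Boost calls above; these succeed on Cloud Run.
cwd = os.getcwd()
print(os.path.exists(cwd))                      # like boost::filesystem::exists(current_path())
os.makedirs("/tmp/example_dir", exist_ok=True)  # like create_directories
os.rmdir("/tmp/example_dir")                    # like remove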
Reading the docs, I can see that Cloud Run uses gVisor and that not all system calls are fully supported (https://gvisor.dev/docs/user_guide/compatibility/linux/amd64/); nevertheless, I would expect simple calls to work: checking whether a directory exists, creating a directory, removing one, and so on.
Maybe I'm doing something wrong when deploying my container. Is there any way to work around it? Is there any Boost configuration I can use to prevent it from using certain syscalls?
Thanks for your help!
I am running Apache Superset at the following address:
http://superset.example.com:8088
That gets redirected to:
http://superset.example.com:8088/superset/welcome
Ideally, users would get redirected to:
http://superset.example.com:8088/welcome
How can that be accomplished? I would also like it to run on port 80 so the port doesn't need to be specified, but I haven't been able to do that either.
This issue covers what you're talking about:
https://github.com/apache/incubator-superset/issues/985
which led to this closed PR:
https://github.com/apache/incubator-superset/pull/1866
You can try to reopen the PR and finish it, or you can try configuring nginx as the commenter there suggests.
I found it very frustrating to set up a base URL for Superset. If you want to save some time, I condensed a couple of comments into a working example here: https://github.com/komoot/superset-reverse-nginx-example
Below is how I eventually made it run on an endpoint other than '/'. My use case was to make it work on AWS Lambda in a Serverless environment.
Here is what I eventually did to make it work:
1. In config.py I added another configuration variable and used it in the places where redirect or appbuilder.add_link had been used (see the sketch after this list).
2. In the templates folder there are places where '/superset/' is used directly, so even with the first step done the templates were not rendering correctly. I had to change the templates as well (for now I have hard-coded this; I still need to make it configurable).
3. In the front-end I added a file called config.ts and used this config wherever a redirect was done in the front-end. This fixed all my front-end links.
4. The only thing remaining for me was fixing the "Upload CSV to Database" link. When you click this link and enter the data, Lambda doesn't allow writes, so I tried writing to /tmp - but since we don't know whether the next request will be served by the same Lambda or not, this is still an issue. The way I am planning to fix it is to write the files to S3 instead of a local folder; I am still figuring out how to do this.
No more nginx or other links are needed; we don't even need gunicorn in this setup.
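For step 1, a minimal sketch of the idea (APP_PREFIX is a made-up name, not a built-in Superset setting):

# config.py -- APP_PREFIX is hypothetical; it is then read wherever a redirect
# or appbuilder.add_link previously hard-coded '/superset/'.
APP_PREFIX = "/my-base-path"

# e.g. a redirect that used to point at '/superset/welcome' becomes:
# return redirect(APP_PREFIX + "/welcome")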
Thanks
I have a Java application that uses a JVM variable. Normally, I set it with a command like
APP_HOME="-DAPP_HOME=$CATALINA_HOME/myapp"
in order to point to the correct folder within my application structure on Tomcat.
Now I am trying to deploy my application to the MicroCloud virtual machine. Once deployed, I use the command
vmc env-add myapp APP_HOME="-DAPP_HOME=$HOME/myapp"
to set my variable. The problem is that the variable ends up as a shell (environment) variable and not a JVM system property. When I use System.getenv() I can see that my variable is set, but System.getProperty("APP_HOME") returns null.
Has anyone had experience with this and could recommend how to set it as a JVM property on CF?
P.S.
I read all the existing topics on the CloudFoundry Q&A and here on Stack Overflow, but I do not see an answer to this problem...
Thank you in advance!
What about
vmc env-add myapp JAVA_OPTS="-DAPP_HOME=$HOME/myapp"
(i.e. pass -DAPP_HOME as a property to the JVM)
However, why are you trying to do this? And why would it be such a bad idea just to grab the value using getenv instead of looking for a system property?
I asked a previous question about getting a Django command to run on a schedule. I got a solution for that question, but I still want my commands to run from the admin interface. The obstacle I'm hitting is that my custom management commands aren't recognized once I get to the admin interface.
I traced this back to the __init__.py file of the django/core/management utility. There seems to be some strange behavior going on. When the server first comes up, a dictionary variable _commands is populated with the core commands (from django/core/management/commands). Custom management commands from all of the installed apps are also pushed into the _commands variable for an overall dictionary of all management commands.
Somehow, though, between when the server starts and when django-chronograph goes to run the job from the admin interface, the _commands variable loses the custom commands; the only commands left in the dictionary are the core ones. I'm not sure why this is. Could it be a path issue? Am I missing some setting? Is it a django-chronograph-specific problem? So forget scheduling: how might I run a custom management command from the Django admin graphical interface, to prove that it can indeed be done? Or rather, how can I make sure that custom management commands are available from that interface?
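From plain Python I would expect something like the following sketch to work, using Django's call_command ("my_command" is just a placeholder for one of my custom commands):

from django.core.management import call_command

# Invoke a custom management command programmatically, e.g. from a view or a
# scheduled job; "my_command" stands in for an actual custom command name.
call_command("my_command")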
I'm also using django-chronograph, and for me it works fine. I also once ran into the problem that my custom commands were not recognized by the auto-discovery feature. I think the first reason was that the custom command had an error in it, so it might be an idea to check whether your custom commands run without problems from the command line.
The second reason was indeed some strange path issue. I'll check back with my hosting provider to get you a solution and will post back in a few days.
i am the "unix-guy" mentioned above by tom tom.
as far as i remember there were some issues in the cronograph code itself, so it would be a good idea to use the code tom tom posted in the comments.
where on the filesystem is django-cronograph stored (in you app-folder, in an extra "lib-folder" or in your site-packages?
when you have it in site-packages or another folder that is in your "global pythonpath" pathing should be no issue.
the cron-process itself DOES NOT USE THE SAME pythonpath, as your django app. remember: you start the cron-process via your crontab - right? so there are 2 different process who do not "know" each other: the cron-process AND the django-process (initialized by the webserver) so i would suggest to call the following script via crontab and export pythonpath again:
#!/bin/bash
# Give the cron job the same import paths as the web-served Django project.
PYTHONPATH=/path/to/libs:/path/to/project_root:/path/to/other/libs/used/in/project
export PYTHONPATH
# Run django-chronograph's cron management command.
python /path/to/project/manage.py cron
This way the cron-started process has the same PYTHONPATH information as your project.
Greetings from Vienna, Austria,
berni