I am trying to connect to my LND node running on AWS (I know it is not the ideal setup for an LND node, but this time I had no other way of doing it) from my locally running Django REST API. The issue is that it cannot find the admin.macaroon file, even though the file is in the mentioned directory. Below is some more detailed information:
view.py
import codecs
import json

import requests
from rest_framework.response import Response
from rest_framework.views import APIView


class GetInfo(APIView):
    def get(self, request):
        REST_HOST = "https://ec2-18-195-111-81.eu-central-1.compute.amazonaws.com"
        MACAROON_PATH = "/home/ubuntu/.lnd/data/chain/bitcoin/mainnet/admin.macaroon"
        # url = "https://ec2-18-195-111-81.eu-central-1.compute.amazonaws.com/v1/getinfo"
        TLS_PATH = "/home/ubuntu/.lnd/tls.cert"
        url = f"{REST_HOST}/v1/getinfo"  # REST_HOST already contains the scheme
        macaroon = codecs.encode(open(MACAROON_PATH, "rb").read(), "hex")
        headers = {"Grpc-Metadata-macaroon": macaroon}
        r = requests.get(url, headers=headers, verify=TLS_PATH)
        return Response(json.loads(r.text))
The node is running with no problem on AWS. This is what I get when I run lncli getinfo:
$ lncli getinfo
{
"version": "0.15.5-beta commit=v0.15.5-beta",
"commit_hash": "c0a09209782b1c62c3393fcea0844exxxxxxxxxx",
"identity_pubkey": "mykey",
"alias": "020d4da213770890e1c1",
"color": "#3399ff",
"num_pending_channels": 0,
"num_active_channels": 0,
"num_inactive_channels": 0,
"uris": [
....
and the permissions are as below:
$ ls -l
total 138404
-rwxrwxr-x 1 ubuntu ubuntu 293 Feb 6 09:38 admin.macaroon
drwxrwxr-x 2 ubuntu ubuntu 4096 Feb 5 14:48 bin
drwxr-xr-x 6 ubuntu ubuntu 4096 Jan 27 20:17 bitcoin-22.0
drwxrwxr-x 4 ubuntu ubuntu 4096 Feb 1 16:39 go
-rw-rw-r-- 1 ubuntu ubuntu 141702072 Mar 15 2022 go1.18.linux-amd64.tar.gz
drwxrwxr-x 72 ubuntu ubuntu 4096 Feb 1 16:36 lnd
-rw-rw-r-- 1 ubuntu ubuntu 0 Jan 27 20:13 screenlog.0
The error I get is: [Errno 2] No such file or directory: '/home/ubuntu/.lnd/data/chain/bitcoin/mainnet/admin.macaroon'
I guess the problem is in how I am trying to access the node from my API: the macaroon and TLS paths point to files on the EC2 instance, while the Django app runs on my local machine, and I have no idea how to access files on an EC2 instance from an external API.
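If that is indeed the issue, I assume I would first have to copy the two files to my local machine (e.g. with scp) and point the code at the local copies, something like the sketch below (the local paths and the :8080 REST port are assumptions on my part, not what I currently have):
import codecs
import json

import requests

# Hypothetical local copies of the credentials, fetched beforehand with e.g.:
#   scp ubuntu@<ec2-host>:~/.lnd/data/chain/bitcoin/mainnet/admin.macaroon .
#   scp ubuntu@<ec2-host>:~/.lnd/tls.cert .
MACAROON_PATH = "./admin.macaroon"  # local copy, not the path on the EC2 instance
TLS_PATH = "./tls.cert"             # local copy
REST_HOST = "https://ec2-18-195-111-81.eu-central-1.compute.amazonaws.com:8080"  # assuming LND's default REST port

macaroon = codecs.encode(open(MACAROON_PATH, "rb").read(), "hex")
r = requests.get(
    f"{REST_HOST}/v1/getinfo",
    headers={"Grpc-Metadata-macaroon": macaroon},
    verify=TLS_PATH,
)
print(json.loads(r.text))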
Thank you in advance
I'm working with Open edX. It has a plugin system, called XBlocks, that in this case allows importing content created by third-party "studio apps". This content can be uploaded as a zip file; it is then processed by the following code:
@XBlock.handler
def studio_submit(self, request, _suffix):
    self.display_name = request.params["display_name"]
    self.width = request.params["width"]
    self.height = request.params["height"]
    self.has_score = request.params["has_score"]
    self.weight = request.params["weight"]
    self.icon_class = "problem" if self.has_score == "True" else "video"
    response = {"result": "success", "errors": []}
    if not hasattr(request.params["file"], "file"):
        # File not uploaded
        return self.json_response(response)
    package_file = request.params["file"].file
    self.update_package_meta(package_file)
    # First, save scorm file in the storage for mobile clients
    if default_storage.exists(self.package_path):
        logger.info('Removing previously uploaded "%s"', self.package_path)
        default_storage.delete(self.package_path)
    default_storage.save(self.package_path, File(package_file))
    logger.info('Scorm "%s" file stored at "%s"', package_file, self.package_path)
    # Then, extract zip file
    if default_storage.exists(self.extract_folder_base_path):
        logger.info(
            'Removing previously unzipped "%s"', self.extract_folder_base_path
        )
        recursive_delete(self.extract_folder_base_path)
    with zipfile.ZipFile(package_file, "r") as scorm_zipfile:
        for zipinfo in scorm_zipfile.infolist():
            default_storage.save(
                os.path.join(self.extract_folder_path, zipinfo.filename),
                scorm_zipfile.open(zipinfo.filename),
            )
    try:
        self.update_package_fields()
    except ScormError as e:
        response["errors"].append(e.args[0])
    return self.json_response(response)
where the code
default_storage.save(
os.path.join(self.extract_folder_path, zipinfo.filename),
scorm_zipfile.open(zipinfo.filename),
)
is the origin of the following (Django) error trace:
cms_1 | File "/openedx/venv/lib/python3.5/site-packages/openedxscorm/scormxblock.py", line 193, in studio_submit
cms_1 | scorm_zipfile.open(zipinfo.filename),
cms_1 | File "/openedx/venv/lib/python3.5/site-packages/django/core/files/storage.py", line 52, in save
cms_1 | return self._save(name, content)
cms_1 | File "/openedx/venv/lib/python3.5/site-packages/django/core/files/storage.py", line 249, in _save
cms_1 | raise IOError("%s exists and is not a directory." % directory)
cms_1 | OSError: /openedx/media/scorm/c154229b568d45128e1098b530267a35/a346b1db27aaa89b89b31e1c3e2a1af04482abad/assets exists and is not a directory.
I posted the issue on GitHub too.
From the Python docs:
exception FileExistsError
Raised when trying to create a file or directory which already exists. Corresponds to errno EEXIST.
I don't really understand what is going on. It's based on a hairball of JavaScript in layered Docker containers, so I can't readily hack & print for extra info.
The only thing I found is that, at the moment the error is thrown, some of the folders from the zip file have been written to the Docker volume as files instead of directories. That may be expected, however, and perhaps these files are meant to be rewritten as or changed into directories later (?) on Linux (?).
The error mentions the assets folder:
root@93f0d2b9667f:/openedx/media/scorm/5e085cbc04e24b3b911802f7cba44296/92b12100be7651c812a1d29a041153db5ba89239# ls -la
total 84
drwxr-xr-x 2 root root 4096 Aug 2 22:17 .
drwxr-xr-x 3 root root 4096 Aug 2 22:17 ..
-rw-r--r-- 1 root root 4398 Aug 2 22:17 adlcp_rootv1p2.xsd
-rw-r--r-- 1 root root 0 Aug 2 22:17 assets
-rw-r--r-- 1 root root 0 Aug 2 22:17 course
-rw-r--r-- 1 root root 14560 Aug 2 22:17 imscp_rootv1p1p2.xsd
-rw-r--r-- 1 root root 1847 Aug 2 22:17 imsmanifest.xml
-rw-r--r-- 1 root root 22196 Aug 2 22:17 imsmd_rootv1p2p1.xsd
-rw-r--r-- 1 root root 1213 Aug 2 22:17 ims_xml.xsd
-rw-r--r-- 1 root root 1662 Aug 2 22:17 index.html
-rw-r--r-- 1 root root 0 Aug 2 22:17 libraries
-rw-r--r-- 1 root root 1127 Aug 2 22:17 log_output.html
-rw-r--r-- 1 root root 481 Aug 2 22:17 main.html
-rw-r--r-- 1 root root 759 Aug 2 22:17 offline_API_wrapper.js
-rw-r--r-- 1 root root 0 Aug 2 22:17 player
-rw-r--r-- 1 root root 1032 Aug 2 22:17 popup.html
root@93f0d2b9667f:/openedx/media/scorm/5e085cbc04e24b3b911802f7cba44296/92b12100be7651c812a1d29a041153db5ba89239# cd assets
bash: cd: assets: Not a directory
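For what it's worth, here is a sketch of how the extraction loop could skip directory entries, in case those are what end up as the zero-byte "files" above; this is only my guess, not a confirmed fix (zipfile marks directory members with a trailing slash):
import os
import zipfile

from django.core.files.storage import default_storage


def extract_package(package_file, extract_folder_path):
    # Save only regular file members; directory entries (names ending in "/")
    # are skipped so they are never stored as zero-byte files.
    with zipfile.ZipFile(package_file, "r") as scorm_zipfile:
        for zipinfo in scorm_zipfile.infolist():
            if zipinfo.filename.endswith("/"):  # zipinfo.is_dir() on Python 3.6+
                continue
            default_storage.save(
                os.path.join(extract_folder_path, zipinfo.filename),
                scorm_zipfile.open(zipinfo.filename),
            )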
I have used configure_make and other features of rules_foreign_cc for quite some time. This is a fantastic project. I have run into a simple but old C++ project that is giving me trouble. I suspect it's caused by the use of configure_in_place but that is just a wild guess. I spent a few hours tracing through the code but couldn't figure out a fix.
I am running Bazel 3.2.0 on macOS High Sierra with the rules_foreign_cc master branch that has #403 merged. That PR enables running autoreconf via configure_make.
I could of course work around this issue by making a real Bazel project for this simple project. I am submitting this issue in the hopes that fixing a bug or correcting my user error will be an easy task for a rules_foreign_cc expert.
WORKSPACE
workspace(name = "foo")

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "rules_foreign_cc",
    strip_prefix = "rules_foreign_cc-master",
    url = "https://github.com/bazelbuild/rules_foreign_cc/archive/master.zip",
)

load("@rules_foreign_cc//:workspace_definitions.bzl", "rules_foreign_cc_dependencies")

rules_foreign_cc_dependencies()

all_content = """filegroup(name = "all", srcs = glob(["**"]), visibility = ["//visibility:public"])"""

http_archive(
    name = "md5",
    build_file_content = all_content,
    strip_prefix = "MD5-master",
    url = "https://github.com/devguy-com/MD5/archive/master.zip",
)
BUILD
load("#rules_foreign_cc//tools/build_defs:configure.bzl", "configure_make")
configure_make(
name = "md5",
autoreconf = True,
configure_command = "configure",
configure_in_place = True,
lib_source = "#md5//:all",
make_commands = [
"make libmd5.a",
],
out_lib_dir = "",
static_libraries = ["libmd5.a"],
visibility = ["//visibility:public"],
)
Run Bazel
$ bazel build -s --verbose_failures --sandbox_debug md5
Build Output
The complete build output is available online thanks to BuildBuddy.
Pertinent snippets:
1.
export BUILD_LOG="bazel-out/darwin-fastbuild/bin/md5/logs/Configure.log
2.
ERROR: /Users/user/smart/mac/pe-compute/2019-11-30/BUILD:4:15: output
'md5/libmd5.a' was not created
$ cat bazel-out/darwin-fastbuild/bin/md5/logs/Configure.log
/private/var/tmp/_bazel_user/a146b90161b9da4530683d7f8b2053fd/sandbox/darwin-sandbox/12/execroot/exd_edge_compute/external/local_config_cc/wrapped_clang -D_FORTIFY_SOURCE=1 -fstack-protector -fcolor-diagnostics -Wall -Wthread-safety -Wself-assign -fno-omit-frame-pointer -O0 -DDEBUG -std=c++11 -isysroot __BAZEL_XCODE_SDKROOT__ -F__BAZEL_XCODE_SDKROOT__/System/Library/Frameworks -F__BAZEL_XCODE_DEVELOPER_DIR__/Platforms/MacOSX.platform/Developer/Library/Frameworks -mmacosx-version-min=10.12 -no-canonical-prefixes -Wno-builtin-macro-redefined -D__DATE__="redacted" -D__TIMESTAMP__="redacted" -D__TIME__="redacted" -c -o src/md5.o src/md5.cpp
ar cr libmd5.a src/md5.o
ranlib libmd5.a
find: /var/folders/3l/rrlrzd555yl84h_l_nytc4w00000gp/T/tmp.XRqtHHSM/md5: No
such file or directory
Root Cause
The output library libmd5.a is located at
/var/folders/3l/rrlrzd555yl84h_l_nytc4w00000gp/T/tmp.XRqtHHSM.
However, configure_make is looking in that directory's md5 subdirectory.
Setting out_lib_dir = ".." does not solve the problem either; out_lib_dir has to be an empty string, because this particular project creates the library in the project's root directory.
$ ls -l /var/folders/3l/rrlrzd555yl84h_l_nytc4w00000gp/T/tmp.XRqtHHSM
lrwxr-xr-x 1 user staff 139 Jun 10 06:36 BUILD.bazel -> /private/var/tmp/_bazel_user/a146b90161b9da4530683d7f8b2053fd/sandbox/darwin-sandbox/1/execroot/exd_edge_compute/external/md5/BUILD.bazel
lrwxr-xr-x 1 user staff 135 Jun 10 06:36 LICENSE -> /private/var/tmp/_bazel_user/a146b90161b9da4530683d7f8b2053fd/sandbox/darwin-sandbox/1/execroot/exd_edge_compute/external/md5/LICENSE
-rw-r--r-- 1 user staff 2573 Jun 10 06:36 Makefile
lrwxr-xr-x 1 user staff 139 Jun 10 06:36 Makefile.in -> /private/var/tmp/_bazel_user/a146b90161b9da4530683d7f8b2053fd/sandbox/darwin-sandbox/1/execroot/exd_edge_compute/external/md5/Makefile.in
lrwxr-xr-x 1 user staff 134 Jun 10 06:36 README -> /private/var/tmp/_bazel_user/a146b90161b9da4530683d7f8b2053fd/sandbox/darwin-sandbox/1/execroot/exd_edge_compute/external/md5/README
lrwxr-xr-x 1 user staff 137 Jun 10 06:36 WORKSPACE -> /private/var/tmp/_bazel_user/a146b90161b9da4530683d7f8b2053fd/sandbox/darwin-sandbox/1/execroot/exd_edge_compute/external/md5/WORKSPACE
drwxr-xr-x 7 user staff 224 Jun 10 06:36 autom4te.cache
drwxr-xr-x 3 user staff 96 Jun 10 06:36 config
-rw-r--r-- 1 user staff 23872 Jun 10 06:36 config.log
-rwxr-xr-x 1 user staff 32037 Jun 10 06:36 config.status
-rwxr-xr-x 1 user staff 142714 Jun 10 06:36 configure
lrwxr-xr-x 1 user staff 140 Jun 10 06:36 configure.ac -> /private/var/tmp/_bazel_user/a146b90161b9da4530683d7f8b2053fd/sandbox/darwin-sandbox/1/execroot/exd_edge_compute/external/md5/configure.ac
-rw-r--r-- 1 user staff 12560 Jun 10 06:36 libmd5.a
lrwxr-xr-x 1 user staff 139 Jun 10 06:36 rfc1321.txt -> /private/var/tmp/_bazel_user/a146b90161b9da4530683d7f8b2053fd/sandbox/darwin-sandbox/1/execroot/exd_edge_compute/external/md5/rfc1321.txt
drwxr-xr-x 8 user staff 256 Jun 10 06:36 src
drwxr-xr-x 3 user staff 96 Jun 10 06:36 tests
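Given the layout above, one untested workaround I can imagine (just a sketch; the extra md5/ path only mirrors the directory the find command searched, and I have not verified that the rule picks it up) is to copy the library from make_commands instead of relying on out_lib_dir alone:
configure_make(
    name = "md5",
    autoreconf = True,
    configure_command = "configure",
    configure_in_place = True,
    lib_source = "@md5//:all",
    make_commands = [
        "make libmd5.a",
        # Hypothetical: put a copy where the rule appears to search for outputs.
        "mkdir -p md5 && cp libmd5.a md5/",
    ],
    out_lib_dir = "",
    static_libraries = ["libmd5.a"],
    visibility = ["//visibility:public"],
)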
I have set up a Django application in which users can upload images; they are served by Nginx and Gunicorn.
I have a problem with uploading large image files, which do not get the appropriate permissions to be served by Nginx:
location /medias/images/ {
root /var/www/html;
}
When uploading files, the larger ones end up readable and writable only by the owner, with no permissions for group/others:
-rw------- 1 user1 user1 4.9M Mar 15 14:35 File1.jpg
-rw------- 1 user1 user1 3.7M Mar 15 14:31 File2.jpg
-rw-r--r-- 1 user1 user1 110K Mar 15 14:44 File3.pdf
-rw-r--r-- 1 user1 user1 34K Mar 15 09:17 File4.docx
-rw-r--r-- 1 user1 user1 136K Mar 15 14:45 File5.jpg
-rw-r--r-- 1 user1 user1 92K Mar 15 14:22 File6.doc
-rw------- 1 user1 user1 4.4M Mar 15 14:25 File7.jpg
However, the smaller images get the right permissions and are served properly.
The point is that both the small and the semi-large (3 MB) image files are uploaded by the same process.
Any ideas?
Set the FILE_UPLOAD_MAX_MEMORY_SIZE parameter in your Django settings, in bytes. Uploads below this threshold are kept in memory and written out by your own process, while larger ones are streamed to a temporary file, which is created with the restrictive 0600 permissions you are seeing.
For example, FILE_UPLOAD_MAX_MEMORY_SIZE = 20971520 equals 20 MB.
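A minimal settings.py sketch; the FILE_UPLOAD_PERMISSIONS line is an optional extra (not part of the answer above) that forces the mode regardless of which upload handler was used:
# settings.py
FILE_UPLOAD_MAX_MEMORY_SIZE = 20971520  # 20 MB = 20 * 1024 * 1024 bytes; uploads above this go through a temp file

# Optional: force world-readable permissions on every stored upload,
# whether it was held in memory or spooled to a temporary file.
FILE_UPLOAD_PERMISSIONS = 0o644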
I have trouble installing AVbin on my Raspberry Pi 3 with Jessie.
This command doesn't work:
sudo apt-get install libavbin-dev libavbin0
It says it is unable to locate the packages.
I have manually downloaded versions 10 and 8, and neither one works.
Python says that it cannot find the library.
Any suggestions please?
This post didn't help me much
Python pyglet AVBin - How to install AVBin
Update:
My sys path:
sys.path ['', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-arm-linux-gnueabihf', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages/gtk-2.0', '/usr/lib/pymodules/python2.7']
and:
/usr/local/lib/python2.7/dist-packages/pyglet/media $ ls -l
total 180
-rw-r--r-- 1 root staff 19846 May 13 14:07 avbin.py
-rw-r--r-- 1 root staff 15458 May 13 14:07 avbin.pyc
drwxr-sr-x 5 root staff  4096 May 13 14:07 drivers
-rw-r--r-- 1 root staff 46363 May 13 14:07 __init__.py
-rw-r--r-- 1 root staff 53881 May 13 14:07 __init__.pyc
-rw-r--r-- 1 root staff  6446 May 13 14:07 procedural.py
-rw-r--r-- 1 root staff  5762 May 13 14:07 procedural.pyc
-rw-r--r-- 1 root staff  8107 May 13 14:07 riff.py
-rw-r--r-- 1 root staff  8331 May 13 14:07 riff.pyc
In /usr/lib I have:
libavbin.so
libavbin.so.7
libavbin.so.8
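As a debugging step on my side (not from the linked post), I can ask ctypes directly whether the dynamic loader sees the library at all, before involving pyglet:
import ctypes
import ctypes.util

print(ctypes.util.find_library("avbin"))  # None means the dynamic loader cannot find any libavbin
ctypes.CDLL("libavbin.so")                # raises OSError if the library cannot actually be loaded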
I just created a Vagrant VM with CentOS, installed Python 2.7 and pip using Miniconda, and installed pymqi using pip. Then I created a test Python file to see if my pymqi installation is correct:
import pymqi
print "hello..."
but I got this:
[vagrant@localhost projects]$ python test.py
Traceback (most recent call last):
File "test.py", line 1, in <module>
import pymqi
File "/home/vagrant/miniconda2/lib/python2.7/site-packages/pymqi/__init__.py", line 109, in <module>
import pymqe, CMQC, CMQCFC, CMQXC
ImportError: libmqic_r.so: cannot open shared object file: No such file or directory
I looked for that file:
[vagrant@localhost projects]$ find /opt/mqm/ -name 'libmqic_r.so'
/opt/mqm/lib/compat/libmqic_r.so
/opt/mqm/lib/libmqic_r.so
/opt/mqm/lib64/compat/libmqic_r.so
/opt/mqm/lib64/libmqic_r.so
Thank you, your help is appreciated.
I found the solution:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/mqm/lib64
As a general rule, using the LD_LIBRARY_PATH variable is bad practice. You'd better just create the appropriate symlinks to the 64-bit versions of the shared objects.
For some reason, when you install the IBM MQSeries Client, only the 32-bit MQ libraries are linked into /usr/lib/:
[root@host ~]# ll /usr/lib/libmq*
lrwxrwxrwx 1 root root 26 Jan 25 12:49 /usr/lib/libmqicb_r.so -> /opt/mqm/lib/libmqicb_r.so
lrwxrwxrwx 1 root root 24 Jan 25 12:49 /usr/lib/libmqicb.so -> /opt/mqm/lib/libmqicb.so
lrwxrwxrwx 1 root root 25 Jan 25 12:49 /usr/lib/libmqic_r.so -> /opt/mqm/lib/libmqic_r.so
lrwxrwxrwx 1 root root 23 Jan 25 12:49 /usr/lib/libmqic.so -> /opt/mqm/lib/libmqic.so
lrwxrwxrwx 1 root root 25 Jan 25 12:49 /usr/lib/libmqiz_r.so -> /opt/mqm/lib/libmqiz_r.so
lrwxrwxrwx 1 root root 23 Jan 25 12:49 /usr/lib/libmqiz.so -> /opt/mqm/lib/libmqiz.so
lrwxrwxrwx 1 root root 25 Jan 25 12:49 /usr/lib/libmqjx_r.so -> /opt/mqm/lib/libmqjx_r.so
lrwxrwxrwx 1 root root 26 Jan 25 12:49 /usr/lib/libmqmcs_r.so -> /opt/mqm/lib/libmqmcs_r.so
lrwxrwxrwx 1 root root 24 Jan 25 12:49 /usr/lib/libmqmcs.so -> /opt/mqm/lib/libmqmcs.so
lrwxrwxrwx 1 root root 25 Jan 25 12:49 /usr/lib/libmqmzse.so -> /opt/mqm/lib/libmqmzse.so
While the 64-bit libs are not:
[root@host ~]# ll /usr/lib64/libmq*
ls: /usr/lib64/libmq*: No such file or directory
You can fix this by just executing:
[root@host ~]# ln -s /opt/mqm/lib64/libmq* /usr/lib64/
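After creating the symlinks, a quick sanity check (my own, not part of the original answer) from the same Python 2.7 environment:
import ctypes
ctypes.CDLL("libmqic_r.so")   # now resolves from /usr/lib64
import pymqi                  # the original ImportError should be gone
print "pymqi imported from:", pymqi.__file__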
Please check whether you have installed the MQSeries Client; otherwise the .so files will not be in the library path.