I have the list below. How do I index into it in Python? I want to fetch the value for "OS". Please let me know.
[{'UserName': 'd699a1f25d9a3', 'BrowserVersion': None, 'PasswordMinLength': 0, 'SystemAutoLock': 0, 'OS': 'Windows 7 6.1 Build 7601 : Service Pack 1 64bit'}]
mylist = [{'UserName': 'd699a1f25d9a3', 'BrowserVersion': None, 'PasswordMinLength': 0, 'SystemAutoLock': 0, 'OS': 'Windows 7 6.1 Build 7601 : Service Pack 1 64bit'}]
OS = mylist[0]['OS']
print(OS)
Let's say your list is stored in variable my_list:
my_list = [{'UserName': 'd699a1f25d9a3', 'BrowserVersion': None, 'PasswordMinLength': 0, 'SystemAutoLock': 0, 'OS': 'Windows 7 6.1 Build 7601 : Service Pack 1 64bit'}]
Since the dictionary is the first element of the list, you can access it by indexing. Remember that indexes start at 0:
my_list[0] # first element
Then you can access a value in the dictionary by its corresponding key: my_dict[key]. In your case:
my_list[0]['OS']
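Putting it together, here is a minimal runnable sketch (the names my_list and os_value are only illustrative; dict.get() is shown as an optional safeguard, not something your data requires):

my_list = [{'UserName': 'd699a1f25d9a3', 'BrowserVersion': None, 'PasswordMinLength': 0, 'SystemAutoLock': 0, 'OS': 'Windows 7 6.1 Build 7601 : Service Pack 1 64bit'}]

# index the list first, then look up the key in the dictionary
os_value = my_list[0]['OS']
print(os_value)

# .get() returns a default instead of raising KeyError if the key is absent
print(my_list[0].get('OS', 'unknown'))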
I followed the installation guide for Devstack according to this https://docs.openstack.org/devstack/latest/ and then followed this to configure the keystoneauth middleware https://docs.openstack.org/swift/latest/overview_auth.html#keystone-auth
But when I tried to list buckets using boto3 with the credentials I generated from openstack ec2 credentials create, I got the error "The AWS Access Key Id you provided does not exist in our records".
Would appreciate any help
My boto3 code is:
import boto3

s3 = boto3.resource('s3',
                    aws_access_key_id='5d14869948294bb48f9bfe684b8892ca',
                    aws_secret_access_key='ffcbcec69fb54622a0185a5848d7d0d2')

for bucket in s3.objects.all():
    print(bucket)
Where the two keys are taken from the output below:
| access     | 5d14869948294bb48f9bfe684b8892ca               |
| links      | {'self': '10.180.205.202/identity/v3/users/…'} |
| project_id | c128ad4f9a154a04832e41a43756f47d               |
| secret     | ffcbcec69fb54622a0185a5848d7d0d2               |
| trust_id   | None                                           |
| user_id    | 2abd57c56867482ca6cae5a9a2afda29               |
After running the commands @larsks provided, I got public: http://10.180.205.202:8080/v1/AUTH_ed6bbefe5ab44f32b4891fc5e3e55f1f for my Swift endpoint. And just to make sure: my ec2 credential is under the user admin and also the project admin.
When I followed the boto3 code and removed everything starting from /v1 in my endpoint, I got the error botocore.exceptions.ClientError: An error occurred () when calling the ListBuckets operation:
And when I kept the AUTH part, I got botocore.exceptions.ClientError: An error occurred (412) when calling the ListBuckets operation: Precondition Failed
The previous problem was resolved by adding enable_service s3api to local.conf and running stack.sh again. This is likely because OpenStack needs to know it should expose the s3api; the documentation says Swift will be configured to act as a S3 endpoint for Keystone so effectively replacing the nova-objectstore.
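For reference, a minimal sketch of what that local.conf addition might look like (assuming the Swift services are already enabled in your DevStack configuration; everything else in the file stays as it was):

[[local|localrc]]
# ... existing DevStack settings ...
enable_service s3api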
Your problem is probably that nowhere are you telling boto3 how to connect to your OpenStack environment, so by default it is trying to connect to Amazon's S3 service (in your example you're also not passing in your access key and secret key, but I'm assuming this was just a typo when creating your example).
If you want to connect to the OpenStack object storage service, you'll need to first get the endpoint for that service from the catalog. You can get this from the command line by running openstack catalog list; you can also retrieve it programmatically if you make use of the openstack Python module.
You can just inspect the output of openstack catalog list and look for the swift service, or you can parse it out using e.g. jq:
$ openstack catalog list -f json |
jq -r '.[]|select(.Name == "swift")|.Endpoints[]|select(.interface == "public")|.url'
https://someurl.example.com/swift/v1
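If you prefer to retrieve the endpoint programmatically, here is a rough sketch using the openstacksdk package (it assumes your credentials are available via clouds.yaml or OS_* environment variables; this is an illustration, not part of the original commands):

import openstack

# openstack.connect() reads auth settings from clouds.yaml or the environment
conn = openstack.connect()

# ask the service catalog for the public object-store endpoint
endpoint = conn.session.get_endpoint(service_type='object-store', interface='public')
print(endpoint)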
In any case, you need to pass the endpoint to boto3:
>>> import boto3
>>> session = boto3.session.Session()
>>> s3 = session.client(service_name='s3',
... aws_access_key_id='access_key_id_goes_here',
... aws_secret_access_key='secret_key_goes_here',
... endpoint_url='endpoint_url_goes_here')
>>> s3.list_buckets()
{'ResponseMetadata': {'RequestId': 'tx0000000000000000d6a8c-0060de01e2-cff1383c-default', 'HostId': '', 'HTTPStatusCode': 200, 'HTTPHeaders': {'transfer-encoding': 'chunked', 'x-amz-request-id': 'tx0000000000000000d6a8c-0060de01e2-cff1383c-default', 'content-type': 'application/xml', 'date': 'Thu, 01 Jul 2021 17:56:51 GMT', 'connection': 'close', 'strict-transport-security': 'max-age=16000000; includeSubDomains; preload;'}, 'RetryAttempts': 0}, 'Buckets': [{'Name': 'larstest', 'CreationDate': datetime.datetime(2018, 12, 5, 0, 20, 19, 4000, tzinfo=tzutc())}, {'Name': 'larstest2', 'CreationDate': datetime.datetime(2019, 3, 7, 21, 4, 12, 628000, tzinfo=tzutc())}, {'Name': 'larstest4', 'CreationDate': datetime.datetime(2021, 5, 12, 18, 47, 54, 510000, tzinfo=tzutc())}], 'Owner': {'DisplayName': 'lars', 'ID': '4bb09e3a56cd451b9d260ad6c111fd96'}}
>>>
Note that if the endpoint url from openstack catalog list includes a version (e.g., .../v1), you will probably want to drop that.
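With the endpoint from your follow-up comment, that would mean passing something like this (the URL below is simply your reported endpoint with everything from /v1 onward removed):

>>> s3 = session.client(service_name='s3',
...                     aws_access_key_id='5d14869948294bb48f9bfe684b8892ca',
...                     aws_secret_access_key='ffcbcec69fb54622a0185a5848d7d0d2',
...                     endpoint_url='http://10.180.205.202:8080')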
Today I wanted to create a new post for my website (built using blogdown), but the New Post addin doesn't seem to work.
When I select "New Post" or run
blogdown:::new_post_addin()
I get an error:
Error in FUN(X[[i]], ...) : subscript out of bounds
In addition: Warning messages:
1: In value[[3L]](cond) :
Cannot parse the YAML metadata in 'content/photo.md': Scanner error: while scanning an alias at line 3, column 1 did not find expected alphabetic or numeric character at line 3, column 2
2: In value[[3L]](cond) :
Cannot parse the YAML metadata in 'content/research.md': Scanner error: while scanning an alias at line 4, column 1 did not find expected alphabetic or numeric character at line 4, column 2
I am not sure what the additional warnings are about, but I want to focus on the main error. Here are details returned by traceback():
> traceback()
10: lapply(meta, `[[`, i)
9: unlist(lapply(meta, `[[`, i))
8: blogdown:::collect_yaml()
7: eval(exprs[i], envir)
6: eval(exprs[i], envir)
5: sys.source(pkg_file("scripts", file), envir = new.env(parent = globalenv()),
keep.source = FALSE)
4: xfun::in_dir(site_root(), expr)
3: in_root(sys.source(pkg_file("scripts", file), envir = new.env(parent = globalenv()),
keep.source = FALSE))
2: source_addin("new_post.R")
1: blogdown:::new_post_addin()
Interestingly, when I run this command:
blogdown::new_post(title, ext = '.md')
it works fine and I can create a new post. I updated both blogdown and hugo but to no avail. Could someone help me understand what this error is about? Other addins (such as Insert Image) work fine.
As requested, the GitHub repo is https://github.com/msmielak/msmielak.github.io and the dput() output is below:
> dput(blogdown:::scan_yaml())
list(`content/about.md` = "<img align=\"right\" src=\"/./about_files/rsz_screenshot_2020-12-28_une_home.png\" alt=\"\" width=\"100px\"/>\n\n**2014-**\nPhD candidate at the School of Environmental and Rural Sciences University of New England in Armidale, Australia.",
`content/code.md` = NULL, `content/contact.md` = NULL, `content/photo.md` = NULL,
`content/post/2021-03-29-extracting-date-and-time-from-photo-using-ocr-engine-tesseract/index.md` = list(
title = "Extracting date and time from camera trap photos using R and tesseract",
author = "", date = "2021-03-29", slug = list(), categories = c("code",
"R"), tags = c("R", "code", "camera trap", "OCR"), description = "",
featured = "", featuredalt = "", featuredpath = "", linktitle = ""),
`content/research.md` = NULL, `content/technology.md` = NULL)
Warning messages:
1: In value[[3L]](cond) :
Cannot parse the YAML metadata in 'content/code.md': Parser error: did not find expected <document start> at line 3, column 67
2: In value[[3L]](cond) :
Cannot parse the YAML metadata in 'content/photo.md': Scanner error: while scanning an alias at line 3, column 1 did not find expected alphabetic or numeric character at line 3, column 2
3: In value[[3L]](cond) :
Cannot parse the YAML metadata in 'content/research.md': Scanner error: while scanning an alias at line 4, column 1 did not find expected alphabetic or numeric character at line 4, column 2
The YAML metadata of the file content/about.md seems to be invalid. Normally YAML metadata should be of the form:
---
tag1: value1
tag2: value2
---
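For example, judging from the dput() output above, what blogdown extracts as the metadata of content/about.md is the <img ...> line and the text that follows, not key: value pairs. A fixed file would put real key-value metadata between the --- fences and move the body below them; a rough sketch (the title value here is only a guess):

---
title: "About"
---

<img align="right" src="/./about_files/rsz_screenshot_2020-12-28_une_home.png" alt="" width="100px"/>

**2014-**
PhD candidate at the School of Environmental and Rural Sciences University of New England in Armidale, Australia.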
Update: with the dev version of blogdown (>= v1.2.4), the error will no longer occur. What's more, blogdown::check_site() can detect this problem and suggest users fix the problematic YAML metadata.
remotes::install_github('rstudio/blogdown')
I'm using the following Python code to fetch IPv4 and IPv6 address information (according to the millions of tutorials I went through, including https://docs.python.org/3.5/library/socket.html), but I receive only the IPv4 info. My IPv6 connection is fine, I can see my addresses via "ipconfig /all", and the host I'm trying to connect to is "akamai.com" (I also tried www.python.org and example.org).
I have also tried passing 0 as the family argument (full range of results) instead of limiting it to socket.AF_INET6 or socket.AF_INET, and yet I receive only the IPv4 address.
import sys, socket
x = socket.getaddrinfo("akamai.com", 80, socket.AF_INET6, 0, socket.IPPROTO_IP, socket.AI_CANONNAME)
# Expected format (as I found in some examples on this site):
socket.getaddrinfo("www.python.org", 80, 0, 0, socket.SOL_TCP)
[(2, 1, 6, '', ('82.94.164.162', 80)),
(10, 1, 6, '', ('2001:888:2000:d::a2', 80, 0, 0))]
I receive only the first tuple (plus the canonical name).
Thanks for your help in advance
I want to store Ruby inject values into an array. I found one good example (on http://matthewcarriere.com/2008/06/23/using-select-reject-collect-inject-and-detect/), but it's returning a Fixnum instead of an array.
[1,2,3,4].inject([]) {|acc,n| acc << n+n}
This is returning 262144, but I want the array [2, 4, 6, 8].
Any help is appreciated.
It works on my machine.
Have you tried it in a new irb session?
Which version of ruby are you using?
$ ruby --version
ruby 2.3.1p112 (2016-04-26 revision 54768) [x86_64-linux]
$ irb --version
irb 0.9.6(09/06/30)
$ irb
irb(main):001:0> [1,2,3,4].inject([]) {|acc,n| acc << n+n}
=> [2, 4, 6, 8]
irb(main):002:0>
I'd like to create a periodic task for celery using django-celery's admin interface. I have a task set up which runs great when called manually or by script. It just doesn't work through celerybeat. According to the debug logs the task is set to enabled = False on first retrieval and I wonder why.
When adding the periodic task and passing [1, False] as positional arguments, the task is automatically disabled and I don't see any further output. When added without arguments the task is executed but raises an exception instantly because I didn't supply the needed arguments (makes sense).
Does anyone see what's the problem here?
Thanks in advance.
This is the output after supplying arguments:
[DEBUG/Beat] SELECT "djcelery_periodictask"."id", [...]
FROM "djcelery_periodictask"
WHERE "djcelery_periodictask"."enabled" = true ; args=(True,)
[DEBUG/Beat] SELECT "djcelery_intervalschedule"."id", [...]
FROM "djcelery_intervalschedule"
WHERE "djcelery_intervalschedule"."id" = 3 ; args=(3,)
[DEBUG/Beat] SELECT (1) AS "a"
FROM "djcelery_periodictask"
WHERE "djcelery_periodictask"."id" = 3 LIMIT 1; args=(3,)
[DEBUG/Beat] UPDATE "djcelery_periodictask"
SET "name" = E'<taskname>', "task" = E'<task.module.path>',
"interval_id" = 3, "crontab_id" = NULL,
"args" = E'[1, False,]', "kwargs" = E'{}', "queue" = NULL,
"exchange" = NULL, "routing_key" = NULL,
"expires" = NULL, "enabled" = false,
"last_run_at" = E'2011-05-25 00:45:23.242387', "total_run_count" = 9,
"date_changed" = E'2011-05-25 09:28:06.201148'
WHERE "djcelery_periodictask"."id" = 3;
args=(
u'<periodic-task-name>', u'<task.module.path>',
3, u'[1, False,]', u'{}',
False, u'2011-05-25 00:45:23.242387', 9,
u'2011-05-25 09:28:06.201148', 3
)
[DEBUG/Beat] Current schedule:
<ModelEntry: celery.backend_cleanup celery.backend_cleanup(*[], **{}) {<crontab: 0 4 * (m/h/d)>}
[DEBUG/Beat] Celerybeat: Waking up in 5.00 seconds.
EDIT:
It works with the following setting. I still have no idea why it doesn't work with django-celery.
from celery.schedules import crontab

CELERYBEAT_SCHEDULE = {
    "example": {
        "task": "<task.module.path>",
        "schedule": crontab(),
        "args": (1, False)
    },
}
I had the same issue. Make sure the arguments are JSON formatted. For example, try setting the positional args to [1, false] -- lowercase 'false' -- I just tested it on a django-celery instance (version 2.2.4) and it worked.
For the keyword args, use something like {"name": "aldarund"}
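A quick way to see what correctly JSON-encoded arguments look like is to run the values through Python's json module (just an illustrative check, not specific to django-celery):

import json

print(json.dumps([1, False]))            # -> [1, false]  (booleans are lowercase in JSON)
print(json.dumps({"name": "aldarund"}))  # -> {"name": "aldarund"}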
I ran into the same problem too.
The PeriodicTask model in djcelery describes the args field as "JSON encoded positional arguments", which matches Evan's answer, so I tried using Python's json library to encode the values before saving.
And this worked for me:
import json

from djcelery.models import PeriodicTask

o = PeriodicTask()
o.kwargs = json.dumps({'myargs': 'hello'})
o.save()
celery version 3.0.11
CELERYBEAT_SCHEDULE = {
    "example": {
        "task": "<task.module.path>",
        "schedule": crontab(),
        "enable": False
    },
}
I tried it and it worked. I'm running celery beat v5.1.2.