SNMP table with V3 pysnmp (python-2.7)

I am trying to create an SNMP table with SNMP v3. I followed this documentation and was able to create an SNMP table using v2c and get a response. When I tried modifying the user registration part to support V3, there was no response to the manager.
Link followed: https://pysnmp.readthedocs.io/en/latest/examples/v3arch/asyncore/agent/cmdrsp/agent-side-mib-implementations.html
Topic:
Implementing conceptual table
I modified the user registration part of that example to its V3 equivalent (the original code snippets are not reproduced here), but the code worked with v2c and not with V3.
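For reference, in pysnmp's v3arch API the v2c community registration (config.addV1System plus a VACM entry with securityModel=2) is typically replaced for V3 by a USM user and a matching securityModel=3 VACM entry. A minimal sketch, with illustrative user name, keys, and OID subtree:

    from pysnmp.entity import engine, config

    snmpEngine = engine.SnmpEngine()

    # V3 user with MD5 authentication and DES privacy (illustrative credentials)
    config.addV3User(
        snmpEngine, 'usr-md5-des',
        config.usmHMACMD5AuthProtocol, 'authkey1',
        config.usmDESPrivProtocol, 'privkey1'
    )
    # Grant that user access; securityModel=3 (V3), and the securityLevel
    # ('authPriv') must match the credentials the manager actually sends
    config.addVacmUser(snmpEngine, 3, 'usr-md5-des', 'authPriv',
                       (1, 3, 6, 1, 2, 1), (1, 3, 6, 1, 2, 1))

A common cause of an agent silently not responding in V3 is a mismatch here: the manager has to query with the same user, protocols, and security level, e.g. snmpget -v3 -u usr-md5-des -l authPriv -a MD5 -A authkey1 -x DES -X privkey1 <host> <oid>.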

How to query devices using metadata in GCP IOT Core?

When I go to the IOT Core Registry page (on the GCP console) and select a device, I can edit it. There's a "Device metadata" section there, reading the following:
You can set custom metadata, such as manufacturer, location, etc. for the device. These
can be used to query devices in this registry. Learn more
However, the documentation page shows nothing about querying devices using metadata.
Is this possible at all? What can be done using device metadata?
I am asking because I am looking for the following features that Azure IoT Hub has with device twin tags:
Ideally I would like to enrich messages the device sends (state, events) with corresponding metadata.
Querying for multiple devices based on a metadata field.
One first has to add device metadata before one can query it:
https://cloud.google.com/iot/docs/how-tos/devices#creating_a_device
https://cloud.google.com/iot/docs/how-tos/devices#getting_device_details
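For instance, metadata can be attached from the CLI with the --metadata flag that gcloud iot devices create/update accept (device, registry, and region names below are placeholders):

    gcloud iot devices update my-device \
        --registry=my-registry --region=us-central1 \
        --metadata=test_metadata=test_value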
One can then query with gcloud iot devices list (--registry=REGISTRY --region=REGION):
--filter="metadata.items.key['test_metadata'][value]='test_value'"
See gcloud topic filters for more information about filter expressions.
Or pull the field out with --format='value[](metadata.items.test_metadata)'
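Putting the pieces together, a full invocation might look like this (registry and region values are placeholders):

    gcloud iot devices list \
        --registry=my-registry --region=us-central1 \
        --filter="metadata.items.key['test_metadata'][value]='test_value'" \
        --format='value[](metadata.items.test_metadata)'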
It might be easier to implement this using the client libraries. Following the suggestion of @MartinZeitler, list your devices, then perform a get for each device and check its metadata. See the Python code below for the implementation:
from google.cloud import iot_v1

def sample_list_devices(meta_key_name, meta_val_name):
    # Create a client
    client = iot_v1.DeviceManagerClient()
    project_id = "your-project-id"
    location = "asia-east1"  # define your device location
    registry = "your-registry-id"
    parent = f"projects/{project_id}/locations/{location}/registries/{registry}"
    # Initialize request argument(s)
    list_request = iot_v1.ListDevicesRequest(
        parent=parent,
    )
    # Make the request
    list_result = client.list_devices(request=list_request)
    # Handle the response: fetch each device and compare its metadata
    for response in list_result:
        device = response.num_id
        get_request = iot_v1.GetDeviceRequest(
            name=f"{parent}/devices/{device}",
        )
        get_response = client.get_device(request=get_request)
        if get_response.metadata[meta_key_name] == meta_val_name:
            print(get_response)
            return get_response

# define the metadata key and value that you want to filter on
sample_list_devices(meta_key_name="test_key", meta_val_name="test_val")
Filtered response: the matching device's details are printed (output and device configuration screenshots omitted).
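A possible refinement, assuming a recent google-cloud-iot release where ListDevicesRequest accepts a field_mask: list responses only carry id and num_id by default, so requesting metadata up front avoids one get_device call per device (client and parent as in the code above):

    from google.protobuf import field_mask_pb2

    list_request = iot_v1.ListDevicesRequest(
        parent=parent,
        # ask the list call to return each device's metadata as well
        field_mask=field_mask_pb2.FieldMask(paths=["metadata"]),
    )
    for device in client.list_devices(request=list_request):
        if device.metadata.get(meta_key_name) == meta_val_name:
            print(device)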
No, it is not possible to query metadata the way you want it. The doc says the following about the server.
"Cloud IoT Core does not interpret or index device metadata."
As you are already aware, the client-side workaround to simulate a query is to list all the devices first and then filter the output by metadata.

How to get oracle id for chainlink api call?

I need to make an API call over the Polygon network. I followed the tutorials and was trying to find the job ID and oracle ID. I was able to find the job:
https://market.link/jobs/56666c3e-534d-490f-8757-521928739291. However, I cannot find the oracle ID after clicking on the node as mentioned in the docs. Where can I find it?
In the 'Job Spec' section, you can see the oracle address in the initiator part of the JSON spec. In the example above, it's 0x0a31078cd57d23bf9e8e8f1ba78356ca2090569e
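For reference, the initiator section of a v1 JSON job spec typically looks something like this (the structure is illustrative; the address is the one from the job above):

    {
      "initiators": [
        {
          "type": "runlog",
          "params": {
            "address": "0x0a31078cd57d23bf9e8e8f1ba78356ca2090569e"
          }
        }
      ],
      "tasks": [ ... ]
    }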

How to specify dataset location using BigQuery API v0.27?

I am trying to figure out how to specify the dataset location in a BigQuery API query using v0.27 of the BigQuery API.
I have a dataset located in northamerica-northeast1 and the BigQuery API is returning 404 errors since this is not the default multi-regional location "US."
I am using the run_async_query method to execute my queries, but based on the documentation I am unsure how to make this call location-aware.
I have also tried to previously update my client instantiation like this:
def _get_client(self):
    bigquery.Client.SCOPE = (
        'https://www.googleapis.com/auth/bigquery',
        'https://www.googleapis.com/auth/cloud-platform',
        'https://www.googleapis.com/auth/drive')
    client = bigquery.Client.from_service_account_json(_KEY_FILE)
    if self._params['bq_data_location'].strip():
        client.location = self._params['bq_data_location']
    return client
However, it does not appear that this is the correct way to inform the BigQuery API of a dataset location.
For additional context, in my SQL that I am passing to the BigQuery API, I am already specifying the PROJECT_ID.DATASET_ID.TABLE_ID, however, this does not seem to be sufficient to find regional data.
Furthermore, I am making this request from Google App Engine using the CRMint open source data flow platform.
Can you please help me with an example of how location can be added to the BigQuery API for v0.27 so that the API does not return 404?
Thank you!
From the code sample it seems you're likely talking about google-cloud-bigquery 0.27, which was released in Aug 2017 and predates location support (as well as many other features).
Your best bet is to update that dependency to something more recent.
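For illustration, once on a reasonably recent google-cloud-bigquery release (the location keyword was added well after 0.27), the query itself can be made location-aware; a minimal sketch reusing _KEY_FILE from the question:

    from google.cloud import bigquery

    client = bigquery.Client.from_service_account_json(_KEY_FILE)
    query_job = client.query(
        "SELECT * FROM `PROJECT_ID.DATASET_ID.TABLE_ID` LIMIT 10",
        location="northamerica-northeast1",  # must match the dataset's region
    )
    rows = query_job.result()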

First time protege user, trying to export a simple ontology to AWS dynamodb

I am currently using Protege 5.0 and have created a very simple ontology (the pizza example). I was wondering how I would export this ontology to DynamoDB on AWS. I was hoping someone could post a link to a good tutorial on Protege 5.0 or walk me through this. Thanks!
If you are using DynamoDB just to store the content of a file and to be able to access the file at a specific URL, then the process required is the same as for any other file type you would store there. The default way for Protege and most other OWL-related tools to access an ontology is a simple HTTP GET of a provided IRI.
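As an illustration of that access pattern, here is a sketch using the rdflib package and the well-known pizza ontology IRI (swap in whatever URL your hosted file ends up at):

    from rdflib import Graph

    g = Graph()
    # A plain HTTP GET of the ontology document at its IRI
    g.parse("https://protege.stanford.edu/ontologies/pizza/pizza.owl", format="xml")
    print(len(g))  # number of triples loaded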

How to change client schema during provisioning?

I'm rushing (never a good thing) to get Sync Framework up and running for an "offline support" deadline on my project. We have a SQL Express 2008 instance on our server and will deploy SQL CE to the clients. Clients will only sync with the server, no peer-to-peer.
So far I have the following working:
Server schema setup
Scope created and tested
Server provisioned
Client provisioned w/ table creation
I've been very impressed with the relative simplicity of all of this. Then I realized the following:
Schema created through client provisioning to SQL CE does not set up default values for uniqueidentifier types.
FK constraints are not created on client
Here is the code being used to create the client schema (pulled from an example I found somewhere online):
static void Provision()
{
    SqlConnection serverConn = new SqlConnection(
        "Data Source=xxxxx, xxxx; Database=xxxxxx; " +
        "Integrated Security=False; Password=xxxxxx; User ID=xxxxx;");

    // create a connection to the SyncCompactDB database
    SqlCeConnection clientConn = new SqlCeConnection(
        @"Data Source='C:\SyncSQLServerAndSQLCompact\xxxxx.sdf'");

    // get the description of the scope from the SyncDB server database
    DbSyncScopeDescription scopeDesc = SqlSyncDescriptionBuilder.GetDescriptionForScope(
        ScopeNames.Main, serverConn);

    // create CE provisioning object based on the scope
    SqlCeSyncScopeProvisioning clientProvision =
        new SqlCeSyncScopeProvisioning(clientConn, scopeDesc);
    clientProvision.SetCreateTableDefault(DbSyncCreationOption.CreateOrUseExisting);

    // start the provisioning process
    clientProvision.Apply();
}
When Sync Framework creates the schema on the client I need to make the additional changes listed earlier (default values, constraints, etc.).
This is where I'm getting confused (and frustrated):
I came across a code example that shows a SqlCeClientSyncProvider that has a CreatingSchema event. This code example actually shows setting the RowGuid property on a column which is EXACTLY what I need to do. However, what is a SqlCeClientSyncProvider?! This whole time (4 days now) I've been working with SqlCeSyncProvider in my sync code. So there is a SqlCeSyncProvider and a SqlCeClientSyncProvider?
The documentation on MSDN is not very good at explaining what either of these is.
I'm further confused about whether I should make schema changes at provision time or at sync time.
How would you all suggest that I make schema changes to the client CE schema during provisioning?
SqlCeSyncProvider and SqlCeClientSyncProvider are different.
The latter is what is commonly referred to as the offline provider and this is the provider used by the Local Database Cache project item in Visual Studio. This provider works with the DbServerSyncProvider and SyncAgent and is used in hub-spoke topologies.
The one you're using is referred to as a collaboration provider or peer-to-peer provider (which also works in a hub-spoke scenario). SqlCeSyncProvider works with SqlSyncProvider and SyncOrchestrator and has no corresponding Visual Studio tooling support.
Both providers require provisioning the participating databases.
The two types of providers provision the sync objects required to track and apply changes differently. The SchemaCreated event applies to the offline provider only; it fires the first time a sync is initiated and the framework detects that the client database has not been provisioned (it creates the user tables and the corresponding sync framework objects).
The scope provisioning used by the other provider doesn't apply constraints other than the PK, so you will have to do a post-provisioning step to apply the defaults and constraints yourself, outside of the framework.
While researching solutions without using SyncAgent I found that the following would also work (in addition to my commented solution above):
1. Provision the client and let the framework create the client [user] schema. Now you have your tables.
2. Deprovision: this removes the restrictions on editing the tables/columns.
3. Make your changes (in my case, setting Is RowGuid on PK columns and adding FK constraints). This actually required me to drop and re-add a column, as you can't change the "Is RowGuid" property on an existing column.
4. Provision again using DbSyncCreationOption.CreateOrUseExisting.