MPU Disable and Enable sequence after reset - Cortex-M

Once the system is up and the MPU is configured and enabled, how do we re-enable the MPU after a processor reset?
What steps need to be considered for enabling the MPU again?

Related

Do we need to increase an activity's version in SWF to increase its timeout?

We have an existing workflow where we need to increase the timeout for an activity (start-to-close) to enable urgent processing. Do we need to bump the activity's version?
It depends on how you specify the timeout. If you specify it at registration, then you have to change the version, as registration is immutable. If you specify it as part of the invocation, then there is no need for a new version.
If you are using the AWS Flow Framework for Java, use ActivitySchedulingOptions to pass the timeout. Here is the relevant documentation.
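If you are calling the raw SWF API rather than the Flow Framework, the same per-invocation override lives on the ScheduleActivityTask decision. A minimal sketch of the decision shape (activity names are hypothetical; decider plumbing omitted):
// Inside a decider: schedule the activity with a per-invocation timeout.
// This value overrides the default registered with the activity type.
const decision = {
  decisionType: 'ScheduleActivityTask',
  scheduleActivityTaskDecisionAttributes: {
    activityType: { name: 'ProcessOrder', version: '1.0' }, // hypothetical activity
    activityId: 'process-order-1',
    taskList: { name: 'main-task-list' },
    startToCloseTimeout: '3600', // seconds as a string; 'NONE' means unlimited
  },
};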

Reset a Managed Chrome Device with SDK using Google Apps Script

I'm attempting to create a dashboard for admins to allow them to reset a Chrome device managed by Google Admin using Google Apps Script.
I don't see any way to perform a reset using the Admin SDK API. Can this be done?
If you want to deprovision and/or disable a ChromeOS device
The supported actions when using the Directory API, according to the documentation here, are:
deprovision: Removes a device from management. For a device that is no longer active, is being resold, or is being submitted for return/repair, use the deprovision action to dissociate it from management.
disable: If you believe a device in your organization has been lost or stolen, you can disable the device so that no one else can use it. When a device is disabled, all the user can see when turning on the Chrome device is a screen telling them that it’s been disabled, and your desired contact information of where to return the device.
Taking this into account, this is what the request would look like:
POST https://admin.googleapis.com/admin/directory/v1/customer/{customerId}/devices/chromeos/{resourceId}/action
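In Apps Script, the same request can be issued through the Admin SDK advanced service. A minimal sketch, assuming the generated method follows the directory_v1 discovery doc (deprovisionReason is required when deprovisioning):
// Deprovision a device that is being retired.
let actionResource = {
  "action": "deprovision",
  "deprovisionReason": "retiring_device" // required for the deprovision action
};
AdminDirectory.Chromeosdevices.action(actionResource, 'CUSTOMER_ID', 'RESOURCE_ID');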
If you want to reboot and/or remote powerwash a ChromeOS device
However, if you simply plan on doing a powerwash or a reboot, you can make use of the below information:
REBOOT: Reboot the device. Can only be issued to Kiosk and managed guest session devices.
REMOTE_POWERWASH: Wipes the device by performing a power wash. Executing this command in the device will remove all data including user policies, device policies and enrollment policies.
Warning: This will revert the device back to a factory state with no enrollment unless the device is subject to forced or auto enrollment. Use with caution, as this is an irreversible action!
Taking this into account, this is what the request would look like:
POST https://admin.googleapis.com/admin/directory/v1/customer/{customerId}/devices/chromeos/{deviceId}:issueCommand
Apps Script
As for applying any of these in Apps Script, you will have to add the Admin SDK API advanced service, choose the directory_v1 version, and reproduce any of the above requests.
Code
Assuming you want to remote powerwash a device, you will have to write something similar to this:
// Request body for the command to issue.
let resource = {
  // ...add any other fields your use case needs here...
  "commandType": "REMOTE_POWERWASH"
};
let customerId = 'CUSTOMER_ID'; // or 'my_customer' for your own account
let deviceId = 'DEVICE_ID';
AdminDirectory.Customer.Devices.Chromeos.issueCommand(resource, customerId, deviceId);
Not what you are looking for?
You can simply create a feature request on Google's Issue Tracker and provide the details of your task by filling in the form here.
Reference
Directory API Manage ChromeOS Devices.

How to configure istio-proxy to log traceId?

I am using Istio version 1.3.5. Is there any configuration to be set to allow istio-proxy to log the traceId? I am using Jaeger tracing (with the Zipkin protocol) enabled. There is one thing I want to accomplish by having traceId logging:
- log correlation across multiple services upstream, so that I can filter all logs by a certain traceId.
According to the Envoy proxy documentation for Envoy v1.12.0, which is used by Istio 1.3:
Trace context propagation
Envoy provides the capability for reporting tracing information regarding communications between services in the mesh. However, to be able to correlate the pieces of tracing information generated by the various proxies within a call flow, the services must propagate certain trace context between the inbound and outbound requests.
Whichever tracing provider is being used, the service should propagate the x-request-id to enable logging across the invoked services to be correlated.
The tracing providers also require additional context, to enable the parent/child relationships between the spans (logical units of work) to be understood. This can be achieved by using the LightStep (via OpenTracing API) or Zipkin tracer directly within the service itself, to extract the trace context from the inbound request and inject it into any subsequent outbound requests. This approach would also enable the service to create additional spans, describing work being done internally within the service, that may be useful when examining the end-to-end trace.
Alternatively the trace context can be manually propagated by the service:
- When using the LightStep tracer, Envoy relies on the service to propagate the x-ot-span-context HTTP header while sending HTTP requests to other services.
- When using the Zipkin tracer, Envoy relies on the service to propagate the B3 HTTP headers (x-b3-traceid, x-b3-spanid, x-b3-parentspanid, x-b3-sampled, and x-b3-flags). The x-b3-sampled header can also be supplied by an external client to either enable or disable tracing for a particular request. In addition, the single b3 header propagation format is supported, which is a more compressed format.
- When using the Datadog tracer, Envoy relies on the service to propagate the Datadog-specific HTTP headers (x-datadog-trace-id, x-datadog-parent-id, x-datadog-sampling-priority).
TL;DR: the B3 HTTP headers carrying the traceId must be propagated manually by your services.
Additional information: https://github.com/openzipkin/b3-propagation
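As an illustration, here is a minimal Node.js sketch (service names and ports are hypothetical; assumes Node 18+ for the built-in fetch) that logs the traceId and copies the inbound B3 headers onto the outbound call so Envoy can stitch the spans together:
const http = require('http');

// Headers Envoy/Zipkin use to correlate requests across services.
const TRACE_HEADERS = [
  'x-request-id', 'x-b3-traceid', 'x-b3-spanid', 'x-b3-parentspanid',
  'x-b3-sampled', 'x-b3-flags', 'b3',
];

http.createServer(async (req, res) => {
  // Log the traceId with every line so logs can be filtered per trace.
  console.log(JSON.stringify({ traceId: req.headers['x-b3-traceid'], msg: 'handling request' }));
  // Copy the trace context from the inbound request...
  const headers = {};
  for (const name of TRACE_HEADERS) {
    if (req.headers[name]) headers[name] = req.headers[name];
  }
  // ...and inject it into the outbound request to the next service.
  const upstream = await fetch('http://downstream:8080/work', { headers });
  res.end(await upstream.text());
}).listen(8080);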

How do I stop call recording for sensitive information In Amazon Connect

I am working on AWS's Amazon Connect. I am creating a contact flow in which I need to store call recordings, so I am using the Enable Call Recording block of the contact flow, and it is working fine.
Now suppose the contact flow is taking some sensitive information such as credit card details. In this case I need to stop the call recording and then start it again once the user is done with the sensitive information. How can I do this?
Thanks,
Gans
You can disable recording in the contact flow by using the Set recording behavior block, the same way you initially enabled it. Recording can be enabled or disabled as many times as needed during the contact flow itself, prior to routing the call to an agent. If you need to disable recording after the call has already been routed to an agent, you would use a quick connect to send the caller back to a contact flow that sets the recording behavior to disabled, capture the sensitive information with Lex or DTMF, then set the recording back to enabled and reconnect the caller to the agent (the agent is placed on hold by the quick connect in this situation).
The problem with the above approach is that it won't work well if you want the same agent to handle the call. The transfer to the quick connect will put the original leg on hold while attempting to connect back to the same agent, and unless the agent quickly clicks the "swap" button in the CCP, the call will be dropped.
You could transfer back to another queue with recording off and the same agent pool, but this would mean the customer may have to wait again for the call to be connected.
A better solution would be an interactive prompt with an option to disable recording at the start of the contact flow. The timeout could be set to continue with recording enabled, so if nothing is pressed the call continues with recording on.
Routing back to the last agent is still possible with a temporary data store in DynamoDB. The agent has to be blocked from getting calls routed from the queue until the contact re-arrives, or the call will be placed in the agent's personal (invisible) queue.
Use APIs that enable you to start, stop, pause, and resume call recording.
The call recording APIs are available in all AWS regions where Amazon Connect is offered. There is no charge to use these APIs. https://aws.amazon.com/about-aws/whats-new/2020/07/amazon-connect-adds-call-recording-apis/
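A minimal sketch with the AWS SDK for JavaScript v3 (IDs are placeholders; the suspend/resume pair is what the announcement calls pause and resume):
const {
  ConnectClient,
  SuspendContactRecordingCommand,
  ResumeContactRecordingCommand,
} = require('@aws-sdk/client-connect');

const connect = new ConnectClient({ region: 'us-east-1' });

async function captureCardDetails(instanceId, contactId, initialContactId) {
  const params = { InstanceId: instanceId, ContactId: contactId, InitialContactId: initialContactId };
  // Pause recording before the caller reads out the card number...
  await connect.send(new SuspendContactRecordingCommand(params));
  // ...collect the sensitive input here (e.g. via Lex or DTMF)...
  // ...then resume recording for the rest of the call.
  await connect.send(new ResumeContactRecordingCommand(params));
}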

ideal aws instance type for neo4j database

Neo4j 3.2 is installed on an AWS Linux EC2 instance, and I am trying to determine the ideal instance type.
There are different projects persisted in the Neo4j graph, and each project can have around 50k nodes, a million relationships, and around 100k properties. A single save operation can have up to 25k nodes, 75k relationships, and 10k properties. I tried the largest i2, d2, and r instance types; every "save" operation takes almost a minute or more. On my Windows machine, the maximum time it takes is 20 seconds.
What would be the ideal instance type in this scenario?
I also suspect it has something to do with the neo4j.conf file. There are so many JVM parameters mentioned in neo4j.conf on the Linux AWS instance; none of them are specified on my Windows system.
#*****************************************************************
# Neo4j configuration
#
# For more details and a complete list of settings, please see
# https://neo4j.com/docs/operations-manual/current/reference/configuration-settings/
#*****************************************************************
# The name of the database to mount
#dbms.active_database=graph.db
# Paths of directories in the installation.
dbms.directories.data=/var/lib/neo4j/data
dbms.directories.plugins=/var/lib/neo4j/plugins
dbms.directories.certificates=/var/lib/neo4j/certificates
dbms.directories.logs=/var/log/neo4j
dbms.directories.lib=/usr/share/neo4j/lib
dbms.directories.run=/var/run/neo4j
# This setting constrains all `LOAD CSV` import files to be under the `import` directory. Remove or comment it out to
# allow files to be loaded from anywhere in the filesystem; this introduces possible security problems. See the
# `LOAD CSV` section of the manual for details.
dbms.directories.import=/var/lib/neo4j/import
# Whether requests to Neo4j are authenticated.
# To disable authentication, uncomment this line
#dbms.security.auth_enabled=false
# Enable this to be able to upgrade a store from an older version.
#dbms.allow_upgrade=true
# Java Heap Size: by default the Java heap size is dynamically
# calculated based on available system resources.
# Uncomment these lines to set specific initial and maximum
# heap size.
# dbms.memory.heap.initial_size=512m
#dbms.memory.heap.max_size=512m
# The amount of memory to use for mapping the store files, in bytes (or
# kilobytes with the 'k' suffix, megabytes with 'm' and gigabytes with 'g').
# If Neo4j is running on a dedicated server, then it is generally recommended
# to leave about 2-4 gigabytes for the operating system, give the JVM enough
# heap to hold all your transaction state and query context, and then leave the
# rest for the page cache.
# The default page cache memory assumes the machine is dedicated to running
# Neo4j, and is heuristically set to 50% of RAM minus the max Java heap size.
#dbms.memory.pagecache.size=10g
#*****************************************************************
# Network connector configuration
#*****************************************************************
# With default configuration Neo4j only accepts local connections.
# To accept non-local connections, uncomment this line:
dbms.connectors.default_listen_address=0.0.0.0
# You can also choose a specific network interface, and configure a non-default
# port for each connector, by setting their individual listen_address.
# The address at which this server can be reached by its clients. This may be the server's IP address or DNS name, or
# it may be the address of a reverse proxy which sits in front of the server. This setting may be overridden for
# individual connectors below.
#dbms.connectors.default_advertised_address=localhost
# You can also choose a specific advertised hostname or IP address, and
# configure an advertised port for each connector, by setting their
# individual advertised_address.
# Bolt connector
dbms.connector.bolt.enabled=true
#dbms.connector.bolt.tls_level=OPTIONAL
#dbms.connector.bolt.listen_address=:7687
# HTTP Connector. There must be exactly one HTTP connector.
dbms.connector.http.enabled=true
#dbms.connector.http.listen_address=:7474
# HTTPS Connector. There can be zero or one HTTPS connectors.
dbms.connector.https.enabled=true
#dbms.connector.https.listen_address=:7473
# Number of Neo4j worker threads.
#dbms.threads.worker_count=
#*****************************************************************
# SSL system configuration
#*****************************************************************
# Names of the SSL policies to be used for the respective components.
# The legacy policy is a special policy which is not defined in
# the policy configuration section, but rather derives from
# dbms.directories.certificates and associated files
# (by default: neo4j.key and neo4j.cert). Its use will be deprecated.
# The policies to be used for connectors.
#
# N.B: Note that a connector must be configured to support/require
# SSL/TLS for the policy to actually be utilized.
#
# see: dbms.connector.*.tls_level
#bolt.ssl_policy=legacy
#https.ssl_policy=legacy
#*****************************************************************
# SSL policy configuration
#*****************************************************************
# Each policy is configured under a separate namespace, e.g.
# dbms.ssl.policy.<policyname>.*
#
# The example settings below are for a new policy named 'default'.
# The base directory for cryptographic objects. Each policy will by
# default look for its associated objects (keys, certificates, ...)
# under the base directory.
#
# Every such setting can be overriden using a full path to
# the respective object, but every policy will by default look
# for cryptographic objects in its base location.
#
# Mandatory setting
#dbms.ssl.policy.default.base_directory=certificates/default
# Allows the generation of a fresh private key and a self-signed
# certificate if none are found in the expected locations. It is
# recommended to turn this off again after keys have been generated.
#
# Keys should in general be generated and distributed offline
# by a trusted certificate authority (CA) and not by utilizing
# this mode.
#dbms.ssl.policy.default.allow_key_generation=false
# Enabling this makes it so that this policy ignores the contents
# of the trusted_dir and simply resorts to trusting everything.
#
# Use of this mode is discouraged. It would offer encryption but no security.
#dbms.ssl.policy.default.trust_all=false
# The private key for the default SSL policy. By default a file
# named private.key is expected under the base directory of the policy.
# It is mandatory that a key can be found or generated.
#dbms.ssl.policy.default.private_key=
# The private key for the default SSL policy. By default a file
# named public.crt is expected under the base directory of the policy.
# It is mandatory that a certificate can be found or generated.
#dbms.ssl.policy.default.public_certificate=
# The certificates of trusted parties. By default a directory named
# 'trusted' is expected under the base directory of the policy. It is
# mandatory to create the directory so that it exists, because it cannot
# be auto-created (for security purposes).
#
# To enforce client authentication client_auth must be set to 'require'!
#dbms.ssl.policy.default.trusted_dir=
# Client authentication setting. Values: none, optional, require
# The default is to require client authentication.
#
# Servers are always authenticated unless explicitly overridden
# using the trust_all setting. In a mutual authentication setup this
# should be kept at the default of require and trusted certificates
# must be installed in the trusted_dir.
#dbms.ssl.policy.default.client_auth=require
# A comma-separated list of allowed TLS versions.
# By default TLSv1, TLSv1.1 and TLSv1.2 are allowed.
#dbms.ssl.policy.default.tls_versions=
# A comma-separated list of allowed ciphers.
# The default ciphers are the defaults of the JVM platform.
#dbms.ssl.policy.default.ciphers=
#*****************************************************************
# Logging configuration
#*****************************************************************
# To enable HTTP logging, uncomment this line
#dbms.logs.http.enabled=true
# Number of HTTP logs to keep.
#dbms.logs.http.rotation.keep_number=5
# Size of each HTTP log that is kept.
#dbms.logs.http.rotation.size=20m
# To enable GC Logging, uncomment this line
#dbms.logs.gc.enabled=true
# GC Logging Options
# see http://docs.oracle.com/cd/E19957-01/819-0084-10/pt_tuningjava.html#wp57013 for more information.
#dbms.logs.gc.options=-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime -XX:+PrintPromotionFailure -XX:+PrintTenuringDistribution
# Number of GC logs to keep.
#dbms.logs.gc.rotation.keep_number=5
# Size of each GC log that is kept.
#dbms.logs.gc.rotation.size=20m
# Size threshold for rotation of the debug log. If set to zero then no rotation will occur. Accepts a binary suffix "k",
# "m" or "g".
#dbms.logs.debug.rotation.size=20m
# Maximum number of history files for the internal log.
#dbms.logs.debug.rotation.keep_number=7
#*****************************************************************
# Miscellaneous configuration
#*****************************************************************
# Enable this to specify a parser other than the default one.
#cypher.default_language_version=3.0
# Determines if Cypher will allow using file URLs when loading data using
# `LOAD CSV`. Setting this value to `false` will cause Neo4j to fail `LOAD CSV`
# clauses that load data from the file system.
#dbms.security.allow_csv_import_from_file_urls=true
# Retention policy for transaction logs needed to perform recovery and backups.
dbms.tx_log.rotation.retention_policy=1 days
# Enable a remote shell server which Neo4j Shell clients can log in to.
#dbms.shell.enabled=true
# The network interface IP the shell will listen on (use 0.0.0.0 for all interfaces).
#dbms.shell.host=127.0.0.1
# The port the shell will listen on, default is 1337.
#dbms.shell.port=1337
# Only allow read operations from this Neo4j instance. This mode still requires
# write access to the directory for lock purposes.
#dbms.read_only=false
# Comma separated list of JAX-RS packages containing JAX-RS resources, one
# package name for each mountpoint. The listed package names will be loaded
# under the mountpoints specified. Uncomment this line to mount the
# org.neo4j.examples.server.unmanaged.HelloWorldResource.java from
# neo4j-server-examples under /examples/unmanaged, resulting in a final URL of
# http://localhost:7474/examples/unmanaged/helloworld/{nodeId}
#dbms.unmanaged_extension_classes=org.neo4j.examples.server.unmanaged=/examples/unmanaged
#********************************************************************
# JVM Parameters
#********************************************************************
# G1GC generally strikes a good balance between throughput and tail
# latency, without too much tuning.
dbms.jvm.additional=-XX:+UseG1GC
# Have common exceptions keep producing stack traces, so they can be
# debugged regardless of how often logs are rotated.
#dbms.jvm.additional=-XX:-OmitStackTraceInFastThrow
# Make sure that `initmemory` is not only allocated, but committed to
# the process, before starting the database. This reduces memory
# fragmentation, increasing the effectiveness of transparent huge
# pages. It also reduces the possibility of seeing performance drop
# due to heap-growing GC events, where a decrease in available page
# cache leads to an increase in mean IO response time.
# Try reducing the heap memory, if this flag degrades performance.
dbms.jvm.additional=-XX:+AlwaysPreTouch
# Trust that non-static final fields are really final.
# This allows more optimizations and improves overall performance.
# NOTE: Disable this if you use embedded mode, or have extensions or dependencies that may use reflection or
# serialization to change the value of final fields!
dbms.jvm.additional=-XX:+UnlockExperimentalVMOptions
dbms.jvm.additional=-XX:+TrustFinalNonStaticFields
# Disable explicit garbage collection, which is occasionally invoked by the JDK itself.
dbms.jvm.additional=-XX:+DisableExplicitGC
# Remote JMX monitoring, uncomment and adjust the following lines as needed. Absolute paths to jmx.access and
# jmx.password files are required.
# Also make sure to update the jmx.access and jmx.password files with appropriate permission roles and passwords,
# the shipped configuration contains only a read only role called 'monitor' with password 'Neo4j'.
# For more details, see: http://download.oracle.com/javase/8/docs/technotes/guides/management/agent.html
# On Unix based systems the jmx.password file needs to be owned by the user that will run the server,
# and have permissions set to 0600.
# For details on setting these file permissions on Windows see:
# http://docs.oracle.com/javase/8/docs/technotes/guides/management/security-windows.html
#dbms.jvm.additional=-Dcom.sun.management.jmxremote.port=3637
#dbms.jvm.additional=-Dcom.sun.management.jmxremote.authenticate=true
#dbms.jvm.additional=-Dcom.sun.management.jmxremote.ssl=false
#dbms.jvm.additional=-Dcom.sun.management.jmxremote.password.file=/absolute/path/to/conf/jmx.password
#dbms.jvm.additional=-Dcom.sun.management.jmxremote.access.file=/absolute/path/to/conf/jmx.access
# Some systems cannot discover host name automatically, and need this line configured:
#dbms.jvm.additional=-Djava.rmi.server.hostname=$THE_NEO4J_SERVER_HOSTNAME
# Expand Diffie Hellman (DH) key size from default 1024 to 2048 for DH-RSA cipher suites used in server TLS handshakes.
# This is to protect the server from any potential passive eavesdropping.
dbms.jvm.additional=-Djdk.tls.ephemeralDHKeySize=2048
#********************************************************************
# Wrapper Windows NT/2000/XP Service Properties
#********************************************************************
# WARNING - Do not modify any of these properties when an application
# using this configuration file has been installed as a service.
# Please uninstall the service before modifying this section. The
# service can then be reinstalled.
# Name of the service
dbms.windows_service_name=neo4j
#********************************************************************
# Other Neo4j system properties
#********************************************************************
dbms.jvm.additional=-Dunsupported.dbms.udc.source=rpm
Given your configuration and the recommended system requirements in the documentation on the official Neo4j site, I would suggest these instances:
1) If your application is burstable in nature, I would recommend Burstable Performance Instances like t2.xlarge(16 GB RAM, Intel Xeon family processor, moderate Networking Performance) or t2.2xlarge (32 GB RAM, Intel Xeon family processor, moderate Networking Performance).
2) If your application is general purpose and requires high configuration I would recommend:
If cost is a concern- m3.xlarge(4 vCPU, 15 GB RAM, Intel Xeon E5-2670 v2, High Networking Performance, 2 x 40 SSD), m3.2xlarge(8 vCPU, 30 GB RAM, Intel Xeon E5-2670 v2, High Networking Performance, 2 x 80 SSD)
or
If you want the best and the latest generation of General Purpose Instances- m4.xlarge(4 vCPU, 16 GB RAM, Intel Xeon E5-2676 v3, High Networking Performance, EBS Only storage), m4.2xlarge(8 vCPU, 32 GB RAM, Intel Xeon E5-2676 v3, High Networking Performance, EBS Only storage).
For more information on instance types you can refer here.
Finally figured it out.
For the Neo4j graph DB, Ubuntu is the preferred OS. Regarding instance type, one with SSD storage is preferable, so we can go with m3.2xlarge.
Remember to configure IOPS, which will increase throughput, and set heap and page cache memory if required.
See the AWS documentation: Expand EBS Volume, Modify EBS volume.
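For the memory settings, a sketch of what you might uncomment in the posted neo4j.conf on a ~30 GB instance such as m3.2xlarge (tune to your workload; the file's own comments recommend leaving 2-4 GB for the OS):
dbms.memory.heap.initial_size=8g
dbms.memory.heap.max_size=8g
dbms.memory.pagecache.size=16g
Fixing the heap and page cache explicitly, instead of relying on the dynamic defaults, is usually the first thing to try when the same workload is much slower on one machine than another.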