Setting Probes for SimVision in SystemVerilog Code

I'm trying to probe SystemVerilog signals using irun.
While googling, I came across the following example for dumping waves:
initial begin
  $recordfile("sv_wave");
  $recordvars("depth=all", pstest);
end
It seems to work, but other variables show "No Value Available" instead of a value.
I use the script below to run the simulation.
irun \
+access+wrc \
-cdn_vip_root /u572/cadence/installs/VIPCAT113 \
/u572/sv/denaliMem.sv \
/u572/sv/denaliCdn_ahb.sv \
/u572/svExamples/simpleExample/hdl/master_mux.v \
/u572/svExamples/simpleExample/hdl/slave_mux.v \
hdl/ahb_verilog.v \
test2.v \
tb.sv \
-incdir /u572/svExamples/simpleExample \
-timescale 1ps/1ps -top pstest
What am I supposed to do to get values for the variables that show "No Value Available"?

On the simulator side, the command you can use is probe -create <signal> <options>. You can either type it in the irun simulator console or provide it as an instruction in a .tcl file at startup. Refer to the documentation provided with the simulator, under the section Simulator Tcl Commands / probe, for a detailed description and examples.
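For example, a minimal probes.tcl along these lines records everything under the top scope (a sketch only; check the exact option spellings against your installation's probe documentation):
database -open waves -into waves.shm -default
probe -create pstest -all -depth all -database waves
run
Pass it at startup with irun ... -input probes.tcl; the +access+wrc switch already in your script grants the read access the probes need.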

Related

Kakadu, Multi-component transforms on volumetric images

From this list of examples, it seems that Kakadu can compress volumetric images along the z-direction by leveraging the multi-component transform (Part 2) of the JPEG 2000 standard,
specifically example (Aj). I tested it, and it seems to work.
I tried to modify the example above for an image of 1024 x 1024 x 128 pixels:
I want to group the slices in batches of 32 (or 64) and run a full DWT on each batch individually,
but I fail.
This is what I tried:
kdu_compress -i img.rawl\*128#2097152 -o img.jpx -jpx_layers \* \
-jpx_space sLUM Creversible=yes Sdims="{1024,1024}" Clayers=4 \
Mcomponents=32 Nsigned=no Nprecision=12 \
Sprecision=12,12,12,12,12,13 Ssigned=no,no,no,no,no,yes \
Mvector_size:I4=32 Mvector_coeffs:I4=32 \
Mstage_inputs:I25="{0,31}" Mstage_outputs:I25="{0,31}" \
Mstage_collections:I25="{32,32}" \
Mstage_xforms:I25="{DWT,1,4,3,0}" \
Mnum_stages=1 Mstages=25
It fails with this error message:
Kakadu Core Error:
Multi-component transform does not satisfy the constraints imposed by Part 2 of
the JPEG2000 standard. The first transform stage must touch every codestream
image component (no more and no less), while subsequent stages must touch every
component produced by the previous stage.
What am I doing wrong? What's the fix?
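Reading the error literally, the first stage has to consume all 128 codestream components, not just 32, so a batch-of-32 DWT would need four collections of 32 with the DWT record repeated once per collection. An untested sketch of that reading (Mcomponents and the stage inputs/outputs widened to 128; the Sprecision/Ssigned and Mvector settings from the original example are omitted here and would need the same per-batch treatment):
kdu_compress -i img.rawl\*128#2097152 -o img.jpx -jpx_layers \* \
-jpx_space sLUM Creversible=yes Sdims="{1024,1024}" Clayers=4 \
Mcomponents=128 Nsigned=no Nprecision=12 \
Mstage_inputs:I25="{0,127}" Mstage_outputs:I25="{0,127}" \
Mstage_collections:I25="{32,32},{32,32},{32,32},{32,32}" \
Mstage_xforms:I25="{DWT,1,4,3,0},{DWT,1,4,3,0},{DWT,1,4,3,0},{DWT,1,4,3,0}" \
Mnum_stages=1 Mstages=25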

How to implement blue/green deployments in AWS with Terraform without losing capacity

I have seen multiple articles discussing blue/green deployments and they consistently involve forcing recreation of the Launch Configuration and the Autoscaling Group. For example:
https://groups.google.com/forum/#!msg/terraform-tool/7Gdhv1OAc80/iNQ93riiLwAJ
This works well in general, except that the desired capacity of the ASG gets reset to the default, so if my cluster is under load there will be a sudden drop in capacity.
My question is this: is there a way to execute a Terraform blue/green deployment without a loss of capacity?
I don't have a full Terraform-only solution to this.
My approach is to run a small script that gets the current desired capacity, sets a variable, and then uses that variable in the ASG.
# Makefile target: the leading @ silences command echo; recipe lines are tab-indented
handle-desired-capacity:
	@echo "Handling current desired capacity"
	@echo "---------------------------------"
	@if [ "$(env)" == "" ]; then \
		echo "Cannot continue without an environment"; \
		exit 1; \
	fi
	$(eval DESIRED_CAPACITY := $(shell aws autoscaling describe-auto-scaling-groups --profile $(env) | jq -SMc '.AutoScalingGroups[] | select((.Tags[]|select(.Key=="Name")|.Value) | match("prod-asg-app")).DesiredCapacity'))
	@if [ "$(DESIRED_CAPACITY)" == '' ]; then \
		echo Could not determine desired capacity.; \
		exit 1; \
	fi
	@if [ "$(DESIRED_CAPACITY)" -lt 2 -o "$(DESIRED_CAPACITY)" -gt 10 ]; then \
		echo Can only deploy between 2 and 10 instances.; \
		exit 1; \
	fi
	@echo "Desired Capacity is $(DESIRED_CAPACITY)"
	@sed -i.bak 's!desired_capacity = [0-9]*!desired_capacity = $(DESIRED_CAPACITY)!g' $(env)/terraform.tfvars
	@rm -f $(env)/terraform.tfvars.bak
	@echo ""
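Invocation is then something like this (env is the variable the recipe checks), after which you run Terraform as usual against the rewritten $(env)/terraform.tfvars:
make handle-desired-capacity env=prod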
Clearly, this is as ugly as it gets, but it does the job.
I am looking to see if we can get the name of the ASG as an output from the remote state, which I could then use on the next run to fetch the desired capacity, but I'm struggling to understand this well enough to make it useful.
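A sketch of that idea, assuming the previous run stores its state in S3 and exposes the ASG name as an output named asg_name, and assuming Terraform 0.12-style attribute access (all names here are illustrative):
data "terraform_remote_state" "previous" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state"
    key    = "prod/terraform.tfstate"
    region = "eu-west-1"
  }
}
data.terraform_remote_state.previous.outputs.asg_name could then feed the describe-auto-scaling-groups lookup on the next run.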
As a second answer, I wrapped the AWSCLI + jq into a Terraform module.
https://registry.terraform.io/modules/digitickets/cli/aws/latest
module "current_desired_capacity" {
source = "digitickets/cli/aws"
assume_role_arn = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/OrganizationAccountAccessRole"
role_session_name = "GettingDesiredCapacityFor${var.environment}"
aws_cli_commands = ["autoscaling", "describe-auto-scaling-groups"]
aws_cli_query = "AutoScalingGroups[?Tags[?Key==`Name`]|[?Value==`digitickets-${var.environment}-asg-app`]]|[0].DesiredCapacity"
}
and
module.current_desired_capacity.result gives you the current desired capacity of the ASG you have nominated in the aws_cli_query.
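For example, wired into the ASG so a re-apply keeps the live capacity (a sketch; the other arguments stay as you already have them):
resource "aws_autoscaling_group" "app" {
  # ... name, launch configuration, min_size, max_size as before ...
  desired_capacity = module.current_desired_capacity.result

  lifecycle {
    create_before_destroy = true
  }
}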
Again, this is quite ugly, but the formalisation of this means you can now access a LOT of properties from AWS that are not yet available within Terraform.
This is a gentle hack. No resources are passed around, and it was written purely with read-only access to single scalar values in mind, so please use it with care.
As the author, I'd be happy to explain anything about this via the GitHub Issues page at https://github.com/digitickets/terraform-aws-cli/issues

PyUSB: how to specify the type and the buffer size of the endpoint

There are four different ways to send data across USB: Control, Interrupt, Bulk, and Isochronous. (book ref 1)
From the book (book ref 1), page 330:
... Bulk endpoints transfer large amounts of data. These endpoints are usually much larger (they can hold more characters at once) than interrupt endpoints. ...
When I get my IN endpoint, I use the following code:
import usb.core
import usb.util

dev = usb.core.find(idVendor=0x0683, idProduct=0x4108)
if dev is None:
    raise ValueError('Device not found')
dev.reset()
dev.set_configuration()
cfg = dev.get_active_configuration()
intf = cfg[(0, 0)]
epi = usb.util.find_descriptor(
    intf,
    # match the first IN endpoint
    custom_match=lambda e:
        usb.util.endpoint_direction(e.bEndpointAddress) ==
        usb.util.ENDPOINT_IN)
I tried to add the following, but it gives me a syntax error that I don't fully understand:
usb.util.endpoint_type()== \
usb.util.ENDPOINT_TYPE_BULK
Here is another very good source on how to work with USB: link 1
It seems that USB endpoints have parameters that can be specified in Python:
where bEndpointAddress indicates what endpoint this descriptor is describing.
bmAttributes specifies the transfer type. This can either be Control, Interrupt, Isochronous or Bulk Transfers. If an Isochronous endpoint is specified, additional attributes can be selected such as the Synchronisation and usage types.
wMaxPacketSize indicates the maximum payload size for this endpoint.
bInterval is used to specify the polling interval of certain transfers. The units are expressed in frames, thus this equates to either 1ms for low/full speed devices and 125us for high speed devices.
I have tried:
epi.wMaxPacketSize = 72000000  # to make the buffer large
epi.bmAttributes = 3  # trying to change the mode to bulk
My questions are:
Where do I specify what kind of endpoint I am using, on Windows and/or Linux, and how? And how can I change the buffer size of each endpoint?
Try this:
epi = usb.util.find_descriptor(
    intf,
    # match the first bulk IN endpoint
    custom_match=lambda e:
        usb.util.endpoint_direction(e.bEndpointAddress) ==
        usb.util.ENDPOINT_IN
        and usb.util.endpoint_type(e.bmAttributes) ==
        usb.util.ENDPOINT_TYPE_BULK)
But you misunderstood the part about the parameters: bmAttributes and wMaxPacketSize are reported by the USB hardware and are not meant to be changed from Python.
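Once the bulk IN endpoint has been found, you read from it at whatever size the device reports rather than resizing it; for example (the 1000 ms timeout is an arbitrary choice):
# read one packet's worth of data from the bulk IN endpoint
data = epi.read(epi.wMaxPacketSize, timeout=1000)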

Why is rrdtool graph blank?

I am trying to get rrdtool working to plot ifInOctets.
I created the database using:
rrdtool create bandwidth.rrd --start N DS:in:COUNTER:60:U:U RRA:AVERAGE:0.5:1:432
Once a minute I run:
/usr/bin/rrdupdate /srv/www/htdocs/rrdtool/bandwidth.rrd N:`/usr/bin/snmpget -v 2c -Oqv -c secret 192.168.1.1 ifInOctets.2`
If I run
/usr/bin/snmpget -v 2c -Oqv -c secret 192.168.1.1 ifInOctets.2
it returns the correct result.
I then create the rrd graph using:
/usr/bin/rrdtool graph /srv/www/htdocs/mrtg/bandwidth.png -a PNG -w 785 -h 120 -s -129600 -v "Data Throughput" \
'DEF:in=/srv/www/htdocs/rrdtool/bandwidth.rrd:in:AVERAGE' \
'CDEF:kbin=in,1024,/' \
'AREA:in#00FF00:Bandwidth In' \
'GPRINT:kbin:LAST:Last Bandwidth In\: %3.2lf KBps' \
'GPRINT:kbin:AVERAGE:Average Bandwidth In\: %3.2lf KBps'
Is there something obvious I am missing?
If you are collecting the data every minute, then you need to set the RRD step to 60 s (the default is 300 s) using --step=60.
Also, you have the heartbeat for the 'in' DS set to 60. Normally you should set this to twice the step size; otherwise you would need to update every 59 seconds or less. What is happening is that the updates arrive every 60 s, which is exactly the heartbeat time, so most of them are being recorded as unknown.
Change the heartbeat to 120 and the step to 60, and it should work:
rrdtool create bandwidth.rrd --step 60 --start N DS:in:COUNTER:120:U:U RRA:AVERAGE:0.5:1:432
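If you want to confirm what the file actually contains, rrdtool info prints the step and per-DS heartbeat (just a check, not part of the fix):
rrdtool info bandwidth.rrd | grep -E 'step|minimal_heartbeat'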

Loading the same GStreamer elements multiple times in a process

This may be a silly question: how are GStreamer elements loaded multiple times within a process? When GStreamer elements are created, are they shared if one is already present in memory? In my case, one process creates multiple threads, and for each thread I create the following GStreamer elements, link them, and set the pipeline to the PLAYING state: filesrc->Q->filesink. This works. But when I add a GStreamer element (newly written, for processing GstBuffer data) between Q->filesink, all threads stop working. What might be the problem? How can I debug it? Please provide inputs. Thanks in advance. -opensid
The elements live in shared libraries, so their code is in memory only once; each instance occupies some memory for its own state, though. When doing multithreaded work, you should call gst_init() just once, from your main thread. As GStreamer already creates new threads for the data processing, it is safer to create all GStreamer pipelines from one main thread. You can run several pipelines in parallel.
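A minimal sketch of that pattern in C (pipeline strings and file names are placeholders):
#include <gst/gst.h>

int main (int argc, char *argv[])
{
  /* gst_init() runs exactly once, in the main thread */
  gst_init (&argc, &argv);

  /* two independent filesrc ! queue ! filesink pipelines, built from the main thread */
  GstElement *p1 = gst_parse_launch ("filesrc location=a.dat ! queue ! filesink location=a.out", NULL);
  GstElement *p2 = gst_parse_launch ("filesrc location=b.dat ! queue ! filesink location=b.out", NULL);

  /* GStreamer spawns its own streaming threads, so the two pipelines run in parallel */
  gst_element_set_state (p1, GST_STATE_PLAYING);
  gst_element_set_state (p2, GST_STATE_PLAYING);

  /* block until each pipeline posts EOS or an error on its bus */
  GstBus *bus1 = gst_element_get_bus (p1);
  gst_message_unref (gst_bus_timed_pop_filtered (bus1, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_EOS | GST_MESSAGE_ERROR));
  GstBus *bus2 = gst_element_get_bus (p2);
  gst_message_unref (gst_bus_timed_pop_filtered (bus2, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_EOS | GST_MESSAGE_ERROR));

  gst_element_set_state (p1, GST_STATE_NULL);
  gst_element_set_state (p2, GST_STATE_NULL);
  gst_object_unref (bus1);
  gst_object_unref (bus2);
  gst_object_unref (p1);
  gst_object_unref (p2);
  return 0;
}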
I agree with ensonic's answer as far as it applies to data stored in klass; however, it doesn't seem to apply to GstBuffers. I am working my way through versions of an IEEE 1278 audio transform based on basetransform. In one version there is a filter plug-in that passes UDP packets through based on settable properties, and a plug-in for a two-way transform, IEEE 1278 <-> mulaw, depending on what the pads are set to.
For a simple test I tried a loop:
gst-launch-1.0 -v filesrc
location=IsaacAsimov-Foundation1Of8_64kb.mp3 \
! mpegaudioparse \
! mpg123audiodec \
! 'audio/x-raw,rate=8000,channels=1' \
! audioresample \
! 'audio/x-raw,rate=8000,channels=1' \
! mulawenc \
! 'audio/x-mulaw,rate=8000,channels=1' \
! dissignalaudio \
! disfilter \
! dissignalaudio \
! 'audio/x-mulaw,rate=8000,channels=1' \
! mulawdec \
! 'audio/x-raw,rate=8000,channels=1' \
! autoaudiosink
No matter what I did to the GstBuffer data or metadata in dissignalaudio_transform, the output audio had a lot of strong clicking noise. g_print calls in mulawdec showed that none of my transform changes were arriving at mulawdec. I separated the loop into two launch pipelines using UDP loop-back, and the noise went away. Somehow the GstBuffer from the first instance of dissignalaudio was overriding the second instance.
Lesson learned:
There is a reason there are no examples of two-way transforms and all transforms have separate encode and decode plug-ins.