AWS Capacity Status meaning differences

I am currently using the AWS Price List API and I want to use the "capacityStatus" filter to filter for the cost of an EC2 instance. I see that there are multiple values for it that can be used for filtering purposes.
I see:
AllocatedCapacityReservation
AllocatedHost
NA
UnusedCapacityReservation
Used
Can someone explain to me the differences between the 5? This is specifically for "OnDemand" instances.
My understanding is that AllocatedCapacityReservation is the "cost" when you make a reservation for that allocated capacity "on demand"/"at that given moment". AllocatedHost means the same but in the Dedicated Host sense. I'm just a little bit confused about "UnusedCapacityReservation" and "Used". Does "UnusedCapacityReservation" refer to the cost for a product where you reserve the capacity but it is never used? And what does "Used" mean?
Would anyone be able to direct me to any documentation explaining this in detail, or explain it better to me? I would greatly appreciate it.
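For context, this is roughly how I'm calling the API (a minimal boto3 sketch; the instance type, region, and other attribute values are just placeholders I picked for illustration):

```python
import json

import boto3

# The Price List API endpoint is only available in a couple of regions (e.g. us-east-1).
pricing = boto3.client("pricing", region_name="us-east-1")

# Minimal sketch: filter EC2 On-Demand products by capacity status.
# Note the attribute name is all lowercase ("capacitystatus") in the filter fields.
# The attribute values below (instance type, location, OS, tenancy) are placeholders.
response = pricing.get_products(
    ServiceCode="AmazonEC2",
    Filters=[
        {"Type": "TERM_MATCH", "Field": "capacitystatus", "Value": "Used"},
        {"Type": "TERM_MATCH", "Field": "instanceType", "Value": "m5.large"},
        {"Type": "TERM_MATCH", "Field": "location", "Value": "US East (N. Virginia)"},
        {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
        {"Type": "TERM_MATCH", "Field": "tenancy", "Value": "Shared"},
        {"Type": "TERM_MATCH", "Field": "preInstalledSw", "Value": "NA"},
    ],
)

# Each entry in PriceList is a JSON string describing one product.
for price_item in response["PriceList"]:
    product = json.loads(price_item)
    attrs = product["product"]["attributes"]
    print(attrs["instanceType"], attrs["capacitystatus"])
```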

Related

How do I limit the max supply of an Elrond NFT collection?

I'm currently working with the Elrond blockchain, and I've successfully registered a new NFT collection & created an NFT following the documentation example here.
However, I need to limit the amount of NFTs that can be created for a specific collection. The documentation says:
The ESDT manager can stop the creation of an NFT for the given ESDT forever by removing the only ESDTRoleNFTCreate role available.
So I guess I could just create the max supply of a collection (e.g. 100 NFTs for collection1) and then remove the creation role. But this means I have to create all the NFTs beforehand, which is not ideal.
Is there another way to limit the max supply, maybe when registering/issuing the collection itself?
On a separate note, is there good documentation on the differences between EGLD / ESDT / SFT / NFT? I'm particularly interested in SFTs, but I don't understand how the state change from fungible -> non-fungible works (e.g. once the SFT becomes non-fungible, is it stuck in that state or can it become fungible again?). The documentation on that is quite unclear.
Thanks in advance for your help!
There is no optional "limit supply" setting for an NFT collection.
After you create the collection and give the ESDTRoleNFTCreate role to your address or a contract, you can call the mint function as many times as you like.
However, if you use a SC to mint the NFTs, you can set a variable to limit the maximum number of allowed mints (see the sketch below).
You can't convert an NFT into an SFT or an SFT into an NFT.
The only conversion allowed is turning an SFT into a Meta-ESDT, with no way back. This is possible only because SFTs are really close to Meta-ESDTs, just with no decimals :)
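To make the SC idea above concrete: the guard is just a counter checked against a max supply fixed at deployment. Real Elrond contracts are written in Rust with elrond-wasm; this Python sketch only illustrates the logic, and every name in it is made up.

```python
class LimitedNftMinter:
    """Illustration only: the max-supply guard a minting SC would enforce.

    Not Elrond code; real contracts are written in Rust (elrond-wasm).
    """

    def __init__(self, max_supply: int):
        self.max_supply = max_supply  # fixed when the contract is deployed
        self.minted = 0               # how many NFTs have been created so far

    def mint(self) -> int:
        # Refuse to mint once the cap is reached: this is the "variable to
        # limit the maximum allowed mints" mentioned in the answer above.
        if self.minted >= self.max_supply:
            raise RuntimeError("max supply reached")
        self.minted += 1
        return self.minted  # nonce of the freshly created NFT


minter = LimitedNftMinter(max_supply=100)
for _ in range(100):
    minter.mint()
# The 101st call would raise "max supply reached".
```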

Question about Uniswap fees that keep changing

I am doing research on Uniswap, but I can't find explanations regarding its Swap UI.
What I want to know is what the 0.275% in the second row means.
I am well aware that Uniswap collects transaction fees for the liquidity pools, but the number I mentioned keeps changing and even seems to be positive at times.
Is the 0.275% a liquidity pool fee?
If you hover over the number, it says "The estimated difference between the USD values of input and output amounts", as you can see in the screenshot below:
You can read more about pricing in the Uniswap docs.
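Based on that tooltip wording, the number appears to be nothing more than the relative difference between the two USD values, roughly like this (a sketch, not Uniswap's actual code; because it bundles price impact and the LP fee together and depends on external USD price feeds, it moves around and can even turn positive):

```python
def usd_value_difference(usd_in: float, usd_out: float) -> float:
    """Percentage difference between the USD value you pay and the USD value you receive.

    A sketch based on the tooltip wording, not Uniswap's implementation.
    Negative means you receive less value than you put in (price impact + LP fee);
    it can be positive when the pool price lags the USD price feed.
    """
    return (usd_out - usd_in) / usd_in * 100


# Example: swapping $1,000 worth of token A for $997.25 worth of token B
print(f"{usd_value_difference(1000.0, 997.25):.3f}%")  # -0.275%
```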

Generate blocks of 128 MB in NiFi

What I'm trying to do is write blocks of 128 MB into HDFS. I've been trying several processors but can't find the right one, or I haven't identified the correct property:
This is pretty much how the flow looks:
Right now I'm using PutParquet, but this processor doesn't have a property for that.
The previous processor is a MergeContent, and this is its configuration:
and on the SplitAvro I have the following configuration:
Hope someone can help; I'm really stuck trying to do this.
You shouldn't need the SplitAvro or ConvertAvroToJSON; if you use MergeRecord instead, you can supply an AvroReader and a JsonRecordSetWriter and it will do the conversion for you. If you know the approximate number of records that will fit in an HDFS block, you can set that as the Maximum Number of Records and also cap the Maximum Bin Size. Keep in mind those are soft limits, though, so you might want to set the size to something safer like 100 MB.
When you tried with your flow from the description, what did you observe? Were the files still too big, or did it not seem to obey the min/max limits, etc.?
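For reference, a rough sketch of MergeRecord settings that would target files just under the 128 MB block size (the property names come from the standard MergeRecord processor; the record count and bin age are guesses you would tune to your own record size and latency needs):

```
Record Reader               AvroReader
Record Writer               JsonRecordSetWriter
Merge Strategy              Bin-Packing Algorithm
Minimum Number of Records   1
Maximum Number of Records   1000000      # guess: roughly the record count that fits in one block
Minimum Bin Size            90 MB
Maximum Bin Size            100 MB       # soft limit, kept under 128 MB on purpose
Max Bin Age                 5 min        # so a partially filled bin is eventually flushed
```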

This instance has too many database splits to complete the operation

Until now, no error was issued even when 20 were created per database, but when I suddenly created more than 16, I got an error.
It is totally different from what is described in https://cloud.google.com/spanner/quotas. I do not understand the reason at all.
It is not clear whether you're talking about 20 and 16 secondary indexes, or what type of operation you were trying to execute.
Just to be clear, splits are not indexes. Splits are not exposed to users in Cloud Spanner, see more details on the topic here:
https://cloud.google.com/spanner/docs/schema-and-data-model#database-splits
"Too many database splits" error indicates that you need more nodes to manage your dataset. A node may manage up to 2TB of data:
https://cloud.google.com/spanner/quotas#database_limits
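As a rough back-of-the-envelope check against that 2 TB/node limit (a sketch; plug in the data size your console reports):

```python
import math

TB = 1024 ** 4                 # bytes in a tebibyte
DATA_PER_NODE_LIMIT = 2 * TB   # documented Cloud Spanner limit: 2 TB of data per node

def min_nodes_for(data_size_bytes: int) -> int:
    """Smallest node count that keeps the instance under the 2 TB/node limit."""
    return max(1, math.ceil(data_size_bytes / DATA_PER_NODE_LIMIT))

# Example: ~7 TB of data needs at least 4 nodes.
print(min_nodes_for(7 * TB))  # 4
```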

Scaling down spanner nodes

What are the limitations / considerations in scaling down Spanner nodes? Since there is a tight coupling of nodes to data stored, is it fair to say that it is highly scalable but not elastic? The following is a quote from the Quizlet case study on the GCP website...
"it might be impossible to reduce the number of nodes on your database, even if you previously ran the database with that number of nodes."
The word "might" needs some expanding
To expand on the "might": we restrict the reduction of nodes to meet a 2 TB/node limit for the instance. You can scale up and down, as long as the downsizing doesn't cross that threshold.
Hope this helps!
One thing we would recommend to scale down effectively is deleting unused data (databases, tables, global indexes, rows, etc.). This data will be cleaned up within ~7 days, allowing you to potentially run with lower node counts.
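As a concrete example, resizing is just an instance update. Here is a minimal sketch with the google-cloud-spanner Python client, assuming a placeholder instance id of my-instance; you would want to confirm the new count still satisfies the 2 TB/node limit before shrinking:

```python
from google.cloud import spanner

INSTANCE_ID = "my-instance"   # placeholder; swap in your own instance id
TARGET_NODE_COUNT = 2         # make sure data size <= TARGET_NODE_COUNT * 2 TB first

client = spanner.Client()
instance = client.instance(INSTANCE_ID)
instance.reload()  # fetch the current configuration

print(f"current node count: {instance.node_count}")

# Scaling down is rejected if it would push the instance past the 2 TB/node limit.
instance.node_count = TARGET_NODE_COUNT
operation = instance.update()   # long-running operation
operation.result(timeout=300)   # wait for the resize to finish
```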