Can I change the chunk size to always be the same? What I need is a fixed chunk size of 100 MB. I want to send files from the browser without passing them through the server. I use Signature Version 4. It would be cool if we could restrict the maximum file size and maximum chunk size.
When using recvmsg I use MSG_TRUNC and MSG_PEEK like so:
msgLen = recvmsg(fd, &hdr, MSG_PEEK | MSG_TRUNC);
This gives me the size of the buffer to allocate for the next message.
My question is: how do I get the size of the buffer I should allocate for the msg_control field inside the header?
Based on the docs, you need to allocate the buffer for msg_control with the size msg_controllen. To know the size beforehand, you can call recvmsg(fd, &hdr, MSG_PEEK | MSG_TRUNC) as you did: MSG_PEEK won't remove the message from the queue, and MSG_TRUNC makes the call return the full size of the message even if the supplied buffer is too small.
A few solutions:
Call recvmsg(fd, &hdr, MSG_PEEK | MSG_TRUNC), initialize the buffers in hdr based on the size returned, and call it again without the flags (see the sketch after this list).
Allocate a buffer big enough, if you know the size of your messages beforehand, and call recvmsg. If an error occurs (-1 returned), check whether the message was truncated (MSG_TRUNC or MSG_CTRUNC set in msg_flags).
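A minimal sketch of the first option, assuming a UDP socket fd that is already bound; the control-buffer size here is just a guess, and MSG_CTRUNC is checked after the real read:

#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>

/* peek for the payload size, then do the real read */
ssize_t recv_sized(int fd)
{
    struct msghdr hdr;
    memset(&hdr, 0, sizeof(hdr));

    /* with MSG_TRUNC, recvmsg returns the real payload size even
       though no data buffer is supplied at all */
    ssize_t msgLen = recvmsg(fd, &hdr, MSG_PEEK | MSG_TRUNC);
    if (msgLen < 0)
        return -1;

    char *payload = (char *)malloc(msgLen);
    char cbuf[256];                     /* guessed control-buffer size */
    struct iovec iov;
    iov.iov_base = payload;
    iov.iov_len  = (size_t)msgLen;

    hdr.msg_iov        = &iov;
    hdr.msg_iovlen     = 1;
    hdr.msg_control    = cbuf;
    hdr.msg_controllen = sizeof(cbuf);

    ssize_t n = recvmsg(fd, &hdr, 0);   /* the real read, no flags */
    if (n >= 0 && (hdr.msg_flags & MSG_CTRUNC)) {
        /* control buffer was too small: enlarge and try again */
    }
    free(payload);
    return n;
}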
I cannot speak for platforms other than macOS (whose kernel is based on a FreeBSD core, so BSD systems may be no different), and the POSIX standard is not helpful either, as it leaves pretty much all details to be defined by the protocol. But the default behavior of recvmsg on macOS for a UDP socket is to not deliver any control data at all: no matter what size you set msg_controllen to on input, it will always be 0 on output. If you wish to receive any control data, you first have to explicitly enable that for the socket.
E.g. if you want to know both addresses, source and destination address of a packet (msg_name only gives you the source address of a received packet), then you have to do this:
int yes = 1;
setsockopt(soc, IPPROTO_IP, IP_RECVDSTADDR, &yes, sizeof(yes));
And now you'll get the destination address for IPv4 sockets documented as
The msg_control field in the msghdr structure points to a buffer that
contains a cmsghdr structure followed by the IP address. The cmsghdr
fields have the following values:
cmsg_len = sizeof(struct in_addr)
cmsg_level = IPPROTO_IP
cmsg_type = IP_RECVDSTADDR
This means you need to provide at least 16 bytes of storage on my system, as struct cmsghdr alone is always 12 bytes there (three times 32 bit) and an IPv4 address is another 4 bytes, so 16 bytes together. This value needs to be correctly rounded using the CMSG_SPACE macro, but on my system the macro only makes sure the result is a multiple of 32 bit, and 16 bytes already is such a multiple, so CMSG_SPACE(sizeof(struct in_addr)) returns 16 for me.
As I know in advance which options I have enabled and which control data I will receive, I can exactly calculate the required space in advance.
For raw and other more obscure sockets, certain control data may always be included in the output by default, even if not explicitly enabled, but this control data will then always be equal in size and won't fluctuate from packet to packet as the packet payload size does. Thus once you know the correct size, you can rely upon the fact that it won't change, at least not without you enabling/disabling any options.
If your control data buffer was too small, the MSG_CTRUNC flag is set in msg_flags on output, always (even if you don't set any flags on input). In that case you need to increase the control data buffer size and try again, with the next packet, or with the same packet if you used MSG_PEEK as an input flag, until you've once been able to make that call without getting MSG_CTRUNC on output. Then look at what the msg_controllen field says: on input it's the amount of buffer space available, but on output it contains the exact amount of buffer space that was actually used. This is the exact buffer size you need to receive the control data of all future packets of that socket, unless you change options that cause more or less control data to be sent, in which case you just have to detect that size again the same way as before.
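Putting that together, a minimal sketch (assuming IP_RECVDSTADDR was enabled as shown above) that sizes the control buffer with CMSG_SPACE and walks the cmsg list to pull out the destination address:

#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* returns 0 on success and fills *dst with the packet's destination */
int recv_with_dst(int soc, struct in_addr *dst)
{
    char payload[1500];
    char cbuf[CMSG_SPACE(sizeof(struct in_addr))];   /* 16 bytes here */

    struct iovec iov;
    iov.iov_base = payload;
    iov.iov_len  = sizeof(payload);

    struct msghdr hdr;
    memset(&hdr, 0, sizeof(hdr));
    hdr.msg_iov        = &iov;
    hdr.msg_iovlen     = 1;
    hdr.msg_control    = cbuf;
    hdr.msg_controllen = sizeof(cbuf);

    if (recvmsg(soc, &hdr, 0) < 0 || (hdr.msg_flags & MSG_CTRUNC))
        return -1;

    for (struct cmsghdr *c = CMSG_FIRSTHDR(&hdr); c != NULL;
         c = CMSG_NXTHDR(&hdr, c)) {
        if (c->cmsg_level == IPPROTO_IP && c->cmsg_type == IP_RECVDSTADDR) {
            memcpy(dst, CMSG_DATA(c), sizeof(*dst));
            return 0;
        }
    }
    return -1;
}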
For a more complete example, you may also have a look at:
https://stackoverflow.com/a/49308499/15809
I am afraid you can't get that value from the POSIX.1g sockets API. I'm not sure about all implementations, but it is not possible in Linux. As you may notice, no flow-control information is provided for ancillary data buffers, so you will need to implement it yourself if you are sending a lot of info between processes. On the other hand, for common use cases you already know at compile time what you are going to receive (but you probably already knew this). If you do need to implement your own control flow, take into account that, in Linux, ancillary data seems to behave like a stream socket.
However, you can get/set the worst-case buffer length in /proc/sys/net/core/optmem_max; see cmsg(3). So, I guess you could set it to a reasonable value and declare a buffer that big.
This is the first time I'm working with WAV files.
The problem is that I don't exactly understand how to properly read the stored data. My code for reading:
uint8_t* buffer = new uint8_t[BUFFER_SIZE];
std::cout << "Buffering data... " << std::endl;
while ((bytesRead = fread(buffer, sizeof buffer[0], BUFFER_SIZE / (sizeof buffer[0]), wavFile)) > 0)
{
// do something with the buffer data
}
The sample file's header tells me the data is PCM (1 channel) with 8 bits per sample and a sampling rate of 11025 Hz.
The output data gives me (after updates) values from 0 to 255, so the values are proper PCM values for 8-bit modulation. But any idea what BUFFER_SIZE would be preferable to correctly read those values?
WAV file I'm using: http://www.wavsource.com/movies/2001.htm (daisy.wav)
TXT output: https://paste.ee/p/pXGvm
You've got two common situations. The first is where the WAV file represents a short audio sample and you want to read the whole thing into memory and manipulate it. Then BUFFER_SIZE is a variable: basically you seek to the end of the file to get its size, then load it (see the sketch below).
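For that first case, a minimal sketch, assuming the canonical 44-byte PCM header (real files can carry extra chunks, so treat that offset as an assumption):

#include <cstdio>
#include <cstdint>
#include <vector>

std::vector<uint8_t> loadSamples(std::FILE *wavFile)
{
    std::fseek(wavFile, 0, SEEK_END);
    long fileSize = std::ftell(wavFile);
    std::fseek(wavFile, 44, SEEK_SET);            // skip the 44-byte header
    std::vector<uint8_t> samples(fileSize - 44);  // 8-bit mono PCM: one byte per sample
    std::fread(samples.data(), 1, samples.size(), wavFile);
    return samples;
}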
The second common situation is that the WAV file represents a fairly long audio recording, and you want to process it piecewise, often by writing to an output device in real time. So BUFFER_SIZE needs to be large enough to hold a bite-sized chunk, but not so large that you require excessive memory. Often the size of a "frame" of audio is given by the output device itself: it expects, say, 25 frames per second to synchronise with video, or something similar. You generally need a double buffer to ensure that you can always meet the demand for more samples when the DAC (digital to analogue converter) runs out; then, on handing out a chunk, you load the next chunk of data from disk. Sometimes there isn't a "right" value for the chunk size, and you've just got to go with something fairly sensible that balances memory footprint against the number of read calls.
If you need to do an FFT, it's normal to use a buffer size that is a power of two, to make the fast transform simpler. The size you need depends on the lowest frequency you are interested in.
I read but am still confused:
what happens if a file's size is less than the block size?
If the file size is 1 MB, will it consume 64 MB or only 1 MB?
It will consume only 1 MB. The remaining space can be used to store other files' blocks.
Ex: Consider an HDFS data node with a total size of 128 MB and a block size of 64 MB.
Then HDFS can store two 64 MB blocks,
or 128 one-megabyte blocks,
or any number of blocks that together consume the data node's 128 MB.
I have a requirement to create a new volume (it can be static) based on the size of the UBIFS image (say rootfs.ubifs) which I am going to write into that volume. The aim is to create the volume with the minimum possible size required to write rootfs.ubifs to it and boot the device from it.
Can somebody please help me in this regard?
The difference is the overhead of the UBI layer. This is documented as O on the web page, or:
O - the overhead related to storing EC and VID headers in bytes, i.e. O = SP - SL.
SP is the physical erase block size and SL is what UBIFS will get. Usually, it is the minimum page size times two: one page for an EC header and another for a VID header, the two structures that UBI uses to manage the flash. Both are defined in ubi-media.h; EC is the ubi_ec_hdr structure and VID is the ubi_vid_hdr structure. The EC (erase count) is written every time an erase block is erased, and it is responsible for wear leveling (see note below). The VID (volume ID) header allows UBI to support multiple volumes and provides the PEB-to-LEB (physical to logical erase block) management.
So for a 2k-page NAND flash without sub-pages, the overhead is 4k; if sub-pages are supported, both headers can be placed in the same page and only 2k is needed. If your flash page size differs, just multiply the page size by two when there are no sub-pages, and count a single page of overhead when you do have sub-pages. The overhead for NOR flash is 256 bytes, as it has no concept of pages.
In order to create your rootfs.ubifs, you must have specified a logical erase block size (to mkfs.ubifs). The difference between the logical erase block (LEB) and the physical erase block (PEB) is just the overhead documented above. Multiply the size of your rootfs.ubifs by PEB/LEB to get the minimum possible size for the UBI volume (see the sketch after the note below).
note: If an erase is interrupted (reset/power cycle) between the actual erase and the EC write, an average of all the other erase blocks' counts is used to set the erase count when UBI re-reads the UBI device.
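To make the multiplication concrete, a hedged sketch assuming a 128 KiB PEB on a 2 KiB-page NAND without sub-pages (so O = 4 KiB and LEB = 124 KiB); substitute your own flash geometry:

long long min_volume_size(long long rootfs_ubifs_bytes)
{
    const long long peb = 128 * 1024;   // physical erase block size
    const long long o   = 2 * 2048;     // EC + VID headers, one page each
    const long long leb = peb - o;      // what UBIFS actually gets per block

    // round up: every started LEB of data occupies a full PEB on flash
    long long lebs = (rootfs_ubifs_bytes + leb - 1) / leb;
    return lebs * peb;
}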
I've written a small program which is able to upload files to a server via ftp. Because of the large size of some files I want to create a progress bar for the user. So during the upload I need to know at certain intervals how many bytes have been sent to the server in order to derive the percentage of the file that has been uploaded. What I have tried so far:
While I call the function FtpPutFile() to upload the file, I spawn a thread with the following code:
hInternet = InternetOpen(NULL,INTERNET_OPEN_TYPE_DIRECT,NULL,NULL,0);
hFtpSession = InternetConnect(hInternet, ftpserver, port, user, pass, INTERNET_SERVICE_FTP, 0, 0);
int filesize = 0; // 2GB max
hFile = FtpOpenFile(hFtpSession,szFileTitle,GENERIC_READ,FTP_TRANSFER_TYPE_BINARY,0);
filesize = FtpGetFileSize(hFile,0);
cout << "Size: " << filesize << endl;
However, this doesn't seem to work, as filesize keeps returning a value of -1. I think this is because I'm writing to the file (the uploading part) and at the same time trying to read it to get the file size, and I suspect that is not possible (please correct me if I'm wrong).
My main question: is there another way to create a progress bar for ftp uploading? Perhaps counting the bytes before they are uploaded using the function readBytesCount() (not sure if this is possible at all).
You want to:
Call InternetSetStatusCallback to set a function that will be called periodically during the transfer.
Pass a (non-zero) value as the last parameter to FtpOpenFile. This will be passed back to your status callback function during the transfer.
Then, during the FTP operation, your callback function will be invoked periodically with information about the progress of the transfer, which it can then display to the user.
I don't believe this will let you show actual bytes as they're being transferred, though -- if memory serves, it mostly shows the discrete steps in a transfer: opening the handle, resolving names, sending/receiving cookies, and finally closing the handle.
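A rough sketch of that wiring (the status constants are WinInet's; the context value and what you do with it are up to you):

#include <windows.h>
#include <wininet.h>

void CALLBACK OnStatus(HINTERNET h, DWORD_PTR context, DWORD status,
                       LPVOID info, DWORD infoLen)
{
    switch (status) {
    case INTERNET_STATUS_SENDING_REQUEST:
    case INTERNET_STATUS_REQUEST_SENT:
    case INTERNET_STATUS_REQUEST_COMPLETE:
        // advance the progress UI via the context value
        break;
    }
}

// after InternetOpen:
// InternetSetStatusCallback(hInternet, OnStatus);
// then pass a non-zero context as the last FtpOpenFile parameter:
// FtpOpenFile(hFtpSession, szFileTitle, GENERIC_WRITE,
//             FTP_TRANSFER_TYPE_BINARY, (DWORD_PTR)&myContext);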
To deal with the actual bytes being written during the transfer of the file itself, you'd typically read a buffer-full of data from the local file, then write that buffer with InternetWriteFile. With that, you can compute the percentage transferred as the number of bytes written so far divided by the total size of the file (and multiply by 100).
Well, I solved such a problem by sending the file in chunks and updating the progress as I went, instead of sending the whole file with one call to FtpPutFile.
I mean:
FtpOpenFile(...)
for ( ... )
{
    InternetWriteFile(... dwChunkSize ...)
    UpdateProgressBar(dwChunkSize)
}
InternetCloseHandle(...)
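A fuller sketch of that loop, assuming an already-connected hFtpSession and a hypothetical UpdateProgressBar(sent, total) UI helper:

#include <windows.h>
#include <wininet.h>
#include <cstdio>

void UpdateProgressBar(long sent, long total);   // hypothetical UI hook

bool UploadWithProgress(HINTERNET hFtpSession, const char *localPath,
                        const char *remoteName)
{
    std::FILE *fp = std::fopen(localPath, "rb");
    if (!fp) return false;

    std::fseek(fp, 0, SEEK_END);
    long total = std::ftell(fp);
    std::fseek(fp, 0, SEEK_SET);

    HINTERNET hFile = FtpOpenFileA(hFtpSession, remoteName, GENERIC_WRITE,
                                   FTP_TRANSFER_TYPE_BINARY, 0);
    if (!hFile) { std::fclose(fp); return false; }

    char buffer[64 * 1024];   // 64 KB chunks
    long sent = 0;
    size_t n;
    while ((n = std::fread(buffer, 1, sizeof(buffer), fp)) > 0) {
        DWORD written = 0;
        if (!InternetWriteFile(hFile, buffer, (DWORD)n, &written)) break;
        sent += written;
        UpdateProgressBar(sent, total);
    }

    InternetCloseHandle(hFile);
    std::fclose(fp);
    return sent == total;
}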