I have a GStreamer pipeline as shown below.
SRC -> A -> ROI -> B -> SINK
I created the custom elements (A, ROI, and B) using the GStreamer bad-plugins tool element-maker, as shown below.
./element-maker videofilter
I removed the transform_frame() function from the generated plugin source code and use transform_frame_ip() instead, so that these plugins act as a pass-through.
The above pipeline works fine as a pass-through.
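For context, a pass-through transform_frame_ip() built this way looks roughly like the sketch below (the element name gst_roi_ is only illustrative; the real boilerplate comes from element-maker):
/* Minimal in-place transform: the frame is passed through untouched. */
static GstFlowReturn
gst_roi_transform_frame_ip (GstVideoFilter * filter, GstVideoFrame * frame)
{
  /* No modification here; the buffer simply continues downstream. */
  return GST_FLOW_OK;
}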
Requirement
Here, the 'ROI' element will produce multiple frames that will be passed to the element 'B' for further processing.
To implement this, I looked at the GstBufferPool concept in GStreamer, from the link mentioned below.
https://gstreamer.freedesktop.org/documentation/gstreamer/gstbufferpool.html?gi-language=c
My ROI element acts as the upstream element, and below is the pool configuration inside gst_ROI_init().
static void gst_ROI_init (GstROI *ROI) {
  /* GstBufferPool configuration.
   * `pool` is a shared (file-scope) GstBufferPool* so that it can also be
   * reached from the B element. */
  GstStructure *config;
  GstCaps *caps;
  guint size, min_, max_;

  pool = gst_buffer_pool_new ();
  config = gst_buffer_pool_get_config (pool);
  size = 420 * 420 * 3;  /* for RGB format */
  min_ = 1;
  max_ = 10;
  caps = gst_caps_from_string ("video/x-raw");
  gst_buffer_pool_config_set_params (config, caps, size, min_, max_);
  gst_caps_unref (caps);

  if (gst_buffer_pool_set_config (pool, config)) {
    std::cout << "[INFO] Pool configuration done successfully\n";
    if (gst_buffer_pool_set_active (pool, TRUE)) {
      std::cout << "Successfully activated the pool buffer\n";
    } else {
      std::cout << "[ERROR] Failed to activate the buffer pool\n";
    }
  } else {
    std::cout << "[ERROR] Failed to set the pool config.\n";
  }
}
My 'B' element works as the downstream element, and below is the code to acquire frames from the pool.
static GstFlowReturn
gst_B_transform_frame_ip (GstVideoFilter * filter, GstVideoFrame * frame)
{
  /* GstBufferPool: `pool` is the variable shared across the ROI and B elements. */
  GstBuffer *buf = NULL;
  if (gst_buffer_pool_acquire_buffer (pool, &buf, NULL) == GST_FLOW_EOS) {
    std::cout << "[ERROR] Failed to acquire the frame\n";
  }
  /* ... */
  return GST_FLOW_OK;
}
Problem:
However, the above implementation is not working as expected; it seems I did something wrong.
I have some doubts about GstBufferPool:
1- How do I add frames to the buffer pool in the upstream element?
2- Is there an alternative, simpler way to do the same thing? For example, could I create a shareable array of GstBuffer to store the n cropped images and access that same array inside the B element, instead of implementing a GstBufferPool?
I am very new to this environment; any feedback will be useful to me.
Thank you in advance.
Can anyone give feedback on the above implementation? It would be great if someone could share reference code implementing the same thing with GstBufferPool.
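For question 1, here is a rough, untested sketch of how the upstream (ROI) element could take a buffer out of the pool configured above, fill it with a cropped region, and push it to B. The helper name push_cropped_frame is made up; it assumes `pool` is the shared pool and that caps and size negotiation are handled elsewhere.
/* Sketch only: acquire a buffer from the pool, fill it, and push it downstream. */
static GstFlowReturn
push_cropped_frame (GstROI * roi, GstBufferPool * pool)
{
  GstBuffer *buf = NULL;

  /* Take an idle buffer out of the pool (blocks until one is free). */
  if (gst_buffer_pool_acquire_buffer (pool, &buf, NULL) != GST_FLOW_OK)
    return GST_FLOW_ERROR;

  GstMapInfo map;
  if (gst_buffer_map (buf, &map, GST_MAP_WRITE)) {
    /* ... copy the cropped RGB pixels into map.data here ... */
    gst_buffer_unmap (buf, &map);
  }

  /* Pushing transfers ownership; the buffer returns to the pool
   * automatically once the downstream element (B) unrefs it. */
  return gst_pad_push (GST_BASE_TRANSFORM (roi)->srcpad, buf);
}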
I'm working on a project to attach a video from C++.
I have successfully created a video element from C++.
video = emscripten::val::global("document").call<emscripten::val>("createElement", emscripten::val("video"));
video.set("src", emscripten::val("http://jplayer.org/video/webm/Big_Buck_Bunny_Trailer.webm"));
video.set("crossOrigin", emscripten::val("Anonymous"));
video.set("autoplay", emscripten::val(true));
video.call<void>("load");
But the problem that came up is that I have to wait until the video has buffered enough to play.
My solution is to use pthread to create a thread that waits until the video has buffered enough and then does stuff with the video.
pthread_create(&threadVideo, NULL, attachVideo, NULL);
And inside the attachVideo function:
void *attachVideo(void *arg)
{
pthread_detach(pthread_self());
cout << "ready to run" << endl;
cout << "readyState: " << video["readyState"].as<int>() << endl;
pthread_exit(NULL);
}
And when I run it, this error comes out: "Cannot read properties of undefined (reading 'value')".
Can someone help me with this?
This is because emscripten::val represents an object in JS, and the JS state is all thread-local. Another way of putting it: each thread gets its own JS environment, so an emscripten::val cannot be shared between threads.
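One possible workaround (a sketch only, not tested): keep all emscripten::val access on the main thread and poll readyState with emscripten_async_call instead of spawning a pthread. The global `video` and the readiness threshold below are assumptions for illustration.
#include <emscripten.h>
#include <emscripten/val.h>
#include <iostream>

// Assumed to be the element created on the main thread, as in the question.
static emscripten::val video = emscripten::val::undefined();

static void checkVideoReady(void* /*arg*/)
{
    int readyState = video["readyState"].as<int>();
    if (readyState >= 3)   // HAVE_FUTURE_DATA or better
    {
        std::cout << "readyState: " << readyState << std::endl;
        // ... do stuff with the video here, still on the main thread ...
    }
    else
    {
        // Not ready yet: re-check in 100 ms without blocking the main thread.
        emscripten_async_call(checkVideoReady, nullptr, 100);
    }
}

// After creating the element and calling video.call<void>("load"):
// emscripten_async_call(checkVideoReady, nullptr, 100);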
I've written an app that's based on the SampleGrabberSink example. My application is actually working as I want, but one function I needed to perform was getting the video resolution when the .mp4 source changed partway into the file. I eventually worked out how to do it, but it seems very long-winded and I suspect there must be a simpler way.
In the sample below, is there a way to shorten the code block handling the MESessionStreamSinkFormatChanged case? It seems like what's taking nearly 40 lines of code (counting initialisation and cleanup) should take 1 or 2.
HRESULT RunSession(IMFMediaSession *pSession, IMFTopology *pTopology, OnVideoResolutionChangedFunc onVideoResolutionChanged)
{
IMFMediaEvent *pEvent = NULL;
IMFTopologyNode *pNode = nullptr;
IMFStreamSink *pStreamSink = nullptr;
IUnknown *pNodeObject = NULL;
IMFMediaTypeHandler *pMediaTypeHandler = nullptr;
IMFMediaType *pMediaType = nullptr;
PROPVARIANT var;
PropVariantInit(&var);
HRESULT hr = S_OK;
CHECK_HR(hr = pSession->SetTopology(0, pTopology));
CHECK_HR(hr = pSession->Start(&GUID_NULL, &var));
while(1)
{
HRESULT hrStatus = S_OK;
MediaEventType met;
CHECK_HR(hr = pSession->GetEvent(0, &pEvent));
CHECK_HR(hr = pEvent->GetStatus(&hrStatus));
CHECK_HR(hr = pEvent->GetType(&met));
if(FAILED(hrStatus))
{
printf("Session error: 0x%x (event id: %d)\n", hrStatus, met);
hr = hrStatus;
goto done;
}
else
{
//printf("Session event: event id: %d\n", met);
switch(met)
{
case MESessionStreamSinkFormatChanged:
//std::cout << "MESessionStreamSinkFormatChanged." << std::endl;
{
MF_TOPOLOGY_TYPE nodeType;
UINT64 outputNode{0};
GUID majorMediaType;
UINT64 videoResolution{0};
UINT32 stride{0};
// This seems a ridiculously convoluted way to extract the change to the video resolution. There may
// be a simpler way but then again this is the Media Foundation and COM!
CHECK_HR_ERROR(pEvent->GetUINT64(MF_EVENT_OUTPUT_NODE, &outputNode), "Failed to get output node from media changed event.");
CHECK_HR_ERROR(pTopology->GetNodeByID(outputNode, &pNode), "Failed to get topology node for output ID.");
CHECK_HR_ERROR(pNode->GetObject(&pNodeObject), "Failed to get the node's object pointer.");
CHECK_HR_ERROR(pNodeObject->QueryInterface(IID_PPV_ARGS(&pStreamSink)), "Failed to get media stream sink from activation object.");
CHECK_HR_ERROR(pStreamSink->GetMediaTypeHandler(&pMediaTypeHandler), "Failed to get media type handler from stream sink.");
CHECK_HR_ERROR(pMediaTypeHandler->GetCurrentMediaType(&pMediaType), "Failed to get current media type.");
CHECK_HR_ERROR(pMediaType->GetMajorType(&majorMediaType), "Failed to get major media type.");
if(majorMediaType == MFMediaType_Video)
{
CHECK_HR_ERROR(pMediaType->GetUINT64(MF_MT_FRAME_SIZE, &videoResolution), "Failed to get new video resolution.");
CHECK_HR_ERROR(pMediaType->GetUINT32(MF_MT_DEFAULT_STRIDE, &stride), "Failed to get the new stride.");
std::cout << "Media session video resolution changed to width " << std::to_string(HI32(videoResolution))
<< " and height " << std::to_string(LO32(videoResolution))
<< " and stride " << stride << "." << std::endl;
if(onVideoResolutionChanged != nullptr) {
onVideoResolutionChanged(HI32(videoResolution), LO32(videoResolution), stride);
}
}
break;
}
default:
break;
}
}
if(met == MESessionEnded)
{
break;
}
SafeRelease(&pEvent);
SafeRelease(&pNode);
SafeRelease(&pStreamSink);
SafeRelease(&pNodeObject);
SafeRelease(&pMediaTypeHandler);
SafeRelease(&pMediaType);
}
done:
SafeRelease(&pEvent);
SafeRelease(&pNode);
SafeRelease(&pStreamSink);
SafeRelease(&pNodeObject);
SafeRelease(&pMediaTypeHandler);
SafeRelease(&pMediaType);
return hr;
}
The code looks good. You don't need to do everything on each resolution change; you can retrieve the media type handler just once and keep the pointer around for when it's needed.
On the entertaining comment above I would say the following. Just as in the case of DirectShow, the Sample Grabber is a way to cut corners hard and do something against the design of the pipeline. Almost everyone out there loved the DirectShow Sample Grabber, and the Media Foundation Sample Grabber could have had the same future if there had been enough people developing for Media Foundation in the first place.
Resolution change is generally the business of the primitives, i.e. the source-transform, transform-transform, and transform-sink connections. Even in this scenario you are getting the notification of the resolution change out of band (it is an asynchronous notification for you), and you are lucky that Media Foundation and its Sample Grabber are flexible enough that you can handle this at all.
To implement this reliably you would normally need a custom media sink, but the Sample Grabber lets you cut a corner here too.
With a custom sink implementation you are guaranteed not to receive a media sample with the new resolution before you have agreed on the new resolution in the first place (and you can reject it, of course). With MESessionStreamSinkFormatChanged, however, the event is posted for asynchronous retrieval and the Sample Grabber continues processing, so technically you can receive grabber callbacks with frames of the new resolution before you get the session event.
If, in your real application, you create the output node from a stream sink rather than from a media sink activate as in your example above, you would not need to retrieve the media type handler via the topology nodes; you would be able to pull it directly.
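To illustrate the first suggestion (keeping the handler around), here is a rough sketch only; it assumes you already hold the IMFStreamSink (for example because you built the output node from it), and it reuses SafeRelease and the HI32/LO32 macros from the question:
// Query the media type handler once, up front, and keep the pointer.
IMFMediaTypeHandler *pCachedHandler = nullptr;

HRESULT CacheSinkTypeHandler(IMFStreamSink *pStreamSink)
{
    return pStreamSink->GetMediaTypeHandler(&pCachedHandler);
}

// Then the MESessionStreamSinkFormatChanged case shrinks to roughly this:
HRESULT OnFormatChanged()
{
    IMFMediaType *pMediaType = nullptr;
    HRESULT hr = pCachedHandler->GetCurrentMediaType(&pMediaType);
    if (SUCCEEDED(hr))
    {
        UINT64 frameSize = 0;
        if (SUCCEEDED(pMediaType->GetUINT64(MF_MT_FRAME_SIZE, &frameSize)))
            printf("New resolution: %u x %u\n", HI32(frameSize), LO32(frameSize));
        SafeRelease(&pMediaType);
    }
    return hr;
}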
I am working on a project which requires me to record audio data as .wav files (of 1 second each) from a MIDI synth plugin loaded in the JUCE Demo Audio Plugin Host. Basically, I need to create a dataset automatically (corresponding to different parameter configurations) from the MIDI synth.
Will I have to send MIDI Note On/Off messages to generate audio data? Or is there a better way of getting audio data?
AudioBuffer<FloatType> getBusBuffer (AudioBuffer<FloatType>& processBlockBuffer) const
Is this the function that will solve my needs? If yes, how would I store the data? If not, could someone please guide me to the right function/solution?
Thank you.
I'm not exactly sure what you're asking, so I'm going to guess:
You need to programmatically trigger some MIDI notes in your synth, then write all the audio to a .wav file, right?
Assuming you already know JUCE, it would be fairly trivial to make an app that opens your plugin, sends MIDI, and records audio, but it's probably just easier to tweak the AudioPluginHost project.
Let's break it into a few simple steps (first open the AudioPluginHost project):
Programmatically send MIDI
Look at GraphEditorPanel.h, specifically the class GraphDocumentComponent. It has a private member variable: MidiKeyboardState keyState;. This collects incoming MIDI Messages and then inserts them into the incoming Audio & MIDI buffer that is sent to the plugin.
You can simply call keyState.noteOn (midiChannel, midiNoteNumber, velocity) and keyState.noteOff (midiChannel, midiNoteNumber, velocity) to trigger a note on and off.
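As a rough sketch (not part of AudioPluginHost; it assumes you can reach keyState, e.g. via a small method added to GraphDocumentComponent), a one-second note could be triggered like this:
// Hypothetical helper: play middle C for one second on MIDI channel 1.
void playTestNote (MidiKeyboardState& keyState)
{
    const int midiChannel = 1;
    const int midiNoteNumber = 60;   // middle C
    const float velocity = 0.8f;

    keyState.noteOn (midiChannel, midiNoteNumber, velocity);

    // Release the note one second later, on the message thread.
    Timer::callAfterDelay (1000, [&keyState, midiChannel, midiNoteNumber]
    {
        keyState.noteOff (midiChannel, midiNoteNumber, 0.0f);
    });
}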
Record Audio Output
This is a fairly straightforward thing to do in JUCE — you should start by looking at the JUCE Demos. The following example records output audio in the background, but there are plenty of other ways to do it:
class AudioRecorder : public AudioIODeviceCallback
{
public:
AudioRecorder (AudioThumbnail& thumbnailToUpdate)
: thumbnail (thumbnailToUpdate)
{
backgroundThread.startThread();
}
~AudioRecorder()
{
stop();
}
//==============================================================================
void startRecording (const File& file)
{
stop();
if (sampleRate > 0)
{
// Create an OutputStream to write to our destination file...
file.deleteFile();
ScopedPointer<FileOutputStream> fileStream (file.createOutputStream());
if (fileStream.get() != nullptr)
{
// Now create a WAV writer object that writes to our output stream...
WavAudioFormat wavFormat;
auto* writer = wavFormat.createWriterFor (fileStream.get(), sampleRate, 1, 16, {}, 0);
if (writer != nullptr)
{
fileStream.release(); // (passes responsibility for deleting the stream to the writer object that is now using it)
// Now we'll create one of these helper objects which will act as a FIFO buffer, and will
// write the data to disk on our background thread.
threadedWriter.reset (new AudioFormatWriter::ThreadedWriter (writer, backgroundThread, 32768));
// Reset our recording thumbnail
thumbnail.reset (writer->getNumChannels(), writer->getSampleRate());
nextSampleNum = 0;
// And now, swap over our active writer pointer so that the audio callback will start using it..
const ScopedLock sl (writerLock);
activeWriter = threadedWriter.get();
}
}
}
}
void stop()
{
// First, clear this pointer to stop the audio callback from using our writer object..
{
const ScopedLock sl (writerLock);
activeWriter = nullptr;
}
// Now we can delete the writer object. It's done in this order because the deletion could
// take a little time while remaining data gets flushed to disk, so it's best to avoid blocking
// the audio callback while this happens.
threadedWriter.reset();
}
bool isRecording() const
{
return activeWriter != nullptr;
}
//==============================================================================
void audioDeviceAboutToStart (AudioIODevice* device) override
{
sampleRate = device->getCurrentSampleRate();
}
void audioDeviceStopped() override
{
sampleRate = 0;
}
void audioDeviceIOCallback (const float** inputChannelData, int numInputChannels,
float** outputChannelData, int numOutputChannels,
int numSamples) override
{
const ScopedLock sl (writerLock);
if (activeWriter != nullptr && numInputChannels >= thumbnail.getNumChannels())
{
activeWriter->write (inputChannelData, numSamples);
// Create an AudioBuffer to wrap our incoming data, note that this does no allocations or copies, it simply references our input data
AudioBuffer<float> buffer (const_cast<float**> (inputChannelData), thumbnail.getNumChannels(), numSamples);
thumbnail.addBlock (nextSampleNum, buffer, 0, numSamples);
nextSampleNum += numSamples;
}
// We need to clear the output buffers, in case they're full of junk..
for (int i = 0; i < numOutputChannels; ++i)
if (outputChannelData[i] != nullptr)
FloatVectorOperations::clear (outputChannelData[i], numSamples);
}
private:
AudioThumbnail& thumbnail;
TimeSliceThread backgroundThread { "Audio Recorder Thread" }; // the thread that will write our audio data to disk
ScopedPointer<AudioFormatWriter::ThreadedWriter> threadedWriter; // the FIFO used to buffer the incoming data
double sampleRate = 0.0;
int64 nextSampleNum = 0;
CriticalSection writerLock;
AudioFormatWriter::ThreadedWriter* volatile activeWriter = nullptr;
};
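A rough usage sketch for the class above (again only a sketch; names such as note_0.wav are illustrative): create the recorder, register it with an AudioDeviceManager, record for about a second, then stop.
AudioFormatManager formatManager;
AudioThumbnailCache thumbnailCache { 5 };
AudioThumbnail thumbnail { 512, formatManager, thumbnailCache };
AudioRecorder recorder { thumbnail };

AudioDeviceManager deviceManager;
deviceManager.initialiseWithDefaultDevices (2, 2);   // 2 in, 2 out
deviceManager.addAudioCallback (&recorder);

auto outFile = File::getSpecialLocation (File::userDesktopDirectory)
                   .getChildFile ("note_0.wav");
recorder.startRecording (outFile);
// ... trigger the MIDI note and wait ~1 second, then:
recorder.stop();
deviceManager.removeAudioCallback (&recorder);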
Note that the actual audio callbacks that contain the audio data from your plugin are happening inside the AudioProcessorGraph inside FilterGraph. There is an audio callback happening many times a second where the raw audio data is passed in. It would probably be very messy to change that inside AudioPluginHost unless you know what you are doing — it would probably be simpler to use something like the above example or create your own app that has its own audio flow.
The function you asked about:
AudioBuffer<FloatType> getBusBuffer (AudioBuffer<FloatType>& processBlockBuffer) const
is irrelevant. Once you're already in the audio callback, this would give you the audio being sent to a bus of your plugin (e.g. if your synth had a side chain). What you want to do instead is take the audio coming out of the callback and pass it to an AudioFormatWriter, or preferably an AudioFormatWriter::ThreadedWriter so that the actual writing happens on a different thread.
If you're not at all familiar with C++ or JUCE, Max/MSP or Pure Data might be easier for you to quickly whip something up.
Context:
I am building a recorder for capturing video and audio in separate threads (using Boost thread groups) using FFmpeg 2.8.6 on Ubuntu 16.04. I followed the demuxing_decoding example here: https://www.ffmpeg.org/doxygen/2.8/demuxing_decoding_8c-example.html
Video capture specifics:
I am reading H264 off a Logitech C920 webcam and writing the video to a raw file. The issue I notice with the video is that there seems to be a build-up of artifacts across frames until a particular frame resets it. Here are my frame-grabbing and decoding functions:
// Used for injecting decoding functions for different media types, allowing
// for a generic decode loop
typedef std::function<int(AVPacket*, int*, int)> PacketDecoder;
/**
* Decodes a video packet.
* If the decoding operation is successful, returns the number of bytes decoded,
* else returns the result of the decoding process from ffmpeg
*/
int decode_video_packet(AVPacket *packet,
int *got_frame,
int cached){
int ret = 0;
int decoded = packet->size;
*got_frame = 0;
//Decode video frame
ret = avcodec_decode_video2(video_decode_context,
video_frame, got_frame, packet);
if (ret < 0) {
//FFmpeg users should use av_err2str
char errbuf[128];
av_strerror(ret, errbuf, sizeof(errbuf));
std::cerr << "Error decoding video frame " << errbuf << std::endl;
decoded = ret;
} else {
if (*got_frame) {
video_frame->pts = av_frame_get_best_effort_timestamp(video_frame);
//Write to log file
AVRational *time_base = &video_decode_context->time_base;
log_frame(video_frame, time_base,
video_frame->coded_picture_number, video_log_stream);
#if( DEBUG )
std::cout << "Video frame " << ( cached ? "(cached)" : "" )
<< " coded:" << video_frame->coded_picture_number
<< " pts:" << pts << std::endl;
#endif
/*Copy decoded frame to destination buffer:
*This is required since rawvideo expects non aligned data*/
av_image_copy(video_dest_attr.video_destination_data,
video_dest_attr.video_destination_linesize,
(const uint8_t **)(video_frame->data),
video_frame->linesize,
video_decode_context->pix_fmt,
video_decode_context->width,
video_decode_context->height);
//Write to rawvideo file
fwrite(video_dest_attr.video_destination_data[0],
1,
video_dest_attr.video_destination_bufsize,
video_out_file);
//Unref the refcounted frame
av_frame_unref(video_frame);
}
}
return decoded;
}
/**
* Grabs frames in a loop and decodes them using the specified decoding function
*/
int process_frames(AVFormatContext *context,
PacketDecoder packet_decoder) {
int ret = 0;
int got_frame;
AVPacket packet;
//Initialize packet, set data to NULL, let the demuxer fill it
av_init_packet(&packet);
packet.data = NULL;
packet.size = 0;
// read frames from the file
for (;;) {
ret = av_read_frame(context, &packet);
if (ret < 0) {
if (ret == AVERROR(EAGAIN)) {
continue;
} else {
break;
}
}
//Convert timing fields to the decoder timebase
unsigned int stream_index = packet.stream_index;
av_packet_rescale_ts(&packet,
context->streams[stream_index]->time_base,
context->streams[stream_index]->codec->time_base);
AVPacket orig_packet = packet;
do {
ret = packet_decoder(&packet, &got_frame, 0);
if (ret < 0) {
break;
}
packet.data += ret;
packet.size -= ret;
} while (packet.size > 0);
av_free_packet(&orig_packet);
if(stop_recording == true) {
break;
}
}
//Flush cached frames
std::cout << "Flushing frames" << std::endl;
packet.data = NULL;
packet.size = 0;
do {
packet_decoder(&packet, &got_frame, 1);
} while (got_frame);
av_log(0, AV_LOG_INFO, "Done processing frames\n");
return ret;
}
Questions:
How do I go about debugging the underlying issue?
Is it possible that running the decoding code in a thread other than the one in which the decoding context was opened is causing the problem?
Am I doing something wrong in the decoding code?
Things I have tried/found:
I found this thread that is about the same problem here: FFMPEG decoding artifacts between keyframes
(I cannot post samples of my corrupted frames due to privacy issues, but the image linked to in that question depicts the same issue I have)
However, the answer to the question is posted by the OP without specific details about how the issue was fixed. The OP only mentions that he wasn't 'preserving the packets correctly', but nothing about what was wrong or how to fix it. I do not have enough reputation to post a comment seeking clarification.
I was initially passing the packet into the decoding function by value, but switched to passing by pointer on the off chance that the packet freeing was being done incorrectly.
I found another question about debugging decoding issues, but couldn't find anything conclusive: How is video decoding corruption debugged?
I'd appreciate any insight. Thanks a lot!
[EDIT] In response to Ronald's answer, I am adding a little more information that wouldn't fit in a comment:
I am only calling decode_video_packet() from the thread processing video frames; the other thread, processing audio frames, calls a similar decode_audio_packet() function. So only one thread calls the function. I should mention that I have set thread_count in the decoding context to 1; failing that, I would get a segfault in malloc.c while flushing the cached frames.
I can see this being a problem if process_frames and the frame decoder function were run on separate threads, which is not the case. Is there a specific reason why it would matter whether the freeing is done within the function or after it returns? I believe the decode function is passed a copy of the original packet because multiple decode calls would be required for an audio packet in case the decoder doesn't decode the entire audio packet.
A general problem is that the corruption does not occur all the time. I can debug better if it is deterministic. Otherwise, I can't even say if a solution works or not.
A few things to check:
are you running multiple threads that are calling decode_video_packet()? If you are: don't do that! FFmpeg has built-in support for multi-threaded decoding, and you should let FFmpeg do the threading internally and transparently (see the sketch after this list).
you are calling av_free_packet() right after calling the frame decoder function, but at that point it may not yet have had a chance to copy the contents. You should probably let decode_video_packet() free the packet instead, after calling avcodec_decode_video2().
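A minimal sketch of that first point (assumptions: `decoder` is the AVCodec you found with avcodec_find_decoder, and this runs once during setup, before the decode loop; it is not tested against your capture setup):
// Let FFmpeg manage decode threading itself instead of calling the decode
// function from multiple application threads.
video_decode_context->thread_count = 0;                       // 0 = auto-detect
video_decode_context->thread_type  = FF_THREAD_FRAME | FF_THREAD_SLICE;
if (avcodec_open2(video_decode_context, decoder, NULL) < 0) {
    // handle the error as in the rest of your setup code
}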
General debugging advice:
run it without any threading and see if that works;
if it does, and with threading it fails, use thread debuggers such as tsan or helgrind to help in finding race conditions that point to your code.
it can also help to know whether the output you're getting is reproducible (this suggests a non-threading-related bug in your code) or changes from one run to the next (this suggests a race condition in your code).
And yes, the periodic clean-ups are because of keyframes.
Architecture: GPIB from an external interface is connected to the target (Linux) system via the GPIB bus.
Inside the Linux box, there is an Ethernet cable from the GPIB card to the motherboard.
The PIC_GPIB card on the external interface is IEEE 488.2.
I am sending a query from the external interface to the Linux box.
A few scenarios:
1) If I send a query which does not expect a response back, then the next query sent will work.
2) If I send a query which expects a response back, and I receive and read the response and then fire the next query, it works fine.
3) BUT if I send a query from the external interface, get a response back, and do not read the response, then the next query fails.
I am requesting help for scenario 3.
The coding is done on the Linux side and it is socket programming, using the built-in Linux read and write functions from unistd.h.
My investigation: I have found there is internal memory on the GPIB card on the external interface which stores the value of the previous response until we have read it. Generally I use the IEEE string utility software to write commands that go to the Linux box and read the response via the read button.
Could someone please direct me on how to clear the input buffer or memory which stores the value, so that writes from the external interface continue without having to read the response?
My code on the Linux side has been developed in C++ using socket programming. I have used the built-in write and read functions to write to and read from the GPIB and the JSON server.
Sample code is shown below.
bool GpibClass::ReadWriteFromGPIB()
{
bool check = true;
int n = 0;
char buffer[BUFFER_SIZE];
fd_set read_set;
struct timeval lTimeOut;
// Reset the read mask for the select
FD_ZERO(&read_set);
FD_SET(mGpibFd, &read_set);
FD_SET(mdiffFd, &read_set);
// Set Timeout to check the status of the connection
// when no data is being received
lTimeOut.tv_sec = CONNECTION_STATUS_CHECK_TIMEOUT_SECONDS;
lTimeOut.tv_usec = 0;
cout << "Entered into this function" << endl;
// Look for sockets with available data
if (-1 == select(FD_SETSIZE, &read_set, NULL, NULL, &lTimeOut))
{
cout << "Select failed" << endl;
// We don't know the cause of select's failure.
// Close everything and start from scratch:
CloseConnection(mGpibFd);
CloseConnection(mdiffFd); // this is the different (JSON) server connection
check = false;
}
// Check if data is available from GPIB server,
// and if any read and push it to gpib
if(true == check)
{
cout << "Check data from GPIB after select" << endl;
if (FD_ISSET(mGpibFd, &read_set))
{
n = read(mGpibFd, buffer, BUFFER_SIZE);
cout << "Read from GPIB" << n << " bytes" << endl;
if(0 < n)
{
// write it to different server and check if we get response from it
}
else
{
// Something failed on socket read - most likely
// connection dropped. Close socket and retry later
CloseConnection(mGpibFd);
check = false;
}
}
}
// Check if data is available from different server,
// and if any read and push it to gpib
if(true == check)
{
cout << "Check data from diff server after select" << endl;
if (FD_ISSET(mdiffFd, &read_set))
{
n = read(mdiffFd, buffer, BUFFER_SIZE);
cout << "Read from diff servewr " << n << " bytes" << endl;
if (0 < n)
{
// Append, just in case - makes sure data is sent.
// Extra cr/lf shouldn't cause any problem if the json
// server has already added them
strcpy(buffer + n, "\r\n");
write(mGpibFd, buffer, n + 2);
std::cout <<" the buffer sixze = " << buffer << std::endl;
}
else
{
// Something failed on socket read - most likely
// connection dropped. Close socket and retry later
CloseConnection(mdiffFd);
check = false;
}
}
}
return check;
}
You should ordinarily be reading responses after any operation which could generate them.
If you fail to do that, an easy solution would be to read responses in a loop until you have drained the queue to empty.
You can reset the instrument (probably with *RST), but you would probably lose other state as well. You will have to check its documentation to see if there is a command to reset only the response queue. Checking the documentation is always a good idea, because the number of instruments which precisely comply with the spec is dwarfed by the number which augment or omit parts in unique ways.
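For the drain-the-queue approach, a rough sketch in the style of the question's own socket code (DrainPendingResponses is a made-up name; mGpibFd and BUFFER_SIZE are taken from the question; not tested):
// Sketch only: drain any unread response bytes from the GPIB socket before
// issuing the next command. Uses select() with a zero timeout so the loop
// stops as soon as the socket has nothing left to deliver.
void GpibClass::DrainPendingResponses()
{
    char buffer[BUFFER_SIZE];
    for (;;)
    {
        fd_set read_set;
        FD_ZERO(&read_set);
        FD_SET(mGpibFd, &read_set);

        struct timeval noWait = { 0, 0 };   // poll, do not block
        if (select(mGpibFd + 1, &read_set, NULL, NULL, &noWait) <= 0)
            break;                          // nothing pending (or select error)

        ssize_t n = read(mGpibFd, buffer, sizeof(buffer));
        if (n <= 0)
            break;                          // connection closed or read error
        // Discard the stale response; optionally log it here for debugging.
    }
}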