Windows Desktop Duplication API taking a long time - c++

I am using the Windows Desktop Duplication API to record my screen on Windows 10, but I am having some performance issues. When playing a video in Google Chrome, the time it takes to capture a frame fluctuates between 15 ms and 45 ms. I want to record at a minimum of 30 fps, and I know the Desktop Duplication API is capable of that. Here is the code I use to actually capture the screen:
// Try to acquire the next frame without waiting (timeout of 0 ms).
processor->hr = processor->lDeskDupl->AcquireNextFrame(0, &processor->lFrameInfo, &processor->lDesktopResource);
if (processor->hr == DXGI_ERROR_WAIT_TIMEOUT) {
    processor->lDeskDupl->ReleaseFrame();
    return false;
}
if (FAILED(processor->hr)) {
    processor->lDeskDupl->ReleaseFrame();
    return false;
}
// QI for ID3D11Texture2D
processor->hr = processor->lDesktopResource->QueryInterface(IID_PPV_ARGS(&processor->lAcquiredDesktopImage));
if (FAILED(processor->hr)) {
    processor->lDeskDupl->ReleaseFrame();
    return false;
}
processor->lDesktopResource.Release();
if (processor->lAcquiredDesktopImage == nullptr) {
    processor->lDeskDupl->ReleaseFrame();
    return false;
}
// Copy the acquired desktop image into a GDI-compatible texture, then release the frame.
processor->lImmediateContext->CopyResource(processor->lGDIImage, processor->lAcquiredDesktopImage);
processor->lAcquiredDesktopImage.Release();
processor->lDeskDupl->ReleaseFrame();
// Copy image into CPU access texture
processor->lImmediateContext->CopyResource(processor->lDestImage, processor->lGDIImage);
// Copy from CPU access texture to bitmap buffer
D3D11_MAPPED_SUBRESOURCE resource;
processor->subresource = D3D11CalcSubresource(0, 0, 0);
processor->lImmediateContext->Map(processor->lDestImage, processor->subresource, D3D11_MAP_READ_WRITE, 0, &resource);
BYTE* sptr = reinterpret_cast<BYTE*>(resource.pData);
BYTE* dptr = processor->pBuf;
UINT lRowPitch = min(processor->lBmpRowPitch, resource.RowPitch);
for (UINT i = 0; i < processor->lOutputDuplDesc.ModeDesc.Height; i++) {
    memcpy_s(dptr, processor->lBmpRowPitch, sptr, lRowPitch);
    sptr += resource.RowPitch;
    dptr += processor->lBmpRowPitch;
}
processor->lImmediateContext->Unmap(processor->lDestImage, processor->subresource); // release the mapping once the copy is done
It is important to note that this is the specific section that takes 15 ms to 45 ms to complete every cycle. The memcpy loop at the bottom usually accounts for about 2 ms of that time, so I know it is not responsible for the delay. Also, AcquireNextFrame's timeout is set to zero, so it returns nearly immediately. Any help would be greatly appreciated! The code pasted here was adapted from this: https://gist.github.com/Xirexel/a69ade44df0f70afd4a01c1c9d9e02cd

You're not using the API in an optimal way. Read the remarks in the ReleaseFrame API documentation:
For performance reasons, we recommend that you release the frame just before you call the IDXGIOutputDuplication::AcquireNextFrame method to acquire the next frame. When the client does not own the frame, the operating system copies all desktop updates to the surface. This can result in wasted GPU cycles if the operating system updates the same region for each frame that occurs.
You are not doing what is written there: you release each frame as soon as you have copied it, so between iterations the OS keeps copying every desktop update into the surface while you do not own the frame.
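A minimal sketch of that restructuring, reusing the same processor members from the question (the Processor type name, the frameAcquired flag, and the capture_frame wrapper are my additions, not part of the original code):

// Sketch: release the previous frame only right before acquiring the next one.
// processor->frameAcquired is an assumed new bool member tracking frame ownership.
bool capture_frame(Processor* processor)
{
    if (processor->frameAcquired) {
        // Release just before AcquireNextFrame, per the ReleaseFrame remarks,
        // so the OS only accumulates updates during the short gap between calls.
        processor->lDeskDupl->ReleaseFrame();
        processor->frameAcquired = false;
    }

    processor->hr = processor->lDeskDupl->AcquireNextFrame(
        0, &processor->lFrameInfo, &processor->lDesktopResource);
    if (FAILED(processor->hr))  // includes DXGI_ERROR_WAIT_TIMEOUT
        return false;
    processor->frameAcquired = true;

    processor->hr = processor->lDesktopResource->QueryInterface(
        IID_PPV_ARGS(&processor->lAcquiredDesktopImage));
    processor->lDesktopResource.Release();
    if (FAILED(processor->hr) || processor->lAcquiredDesktopImage == nullptr)
        return false;

    // Copy while we still own the frame; ReleaseFrame is deferred to the next call.
    processor->lImmediateContext->CopyResource(
        processor->lGDIImage, processor->lAcquiredDesktopImage);
    processor->lAcquiredDesktopImage.Release();
    return true;
}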

Related

WASAPI captured packets do not align

I'm trying to visualize a sound wave captured by WASAPI loopback, but I find that the packets I record do not form a smooth wave when put together.
My understanding of how the WASAPI capture client works is that when I call pCaptureClient->GetBuffer(&pData, &numFramesAvailable, &flags, NULL, NULL), the buffer pData is filled from the front with numFramesAvailable data points. Each data point is a float, and they alternate by channel. Thus, to get all available data points, I should cast pData to a float pointer and take the first channels * numFramesAvailable values. Once I release the buffer and call GetBuffer again, it provides the next packet. I would assume that these packets follow on from each other, but that doesn't seem to be the case.
My guess is that either I'm making an incorrect assumption about the format of the audio data in pData or the capture client is either missing or overlapping frames. But have no idea how to check these.
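Under that assumption, pulling a single channel out of one packet would look something like this (an illustrative sketch, not code from the project):

// Sketch: deinterleave channel `ch` from one captured packet, assuming
// interleaved 32-bit float samples (the shared-mode mix format is typically
// IEEE float; check pwfx->wFormatTag / WAVE_FORMAT_EXTENSIBLE to be sure).
std::vector<float> extractChannel(const float *pfData, UINT32 frames, int channels, int ch)
{
    std::vector<float> out;
    out.reserve(frames);
    for (UINT32 i = 0; i < frames; ++i)
        out.push_back(pfData[i * channels + ch]);
    return out;
}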
To make the code below as brief as possible I've removed things like error status checking and cleanup.
Initialization of capture client:
const CLSID CLSID_MMDeviceEnumerator = __uuidof(MMDeviceEnumerator);
const IID IID_IMMDeviceEnumerator = __uuidof(IMMDeviceEnumerator);
const IID IID_IAudioClient = __uuidof(IAudioClient);
const IID IID_IAudioCaptureClient = __uuidof(IAudioCaptureClient);
IMMDeviceEnumerator *pDeviceEnumerator = NULL;
IMMDevice *pDeviceEndpoint = NULL;
IAudioClient *pAudioClient = NULL;
IAudioCaptureClient *pCaptureClient = NULL;
int channels;
// Initialize audio device endpoint
CoInitialize(nullptr);
CoCreateInstance(CLSID_MMDeviceEnumerator, NULL, CLSCTX_ALL, IID_IMMDeviceEnumerator, (void**)&pDeviceEnumerator);
pDeviceEnumerator->GetDefaultAudioEndpoint(eRender, eConsole, &pDeviceEndpoint);
// Init audio client in loopback mode using the endpoint's mix format
WAVEFORMATEX *pwfx = NULL;
REFERENCE_TIME hnsRequestedDuration = 10000000; // 1 second, in 100-ns units
pDeviceEndpoint->Activate(IID_IAudioClient, CLSCTX_ALL, NULL, (void**)&pAudioClient);
pAudioClient->GetMixFormat(&pwfx);
pAudioClient->Initialize(AUDCLNT_SHAREMODE_SHARED, AUDCLNT_STREAMFLAGS_LOOPBACK, hnsRequestedDuration, 0, pwfx, NULL);
channels = pwfx->nChannels;
pAudioClient->GetService(IID_IAudioCaptureClient, (void**)&pCaptureClient);
pAudioClient->Start(); // Start recording.
Capture of packets (note that std::mutex packet_buffer_mutex and std::vector<std::vector<float>> packet_buffer are already defined and used by another thread to safely display the data):
UINT32 packetLength = 0;
BYTE *pData = NULL;
UINT32 numFramesAvailable;
DWORD flags;
int max_packets = 8;
std::unique_lock<std::mutex> write_guard(packet_buffer_mutex, std::defer_lock);
while (true) {
    pCaptureClient->GetNextPacketSize(&packetLength);
    while (packetLength != 0)
    {
        // Get the available data in the shared buffer.
        pData = NULL;
        pCaptureClient->GetBuffer(&pData, &numFramesAvailable, &flags, NULL, NULL);
        if (flags & AUDCLNT_BUFFERFLAGS_SILENT)
        {
            pData = NULL; // Treat the packet as silence.
        }
        write_guard.lock();
        if (packet_buffer.size() == max_packets) {
            packet_buffer.pop_back();
        }
        if (pData) {
            float *pfData = (float*)pData;
            packet_buffer.emplace(packet_buffer.begin(), pfData, pfData + channels * numFramesAvailable);
        } else {
            packet_buffer.emplace(packet_buffer.begin());
        }
        write_guard.unlock();
        pCaptureClient->ReleaseBuffer(numFramesAvailable);
        pCaptureClient->GetNextPacketSize(&packetLength);
    }
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
}
I store the packets in a vector<vector<float>> (where each vector<float> is a packet), removing the last one and inserting the newest at the start so I can iterate over them in order.
Below is the result of a captured sine wave, plotting alternating values so that it only represents a single channel. It is clear where the packets are being stitched together.
Something is playing a sine wave to Windows; you're recording the sine wave back in the audio loopback; and the sine wave you're getting back isn't really a sine wave.
You're almost certainly running into glitches. The most likely causes of glitching are:
Whatever is playing the sine wave to Windows isn't getting data to Windows in time, so the buffer is running dry.
Whatever is reading the loopback data out of Windows isn't reading the data in time, so the buffer is filling up.
Something is going wrong in between playing the sine wave to Windows and reading it back.
It is possible that more than one of these are happening.
The IAudioCaptureClient::GetBuffer call will tell you if you read the data too late. In particular it will set *pdwFlags so that the AUDCLNT_BUFFERFLAGS_DATA_DISCONTINUITY bit is set.
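For example, you could instrument the existing loop like this (a sketch; the counter is hypothetical):

pCaptureClient->GetBuffer(&pData, &numFramesAvailable, &flags, NULL, NULL);
if (flags & AUDCLNT_BUFFERFLAGS_DATA_DISCONTINUITY)
{
    // The audio engine overwrote data before we read it: a capture-side glitch.
    // Counting these events tells you whether the gaps come from reading too late.
    ++discontinuityCount; // hypothetical diagnostic counter
}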
Looking at your code, I see you're doing the following things between GetBuffer and ReleaseBuffer:
Waiting on a lock
Sometimes doing something called "pop_back"
Doing something called "emplace"
I quote from the above-linked documentation:
Clients should avoid excessive delays between the GetBuffer call that acquires a packet and the ReleaseBuffer call that releases the packet. The implementation of the audio engine assumes that the GetBuffer call and the corresponding ReleaseBuffer call occur within the same buffer-processing period. Clients that delay releasing a packet for more than one period risk losing sample data.
In particular you should NEVER DO ANY OF THE FOLLOWING between GetBuffer and ReleaseBuffer because eventually they will cause a glitch:
Wait on a lock
Wait on any other operation
Read from or write to a file
Allocate memory
Instead, pre-allocate a bunch of memory before calling IAudioClient::Start. As each buffer arrives, write to this memory. On the side, have a regularly scheduled work item that takes written memory and writes it to disk or whatever you're doing with it.
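A minimal single-producer/single-consumer ring buffer along those lines might look like this (an illustrative sketch, not code from the question; all names are mine, and overflow handling when the writer laps the reader is omitted):

#include <atomic>
#include <vector>

// Single-producer / single-consumer float ring buffer, allocated before IAudioClient::Start().
struct AudioRing {
    std::vector<float> data;          // allocated once, up front
    std::atomic<size_t> writePos{0};  // only the capture thread writes this
    std::atomic<size_t> readPos{0};   // only the consumer thread writes this

    explicit AudioRing(size_t capacity) : data(capacity) {}

    // Called between GetBuffer and ReleaseBuffer: no locks, no allocation.
    void push(const float* samples, size_t count) {
        size_t w = writePos.load(std::memory_order_relaxed);
        for (size_t i = 0; i < count; ++i)
            data[(w + i) % data.size()] = samples[i];
        writePos.store(w + count, std::memory_order_release);
    }

    // Called from the display/disk thread; copies out whatever has arrived.
    size_t pop(std::vector<float>& out) {
        size_t r = readPos.load(std::memory_order_relaxed);
        size_t w = writePos.load(std::memory_order_acquire);
        for (size_t i = r; i < w; ++i)
            out.push_back(data[i % data.size()]);
        readPos.store(w, std::memory_order_release);
        return w - r;
    }
};

The capture loop then calls push() and nothing else between GetBuffer and ReleaseBuffer; the visualization thread calls pop() on its own schedule.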

DirectShow CSourceStream::FillBuffer unpredictable number of calls after Pause and Seek to the first frame

I have a DirectShow file source filter with audio and frame output pins. It is written in C++, based on this tutorial on MSDN. My filter opens the video using the Medialooks MFormats SDK and provides raw data to the output pins. Both pins connect directly to renderer filters when the graph is rendered.
The problem occurs when I run the graph, pause the video, and seek to frame number 0. After a call to the ChangeStart method on the output frame pin, FillBuffer is sometimes called three times and frame 1 is shown on the screen instead of frame 0. When it is called two times, it shows the correct frame, which is frame 0.
Output pins are inherited from CSourceStream and CSourceSeeking classes. Here is my FillBuffer and ChangeStart methods of the output frame pin;
FillBuffer Method
HRESULT frame_pin::FillBuffer(IMediaSample *sample)
{
    CheckPointer(sample, E_POINTER);
    BYTE *frame_buffer;
    sample->GetPointer(&frame_buffer);
    // Check if the downstream filter is changing the format.
    CMediaType *mt;
    HRESULT hr = sample->GetMediaType(reinterpret_cast<AM_MEDIA_TYPE**>(&mt));
    if (hr == S_OK)
    {
        auto new_width = reinterpret_cast<VIDEOINFOHEADER2*>(mt->pbFormat)->bmiHeader.biWidth;
        auto old_width = reinterpret_cast<VIDEOINFOHEADER2*>(m_mt.pbFormat)->bmiHeader.biWidth;
        if (new_width != old_width)
            format_changed_ = true;
        SetMediaType(mt);
        DeleteMediaType(mt);
    }
    ASSERT(m_mt.formattype == FORMAT_VideoInfo2);
    VIDEOINFOHEADER2 *vih = reinterpret_cast<VIDEOINFOHEADER2*>(m_mt.pbFormat);
    CComPtr<IMFFrame> mf_frame;
    {
        CAutoLock lock(&shared_state_);
        if (source_time_ >= m_rtStop)
            return S_FALSE;
        // mf_reader_ is a member external SDK instance which gets the frame data with this call
        hr = mf_reader_->SourceFrameConvertedGetByNumber(&av_props_, frame_number_, -1, &mf_frame, CComBSTR(L""));
        if (FAILED(hr))
            return hr;
        REFERENCE_TIME start, stop = 0;
        start = stream_time_;
        stop = static_cast<REFERENCE_TIME>(tc_.get_stop_time() / m_dRateSeeking);
        sample->SetTime(&start, &stop);
        stream_time_ = stop;
        source_time_ += (stop - start);
        frame_number_++;
    }
    if (format_changed_)
    {
        CComPtr<IMFFrame> mf_frame_resized;
        mf_frame->MFResize(eMFCC_YUY2, std::abs(vih->bmiHeader.biWidth), std::abs(vih->bmiHeader.biHeight), 0, &mf_frame_resized, CComBSTR(L""), CComBSTR(L""));
        mf_frame = mf_frame_resized;
    }
    MF_FRAME_INFO mf_frame_info;
    mf_frame->MFAllGet(&mf_frame_info);
    memcpy(frame_buffer, reinterpret_cast<BYTE*>(mf_frame_info.lpVideo), mf_frame_info.cbVideo);
    sample->SetActualDataLength(static_cast<long>(mf_frame_info.cbVideo));
    sample->SetSyncPoint(TRUE);
    sample->SetPreroll(FALSE);
    if (discontinuity_)
    {
        sample->SetDiscontinuity(TRUE);
        discontinuity_ = FALSE;
    }
    return S_OK;
}
ChangeStart Method
HRESULT frame_pin::ChangeStart()
{
    {
        CAutoLock lock(CSourceSeeking::m_pLock);
        tc_.reset();
        stream_time_ = 0;
        source_time_ = m_rtStart;
        frame_number_ = static_cast<int>(m_rtStart / frame_length_);
    }
    update_from_seek();
    return S_OK;
}
From the Microsoft DirectShow documentation:
The CSourceSeeking class is an abstract class for implementing seeking in source filters with one output pin.
CSourceSeeking is not recommended for a filter with more than one output pin. The main issue is that only one pin should respond to seeking requests. Typically this requires communication among the pins and the filter.
And you have two output pins in your source filter.
The CSourceSeeking class can be extended to manage more than one output pin with custom coding. Seek commands will arrive through both output pins, so you'll need to decide which pin controls seeking and ignore seek commands arriving at the other pin, as in the sketch below.
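For illustration, the split could look roughly like this (a sketch of the idea only, not code from the answer; the audio_pin class and the sync_audio_pin_to helper are hypothetical):

// Sketch: only one pin acts on seek requests; the other acknowledges and ignores.
HRESULT audio_pin::SetPositions(LONGLONG *pCurrent, DWORD CurrentFlags,
                                LONGLONG *pStop, DWORD StopFlags)
{
    // Ignore seeks arriving on the audio pin; the filter seeks both streams
    // when the video pin handles the same request.
    return S_OK;
}

HRESULT frame_pin::SetPositions(LONGLONG *pCurrent, DWORD CurrentFlags,
                                LONGLONG *pStop, DWORD StopFlags)
{
    // Let CSourceSeeking update m_rtStart/m_rtStop and call ChangeStart/ChangeStop,
    // then tell the filter so the audio pin restarts from the same position.
    HRESULT hr = CSourceSeeking::SetPositions(pCurrent, CurrentFlags, pStop, StopFlags);
    if (SUCCEEDED(hr))
        filter_->sync_audio_pin_to(m_rtStart); // hypothetical helper on the filter
    return hr;
}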

Media Foundation Webcam live capture freezes in low light condition

We are building video communication software. We use Media Foundation to obtain the live stream, and the IMFSourceReader to perform the capture.
The sequence of call looks like:
hr = pAttributes->SetString(MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_SYMBOLIC_LINK, m_pwszSymbolicLink);
hr = MFCreateDeviceSourceActivate(pAttributes, &avdevice);
hr = avdevice->ActivateObject(__uuidof(IMFMediaSource), (void**) &m_mediaSource);
hr = m_mediaSource->CreatePresentationDescriptor(&pPD);
hr = pPD->GetStreamDescriptorByIndex(m_streamIdx, &fSelected, &pSD);
// we select the best native media type by enumerating the source reader
hr = pHandler->SetCurrentMediaType(m_bestNativeType);
hr = pAttributes->SetUINT32(MF_READWRITE_DISABLE_CONVERTERS, FALSE);
hr = pAttributes->SetUINT32(MF_SOURCE_READER_ENABLE_ADVANCED_VIDEO_PROCESSING, TRUE);
hr = MFCreateSourceReaderFromMediaSource(m_mediaSource, pAttributes, &m_reader);
Then we start reading frames SYNCHRONOUSLY in a separate thread using
m_reader->ReadSample()
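For reference, a typical synchronous read loop with IMFSourceReader looks roughly like this (an illustrative sketch, not the application's actual code; m_running stands for the stop flag mentioned below):

while (m_running) {
    DWORD streamIndex = 0, streamFlags = 0;
    LONGLONG timestamp = 0;
    CComPtr<IMFSample> sample;
    HRESULT hr = m_reader->ReadSample(
        MF_SOURCE_READER_FIRST_VIDEO_STREAM, // which stream to read
        0,                                   // no flags: blocking, synchronous call
        &streamIndex, &streamFlags, &timestamp, &sample);
    if (FAILED(hr) || (streamFlags & MF_SOURCE_READERF_ENDOFSTREAM))
        break;
    if (sample) {
        // ... convert/render the frame ...
    }
}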
When we need to stop the device or reconfigure it, we stop the thread (by setting a flag and letting the thread exit), then call the following:
hr = m_mediaSource->Stop();
m_mediaSource->Shutdown();
SafeRelease(&m_mediaSource);
SafeRelease(&m_reader);
The software can be out of a call. There, it captures the webcam video in VGA format and displays it on screen. In a call, it selects the best capture format depending on the negotiated call quality and restarts the capture.
The issue we are experiencing is the following: some cameras sometimes freeze in low-light conditions (low-fps output). It can happen right away at the beginning of the call or during the call.
When it freezes, one of two things happens (we are not sure which):
m_reader->ReadSample() fails repeatedly with the MF_E_OPERATION_CANCELLED error code
m_reader->ReadSample() returns frequently, producing more than 80 frames per second of the same frozen image
When we hang up, the device is reconfigured back to VGA capture and works fine.
Has anyone struggled with the same issue in Media Foundation?
You wrote that the web camera "freezes" — produces a low frame rate — while capturing in low-light conditions. The reason is that in automatic mode the camera's controller spends more time exposing the photo matrix; it improves image quality by increasing the frame duration. So this is a deliberate feature of the hardware. It is possible to switch this camera behavior from automatic to manual control of the parameter:
ResultCode::Result VideoCaptureDevice::setParametrs(CamParametrs parametrs)
{
    ResultCode::Result result = ResultCode::VIDEOCAPTUREDEVICE_SETPARAMETRS_ERROR;
    if (pLocalSource)
    {
        Parametr *pParametr = (Parametr *)(&parametrs);
        Parametr *pPrevParametr = (Parametr *)(&prevParametrs);
        CComPtrCustom<IAMVideoProcAmp> pProcAmp;
        HRESULT hr = pLocalSource->QueryInterface(IID_PPV_ARGS(&pProcAmp));
        if (SUCCEEDED(hr))
        {
            // The first 10 Parametr entries map onto the VideoProcAmp properties
            // (Brightness, Contrast, Hue, ...).
            for (unsigned int i = 0; i < 10; i++)
            {
                if (pPrevParametr[i].CurrentValue != pParametr[i].CurrentValue || pPrevParametr[i].Flag != pParametr[i].Flag)
                    hr = pProcAmp->Set(VideoProcAmp_Brightness + i, pParametr[i].CurrentValue, pParametr[i].Flag);
            }
        }
        else
        {
            result = ResultCode::VIDEOCAPTUREDEVICE_SETPARAMETRS_SETVIDEOPROCESSOR_ERROR;
            goto finish;
        }
        CComPtrCustom<IAMCameraControl> pProcControl;
        hr = pLocalSource->QueryInterface(IID_PPV_ARGS(&pProcControl));
        if (SUCCEEDED(hr))
        {
            // The next 7 entries map onto the CameraControl properties
            // (Pan, Tilt, Roll, Zoom, Exposure, Iris, Focus).
            for (unsigned int i = 0; i < 7; i++)
            {
                if (pPrevParametr[10 + i].CurrentValue != pParametr[10 + i].CurrentValue || pPrevParametr[10 + i].Flag != pParametr[10 + i].Flag)
                    hr = pProcControl->Set(CameraControl_Pan + i, pParametr[10 + i].CurrentValue, pParametr[10 + i].Flag);
            }
        }
        else
        {
            result = ResultCode::VIDEOCAPTUREDEVICE_SETPARAMETRS_SETVIDEOCONTROL_ERROR;
            goto finish;
        }
        result = ResultCode::OK;
        prevParametrs = parametrs;
    }
finish:
    if (result != ResultCode::OK)
        DebugPrintOut::getInstance().printOut(L"VIDEO CAPTURE DEVICE: Parametrs of video device cannot be set!!!\n");
    return result;
}
where:
struct Parametr
{
    long CurrentValue;
    long Min;
    long Max;
    long Step;
    long Default;
    long Flag;
    Parametr();
};

struct CamParametrs
{
    Parametr Brightness;
    Parametr Contrast;
    Parametr Hue;
    Parametr Saturation;
    Parametr Sharpness;
    Parametr Gamma;
    Parametr ColorEnable;
    Parametr WhiteBalance;
    Parametr BacklightCompensation;
    Parametr Gain;
    Parametr Pan;
    Parametr Tilt;
    Parametr Roll;
    Parametr Zoom;
    Parametr Exposure;
    Parametr Iris;
    Parametr Focus;
};
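For the exposure case specifically, the same idea can be expressed with the stock interfaces alone (my sketch, not code from the answer; it assumes the media source supports IAMCameraControl, which Media Foundation capture sources expose via QueryInterface):

// Sketch: force manual exposure on a Media Foundation camera source so the
// driver stops stretching frame times in low light.
void lockExposure(IMFMediaSource *mediaSource)
{
    IAMCameraControl *cameraControl = nullptr;
    if (SUCCEEDED(mediaSource->QueryInterface(IID_PPV_ARGS(&cameraControl))))
    {
        long value = 0, flags = 0;
        if (SUCCEEDED(cameraControl->Get(CameraControl_Exposure, &value, &flags)))
        {
            // Re-apply the current exposure value, but with the Manual flag set.
            cameraControl->Set(CameraControl_Exposure, value, CameraControl_Flags_Manual);
        }
        cameraControl->Release();
    }
}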
You can find more code on this site:
Capturing Live-video from Web-camera on Windows 7 and Windows 8
However, using IMFSourceReader may not be efficient. The Media Foundation model uses asynchronous interaction: after sending a request to the media source, the code must listen for a response from the media source carrying a new frame or other information. Directly calling m_reader->ReadSample() cannot be efficient — you have run into exactly that. m_reader->ReadSample() can be efficient when reading frames from a video file, where the delay is very low, but for a web camera I advise using topology-session binding, as in my code: Capturing Live-video from Web-camera on Windows 7 and Windows 8
Regards,
Evgeny Pereguda
The question description leaves the impression that you are doing things in a somewhat chaotic way, and the resulting freeze is not necessarily caused by Media Foundation or the camera.
Use of a media source and source reader is certainly the right way to access a camera, and it provides an efficient way to capture video, both synchronously and asynchronously.
However, your incomplete code snippets show that you create a media source, then a source reader, and then keep dealing with the media source directly. You are not supposed to do this. Once you have created a source reader, it manages the media source for you: you don't need the Stop and Shutdown calls. Calling those and other methods may confuse the source reader into incorrect behavior.
That is, either you deal with a media source directly, or you plug it into a Media Session or Source Reader and use that higher-level API.
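A sketch of what teardown looks like under that model (illustrative only; m_running and captureThread are assumed names for the stop flag and capture thread described in the question):

m_running = false;           // make the capture thread exit its ReadSample loop
captureThread.join();        // wait for any in-flight ReadSample to return
SafeRelease(&m_reader);      // the reader calls Shutdown on the media source itself
SafeRelease(&m_mediaSource); // drop our own reference; no Stop()/Shutdown() here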
Also note that if/when you experience a freeze, you will want to break in with a debugger and locate the threads that indicate the freeze position.

Exception in application when threading is used to save images simultaneously from multiple IP cameras

I am working on an application that connects to multiple IP cameras and displays their video feeds. I am able to get the video feeds, though they are quite laggy (I am working on a solution to remove the lag). The application has a button which, when clicked, takes 50 pictures from each connected IP camera. But the code I have implemented throws an exception when threading is used; without threading it works fine. Here is the code for the button event.
void CDialogBasedVideoDlg::OnBnClickedButtonTakePic()
{
    int nIndex = m_CamListBox.GetCurSel();
    CStaticEx *objStaticEx = (CStaticEx*)m_StaticArray.GetAt(nIndex);
    objStaticEx->StartThreadToCaptureUSBCam(); // threading implementation gives an exception
    //objStaticEx->CapturePicture(); // this function works fine (without threading)
    // TODO: Add your control notification handler code here
}
I have an overridden static control class that dynamically creates a picture control and displays the live video feed; threading is implemented in this class, where the images are saved. Here is the code for capturing images and the threading functions.
void CStaticEx::CapturePicture(void)
{
    CString csFileDir;
    CString csFileName;
    csFileDir.Format(DIR_USB_CAM_NAME, m_IpAddr);
    if (IsDirExist(csFileDir) == false) {
        CreateDirectory(csFileDir, NULL);
    }
    CString csStr = csFileDir;
    csStr += RANDOM_FILE_SEARCH;
    int nNoOfFile = CountFileNumInDir((char*)csStr.GetBuffer());
    csFileDir += DBL_SLASH;
    int i = 0;
    do {
        csFileName.Format(FILE_NAME, csFileDir, (m_nCamID + 1));
        CString csCount;
        csCount.Format(_T("%d"), (nNoOfFile + 1));
        csFileName += csCount;
        csFileName += JPG;
        m_pFrameImg = cvQueryFrame(m_pCamera); // <---- Exception occurs at this point
        if (m_pFrameImg) {
            cvSaveImage(csFileName, m_pFrameImg);
            i++;
            nNoOfFile++;
            csFileName = _T("");
        }
    } while (i < 50);
}
Threading Control function.
void CStaticEx::StartThreadToCaptureUSBCam()
{
    THREADSTRUCT *_param = new THREADSTRUCT;
    _param->_this = this;
    AfxBeginThread(StartThread, _param);
}

UINT CStaticEx::StartThread(LPVOID param)
{
    THREADSTRUCT *ts = (THREADSTRUCT*)param;
    //AfxMessageBox("Thread Started");
    ts->_this->CapturePicture();
    return 1;
}
The exception thrown is as follows:
Windows has triggered a breakpoint in DialogBasedVideo.exe.
This may be due to a corruption of the heap, which indicates a bug in DialogBasedVideo.exe or any of the DLLs it has loaded.
This may also be due to the user pressing F12 while DialogBasedVideo.exe has focus.
The output window may have more diagnostic information.
How do I get rid of this exception? All the experts out there, please help me. I am using VS2010, Windows 7, and OpenCV 2.4.6. Thanks in advance.

Is there a way to detect if a monitor is plugged in?

I have a custom application written in C++ that controls the resolution and other settings of a monitor connected to an embedded system. Sometimes the system is booted headless and run via VNC, but a monitor can be plugged in later (post-boot). If that happens, the monitor is fed no video until it is enabled. I have found that calling "displayswitch /clone" brings the monitor up, but I need to know when the monitor is connected. I have a timer that runs every 5 seconds and looks for the monitor, but I need an API call that can tell me whether the monitor is connected.
Here is a bit of pseudocode describing what I'm after (what is executed when the timer expires every 5 seconds):
if (monitor is connected)
{
    ShellExecute(NULL, L"open", L"displayswitch.exe", L"/clone", NULL, SW_SHOWDEFAULT);
}
else
{
    // Do nothing
}
I have tried GetSystemMetrics(SM_CMONITORS) to get the number of monitors, but it returns 1 whether or not the monitor is connected. Any other ideas?
Thanks!
Try the following code
BOOL IsDisplayConnected(int displayIndex = 0)
{
    DISPLAY_DEVICE device;
    device.cb = sizeof(DISPLAY_DEVICE);
    return EnumDisplayDevices(NULL, displayIndex, &device, 0);
}
This will return true if Windows identifies a display device with index (AKA identity) 0 (this is what the display control panel uses internally); otherwise it will return false. So by checking the first possible index (which I marked as the default argument), you can find out whether any display device is connected (or at least identified by Windows, which is essentially what you're looking for).
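Tied back to the 5-second timer from the question, usage could look like this (a sketch; tracking the previous state avoids re-running displayswitch on every tick):

// Sketch: called from the 5-second timer. Runs displayswitch only on the
// transition from "no display" to "display present".
void onTimerTick()
{
    static bool wasConnected = false;
    bool isConnected = IsDisplayConnected(0) != FALSE;
    if (isConnected && !wasConnected)
        ShellExecuteW(NULL, L"open", L"displayswitch.exe", L"/clone", NULL, SW_HIDE);
    wasConnected = isConnected;
}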
It seems that there is some kind of "default monitor" even if no real monitor is connected.
The function below works for me (tested on an Intel NUC and a Surface 5 tablet).
The idea is to get the device ID and check whether it contains the string "default_monitor".
bool hasMonitor()
{
    // Check if we have a monitor
    bool has = false;
    // Iterate over all displays and check if we have a valid one.
    // If the device ID contains the string "default_monitor", no monitor is attached.
    DISPLAY_DEVICE dd;
    dd.cb = sizeof(dd);
    int deviceIndex = 0;
    while (EnumDisplayDevices(0, deviceIndex, &dd, 0))
    {
        std::wstring deviceName = dd.DeviceName;
        int monitorIndex = 0;
        while (EnumDisplayDevices(deviceName.c_str(), monitorIndex, &dd, 0))
        {
            size_t len = _tcslen(dd.DeviceID);
            for (size_t i = 0; i < len; ++i)
                dd.DeviceID[i] = _totlower(dd.DeviceID[i]);
            has = has || (len > 10 && _tcsstr(dd.DeviceID, L"default_monitor") == nullptr);
            ++monitorIndex;
        }
        ++deviceIndex;
    }
    return has;
}