CoC not sending autocomplete request message to language server - c++

My nvim autocomplete window is not coming up when editing C++ code.
I've been trying to follow this debugging guideline but I haven't had much success. I'm using the Kythe language server configured as follows in coc-settings.json:
{
  "languageserver": {
    "kythe": {
      "command": "/full/path/omitted/kythe_languageserver",
      "filetypes": ["python", "go", "java", "cpp", "cc", "c++", "proto"],
      "trace.server": "verbose"
    }
  }
}
When opening a C++ file and running :CocList services, I can see that the language server has started:
languageserver.kythe [running] python, go, java, cpp, cc, c++, proto
And indeed in the :CocCommand workspace.showOutput, there are messages indicating a successful initialization:
[Trace - 4:45:05 PM] Received response 'initialize - (0)' in 264ms.
Result: {
"capabilities": {
"textDocumentSync": 1,
"hoverProvider": true,
"definitionProvider": true,
"referencesProvider": true
}
}
One thought: is there a capability missing here that is required for autocompletion? In any case, when I edit the file and, for example, type std:: expecting some form of autocompletion for that namespace, nothing happens, and the only messages sent to the language server seem to be as follows:
[Trace - 4:45:47 PM] Sending notification 'textDocument/didChange'.
Params: {
"textDocument": {
"uri": "[redacted]",
"version": 4
},
"contentChanges": [
{
"text": "[redacted]"
}
]
}
From my limited knowledge, I'm pretty sure that autocompletion requires a different message type to be sent to the language server, no? textDocument/didChange seems to be only for keeping the document state in sync.
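For reference, completion in LSP is driven by a separate textDocument/completion request, which a client only sends if the server advertised a completionProvider capability during initialization. The request looks roughly like this (uri and position values here are illustrative):

```json
{
  "method": "textDocument/completion",
  "params": {
    "textDocument": { "uri": "file:///path/to/file.cc" },
    "position": { "line": 12, "character": 9 },
    "context": { "triggerKind": 2, "triggerCharacter": ":" }
  }
}
```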
Edit: Full set of requested capabilities by nvim:
"capabilities": {
"workspace": {
"applyEdit": true,
"workspaceEdit": {
"documentChanges": true,
"resourceOperations": [
"create",
"rename",
"delete"
],
"failureHandling": "textOnlyTransactional"
},
"didChangeConfiguration": {
"dynamicRegistration": true
},
"didChangeWatchedFiles": {
"dynamicRegistration": true
},
"symbol": {
"dynamicRegistration": true,
"symbolKind": {
"valueSet": [
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26
]
},
"tagSupport": {
"valueSet": [
1
]
}
},
"executeCommand": {
"dynamicRegistration": true
},
"configuration": true,
"workspaceFolders": true
},
"textDocument": {
"publishDiagnostics": {
"relatedInformation": true,
"versionSupport": false,
"tagSupport": {
"valueSet": [
1,
2
]
}
},
"synchronization": {
"dynamicRegistration": true,
"willSave": true,
"willSaveWaitUntil": true,
"didSave": true
},
"completion": {
"dynamicRegistration": true,
"contextSupport": true,
"completionItem": {
"snippetSupport": true,
"commitCharactersSupport": true,
"documentationFormat": [
"markdown",
"plaintext"
],
"deprecatedSupport": true,
"preselectSupport": true
},
"completionItemKind": {
"valueSet": [
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25
]
}
},
"hover": {
"dynamicRegistration": true,
"contentFormat": [
"markdown",
"plaintext"
]
},
"signatureHelp": {
"dynamicRegistration": true,
"signatureInformation": {
"documentationFormat": [
"markdown",
"plaintext"
],
"parameterInformation": {
"labelOffsetSupport": true
}
}
},
"definition": {
"dynamicRegistration": true
},
"references": {
"dynamicRegistration": true
},
"documentHighlight": {
"dynamicRegistration": true
},
"documentSymbol": {
"dynamicRegistration": true,
"symbolKind": {
"valueSet": [
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26
]
},
"hierarchicalDocumentSymbolSupport": true,
"tagSupport": {
"valueSet": [
1
]
}
},
"codeAction": {
"dynamicRegistration": true,
"isPreferredSupport": true,
"codeActionLiteralSupport": {
"codeActionKind": {
"valueSet": [
"",
"quickfix",
"refactor",
"refactor.extract",
"refactor.inline",
"refactor.rewrite",
"source",
"source.organizeImports"
]
}
}
},
"codeLens": {
"dynamicRegistration": true
},
"formatting": {
"dynamicRegistration": true
},
"rangeFormatting": {
"dynamicRegistration": true
},
"onTypeFormatting": {
"dynamicRegistration": true
},
"rename": {
"dynamicRegistration": true,
"prepareSupport": true
},
"documentLink": {
"dynamicRegistration": true,
"tooltipSupport": true
},
"typeDefinition": {
"dynamicRegistration": true
},
"implementation": {
"dynamicRegistration": true
},
"declaration": {
"dynamicRegistration": true
},
"colorProvider": {
"dynamicRegistration": true
},
"foldingRange": {
"dynamicRegistration": true,
"rangeLimit": 5000,
"lineFoldingOnly": true
},
"selectionRange": {
"dynamicRegistration": true
}
},
"window": {
"workDoneProgress": true
}
},

The Kythe language server does not support autocompletion yet. Note that its initialize response advertises no completionProvider capability, which is why CoC never sends completion requests to it.
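For comparison, a server that does support completion would include a completionProvider entry in its initialize response, something like this (values illustrative):

```json
{
  "capabilities": {
    "textDocumentSync": 1,
    "completionProvider": {
      "resolveProvider": true,
      "triggerCharacters": [".", ":", ">"]
    },
    "hoverProvider": true,
    "definitionProvider": true,
    "referencesProvider": true
  }
}
```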


Django: complex order by and filter by from many relations query with nested models

What I want to achieve:
I want a list of
[
{
"location":"Loc 1",
"session":[
{
"start":"2021-01-01",
"counts":600,
"details":[
{
"id":13,
"length_max":21,
"length_min":15,
"length_avg":16,
"length_std":19,
"is_active":false,
"type":"dog"
}
]
}
]
},
{
"location":"Loc3",
"session":[
{
"start":"2021-01-01",
"counts":500,
"details":[
{
"id":15,
"length_max":19,
"length_min":16,
"length_avg":16,
"length_std":19,
"is_active":false,
"type":"dog"
}
]
}
]
}
]
My Viewset is
class SessionFilter(FilterSet):
    type_filter = filters.CharFilter(method="filter_by_type")

    def filter_by_type(self, queryset, name, value):
        queryset = queryset.filter(session__details__type=value).distinct()
        return queryset

class SessionModelViewSet(ModelViewSet):
    queryset = Session.objects.all()
    serializer_class = SessionSerializers
    filter_backends = (DjangoFilterBackend,)
    filter_class = SessionFilter
I am trying to filter based on type, but am not able to fetch what I need.
The output I am getting is
[
{
"location":"Loc 1",
"session":[
{
"start":"2021-01-01",
"counts":600,
"details":[
{
"id":13,
"length_max":21,
"length_min":15,
"length_avg":16,
"length_std":19,
"is_active":false,
"type":"dog"
}
]
},
{
"start":"2021-01-01",
"counts":600,
"details":[
{
"id":7,
"length_max":39,
"length_min":25,
"length_avg":25,
"length_std":27,
"is_active":true,
"type":"cat"
},
{
"id":19,
"length_max":39,
"length_min":25,
"length_avg":25,
"length_std":27,
"is_active":false,
"type":"cat"
}
]
}
]
},
{
"location":"Loc3",
"session":[
{
"start":"2021-01-01",
"counts":500,
"details":[
{
"id":15,
"length_max":19,
"length_min":16,
"length_avg":16,
"length_std":19,
"is_active":false,
"type":"dog"
}
]
},
{
"start":"2021-01-01",
"counts":500,
"details":[
{
"id":9,
"length_max":39,
"length_min":25,
"length_avg":25,
"length_std":27,
"is_active":true,
"type":"cat"
},
{
"id":21,
"length_max":39,
"length_min":25,
"length_avg":25,
"length_std":27,
"is_active":false,
"type":"cat"
}
]
}
]
}
]
How can I customise the filter or change the nested query to fetch the desired output, and how can I order by id? Up to the second nesting level I am able to fetch the data (maybe that part is not complicated), but at the third level I am facing this issue.
models.py
class Place(models.Model):
    id = models.IntegerField(primary_key=True)
    location = models.CharField(max_length=100)

    class Meta:
        db_table = 'place'
        managed = False

class Session(models.Model):
    id = models.IntegerField(primary_key=True)
    place = models.ForeignKey(Place, related_name='session', on_delete=models.CASCADE, null=True)
    start = models.DateField(auto_now=True)
    counts = models.IntegerField()

    class Meta:
        db_table = 'session'
        managed = False

class Animal(models.Model):
    id = models.IntegerField(primary_key=True)
    sess = models.ForeignKey(Session, related_name='details', on_delete=models.CASCADE, null=True)
    type = models.CharField(max_length=100)
    is_active = models.BooleanField()
    length_max = models.IntegerField()
    length_min = models.IntegerField()
    length_avg = models.IntegerField()
    length_std = models.IntegerField()

    class Meta:
        db_table = 'animal'
        managed = False
serializer.py
class AnimalSerializers(FlexFieldsModelSerializer):
    class Meta:
        model = Animal
        fields = ["id", "length_max", "length_min", "length_avg", "length_std", "is_active", "type"]

class SessionSerializers(FlexFieldsModelSerializer):
    class Meta:
        model = Session
        fields = ["start", "counts", "details"]
        expandable_fields = {
            'details': (AnimalSerializers, {'many': True})
        }

class PlaceSerializers(FlexFieldsModelSerializer):
    class Meta:
        model = Place
        fields = ["location", "session"]
        expandable_fields = {
            'session': (SessionSerializers, {'many': True})
        }
After the prefetch, I am getting the JSON in this format:
{
"location": "Loc 2",
"session": [
{
"start": "2021-01-01",
"counts": 300,
"details": [
{
"id": 14,
"length_max": 22,
"length_min": 16,
"length_avg": 16,
"length_std": 19,
"is_active": true,
"type": "dog"
}
]
},
{
"start": "2021-01-01",
"counts": 300,
"details": []
}
]
}
Is there a way to eliminate the empty session and fetch only the one which is required?
After using the filter method below, the obtained JSON format is:
.prefetch_related(
    Prefetch('session', queryset=Session.objects.filter(details__type=value)),
    Prefetch('session__details', queryset=Details.objects.all().order_by('id')),
)
[
{
"location": "Loc 3",
"session": [
{
"start": "2021-01-01",
"counts": 500,
"details": [
{
"id": 15,
"length_max": 19,
"length_min": 16,
"length_avg": 16,
"length_std": 19,
"is_active": false,
"type": "dog"
}
]
}
]
},
{
"location": "Loc 4",
"session": [
{
"start": "2021-01-02",
"counts": 800,
"details": [
{
"id": 1,
"length_max": 24,
"length_min": 18,
"length_avg": 25,
"length_std": 27,
"is_active": false,
"type": "cat"
},
{
"id": 4,
"length_max": 24,
"length_min": 18,
"length_avg": 25,
"length_std": 27,
"is_active": false,
"type": "cat"
},
{
"id": 16,
"length_max": 24,
"length_min": 18,
"length_avg": 16,
"length_std": 19,
"is_active": false,
"type": "dog"
}
]
},
{
"start": "2021-01-02",
"counts": 800,
"details": [
{
"id": 10,
"length_max": 29,
"length_min": 16,
"length_avg": 16,
"length_std": 19,
"is_active": false,
"type": "dog"
},
{
"id": 22,
"length_max": 29,
"length_min": 16,
"length_avg": 16,
"length_std": 19,
"is_active": false,
"type": "dog"
}
]
}
]
},
Output after updating the code
[
{
"location": "Loc 1",
"session": [
{
"start": "2021-01-01",
"counts": 600,
"details": [
{
"id": 13,
"length_max": 21,
"length_min": 15,
"length_avg": 16,
"length_std": 19,
"is_active": false,
"type": "dog"
}
]
}
]
},
{
"location": "Loc 2",
"session": [
{
"start": "2021-01-01",
"counts": 300,
"details": [
{
"id": 14,
"length_max": 22,
"length_min": 16,
"length_avg": 16,
"length_std": 19,
"is_active": true,
"type": "dog"
}
]
}
]
},
{
"location": "Loc 3",
"session": [
{
"start": "2021-01-01",
"counts": 500,
"details": [
{
"id": 15,
"length_max": 19,
"length_min": 16,
"length_avg": 16,
"length_std": 19,
"is_active": false,
"type": "dog"
}
]
}
]
},
{
"location": "Loc 4",
"session": [
{
"start": "2021-01-02",
"counts": 800,
"details": [
{
"id": 16,
"length_max": 24,
"length_min": 18,
"length_avg": 16,
"length_std": 19,
"is_active": false,
"type": "dog"
}
]
},
{
"start": "2021-01-02",
"counts": 800,
"details": [
{
"id": 10,
"length_max": 29,
"length_min": 16,
"length_avg": 16,
"length_std": 19,
"is_active": false,
"type": "dog"
},
{
"id": 22,
"length_max": 29,
"length_min": 16,
"length_avg": 16,
"length_std": 19,
"is_active": false,
"type": "dog"
}
]
}
]
},
{
"location": "Loc 5",
"session": [
{
"start": "2021-01-02",
"counts": 400,
"details": [
{
"id": 17,
"length_max": 28,
"length_min": 19,
"length_avg": 16,
"length_std": 19,
"is_active": true,
"type": "dog"
}
]
},
{
"start": "2021-01-02",
"counts": 400,
"details": [
{
"id": 11,
"length_max": 38,
"length_min": 28,
"length_avg": 16,
"length_std": 19,
"is_active": false,
"type": "dog"
},
{
"id": 23,
"length_max": 38,
"length_min": 28,
"length_avg": 16,
"length_std": 19,
"is_active": true,
"type": "dog"
}
]
}
]
},
{
"location": "Loc 6",
"session": [
{
"start": "2021-01-02",
"counts": 450,
"details": [
{
"id": 18,
"length_max": 35,
"length_min": 26,
"length_avg": 16,
"length_std": 19,
"is_active": false,
"type": "dog"
}
]
},
{
"start": "2021-01-02",
"counts": 450,
"details": [
{
"id": 12,
"length_max": 15,
"length_min": 13,
"length_avg": 16,
"length_std": 19,
"is_active": true,
"type": "dog"
},
{
"id": 24,
"length_max": 15,
"length_min": 13,
"length_avg": 16,
"length_std": 19,
"is_active": false,
"type": "dog"
}
]
}
]
}
]
Try with this:
.prefetch_related(
    Prefetch('session', queryset=Session.objects.filter(details__type=value)),
    Prefetch('session__details', queryset=Details.objects.filter(type=value).order_by('id')),
)
This will remove all sessions which don't have details that have the value you are looking for, and will sort all the details within those sessions by their ids.
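As a sanity check, here is a minimal pure-Python sketch of what those two Prefetch clauses accomplish on the serialized output: each session keeps only the details of the requested type (sorted by id), and sessions left with no matching details are dropped. The function name and the in-memory filtering are illustrative only; the real query does this work at the database level.

```python
def filter_places(places, wanted_type):
    """Mimic the Prefetch-based filtering on already-serialized data."""
    result = []
    for place in places:
        sessions = []
        for session in place["session"]:
            # Keep only details of the requested type, ordered by id.
            details = sorted(
                (d for d in session["details"] if d["type"] == wanted_type),
                key=lambda d: d["id"],
            )
            if details:  # drop sessions with no matching details
                sessions.append({**session, "details": details})
        if sessions:  # drop places with no matching sessions
            result.append({**place, "session": sessions})
    return result
```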

icCube gauge with multiple band colors

I'm trying to make a gauge and want to have two band colors (wrong/red, good/green). I have an example of the amCharts gauge in their online chart maker https://live.amcharts.com/new/edit/, but I'm not able to get this working in icCube.
Currently we have icCube reporting version 7.0.0 (5549).
This is my chart JSON:
{
"box": {
"id": "wb695",
"widgetAdapterId": "w28",
"rectangle": {
"left": 1510,
"top": 340,
"right": 1910,
"bottom": 640
},
"zIndex": 901
},
"data": {
"mode": "MDX",
"schemaSettings": {
"cubeName": null,
"schemaName": null
},
"options": {
"WIZARD": {
"measures": [],
"rows": [],
"rowsNonEmpty": false,
"columns": [],
"columnsNonEmpty": false,
"filter": []
},
"MDX": {
"statement": "with \n member [Measures].[Measure1] AS 0.9\n member [Measures].[Measure2] AS 0.1\nSELECT\n\n{[Measures].[Measure1], [Measures].[Measure2]} on 0\n\nFROM [cube]"
},
"DATASOURCE": {}
},
"ic3_name": "mdx Query-5",
"ic3_uid": "m17"
},
"data-render": {
"chartType": {
"label": "Gauge",
"proto": {
"chartPrototype": {
"type": "gauge",
"arrows": [
{
"id": "GaugeArrow-1"
}
],
"axes": [
{
"id": "GaugeAxis-1"
}
]
},
"graphPrototype": {},
"dataProviderType": 3
},
"id": "gauge-chart"
},
"graphsConfiguration": [
{
"graph": {}
}
],
"valueAxes": [],
"trendLinesGuides": {},
"configuredQuadrants": {},
"advanced": {
"titles": [],
"faceAlpha": 0,
"faceBorderAlpha": 0
},
"balloon": {
"offsetX": 8
},
"chartOptions": {
"axes": [
{
"axisAlpha": 0.25,
"bottomText": "SLA",
"bottomTextColor": "#2A3F56",
"tickAlpha": 0.25,
"bandOutlineAlpha": 1,
"bandAlpha": 1,
"bandOutlineThickness": 95,
"bandOutlineColor": "#0095BC",
"id": 1
}
],
"bands": [
{
"alpha": 0.8,
"color": "#B53728",
"endValue": 0.6,
"startValue": 0,
"id": "GaugeBand-1"
},
{
"alpha": 0.6,
"color": "#435035",
"endValue": 1,
"startValue": 0.6,
"innerRadius": 0.69,
"id": "GaugeBand-2"
}
]
},
"ic3Data": {
"chartTypeConfig": {
"pie-chart-donut": {
"chartType": {
"label": "Donut",
"proto": {
"chartPrototype": {
"type": "donut",
"pullOutRadius": 0,
"startDuration": 0,
"legend": {
"enabled": false,
"align": "center",
"markerType": "circle"
},
"innerRadius": "60%"
},
"dataProviderType": 1
},
"id": "pie-chart-donut"
},
"graphsConfiguration": [
{}
],
"valueAxes": [],
"trendLinesGuides": {},
"configuredQuadrants": {},
"advanced": {
"titles": []
},
"balloon": {
"offsetX": 8
},
"chartOptions": {
"showZeroSlices": false,
"labelsEnabled": false,
"innerRadius": "60%",
"startAngle": 270,
"radius": "",
"fontSize": 20,
"color": "#0095BC",
"outlineAlpha": 0.25,
"tapToActivate": false
}
}
}
},
"axes": [
{
"startValue": 0,
"endValue": 1,
"startAngle": -90,
"endAngle": 90
}
],
"valueFormatting": ""
},
"navigation": {
"menuVisibility": {
"back": true,
"axisXChange": "All",
"axisYChange": "All",
"filter": "All",
"reset": true,
"widget": true,
"others": "All"
},
"selectionMode": "disabled"
},
"events": {},
"filtering": {},
"hooks": {}
}
Sorry for the late answer. Out of the box it's not possible, but you can use hooks to change the JavaScript options sent to amCharts.
JS / On Widget Options:
function(context, options, $box) {
  const bands = [
    {
      "color": "#00CC00",
      "endValue": 300000,
      "id": "GaugeBand-1",
      "startValue": 0
    },
    {
      "color": "#ffac29",
      "endValue": 600000,
      "id": "GaugeBand-2",
      "startValue": 300000
    },
    {
      "color": "#ea3838",
      "endValue": 900000,
      "id": "GaugeBand-3",
      "innerRadius": "95%",
      "startValue": 600000
    }
  ];
  options.axes[0]["bands"] = bands;
  return options;
}
This should work.

Using a MongoDB case-insensitive regex with a case-insensitive index

Is the Mongo regex ignoring my index? I have a case-insensitive index, but by the look of things my regex search recognizes it and then effectively ignores it.
db.getCollection("myCol").find({ value: /^mysearchVal/i }).explain(...)
I have 95,708 docs total.
output:
{
"queryPlanner": {
"plannerVersion": 1,
"namespace": "myDb.myCol",
"indexFilterSet": false,
"parsedQuery": {
"Value": {
"$regex": "^mysearchVal",
"$options": "i"
}
},
"winningPlan": {
"stage": "FETCH",
"filter": {
"Value": {
"$regex": "^mysearchVal",
"$options": "i"
}
},
"inputStage": {
"stage": "IXSCAN",
"keyPattern": {
"Value": 1
},
"indexName": "value_case_insensitive_and_unique",
"collation": {
"locale": "en",
"caseLevel": false,
"caseFirst": "off",
"strength": 2,
"numericOrdering": false,
"alternate": "non-ignorable",
"maxVariable": "punct",
"normalization": false,
"backwards": false,
"version": "57.1"
},
"isMultiKey": false,
"multiKeyPaths": {
"Value": []
},
"isUnique": true,
"isSparse": false,
"isPartial": false,
"indexVersion": 2,
"direction": "forward",
"indexBounds": {
"Value": [
"[\"\", {})",
"[/^mysearchVal/i, /^mysearchVal/i]"
]
}
}
},
"rejectedPlans": []
},
"executionStats": {
"executionSuccess": true,
"nReturned": 1,
"executionTimeMillis": 1447,
"totalKeysExamined": 95708,
"totalDocsExamined": 95708,
"executionStages": {
"stage": "FETCH",
"filter": {
"Value": {
"$regex": "^mysearchVal",
"$options": "i"
}
},
"nReturned": 1,
"executionTimeMillisEstimate": 1270,
"works": 95709,
"advanced": 1,
"needTime": 95707,
"needYield": 0,
"saveState": 785,
"restoreState": 785,
"isEOF": 1,
"invalidates": 0,
"docsExamined": 95708,
"alreadyHasObj": 0,
"inputStage": {
"stage": "IXSCAN",
"nReturned": 95708,
"executionTimeMillisEstimate": 596,
"works": 95709,
"advanced": 95708,
"needTime": 0,
"needYield": 0,
"saveState": 785,
"restoreState": 785,
"isEOF": 1,
"invalidates": 0,
"keyPattern": {
"Value": 1
},
"indexName": "value_case_insensitive_and_unique",
"collation": {
"locale": "en",
"caseLevel": false,
"caseFirst": "off",
"strength": 2,
"numericOrdering": false,
"alternate": "non-ignorable",
"maxVariable": "punct",
"normalization": false,
"backwards": false,
"version": "57.1"
},
"isMultiKey": false,
"multiKeyPaths": {
"Value": []
},
"isUnique": true,
"isSparse": false,
"isPartial": false,
"indexVersion": 2,
"direction": "forward",
"indexBounds": {
"Value": [
"[\"\", {})",
"[/^mysearchVal/i, /^mysearchVal/i]"
]
},
"keysExamined": 95708,
"seeks": 1,
"dupsTested": 0,
"dupsDropped": 0,
"seenInvalidated": 0
}
},
"allPlansExecution": []
},
"ok": 1.0
}
The output shows 95,708 keys and docs examined, but only 1 doc returned. Really? Did the index apply in this case, or am I missing a point or two?
Case insensitive regular expression queries generally cannot use indexes effectively. The $regex implementation is not collation-aware and is unable to utilize case-insensitive indexes.
https://docs.mongodb.com/manual/reference/operator/query/regex/#index-use
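A rough pure-Python illustration of that limitation: $regex can only use the index's plain binary ordering, and under that ordering the case variants matching a case-insensitive prefix are not adjacent, so there is no single contiguous index range to scan. The sample words here are made up.

```python
import re

# Keys as a binary-ordered (case-sensitive) index would store them:
# uppercase letters sort before all lowercase letters.
keys = sorted(["APPLE", "Apple", "apple", "Banana", "banana", "zebra"])

pattern = re.compile(r"^apple", re.IGNORECASE)
matching = [i for i, key in enumerate(keys) if pattern.match(key)]

# The matching positions are scattered rather than one contiguous range,
# so the scan cannot be bounded and every key must be examined.
contiguous = matching == list(range(matching[0], matching[-1] + 1))
```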

Google Line Chart set value for no data to 0

I want to integrate a Google line chart on my Raspberry Pi terminal to show some statistics about my coffee consumption. If my JSON has no value for a date, the line chart should set the value to 0. At the moment, dates with no values take on the value of the day before. Any ideas?
I have used this configuration:
let options = {
  hAxis: {
    format: 'd.M.yy',
    gridlines: {count: 15},
  },
  vAxis: {
    title: 'Cups of Coffee',
  },
  colors: ['#34495e'],
  interpolateNulls: true
};
Dates with no value are not present in my JSON. For example, there is no entry for the date 3.6.2017. Here is the JSON:
[{
"_id": {
"year": 2017,
"month": 6,
"day": 9,
"action": "Coffee made"
},
"createdAt": "2017-06-09T06:41:50.904Z",
"count": 1
},
{
"_id": {
"year": 2017,
"month": 6,
"day": 8,
"action": "Coffee made"
},
"createdAt": "2017-06-08T05:44:04.081Z",
"count": 1
},
{
"_id": {
"year": 2017,
"month": 6,
"day": 7,
"action": "Coffee made"
},
"createdAt": "2017-06-07T06:10:01.713Z",
"count": 4
},
{
"_id": {
"year": 2017,
"month": 6,
"day": 6,
"action": "Coffee made"
},
"createdAt": "2017-06-06T05:52:09.775Z",
"count": 2
},
{
"_id": {
"year": 2017,
"month": 6,
"day": 2,
"action": "Coffee made"
},
"createdAt": "2017-06-02T06:03:47.243Z",
"count": 1
},
{
"_id": {
"year": 2017,
"month": 6,
"day": 1,
"action": "Coffee made"
},
"createdAt": "2017-06-01T05:37:31.399Z",
"count": 1
},
{
"_id": {
"year": 2017,
"month": 5,
"day": 31,
"action": "Coffee made"
},
"createdAt": "2017-05-31T05:18:49.220Z",
"count": 1
}
]
Current line chart output (the values from 3 June to 5 June should be 0)
You just need to add a row for the missing dates.
Use the data table method getFilteredRows to check the data for a certain day; see the following working snippet.
The JSON is loaded, then, starting with the min date in the data and ending with the current date, each day is checked for data.
If no rows are found, one is added with a value of 0.
google.charts.load('current', {
callback: function () {
drawChart();
window.addEventListener('resize', drawChart, false);
},
packages:['corechart', 'table']
});
function drawChart() {
var jsonData = [{
"_id": {
"year": 2017,
"month": 6,
"day": 9,
"action": "Coffee made"
},
"createdAt": "2017-06-09T06:41:50.904Z",
"count": 1
},
{
"_id": {
"year": 2017,
"month": 6,
"day": 8,
"action": "Coffee made"
},
"createdAt": "2017-06-08T05:44:04.081Z",
"count": 1
},
{
"_id": {
"year": 2017,
"month": 6,
"day": 7,
"action": "Coffee made"
},
"createdAt": "2017-06-07T06:10:01.713Z",
"count": 4
},
{
"_id": {
"year": 2017,
"month": 6,
"day": 6,
"action": "Coffee made"
},
"createdAt": "2017-06-06T05:52:09.775Z",
"count": 2
},
{
"_id": {
"year": 2017,
"month": 6,
"day": 2,
"action": "Coffee made"
},
"createdAt": "2017-06-02T06:03:47.243Z",
"count": 1
},
{
"_id": {
"year": 2017,
"month": 6,
"day": 1,
"action": "Coffee made"
},
"createdAt": "2017-06-01T05:37:31.399Z",
"count": 1
},
{
"_id": {
"year": 2017,
"month": 5,
"day": 31,
"action": "Coffee made"
},
"createdAt": "2017-05-31T05:18:49.220Z",
"count": 1
}
];
var datePattern = 'd.M.yy';
var formatDate = new google.visualization.DateFormat({
pattern: datePattern
});
var dataTable = new google.visualization.DataTable({
"cols": [
{"label": "Date", "type": "date"},
{"label": "Cups of Coffee", "type":"number"}
]
});
jsonData.forEach(function (row) {
dataTable.addRow([
new Date(row.createdAt),
row.count
]);
});
var startDate = dataTable.getColumnRange(0).min;
var endDate = new Date();
var oneDay = (1000 * 60 * 60 * 24);
for (var i = startDate.getTime(); i < endDate.getTime(); i = i + oneDay) {
var coffeeData = dataTable.getFilteredRows([{
column: 0,
test: function (value, row, column, table) {
var coffeeDate = formatDate.formatValue(table.getValue(row, column));
var testDate = formatDate.formatValue(new Date(i));
return (coffeeDate === testDate);
}
}]);
if (coffeeData.length === 0) {
dataTable.addRow([
new Date(i),
0
]);
}
}
dataTable.sort({column: 0});
var chartLine = new google.visualization.ChartWrapper({
chartType: 'LineChart',
containerId: 'chart',
dataTable: dataTable,
options: {
theme: 'material',
legend: {
position: 'none',
},
chartArea: {
top: 12,
right: 12,
bottom: 48,
left: 48,
height: '100%',
width: '100%'
},
colors: ['#34495e'],
hAxis: {
format: datePattern,
gridlines: {
count: 15
},
},
pointSize: 4,
vAxis: {
title: 'Cups of Coffee',
}
}
});
chartLine.draw();
}
<script src="https://www.gstatic.com/charts/loader.js"></script>
<div id="chart"></div>

AWS DMS not giving 100% migration

Hi all, we are migrating our database from on-premises to Amazon Aurora. Our database size is around 136 GB, and a few tables have millions of records each. However, after the full load completes, only approximately 200,000 to 300,000 rows out of those millions get migrated. We don't know where we are failing, since we are new to DMS. Does anyone know how we can migrate the exact count of rows?
Migration type: full load
Here are our AWS DMS task settings
{
"TargetMetadata": {
"TargetSchema": "",
"SupportLobs": true,
"FullLobMode": true,
"LobChunkSize": 64,
"LimitedSizeLobMode": false,
"LobMaxSize": 0,
"LoadMaxFileSize": 0,
"ParallelLoadThreads": 0,
"BatchApplyEnabled": false
},
"FullLoadSettings": {
"FullLoadEnabled": true,
"ApplyChangesEnabled": false,
"TargetTablePrepMode": "TRUNCATE_BEFORE_LOAD",
"CreatePkAfterFullLoad": false,
"StopTaskCachedChangesApplied": false,
"StopTaskCachedChangesNotApplied": false,
"ResumeEnabled": false,
"ResumeMinTableSize": 100000,
"ResumeOnlyClusteredPKTables": true,
"MaxFullLoadSubTasks": 15,
"TransactionConsistencyTimeout": 600,
"CommitRate": 10000
},
"Logging": {
"EnableLogging": true,
"LogComponents": [
{
"Id": "SOURCE_UNLOAD",
"Severity": "LOGGER_SEVERITY_DEFAULT"
},
{
"Id": "SOURCE_CAPTURE",
"Severity": "LOGGER_SEVERITY_DEFAULT"
},
{
"Id": "TARGET_LOAD",
"Severity": "LOGGER_SEVERITY_DEFAULT"
},
{
"Id": "TARGET_APPLY",
"Severity": "LOGGER_SEVERITY_DEFAULT"
},
{
"Id": "TASK_MANAGER",
"Severity": "LOGGER_SEVERITY_DEFAULT"
}
],
"CloudWatchLogGroup": "dms-tasks-krishna-smartdata",
"CloudWatchLogStream": "dms-task-UERQWLR6AYHYIEKMR3HN2VL7T4"
},
"ControlTablesSettings": {
"historyTimeslotInMinutes": 5,
"ControlSchema": "",
"HistoryTimeslotInMinutes": 5,
"HistoryTableEnabled": true,
"SuspendedTablesTableEnabled": true,
"StatusTableEnabled": true
},
"StreamBufferSettings": {
"StreamBufferCount": 3,
"StreamBufferSizeInMB": 8,
"CtrlStreamBufferSizeInMB": 5
},
"ChangeProcessingDdlHandlingPolicy": {
"HandleSourceTableDropped": true,
"HandleSourceTableTruncated": true,
"HandleSourceTableAltered": true
},
"ErrorBehavior": {
"DataErrorPolicy": "LOG_ERROR",
"DataTruncationErrorPolicy": "LOG_ERROR",
"DataErrorEscalationPolicy": "SUSPEND_TABLE",
"DataErrorEscalationCount": 0,
"TableErrorPolicy": "SUSPEND_TABLE",
"TableErrorEscalationPolicy": "STOP_TASK",
"TableErrorEscalationCount": 0,
"RecoverableErrorCount": -1,
"RecoverableErrorInterval": 5,
"RecoverableErrorThrottling": true,
"RecoverableErrorThrottlingMax": 1800,
"ApplyErrorDeletePolicy": "IGNORE_RECORD",
"ApplyErrorInsertPolicy": "LOG_ERROR",
"ApplyErrorUpdatePolicy": "LOG_ERROR",
"ApplyErrorEscalationPolicy": "LOG_ERROR",
"ApplyErrorEscalationCount": 0,
"FullLoadIgnoreConflicts": true
},
"ChangeProcessingTuning": {
"BatchApplyPreserveTransaction": true,
"BatchApplyTimeoutMin": 1,
"BatchApplyTimeoutMax": 30,
"BatchApplyMemoryLimit": 500,
"BatchSplitSize": 0,
"MinTransactionSize": 1000,
"CommitTimeout": 1,
"MemoryLimitTotal": 1024,
"MemoryKeepTime": 60,
"StatementCacheSize": 50
}
}
Mapping Method:
{
"rules": [
{
"rule-type": "selection",
"rule-id": "1",
"rule-name": "1",
"object-locator": {
"schema-name": "dbo",
"table-name": "%"
},
"rule-action": "include"
},
{
"rule-type": "transformation",
"rule-id": "2",
"rule-name": "2",
"rule-target": "schema",
"object-locator": {
"schema-name": "dbo"
},
"rule-action": "rename",
"value": "smartdata_int"
}
]
}
You should have the option of setting up CloudWatch logs for each DMS task. Have you inspected the logs for this task? Do you have varchar/text columns larger than 32 KB? These will be truncated when migrating data into a target like Redshift, so be aware that this will count towards your error count.
The first thing to do is to increase the log level:
"Logging": {
"EnableLogging": true,
"LogComponents": [{
"Id": "SOURCE_UNLOAD",
"Severity": "LOGGER_SEVERITY_DETAILED_DEBUG"
},{
"Id": "SOURCE_CAPTURE",
"Severity": "LOGGER_SEVERITY_DETAILED_DEBUG"
},{
"Id": "TARGET_LOAD",
"Severity": "LOGGER_SEVERITY_DETAILED_DEBUG"
},{
"Id": "TARGET_APPLY",
"Severity": "LOGGER_SEVERITY_DETAILED_DEBUG"
},{
"Id": "TASK_MANAGER",
"Severity": "LOGGER_SEVERITY_DETAILED_DEBUG"
}]
},
Then you will be able to get details about the errors occurring.
Turn on validation:
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Validating.html
This will slow the migration down, so you could also look at splitting the work into multiple tasks and running them on multiple replication instances. Expand rule 1 into multiple rules: rather than '%', add conditions that each match a subset of the tables.
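For example, one task's table mapping could select only a subset of tables by pattern instead of '%' (the pattern below is a placeholder; split along whatever grouping balances your table sizes), with the other tasks covering the remaining patterns:

```json
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "1",
      "object-locator": {
        "schema-name": "dbo",
        "table-name": "a%"
      },
      "rule-action": "include"
    }
  ]
}
```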
You might also try a different replication engine version; 3.1.1 has just been released, though at the time of writing there are no release notes for it.
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReleaseNotes.html