SaltStack: Getting a Minion Up and Running on EC2

I am working through the SaltStack walkthrough to set up Salt on my EC2 cluster. I just edited /etc/salt/minion and added the public DNS of my salt master.
master: ec2-54-201-153-192.us-west-2.compute.amazonaws.com
Then I restarted the minion. Running it in debug mode produced the following output:
$ sudo salt-minion -l debug
[DEBUG ] Reading configuration from /etc/salt/minion
[INFO ] Using cached minion ID: localhost.localdomain
[DEBUG ] loading log_handlers in ['/var/cache/salt/minion/extmods/log_handlers', '/usr/lib/python2.6/site-packages/salt/log/handlers']
[DEBUG ] Skipping /var/cache/salt/minion/extmods/log_handlers, it is not a directory
[DEBUG ] None of the required configuration sections, 'logstash_udp_handler' and 'logstash_zmq_handler', were found the in the configuration. Not loading the Logstash logging handlers module.
[DEBUG ] Configuration file path: /etc/salt/minion
[INFO ] Setting up the Salt Minion "localhost.localdomain"
[DEBUG ] Created pidfile: /var/run/salt-minion.pid
[DEBUG ] Chowned pidfile: /var/run/salt-minion.pid to user: root
[DEBUG ] Reading configuration from /etc/salt/minion
[DEBUG ] loading grain in ['/var/cache/salt/minion/extmods/grains', '/usr/lib/python2.6/site-packages/salt/grains']
[DEBUG ] Skipping /var/cache/salt/minion/extmods/grains, it is not a directory
[DEBUG ] Attempting to authenticate with the Salt Master at 172.31.21.27
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
Sure enough, 172.31.21.27 is the private IP of the master. So far this looks OK. According to the walkthrough, the next step is to accept the minion's key on the master:
"Now that the minion is started it will generate cryptographic keys and attempt to
connect to the master. The next step is to venture back to the master server and
accept the new minion's public key."
However, when I go to the master node and look for new keys I don't see any pending requests.
$ sudo salt-key -L
Accepted Keys:
Unaccepted Keys:
Rejected Keys:
And the ping test does not see the minion either:
$ sudo salt '*' test.ping
This is where I'm stuck. What should I do next to get up and running?

Turn off iptables and run salt-key -L to check whether the key shows up. If it does, you need to open ports 4505 and 4506 on the master so the minion can connect to it. You could run lokkit -p tcp:4505 -p tcp:4506 to open these ports.
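If lokkit is not available on your system, the equivalent rules can be added with plain iptables instead; a minimal sketch (persisting the rules is distribution-specific):
# on the master: allow inbound ZeroMQ traffic from the minions
iptables -I INPUT -p tcp --dport 4505 -j ACCEPT
iptables -I INPUT -p tcp --dport 4506 -j ACCEPT
service iptables save   # on older RHEL/CentOS-style systems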

You likely need to add rules for 4505/4506 between the salt master's and minions' security groups. The salt master needs these ports open to communicate with the minions.
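The host firewall aside, the security group attached to the master also has to allow inbound TCP 4505-4506 from the minions. A minimal sketch with the AWS CLI, where sg-master and sg-minion are hypothetical placeholders for your actual security group IDs:
# allow instances in sg-minion to reach the master's ZeroMQ ports
aws ec2 authorize-security-group-ingress \
    --group-id sg-master \
    --protocol tcp \
    --port 4505-4506 \
    --source-group sg-minion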

Related

What does "Error" mean in Systems Manager Run Command (Document: AmazonCloudWatch-ManageAgent)?

I basically followed this web page. https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Agent-on-EC2-Instance-fleet.html
Steps
Launch an EC2 instance (Amazon Linux 2) with an IAM role (Permissions: CloudWatchAgentServerRole, AmazonSSMManagedInstanceCore).
See "Download the CloudWatch agent package" section in the documentation and run "AWS-ConfigureAWSPackage".
Go to Systems Manager Parameter Store and create a parameter.
Name: AmazonCloudWatch-linux
Parameter: see below
{
  "metrics": {
    "append_dimensions": {
      "ImageId": "${!aws:ImageId}",
      "InstanceId": "${!aws:InstanceId}",
      "InstanceType": "${!aws:InstanceType}"
    },
    "metrics_collected": {
      "mem": {
        "measurement": [
          "mem_used_percent"
        ],
        "metrics_collection_interval": 60
      }
    }
  }
}
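For reference, the same parameter can also be created from the CLI with aws ssm put-parameter; a minimal sketch, assuming the JSON above is saved locally as cw-agent-config.json (a hypothetical filename):
aws ssm put-parameter \
    --name "AmazonCloudWatch-linux" \
    --type String \
    --value file://cw-agent-config.json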
See "Start the CloudWatch agent" section in the documentation and run "AmazonCloudWatch-ManageAgent". I input "AmazonCloudWatch-linux" to the "Optional Configuration Location" box.
To check the status of the CloudWatch agent, I run sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -m ec2 -a status, which returns the output below, meaning the CloudWatch agent is running successfully.
{
  "status": "running",
  "starttime": "2022-07-20T15:06:12+0000",
  "configstatus": "configured",
  "cwoc_status": "running",
  "cwoc_starttime": "2022-07-20T15:06:11+0000",
  "cwoc_configstatus": "configured",
  "version": "1.247353.0b251941"
}
I also go to CloudWatch Metrics and confirm I get a new metric.
However, the execution history of "AmazonCloudWatch-ManageAgent" (Step 4) shows some messages under "Error":
Created symlink from /etc/systemd/system/multi-user.target.wants/cwagent-otel-collector.service to /etc/systemd/system/cwagent-otel-collector.service.
Redirecting to /bin/systemctl restart cwagent-otel-collector.service
2022/07/20 15:06:12 D! [EC2] Found active network interface
2022/07/20 15:06:12 Reading json config file path: /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d/ssm_AmazonCloudWatch-linux.tmp ...
2022/07/20 15:06:12 I! Valid Json input schema.
2022/07/20 15:06:12 D! [EC2] Found active network interface
Created symlink from /etc/systemd/system/multi-user.target.wants/amazon-cloudwatch-agent.service to /etc/systemd/system/amazon-cloudwatch-agent.service.
Redirecting to /bin/systemctl restart amazon-cloudwatch-agent.service
I also checked the "Output" of the execution history; as far as I can tell, it does not show any issues:
****** processing cwagent-otel-collector ******
Successfully fetched the config and saved in /opt/aws/amazon-cloudwatch-agent/cwagent-otel-collector/etc/cwagent-otel-collector.d/default.tmp
cwagent-otel-collector config has been successfully fetched.
cwagent-otel-collector has already been stopped
****** processing amazon-cloudwatch-agent ******
/opt/aws/amazon-cloudwatch-agent/bin/config-downloader --output-dir /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d --download-source ssm:AmazonCloudWatch-linux --mode ec2 --config /opt/aws/amazon-cloudwatch-agent/etc/common-config.toml --multi-config default
I! Trying to detect region from ec2
Region: ap-northeast-1
credsConfig: map[]
Successfully fetched the config and saved in /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d/ssm_AmazonCloudWatch-linux.tmp
Start configuration validation...
/opt/aws/amazon-cloudwatch-agent/bin/config-translator --input /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json --input-dir /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d --output /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.toml --mode ec2 --config /opt/aws/amazon-cloudwatch-agent/etc/common-config.toml --multi-config default
I! Detecting run_as_user...
I! Trying to detect region from ec2
No csm configuration found.
No log configuration found.
Configuration validation first phase succeeded
/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent -schematest -config /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.toml
Configuration validation second phase succeeded
Configuration validation succeeded
amazon-cloudwatch-agent has already been stopped
Question
Even though there is no problem at Steps 5, 6, and 8, why is there a message under Error at Step 7?

Rootless Podman with systemd in ubi8 Container on RHEL8 not working

We are trying to run a container from the ubi8-init image as a non-root user on RHEL8 with podman. We enabled cgroups v2 globally by adding the following kernel parameters and checked versions:
cgroup_no_v1=all systemd.unified_cgroup_hierarchy=1
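For reference, on RHEL8 these parameters can be added with grubby (a sketch; a reboot is required for them to take effect):
grubby --update-kernel=ALL --args="cgroup_no_v1=all systemd.unified_cgroup_hierarchy=1"
reboot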
$ podman -v
podman version 2.0.5
$ podman info --debug
host:
arch: amd64
buildahVersion: 1.15.1
cgroupVersion: v2
Subuid and subgid are set:
bob:100000:65536
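For reference, if the ranges were missing they could be assigned with usermod; a sketch (flag names from recent shadow-utils, verify against your version):
usermod --add-subuids 100000-165535 --add-subgids 100000-165535 bob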
Due to a permission problem, we applied an ugly workaround:
Failed to create /user.slice/user-992.slice/session-371.scope/init.scope control group: Permission denied
$ chown -R 992 /sys/fs/cgroup/user.slice/user-992.slice/session-371.scope
Now we are able to run the container and jump into it via podman exec /bin/bash. The problem is that we get the following error when we want to copy something into the container using podman cp:
opening file `/sys/fs/cgroup/cgroup.freeze` for writing: Permission denied
Sample output from the commands without the chown workaround:
# Trying with --cgroup-manager=systemd
$ podman run --name=ubi-init-test --cgroup-manager=systemd -it --rm --systemd=true ubi8-init
Error: writing file `/sys/fs/cgroup/user.slice/user-992.slice/user@992.service/cgroup.subtree_control`: No such file or directory: OCI runtime command not found error
# Trying with --cgroup-manager=cgroupfs
$ podman run --name=ubi-init-test --cgroup-manager=cgroupfs -it --rm --systemd=true ubi8-init
systemd 239 (239-41.el8_3) running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=legacy)
Detected virtualization container-other.
Detected architecture x86-64.
Welcome to Red Hat Enterprise Linux 8.3 (Ootpa)!
Set hostname to <b64ed4493a24>.
Initializing machine ID from random generator.
Failed to read AF_UNIX datagram queue length, ignoring: No such file or directory
Failed to create /init.scope control group: Permission denied
Failed to allocate manager object: Permission denied
[!!!!!!] Failed to allocate manager object, freezing.
Freezing execution.
There must be something completely wrong, misconfigured, or buggy. Has anyone done this, or does anyone have any advice regarding the issues we are running into?
I am trying to solve a similar issue.
I ran setsebool -P container_manage_cgroup true on top of adding the kernel parameters for cgroups v2, but it didn't help. Then I found this comment https://bbs.archlinux.org/viewtopic.php?pid=1895705#p1895705 and got a little bit further with --cgroup-manager=cgroupfs (I used podman unshare and then unset DBUS_SESSION_BUS_ADDRESS):
$ echo $DBUS_SESSION_BUS_ADDRESS
unix:path=/run/user/1000/bus
$ podman unshare
$ export DBUS_SESSION_BUS_ADDRESS=
$ podman run --name=ubi-init-test --cgroup-manager=cgroupfs -it --rm --systemd=true ubi8-init
systemd 239 (239-41.el8_3.1) running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=legacy)
Detected virtualization container-other.
Detected architecture x86-64.
Welcome to Red Hat Enterprise Linux 8.3 (Ootpa)!
Set hostname to <3caae9f73645>.
Initializing machine ID from random generator.
Failed to read AF_UNIX datagram queue length, ignoring: No such file or directory
Couldn't move remaining userspace processes, ignoring: Input/output error
[ OK ] Reached target Local File Systems.
[ OK ] Listening on Journal Socket.
[ OK ] Reached target Network is Online.
[ OK ] Started Dispatch Password Requests to Console Directory Watch.
[ OK ] Reached target Remote File Systems.
[ OK ] Reached target Slices.
Starting Rebuild Journal Catalog...
[ OK ] Started Forward Password Requests to Wall Directory Watch.
[ OK ] Reached target Paths.
[ OK ] Listening on initctl Compatibility Named Pipe.
[ OK ] Reached target Swap.
[ OK ] Listening on Process Core Dump Socket.
[ OK ] Listening on Journal Socket (/dev/log).
Starting Journal Service...
Starting Rebuild Dynamic Linker Cache...
Starting Create System Users...
[ OK ] Started Rebuild Journal Catalog.
[ OK ] Started Create System Users.
[ OK ] Started Rebuild Dynamic Linker Cache.
Starting Update is Completed...
[ OK ] Started Update is Completed.
[ OK ] Started Journal Service.
Starting Flush Journal to Persistent Storage...
[ OK ] Started Flush Journal to Persistent Storage.
Starting Create Volatile Files and Directories...
[ OK ] Started Create Volatile Files and Directories.
Starting Update UTMP about System Boot/Shutdown...
[ OK ] Started Update UTMP about System Boot/Shutdown.
[ OK ] Reached target System Initialization.
[ OK ] Started dnf makecache --timer.
[ OK ] Listening on D-Bus System Message Bus Socket.
[ OK ] Reached target Sockets.
[ OK ] Started Daily Cleanup of Temporary Directories.
[ OK ] Reached target Timers.
[ OK ] Reached target Basic System.
Starting Permit User Sessions...
[ OK ] Started D-Bus System Message Bus.
[ OK ] Started Permit User Sessions.
[ OK ] Reached target Multi-User System.
Starting Update UTMP about System Runlevel Changes...
[ OK ] Started Update UTMP about System Runlevel Changes.

reusing the salt states in the AMI image

I have several salt states (base and pillars) already written and stored in Amazon S3. I want to re-use these salt states instead of writing them again. I want to create an AMI image using Packer and apply the salt states that I have downloaded from S3 to the Packer builder EC2 instance (see the sketch after the commands below). Although salt-minion is already installed on the CentOS 7 machine, I have installed the salt-master service as well and started both salt-minion and salt-master with the following commands:
cat > /etc/salt/minion.d/minion_id.conf <<'EOT'
id: ${host}  # salt-minion id
EOT

# Generate the name of the master to connect to
cat > /etc/salt/minion.d/master_name.conf <<'EOT'
master: localhost
EOT
systemctl enable salt-minion
systemctl start salt-minion
systemctl enable salt-master
systemctl start salt-master
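As for pulling the states down from S3 onto the builder instance (mentioned above), something along these lines can be used (a sketch; s3://my-bucket/salt/... is a hypothetical path, adjust to wherever your states and pillars actually live):
aws s3 sync s3://my-bucket/salt/states /srv/salt
aws s3 sync s3://my-bucket/salt/pillar /srv/pillar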
When running the command below, it doesn't list any minions:
salt-key -L
Accepted Keys:
Denied Keys:
Unaccepted Keys:
Rejected Keys:
So the salt 'localhost-*' state.highstate command fails with errors:
"No minions matched the target. No command was sent, no jid was assigned.
ERROR: No return received"
This is because no minion key is registered with salt-key.
Does anybody have any idea why no salt-minion key is showing up in salt-key, and how I can resolve this so that the existing salt states downloaded from S3 can be applied successfully in the AMI image?
Regards
Pradeep
What could be happening is that your minions can't find (resolve via DNS) the salt master.
What you could do is add the IP of your master to your minion's /etc/salt/minion, something like this:
master: 10.0.0.1
Replace 10.0.0.1 with the IP of your master.
Then restart your minion and check the master again for pending key requests.
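A sketch of that last part:
# on the minion
systemctl restart salt-minion
# on the master
salt-key -L              # the minion should now show up under Unaccepted Keys
salt-key -a <minion-id>  # accept its key
salt '<minion-id>' test.ping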

ElasticSearch cloud-aws plugin not able to join cluster

So I have been attempting to use the Elasticsearch "cloud-aws" plugin to join Elasticsearch nodes to my single master. I have been through a few online guides and tried a few settings from various sources, but I still can't get the new nodes to join the existing master.
I have configured IAM roles and tags for EC2, and this is my elasticsearch.yml file on one node (the others are similar):
node.name: Thor
node.client: "true"
network.host: localhost
cloud.aws.access_key: foobar
cloud.aws.secret_key: barfoo
cloud.aws.region: eu-west-1
discovery.type: ec2
discovery.ec2.tag.elasticsearch: Ubuntu-ElasticNode
The logging from Elasticsearch is poor, and even in DEBUG mode not much is offered up.
[2016-03-15 23:01:05,440][INFO ][node ] [Thor] version[2.2.0], pid[1550], build[8ff36d1/2016-01-27T13:32:39Z]
[2016-03-15 23:01:05,447][INFO ][node ] [Thor] initializing ...
[2016-03-15 23:01:06,685][INFO ][plugins ] [Thor] modules [lang-expression, lang-groovy], plugins [cloud-aws], sites []
[2016-03-15 23:01:10,016][INFO ][node ] [Thor] initialized
[2016-03-15 23:01:10,017][INFO ][node ] [Thor] starting ...
[2016-03-15 23:01:10,106][INFO ][transport ] [Thor] publish_address {localhost/127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2016-03-15 23:01:10,115][INFO ][discovery ] [Thor] elasticsearch/9PmYq5tXQcaPUPqDh4VTSQ
[2016-03-15 23:01:40,116][WARN ][discovery ] [Thor] waited for 30s and no initial state was set by the discovery
[2016-03-15 23:01:40,155][INFO ][http ] [Thor] publish_address {localhost/127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2016-03-15 23:01:40,155][INFO ][node ] [Thor] started
[2016-03-15 23:54:40,863][DEBUG][action.admin.cluster.health] [Thor] no known master node, scheduling a retry
[2016-03-15 23:55:10,864][DEBUG][action.admin.cluster.health] [Thor] timed out while retrying [cluster:monitor/health] after failure (timeout [30s])
[2016-03-15 23:55:10,874][INFO ][rest.suppressed ] /_cluster/health Params: {pretty=}
MasterNotDiscoveredException[null]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$5.onTimeout(TransportMasterNodeAction.java:205)
at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:239)
at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(InternalClusterService.java:794)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I have the port range 9200-9400 open between the Elasticsearch servers, but the log seems to indicate that discovery is still timing out. I set "discovery.ec2.tag.*" to speed up the discovery process, but this hasn't helped.
Does anyone have any idea how this plugin needs to be configured? I have read a few guides, and many use even fewer configuration options than I do and are still able to join nodes to the master.
I'm running ElasticSearch 2.2. Here's an example of my working config:
plugin.mandatory: cloud-aws
cluster.name: mynewcluster
cloud.aws.access_key: mykey
cloud.aws.secret_key: mysecret
cloud.aws.region: us-east-1
discovery.type: ec2
discovery.ec2.tag.elasticsearch: mynewcluster
I think you need to look at your network config, specifically network.host. From the docs:
Elasticsearch binds to localhost only by default. This is sufficient for you to run a local development server (or even a development cluster, if you start multiple nodes on the same machine), but you will need to configure some basic network settings in order to run a real production cluster across multiple servers.
I don't have the network.host setting in my elasticsearch.yml, which leads me to suggest taking it out altogether. However, since the docs say that it binds to localhost by default, I would also suggest that you try setting it to the public hostname or IP of the node.
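For example, something along these lines (a sketch; 10.0.0.5 stands in for the node's actual private IP, and the special _ec2:privateIp_ value is documented by the cloud-aws plugin):
network.host: 10.0.0.5    # or _ec2:privateIp_, resolved by the cloud-aws plugin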
All of this assumes that you correctly set up IAM and Security Groups according to https://github.com/elastic/elasticsearch-cloud-aws.
So after having this chat on the ES forums: https://discuss.elastic.co/t/cloud-aws-plugin-not-able-to-join-cluster/44703/3
I decided to rebuild the node cleanly, as I suspected the Java downgrade from 8 to 7 (done to allow the cloud-aws plugin to work) might be causing the issue, and I had tried too many failed fixes. I also (on advice in the link provided) installed the marvel-agent and license plugins, but I haven't seen anyone else do this to get discovery working, so I am not sure it is a requirement. I also made sure to hold ES package upgrades, because the marvel plugin did a bit of complaining when ES upgraded (although the plugin could also have been upgraded, so this is just a personal preference really).
Discovery is now working very well.

fail to insert data into local Google App Engine datastore

I am following the example from Google's Mobile Shopping Assistant sample, which asks me to import data according to this link.
I tried the steps according to the example (all the sample code is vanilla; I didn't change anything except to fix the warning to use the latest Gradle build version).
I believe that I am missing an essential step that is not stated in the example. Can someone provide some insight into which steps I did wrong?
The following are the steps I did:
Start the local Google App Engine app "backend".
Ran the command appcfg.py upload_data --config_file bulkloader.yaml --url=http://localhost:8080/remote_api --filename places.csv --kind=Place -e myEmailAddress@gmail.com.
This command is supposed to insert 2 rows into the datastore (places.csv contains two entries)
This gave me the following readout:
10:07 AM Uploading data records.
[INFO ] Logging to bulkloader-log-20151020.100728
[INFO ] Throttling transfers:
[INFO ] Bandwidth: 250000 bytes/second
[INFO ] HTTP connections: 8/second
[INFO ] Entities inserted/fetched/modified: 20/second
[INFO ] Batch Size: 10
Error 302: --- begin server output ---
--- end server output ---
I then go to "http://localhost:8080/admin/buildsearchindex" which displays "MaintenanceTasks completed."
Next I go to "http://localhost:8080/_ah/admin" but it displays
Datastore has no entities in the Empty namespace. You need to add data
programatically before you can use this tool to view and edit it.
I had the same problem, but with the deployed version rather than the local development server. After nearly going insane, I found a workaround to upload the data using appcfg. In my case, I noticed that when trying the following:
Input not working for me:
gcloud auth login
appcfg.py upload_data --config_file bulkloader.yaml --url=http://<yourproject>.appspot.com/remote_api --filename places.csv --kind=Place --email=<you@gmail.com>
Output of error:
11:10 AM Uploading data records.
[INFO ] Logging to bulkloader-log-20160108.111007
[INFO ] Throttling transfers:
[INFO ] Bandwidth: 250000 bytes/second
[INFO ] HTTP connections: 8/second
[INFO ] Entities inserted/fetched/modified: 20/second
[INFO ] Batch Size: 10
Error 302: --- begin server output ---
--- end server output ---
As expected, I was not asked to authenticate myself again during the second command, but apparently appcfg could still not authenticate my account. I am using Win7 with Python 2.7, the Python Google App Engine SDK (which includes appcfg.py), and gcloud from the Google Cloud SDK, if I get it right.
However, https://cloud.google.com/container-registry/docs/auth shows that you can print out the access token using the gcloud command and then insert it manually into your appcfg command, which worked for me.
Input working for me:
gcloud auth login
gcloud auth print-access-token
This prints out the access token, which you can then use with appcfg:
appcfg.py upload_data --oauth2_access_token=<oauth2_access_token> --config_file bulkloader.yaml --url=http://<yourproject>.appspot.com/remote_api --filename places.csv --kind=Place --email=<you@gmail.com>
Output of successful data upload:
10:42 AM Uploading data records.
[INFO ] Logging to bulkloader-log-20160108.104246
[INFO ] Throttling transfers:
[INFO ] Bandwidth: 250000 bytes/second
[INFO ] HTTP connections: 8/second
[INFO ] Entities inserted/fetched/modified: 20/second
[INFO ] Batch Size: 10
[INFO ] Opening database: bulkloader-progress-20160108.104246.sql3
[INFO ] Connecting to <yourproject>.appspot.com/remote_api
[INFO ] Starting import; maximum 10 entities per post
.
[INFO ] 3 entities total, 0 previously transferred
[INFO ] 3 entities (4099 bytes) transferred in 1.7 seconds
[INFO ] All entities successfully transferred
I hope this helps anybody trying to solve this problem. To me, it is not clear what the source of this problem is, but this is a workaround that works for me.
BTW, I observed the same problem on a Mac.
So here is what I found through testing. I went through the same steps initially and got the same error, but what is worth noting in the output is the entry INFO client.py:669 access_token is expired:
MobileAssistant-Data> appcfg.py upload_data --config_file bulkloader.yaml --url=http://localhost:8080/remote_api --filename places.csv --kind=Place -e myEmailAddress@gmail.com
05:12 PM Uploading data records.
[INFO ] Logging to bulkloader-log-20151112.171238
[INFO ] Throttling transfers:
[INFO ] Bandwidth: 250000 bytes/second
[INFO ] HTTP connections: 8/second
[INFO ] Entities inserted/fetched/modified: 20/second
[INFO ] Batch Size: 10
2015-11-12 17:12:38,466 INFO client.py:669 access_token is expired. Now: 2015-11-12 22:12:38.466000, token_expiry: 2015-11-06 01:33:21
Error 302: --- begin server output ---
This looked somewhat like an issue I saw in the Remote API handler for the dev server that surfaced after ClientLogin was deprecated (but in the Python SDK). Just to test I edited the build.gradle to use the latest SDK version (1.9.28 from 1.9.18) and ran it again:
MobileAssistant-Data> appcfg.py upload_data --config_file bulkloader.yaml --url=http://localhost:8080/remote_api --filename places.csv --kind=Place -e myEmailAddress@gmail.com
05:16 PM Uploading data records.
[INFO ] Logging to bulkloader-log-20151112.171615
[INFO ] Throttling transfers:
[INFO ] Bandwidth: 250000 bytes/second
[INFO ] HTTP connections: 8/second
[INFO ] Entities inserted/fetched/modified: 20/second
[INFO ] Batch Size: 10
2015-11-12 17:16:15,177 INFO client.py:669 access_token is expired. Now: 2015-11-12 22:16:15.177000, token_expiry: 2015-11-06 01:33:21
2015-11-12 17:16:15,565 INFO client.py:669 access_token is expired. Now: 2015-11-12 22:16:15.565000, token_expiry: 2015-11-06 01:33:21
2015-11-12 17:16:15,573 INFO client.py:571 Refreshing due to a 401 (attempt 1/2)
2015-11-12 17:16:15,575 INFO client.py:797 Refreshing access_token
2015-11-12 17:16:16,039 INFO client.py:571 Refreshing due to a 401 (attempt 2/2)
2015-11-12 17:16:16,040 INFO client.py:797 Refreshing access_token
... (ad infinitum)
This output now mirrors the Python Remote API issue exactly. So it seems that the same issue exists with the Java Remote API (the auth check has not been properly updated to use the new auth scheme).
The workaround in Python was to manually edit the SDK source code and stub out the auth check. I suspect a similar hack would be necessary for the Java SDK. It's not as straightforward though as the SDK would need to be rebuilt from source.
If I come across anything else I will update this answer with my findings. Note that this should work perfectly fine with a deployed application - it's only the dev server that is affected.
Update:
The culprit is the admin check in com/google/apphosting/utils/remoteapi/RemoteApiServlet.java as with the same issue in the Python SDK noted above. Unfortunately you cannot trivially rebuild the SDK from source, as the build target in build.xml only includes 'jsr107cache' and the rest of the build is done from pre-generated binaries. It looks like we'll have to just wait until this is fixed in a future release, but for now I will update the public bug.
For now I would recommend sticking to the documentation and only using the deployed app version for remote_api uploads.
Better to use the new UI in the Google Developers Console: https://console.developers.google.com/project/<YOUR_PROJECT_ID>/datastore
You can see the Query and Indexes subsections there for your kinds (you can also use GQL) and indexes.
NOTE: I also noticed that a particular kind does not appear in the Query section unless some data has been added to it (GQL also returns empty data). I do see that particular kind in the Indexes section, though.