I have an EC2 instance running at AWS with some standard webpages. For the past few days the server has been replying with "AWS!" instead of delivering the index page. Checking the source code of this page:
```html
<html><body><script type="text/javascript" src="http://gc.kis.v2.scr.kaspersky-labs.com/XXXXX/main.js" charset="UTF-8"></script>AWS!</body></html>
```
Kaspersky is not installed on this instance. I haven't found any hints on Google so far; maybe someone has had a similar experience and can give me a hint as to why my index page is no longer shown (the code was not changed). Has AWS perhaps undergone a change?
Any hint is much appreciated.
Issue fixed; stdunbar and John pointed in the right direction: the DNS entry was wrong, so the domain was resolving to a different server entirely. That also explains the Kaspersky script: it is injected client-side by Kaspersky's browser protection on the machine viewing the page, not by anything on the server.
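For anyone who hits something similar: before digging into the server, it's worth confirming what the domain actually resolves to. A minimal sketch in Python; the domain and expected IP are hypothetical placeholders (the same check can be done with dig or nslookup):

```python
# Minimal sketch: confirm the domain's DNS actually points at your instance.
# DOMAIN and EXPECTED_IP are hypothetical placeholders.
import socket

DOMAIN = "example.com"           # your site's domain
EXPECTED_IP = "203.0.113.10"     # the EC2 instance's public/Elastic IP

resolved = socket.gethostbyname(DOMAIN)
print(f"{DOMAIN} resolves to {resolved}")
if resolved != EXPECTED_IP:
    print("DNS points elsewhere -- requests are reaching a different server.")
```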
I attempted a minor system update to WikiJS yesterday afternoon, and when the site came back up after a restart, all I'm getting now is a "site can't be reached" failure.
I'm having issues SSHing into the box and am looking for ways around that particular problem. It's hosted on an AWS EC2 instance that I can stop/start/reboot, but that's it.
At one point yesterday I did get an "Unknown authentication strategy \"jwt\"" error, but now it's showing nothing again.
While I'm working through the issues of getting into the box itself, is there anything that jumps out at y'all that I should be looking towards?
Many thanks in advance.
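While you work on regaining shell access, one quick external check is whether the instance's ports respond at all; that separates "instance or network is down" from "WikiJS didn't start". A minimal sketch in Python, assuming a hypothetical public IP (WikiJS listens on port 3000 by default, often behind a proxy on 80/443):

```python
# Minimal sketch: probe SSH and web ports from your own machine.
# HOST is a hypothetical placeholder for the instance's public IP.
import socket

HOST = "203.0.113.10"            # hypothetical EC2 public IP
PORTS = [22, 80, 443, 3000]      # SSH, HTTP, HTTPS, WikiJS default

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(3)
        status = "open" if sock.connect_ex((HOST, port)) == 0 else "closed/filtered"
        print(f"port {port}: {status}")
```

If port 22 shows closed/filtered, the problem is likely the security group or the instance itself rather than WikiJS.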
This may be a very simple thing, but I am pretty new to GCP and don't really understand how all this stuff works so please bear with me.
I am trying to host a static site with GCP. My site is built with Jekyll and I am using GCP containers to deploy it. I got that part working.
I then wanted to give it a human-friendly URL. I bought one using the GCP console and then went to create a domain name mapping. That was a couple of days ago and it is still pending. Some similar posts suggested that canceling and restarting the mapping process helped, but I've tried three times so far, waiting ~24 hours between attempts, and still no luck.
It tells me that I need to configure the DNS records with my domain host, but if I understand correctly, GCP is my domain host. I have also followed the instructions here and still no luck.
Am I doing something wrong or perhaps I am missing something here?
Note: I have DNSSEC on; maybe that makes a difference.
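If your zone is managed in Cloud DNS (which may be the case when GCP is also your registrar), the records the mapping asks for are added to that zone. A minimal sketch with the google-cloud-dns client; the project ID, zone name, domain, and CNAME target are all hypothetical placeholders, so use the exact records the domain-mapping page shows you. With DNSSEC enabled, record changes can also take longer to propagate.

```python
# Minimal sketch with the google-cloud-dns client. All names here are
# hypothetical placeholders; use the record name/type/value that the
# domain-mapping page tells you to create.
from google.cloud import dns

client = dns.Client(project="my-project-id")      # hypothetical project ID
zone = client.zone("my-zone", "example.com.")     # hypothetical zone + domain

# Example: CNAME a subdomain to the target shown in the mapping instructions.
record = zone.resource_record_set(
    "www.example.com.", "CNAME", 300, ["ghs.googlehosted.com."]
)
change = zone.changes()
change.add_record_set(record)
change.create()  # submits the change set; propagation is not instant
```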
I am trying to access some S3 items (JSON files) from a Neo4j database that is running on an EC2 machine.
I have trouble understanding what is meant by "endpoint" and "port" in the APOC manual.
[Screenshot from the APOC manual]
I downloaded all the plugins described in the picture, and my EC2 instance is running in a VPC.
I'm new to AWS, sorry for the basic question :/
And thanks for your reply.
Based on the comments, the issue was using the generic standard endpoint instead of the required region-specific one.
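To make "endpoint" and "port" concrete: the endpoint is the S3 hostname APOC should talk to (the region-specific one for your bucket, e.g. s3.eu-central-1.amazonaws.com), and the port is normally 443 for HTTPS. A minimal sketch using the official neo4j Python driver to call apoc.load.json; the bucket, key, credentials, and region are hypothetical, and the exact S3 URL syntax to use is the one given in your APOC version's manual:

```python
# Minimal sketch: call apoc.load.json against a region-specific S3 endpoint
# through the neo4j Python driver. Credentials, bucket, key, and region are
# hypothetical placeholders; check your APOC version's manual for the exact
# S3 URL syntax it expects.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# endpoint = regional S3 hostname, port = 443 (HTTPS)
s3_url = (
    "s3://s3.eu-central-1.amazonaws.com:443/my-bucket/data.json"
    "?accessKey=MY_ACCESS_KEY&secretKey=MY_SECRET_KEY"
)

with driver.session() as session:
    result = session.run(
        "CALL apoc.load.json($url) YIELD value RETURN value LIMIT 5",
        url=s3_url,
    )
    for record in result:
        print(record["value"])

driver.close()
```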
This morning I logged into my PC and attempted to connect remotely to a VM I have. The reported error was that no connection could be made. I logged into my cloud console to find no projects.
Google Support is not available to me, as I have the Bronze package and I do not have $150 available to upgrade it.
Are there any logs that could explain what happened? Did it just get wiped out? The instance is still there. But the machine itself is gone. I can't find any records of it. Please advise however you can.
I believe your question confuses others as well: what do you mean by "The instance is still there. But the machine itself is gone"?
You also mentioned "I log into my cloud console to find no projects," which means you would see nothing until you choose a valid project.
Please be more specific about your question.
Could you tell us, step by step, what you do in the Google Cloud Platform console? Where do you click, and what do you type?
Please also check the Activity tab on the home page of the console. Once there, on the right-hand side, select Resource type: GCE VM instance to see modifications to VMs.
We need to know exactly what you are seeing at each step, and any error codes. Then we can see whether the problem is in your procedure, or whether there is an issue you should report to billing support, which is free, as pointed out by John Hanley in his comment.
When you do this, please make sure you don't include any personal information in the data you post here (such as a project ID or password).
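To complement the Activity-tab suggestion: the same admin-activity audit entries can be pulled programmatically, which makes it easier to spot who deleted or modified a VM. A minimal sketch with the google-cloud-logging client, assuming a hypothetical project ID and that the project still exists and you have permission to read its logs:

```python
# Minimal sketch: list recent Compute Engine admin-activity audit entries.
# "my-project-id" is a hypothetical placeholder.
from google.cloud import logging

client = logging.Client(project="my-project-id")

log_filter = (
    'resource.type="gce_instance" '
    'AND logName:"cloudaudit.googleapis.com%2Factivity"'
)

for i, entry in enumerate(
    client.list_entries(filter_=log_filter, order_by=logging.DESCENDING)
):
    print(entry.timestamp, entry.payload)  # payload includes methodName, caller
    if i >= 19:  # only look at the 20 most recent entries
        break
```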
I have a project deployed on an EC2 instance, and it is up.
But sometimes when I log in through FTP and transfer the updated build to the EC2 instance, some of my project files go missing.
After a while, those files appear again in the same place.
I can't work out why this unexpected behavior is happening. Let me know if anyone has faced a similar situation.
Also, can anyone give me a way to see all the logins being made through FTP and SSH on my EC2 instance? (A sketch for this appears after the answer below.)
Files don't just randomly go missing on an EC2 instance. I suspect there is something else going on, and you'll need to diagnose it. There is not enough information here to help you directly, but I can try to point you in the right direction.
A few things that come to mind are:
What are you using to execute the FTP transfer? If the files appear after some time, are you sure the transfer isn't simply still in progress when you first check, with the files showing up once it's done? Are you sure nothing is being cached?
Are you sure your FTP client is connected to the right instance?
Are you sure there are no cron tasks or external entities connecting to the instance and cleaning out a certain directory? You said something about the build; is this a build agent you're performing this on?
I highly doubt it's this one, but: what type of volume are you working on? EBS? Instance store? Instance store volumes are ephemeral, so stopping and starting the instance can result in data being lost.
Have you tried using scp?
If you're still stumped, please provide more info on your EC2 config and how you're transferring the files.
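On the login-audit question above: accepted SSH logins are written to the system auth log, and most FTP daemons keep a log of their own (e.g. vsftpd's /var/log/vsftpd.log, if that's what is installed). A minimal sketch that scans the auth log; the path varies by distribution (/var/log/auth.log on Ubuntu/Debian, /var/log/secure on Amazon Linux/RHEL):

```python
# Minimal sketch: list accepted SSH logins from the system auth log.
# Adjust LOGFILE for your distribution (/var/log/secure on Amazon Linux).
LOGFILE = "/var/log/auth.log"

with open(LOGFILE, errors="replace") as f:
    for line in f:
        # sshd logs lines like: "Accepted publickey for ubuntu from 203.0.113.5 ..."
        if "sshd" in line and "Accepted" in line:
            print(line.rstrip())
```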