I have tried commands entered at the PuTTY command line, such as
rsync -avz myHugeFile.dat amazonusername@my.amazon.host.ip:
Trouble Uploading Large Files to RStudio using Louis Aslett's AMI on EC2
(code taken from the linked question above)
I could not get it to work, though, and I think this was because I did not know how to adapt the command to the location of my files.
I am posting this as a separate question rather than a comment because I don't have enough 'reputation' to comment. I am asking this as a Windows user.
1) Was I correct to run this command in PuTTY?
2) Did I need to put anything before it?
3) The only parts I can edit are the address of the EC2 instance (which I have correct) and the location of the file. If the file is on my desktop, how would I write its path?
Even if you can only answer one of these basic questions it would be really helpful as I piece together code on how to do this.
If this question is too basic for this site and you are going to remove it, could you please give me an answer before doing so ;)
To sum up what we said in the comments: don't use PuTTY to upload files; it is intended for SSHing into your instances. Use software like WinSCP or FileZilla instead, which are free and easy to use. You will find it a lot easier.
So I've recently joined a project. They asked me to develop a particular module, so I did. Now I need to integrate it with the system. They gave me AWS login credentials for the integration. I'm new to AWS and I don't want to sound dumb to them by asking where the code is. I saw that there's an EC2 instance running, but I see no option to view code there. So can you please let me know where I can see the code of a running EC2 instance?
Never feel dumb about asking questions on your team. It's much better to ask questions and seek clarification, rather than assume and waste your time and theirs.
So if your team is tasking you with integrating a module you've built with something running on EC2, they probably have an API of some sort to integrate with. They likely aren't expecting you to go to EC2 and view code or decompile DLLs to view source code.
However, to potentially answer your question: if your EC2 instance is running some sort of application that has DLLs, you can download those and decompile them using various tools to view the actual source code. You would of course need the keypair to access the EC2 instances, so you'd have to get that first.
I would just ask someone on your team how to integrate with the system running on EC2. They likely have the source code stored somewhere in a repository.
Is there any option in AWS to restrict developers from downloading source code from the Cloud9 IDE?
I don't think that would be possible, even if AWS wanted to offer it. If someone can see the code, they can copy the code, if only by copy and paste, so any attempt to prevent someone from downloading what they can already see with their own eyes is going to fall short.
I've been searching for an answer to this question for quite some time now. I've read almost every official AWS tutorial on their services, watched several YouTube videos, and read some third-party tutorials on the subject, but there doesn't seem to be a simple, easy-to-follow solution. Then I searched on Stack Overflow, and although there are 2-3 similar questions, they either don't address this exact problem or their solutions are not applicable/understandable and require further explanation.
Problem:
On my PC I do docker-compose build and then docker-compose up and voila, I have my instances up and running without problems.
I want to do the exact same thing, but so that every service in the docker-compose.yml file starts on its own EC2 instance (or whatever service AWS offers for this). Then I want to be able to change my code and push it to GitHub/GitLab/Bitbucket (in my case GitLab) and have it deployed to my instances (but this goes into CI/CD territory, so it's not important for this exact question).
I think that most people who are used to Docker and Docker Compose and want to start using AWS have already encountered, or will encounter, this problem.
A step-by-step solution (or at least a link, or a pointer in the right direction) would be really useful, because at the moment there is just too much information on EC2, ECR, ECS, IAM, and a bunch of other services, and for a beginner in the AWS world it is really, really hard to understand and follow.
EDIT:
I know that it probably isn't that simple, but a solution must exist, even if it is something as cumbersome as creating every single service by itself on ECS (as mentioned in the comments). If that is the only way, then sure, I'm all for it; I did try several of those tutorials but still didn't succeed in my goal.
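To make the starting point concrete, here is a sketch of the kind of minimal docker-compose.yml being described (service names and images are made up for illustration). On AWS, each such service roughly maps to an ECS service with its own task definition, with images pushed to ECR instead of built locally:

```shell
# Write out a hypothetical two-service compose file (names and images
# are placeholders, not a recommendation):
cat > /tmp/docker-compose-demo.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:alpine     # on ECS: an image in ECR, referenced by a task definition
    ports:
      - "80:80"
  api:
    build: ./api            # on ECS: built in CI, pushed to ECR
    environment:
      - WEB_HOST=web        # on ECS: environment entries in the container definition
EOF
# Locally this would be started with:
#   docker-compose -f /tmp/docker-compose-demo.yml up
cat /tmp/docker-compose-demo.yml
```

There is no one-click equivalent of `docker-compose up` that fans services out to separate EC2 instances; the compose file is the inventory you translate, service by service, into ECS (or similar) resources.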
I have a project deployed on an EC2 instance and it is up.
But sometimes when I log in through FTP and transfer the updated build to the EC2 instance, some of my project files go missing.
After a while, those same files are listed in the same place again.
I can't work out why this unexpected behaviour is happening. Let me know if anyone has faced a similar situation.
Alternatively, can anyone give me a way to see all the logins made through FTP and SSH on my EC2 instance?
Files don't just randomly go missing on an EC2 instance. I suspect there is something going on and you'll need to diagnose it. There is not enough information here to help you, but I can try to point you in the right direction.
A few things that come to mind are:
What are you running to execute the FTP transfer? If the files appear after some time, are you sure the transfer is not simply still in progress when you first check, with the files appearing once it finishes? Are you sure nothing is being cached?
Are you sure your FTP client is connected to the right instance?
Are you sure there are no cron tasks or external entities connecting to the instance and cleaning out a certain directory? You said something about a build; is this a build agent you're performing the transfer on?
I highly doubt it's this one but: What type of volume are you working on? EBS? Instance Store? Instance Store is ephemeral so stopping/starting the instance can result in data being lost.
Have you tried using scp?
If you're still stumped, please provide more info on your ec2 config and how you're transferring the file.
Is there any way to save a file from the Linux server to my desktop? In my college we use Windows XP and PuTTY to connect to the college Linux server. We have individual accounts on the server. I have created a lot of .cpp files on it and now want to copy them to my pen drive so I can work with them on my home PC. Please also mention a way to copy from the desktop to the server (i.e., to the home directory of my account on it).
Thank you for your help in advance. :) :D
WinSCP does this very nicely in either SFTP, SCP, FTPS or FTP.
Depending on your permissions and what is on the box, you can email the contents of files to yourself:
mail -s "Subject" myemail@somewhere.com < /home/me/file.txt
You can always test with something simple:
mail -s "Hi" myemail@somewhere.com
Set up an online account for a version control system (Git, Mercurial, Bazaar, SVN), and store your files there. That way, you can just "clone", "pull" or "update" the files wherever you are, as long as you have a reasonable connection to the internet.
There are quite a few sites that have free online version control systems, so it's mostly a case of "pick a version control system", and type "free online vcs server" into your favourite search engine (replace vcs with your choice of version control system).
An added benefit is that you will have version control, and thus be able to go back and forth between different versions. That is very useful when you realise that all the changes you made this morning were a bad route to follow, and you want to go back to where you were yesterday afternoon, before you started breaking things. (I still do that sometimes, after over 30 years of programming; I just tend to realise sooner that I've messed up, and go back to the original code.)
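The workflow suggested above can be sketched locally with Git. The commit identity and the hosted-remote URL are placeholders; any free Git host would do:

```shell
# Minimal local sketch of the version-control workflow (identity and
# remote URL are placeholders):
rm -rf /tmp/vcs-demo
mkdir -p /tmp/vcs-demo && cd /tmp/vcs-demo
git init -q
echo 'int main() { return 0; }' > prog.cpp
git add prog.cpp
git -c user.email=me@college.example -c user.name=Me \
    commit -q -m "First version of prog.cpp"
git log --oneline
# At college: git remote add origin <your-hosted-repo-url> && git push
# At home:    git clone <your-hosted-repo-url>   (then pull for updates)
```

Repeating the add/commit/push cycle at college and pull at home replaces the pen drive entirely, and keeps the full history of every .cpp file as a bonus.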