Accessing HTTP resources on AWS S3 with authorization - amazon-web-services

I have an s3 bucket with a set of static website resources as shown below:
Root-Bucket/
|-- website1/
| |-- index.html
| |-- 1_resource1.jpg
| |-- 1_resource2.css
|-- website2/
| |-- index.html
| |-- 2_resource1.jpg
| |-- 2_resource2.css
All of the objects shown above are private by default, and I don't want these resources to be accessible to everyone; only authorized users should be able to view index.html and its attached resources.
Is there any way to serve such a static website with authorization?
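One common pattern (a suggestion, not something stated in the question) is to keep the bucket private and have a small backend perform the authorization check, then hand authorized users short-lived presigned URLs for the objects. A minimal boto3 sketch, assuming the bucket name Root-Bucket from the tree above and default AWS credentials:

import boto3

s3 = boto3.client("s3")

def signed_url_for(key, expires=300):
    # Return a short-lived URL for a private object.
    # Call this only after your own authorization check has passed.
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "Root-Bucket", "Key": key},
        ExpiresIn=expires,
    )

# Example: hand this URL to an authorized user
print(signed_url_for("website1/index.html"))

Note that each referenced asset (the .jpg and .css files) needs its own presigned URL, so for whole static sites a CloudFront distribution in front of the private bucket, using signed URLs or signed cookies, is the more common approach.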


Using shellspec for testing

I'm trying to implement a testing framework using shellspec.
I have read the article and the README at the shellspec GitHub project, but I'm still confused about how to customise the project directories.
I'd like my testing framework to have the following structure:
<root_dir>
|-- README
|
|-- tests
|
|-- test_instance_1
|   |
|   |-- lib
|   |   |
|   |   |-- my_test_1.sh
|   |
|   |-- spec
|       |
|       |-- my_test_1_spec.sh
|
|-- test_instance_2
    |
    |-- lib
    |   |
    |   |-- my_test_2.sh
    |
    |-- spec
        |
        |-- my_test_2_spec.sh
As mentioned in the shellspec GitHub project, it is possible to customise the directory structure:
This is the typical directory structure. Version 0.28.0 allows many of
these to be changed by specifying options, supporting a more flexible
directory structure.
So I tried to modify my .shellspec file in the following way:
--default-path "***/spec"
--execdir #basedir/lib`
But when I run the shellspec command, I get the following errors:
shellspec.sh: eval: line 23: unexpected EOF while looking for matching ``'
shellspec.sh: eval: line 24: syntax error: unexpected end of file
shellspec is run in <root_dir>.
I also saw that there should be a .shellspec-basedir file in each subdirectory, but I don't understand what it should contain.
I'd be happy if someone could point to an existing project with a custom directory structure or tell me what I'm doing wrong.
The answer turned out to be very simple. You need to use
--default-path "**/spec"
to find the _spec.sh files in all spec/ directories in the project.
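For context, once --default-path "**/spec" is set, a spec such as test_instance_1/spec/my_test_1_spec.sh could look like the minimal sketch below; the Include path and the greet function are hypothetical and only illustrate the ShellSpec DSL:

Describe 'my_test_1.sh'
  # Include sources the script under test; this path is hypothetical and
  # depends on where --execdir points.
  Include lib/my_test_1.sh

  It 'prints a greeting'
    When call greet "world"          # greet is a hypothetical function
    The output should equal "Hello, world"
  End
End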

AWS Glue causing an issue after reading an array JSON file

We need your assistance with the AWS Glue ETL issue below.
We are trying to read JSON files using an AWS Glue dynamic frame.
Example input JSON data:
{"type":"TripLog","count":"2","def":["CreateTimestamp","UUID","DataTimestamp","VIN","DrivingRange","DrivingRangeUnit","FinishPos.Datum","FinishPos.Event","FinishPos.Lat","FinishPos.Lon","FinishPos.Odo","FinishPos.Time","FuelConsumption1Trip","FuelConsumption1TripUnit","FuelConsumptionTripA","FuelConsumptionTripAUnit","FuelUsed","FuelUsedUnit","Mileage","MileageUnit","ODOUnit","Score.AcclAdviceMsg","Score.AcclScore","Score.AcclScoreUnit","Score.BrakeAdviceMsg","Score.BrakeScore","Score.BrakeScoreUnit","Score.ClassJudge","Score.IdleAdviceMsg","Score.IdleScore","Score.IdleScoreUnit","Score.IdleStopTime","Score.LifetimeTotalScore","Score.TotalScore","Score.TotalScoreUnit","StartPos.Datum","StartPos.Lat","StartPos.Lon","StartPos.Odo","StartPos.Time","TripDate","TripId"],"data":[["2017-10-17 08:47:17.930","xxxxxxx","20171017084659"," xxxxxxxxxxx ","419","mile","WGS84","Periodic intervals during IG ON","38,16,39.846","-77,30,45.230","33559","20171017-033104","50.1","M-G - mph(U.S. gallon)","36.0","M-G - mph(U.S. gallon)","428.1","cm3",null,null,"km",null,null,"%",null,null,"%",null,null,null,"%","0x0",null,null,"%","WGS84","39,12,50.988","-76,38,36.417","33410","20171017-015103","20171017-015103","0"],["2017-10-17 08:47:17.930"," xxxxxxx ","20171017084659","xxxxxxxxxxx","414","mile","WGS84","Periodic intervals during IG ON","38,12,12.376","-77,29,57.915","33568","20171017-033604","50.1","M-G - mph(U.S. gallon)","36.0","M-G - mph(U.S. gallon)","838.0","cm3",null,null,"km",null,null,"%",null,null,"%",null,null,null,"%","0x0",null,null,"%","WGS84","39,12,50.988","-76,38,36.417","33410","20171017-015103","20171017-015103","0"]]}
Step 1: Code to read the JSON file into a dynamic frame (landing_location is our file location):
dyf = glueContext.create_dynamic_frame.from_options(
    connection_type = "s3",
    connection_options = {"paths": [landing_location], 'recurse': True,
                          'groupFiles': 'inPartition', 'groupSize': '1048576'},
    format = "json",
    transformation_ctx = "dyf")
dyf.printSchema()
root
|-- type: string
|-- count: string
|-- def: array
|    |-- element: string
|-- data: array
|    |-- element: array
|    |    |-- element: choice
|    |    |    |-- int
|    |    |    |-- string
Step 2: Convert into a Spark data frame and explode the data.
dtcadf = dyf.toDF()
dtcadf.show(truncate=False)
dtcadf.registerTempTable('dtcadf')
data=spark.sql('select explode(data) from dtcadf')
data.show(1,False)
We get the following error:
An error occurred while calling o270.showString.
Note: the same file succeeds when we read it directly into a Spark data frame instead of an AWS Glue dynamic frame.
Can you please help me resolve the issue, and let me know if you need any further information from our end.
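One possible workaround, sketched below under the assumption that the failure comes from the choice (int/string) element type inside data: resolve the choice to a single type before converting to a Spark data frame. resolveChoice is a documented DynamicFrame method; casting everything to string is an assumption that may not fit every downstream use.

# Cast every ambiguous (choice) element to string before toDF().
resolved = dyf.resolveChoice(choice = "cast:string")
resolved.printSchema()

dtcadf = resolved.toDF()
dtcadf.createOrReplaceTempView('dtcadf')
data = spark.sql('select explode(data) from dtcadf')
data.show(1, False)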

GitHub Pages makes me download markdown files

I'm creating a GitHub Pages website with this tree in my repository:
|- pages
| |- 0.3.0
| | |- SUMMARY.md
| | |- core
| | | |- String.md
|- LICENSE.md
|- README.md
|- _config.yml
|- index.md
In index.md I wrote
* [0.3.0](pages/0.3.0/SUMMARY.md)
and in pages/0.3.0/SUMMARY.md
* [String](core/String.md)
My problem is this: I can correctly access the main page ('index.md') and pages/0.3.0/SUMMARY.md, but when I try to access the page generated from String.md, GitHub Pages makes me download the file instead of loading the .html page.
What am I doing wrong?
Here is the website and here is my repository.
Your links are incorrect.
You should use the [String](core/String.html) URL instead of [String](core/String.md), because Jekyll renders .md files as .html.

Using Grunt.js to copy all HTML files from one directory structure to another

I have a large directory structure, typical of most apps.
For example, like this:
theprojectroot
|- src
| |- app
| | |- index.html
| | |- index.js
| | |- userhome
| | | |- userhome.html
| | | |- userhome.js
| | |- management
| | | |- management.html
| | | |- management.js
| | |- social
| | | |- social.html
| | | |- social.js
| |- assets
|- vendor
|- package.json
I would like to copy all the HTML files - and ONLY the HTML files - in all the directories into another folder.
I'm currently using Grunt copy to copy all files, but now I'd like to do so just for the HTML. In the docs, there doesn't seem to be any option to select a file type.
Does anyone have a hack they could suggest to do this?
The following configuration will work:
copy: {
  files: {
    cwd: 'path/to/files', // set working folder / root to copy
    src: '**/*.html',     // copy all files and subfolders ending in .html
    dest: 'dist/files',   // destination folder
    expand: true          // required when using cwd
  }
}
The flatten: true option as in this answer might work for some cases, but it seems to me that the more common requirement (as in my case) is to copy a folder and its sub-folder structure, as-is, to dest. It seems that in most cases if you have sub-folders, they are probably being referenced that way in code. The key to doing this is the cwd option, which will preserve folder structure relative to the specified working directory:
copy: {
  files: {
    cwd: 'path/to/files', // set working folder / root to copy
    src: '**/*.html',     // copy only html files
    dest: 'dist/files',   // destination folder
    expand: true          // required when using cwd
  }
}
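For completeness, a minimal Gruntfile.js wrapping this target might look like the sketch below; it assumes grunt-contrib-copy is installed and keeps the placeholder paths from the answers above:

module.exports = function (grunt) {
  grunt.initConfig({
    copy: {
      files: {
        expand: true,         // required when using cwd
        cwd: 'path/to/files', // root to copy from, structure preserved
        src: '**/*.html',     // only .html files, in all subfolders
        dest: 'dist/files'    // destination folder
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-copy');
  grunt.registerTask('default', ['copy']);
};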

Organizing Django + Static Website Folder Hierarchy

I'm currently working on a personal Django site that will consist of various technologies and subdomains. My main page(s) will be Django, with a blog.blah.com subdomain running WordPress, and several other subdomains for projects (project1.blah.com, project2.blah.com) that are static HTML files (created with Sphinx).
I'm having a lot of trouble organizing my file hierarchy and web server configurations. I'm currently running Apache on port 8080 which serves the Django stuff via mod_wsgi, and I use NGINX on port 80 to handle requests and proxying.
Here's my current filesystem layout. NOTE: I run ALL websites under a single user account.
blah#blah:~$ tree
.
`-- sites
|-- blah.org
| |-- logs
| |-- blah
| | |-- apache
| | | |-- blah.conf
| | | `-- blah.wsgi
| | |-- INSTALL
| | |-- nginx
| | | `-- blah.conf
| | |-- blah
| | | |-- app1
| | | | `-- models.py
| | | |-- app2
| | | | `-- models.py
| | | |-- manage.py
| | | |-- settings.py
| | | `-- urls.py
| | `-- README
| `-- private
`-- blah2.org
Can anyone help me figure out where to place files for a best-practices type of deployment? The structure above ONLY contains my Django code. I've got no idea where to put my static content files (eg: html subdomain sites), and my other services (eg: wordpress stuff).
Any help would be greatly appreciated! Bonus points if you show off your directory structure.
I put my stuff in /srv/www/blah.org/ like this:
-- blah.org
|   -- media
|   -- amedia
|   -- templates
|   -- blah
|      django app
|      ...
|      -- settings.py
|   -- config
|      -- crontab
|      -- blag.org.conf (nginx)
|   -- manage.py
Then I configure static /media/ and /amedia/ with nginx and proxy everything else to gunicorn serving Django.
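A minimal sketch of what that nginx server block might look like, assuming gunicorn listens on 127.0.0.1:8000 and the /srv/www/blah.org/ paths above; the port and aliases are assumptions:

server {
    listen 80;
    server_name blah.org;

    location /media/ {
        alias /srv/www/blah.org/media/;   # static media served directly by nginx
    }

    location /amedia/ {
        alias /srv/www/blah.org/amedia/;  # admin/collected static, also served directly
    }

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:8000; # everything else goes to gunicorn
    }
}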