Here's the idea: you commit your code to a repository and call a web service (or enter the request through a web app) to have it compiled. The results are then pushed to an FTP server, S3 bucket, etc. Is there anything like this out there on the public internet?
TFS has a build-queuing feature, but I'm thinking more along the lines of an internet (not intranet) web service. And if it can pull from known source control interfaces (Subversion, CVS, etc.), then the caller needs to pass very little beyond selecting a compiler and specific compilation options.
My reasoning is more along the lines of removing a lot of software installation and configuration hassles, especially when working across different languages/platforms/frameworks/projects.
See "Is there build farm for checking open source apps against different OS'es?" for related information.
I don't know much about web development and cloud computing. From what I've read, when using Cloud Functions as the webhook service for Dialogflow, you are limited to writing code in just one source file. I would like to create a really complex Dialogflow agent, so it would be handy to have an organized code structure to make development easier.
I've recently discovered Cloud Run, which seems like it can also handle webhook requests and makes it possible to develop a complex code structure.
I don't want to use Cloud Run just because it's inconvenient to write everything in one file, but on the other hand it would be strange to have a cloud function consisting of a single file with thousands of lines of code.
Is it possible to have multiple files in a single cloud function?
Is cloud run suitable for my problem? (create a complex dialogflow agent)
Is it possible to have multiple files in a single cloud function?
Yes. When you deploy to Google Cloud Functions, you create a bundle with all your source files, or have it pull from a source repository.
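For instance, here's a minimal sketch using the Python runtime (the Node.js runtime works the same way with `require()`); the file and function names below are illustrative, not anything Dialogflow or Cloud Functions mandates:

```python
# fulfillment.py -- a second source file in the same deployed bundle
# (hypothetical module name; the logic is just a placeholder)
def handle(body):
    intent = body.get("queryResult", {}).get("intent", {}).get("displayName", "")
    return "You reached intent: " + intent
```

```python
# main.py -- the entry point (named at deploy time, e.g. --entry-point webhook);
# it simply imports the helper module packaged alongside it
import json
import fulfillment

def webhook(request):
    body = request.get_json(silent=True) or {}
    return json.dumps({"fulfillmentText": fulfillment.handle(body)})
```

Deploying the folder containing both files (e.g. with `gcloud functions deploy`) uploads them together, so the logic can be split across as many modules as you like.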
But Dialogflow only allows index.js and package.json in the Built-In Editor
For simplicity, the built-in code editor only allows you to edit those two files, but it is mostly just meant for basic testing. If you're doing serious coding, you probably already have an environment you prefer to use to write and deploy that code.
Is Cloud Run suitable?
Certainly. The biggest thing Cloud Run will get you is complete control over your runtime environment, since you're specifying the details of that environment in addition to the code.
The biggest downside, however, is that you also have to determine the details of that environment. Cloud Functions provides an HTTPS server without you having to worry about those details, as long as the rest of the environment is suitable.
What other options do I have?
Anywhere you want! Dialogflow only requires that your webhook:
Be at a public address (i.e., one that Google can resolve and reach)
Run an HTTPS server at that address with a certificate that is not self-signed
During testing, it is common to run it on your own machine via a tunnel such as ngrok, but this isn't a good idea in production. If you're already familiar with running an HTTPS server in another environment and wish to continue using that environment, you should be fine.
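For illustration only, here is a minimal webhook in Python/Flask that satisfies that contract (the route and reply text are my own placeholders; any language or framework meeting the two requirements above will do):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    # Dialogflow POSTs a JSON body; queryResult.intent.displayName names the intent.
    req = request.get_json(silent=True) or {}
    intent = req.get("queryResult", {}).get("intent", {}).get("displayName", "")
    return jsonify({"fulfillmentText": "Handled intent: " + intent})

if __name__ == "__main__":
    # Local testing only -- expose it through a tunnel such as ngrok.
    # In production this must sit behind a public HTTPS endpoint with a
    # CA-signed certificate, per the requirements above.
    app.run(port=8080)
```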
I'm developing a tool using Django for internal use at my organization. It's used to search and tag documents (using Haystack and Solr) and will be employed on different projects. My team currently has a working prototype, and we want to deploy it 'in the wild.'
Our security environment is strict. Project documents are located in subfolders on a network drive, and access to these folders is restricted based on users' Windows credentials (we also have an MS SQL server that uses the same credentials). A user can only access the projects they are involved in. Since we're an exclusively Microsoft shop, if we want to deploy our app on the company intranet, we'll need to use an IIS server to deal with these permissions. No one on the team has the requisite knowledge to work with IIS or Active Directory, and our IT department is already over-extended. In short, we're not web developers, and we don't have immediate access to anybody experienced.
My hacky solution is to forgo IIS entirely and have each end user run a lightweight server locally (namely, CherryPy) while each retains access to a common project-specific database (e.g., a SQLite DB living on the network drive, or a DB on the MS SQL server). In order to use the tool, they would just launch an all-in-one batch script and point their browser to 127.0.0.1:8000. I recognize how ugly this is, but I feel like it leverages the security measures already in place (note that we never expect more than 10 simultaneous users on a given project). Is this a terrible idea, and if so, what's a better solution?
I've dealt with a similar situation (primary development was geared toward a normal deployment situation, but some users have a requirement to use the application on a standalone workstation). Rather than deploy web and db servers on a standalone workstation, I just run the app with the Django internal development server and a SQLite DB. I didn't use CherryPy, but hopefully this is somewhat useful to you.
My current solution gives users who aren't familiar with the command line (and who also have trouble remembering the URL to put in their browser) a nice executable, while keeping development relatively easy:
Use PyInstaller to package the Django app into a single executable. Once you figure this out, don't continue to do it by hand; add it to your continuous integration system (or at least write a script).
Modify the manage.py to:
Detect whether the app has been frozen by PyInstaller and was run with no arguments (i.e., the user executed it by double-clicking it), and if so, run execute_from_command_line(..) with arguments to start the Django development server.
Right before running execute_from_command_line(..), spin off a thread that does a time.sleep(2) (to let the development server come up fully) and then calls webbrowser.open_new("http://127.0.0.1:8000").
Modify the app's settings.py to detect whether the app is frozen and change things around, such as the path to the DB, enabling the development server, etc. (see the sketch just after this list).
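A minimal sketch of that manage.py logic, assuming a hypothetical project named myproject (PyInstaller sets sys.frozen on the bundled executable, which is what we test for):

```python
#!/usr/bin/env python
# manage.py -- sketch only; adapt the project/settings names to your app
import os
import sys
import threading
import time
import webbrowser

from django.core.management import execute_from_command_line


def open_browser():
    time.sleep(2)  # let the development server come up fully
    webbrowser.open_new("http://127.0.0.1:8000")


if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
    frozen = getattr(sys, "frozen", False)  # set by PyInstaller
    if frozen and len(sys.argv) == 1:
        # User double-clicked the executable: start the dev server and pop a
        # browser at it. --noreload matters when frozen, because the
        # auto-reloader tries to re-execute the script.
        threading.Thread(target=open_browser, daemon=True).start()
        execute_from_command_line([sys.argv[0], "runserver", "--noreload",
                                   "127.0.0.1:8000"])
    else:
        execute_from_command_line(sys.argv)
```

settings.py can use the same `getattr(sys, "frozen", False)` test to switch the DB path and other settings.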
A couple of additional notes:
If you go with SQLite, Windows file locking on network shares may not be adequate if you have concurrent writes to the DB; concurrent readers should be fine. Additionally, since you'll have different DB files for different projects, you'll have to figure out a way for the user to indicate which file to use. Maybe prompt in the app, or build the same app multiple times with different settings.py files. There are a variety of ways to hit this nail...
If you go with MSSQL (or any client/server DB), the app will have to know the DB credentials (which means they could be extracted by a knowledgeable user). This presents a security risk that may not be acceptable. Basically, don't make the app the user is executing the only layer of security. The DB credentials used by the app should grant only the access that the user is allowed.
Looking at some of the delivered SAPUI5 code on HANA, I noticed that WebStorm and even RubyMine were used by some SAP developers. I have also heard that various other developers on customer sites use WebStorm for code checked into the ABAP repository.
Both the HANA and ABAP repositories appear to be proprietary. The default method for syncing SAPUI5 code with the HANA and ABAP repos seems to be Eclipse or the Eclipse-based HANA Studio, with SAP-delivered plugins installed.
I searched and couldn't find any plugins or help on how to check code in and out of the HANA or ABAP repositories without using Eclipse or Orion.
For HANA you can put GitHub in the middle using something like the SAP HANA Deployment Shell, and on the ABAP stack you can run /UI5/UI5_REPOSITORY_LOAD to manually upload. I have heard of alternatives for both, where developers have reverse-engineered the services Eclipse uses by listening to the HTTP traffic or decompiling the plugins.
My question: how are others using WebStorm to develop SAPUI5 applications within a team, and how do you sync your code with the SAP repository?
I use WebStorm for my UI5 development. We store the code in a Git repository hosted on an internal GitLab server (https://about.gitlab.com/) running on Ubuntu! You could just as easily use cloud solutions such as GitLab.com or Bitbucket.
There are two ways to circumvent Eclipse and remove the need for the ABAP team repository:
(1) Use the ABAP program /UI5/UI5_REPOSITORY_LOAD in t-code SE38 on your Gateway ABAP stack. Just point it at your git directory and execute!
(2) Use the program /UI5/UI5_REPOSITORY_LOAD_HTTP to do the same thing from a web server. You could imagine a scenario where you have an HTTP service that triggers the pull on SAP, but we always use the first method!
Edit (03-SEP-14):
To clarify my thoughts on (2): the ideal scenario would be to implement a small post-commit handler so that on a repository change it would (see the sketch after this list):
Pull the changes from the repository
Build the UI (i.e., perform minification/uglification on the JS & CSS) into a separate build folder (create preload files)
Perform any unit tests on the code (if they exist)
If the tests pass, upload to Gateway by either:
zipping the build folder and posting it to a custom Gateway service, or
calling a custom Gateway service to trigger a pull of the build folder via HTTP
(Since master is always deployable :-)!)
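As a rough illustration of that flow, here is a hedged sketch of such a hook in Python; the repository path, build/test commands, Gateway URL, and credentials are all hypothetical placeholders, and the custom Gateway upload service is something you would have to build yourself:

```python
#!/usr/bin/env python
# post-receive hook sketch: pull, build, test, then push the build to Gateway
import os
import subprocess
import zipfile

import requests  # assumes the third-party 'requests' package is installed

REPO_DIR = "/srv/ui5/app"      # hypothetical working copy
BUILD_DIR = "/srv/ui5/build"   # hypothetical build output folder
GATEWAY_URL = "https://gateway.example.com/sap/custom/ui5_upload"  # hypothetical

def run(cmd, cwd=None):
    subprocess.check_call(cmd, cwd=cwd)  # raises if the step fails, aborting the deploy

run(["git", "-C", REPO_DIR, "pull", "origin", "master"])
run(["grunt", "build"], cwd=REPO_DIR)  # placeholder for your minify/preload step
run(["npm", "test"], cwd=REPO_DIR)     # unit tests; a failure stops everything

# Zip the build folder into one artifact and POST it to the custom service.
zip_path = "/tmp/build.zip"
with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
    for root, _, files in os.walk(BUILD_DIR):
        for name in files:
            full = os.path.join(root, name)
            zf.write(full, os.path.relpath(full, BUILD_DIR))

with open(zip_path, "rb") as f:
    resp = requests.post(GATEWAY_URL, data=f, auth=("sync_user", "secret"))
resp.raise_for_status()
```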
You end up with a continuous integration platform that ensures the integrity of your code and ensures that you deploy only the production code (I'm always a little wary of deploying non-minified source code, comments and all, to a production internet-facing server...).
This method is agnostic of the IDE you use and, if you do it right, of the source code repository setup as well.
Hope this helps & happy developing!
Oli
Are there any best practices for virus-scanning all files uploaded to the Sitecore media library (and ultimately stored in Sitecore's DB)?
I searched all over the web, but there is too much noise caused by the word "virus", since many people seem to have performance issues on servers that have anti-virus software installed.
I don't know if it is an established best practice, but I would probably add a processor to the uiUpload pipeline that uses an API or command-line process from a commercial antivirus product. Other than the fact that it is in a pipeline processor, it shouldn't really be much different from how you would do it in any other ASP.NET application. Performance will definitely be a concern, but you could create a dialog with a pseudo progress bar to give some feedback to the user.
Take a look at this post by Mike Reynolds. It may help you out:
http://sitecorejunkie.com/2013/11/09/perform-a-virus-scan-on-files-uploaded-into-sitecore/
I am not aware of any published best practices, but if you are able to add a step to the upload process, you might want to take a look at Metascan, which provides API-level integration to multiple antivirus engines. Using this, you could build a workflow that scans uploaded files before they hit your Sitecore media library, with rules based on the results of the antivirus engines used in your Metascan deployment. There's also a hosted version at metascan-online(dot)com.
Disclaimer: I am an employee of OPSWAT, which produces Metascan, but it appears to be a potential solution to your issue.
In one of our recent projects, we were faced with a requirement to scan incoming files for viruses. The problem in this project was that the files, after being uploaded, were made publicly available on the website.
The way we solved the problem was by implementing https://www.virustotal.com/. It's a free online virus scanner with a public API, and you can send files via SSL.
We implemented the solution by adding newly uploaded files to a Sitecore workflow. The workflow handles the scanning of the files and moves them to the final stage of the workflow if they aren't infected. If a file is infected, it is deleted.
A scheduler runs every 5 minutes to check for new incoming files in the workflow.
This also means that files aren't available straight away, since the scheduler has to check them first, but you should be able to trigger the check as soon as the user has uploaded the file by adding your custom code to the upload pipeline.
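For illustration, here is a language-agnostic sketch of those API calls in Python (the Sitecore side is .NET, of course; the endpoints are VirusTotal's v2 public API as I remember it, so verify against the current docs, and the API key is a placeholder):

```python
import time
import requests

API_KEY = "YOUR_VT_API_KEY"  # placeholder

def file_is_clean(path):
    # Submit the file for scanning.
    with open(path, "rb") as f:
        r = requests.post("https://www.virustotal.com/vtapi/v2/file/scan",
                          params={"apikey": API_KEY}, files={"file": f})
    resource = r.json()["resource"]

    # Poll for the report; the free public API is rate-limited, hence the sleep.
    while True:
        report = requests.get("https://www.virustotal.com/vtapi/v2/file/report",
                              params={"apikey": API_KEY,
                                      "resource": resource}).json()
        if report.get("response_code") == 1:  # 1 == report is ready
            return report["positives"] == 0   # True if no engine flagged the file
        time.sleep(60)
```

A scheduled task (like the 5-minute scheduler above) would call something like file_is_clean() for each item sitting in the scanning stage of the workflow.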
I am trying to work out a good way to run a staging server and a production server for hosting multiple ColdFusion sites. Each site is essentially a fork of a repo, with site-specific changes made to each. I am looking for a good way to have the staging server move code (upon QA approval) to the production server.
One fanciful idea involved compiling the sites into EAR files to be run on the production server, but I cannot seem to wrap my head around ColdFusion archives, and I cannot see any good way of automating this, especially the deployment part.
What I have done successfully before is use Subversion as a go-between for a site: once a site is QA'd, the code is committed, and then an SVN update is run on the production server's working directory, which triggers a code copy from the working directory to the actual live code. This worked fine but has many moving parts, and it still required some form of server access to each server to run the commits and updates. Plus, this worked for an individual site; I think it would be a nightmare to set up and maintain this architecture for multiple sites.
Ideally I would want a group of developers to have FTP access and the ability to log into some control panel to mark a site for QA, then have a QA person check the site and mark it as stable/production-worthy, and then have someone see that a site is pending and click a button to deploy the updated site. (Any of those roles could be filled by the same person, mind you.)
Sorry if that last part wasn't so much the question as a framework for understanding my current thought process.
I agree with @Nathan Strutz that Ant is a good tool for this purpose. Some more thoughts:
You want a repeatable build process that minimizes opportunities for deltas. With that in mind:
SVN export a build.
Tag the build in SVN.
Turn that export into a .zip, something with an installer, etc. The idea is one unit to validate, with a set of repeatable deployment steps.
Send the build to QA.
If QA approves, deploy that build into production.
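To make the first three steps concrete, here is a rough sketch of such a build script (the repo URL and version number are hypothetical placeholders; Ant, as suggested above, could express the same steps):

```python
#!/usr/bin/env python
# Repeatable build: export, tag, zip -- one validatable unit per build
import shutil
import subprocess

REPO = "https://svn.example.com/repo/mysite/trunk"   # hypothetical
TAGS = "https://svn.example.com/repo/mysite/tags"    # hypothetical
VERSION = "1.4.2"                                    # hypothetical build number

# 1. Export a clean copy of the code base (no .svn metadata).
subprocess.check_call(["svn", "export", REPO, "build/mysite-%s" % VERSION])

# 2. Tag the build in SVN so it can be reproduced later.
subprocess.check_call(["svn", "copy", REPO, "%s/%s" % (TAGS, VERSION),
                       "-m", "Tag build %s" % VERSION])

# 3. Zip the export into the single unit QA will validate and deploy.
shutil.make_archive("dist/mysite-%s" % VERSION, "zip",
                    "build/mysite-%s" % VERSION)
```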
Move whole code bases over as a build rather than just changed files. This way you know that what's put into production is the same thing that was validated. Refactor code so that configuration data is not overwritten by a new build.
As for actual production deployment, I have not come across a tool that solves the multiple-servers, different-code-bases challenge, so I think you're best served rolling your own.
As an aside, in your situation I would think through an approach that allows for a standardized codebase, with a mechanism (i.e. an API) that allows for the customization you're describing. Otherwise managing each site as a "custom" project is very painful.
Update
For learning Ant: Ant in Action (book).
On source control: for the situation you describe, I would maintain a core code base plus overlays per site. Export the core, then the site-specific overlay on top of it. This ensures that any core updates not overridden by site-specific changes make it in.
Call this combination a "build". Do builds with Ant. Maintain an Ant script - or, perhaps more flexibly, an Ant configuration file - per core and site combination. Track the version numbers of the core and the site as part of a given build.
If your software is stuffed inside an installer (Nullsoft Install Shield, for instance), that should be part of the build. Otherwise you should generate a .zip file (.ear is a possibility as well, but I haven't seen anyone actually do this with CF). The point being: one file that encompasses the whole build.
This build file is what QA should validate, so validation includes deployment, configuration, and functionality testing. See the deployment notes below for how this can flow.
Deployment:
If you want to automate deployment, QA should be involved as well to validate it, meaning QA would deploy/install builds using the same process on their servers before doing a staging-to-production deployment.
To do this, I would create something that tracks which server receives which build file, along with whatever credentials and connection information are necessary to make that happen, most likely via FTP. Once transferred, the tool would then extract the build file / run the installer. This last piece is an area I would have to research: how to let one server run commands such as extraction or installation remotely.
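The transfer half of that is straightforward; a sketch using Python's standard ftplib (host, credentials, and paths are placeholders) might look like the following. The remote extract/install step is the part that still needs an agent or service on the target server:

```python
import ftplib
import os

def push_build(build_path, host, user, password, remote_dir):
    # Upload one build artifact to a target server over FTP.
    with ftplib.FTP(host, user, password) as ftp:
        ftp.cwd(remote_dir)
        with open(build_path, "rb") as f:
            ftp.storbinary("STOR " + os.path.basename(build_path), f)
    # Extraction / running the installer must still happen on the target,
    # e.g. via a small web service or scheduled task that watches remote_dir.
```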
You should look into Ant as a migration tool. It allows you to package your build process in a simple XML file that you can run from the command line or from within Eclipse. Creating an automated build process is great because it documents the process as well as executing it the same way, every time.
Ant can handle zipping and unzipping, copying files around, making backups if needed, working with your Subversion repository, transferring via FTP, compressing JavaScript, and even calling a web address if you need to do something like flush the application memory or server cache once it's installed. You may be surprised at the things you can do with Ant.
To get started, I would recommend the Ant manual as your main resource, but look into existing Ant builds as a good starting point. I have one on RIAForge, for example, that does some interesting things and calls a Groovy script to do more processing on my files during the build. If you search RIAForge for build.xml files, you will find a great variety of them, many of which are directly for ColdFusion projects.