Developing SAPUI5 applications with WebStorm

Looking at some of the delivered SAPUI5 code on HANA, I noticed that WebStorm and even RubyMine were used by some SAP developers. I have also heard that various other developers on customer sites use WebStorm for code checked into the ABAP repository.
Both the HANA and ABAP repositories technically look to be proprietary. The default method for syncing SAPUI5 code with the HANA and ABAP repositories seems to be Eclipse or the Eclipse-based HANA Studio, via SAP-delivered plugins.
I searched and couldn't find any plugins or help on how you could easily check code in and out of the HANA or ABAP repository without using Eclipse or Orion.
For HANA you can put GitHub in the middle using something like the SAP HANA Deployment Shell, and on the ABAP stack you can run /UI5/UI5_REPOSITORY_LOAD to upload manually. I have heard of alternatives for both, where developers have reverse engineered the services Eclipse uses by listening to the HTTP traffic or decompiling the plugins.
My question is: how are others using WebStorm to develop SAPUI5 applications within a team, and how do you sync your code with the SAP repository?

I use WebStorm for my UI5 development. We store the code in a Git repository hosted on an internal GitLab server (https://about.gitlab.com/) running on Ubuntu! You could just as easily use cloud solutions such as GitLab.com or Bitbucket.
There are two ways to circumvent Eclipse and remove the need for the ABAP team repository:
(1) Use the ABAP program /UI5/UI5_REPOSITORY_LOAD in transaction SE38 on your Gateway ABAP stack. Just point it to your Git directory and execute!
(2) Use the program /UI5/UI5_REPOSITORY_LOAD_HTTP to do the same thing from a web server. You could imagine a scenario where you have an HTTP service that triggers the pull on SAP, but we always use the first method!
Edit # 03-SEP-14
To clarify my thoughts on (2): the ideal scenario would be to implement a small post-commit handler so that on a repository change it would (see the sketch after this list):
Pull the changes from the repository
Build the UI into a separate build folder (i.e. minify/uglify the JS & CSS, create preload files)
Run any unit tests on the code (if they exist)
If the tests pass, upload to Gateway by either:
zipping the build folder and posting it to a custom Gateway service, or
calling a custom Gateway service that then triggers a pull of the build folder via HTTP
(Since master is always deployable :-)!)
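A minimal sketch of what such a handler could look like, assuming a Node/TypeScript environment with the zip CLI available; the repository path, the Gateway service URL, the credentials handling and the npm scripts are placeholders, not an SAP-delivered API:

```typescript
// post-commit-deploy.ts -- illustrative only; paths, URLs and script names are placeholders
import { execSync } from "node:child_process";
import { readFileSync } from "node:fs";

const repoDir = "/srv/ui5-app";                                   // local clone of the Git repository
const gatewayUrl = "https://gateway.example.com/sap/zui5_upload"; // hypothetical custom Gateway service

function run(cmd: string): void {
  // throws (and aborts the deployment) if the command exits with a non-zero code
  execSync(cmd, { cwd: repoDir, stdio: "inherit" });
}

async function deploy(): Promise<void> {
  run("git pull");              // 1. pull the changes from the repository
  run("npm run build");         // 2. minify/uglify JS & CSS, create preload files into ./dist
  run("npm test");              // 3. run unit tests; a failure stops the upload

  run("zip -r build.zip dist"); // 4a. zip the build folder (assumes the zip CLI is installed) ...
  const zip = readFileSync(`${repoDir}/build.zip`);
  const res = await fetch(gatewayUrl, {  // ... and post it to the custom Gateway service
    method: "POST",
    headers: { "Content-Type": "application/zip" },
    body: zip,
  });
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`);
}

deploy().catch((err) => { console.error(err); process.exit(1); });
```

In practice this script would be wired to a post-receive hook or a CI job on the repository server rather than run by hand.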
You end up with a continuous integration platform that ensures the integrity of your code and ensures that you deploy only the production code (I'm always a little uncertain about deploying non-minified source code with comments etc. to a productive internet-facing server).
This method is agnostic of the IDE you use and, if you do it right, also of the source code repository setup.
Hope this helps & happy developing!
Oli

Related

Using Cloud Functions vs Cloud Run as webhook for Dialogflow

I don't know much about web development and cloud computing. From what I've read, when using Cloud Functions as the webhook service for Dialogflow, you are limited to writing code in just one source file. I would like to create a really complex Dialogflow agent, so it would be handy to have an organized code structure to make development easier.
I've recently discovered Cloud Run, which seems like it can also handle webhook requests and makes it possible to develop with a more complex code structure.
I don't want to use Cloud Run just because it is inconvenient to write everything in one file, but on the other hand it would be strange to have a cloud function with a single file with thousands of lines of code.
Is it possible to have multiple files in a single cloud function?
Is Cloud Run suitable for my problem? (creating a complex Dialogflow agent)
Is it possible to have multiple files in a single cloud function?
Yes. When you deploy to Google Cloud Functions you create a bundle with all your source files or have it pull from a source repository.
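As a rough illustration of that bundling (the file names, intent logic and exported entry point below are invented for the example, not Dialogflow requirements):

```typescript
// intents/order.ts -- one of several modules deployed alongside the entry point
export function handleOrderIntent(params: Record<string, string>): string {
  return `Ordering ${params.quantity ?? "one"} ${params.item ?? "item"}(s).`;
}

// index.ts -- the entry point Cloud Functions invokes; it simply imports the other modules
import type { Request, Response } from "express";
import { handleOrderIntent } from "./intents/order";

export function dialogflowWebhook(req: Request, res: Response): void {
  // Dialogflow sends the matched parameters in queryResult.parameters
  const params = req.body?.queryResult?.parameters ?? {};
  res.json({ fulfillmentText: handleOrderIntent(params) });
}
```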
But Dialogflow only allows index.js and package.json in the Built-In Editor
For simplicity, the built-in code editor only allows you to edit those two files. But the built-in editor is mostly just meant for basic testing. If you're doing serious coding, you probably already have an environment you prefer to use to code and deploy that code.
Is Cloud Run suitable?
Certainly. The biggest thing Cloud Run will get you is complete control over your runtime environment, since you're specifying the details of that environment in addition to the code.
The biggest downside, however, is that you also have to determine the details of that environment. Cloud Functions provide an HTTPS server without you having to worry about those details, as long as the rest of the environment is suitable.
What other options do I have?
Anywhere you want! Dialogflow only requires that your webhook:
Be at a public address (i.e. one that Google can resolve and reach)
Run an HTTPS server at that address with a non-self-signed certificate
During testing, it is common to run it on your own machine via a tunnel such as ngrok, but this isn't a good idea in production. If you're already familiar with running an HTTPS server in another environment, and you wish to continue using that environment, you should be fine.
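A bare-bones sketch of such a webhook, assuming Node with Express; the route, port and response text are made up, and HTTPS termination with a valid certificate is assumed to happen in front of this process (reverse proxy, load balancer, or ngrok while testing):

```typescript
// webhook.ts -- minimal Dialogflow fulfillment endpoint; names and port are illustrative
import express from "express";

const app = express();
app.use(express.json());

// Dialogflow POSTs the matched intent and parameters to this route
app.post("/dialogflow-webhook", (req, res) => {
  const intent = req.body?.queryResult?.intent?.displayName ?? "unknown";
  res.json({ fulfillmentText: `You reached the webhook for intent "${intent}".` });
});

// HTTPS with a non-self-signed certificate is assumed to be handled in front of this server
app.listen(8080, () => console.log("Webhook listening on port 8080"));
```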

Continuous deployment without cloning whole repository

I am searching for a solution to do continuous deployment in a cloud environment, more specific, in an Amazon AWS environment.
The code to be deployed is mainly Microsoft ASP and PHP, so this framework should work on both platforms. As I have an auto-scaling environment, the framework has to let the servers pull the new code, like Puppet does.
My first thought was to deploy directly from the VCS, but I ran into the problem that all repository history is mirrored to the servers, which is how Git, for instance, works. This is a problem because the repository keeps growing and the servers will demand more and more space.
I found Ansible, which works the way I need but does not work in a Windows environment. It sends only the production code to the servers, not the VCS repository, and keeps track of which servers are updated.
Without an easy-to-set-up framework like this, I will need to build a Puppet + Jenkins + VCS pipeline, where Jenkins creates the package from the VCS source code and Puppet delivers it.
Does anybody know a small framework for my needs, or is Puppet + Jenkins + VCS the way to go?
Consider CloudMunch (www.cloudmunch.com) for this. The platform is built exactly to solve this kind of polyglot requirement.
Disclaimer: I work for CloudMunch

Using a CI Server to Unit Test a Web Application plugin

I'm using TeamCity 7 as my CI server, and I have to test several web application plugins, mainly written in PHP. I'm familiar with Ant and *Unit, but I have an issue to solve: to properly test a plugin, my idea would be the following:
Cleanup the testing environment.
Install a clean copy of the Web Application which will host the plugin.
Install the plugin.
Enable the plugin.
Run the tests.
Obviously, running the tests on an installed environment is the easy part. Most tests are fired by directly calling the plugin's class methods, yet the framework must be configured, even with minimal settings, so that its bootstrap file can be called and the necessary initialization performed. I tried running the tests in an environment I prepared manually, and they ran as expected.
The issue is now automating the installation of the standard Web Application, and, most importantly, its configuration. The basic steps are the following:
Unzip framework somewhere (done).
Create a Database (done).
Create a database user and assign it proper privileges (done).
Run the web application's setup.
The tricky part is that not all web applications implement a command line interface (such as drush for Drupal), hence I came up with two possible ways to complete the installation:
Simulate manual installation via CURL
Take note of the installation steps and the forms that need to be filled.
POST data to each of the forms using CURL.
I tried this method, still manually, with acceptable results: the web application gets installed as expected and can be used.
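Scripted, the same idea might look roughly like this; curl works just as well, but the sketch below uses Node/TypeScript with fetch, and the installer URLs and form field names are invented placeholders you would replace with the ones noted down during the manual install:

```typescript
// install-webapp.ts -- replays the installer's form submissions; URLs and fields are placeholders
const base = "http://localhost:8080/myapp";

// Each step mirrors one form recorded during a manual installation
const steps: Array<{ path: string; fields: Record<string, string> }> = [
  { path: "/install/step1", fields: { db_host: "localhost", db_name: "plugin_test", db_user: "ci", db_pass: "secret" } },
  { path: "/install/step2", fields: { site_title: "CI Test Site", admin_user: "admin", admin_pass: "secret" } },
];

async function install(): Promise<void> {
  for (const step of steps) {
    const res = await fetch(base + step.path, {
      method: "POST",
      body: new URLSearchParams(step.fields), // form-encoded, like a browser submit
    });
    if (!res.ok) throw new Error(`Step ${step.path} failed: ${res.status}`);
  }
}

install().catch((err) => { console.error(err); process.exit(1); });
```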
However, this requires a web server where the application can run. As far as I know, TeamCity agents work in their own randomly named directories, and anything "installed" in them cannot be accessed via HTTP requests.
Backup/Restore
Manually pre-install a Web Application and configure its basic settings.
Zip the Application's directory and a backup of its database.
Before running the tests, unzip the directory into the agent's working directory.
Restore the backed-up database. The application will now be "configured".
This method is a bit "rough", but it doesn't require a web server to be running. Although the web application won't be able to serve HTTP requests, that doesn't necessarily matter, as the tests will be run against the plugin's classes.
This method has two major drawbacks, though:
Tests involving interaction with the Web Application (e.g. hooks, event handlers, and so on) can't be run.
Since the web application and its database are pre-configured, their parameters will be the same on every run. Therefore, it wouldn't be possible to run two agents at the same time, for example to test two different plugins.
I'm now wondering if there's a better solution, as both the above look less than optimal to me.
Please note that, although I'm using TeamCity, the CI server itself should not be a big deal, as I'm running everything with Ant. Therefore, any suggestion, even one related to another CI server, will be welcome (I know Hudson, CruiseControl and BuildMaster, and I can adapt a concept easily). Thanks.

Looking for a .NET BuildServer SaaS

I have a question regarding build servers for .NET projects. Currently I'm using Team Build in conjunction with TFS 2010 to do automated builds in the .NET world. Some older projects are built using plain old MSBuild scripts.
To get rid of the administrative effort, I'm currently moving my sources to GitHub. GitHub, like many other sites, offers service hooks to trigger build servers for automated builds such as CI or nightly builds.
Sure, I could use TeamCity on-premises and dynamically create build agents in Windows Azure using VM Role and virtual disks, but I think this hybrid solution is a little bit moronic.
So what are your thoughts about the following architectural idea?
Let's say you're using GitHub as your source control platform. When sources are committed to your repository, an Azure WebRole hosting a WCF service is triggered.
The WebRole itself will just use the Azure API to fire up a new instance of a custom Azure VMRole.
The VMRole will use some kind of build script, such as Rake or MSBuild, so that as few developer tools as possible need to be installed on the build agent. After building the entire project, the artifacts are published to Azure Blob Storage, and the WebRole hosting the WCF service is called again; this time the WebRole terminates the build agent.
With such a setup you could minimize the cost of the build agent and build nearly any kind of project, as long as you're able to install the required build components via PowerShell.
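Very roughly, the hook handler on the WebRole would orchestrate something like the sketch below; the four helpers are hypothetical stand-ins for whatever Azure management and storage API calls you end up using:

```typescript
// build-orchestrator.ts -- sketch of the GitHub-hook -> ephemeral-build-agent flow described above.
// All four helpers are placeholders, not real Azure API calls.
interface BuildRequest { repoUrl: string; commit: string; }

async function startBuildAgent(): Promise<string> {
  // placeholder: would use the Azure API to boot a VMRole instance and return its id
  return "agent-1";
}
async function runBuildOnAgent(agentId: string, req: BuildRequest): Promise<string[]> {
  // placeholder: would trigger Rake/MSBuild on the agent and return artifact paths
  console.log(`building ${req.repoUrl}@${req.commit} on ${agentId}`);
  return ["output.zip"];
}
async function uploadToBlobStorage(artifacts: string[]): Promise<void> {
  // placeholder: would push the artifacts to Azure Blob Storage
  console.log("uploading", artifacts);
}
async function terminateBuildAgent(agentId: string): Promise<void> {
  // placeholder: would shut the VMRole down again
  console.log("terminating", agentId);
}

// Called by the WCF/WebRole layer whenever GitHub fires the push hook
export async function onGitHubPush(req: BuildRequest): Promise<void> {
  const agentId = await startBuildAgent();
  try {
    const artifacts = await runBuildOnAgent(agentId, req);
    await uploadToBlobStorage(artifacts);
  } finally {
    await terminateBuildAgent(agentId); // always tear the agent down to keep costs minimal
  }
}
```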
So, bottom line: what are your thoughts on this architecture? Other ideas? Is there an existing service offering such a solution?
Thorsten
Have you looked at https://appharbor.com? I know a number of people who are using it to do exactly what you are describing.
Check out Team Foundation Service as it can do the following:
Continuous Delivery to Azure
Deploy to production on Windows Azure with two clicks from Visual Studio, or automatically as part of your build process.
Just found this one: http://www.appveyor.com/. AppVeyor is also free for open source projects.

Are there any decent compiler web services?

Here's the idea: you commit your code to a repository and call a web service (or enter the request through a web app) to have it compiled. The results are then pushed up to an FTP server, S3 bucket, etc. Is there anything like this out there on the public internet?
TFS has a build queuing feature, but I'm thinking more along the lines of an internet (not intranet) web service. And if it can pull from known source control interfaces (Subversion, CVS, etc.), then the caller needs to pass very little besides selecting a compiler and specific compilation options.
My reasoning is more along the lines of removing a lot of software installation and configuration hassles, especially when working between different languages/platforms/frameworks/projects.
See Is there build farm for checking open source apps against different OS'es? for related information.