How to have a run-time config file for an Aurelia app built with webpack 4 - webpack-4

I have an application built with Aurelia and bundled with webpack. I have some variables in a TypeScript file, and when I do a production build I just want to change those variables as I deploy to various servers.
For example, apiRoot = http://10.10.0.1/RESTSERVICES/ when deployed on one server;
when deployed on another server I want apiRoot to be different.
But I don't want to build the code multiple times to deploy to various locations.
For this reason I'm looking for a run-time config file for an Aurelia application built with webpack. Thanks in advance.

I think what you are asking is potentially similar to the question here: Aureliajs Waiting For Data on App Constructor.
In that question, I gave suggestions on how to do it in different ways, which are copy-pasted below:
Aurelia provides many ways to handle asynchronous flow. If your custom element is a routed component, you can leverage the activate lifecycle hook to return a promise and initialize the http service asynchronously.
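For instance, a minimal sketch of the activate approach, where ConfigService is a hypothetical service that fetches your settings:

import { autoinject } from 'aurelia-framework';
import { ConfigService } from './config-service'; // hypothetical config loader

@autoinject
export class SomeRoutedComponent {
  constructor(private config: ConfigService) {}

  // Returning a promise from activate() makes the router wait for it
  // to resolve before composing the view.
  activate() {
    return this.config.load();
  }
}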
Otherwise, you can use CompositionTransaction to halt further processing until you are done with initialization. You can see a preliminary example at https://tungphamblog.wordpress.com/2016/08/15/aurelia-customelement-async/
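A minimal sketch of the CompositionTransaction approach, again assuming the hypothetical ConfigService; composition is held until done() is called:

import { autoinject, CompositionTransaction, CompositionTransactionNotifier } from 'aurelia-framework';
import { ConfigService } from './config-service'; // hypothetical config loader

@autoinject
export class App {
  private notifier: CompositionTransactionNotifier;

  constructor(compositionTransaction: CompositionTransaction, private config: ConfigService) {
    // Enlisting halts further composition until done() is called.
    this.notifier = compositionTransaction.enlist();
  }

  created() {
    this.config.load().then(() => this.notifier.done());
  }
}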
You can also leverage the async nature of the configure function in bootstrapping an Aurelia application to do initialization there:
export async function configure(aurelia) {
  ...
  // configure may return a promise, so it can be declared async and
  // await the initializer before Aurelia continues bootstrapping.
  await aurelia.container.get(HttpServiceInitializer).initialize();
}
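Putting it together for your case, a minimal sketch: ship a config.json next to the bundle (kept out of the webpack bundle so it can be edited per server) and load it before the app starts. The file name and the 'app-config' registration key are assumptions of this sketch, not Aurelia conventions:

import { Aurelia, PLATFORM } from 'aurelia-framework';

export async function configure(aurelia: Aurelia) {
  aurelia.use.standardConfiguration();

  // config.json sits beside index.html and is edited per deployment,
  // e.g. { "apiRoot": "http://10.10.0.1/RESTSERVICES/" }
  const response = await fetch('config.json');
  const config = await response.json();

  // Register the values so they can be injected anywhere in the app.
  aurelia.container.registerInstance('app-config', config);

  await aurelia.start();
  aurelia.setRoot(PLATFORM.moduleName('app'));
}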

Unit and Integration Test for Azure Function with ServiceBusTrigger

I have an Azure Function which is triggered by an Azure Service Bus queue.
The function is below.
How can this Run method be unit tested?
And how can an integration test be done, starting with the AddContact trigger, checking the logic in the method and the data being sent to a blob via the output binding?
public static class AddContactFunction
{
    [FunctionName("AddContactFunction")]
    public static void Run(
        [ServiceBusTrigger("AddContact", Connection = "AddContactFunctionConnectionString")] string myQueueItem,
        ILogger log)
    {
        log.LogInformation($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
    }
}
I had the exact same doubts.
Adding unit tests is not too complicated: at the end of the day it's a function, so all we have to do is call the Azure Function with the correct string for the string myQueueItem parameter.
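For instance, a minimal sketch with xUnit, using NullLogger.Instance from Microsoft.Extensions.Logging.Abstractions as a stand-in ILogger; the message body is just an assumed sample:

using Microsoft.Extensions.Logging.Abstractions;
using Xunit;

public class AddContactFunctionTests
{
    [Fact]
    public void Run_ProcessesTheQueueMessage()
    {
        // No Service Bus namespace is needed: the function only takes
        // a string and an ILogger.
        AddContactFunction.Run("{ \"contactId\": 42 }", NullLogger.Instance);

        // The sample function only logs, so there is nothing to assert;
        // with real logic you would assert on its observable effects.
    }
}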
Adding integration tests needs some additional groundwork. In the GitHub project, the author uses the TestFunctionHost class from the Azure/azure-functions-host project.
I tried following this strategy, but the amount of code needed to set all this up is uncomfortably high for my liking. Not a lot of it is well documented, and some of it requires developers to use the Azure App Service MyGet feed.
I wanted a simpler approach, and thankfully I found one.
Azure Functions is built on top of the Azure WebJobs SDK package and leverages its JobHost class to run. So in our integration tests, all we need to do is set up this host and tell it where to look for the Azure Functions to load and run.
IHost host = new HostBuilder()
    .ConfigureWebJobs()
    .ConfigureDefaultTestHost<CLASS_CONTAINING_THE_AZURE_FUNCTIONS>(webjobsBuilder => {
        webjobsBuilder.AddAzureStorage();
        webjobsBuilder.AddServiceBus();
    })
    .ConfigureServices(services => {
        // 'resolver' is an INameResolver created elsewhere in the test,
        // used to resolve %placeholders% such as queue and connection names.
        services.AddSingleton<INameResolver>(resolver);
    })
    .Build();

using (host)
{
    await host.StartAsync();
    // ..
}
...
Once this is done, we can send messages to Service Bus and our Azure Functions will get triggered. One can even set breakpoints in the functions being tested and debug issues!
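For example, inside an async test method, sending such a message might look like this (a sketch assuming the Microsoft.Azure.ServiceBus client package; the connection string and body are placeholders):

using System.Text;
using Microsoft.Azure.ServiceBus;

// Send a test message so the ServiceBusTrigger fires.
var client = new QueueClient("<service-bus-connection-string>", "AddContact");
await client.SendAsync(new Message(Encoding.UTF8.GetBytes("{ \"contactId\": 42 }")));
await client.CloseAsync();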
I have blogged about the whole process here, and I have also created a GitHub repository at this link to showcase test-driven development with Azure Functions.
How can this Run method be unit tested?
The method is a public static method, so you can unit test it by invoking AddContactFunction.Run(/* parameters */); directly. You will not need a Service Bus namespace, or a message for that matter, as your function expects to receive a string from the SDK, which you can provide yourself to verify the logic works as expected.
And how can an integration test be done, starting with the AddContact trigger, checking the logic in the method and the data being sent to a blob via the output binding?
This would be a much more sophisticated scenario. It would require running the Functions runtime and generating a real Service Bus message to trigger the function, as well as validating that the blob was written. There's no integration/end-to-end testing framework shipped with Functions, so you'd need to come up with something custom. Azure Functions Core Tools could be helpful for achieving that.

Why do embedded Jetty examples do server.dump(System.err)?

All the examples of embedded Jetty I see do server.dump(System.err). I can't find any docs on what that does or why.
When you see server.dump(System.err), this is a dump of the state of the Server object to the System.err console Appendable.
Jetty operates on a LifeCycle model, where every component can be started and stopped and has the ability to track its state.
The Server object is a specialized ContainerLifeCycle which operates like all other LifeCycle objects but also has child beans that follow the LifeCycle behaviors.
The call server.dump(System.err) is actually the ContainerLifeCycle.dump(Appendable) call for the Server.
This top-level Server dump is known as the Jetty Dump Tool.
See https://www.eclipse.org/jetty/documentation/current/jetty-dump-tool.html for details, including a sample output.
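For context, a minimal embedded-server sketch (the port and handler are arbitrary choices for this example) showing where the dump call typically appears:

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.DefaultHandler;

public class DumpExample {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);
        server.setHandler(new DefaultHandler());
        server.start();
        // Write the component tree (thread pool, connectors, handler
        // chain, child beans and their LifeCycle states) to stderr.
        server.dump(System.err);
        server.join();
    }
}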

GAE service running on Flexible Env. as target of a task queue

According to the Google docs, a service running in the flexible environment can be the target of a push task:
Outside of the standard environment, you can't add tasks to push queues, but a service running in the flexible environment can be the target of a push task. You can specify this using the target parameter when adding a task to a queue or by specifying the default target for the queue in queue.yaml.
However, when I tried to do it I got 404 errors in the flexible service.
That's expected, since the endpoint required for deferred task queues (/_ah/queue/deferred) is not defined in the flexible service.
How do I make a flexible service a valid target for task queues?
Do I have to define that endpoint in my code in some way?
Usually, you'll need to write a handler in your worker service to do the processing after receiving a task. In the case of push tasks, the service will send HTTP requests to whatever URL you specify. If no URL is specified, the default URL /_ah/queue/[QUEUE_NAME] will be used.
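As an illustration, a minimal sketch of such a handler in a flexible-environment Python service, assuming a Flask app and a queue named my-queue (both names are placeholders):

from flask import Flask, request

app = Flask(__name__)

# Push tasks arrive as POSTs to the URL configured for the queue;
# /_ah/queue/[QUEUE_NAME] is the default when no URL is specified.
@app.route('/_ah/queue/my-queue', methods=['POST'])
def handle_task():
    payload = request.get_data(as_text=True)  # body set when the task was enqueued
    # ... process the payload ...
    return '', 200  # any 2xx response marks the task as done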
Now, from the endpoint you mention, it seems you are using deferred tasks, which are a somewhat special kind. Please see this thread for a workaround that adds the needed URL entry. It mentions Managed VMs, but it should still work.

Unable to launch task from a spring cloud data flow stream

I registered my task app in Spring Cloud Data Flow and created a definition for it, but the status shows 'unknown'. I created the stream, and when I try to launch the task through task-sink I get an error:
java.lang.IllegalStateException: failed to resolve MavenResource:
How do I launch a task from the task-sink? Am I missing something? Any help is appreciated. Another question I have: how do I access the payload sent via TaskLaunchRequest in my task?
S1 http | step1: transformer-rabbit | log
S2 :S1.step1 > filter --expression=payload.contains('CUSTADDRMODRQ_V15') | task-processor | task-sink
task-sink is launching the task provided by the URI in the TaskLaunchRequest. It is looking for the resource as shown in the log, and finally failing:
OUT Using manager EnhancedLocalRepositoryManager with priority 10.0 for /home/vcap/.m2/repository
OUT Using transporter HttpTransporter with priority 5.0 for https://repo.spring.io/libs-snapshot
The task is deployed in our repository and, as mentioned, I registered it and created the definition for it as well.
This is in a CF environment, and I am using SCDF server 1.0.0.M4.
In the application.properties for the task-sink I am providing maven.remote.repositories.snapshots.url=**
task create fis-ifx-event-task --definition "fis-event-task"
My goal is to launch the task from the stream.
Thanks for the information. I am in fact using the BUILD-SNAPSHOT, as I was unable to enable tasks in the 1.0.0.M4 version. The one I am using is spring-cloud-dataflow-server-cloudfoundry-1.0.0.BUILD-20160808.144306-116. I am able to register and create task definitions, but the status of the task definition shows as 'unknown' even when I use the sample task module provided by your team. And when I initiate the flow of the stream and task-sink tries to launch the task, it is unable to find the Maven resource.
When I create the task definition, does the task module get deployed? I don't see any app in Pivotal Apps Manager. As mentioned earlier, I provided maven.remote.repositories.snapshot.url in the application.properties file for the task-sink application.
Another thing I observed: when I launch the task manually from the Data Flow shell, it gives an error CF-UnprocessableEntity(10008): The request is semantically invalid: Unknown field(s): 'staging_disk_in_mb', 'staging_memory_in_mb', and also a message saying 'Source is empty'. Presently the task is supposed to print the timestamp and is not dependent on any input.
TaskProcessor code:
import java.util.HashMap;
import java.util.Map;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Processor;
import org.springframework.cloud.task.launcher.TaskLaunchRequest;
import org.springframework.integration.annotation.Transformer;
import org.springframework.messaging.support.GenericMessage;

@EnableBinding(Processor.class)
@EnableConfigurationProperties(TaskProcessorProperties.class)
public class TaskProcessor {

    @Autowired
    private TaskProcessorProperties processorProperties;

    @Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
    @ELI(level = "info", eventType = ELIEventType.INBOUND) // project-specific logging annotation
    public Object setupRequest(String message) {
        // Wrap the incoming payload in a TaskLaunchRequest so the
        // downstream task-sink knows which artifact to launch.
        Map<String, String> properties = new HashMap<>();
        properties.put("payload", message);
        TaskLaunchRequest request = new TaskLaunchRequest(processorProperties.getUri(), null, properties, null);
        return new GenericMessage<>(request);
    }
}
TaskSink code:
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.cloud.task.launcher.annotation.EnableTaskLauncher;

@SpringBootApplication
@EnableTaskLauncher
@EnableBinding(Sink.class)
@EnableConfigurationProperties(TaskSinkProperties.class)
public class FisIfxEventTaskSinkApplication {

    public static void main(String[] args) {
        SpringApplication.run(FisIfxEventTaskSinkApplication.class, args);
    }
}
I provided the stream I am using earlier in the post. The sink is receiving the TaskLaunchRequest with the URI and payload, as you can see here, and is still unable to launch the task.
OUT registering [40, java.io.File] with serializer org.springframework.integration.codec.kryo.FileSerializer
2016-08-10T16:08:55.02-0600 [APP/0] OUT Launching Task for the following resource TaskLaunchRequest{uri='maven://com.xxx:fis.ifx.event-task:jar:1.0-SNAPSHOT', commandlineArguments=[], environmentProperties={payload={"statusCode":0,"fisTopic":"CustomerDataUpdated","payloadId":"CUSTADDRMODRQ_V15","customerIds":[1597304]}}, deploymentProperties={}}
Before I begin: you have a number of questions here. In the future, it's better to break them up into multiple posts so that they are easier for other users to find and easier to answer. That being said:
A little context on the current state of things
In order to understand how things will work, it's important to understand the current state of things. The current releases of the software involved are:
Pivotal Cloud Foundry (PCF) - 1.7.12. This version is required for any task support.
Spring Cloud Task (SCT) - 1.0.2.RELEASE
Spring Cloud Data Flow CF (SCDF) - 1.0.0.BUILD-SNAPSHOT (current as of the date of this post).
Currently PCF 1.7.12+ has all the capabilities to run tasks. You can create v3 applications (the type of application used to launch a task), run it as a task, etc. However, the tooling around that functionality is not currently complete. There is no support for v3 applications in Apps Manager or the CLI. There is a plugin for the CLI that is more of a dev tool that can be used to help with some functions (it will show you logs, etc), but it is not fully functional and requires a specific version of the CLI to work [1]. This is one of the reasons that the task functionality within PCF is still considered experimental.
Spring Cloud Task is currently GA and supports all the functionality needed to effectively run tasks on CF. However, it's important to note that SCT doesn't handle orchestration so the actual launching of tasks on CF is the responsibility of either the user, or Spring Cloud Data Flow (the easier route).
Spring Cloud Data Flow's Cloud Foundry server implementation currently has functionality to launch tasks on PCF in the latest snapshots. We have validated this against 1.7.12 as well as the development branch of 1.8.
The task workflow within SCDF
Tasks are fundamentally different from stream applications within the context of SCDF. When you create a stream definition, you are given the option to deploy it. What this actually does is download the Spring Boot über-jars and deploy them to PCF as long-running processes. If they go down, PCF will relaunch them as expected, etc.
Tasks, on the other hand, are not deployed; they are launched. The difference is that while you create a task definition, nothing is deployed until you click launch. And when the task completes, the software is shut down and cleaned up. So while a stream definition may have state, it's really a one-to-one relationship between the definition and the deployed software, whereas with a task you can launch a task definition as many times as you want.
Your issues
Reading through your post, I see a few things that you are struggling with. Let me see if I can help:
Task definitions within SCDF and launching them via a stream - When launching a task from a stream, the task registry within SCDF is not used. The sink expects the URL for the resource to be within the TaskLaunchRequest.
Apps Manager and tasks - As mentioned above, there is no support for v3 applications in Apps Manager yet so you won't be able to see your tasks there.
Viewing the logs - In order to debug what's going wrong with launching your task on CF, you're going to want to view the logs. To do so, use the v3 CLI plugin mentioned above to view them. It's important to note that you can only tail live logs with the plugin, not view logs that have previously been rendered. Because of that, when testing, you'll want to tail the logs as soon as the app is created, before it's launched.
Error in SCDF Shell - The error you received from the SCDF shell (CF-UnprocessableEntity(10008):...) leads me to wonder if you have both the correct version of PCF (1.7.12+) and the correct versions of the following other libraries:
spring-cloud-deployer-cloudfoundry - The latest snapshots
cf-java-client - 2.0.0.M10+
reactor-core - 3.0.0.RC1+
I hope this helps!
[1] https://github.com/cloudfoundry/v3-cli-plugin
Task support is not available in the 1.0.0.M4 release of SCDF's CF server. In this release, the task commands/REST APIs are disabled - see here. For that reason, you won't see any docs related to tasks in the 1.0.0.M4 reference guide.
That said, task support is available/enabled in the BUILD-SNAPSHOT release. If you build the CF server locally and push it to CF, you can take advantage of the task commands in the shell to create and launch task definitions.

How to wait for server to come up and run unit test from Jenkins/Hudson

This is a pretty simple question. I am posting it because I couldn't get any satisfying answer. First the background: I have a Jenkins job that builds and deploys a web application onto a server. The server takes some time to come up (on the order of 5 to 10 minutes). I would like to set up a job (or modify the existing one as required) to rig up the unit test execution that will test the application. I am thinking of the following approaches, and would like you to validate them or suggest alternatives:
Have an Ant target that waits for a fixed time
Have a custom Ant target which pings the URL and checks for app availability
Thanks in advance for your help.
-Vadiraj.
Waiting for a fixed time has the problem that the time you choose is either too short (the build fails) or too long (a waste of build time). So I think it would be better to check whether the app is available.
I have done something similar for my Selenium tests: I had to wait until the Selenium Remote Server had started. I used the waitfor task; for detailed documentation, see here.
Here is a stripped-down version of my Ant target:
<parallel>
  <sequential>
    ... Start web application server ...
  </sequential>
  <sequential>
    <waitfor maxwait="10" maxwaitunit="minute">
      <socket server="localhost" port="8080"/>
    </waitfor>
    <junit>
      ...
    </junit>
  </sequential>
</parallel>
If your server is available before the web app is deployed, you can use the http condition instead of socket to check the HTTP response code. The conditions are documented here.
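For example, a sketch of the http variant, assuming the app exposes a page at the URL below (adjust to your context path); the timeoutproperty lets the build fail cleanly if the app never comes up:

<waitfor maxwait="10" maxwaitunit="minute" timeoutproperty="app.unavailable">
  <http url="http://localhost:8080/myapp/index.html"/>
</waitfor>
<fail if="app.unavailable" message="Web application did not become available in time"/>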