I do not know anything about Siebel, but we have a requirement where I need to comment out one line of code in one of the applet scripts and build a new SRF. What is the procedure? I am using Siebel 6 and I have access to Siebel Tools.
Also, please let me know of any useful sites for help on Siebel 6.
You should have your local repository connected to a server repository so that you can check your changes out of, and back into, the server repository. If you are not worried about check-outs and check-ins, you can do the following; otherwise, you have environment setup steps to perform before commenting out the code.
If you know which applet contains the script, search for and right-click that applet in Siebel Tools. Select Edit Server Scripts (if that option is not there, choose Edit Browser Scripts). Find the line of code, prefix it with //, and press Ctrl+S to save the change. The commented line should turn green.
Now the change is in your local repository. Copy an SRF file from your server environment onto your local machine, then right-click your applet, select Compile Selected Objects, and choose the file you just copied.
Now your change has been compiled into the SRF that originated from the server, and it is ready to be deployed to your server for testing. You will have to shut down the Siebel services, move the SRF into place, and restart the services before you can test your change.
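For that last step, a rough sketch of what it might look like on a Windows server follows; the service name and paths here are placeholders, so check how your own environment is actually set up:
net stop "Siebel Server"
copy siebel.srf D:\sea\siebsrvr\objects\siebel.srf
net start "Siebel Server"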
Sometimes binary files can't be downloaded with wget or curl. For example,
let's say a colleague has shared a .zip file from their Google Drive.
Or perhaps there is a download button with no direct link to the file location, so the only way to get the file is to click that button in a browser. Following redirects works sometimes, but not always.
These are just a couple of situations I've personally run into where I cannot access a file I need for work without first downloading it to my personal computer (which may not have the space, or may have a slow connection). Then I have to find a way to upload the file to some other location where wget or curl does work, so that I can get it onto my cloud instance and actually get to work.
One solution I thought of would be to find a way to run a browser on, or through, the cloud instance's internet connection. I'm not sure whether this capability exists.
Would this work? If not, what other solutions have people come up with?
I tried to reproduce your issue in my own project.
I created an Ubuntu instance and tried to download something from the internet without using curl or wget.
The only way I found to download something was to use a command-line browser.
I tested several of them to see which worked best, and for me Links2 and ELinks are the best.
For this test I shared a file from my Google Drive, with the sharing option set to "Anyone with the link".
I installed Links2 with the following command:
sudo apt-get install links2
To start it, just type
links2
It will open a black screen; press g to open the URL prompt (or Esc to see the menu).
Then paste the URL
https://drive.google.com/uc?id=GOOGLE-DRIVE-FILE-ID&export=download
Then save the file.
Afterwards I was able to see the file on my Ubuntu instance:
MY-USER#INSTANCE-NAME:~$ ls -ltr
total 1540
-rw-rw-r-- 1 MY-USER MY-USER 1573568 Nov 2 19:23 SteamSetup.exe
Keep in mind that if the download depends on JavaScript, you might have problems retrieving the file.
In my case, as the URL was public, I was able to download the file, but I didn't test with Google authentication.
Take into consideration that the URL shared by Drive looks something like https://drive.google.com/file/d/GOOGLE-DRIVE-FILE-ID/view?usp=sharing
I just modified it into the direct-download form:
https://drive.google.com/uc?id=GOOGLE-DRIVE-FILE-ID&export=download
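As a side note, once the URL is in that direct-download form, plain curl can often fetch small, publicly shared files as long as it follows redirects. This is only a sketch, though: large files trigger Google's virus-scan confirmation page, which returns HTML instead of the file.
curl -L -o myfile.zip "https://drive.google.com/uc?id=GOOGLE-DRIVE-FILE-ID&export=download"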
I am trying to set up my build and release pipelines in Azure DevOps to publish my WebForms application to a file system on a VM on my company's network. Currently this is accomplished through Visual Studio by going to Build > Publish ...
I previously set up a build pipeline that I was using to catch build issues. But now I want to actually publish builds from the cloud automatically whenever the master branch is updated.
I have an agent installed on a local VM, and I can get Azure DevOps to run on this agent, but I am confused about what to do next. I've tried playing around with the Build Solution task parameters, the MSBuild task parameters, and so on, but nothing actually publishes the application.
The farthest I've gotten is publishing the build to a folder on the agent, but this folder only contains the solution and associated files, not the built output that would be published to the file-system location.
I'm trying to understand how to actually publish the solution once it's been built and placed on the agent.
It also doesn't help that I can't find good resources on the build variables that all of the default tasks use.
The build pipeline is responsible for building your software and publishing the build artifacts (the build output) inside Azure DevOps for release pipelines to consume. The release pipeline is responsible for deploying your website to the test/production environment.
First you need to ensure that your build publishes artifacts (the output of the build) inside Azure DevOps for the release pipeline. This is achieved with the Publish Build Artifacts task.
Path to publish is the directory that contains your build output (the files that will be copied to the VM). You can use relative paths with $(Build.ArtifactStagingDirectory). You can leave the artifact name as drop, since that is just a name for the packaged output, and use Azure Pipelines as the artifact publish location.
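If your problem is that only solution files, and not real published output, end up in that staging directory, the usual fix is to have MSBuild run the web-publish targets during the build. Since you already publish from Visual Studio, you should have a publish profile you can reuse. A sketch of the equivalent command line follows (in the Visual Studio Build task you would put just the /p: switches into the MSBuild Arguments field; the solution and profile names are placeholders):
msbuild MyWebForms.sln /p:Configuration=Release /p:DeployOnBuild=true /p:PublishProfile=MyFolderProfile
With a file-system publish profile you can also override its output folder, for example with /p:publishUrl="$(Build.ArtifactStagingDirectory)", so the published site lands where the Publish Build Artifacts task picks it up.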
Now your build is creating artifacts for release pipelines. You can verify this by looking at one of the builds in the build history; the top right corner should contain an Artifacts link.
If it's not there, then your build is not publishing artifacts correctly. Check the build log for more details!
Now, if you have an Azure DevOps agent running inside the VM, the deployment is an easy task.
Stop the IIS website with the IIS Web App Manage task.
Copy the artifacts into the desired folder (for example, C:\wwwroot) with the Copy Files task (a rough command-line equivalent is sketched below).
Start the IIS website with the IIS Web App Manage task.
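For the copy step, the sketch below shows roughly what the Copy Files task does, if you prefer to script it yourself in a Command Line task. The artifact alias and target folder are placeholders for your own setup; /MIR mirrors the source, deleting files in the target that are no longer in the artifact. Also note that robocopy uses non-zero exit codes even on success, so the task's exit-code handling may need adjusting.
robocopy "%SYSTEM_ARTIFACTSDIRECTORY%\MyBuild\drop" "C:\wwwroot" /MIR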
Make sure you have linked the build to the release pipeline. When you create the release pipeline for the first time, it will ask you to set this up.
I need to create a setup.exe for my abc project (written in C++).
Before running the setup.exe, I need to create an environment variable and set it to some value.
Is it possible to add a custom action for "creating and setting the value of an environment variable"
to the installer, and if so, how?
I'm using VS 2012 and InstallShield.
Thanks
You don't describe your root problem, but I can give you advice based on environment-variable race conditions I've run into in the past. Typically I'll have my installer use the standard technique (the Windows Installer Environment table, which updates the registry and broadcasts a settings-change message). If custom code running in the installer still hits a race condition, I'll have the custom action set the environment variable for its own process to work around the issue. This way the permanent change is done correctly, and a temporary change is injected to keep the custom action happy. (The process-level versus persistent distinction is illustrated with a short example after the list below.)
The two most commonly seen race conditions are:
1) Variations of: a child process hosted by a Windows service doesn't get the settings-change message, due to Service Control Manager behavior.
2) A pending reboot causes MSI not to send the settings-change message. In this scenario it is also possible to write a custom action that does nothing but send the message after the standard action has done its work.
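As a loose illustration of the two scopes involved, from a Windows command prompt (FOO and bar are placeholder names):
rem Affects only the current process and its children - like the temporary change injected for a custom action
set FOO=bar
rem Persists the value to the registry and broadcasts the settings-change message, but already-running processes keep their old environment - the root of the race conditions above
setx FOO bar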
I'm using an Ubuntu VPS to host a couple of Ring web apps. I have a separate GNU Screen window for each one, and I start and stop them using lein run and ^C, respectively. This works, but it feels amateurish, and if anything goes wrong these services won't be restarted automatically.
I’d like to set something up so that I can start and stop my apps using Ubuntu’s service command (which I already use to start and stop nginx). Is there some kind of shortcut I can use to get these apps working with the service command? For example, is there some Leiningen- or Ring-friendly template into which I can just insert my application’s path? Failing that, what would be the best practices for writing my own service script to integrate with Jetty?
It depends on whether you want your service to run straight from your project directory, or whether you want to go through the intermediate step of creating and installing a build artifact.
Certainly during development it's more convenient to use lein run from your project directory. For the sake of repeatability, I'd recommend using the second approach for production systems.
The general approach would be to use the lein uberjar task to create a stand-alone JAR file. From there, it's pretty straightforward (though somewhat tedious) to create a script you can stick in /etc/init.d to run the JAR file, either directly via java or using jsvc.
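As a rough illustration, a minimal init script along these lines (app name, paths, and log handling are placeholders, and a production script would also want a status action and privilege handling) could be saved as /etc/init.d/myapp, marked executable, and registered with update-rc.d, after which sudo service myapp start works:
#!/bin/sh
# Minimal init script sketch for a standalone uberjar
JAR=/opt/myapp/myapp-standalone.jar
PIDFILE=/var/run/myapp.pid
case "$1" in
  start)
    nohup java -jar "$JAR" >> /var/log/myapp.log 2>&1 &
    echo $! > "$PIDFILE"
    ;;
  stop)
    kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
    ;;
  restart)
    "$0" stop
    "$0" start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac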
It looks like there's a Leiningen plugin (lein-init-script) to automate the process of generating the service script, though I don't have any experience with it. You'll probably want to check that out.
I am evaluating TDS to see whether it meets our requirements. The following are questions I need help with:
Roles and memberships
We would like to sync users, roles, security settings, etc. between environments using TDS. What are the options for supporting this requirement?
Deploy Sitecore items
We have multiple environments, which can be categorized into staging and production. We would like to specify which items TDS deploys to a specific environment using the DeployOnce or AlwaysUpdate property, i.e.:
1. Staging environment: set AlwaysUpdate on all items.
2. Production environment: set DeployOnce for some items and AlwaysUpdate for others, or just include the items to be deployed.
Is there any option to specify which items should be deployed to each environment? One possible solution I could think of is to create two different TDS projects, one for each environment, but there may be other ways.
Automate syncing from Sitecore to the TDS project: is there an MSBuild target that can be used from a build script to sync Sitecore items into the TDS project? Likewise, can the other commands that can be performed in Visual Studio, such as Get Sitecore Items, Sync with Sitecore, Deploy, etc., be triggered from a build script?
Restrict syncing direction: is it possible to specify that items can only be synced from Sitecore to the TDS project in one environment, while the same items can be synced in either direction in another environment?
Roles and memberships: No, currently there is no way to bring users and roles into TDS, as they are not stored as Sitecore items in the database. Per-item security settings will be brought in, as they are stored in a field on the item itself.
Hedgehog mentions this in the Q&A section of this video (see 42:10):
http://www.youtube.com/watch?v=Sbx7bk4UEO0&feature=player_detailpage#t=2530s
Deploy Sitecore items: I'm not aware of a way to set deployment properties differently per configuration like that. One possibility is setting AlwaysUpdate once, building to all environments, and then setting the 'Exclude from Config' property on the items you don't want to keep pushing to production, so that they never get pushed again. Not ideal, but it's an alternative.
Automate syncing from Sitecore to the TDS project: there currently isn't a way to tap into the TDS web service to perform these actions from outside of TDS itself.
Restrict syncing direction: again, not that I'm aware of. If the Sitecore connector is installed on a site, you can perform all TDS operations, but you cannot restrict them to a single direction; it's both ways or nothing. The closest thing I can think of is techphoria414's blog post on failsafes for non-debug TDS builds, but it's not quite what you're after:
http://www.techphoria414.com/Blog/2011/September/Failsafe-for-non-Debug-TDS-Builds