I have a Dockerfile that starts out...
FROM some.artifactory.example.com/openjdk:8-jre-alpine
ARG version
LABEL version=$version
...
I'd like to know what 'version' and '$version' are and what their utility is as the values given to ARG and LABEL respectively. Does ARG version somehow pull some value into scope, and does LABEL version=$version then use it... to what end? Nowhere else in the Dockerfile in question do I see any mention of version.
A LABEL is a piece of metadata on an image. You can add any key=val as a LABEL.
An ARG is something you can pass at build time, to the builder. You can then use the value in your Dockerfile during the build (but it is no longer available at runtime; so unless you somehow persist the value into the image itself, the container will have no idea what the value was).
docker build --build-arg version=1.2.3 .
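If you do need the value inside the running container, one common pattern is to copy the build arg into an environment variable, since ENV values are persisted in the image (a sketch; VERSION is a hypothetical name):

ARG version
ENV VERSION=$version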
Based on this Dockerfile, it looks like the author wanted to pass a version number at build time, and persist that in the metadata. They used the ARG (and --build-arg) to pass in the value, and they used the LABEL to store it in the resulting image, as metadata.
In other words, this appears to be some sort of organization / bookkeeping for the image, but it has no effect on the image's contents or runtime characteristics.
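You can confirm the label made it into the image metadata after a build (my-image is a hypothetical tag):

docker build --build-arg version=1.2.3 -t my-image .
docker inspect -f '{{ index .Config.Labels "version" }}' my-image
# prints: 1.2.3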
In app.js, I want to require a different "config" file depending on the stage/account.
for example:
dev account: const config = require("config-dev.json")
prod account: const config = require("config-prod.json")
At first I tried passing it using sam build --container-env-var-file, but after getting undefined when using process.env.myVar, I think that env file is only used at the build stage and has nothing to do with my function, though I could use it in the template creation stage.
So I'm now looking at sam deploy, and there are a few different things that seem relevant, but it's quite confusing to choose which one fits my use case.
There is the config file, but I have no idea how to configure it in a pipeline context, so where would I instruct my process to use the correct JSON?
There are also parameters and mappings.
My JSON is not just a few vars; it's a fairly complex object. Nothing crazy, but not simple enough to pass the vars one by one.
So I thought a single parameter containing the filename that I want to use could do the job.
But I have no idea how to tell which stage of deployment I am currently in, or how to pass that value so the Lambda function can access it.
I also faced this issue while executing an AWS Lambda function locally. My issue was solved by the following:
try configuring your file using the sam build command
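Note that sam build flags never reach the running function; for local invokes, the function's environment comes from --env-vars on sam local invoke. A minimal sketch, where MyFunction and STAGE are hypothetical names:

# env.json - keys are the functions' logical IDs from the template
{
  "MyFunction": { "STAGE": "dev" }
}

sam build
sam local invoke MyFunction --env-vars env.json

In app.js the stage can then drive the require (also a sketch):

const config = require(`./config-${process.env.STAGE}.json`);

For a deployed stack, the equivalent is an Environment/Variables entry on the function in the template, typically fed by a template parameter via sam deploy --parameter-overrides.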
I'd like to add a transformer to my kustomize setup, one that includes a dynamic value for the case where the local tag needs to override the production tag. If there's a simpler and better way to do this, that would be great. I've looked through the list of transformers and generators to see if there was a way to provide this value at runtime (though I think kustomize is specifically designed to never use runtime values).
I can specify something like this:
images:
- name: my-image
  newTag: my-sha1
The problem is changing the my-sha1 value after each new local build so that image is picked up when I apply the local deployment.
How can I set newTag after I run a build locally to match the tag of the image I just built? I can easily obtain the latest build tag and provide it to kubectl apply -f, but I'm not seeing a flag, an environment variable, or anything similar to do so.
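One way to do this, assuming you use the kustomize CLI: kustomize edit set image rewrites the images: entry in kustomization.yaml in place, so you can stamp the fresh tag in right after building. A sketch (the overlay path and tag derivation are assumptions):

cd overlays/local                      # wherever the kustomization.yaml lives
TAG=$(git rev-parse --short HEAD)      # or however you obtain the local build tag
kustomize edit set image my-image=my-image:$TAG
kubectl apply -k .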
I am trying to find Python support for moving a machine between a datacenter's folders, without success. I saw that in pysphere you can define the folder only at the clone stage, not after the machine has already been cloned.
This seems like a solution to my problem, but it is in PowerShell. Does anybody know of a Python wrapper for it?
You can do this with pyVmomi. I would avoid pysphere, because pyVmomi is maintained by VMware and pysphere hasn't been updated in 4 years or more.
That said, here is some sample code that uses pyVmomi:
from pyVim import connect

# args holds the argparse results (host, user, password, port)
service_instance = connect.SmartConnect(host=args.host,
                                        user=args.user,
                                        pwd=args.password,
                                        port=int(args.port))

# The SearchIndex can look up managed objects by inventory path
search_index = service_instance.content.searchIndex
folder = search_index.FindByInventoryPath("LivingRoom/vm/new_folder")
vm_to_move = search_index.FindByInventoryPath("LivingRoom/vm/test-vm")

# MoveInto takes a list of entities to move and returns a Task
move_task = folder.MoveInto([vm_to_move])
In this example I create a ServiceInstance by connecting to a vCenter, and next I grab an instance of the SearchIndex. The SearchIndex has several methods that can be used to locate your managed objects. In this example I decided to use the FindByInventoryPath method, but you could use any that works for you. First I find the instance of the Folder named new_folder that I want to move my VirtualMachine into. Next I find the VirtualMachine I want to move. Finally I execute the Task that will move the VM for me. The task takes as a param the list of objects to be moved into the folder; in this case it's a single-item list containing only the one VM I want to move. From here you can monitor the task if you want.
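If you want to block until the move completes, pyVmomi ships a small helper for that. A minimal sketch (WaitForTask raises if the task fails):

from pyVim.task import WaitForTask

# Blocks until the MoveInto task finishes
WaitForTask(move_task)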
Keep in mind that if you use FindByInventoryPath, there are many hidden folders that are not visible from the GUI. I find that using the ManagedObjectBrowser is very helpful at times.
Helpful doc links:
https://github.com/vmware/pyvmomi/blob/master/docs/vim/SearchIndex.rst
https://github.com/vmware/pyvmomi/blob/master/docs/vim/Folder.rst
https://github.com/vmware/pyvmomi/blob/master/docs/vim/Task.rst
I recently noticed there is a difference in the item ID for a Sitecore template field between two environments (Source and Target). Because of this, any data change to that field's value on a data item using the template is not reflected in the target Sitecore database.
Hence we manually copy the value from source to target, which takes a lot of time to keep the two environments in sync. Any idea how to change the template field's item ID in Sitecore without data loss in the target instance?
Thanks
The template fields have most likely been created manually on the different servers, as @AdrianIorgu has suggested. I am going to suggest that you don't worry about merging fields and tools.
What you really care about is the content on the PRODUCTION instance of your site (assuming that this is Target). In any other environment, content should be regarded as throwaway.
With that in mind, create a package of the template from your PRODUCTION instance and then install that in the other environments, deleting the duplicate field from the Source instance. The GUIDs of the field should then match across all environments. Check this into your source control (using TDS or Unicorn or whatever). You can then correctly update any standard values, and that will be reflected through the servers when you deploy again.
If your other environments (dev/qa/pre-prod) suffer data loss for that field, don't worry about it; restore a backup from PROD.
Most likely this happened because the field or the template was added manually on the second environment, without migrating the items using packages, serialization, or a third-party tool like TDS or Unicorn.
As @SitecoreClimber mentioned above, you can use Razl to sync the two environments and see the differences, but I don't think you will be able to change the field's GUID to make the two environments consistent without any data loss. Depending on the volume of your data, fixing this can be tricky.
What I would do:
make sure the target instance has the right template by installing a package with the correct template from source (with a MERGE-MERGE operation), which will end up having a duplicate field name
write a SQL query to get a list of all the items that have a value for that field and update them to use the new field ID
Warning: this SQL query below is just a sample to get you started, make sure you extend and test this properly before running on a CD instance
use YOUR_DATABASE
begin tran

declare @oldFieldId nvarchar(100), @newFieldId nvarchar(100)
set @oldFieldId = '75577384-3C97-45DA-A847-81B00500E250' -- old field ID
set @newFieldId = 'A2F96461-DE33-4CC6-B758-D5183676509B' -- new field ID

/* versionedFields */
select ItemId, FieldId, Value
from [dbo].[VersionedFields] f with (nolock)
where f.FieldId = @oldFieldId

-- after verifying the rows above, repoint them to the new field:
-- update [dbo].[VersionedFields] set FieldId = @newFieldId where FieldId = @oldFieldId
-- repeat for [dbo].[SharedFields] and [dbo].[UnversionedFields], then commit (or rollback)
For this kind of stuff I suggest you use Sitecore Razl.
It's a tool for comparing and merging Sitecore databases.
Razl allows developers to have a complete side-by-side comparison between two Sitecore databases, highlighting features that are missing or not up to date. Razl also gives developers the ability to simply move an item from one database to another.
Whether it's finding that one missing template, moving your entire database or just one item, Razl allows you to do it seamlessly and worry free.
It's not a free tool; you can check how to buy it here:
https://www.razl.net/purchase.aspx
I can't seem to rename an existing Verity collection in ColdFusion without deleting, recreating, and rebuilding the collection. Problem is, I have some very large collections I'd rather not have to delete and rebuild from scratch. Anyone have a handy trick for this conundrum?
I don't believe that there is an easy way to rename a Verity collection. You can always use
<cfcollection action="map" ...>
to assign an alias to an existing collection, provided you do not need to re-use the original name.
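For example, a sketch (newAlias is a name of your choosing, and path should point at the existing collection's directory on disk):

<cfcollection action="map" collection="newAlias" path="C:\ColdFusion\verity\collections\oldCollection">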
For the Verity part (without considering ColdFusion), it's easy enough to detach a collection, rename it, and reattach it again:
rcadmin> indexdetach
Server Alias:YourDocserver
Index Alias:CollectionName
Index Type [(c)ollection,(t)ree,(p)arametric,(r)ecommendation]:c
Save changes? [y|n]:y
<<Return>> SUCCESS
rcadmin> collpurge
Collection alias:CollectionName
Admin Alias:AdminServer
Save changes? [y|n]:y
<<Return>> SUCCESS
rcadmin> adminsignal
Admin Alias:AdminServer
Type of signal (Shutdown=2,WSRefresh=3,RestartAllServers=4):4
Save changes? [y|n]:y
<<Return>> SUCCESS
Now you can rename the collection directory, and re-attach. (If you are unsure of any of these values, check them with collget before you take it offline).
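The rename itself is a plain filesystem operation on the Verity host, done while the collection is detached (a sketch; the collections directory varies by install):

mv /opt/verity/data/collections/CollectionName /opt/verity/data/collections/NewCollectionName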
rcadmin> collset
Admin Alias:AdminServer
Collection Alias:NewCollectionName
Modify Type (Update=0, Insert=1):1
Path:
Gateway[(o)dbc|(n)otes|(e)xchange|(d)ocumentum|(f)ilesys|(w)eb|o(t)her]:
Style Alias:
Document Access (Public=0,Secure=1,Anonymous=2):
Query Parser [(s)imple|(b)oolPlus|(f)reeText|(o)ldFreeText|O(l)dSimple|O(t)her]:
Description:
Max. Search Time(msecs):
Save changes? [y|n]:y
rcadmin> indexattach
Index Alias:NewCollectionName
Index Type [(c)ollection,(t)ree,(p)arametric,(r)ecommendation]:c
Server Alias:YourDocserver
Modify Type (Update=0, Insert=1):1
Index State (offline=0,hidden=1,online=2):2
Threads (default=3):
Save changes? [y|n]:y
<<Return>> SUCCESS
It should now show up again in the 'hierarchyview'.
You can also use the "merge" utility to copy content from one collection to another, with a new name.
Looks like this is not possible. Deleting and re-creating the collection with the desired name appears to be the only approach available.