For our project, we use Google Cloud Container Registry (gcr.io) to push all our container images.
Our build system pulls the base images from this container registry.
To pull a container image from the registry we use the OAuth2 access-token mechanism: the build script runs the "gcloud auth print-access-token" command to get the access token.
Following is a sample run of gcloud --verbosity=debug auth print-access-token:
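The build step itself just shells out to gcloud and captures whatever the command prints on stdout; roughly, it looks like the following sketch (Java used purely for illustration here, the class and method names are mine):
import java.io.BufferedReader;
import java.io.InputStreamReader;
public class AccessTokenFetcher {
    // Illustrative sketch: run gcloud and capture the OAuth2 access token it prints.
    static String fetchAccessToken() throws Exception {
        Process p = new ProcessBuilder("gcloud", "auth", "print-access-token").start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String token = r.readLine(); // gcloud prints the token on a single line
            if (p.waitFor() != 0 || token == null) {
                throw new IllegalStateException("gcloud auth print-access-token failed");
            }
            return token.trim();
        }
    }
}
The token is then used as the registry password (with the user name oauth2accesstoken) when pulling from gcr.io.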
$ date;gcloud --verbosity=debug auth print-access-token;date
Fri Jul 17 10:23:57 PDT 2020
DEBUG: Running [gcloud.auth.print-access-token] with arguments: [--verbosity: "debug"]
< -- Gets stuck here for ~2 minutes -- >
DEBUG: Making request: POST https://oauth2.googleapis.com/token
INFO: Display format: "value(token)"
<Output Token Here>
Fri Jul 17 10:25:58 PDT 2020
Output from gcloud config list:
[core]
account = <email address>
disable_usage_reporting = False
log_http_redact_token = False
project = <project-id>
Your active configuration is: [default]
After looking at the Google Cloud SDK code, I found that the SDK tries to make an HTTP call to http://metadata.google.internal every 10 minutes, and it gets stuck waiting for those calls to finish, since that hostname only resolves from inside Google Compute Engine instances.
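For illustration, the check being made is essentially a plain HTTP request to the GCE metadata server, along the lines of this sketch (the endpoint and Metadata-Flavor header are the documented GCE ones; the timeout value is just an example, not what the SDK uses):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
public class MetadataProbe {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(2)) // example timeout only
                .build();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://metadata.google.internal/computeMetadata/v1/"))
                .header("Metadata-Flavor", "Google") // required by the metadata server
                .build();
        try {
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("On GCE, metadata server answered: " + response.statusCode());
        } catch (Exception e) {
            // Off GCE (e.g. on a MacBook) the hostname does not resolve, so this fails or hangs.
            System.out.println("Not on GCE: " + e);
        }
    }
}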
Questions:
Is it expected that the gcloud tool makes calls to Google's internal DNS when I run the utility from my MacBook? (I am new to GCP, so I am ready to share more information about my config if needed.)
Is there a way to avoid the calls to Google's internal DNS for the gcloud auth print-access-token command?
If there is no way to avoid the calls (even though they fail from my Mac), is there a way to reduce the timeout for the calls to Google's internal DNS, or a way to not make them every 10 minutes?
I'm trying to use a queue from TIBCO EMS as a source for a Siddhi application.
For that, I used the ActiveMQ documentation as a reference and successfully generated the OSGi-converted jar for tibjms.jar.
Within this step I was able to register the InitialContextFactory:
C:\PROGRA~1\WSO2\STREAM~1\446521~1.0\bin>icf-provider.bat com.tibco.tibjms.naming.TibjmsInitialContextFactory C:/DevTools/tibjms.jar C:/DevTools/
JAVA_HOME environment variable is set to C:\Program Files\Java\jdk1.8.0_151
CARBON_HOME environment variable is set to C:\PROGRA~1\WSO2\STREAM~1\446521~1.0\bin\..
Feb 18, 2020 10:46:05 PM org.wso2.carbon.tools.spi.ICFProviderTool execute
INFO: Executing 'jar uf C:\DevTools\tibjms\tibjms.jar -C C:\DevTools\tibjms \internal\CustomBundleActivator.class'
Feb 18, 2020 10:46:05 PM org.wso2.carbon.tools.spi.ICFProviderTool addBundleActivatorHeader
INFO: Running jar to bundle conversion
Feb 18, 2020 10:46:06 PM org.wso2.carbon.tools.converter.utils.BundleGeneratorUtils convertFromJarToBundle
INFO: Created the OSGi bundle tibjms_1.0.0.jar for JAR file C:\DevTools\tibjms\tibjms.jar
Then, I created the OSGi-converted jars for the remaining TIBCO EMS jars:
jms-2.0.jar
tibemsd_sec.jar
tibjmsadmin.jar
tibjmsapps.jar
tibjmsufo.jar
tibrvjms.jar
At this point, I copied all the OSGi jars to the "/lib" directory and the original jars to the "/samples/sample-clients/lib" directory.
Next, I sent a couple of messages to the queue "queue.sample" with the text body "hello queue".
Then, I created the following Siddhi application to use the EMS queue as a source:
@App:name('TIBCOQueueSource')
@App:description('Use EMS queue as SP source')
@source(type = 'jms', destination = "queue.sample", factory.initial = "com.tibco.tibjms.naming.TibjmsInitialContextFactory", provider.url = "tibjmsnaming://localhost:7222", transport.jms.ConnectionFactoryJNDIName = "QueueConnectionFactory", transport.jms.UserName = "admin", transport.jms.Password = "admin", @map(type = 'text'))
define stream inputStream (name string);
@info(name='query_name')
from inputStream
select name
insert into outputStream;
from outputStream#log("logger")
select *
insert into tmp;
Finally, when I ran the event simulator I got the following error in the logs:
[2020-02-18 22:59:31,006] ERROR {org.wso2.siddhi.core.SiddhiAppRuntime} - Error starting Siddhi App 'TIBCOQueueSource', triggering shutdown process. javax/jms/JMSContext
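(For reference, a standalone JNDI/JMS check against the same EMS settings, written from the values in the source configuration above and not part of the Siddhi app, would look roughly like this; it uses only the JMS 1.1 API, so it does not exercise the javax.jms.JMSContext class named in the error:)
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.Context;
import javax.naming.InitialContext;
import java.util.Properties;
public class EmsQueueCheck {
    public static void main(String[] args) throws Exception {
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.tibco.tibjms.naming.TibjmsInitialContextFactory");
        env.put(Context.PROVIDER_URL, "tibjmsnaming://localhost:7222");
        env.put(Context.SECURITY_PRINCIPAL, "admin");
        env.put(Context.SECURITY_CREDENTIALS, "admin");
        Context ctx = new InitialContext(env);
        QueueConnectionFactory factory = (QueueConnectionFactory) ctx.lookup("QueueConnectionFactory");
        QueueConnection connection = factory.createQueueConnection("admin", "admin");
        QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("queue.sample");
        connection.start();
        TextMessage message = (TextMessage) session.createReceiver(queue).receive(5000);
        System.out.println(message == null ? "no message received" : message.getText());
        connection.close();
    }
}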
So, based on this description, am I doing something wrong? Am I missing any step of the process?
Thanks in advance for all the help.
Hi, I have created an Apache Beam pipeline, tested it, and run it from inside Eclipse, both locally and using the Dataflow runner. I can see in the Eclipse console that the pipeline is running, and I also see the details, i.e. the logs, on the console.
Now, how do I deploy this pipeline to GCP so that it keeps working irrespective of the state of my machine? For example, if I run it using mvn compile exec:java, the console shows it is running, but I cannot find the job in the Dataflow UI.
Also, what will happen if I kill the process locally? Will the job on the GCP infrastructure also be stopped? How do I know a job has been triggered on the GCP infrastructure, independent of my machine's state?
The output of mvn compile exec:java with arguments is as follows:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/Users/ThakurG/.m2/repository/org/slf4j/slf4j-jdk14/1.7.14/slf4j-jdk14-1.7.14.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/Users/ThakurG/.m2/repository/org/slf4j/slf4j-nop/1.7.25/slf4j-nop-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]
Jan 08, 2018 5:33:22 PM com.trial.apps.gcp.df.ReceiveAndPersistToBQ main
INFO: starting the process...
Jan 08, 2018 5:33:25 PM com.trial.apps.gcp.df.ReceiveAndPersistToBQ createStream
INFO: pipeline created::Pipeline#73387971
Jan 08, 2018 5:33:27 PM com.trial.apps.gcp.df.ReceiveAndPersistToBQ main
INFO: pie crated::Pipeline#73387971
Jan 08, 2018 5:54:57 PM com.trial.apps.gcp.df.ReceiveAndPersistToBQ$1 apply
INFO: Message received::1884408,16/09/2017,A,2007156,CLARK RUBBER FRANCHISING PTY LTD,A ,5075,6,Y,296,40467910,-34.868095,138.683535,66 SILKES RD,,,PARADISE,5075,0,7.4,5.6,18/09/2017 2:09,0.22
Jan 08, 2018 5:54:57 PM com.trial.apps.gcp.df.ReceiveAndPersistToBQ$1 apply
INFO: Payload from msg::1884408,16/09/2017,A,2007156,CLARK RUBBER FRANCHISING PTY LTD,A ,5075,6,Y,296,40467910,-34.868095,138.683535,66 SILKES RD,,,PARADISE,5075,0,7.4,5.6,18/09/2017 2:09,0.22
Jan 08, 2018 5:54:57 PM com.trial.apps.gcp.df.ReceiveAndPersistToBQ$1 apply
This is the Maven command I'm using from the cmd prompt:
`mvn compile exec:java -Dexec.mainClass=com.trial.apps.gcp.df.ReceiveAndPersistToBQ -Dexec.args="--project=analyticspoc-XXX --stagingLocation=gs://analytics_poc_staging --runner=DataflowRunner --streaming=true"`
This is the piece of code I'm using to create the pipeline and set the options on it:
PipelineOptions options = PipelineOptionsFactory.create();
DataflowPipelineOptions dfOptions = options.as(DataflowPipelineOptions.class);
dfOptions.setRunner(DataflowRunner.class);
dfOptions.setJobName("gcpgteclipse");
dfOptions.setStreaming(true);
// Then create the pipeline.
Pipeline pipeL = Pipeline.create(dfOptions);
Can you clarify what exactly you mean by "console shows it is running" and by "can not find the job using Dataflow UI"?
If your program's output prints the message:
To access the Dataflow monitoring console, please navigate to https://console.developers.google.com/project/.../dataflow/job/....
Then your job is running on the Dataflow service. Once it's running, killing the main program will not stop the job - all the main program does is periodically poll the Dataflow service for the status of the job and new log messages. Following the printed link should take you to the Dataflow UI.
If this message is not printed, then perhaps your program is getting stuck somewhere before actually starting the Dataflow job. If you include your program's output, that will help debugging.
To deploy a pipeline to be executed by Dataflow, you specify the runner and project execution parameters through the command line or via the DataflowPipelineOptions class. runner must be set to DataflowRunner (Apache Beam 2.x.x) and project is set to your GCP project ID. See Specifying Execution Parameters. If you do not see the job in the Dataflow Jobs UI list, then it is definitely not running in Dataflow.
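For example, the usual way to pick those flags up from the command line is PipelineOptionsFactory.fromArgs(...) (note that PipelineOptionsFactory.create() on its own does not read command-line arguments); a sketch with the transforms omitted:
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
public class DeployToDataflow {
    public static void main(String[] args) {
        // Parses --project, --stagingLocation, --runner, --streaming from the command line.
        DataflowPipelineOptions options = PipelineOptionsFactory
                .fromArgs(args)
                .withValidation()
                .as(DataflowPipelineOptions.class);
        options.setJobName("gcpgteclipse");
        Pipeline pipeline = Pipeline.create(options);
        // ... apply the transforms here ...
        pipeline.run();
    }
}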
If you kill the process that deploys a job to Dataflow, then the job will continue to run in Dataflow. It will not be stopped.
This is trivial, but to be absolutely clear, you must call run() on the Pipeline object in order for it to be executed (and therefore deployed to Dataflow). The return value of run() is a PipelineResult object which contains various methods for determining the status of a job. For example, you can call pipeline.run().waitUntilFinish(); to force your program to block execution until the job is complete. If your program is blocked, then you know the job was triggered. See the PipelineResult section of the Apache Beam Java SDK docs for all of the available methods.
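A minimal sketch of that pattern (transforms omitted):
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.PipelineResult;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
public class RunAndWait {
    public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        // ... apply the transforms here ...
        PipelineResult result = pipeline.run(); // submits the job (to Dataflow when --runner=DataflowRunner)
        System.out.println("State after submission: " + result.getState());
        result.waitUntilFinish(); // blocks until the job reaches a terminal state
        System.out.println("Final state: " + result.getState());
    }
}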
Starting Friday afternoon last week, I'm suddenly unable to deploy to GCP for my project and I receive the following error:
...
Building and pushing image for service [myService]
ERROR: (gcloud.beta.app.deploy) Could not read [<googlecloudsdk.api_lib.s
storage_util.ObjectReference object at 0x045FD130>]: HttpError accessing
//www.googleapis.com/storage/v1/b/runtime-builders/o/gs%3A%2F%2Fruntime-b
%2Faspnetcore-default-builder-20170524113403.yaml?alt=media>: response: <
s': '404', 'content-length': '9', 'expires': 'Mon, 05 Jun 2017 14:33:42 G
ary': 'Origin, X-Origin', 'server': 'UploadServer', 'x-guploader-uploadid
B2UpOw2hMicKUV6j5FRap9x4UKxxZsb04j9JxWA_kc27S_AIPf0QZQ40H6OZgZcLJxCnnx5m4
8x3JV3p9kvZZy-A', 'cache-control': 'private, max-age=0', 'date': 'Mon, 05
17 14:33:42 GMT', 'alt-svc': 'quic=":443"; ma=2592000; v="38,37,36,35"',
t-type': 'text/html; charset=UTF-8'}>, content <Not Found>. Please retry.
I tried again this morning and even updated my gcloud components to version 157. I continue to see this error.
Of note: the 20170524113403 value in that YAML filename is, I think, a match for the first successful deploy of my project to .NET App Engine Flex. I had since deleted that version using the Google Cloud Explorer, with a more recent version 'published' early Friday morning. My publish worked Friday morning; now it doesn't. I don't see any logs that help me understand why that file is even needed, and an Agent Ransack search of my entire drive doesn't reveal where that filename is coming from, so I can't point it to a more recent version.
I'm doing this both through the Google Cloud Tools integration in Visual Studio 2017 (Publish to Google Cloud...) and by running the following commands:
dotnet restore
dotnet publish -c Release
copy app.yaml -> destination location
gcloud beta app deploy .\app.yaml in destination location
Not sure if this was "fixed" by Google or not, but 4 days later, the problem went away. On some additional information, I was able to locate the logs on my machine during the publish phase and saw something interesting.
When Working:
2017-05-25 09:36:48,821 DEBUG root Calculated builder definition using legacy version [gs://runtime-builders/aspnetcore-default-builder-20170524113403.yaml]
Later when it stopped working:
2017-06-02 15:25:15,312 DEBUG root Resolved runtime [aspnetcore] as build configuration [gs://runtime-builders/gs://runtime-builders/aspnetcore-default-builder-20170524113403.yaml]
What I noticed was the duplicated "gs://runtime-builders/gs://runtime-builders/..." prefix,
which disappeared this morning; no, I didn't change a thing besides waiting until today.
2017-06-07 08:11:37,042 INFO root Using runtime builder [gs://runtime-builders/aspnetcore-default-builder-20170524113403.yaml]
You'll see that the double "gs://runtime-builders/gs://runtime-builders/" is GONE.
I have an issue with my Cloud Foundry installation on vSphere. After an upgrade to 1.6, I started to get "migrator is not current" errors in the Cloud Controller clock and worker components. They do not come up anymore.
[2015-12-10 11:36:19+0000] ------------ STARTING cloud_controller_clock_ctl at Thu Dec 10 11:36:19 UTC 2015 --------------
[2015-12-10 11:36:23+0000] rake aborted!
[2015-12-10 11:36:23+0000] Sequel::Migrator::NotCurrentError: migrator is not current
[2015-12-10 11:36:23+0000] Tasks: TOP => clock:start
[2015-12-10 11:36:23+0000] (See full trace by running task with --trace)
After googling this, I only found this mailing list thread: https://lists.cloudfoundry.org/archives/list/cf-bosh#lists.cloudfoundry.org/message/GIOTVF2A77KREO4ESHSY7ZXZJKM5ZULA/. Can I migrate my Cloud Controller DB manually? Does anyone know how to fix this? I'd be very grateful!