I am new to Artifactory and Gradle and am trying to set them up. I get the following error while publishing Gradle artifacts:
15:14:44.544 [ERROR] [org.gradle.BuildExceptionReporter] Caused by: java.lang.UnsupportedOperationException
15:14:44.544 [ERROR] [org.gradle.BuildExceptionReporter] at org.gradle.api.internal.artifacts.repositories.LegacyDependencyResolver.abortPublishTransaction(LegacyDependencyResolver.java:150)
15:14:44.544 [ERROR] [org.gradle.BuildExceptionReporter] at org.gradle.api.internal.artifacts.ivyservice.IvyResolverBackedModuleVersionPublisher.publish(IvyResolverBackedModuleVersionPublisher.java:59)
15:14:44.544 [ERROR] [org.gradle.BuildExceptionReporter] at org.gradle.api.internal.artifacts.ivyservice.DefaultIvyDependencyPublisher$Publication.publishTo(DefaultIvyDependencyPublisher.java:77)
15:14:44.545 [ERROR] [org.gradle.BuildExceptionReporter] at org.gradle.api.internal.artifacts.ivyservice.DefaultIvyDependencyPublisher.publish(DefaultIvyDependencyPublisher.java:48)
15:14:44.545 [ERROR] [org.gradle.BuildExceptionReporter] at org.gradle.api.internal.artifacts.ivyservice.IvyBackedArtifactPublisher.publish(IvyBackedArtifactPublisher.java:63)
15:14:44.545 [ERROR] [org.gradle.BuildExceptionReporter] at org.gradle.api.tasks.Upload.upload(Upload.java:82)
15:14:44.545 [ERROR] [org.gradle.BuildExceptionReporter] ... 79 more
15:14:44.545 [ERROR] [org.gradle.BuildExceptionReporter]
I have the following configuration in my build.gradle:
apply plugin: 'java'
apply plugin: 'artifactory'
apply plugin: 'maven'

buildscript {
    repositories {
        maven {
            url 'http://localhost:9081/artifactory/plugins-release'
            credentials {
                username = "${artifactory_user}"
                password = "${artifactory_password}"
            }
        }
    }
    dependencies {
        classpath(group: 'org.jfrog.buildinfo', name: 'build-info-extractor-gradle', version: '2.0.9')
    }
}
artifactory {
    contextUrl = "${artifactory_contextUrl}" // The base Artifactory URL if not overridden by the publisher/resolver
    publish {
        repository {
            repoKey = 'libs-release-local'
            username = "${artifactory_user}"
            password = "${artifactory_password}"
            maven = true
        }
    }
    resolve {
        repository {
            repoKey = 'libs-release'
            username = "${artifactory_user}"
            password = "${artifactory_password}"
            maven = true
        }
    }
}
........
........
........
def myJar = file('build/libs/myjar-1.0.jar')

artifacts {
    archives myJar
}

uploadArchives {
    repositories {
        mavenRepo url: "http://localhost:9082/artifactory/libs-release-local"
        repositories.mavenDeployer {
            repository(url: "http://localhost:9082/artifactory/libs-release-local") {
                authentication(userName: "${artifactory_user}", password: "${artifactory_password}")
            }
            pom.version = versionNumber
            pom.artifactId = artifactId
            pom.groupId = groupId
        }
    }
}
I can see from the exception that the Ivy resolver is being used, but I am not setting up Ivy anywhere. Can someone please point out what I am doing wrong here?
You don't need to configure the uploadArchives task when using the Artifactory plugin; the plugin configures it for you.
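For example, the uploadArchives block can be dropped entirely and publishing driven by the plugin's own task. This is only a sketch; the defaults block follows the newer plugin DSL and may differ on the 2.0.9 line, so check the documentation for your plugin version:

```groovy
// Sketch: rely on the Artifactory plugin's artifactoryPublish task instead of
// uploadArchives; the 'archives' configuration already contains myJar.
artifactory {
    contextUrl = "${artifactory_contextUrl}"
    publish {
        repository {
            repoKey = 'libs-release-local'
            username = "${artifactory_user}"
            password = "${artifactory_password}"
            maven = true
        }
        defaults {
            publishConfigs('archives') // newer-DSL name; an assumption for 2.0.9
        }
    }
}
```

Then run gradle artifactoryPublish rather than gradle uploadArchives.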
Encountering ioredis connection timeout when using cloud function in console
The error message is below:
Unhandled error event: Error: connect ETIMEDOUT
at TLSSocket.<anonymous> (/workspace/node_modules/ioredis/built/Redis.js:168:41)
at Object.onceWrapper (node:events:627:28)
at TLSSocket.emit (node:events:513:28)
at TLSSocket.emit (node:domain:552:15)
at TLSSocket.Socket._onTimeout (node:net:550:8)
at listOnTimeout (node:internal/timers:559:17)
at processTimers (node:internal/timers:502:7)
The part of my code that connects to the Redis instance is below:
const REDISHOST = process.env.REDISHOST;
const REDISPORT = process.env.REDISPORT;
const REDISAUTH = process.env.AUTHSTRING;
const REDISKEY = process.env.REDISKEY;

const client = redis.createClient({
  host: REDISHOST,
  port: REDISPORT,
  password: REDISAUTH,
  tls: {
    ca: REDISKEY
  }
});
Any help would be appreciated
I am running sample code from Google to execute a simple select query. It works fine on my local machine, but from my k8s environment I get the error below:
Exception in thread "main" com.google.cloud.bigquery.BigQueryException: Error getting access token for service account: connect timed out
at com.google.cloud.bigquery.spi.v2.HttpBigQueryRpc.translate(HttpBigQueryRpc.java:115)
at com.google.cloud.bigquery.spi.v2.HttpBigQueryRpc.create(HttpBigQueryRpc.java:220)
at com.google.cloud.bigquery.BigQueryImpl$5.call(BigQueryImpl.java:369)
at com.google.cloud.bigquery.BigQueryImpl$5.call(BigQueryImpl.java:366)
at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:105)
at com.google.cloud.RetryHelper.run(RetryHelper.java:76)
at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:50)
at com.google.cloud.bigquery.BigQueryImpl.create(BigQueryImpl.java:365)
at com.google.cloud.bigquery.BigQueryImpl.create(BigQueryImpl.java:340)
at com.rakuten.dps.dataplatform.ingest.utility.BQ_test.main(BQ_test.java:67)
Caused by: java.io.IOException: Error getting access token for service account: connect timed out
at com.google.auth.oauth2.ServiceAccountCredentials.refreshAccessToken(ServiceAccountCredentials.java:444)
at com.google.auth.oauth2.OAuth2Credentials.refresh(OAuth2Credentials.java:157)
at com.google.auth.oauth2.OAuth2Credentials.getRequestMetadata(OAuth2Credentials.java:145)
at com.google.auth.oauth2.ServiceAccountCredentials.getRequestMetadata(ServiceAccountCredentials.java:603)
at com.google.auth.http.HttpCredentialsAdapter.initialize(HttpCredentialsAdapter.java:91)
at com.google.cloud.http.HttpTransportOptions$1.initialize(HttpTransportOptions.java:159)
at com.google.api.client.http.HttpRequestFactory.buildRequest(HttpRequestFactory.java:88)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.buildHttpRequest(AbstractGoogleClientRequest.java:422)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:541)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:474)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:591)
at com.google.cloud.bigquery.spi.v2.HttpBigQueryRpc.create(HttpBigQueryRpc.java:218)
... 8 more
Caused by: java.net.SocketTimeoutException: connect timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:607)
at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:284)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:264)
at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:367)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1162)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1056)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1340)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1315)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:264)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:113)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:84)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1012)
at com.google.auth.oauth2.ServiceAccountCredentials.refreshAccessToken(ServiceAccountCredentials.java:441)
... 19 more
Below is the sample code:
public class BQ_test {
    private static final Logger logger = LoggerFactory.getLogger(BQ_test.class);

    public static void main(String[] args) {
        Job queryJob = null;
        String actualValue = "";
        NetHttpTransport transport = new NetHttpTransport();
        JsonFactory jsonFactory = new JacksonFactory();
        String query = "SELECT * FROM `iconic-parsec-315409.bookmark_BQ.sbm_item_tbl`";
        String projectId = "iconic-parsec-315409";
        File credentialsPath = new File("/tmp/iconic-parsec-315409-823ef1c38a9d.json");
        GoogleCredentials credentials;
        try {
            FileInputStream serviceAccountStream = new FileInputStream(credentialsPath);
            credentials = ServiceAccountCredentials.fromStream(serviceAccountStream);
            if (credentials.createScopedRequired()) {
                Collection<String> bigqueryScopes = BigqueryScopes.all();
                credentials = credentials.createScoped(bigqueryScopes);
            }
            BigQuery bigquery = BigQueryOptions
                    .newBuilder()
                    .setCredentials(credentials)
                    .setProjectId(projectId)
                    .build()
                    .getService();
            QueryJobConfiguration queryConfig =
                    QueryJobConfiguration.newBuilder(query)
                            .setUseLegacySql(false)
                            .setJobTimeoutMs(180000L)
                            .build();
            // Create a job ID so that we can safely retry.
            JobId jobId = JobId.of(UUID.randomUUID().toString());
            queryJob = bigquery.create(JobInfo.newBuilder(queryConfig).setJobId(jobId).build());
            // Wait for the query to complete.
            queryJob = queryJob.waitFor();
        } catch (IOException | InterruptedException e) {
            e.printStackTrace();
        }
        // Check for errors
        if (queryJob == null) {
            throw new RuntimeException("Job no longer exists");
        } else if (queryJob.getStatus().getError() != null) {
            // You can also look at queryJob.getStatus().getExecutionErrors() for all
            // errors, not just the latest one.
            throw new RuntimeException(queryJob.getStatus().getError().toString());
        }
        // Get the results.
        TableResult result = null;
        try {
            result = queryJob.getQueryResults();
            // Print all pages of the results.
            // writeFvLToOrcFile(result, "/Users/susanta.a.adhikary/Downloads/test.orc");
            for (FieldValueList row : result.iterateAll()) {
                // String type
                actualValue = row.get("sbm_item_id").getStringValue();
                System.out.println(actualValue);
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
I tried curl -I "https://oauth2.googleapis.com/token" from my remote k8s pod and got:
HTTP/2 404
content-type: text/html
date: Sun, 04 Jul 2021 05:54:09 GMT
server: scaffolding on HTTPServer2
So I don't think it's an egress issue.
The data location is US-east-1 for GCP and the pod's local timezone is UTC; I am not sure if it's an NTP sync issue. I need advice. The same code runs fine from my local machine with the same service account key. (Just to mention, I did a kubectl cp to move the serviceaccount.json to the pod for testing; later I'll create a ConfigMap or something.)
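As a further check, the same probe can be made from inside the JVM, which separates pod-level egress issues from JVM-specific ones such as proxy system properties. This snippet is not part of the original program; the endpoint and timeout values are only illustrative:

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class TokenEndpointCheck {

    // Returns the HTTP status code, or -1 if the connection fails or times out.
    static int check(String endpoint, int timeoutMs) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
            conn.setConnectTimeout(timeoutMs);
            conn.setReadTimeout(timeoutMs);
            conn.setRequestMethod("HEAD");
            return conn.getResponseCode();
        } catch (IOException e) {
            return -1; // connect/read failure, mirroring "connect timed out"
        }
    }

    public static void main(String[] args) {
        // A 404 here still proves the TLS connection succeeded, just like curl -I did.
        System.out.println(check("https://oauth2.googleapis.com/token", 3000));
    }
}
```

If this returns -1 from the pod while curl succeeds, the problem is specific to the JVM's network configuration rather than the cluster's egress rules.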
I am creating an application with Ktor and using Jetty. There is a rule that, before taking a certain action, I need to check whether an endpoint is up or down.
For this I created a function that performs the check according to the service:
suspend fun checkStatus(
    target: Target,
    login: String,
    passwordLogin: String,
    url: String
) {
    when (target) {
        Target.Elasticsearch -> {
            val client = HttpClient(Jetty) {
                install(Auth) {
                    basic {
                        username = login
                        password = passwordLogin
                    }
                }
            }
            runCatching {
                client.get<String>(url)
            }.onFailure {
                it.printStackTrace()
                throw it
            }
        }
    }
}
To keep the function small, the example covers only Elasticsearch. So I have a function that checks whether Elasticsearch is up or down:
suspend fun checkElasticStatus(
    username: String,
    password: String,
    https: Boolean,
    host: String,
    port: String
) = checkStatus(
    target = Target.Elasticsearch,
    login = username,
    passwordLogin = password,
    url = if (https) "https://$host:$port" else "http://$host:$port"
)
I use this function in the controller before continuing with certain logic:
fun Route.orchestration() {
    route("/test") {
        post {
            runCatching {
                checkElasticStatus(
                    environmentVariable(ev, "elk.username"),
                    environmentVariable(ev, "elk.password"),
                    environmentVariable(ev, "elk.https").toBoolean(),
                    environmentVariable(ev, "elk.host"),
                    environmentVariable(ev, "elk.port")
                )
                /** other code **/
            }
        }
    }
}
But I'm always getting the error:
org.eclipse.jetty.io.EofException
    at org.eclipse.jetty.io.ChannelEndPoint.flush(ChannelEndPoint.java:283)
    at org.eclipse.jetty.io.WriteFlusher.flush(WriteFlusher.java:422)
    at org.eclipse.jetty.io.WriteFlusher.write(WriteFlusher.java:277)
    at org.eclipse.jetty.io.AbstractEndPoint.write(AbstractEndPoint.java:381)
    at org.eclipse.jetty.http2.HTTP2Flusher.process(HTTP2Flusher.java:259)
    at org.eclipse.jetty.util.IteratingCallback.processing(IteratingCallback.java:241)
    at org.eclipse.jetty.util.IteratingCallback.iterate(IteratingCallback.java:223)
    at org.eclipse.jetty.http2.HTTP2Session.newStream(HTTP2Session.java:543)
    at io.ktor.client.engine.jetty.JettyHttpRequestKt$executeRequest$jettyRequest$1.invoke(JettyHttpRequest.kt:40)
    at io.ktor.client.engine.jetty.JettyHttpRequestKt$executeRequest$jettyRequest$1.invoke(JettyHttpRequest.kt)
    at io.ktor.client.engine.jetty.UtilsKt.withPromise(utils.kt:14)
    at io.ktor.client.engine.jetty.JettyHttpRequestKt.executeRequest(JettyHttpRequest.kt:39)
    at io.ktor.client.engine.jetty.JettyHttpRequestKt$executeRequest$1.invokeSuspend(JettyHttpRequest.kt)
    at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
    at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:56)
    at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:571)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:738)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:678)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:665)
Caused by: java.nio.channels.AsynchronousCloseException
    at java.base/sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:501)
    at org.eclipse.jetty.io.ChannelEndPoint.flush(ChannelEndPoint.java:263)
    ... 18 more
Could anyone help me, please?
The Jetty engine supports the HTTP/2 protocol only.
org.eclipse.jetty.io.EofException is thrown when making a request to a resource that can only respond with HTTP/1.1, e.g. http://www.google.com.
The error is misleading, and there is an issue about it in the bug tracker.
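One workaround, assuming switching client engines is acceptable, is to use an HTTP/1.1-capable engine such as Ktor's CIO engine and construct the client with HttpClient(CIO) instead of HttpClient(Jetty). A build.gradle sketch (the version number is illustrative and should match your Ktor line):

```groovy
dependencies {
    // Replace io.ktor:ktor-client-jetty with an engine that speaks HTTP/1.1.
    implementation "io.ktor:ktor-client-cio:1.5.4"
}
```

The Auth feature is engine-agnostic, so the rest of the checkStatus function can stay as it is.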
I just created a Managed Database on DigitalOcean that requires an SSL connection. How can I do that with Adonis?
This code works for me:
/**************************************************************************
 * IMPORTS
 ***************************************************************************/

// NPM
const fs = require('fs')

// Providers
const Env = use('Env')

/**************************************************************************
 * CONFIG > DATABASE
 ***************************************************************************/

const config = {
  connection: Env.get('DB_CONNECTION', 'mysql'),

  mysql: {
    client: 'mysql',
    connection: {
      host: Env.get('DB_HOST', 'localhost'),
      port: Env.get('DB_PORT', ''),
      user: Env.get('DB_USER', 'root'),
      password: Env.get('DB_PASSWORD', ''),
      database: Env.get('DB_DATABASE', 'adonis'),
    },
    debug: Env.get('DB_DEBUG', false),
  },
}

// Add certificate for production environment
if (Env.get('NODE_ENV') === 'production') {
  config.mysql.connection.ssl = {
    ca: fs.readFileSync(__dirname + '/certs/ca-database.crt'),
  }
}

module.exports = config
I am trying to integrate AWS API Gateway with an AWS Lambda function. The integration works flawlessly until I use the 'Lambda Proxy integration' in my Integration Request.
When I check 'Use Lambda Proxy integration' in my integration request, I start getting:
"Execution failed due to configuration error: Malformed Lambda proxy
response"
I googled around a bit and realized that I need to send back the response in a certain format:
{
    "isBase64Encoded": true|false,
    "statusCode": httpStatusCode,
    "headers": { "headerName": "headerValue", ... },
    "body": "..."
}
However, despite doing that, I still continue to see the same error. What am I doing wrong?
This is what my Lambda function looks like:
@Override
public String handleRequest(Object input, Context context) {
    context.getLogger().log("Input: " + input);
    return uploadN10KWebsiteRepositoryToS3();
}

private String uploadN10KWebsiteRepositoryToS3() {
    /*BitbucketToS3Upload.JsonResponse jsonResponse = new BitbucketToS3Upload.JsonResponse();
    jsonResponse.body = "n10k_website repository uploaded to S3...";
    String jsonString = null;
    try {
        ObjectMapper mapper = new ObjectMapper();
        jsonString = mapper.writeValueAsString(jsonResponse);
        HashMap<String, Object> test = new HashMap<String, Object>();
        test.put("statusCode", 200);
        test.put("headers", null);
        test.put("body", "n10k_website repository uploaded to S3");
        test.put("isBase64Encoded", false);
        jsonString = mapper.writeValueAsString(test);
    }
    catch (Exception e) {
        int i = 0;
    }*/
    //return jsonString;
    return "{\"isBase64Encoded\":false, \"statusCode\":200, \"headers\":null, \"body\": \"n10k_website repository uploaded to S3\"}";
}
When I test the API from API Gateway console, this is the response I get:
Received response. Integration latency: 4337 ms
Mon Aug 07 00:33:45 UTC 2017 : Endpoint response body before transformations: "{\"isBase64Encoded\":false, \"statusCode\":200, \"headers\":null, \"body\": \"n10k_website repository uploaded to S3\"}"
Mon Aug 07 00:33:45 UTC 2017 : Endpoint response headers: {x-amzn-Remapped-Content-Length=0, x-amzn-RequestId=0ff74e9d-7b08-11e7-9234-a1a04edc223f, Connection=keep-alive, Content-Length=121, Date=Mon, 07 Aug 2017 00:33:45 GMT, X-Amzn-Trace-Id=root=1-5987b565-7a66a2fd5fe7a5ee14c22633;sampled=0, Content-Type=application/json}
Mon Aug 07 00:33:45 UTC 2017 : Execution failed due to configuration error: Malformed Lambda proxy response
Mon Aug 07 00:33:45 UTC 2017 : Method completed with status: 502
If I un-check the 'Use Lambda Proxy integration', everything works okay. But I want to know why my response is malformed and how to fix it. Thanks!
I figured it out. I was sending the response back in an incorrect manner.
The response had to be sent back as a POJO directly rather than serializing the POJO and sending it back as a String. This is how I got it to work:
public class BitbucketToS3Upload implements RequestHandler<Object, JsonResponse> {

    @Data
    public static class JsonResponse {
        boolean isBase64Encoded = false;
        int statusCode = HttpStatus.SC_OK;
        String headers = null;
        String body = null;
    }

    @Override
    public JsonResponse handleRequest(Object input, Context context) {
        context.getLogger().log("Input: " + input);
        return uploadN10KWebsiteRepositoryToS3();
    }

    private JsonResponse uploadN10KWebsiteRepositoryToS3() {
        BitbucketToS3Upload.JsonResponse jsonResponse = new BitbucketToS3Upload.JsonResponse();
        jsonResponse.body = "n10k_website repository uploaded to S3";
        String jsonString = null;
        try {
            ObjectMapper mapper = new ObjectMapper();
            jsonString = mapper.writeValueAsString(jsonResponse);
            System.out.println(jsonString);
            //jsonString = mapper.writeValueAsString(test);
        }
        catch (Exception e) {
            int i = 0;
        }
        return jsonResponse;
        //return "{\"isBase64Encoded\":false, \"statusCode\":200, \"headers\":null, \"body\": \"n10k_website repository uploaded to S3\"}";
    }
}
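An alternative to a dedicated POJO, for what it's worth, is returning a plain Map with the same keys; Lambda's serializer likewise emits it as a JSON object rather than a quoted string. A sketch with illustrative names (the RequestHandler wiring is omitted):

```java
import java.util.HashMap;
import java.util.Map;

public class ProxyResponseSketch {

    // Builds the proxy-integration response shape; returning this Map from a
    // RequestHandler<Object, Map<String, Object>> serializes it as a JSON
    // object, not as a pre-serialized String.
    static Map<String, Object> response(int statusCode, String body) {
        Map<String, Object> resp = new HashMap<>();
        resp.put("isBase64Encoded", false);
        resp.put("statusCode", statusCode);
        resp.put("headers", new HashMap<String, String>());
        resp.put("body", body);
        return resp;
    }
}
```

Either way, the key point is that the handler's return value is an object that Lambda serializes exactly once.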
Hope this helps someone!