Making a New Directory with libcurl - C++

I have been tinkering with libcurl and so far it's nice, but a few things are really confusing me. I need to create a directory on a remote server, and here are my problems:
What do I pass in CURLOPT_URL? Is it a root URL or a full URL including the directory?
I want a ripple effect when creating directories, that is, if I have the directory /abc/def/ghi then all missing components should be created if they do not exist. I have tried CURLOPT_FTP_CREATE_MISSING_DIRS but it does not work.
I also tried MKD; it fails and I cannot say for sure why. Below are the relevant code and the log from my app.
CODE
CURL* handle = curl_easy_init();
SetHandleOptions(handle); // set options
CURLcode res;
wxString uploadUrl = ....; // full URL with path, like ftp.xyz.com/public_html/dir1/
wxString command1 = "MKD " + uploadUrl;
wxString command2 = "CWD " + uploadUrl;
struct curl_slist *headers = NULL;
headers = curl_slist_append(headers, command1.c_str());
headers = curl_slist_append(headers, command2.c_str());
const char* uploadUrlStr = uploadUrl.c_str();
if (handle)
{
    /* upload to this place */
    curl_easy_setopt(handle, CURLOPT_URL, uploadUrlStr);
    /* enable verbose for easier tracing */
    curl_easy_setopt(handle, CURLOPT_VERBOSE, 1L);
    curl_easy_setopt(handle, CURLOPT_FTP_CREATE_MISSING_DIRS, 1L);
    curl_easy_setopt(handle, CURLOPT_QUOTE, headers);
    res = curl_easy_perform(handle);
    if (res == CURLE_OK)
    {
        SendMessage(_("Successfully Created Directory: ") + uploadUrl, HERROR_TYPE_WARNING);
    }
    else
    {
        // send error message
        wxString str(curl_easy_strerror(res));
        SendMessage(str);
    }
    /* always cleanup */
    curl_easy_cleanup(handle);
}
else
{
    SendMessage(_("Could Not Connect to Server: Invalid Handle"), HERROR_TYPE_CRITICAL);
}
curl_slist_free_all(headers);
LOG
----------Wed Dec 18 01:33:15 2013----------
Changing Directory to / [01:33:20]
Successfully logged In [01:33:21]
No error [01:33:24]
Starting Files List Fetching... [01:33:24]
No error [01:33:26]
[01:33:32]
IDN support not present, can't parse Unicode domains
[01:33:32]
About to connect() to ftp.hosanna.site40.net port 21 (#2)
[01:33:33]
Trying 31.170.162.203...
[01:33:33]
Adding handle: conn: 0x7fffd0013110
[01:33:33]
Adding handle: send: 0
[01:33:33]
Adding handle: recv: 0
[01:33:33]
Curl_addHandleToPipeline: length: 1
[01:33:33]
- Conn 2 (0x7fffd0013110) send_pipe: 1, recv_pipe: 0
[01:33:33]
[01:33:33]
[01:33:33]
Closing connection 3
[01:33:33]
Couldn't resolve host name [01:33:33]
Connected to ftp.hosanna.site40.net (31.170.162.203) port 21 (#2)
[01:33:34]
220---------- Welcome to Pure-FTPd [privsep] ----------
220-You are user number 9 of 500 allowed.
220-Local time is now 17:33. Server port: 21.
220-This is a private system - No anonymous login
220 You will be disconnected after 3 minutes of inactivity.
[01:33:35]
USER xxxxxx
[01:33:35]
331 User xxxxxx OK. Password required
[01:33:35]
PASS xxxxxx
[01:33:35]
230-OK. Current restricted directory is /
230-124 files used (1%) - authorized: 10000 files
230 3051 Kbytes used (0%) - authorized: 1536000 Kb
[01:33:36]
PWD
[01:33:36]
257 "/" is your current location
[01:33:37]
Entry path is '/'
[01:33:37]
MKD ftp://ftp.hosanna.site40.net/public_html/Zulu names and meanings
[01:33:37]
ftp_perform ends with SECONDARY: 0
[01:33:37]
550-Can't create directory: No such file or directory
550-124 files used (1%) - authorized: 10000 files
550 3051 Kbytes used (0%) - authorized: 1536000 Kb
[01:33:37]
QUOT command failed with 550
[01:33:37]
Closing connection 2
[01:33:37]
Quote command returned error [01:33:37]

Make sure the paths are in the form /public_html/somedir, not ftp://ftp.somesite.com/public_html/somedir.
That is what was going wrong with my code, so I resolved it by removing the URL prefix and passing only the path to MKD and CWD. I believe there should be a section in the libcurl docs explaining the expected URL format. I will contribute that once I fully grasp it!
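To make the fix concrete, here is a minimal Python sketch (the helper name is hypothetical; the original code is C++/wxWidgets, but the path-extraction idea is language-independent) of deriving path-only QUOTE commands from a full FTP URL:

```python
from urllib.parse import urlparse

def ftp_quote_commands(upload_url):
    """Build MKD/CWD QUOTE commands from a full FTP URL.

    FTP servers expect a path like /public_html/dir1, not the whole
    ftp://host/... URL, which is what caused the 550 above.
    """
    path = urlparse(upload_url).path.rstrip("/")
    return ["MKD " + path, "CWD " + path]

print(ftp_quote_commands("ftp://ftp.xyz.com/public_html/dir1/"))
# ['MKD /public_html/dir1', 'CWD /public_html/dir1']
```

The URL itself still goes into CURLOPT_URL unchanged; only the raw FTP commands sent via CURLOPT_QUOTE must use server-side paths.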

Related

"queueproxy" : error reverse proxying request; sockstat: sockets TCP: inuse 27 orphan 2 tw 20 alloc 593 mem 52

I have an issue with respect to the above subject, using
knative v1.2.5
istio 1.12.7
Every 20 minutes we see the error below in the queue-proxy:
error: "context canceled"
knative.dev/key: "test-common-service/test-app-0-0-0"
knative.dev/pod: "test-app-0-0-0-deployment-xxxxxxx-xxxxx"
logger: "queueproxy"
message: "error reverse proxying request; sockstat: sockets: used 44
TCP: inuse 27 orphan 2 tw 20 alloc 593 mem 52
UDP: inuse 0 mem 3
UDPLITE: inuse 0
RAW: inuse 0
FRAG: inuse 0 memory 0
Can someone please let me know how I can fix this?
Thank you!
It looks like you may have requests which are hitting request timeouts in the queue proxy. What is your typical request latency, and what is the Revision timeoutSeconds set to?
It's also possible that istio is cancelling (resetting) some of the TCP connections to the queue-proxy, but that seems unlikely.
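For reference, the Revision timeout mentioned above is set on the Service's revision template; a minimal sketch with placeholder names (the namespace and app name are taken from the log keys, the image is invented):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: test-app
  namespace: test-common-service
spec:
  template:
    spec:
      # Max time the queue-proxy waits for a response before cancelling
      # the request ("context canceled"). Knative's default is 300s.
      timeoutSeconds: 300
      containers:
        - image: example.com/test-app:latest   # placeholder image
```

Comparing your observed request latency against this value should tell you whether the cancellations are timeout-driven.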

Django REST Framework slow on Gunicorn

I deployed a new application built on Django REST Framework, running with Gunicorn.
The application is deployed on 4 different servers, each listening on port 8001, which is consumed by an HAProxy load balancer.
Unfortunately I am suffering from many performance problems: many requests take seconds to be served, sometimes even 30 or 60 seconds.
Sometimes even the basic entry endpoint (like https://my.api/api/v2, which basically returns the list of available endpoints) takes 10-20 seconds to respond.
I don't think the problem is the load balancer, because I see the same latencies when connecting directly to any of the backend servers with my client, bypassing the load balancer.
And I don't think the problem is the database, because the call to https://my.api/api/v2 as a guest (not logged in with any user) shouldn't make any database query.
This is a performance test made with hey (https://github.com/rakyll/hey) on the basic endpoint without authorization:
me#staging2:~$ hey -n 10000 -c 100 https://my.api/api/v2/
Summary:
Total: 38.9165 secs
Slowest: 18.6772 secs
Fastest: 0.0041 secs
Average: 0.3099 secs
Requests/sec: 256.9604
Total data: 20870000 bytes
Size/request: 2087 bytes
Response time histogram:
0.004 [1] |
1.871 [9723] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
3.739 [0] |
5.606 [175] |■
7.473 [0] |
9.341 [35] |
11.208 [29] |
13.075 [0] |
14.943 [0] |
16.810 [0] |
18.677 [37] |
Latency distribution:
10% in 0.0054 secs
25% in 0.0122 secs
50% in 0.0322 secs
75% in 0.2378 secs
90% in 0.3417 secs
95% in 0.3885 secs
99% in 8.5935 secs
Details (average, fastest, slowest):
DNS+dialup: 0.0021 secs, 0.0041 secs, 18.6772 secs
DNS-lookup: 0.0001 secs, 0.0000 secs, 0.0123 secs
req write: 0.0001 secs, 0.0000 secs, 0.0153 secs
resp wait: 0.3075 secs, 0.0039 secs, 18.6770 secs
resp read: 0.0001 secs, 0.0000 secs, 0.0150 secs
Status code distribution:
[200] 10000 responses
This is my Gunicorn configuration:
bind = '0.0.0.0:8001'
backlog = 2048
workers = 1
worker_class = 'sync'
worker_connections = 1000
timeout = 120
keepalive = 5
spew = False
daemon = False
pidfile = None
umask = 0
user = None
group = None
tmp_upload_dir = None
errorlog = '-'
loglevel = 'debug'
accesslog = '-'
access_log_format = '%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"'
And Gunicorn is currently running with the following command:
/path/to/application/bin/python3 /path/to/application/bin/gunicorn --env DJANGO_SETTINGS_MODULE=settings.production -c /path/to/application/settings/gunicorn_conf.py --user=deployer --log-file=/path/to/application/django-application-gunicorn.log --chdir /path/to/application/django-application --daemon wsgi:application
What tests can I do to find out what is causing my performance problems?
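As an aside (not part of the original question): workers = 1 with worker_class = 'sync' in the config above means a single process handles one request at a time, so any slow request stalls everything queued behind it, which would produce exactly this long-tail latency pattern. The Gunicorn docs suggest (2 x CPU cores) + 1 as a starting point, which can be computed in the config file itself since it is plain Python:

```python
import multiprocessing

# Gunicorn's suggested starting point for sync workers:
# (2 x number of CPU cores) + 1. A heuristic, not a rule;
# tune under a realistic load test.
workers = multiprocessing.cpu_count() * 2 + 1
print(workers)
```

Whether this is the actual cause here would need confirming, e.g. by re-running the hey benchmark after raising the worker count.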

Snort Config: PCRE Matching across TCP Packets

I am working with my Security Onion, and at the moment none of the longer PCRE rules are working, because the rules and regexes are applied only to single packets, not to the reassembled TCP stream.
My snort.conf should have everything enabled:
# Target-based IP defragmentation. For more information, see README.frag3
preprocessor frag3_global: max_frags 65536
preprocessor frag3_engine: policy windows detect_anomalies overlap_limit 10 min_fragment_length 100 timeout 180
# Target-based stateful inspection/stream reassembly. For more information, see README.stream5
preprocessor stream5_global: track_tcp yes, \
track_udp yes, \
track_icmp no, \
max_tcp 262144, \
max_udp 131072, \
max_active_responses 2, \
min_response_seconds 5
preprocessor stream5_tcp: log_asymmetric_traffic no, policy windows, \
detect_anomalies, require_3whs 180, \
overlap_limit 10, small_segments 3 bytes 150, timeout 180, \
ports client 21 22 23 25 42 53 79 109 110 111 113 119 135 136 137 139 143 \
161 445 513 514 587 593 691 1433 1521 1741 2100 3306 6070 6665 6666 6667 6668 6669 \
7000 8181 32770 32771 32772 32773 32774 32775 32776 32777 32778 32779, \
ports both 80 81 311 383 443 465 563 591 593 636 901 989 992 993 994 995 1220 1414 1830 2301 2381 2809 3037 3128 3702 4343 4848 5250 6988 7907 7000 7001 7144 7145 7510 7802 7777 7779 \
7801 7900 7901 7902 7903 7904 7905 7906 7908 7909 7910 7911 7912 7913 7914 7915 7916 \
7917 7918 7919 7920 8000 8008 8014 8028 8080 8085 8088 8090 8118 8123 8180 8243 8280 8300 8800 8888 8899 9000 9060 9080 9090 9091 9443 9999 11371 34443 34444 41080 50002 55555
preprocessor stream5_udp: timeout 180
Now I got a node.js server up with a simple XML file (I made it a little bit shorter):
<?xml version="1.0" encoding="UTF-8"?>
<saml2p:Response Destination="http://localhost:8080/sp/saml/index.html"
ID="_28940080d39ea1191d9910414147f372"
InResponseTo="_15b217492a05e534df8539c7a84014cd"
IssueInstant="2018-07-24T16:04:13.830Z" Version="2.0"
xmlns:saml2p="urn:oasis:names:tc:SAML:2.0:protocol"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:oasis:names:tc:SAML:2.0:protocol">
<saml2:Issuer xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion">eLearning SAML SSO IdP</saml2:Issuer>
<saml2p:Status>
<saml2p:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>
</saml2p:Status>
<saml2:Assertion ID="_evil_assertion_ID"
IssueInstant="2018-07-24T16:04:13.831Z" Version="2.0"
xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:oasis:names:tc:SAML:2.0:protocol">
<saml2:Issuer>eLearning SAML SSO IdP</saml2:Issuer>
<saml2:Subject>
<saml2:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">xmluser</saml2:NameID>
[..........................]
</saml2:AuthnContext>
</saml2:AuthnStatement>
</saml2:Assertion>
</saml2p:Response>
A rule matching only content:"saml2p:response", or a small regex like pcre:"/saml2p:Response/smi", works without problems, but a rule with a longer regex does not match the pattern.
I made my rule as generic as possible:
alert tcp any any -> any any (msg:"ET WEB_SERVER SAML XSW3 Attack, Possible Signature Wrapping Attack v15"; pcre:"/saml2p:Response.*?/saml2p:Response/smi"; reference:url,https://www.usenix.org/system/files/conference/usenixsecurity12/sec12-final91.pdf; classtype:web-application-attack; sid:200000120; rev:1;)
edit: I have also tried this rule with all variants of flow:from_server,established; (to_server, from_client, to_client). None of them works for me. But when I cut the XML down to a size that fits in one packet, all rules fire!
The regex pcre:"/saml2p:Response.*?/saml2p:Response/smi" should match everything from the first Response tag to the closing Response tag, but whenever the payload is split across different TCP packets the regex does not match.
Am I missing anything?
Thanks for your help!
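To illustrate the suspected behaviour (a Python sketch, not Snort itself; Snort's /.../smi modifiers roughly correspond to re.S | re.M | re.I, and the sample payload is invented): the same pattern matches a contiguous buffer but fails when each packet is scanned in isolation, which is exactly what happens without stream reassembly.

```python
import re

# Roughly the pattern from the rule above.
pattern = re.compile(r"saml2p:Response.*?/saml2p:Response", re.S | re.M | re.I)

payload = "<saml2p:Response ...> lots of XML ... </saml2p:Response>"
# Pretend the payload was split across two TCP segments.
packets = [payload[:20], payload[20:]]

print(bool(pattern.search(payload)))            # True: reassembled stream
print(any(pattern.search(p) for p in packets))  # False: per-packet matching only
```

So the question boils down to getting the stream5 preprocessor to actually hand reassembled buffers to the detection engine for these ports.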

The page cannot be displayed because an internal server error has occurred

I have a site deployed on Azure with Django. I have been working on the site for about 6 months and it was working fine. Today, all of a sudden, the site is not working any more.
The only output I am getting is: "The page cannot be displayed because an internal server error has occurred."
I have the same Django code working perfectly on localhost. My Azure account has 2 Django sites running. Each site is deployed from 2 separate branches of a git repo. Both of them are down.
I tried redeploying earlier deployments on Azure; it still shows the same error.
I have also tried setting DEBUG = True on the production server. Same results. It seems Django is not even being loaded.
My Azure account shows a lot of "http server errors" when I check under the Monitor tab for the site.
What is the problem and how do I solve it? Is it a Django-not-loading problem or an Azure problem? How do I debug this?
Edit - error log from ftp - logfiles/http/rawlogs
# date time s-sitename cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) cs(Cookie) cs(Referer) cs-host sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken
2013-07-19 13:20:30 MAKEYSTREET GET /robots.txt X-ARR-LOG-ID=dbb6383f-5342-4fe4-a419-e3f5f373a8f8 80 - 66.249.74.139 Mozilla/5.0+(compatible;+Googlebot/2.1;++http://www.google.com/bot.html) - - www.makeystreet.com 500 0 0 245 416 3484
2013-07-19 13:34:28 MAKEYSTREET GET / X-ARR-LOG-ID=b4c37c5c-46b3-4ff5-9ae7-e0f18f04b404 80 - 10.21.91.242 Mozilla/5.0+(X11;+Linux+i686)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Ubuntu+Chromium/28.0.1500.52+Chrome/28.0.1500.52+Safari/537.36 ARRAffinity=cf6d46f9fadcc9b088669ec23467baf056a50376566d6aacdbea46e06b6acfa5;+WAWebSiteSID=6c2bf64764594b7ca90e2b539b89b9e6;+sessionid=rhvhu9ibb814v1vwqe3nahkkt1lw0yvu;+csrftoken=gO1x5Vm6kzDRKhmbyjbUuVqWKLUjW3Qy;+_ga=GA1.2.1129966823.1374156576 - makeystreet.com 500 0 0 269 897 3546
2013-07-19 13:34:28 MAKEYSTREET GET /favicon.ico X-ARR-LOG-ID=d918daf9-815e-4ef7-8360-525c9c31506c 80 - 10.21.91.242 Mozilla/5.0+(X11;+Linux+i686)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Ubuntu+Chromium/28.0.1500.52+Chrome/28.0.1500.52+Safari/537.36 ARRAffinity=cf6d46f9fadcc9b088669ec23467baf056a50376566d6aacdbea46e06b6acfa5;+WAWebSiteSID=6c2bf64764594b7ca90e2b539b89b9e6;+sessionid=rhvhu9ibb814v1vwqe3nahkkt1lw0yvu;+csrftoken=gO1x5Vm6kzDRKhmbyjbUuVqWKLUjW3Qy;+_ga=GA1.2.1129966823.1374156576 - makeystreet.com 500 0 0 269 864 93
2013-07-19 14:04:47 MAKEYSTREET GET /robots.txt X-ARR-LOG-ID=1b516136-4a39-4543-b9a4-540ce1fce959 80 - 66.249.74.139 Mozilla/5.0+(compatible;+Googlebot/2.1;++http://www.google.com/bot.html) - - www.makeystreet.com 500 0 0 245 416 3515
2013-07-19 14:05:54 MAKEYSTREET GET / X-ARR-LOG-ID=7dc870f8-a7d4-4701-8b75-042789a362cf 80 - 122.166.237.111 Mozilla/5.0+(Windows+NT+6.1;+WOW64;+rv:22.0)+Gecko/20100101+Firefox/22.0 - - www.makeystreet.com 500 0 0 245 440 187
2013-07-19 14:05:54 MAKEYSTREET GET /favicon.ico X-ARR-LOG-ID=64681a43-c47e-4aea-982d-bd3dace9b19e 80 - 122.166.237.111 Mozilla/5.0+(Windows+NT+6.1;+WOW64;+rv:22.0)+Gecko/20100101+Firefox/22.0 ARRAffinity=cf6d46f9fadcc9b088669ec23467baf056a50376566d6aacdbea46e06b6acfa5;+WAWebSiteSID=2e0ff00f9fd84e28ac6b757c93b43b23 - www.makeystreet.com 500 0 0 245 595 62
2013-07-19 14:06:10 MAKEYSTREET GET / X-ARR-LOG-ID=db1870ec-3b2a-4449-ac13-05ab67ffa212 80 - 122.166.237.111 Mozilla/5.0+(Windows+NT+6.1;+WOW64;+rv:22.0)+Gecko/20100101+Firefox/22.0 ARRAffinity=cf6d46f9fadcc9b088669ec23467baf056a50376566d6aacdbea46e06b6acfa5;+WAWebSiteSID=2e0ff00f9fd84e28ac6b757c93b43b23 - www.makeystreet.com 500 0 0 245 573 109
2013-07-19 14:21:55 MAKEYSTREET GET / X-ARR-LOG-ID=4e72fbec-ff2e-4068-b9fe-0a536718e8c7 80 - 10.21.91.242 Mozilla/5.0+(X11;+Linux+i686)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Ubuntu+Chromium/28.0.1500.52+Chrome/28.0.1500.52+Safari/537.36 ARRAffinity=cf6d46f9fadcc9b088669ec23467baf056a50376566d6aacdbea46e06b6acfa5;+WAWebSiteSID=6c2bf64764594b7ca90e2b539b89b9e6;+sessionid=rhvhu9ibb814v1vwqe3nahkkt1lw0yvu;+csrftoken=gO1x5Vm6kzDRKhmbyjbUuVqWKLUjW3Qy;+_ga=GA1.2.1129966823.1374156576 - makeystreet.com 500 0 0 245 887 3515
2013-07-19 14:21:56 MAKEYSTREET GET /favicon.ico X-ARR-LOG-ID=ee05ce88-08cc-4837-94a4-4ef0326a46be 80 - 10.21.91.242 Mozilla/5.0+(X11;+Linux+i686)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Ubuntu+Chromium/28.0.1500.52+Chrome/28.0.1500.52+Safari/537.36 ARRAffinity=cf6d46f9fadcc9b088669ec23467baf056a50376566d6aacdbea46e06b6acfa5;+WAWebSiteSID=6c2bf64764594b7ca90e2b539b89b9e6;+sessionid=rhvhu9ibb814v1vwqe3nahkkt1lw0yvu;+csrftoken=gO1x5Vm6kzDRKhmbyjbUuVqWKLUjW3Qy;+_ga=GA1.2.1129966823.1374156576 - makeystreet.com 500 0 0 269 864 124
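As a quick sanity check (a Python sketch, not from the original post), the raw W3C log above can be parsed by zipping the #Fields header against each line's whitespace-split tokens (IIS '+'-encodes spaces inside fields, so a plain split works), confirming that sc-status is 500 for every request:

```python
# Field names from the "# date time s-sitename ..." header line in the log.
fields = ("date time s-sitename cs-method cs-uri-stem cs-uri-query s-port "
          "cs-username c-ip cs(User-Agent) cs(Cookie) cs(Referer) cs-host "
          "sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken").split()

# First data line from the log above.
line = ("2013-07-19 13:20:30 MAKEYSTREET GET /robots.txt "
        "X-ARR-LOG-ID=dbb6383f-5342-4fe4-a419-e3f5f373a8f8 80 - 66.249.74.139 "
        "Mozilla/5.0+(compatible;+Googlebot/2.1;++http://www.google.com/bot.html) "
        "- - www.makeystreet.com 500 0 0 245 416 3484")

record = dict(zip(fields, line.split()))
print(record["sc-status"])   # 500
```

Since even /robots.txt and /favicon.ico return 500, the failure is clearly upstream of any Django view, pointing at the WSGI handler or the Azure web.config rather than application code.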

Curl is slow (I assume I am not setting an option correctly)

I have written this minimal example that shows my problem:
#include "curl/curl.h"
#include <stdexcept>
#include <string>
#include <iostream>
#include <stdlib.h>
int main(int argc, char* argv[])
{
    // All error checking removed for clarity.
    // But all calls return CURLE_OK
    curl_global_init(CURL_GLOBAL_ALL);
    CURL* curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:14.0) Gecko/20120405 Firefox/14.0a1");
    curl_easy_setopt(curl, CURLOPT_URL, argv[1]);
    curl_easy_perform(curl);
    long result;
    curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &result);
    std::cout << "HTTP RESPONCE: " << result << "\n";
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
Compile and run:
> g++ test.cpp -lcurl
> time ./a.out thorsanvil.com
<html><head><title>Nothing here</title></head><body><h1>Nothing Here</h1><h2>Go away</h2></body></html>
HTTP RESPONCE: 200
real 0m5.144s
user 0m0.016s
sys 0m0.000s
As you can see it is taking 5 seconds to get a response from the server.
If I repeat the same request using wget, it takes only 0.2 to 0.5 seconds:
> time wget thorsanvil.com
--2012-05-28 05:24:17-- http://thorsanvil.com/
Resolving thorsanvil.com (thorsanvil.com)... 67.170.22.105
Connecting to thorsanvil.com (thorsanvil.com)|67.170.22.105|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104 [text/html]
Saving to: `index.html'
100%[======================================================================================================================>] 104 --.-K/s in 0s
2012-05-28 05:24:18 (589 KB/s) - `index.html.3' saved [104/104]
real 0m0.493s
user 0m0.008s
sys 0m0.000s
But using the curl command-line tool is also slow:
> time curl thorsanvil.com
<html><head><title>Nothing here</title></head><body><h1>Nothing Here</h1><h2>Go away</h2></body></html>
real 0m5.240s
user 0m0.004s
sys 0m0.012s
Any ideas on why my simple version is not working as expected?
Edited
From comments. Version of libcurl
> ls -la /usr/lib/x86_64-linux-gnu/libcurl*
-rw-r--r-- 1 root root 748262 Mar 22 16:51 /usr/lib/x86_64-linux-gnu/libcurl.a
lrwxrwxrwx 1 root root 19 Mar 22 16:51 /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.3 -> libcurl-gnutls.so.4
lrwxrwxrwx 1 root root 23 Mar 22 16:51 /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4 -> libcurl-gnutls.so.4.2.0
-rw-r--r-- 1 root root 360488 Mar 22 16:52 /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.2.0
-rw-r--r-- 1 root root 950 Mar 22 16:51 /usr/lib/x86_64-linux-gnu/libcurl.la
lrwxrwxrwx 1 root root 16 Mar 22 16:51 /usr/lib/x86_64-linux-gnu/libcurl.so -> libcurl.so.4.2.0
lrwxrwxrwx 1 root root 12 Mar 22 16:51 /usr/lib/x86_64-linux-gnu/libcurl.so.3 -> libcurl.so.4
lrwxrwxrwx 1 root root 16 Mar 22 16:51 /usr/lib/x86_64-linux-gnu/libcurl.so.4 -> libcurl.so.4.2.0
-rw-r--r-- 1 root root 381512 Mar 22 16:52 /usr/lib/x86_64-linux-gnu/libcurl.so.4.2.0
> curl --version
curl 7.21.7 (x86_64-unknown-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap pop3 pop3s rtmp rtsp smtp smtps telnet tftp
Features: GSS-Negotiate IDN IPv6 Largefile NTLM SSL libz TLS-SRP
OK. I'm not sure what happened to the system.
I got the admin to uninstall and then reinstall curl, and everything is now working correctly.
So it looks like the curl installation got messed up somehow.
Thanks.