curl.h no such file or directory - c++

I installed curl with this command (I use Ubuntu):
sudo apt-get install curl
When I test a simple program using g++ test.cpp:
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    CURL *curl;
    CURLcode res;

    curl = curl_easy_init();
    if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com");
        /* Perform the request, res will get the return code */
        res = curl_easy_perform(curl);
        /* Check for errors */
        if(res != CURLE_OK)
            fprintf(stderr, "curl_easy_perform() failed: %s\n",
                    curl_easy_strerror(res));
        /* always cleanup */
        curl_easy_cleanup(curl);
    }
    return 0;
}
g++ shows me:
fatal error: curl/curl.h: No such file or directory
compilation terminated.
Can anyone help me?

sudo apt-get install libcurl-dev
(will install the default alternative)
OR
sudo apt-get install libcurl4-openssl-dev
(the OpenSSL variant)
OR
sudo apt-get install libcurl4-gnutls-dev
(the GnuTLS variant)

To those who use CentOS and have stumbled upon this post:
$ yum install curl-devel
and when compiling your program example.cpp, link to the curl library:
$ g++ example.cpp -lcurl -o example
"-o example" creates the executable example instead of the default a.out.
The next line runs example:
$ ./example

Instead of downloading curl, download libcurl.
curl is just the command-line application; libcurl is the library you need for your C++ program:
http://packages.ubuntu.com/quantal/curl
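Once one of the libcurl development packages above is installed, the program from the question should build with g++ test.cpp -lcurl -o test. As a rough sketch (the callback name and printed message are just illustrative), the same program can also capture the response body via CURLOPT_WRITEFUNCTION, which is usually the next step:

#include <cstdio>
#include <string>
#include <curl/curl.h>

/* libcurl calls this for every chunk of data received; we append it to a
   std::string passed in through CURLOPT_WRITEDATA. */
static size_t write_to_string(char *ptr, size_t size, size_t nmemb, void *userdata)
{
    std::string *body = static_cast<std::string *>(userdata);
    body->append(ptr, size * nmemb);
    return size * nmemb;   /* tell libcurl the whole chunk was consumed */
}

int main(void)
{
    std::string body;
    CURL *curl = curl_easy_init();
    if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com");
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_to_string);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
        CURLcode res = curl_easy_perform(curl);
        if(res != CURLE_OK)
            fprintf(stderr, "curl_easy_perform() failed: %s\n", curl_easy_strerror(res));
        else
            printf("received %zu bytes\n", body.size());
        curl_easy_cleanup(curl);
    }
    return 0;
}

Both the header (curl/curl.h) and the -lcurl link flag come from the libcurl -dev package; installing only the curl package provides neither.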

Yes, please install curl-devel as instructed above.
Also don't forget to link against the curl library when invoking g++, e.g.:
-L/path/of/curl/lib -lcurl
Cheers

If, after installing curl-dev, luarocks still does not see the headers, locate them:
find /usr -name 'curl.h'
Example: /usr/include/x86_64-linux-gnu/curl/curl.h
luarocks install lua-cURL CURL_INCDIR=/usr/include/x86_64-linux-gnu/

For those of you who stumbled on this post after Googling "R curl curl.h no such file or directory" (first link), who are on Windows, and want to install curl in R, the solution is pretty simple and fast.
Launch Rtools Bash from the Rtools folder in the Windows all-apps menu.
Throw pacman -Syuv in the command line to make sure you're up-to-date.
pacman -S mingw-w64-x86_64-curl fixes the problem. You can now go back to R and install curl without any issues. No more curl.h missing errors.

Encountered this while building git on CentOS 8 Stream.
dnf search libcurl
sudo yum install libcurl
sudo yum install libcurl-devel
Now, everything ran fine, and git installed.

I am running Ubuntu 21.10 and still can't get curl.h to be recognized, even after everything said above. I'm going to just grab it from someone's repo and use it alone. I will keep everyone updated.

You can install libcurl; that should solve the problem. You can find the commands to install it in the other answers.
If you are still facing the same problem, you can locate the curl.h file on your system and copy the headers to the required location.
You can find the curl headers with:
find /usr -name 'curl.h'
From the above you'll get the location. Copy the curl directory from that location to the required location using the cp command:
cp -r CURL_DIR/curl/ REQUIRED_DIR/curl/

Related

Fixing gcc undefined include<> by manually installing library

I am running the golang command go get -t github.com/otiai10/gosseract, which fails with the error tessbridge.cpp:5:10: fatal error: leptonica/allheaders.h: No such file or directory, #include <leptonica/allheaders.h>. That library is https://github.com/DanBloomberg/leptonica. How do I install it from source so that the gcc command will work?
Before that, the command was producing the error "gcc not found", but then I followed https://superuser.com/questions/1294343/install-gcc-in-git-for-windows-bash-environment to set up gcc on Windows.
I have not been able to find any references for what gcc expects when it encounters an #include <>, or where those files should be located on the file system for it to link properly. Is it possible to install this library manually?
Here is a much simpler solution for you; there is no need to install gcc in git-bash.
Install MSYS2. Follow the complete installation guide.
In the MSYS2 console, enter the following commands:
pacman -S mingw-w64-x86_64-gcc
pacman -S mingw-w64-x86_64-leptonica
Add C:\msys64\mingw64\bin to PATH.
The first step can be further simplified if you use Chocolatey. Just run these commands in an elevated PowerShell (ignore the first command if choco is already installed):
Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
Reopen the elevated PowerShell and run these:
choco install -y msys2 --params="/InstallDir:C:\msys64"
refreshenv
$env:Path += ";C:\msys64\usr\bin"
pacman -S mingw-w64-x86_64-gcc
pacman -S mingw-w64-x86_64-leptonica
[Environment]::SetEnvironmentVariable("Path", "C:\msys64\mingw64\bin;" + $env:Path, "User")
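To confirm that the MinGW g++ can actually see both the leptonica header and library (independently of the Go build), a tiny check program can help. This is only a sketch; the link flag is usually -llept, though some packages install the library as -lleptonica instead:

#include <cstdio>
#include <leptonica/allheaders.h>   /* the header the gosseract build could not find */

int main(void)
{
    /* Create and destroy a small 32-bit image. If this compiles, links and
       runs, the toolchain can find both the leptonica headers and the library. */
    PIX *pix = pixCreate(100, 100, 32);
    if (!pix) {
        fprintf(stderr, "pixCreate failed\n");
        return 1;
    }
    printf("leptonica found: created a %d x %d image\n",
           pixGetWidth(pix), pixGetHeight(pix));
    pixDestroy(&pix);
    return 0;
}

If this builds cleanly (e.g. g++ check.cpp -llept -o check), the go get build should no longer fail on leptonica/allheaders.h.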

C++ compiling project with shared object (libtensorflow_cc.so) failed

At the moment I'm facing some problems compiling (and running) a (huge) project of my own with TensorFlow support. On my own system (Ubuntu 16.04 LTS) everything works fine. The same procedure on a cluster leads to a compile error, and I haven't been able to find a solution yet.
System information
Have I written custom code: YES
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): CentOS 7.4.1708
TensorFlow installed from (source or binary): source (using the git repo)
TensorFlow version (use command below): 1.9
Python version: 2.7.15
Bazel version (if compiling from source): 0.16
GCC/Compiler version (if compiling from source): 7.30
CUDA/cuDNN version: not used
GPU model and memory: Tesla K20m
Exact command to reproduce:
Cloned tensorflow repo from github
Configure Bazel (./configure in tensorflow repo)
Built libtensorflow_cc.so with bazel (worked fine!!!)
Downloaded dependencies with delivered script tensorflow/contrib/makefile/download_dependencies.sh
(tried installing protobuf & eigen manually, too)!
Installed protobuf with ./autogen.sh && ./configure && make && make install
Installed Eigen from downloaded dependencies
Copied libraries, headers and includes into my own project:
$ cp bazel-bin/tensorflow/libtensorflow_cc.so ../tf_project/lib/
$ cp bazel-bin/tensorflow/libtensorflow_framework.so ../tf_project/lib/
$ cp /tmp/proto/lib/libprotobuf.a ../tf_project/lib/
$ mkdir -p ../tf_project/include/tensorflow
$ cp -r bazel-genfiles/* ../tf_project/include/
$ cp -r tensorflow/cc ../tf_project/include/tensorflow
$ cp -r tensorflow/core ../tf_project/include/tensorflow
$ cp -r third_party ../tf_project/include
$ cp -r /tmp/proto/include/* ../tf_project/include
$ cp -r /tmp/eigen/include/eigen3/* ../tf_project/include
Remark: On my own system it has worked this way for weeks now. I can use TensorFlow in my own project with an exported model trained with Keras inside a Python project. I make predictions using client sessions and many other functions of the TensorFlow framework, and it works properly.
Problem: On the cluster I can compile TensorFlow as a dynamic library and install protobuf and Eigen.
When I try to compile my project (the same process as on my own system) without significant changes, it doesn't work and stops with the following error message:
.../tensorflow/include/tensorflow/core/framework/tensor.pb.h:12:2: error: #error This file was generated by a newer version of protoc which is
#error This file was generated by a newer version of protoc which is
^~~~~
.../include/tensorflow/core/framework/tensor.pb.h:13:2: error: #error incompatible with your Protocol Buffer headers. Please update
#error incompatible with your Protocol Buffer headers. Please update
^~~~~
.../include/tensorflow/core/framework/tensor.pb.h:14:2: error: #error your headers.
#error your headers.
^~~~~
.../include/tensorflow/core/framework/tensor.pb.h:27:10: fatal error: google/protobuf/inlined_string_field.h: No such file or directory
#include <google/protobuf/inlined_string_field.h>
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
So, clearly this should be the issue:
This file was generated by a newer version of protoc which is incompatible with your Protocol Buffer headers. Please update your headers.
and
fatal error: google/protobuf/inlined_string_field.h: No such file or directory
Tried solutions:
I tried different versions of protobuf, but there was an error on every single try.
I tried installing protobuf and Eigen manually (without the download_dependencies.sh script).
I'm puzzled because my own installation, following exactly the same steps, works properly. Maybe there is an issue with one of the components, although I tried different versions to make sure these aren't "new" issues.
Can someone help me solve this error so that I can compile and run this project on the other machine?
Looking forward to helpful solutions :) Thank you very much for the support!
Best regards from Germany!
First, I checked on GitHub: tensorflow r1.9 requires protobuf >= 3.6.0. With the download_dependencies.sh script you always get protobuf 3.5.0, which lacks inlined_string_field.h and some other headers.
Second, regarding the errors that ask you to update the protobuf version: I tried many versions of protobuf too. Only version 3.6.0 works well, rather than version 3.6.1 or older ones.
For these errors,
error This file was generated by a newer version of protoc which is
fatal error: google/protobuf/inlined_string_field.h: No such file or directory
It is a version mismatch problem. My solution is to manually download protobuf 3.6.0 from
https://github.com/protocolbuffers/protobuf/releases/download/v3.6.0/protoc-3.6.0-linux-x86_64.zip
install it, and cp /usr/local/include/google to somewhere/tf/include. It works quite well for me. I don't know if you tried version 3.6.0.
The download_dependencies.sh script provides a mismatched version of protobuf. See the issue I posted on GitHub:
https://github.com/tensorflow/tensorflow/issues/22536
Also, I noticed that the files you copied are different from mine. I did it this way:
sudo mkdir /usr/local/tensorflow/include
sudo cp -r tensorflow/contrib/makefile/downloads/eigen/Eigen /usr/local/tensorflow/include/
sudo cp -r tensorflow/contrib/makefile/downloads/eigen/unsupported /usr/local/tensorflow/include/
sudo cp -r tensorflow/contrib/makefile/gen/protobuf/include/google /usr/local/tensorflow/include/
sudo cp tensorflow/contrib/makefile/downloads/nsync/public/* /usr/local/tensorflow/include/
sudo cp -r bazel-genfiles/tensorflow /usr/local/tensorflow/include/
sudo cp -r tensorflow/cc /usr/local/tensorflow/include/tensorflow
sudo cp -r tensorflow/core /usr/local/tensorflow/include/tensorflow
sudo mkdir /usr/local/tensorflow/include/third_party
sudo cp -r third_party/eigen3 /usr/local/tensorflow/include/third_party/
sudo mkdir /usr/local/tensorflow/lib
sudo cp bazel-bin/tensorflow/libtensorflow_*.so /usr/local/tensorflow/lib
Btw, I ran build_all_linux.sh instead of download_dependencies.sh.
I hope it will be helpful for you.
I met this problem before; the following is my solution:
I successfully built tensorflow-r1.8 with Bazel, and I found that the protoc at the following path is 3.5.0:
/home/zsb/.cache/bazel/_bazel_zsb/1372f28eb0671f692e7ac38330377d8c/execroot/org_tensorflow/bazel-out/host/bin/external/protobuf_archive/protoc --version
but my system was actually using protoc version 3.4.0.
I confirmed this by typing "protoc --version" directly.
So finally I updated the system protobuf version to 3.5.0 and this problem was fixed.
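If it is unclear which protobuf headers and which libprotobuf your build is actually picking up, a small check program can make the mismatch visible. This is just a sketch (compile with something like g++ check.cpp -lprotobuf -o check): GOOGLE_PROTOBUF_VERSION reports the header version, and GOOGLE_PROTOBUF_VERIFY_VERSION aborts with a descriptive message if the linked library does not match, which is the same class of mismatch the tensor.pb.h #error reports.

#include <cstdio>
#include <google/protobuf/stubs/common.h>

int main(void)
{
    /* Version of the protobuf headers, encoded as e.g. 3006000 for 3.6.0. */
    printf("protobuf header version: %d\n", GOOGLE_PROTOBUF_VERSION);

    /* Aborts at runtime if the linked libprotobuf is incompatible with the
       headers this program was compiled against. */
    GOOGLE_PROTOBUF_VERIFY_VERSION;

    google::protobuf::ShutdownProtobufLibrary();
    return 0;
}

Running this on both machines, and comparing the output with protoc --version, quickly shows which of the three (headers, library, protoc) is out of step.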

update curl on centos

There is curl v7.19.7 on my CentOS. Since I want to develop a C++ program to send email, I got the curl v7.50.3 source code and installed it (configure, make, make install). Although my C++ program builds successfully, when I try to execute it there are errors:
* Protocol smtp not supported or disabled in libcurl
* Unsupported protocol
curl_easy_perform() failed: Unsupported protocol
When I run the command curl --version, it shows:
curl 7.50.3 (x86_64-pc-linux-gnu) libcurl/7.19.7 NSS/3.13.6.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
It seems libcurl still refers to the older version!
I tried
1. rpm -q curl
2. rpm -e --nodeps curl-7.19.7-35.el6.x86_64
but it made no difference. I also added "/usr/local/lib" to /etc/ld.so.conf, but it is still not working!
How can I cleanly remove the old curl library (v7.19.7) so that my C++ program refers to the new curl (v7.50.3)?
Try using "ldd" on your executable to check what version of curl it is using.
Make sure the libcurl path matches "/usr/local/lib" or whereever you installed the curl you compiled
[user#computer bin]$ ldd myExecutable | grep curl
libcurl.so.4 => /usr/local/lib/libcurl.so.4 (0x00...)
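Besides ldd, you can also ask libcurl itself, at runtime, which version was loaded and which protocols it supports; if smtp is missing from the list, your program is still picking up the old 7.19.7 library. A minimal sketch (link with -lcurl):

#include <cstdio>
#include <curl/curl.h>

int main(void)
{
    /* curl_version_info reports the libcurl actually loaded at runtime,
       which may differ from the headers you compiled against. */
    curl_version_info_data *info = curl_version_info(CURLVERSION_NOW);
    printf("libcurl version: %s\n", info->version);
    printf("supported protocols:");
    for (const char *const *p = info->protocols; *p; ++p)
        printf(" %s", *p);
    printf("\n");
    return 0;
}

A 7.50.3 libcurl built with default options lists smtp here; the stock 7.19.7 build cannot, since SMTP support was only added in curl 7.20.0.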
You can use the following commands:
rpm -Uvh http://www.city-fan.org/ftp/contrib/yum-repo/city-fan.org-release-2-1.rhel6.noarch.rpm
yum --enablerepo=city-fan.org update curl
Enter this command to see the version:
curl -V

apktool build apk fails

I am experiencing very annoying problems with the apktool application.
I do not understand what I am doing wrong, or what the problem is.
I tried this on Debian and on Linux Mint. I used different versions of apktool,
resulting in the same error:
I: Checking whether sources has changed...
I: Checking whether resources has changed...
I: Building resources...
Exception in thread "main" brut.androlib.AndrolibException: brut.common.BrutException: could not exec command: [aapt, p, -F, /tmp/APKTOOL3630495287059303807.tmp, -I, /home/awesomename/apktool/framework/1.apk, -S, /home/awesomename/out/./res, -M, /home/awesomename/out/./AndroidManifest.xml]
at brut.androlib.res.AndrolibResources.aaptPackage(Unknown Source)
at brut.androlib.Androlib.buildResourcesFull(Unknown Source)
at brut.androlib.Androlib.buildResources(Unknown Source)
at brut.androlib.Androlib.build(Unknown Source)
at brut.androlib.Androlib.build(Unknown Source)
at brut.apktool.Main.cmdBuild(Unknown Source)
at brut.apktool.Main.main(Unknown Source)
Caused by: brut.common.BrutException: could not exec command: [aapt, p, -F, /tmp/APKTOOL3630495287059303807.tmp, -I, /home/windows/apktool/framework/1.apk, -S, /home/windows/out/./res, -M, /home/windows/out/./AndroidManifest.xml]
at brut.util.OS.exec(Unknown Source)
... 7 more
Caused by: java.io.IOException: Cannot run program "aapt": error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1041)
at java.lang.Runtime.exec(Runtime.java:617)
at java.lang.Runtime.exec(Runtime.java:485)
... 8 more
Caused by: java.io.IOException: error=2, No such file or directory
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:135)
at java.lang.ProcessImpl.start(ProcessImpl.java:130)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1022)
... 10 more
It seems it cannot use aapt, but I read about apktool,
and it seems that aapt is built inside apktool, so why is it not working?
It seems there's some problem in building the resources while recompiling the apk.
What you can do is, when you decompile your apk, use this command:
apktool d -f -r apkfilename.apk
Here -f replaces the previously decompiled apk's code and -r skips decompiling the resources.
This prevents the resources from being decompiled and simply copies the same resources back when you recompile the apk.
In case you've been using v1 and have now upgraded to v2, try manually deleting the framework file.
On Windows 8 it's normally at C:\Users\YourName\apktool\framework\1.apk.
The file should be regenerated once you try to build something.
My problem was solved by deleting the \framework\1.apk, making a backup of the files I modified, erasing the dir and decompiling the *.apk again, etc. (on Linux, the path is home/[user]/apktool/...). After the update, apktool always loaded the old resource table.
For me, I solved this problem by first clearing apktool's framework directory by typing in the terminal:
$ apktool empty-framework-dir
Afterwards I uninstalled apktool and related files by typing
$ sudo apt purge apktool
Then I went to https://bitbucket.org/iBotPeaches/apktool/downloads/ to get the latest jar file for apktool (apktool_2.5.0.jar as of the time of writing).
On first run
$ java -jar apktool_2.5.0.jar b <MyAPP.apk> #Without ><
it works.
Since I work with apktool most of the time, I wanted to be able to run apktool from anywhere, so I gave the jar file execute permissions by typing
$ sudo chmod +x apktool_2.5.0.jar
Afterwards I moved it to /usr/bin/ by typing
$ sudo mv apktool_2.5.0.jar /usr/bin/
Definitely seems like the aapt PATH problem I had a while back. Have you added aapt to PATH? If you still have problems, I have made a good apk kit in bash to avoid all these dependency problems. It supports apktool, signapk, zipalign, adb, fastboot, and heimdall. Check it out. All you need is a current Java install.
http://forum.xda-developers.com/android/development/toolkit-apk-munky-rench-t3026757/post58747626#post58747626
There isn’t really enough information to give you a definite answer.
However, you mentioned using different versions, but the aapt issue was solved in version 2.4. Dependencies have been reduced to Java version 1.8 or greater and the framework.
I use Debian and have the following:
Apktool 2.4
java version 11
Android framework
That’s all it took to get rid of the aapt path error.
The last error I came across was unrelated to aapt but was about the framework, so I ran this command:
apktool empty-framework-dir
And it solved it.
Try to put the directory which includes the aapt file in your PATH, for example export PATH=$PATH:./ and then run ./apktool b
Try to install ia32-libs and update to the latest version of apktool (if possible, restart).
apktool requires "ia32-libs", which is not available after Ubuntu 12.04. To install the ia32-libs equivalents:
sudo apt-get install lib32z1 lib32ncurses5 lib32bz2-1.0 lib32stdc++6
Download the latest version of apktool.jar - https://bitbucket.org/iBotPeaches/apktool/downloads
apktool complete installation guide - http://ibotpeaches.github.io/Apktool/install/
I just encountered the same problem when running apktool d foo.apk (decompile succeeded) and then apktool b foo (recompile failed with a similar error).
The apktool above was installed via sudo apt-get install apktool on Kali Linux.
So the solution was to visit apktool's official site, e.g. https://connortumbleson.com/2017/01/23/apktool-v2-2-2-released/ (the latest version at the time of writing), download it, verify it with md5sum, e.g. md5sum apktool_2.2.2.jar, then rename that apktool_2.2.2.jar to apktool.jar.
Then run java -jar ./apktool.jar b foo to recompile; it succeeds without error (the generated apk is located at ./foo/dist/foo.apk).
The main issue is the apktool version: you need 2.4.0.
You must manually install it from iBotPeaches' GitHub.
Here is some good info:
https://www.youtube.com/watch?v=kB6s10Uwpcs
and an automated script for Kali:
https://github.com/catenatedgoose?tab=repositories
In my mind, the problem is how you installed apktool...
I had the same problem, and I did this and it worked very well:
For installation, you first have to remove any installed apktool with the command:
sudo apt purge apktool
Then you'll have to install apktool but in a different way.
To continue, save the file at the link below as apktool in a directory:
https://raw.githubusercontent.com/iBotPeaches/Apktool/master/scripts/linux/apktool
Then open this link below and download the latest apktool.jar file: https://bitbucket.org/iBotPeaches/apktool/downloads/
Then rename the file to apktool.jar.
After that, give both files execute permission with the command:
sudo chmod +x apktool.jar
And for the saved script:
sudo chmod +x apktool
At the end, copy both files into the directory /usr/local/bin with the command:
sudo cp apktool.jar /usr/local/bin
And the script file:
sudo cp apktool /usr/local/bin
After that, try running apktool in the terminal.
The solution is to include your apktool directory into your system PATH.

How do I install and build against OpenSSL 1.0.0 on Ubuntu?

You can consider this a follow-up question to How do I install the OpenSSL C++ library on Ubuntu?
I'm trying to build some code on Ubuntu 10.04 LTS that requires OpenSSL 1.0.0.
Ubuntu 10.04 LTS comes with OpenSSL 0.9.8k:
$ openssl version
OpenSSL 0.9.8k 25 Mar 2009
So after running sudo apt-get install libssl-dev and building, running ldd confirms I've linked in 0.9.8:
$ ldd foo
...
libssl.so.0.9.8 => /lib/i686/cmov/libssl.so.0.9.8 (0x00110000)
...
libcrypto.so.0.9.8 => /lib/i686/cmov/libcrypto.so.0.9.8 (0x002b0000)
...
How do I install OpenSSL 1.0.0 and the 1.0.0 development package?
Update: I'm writing this update after reading SB's answer (but before trying it), because it's clear I need to explain that the obvious solution of downloading and installing OpenSSL 1.0.0 doesn't work:
After successfully doing the following (recommended in the INSTALL file):
$ ./config
$ make
$ make test
$ make install
...I still get:
OpenSSL 0.9.8k 25 Mar 2009
...and:
$ sudo apt-get install libssl-dev
Reading package lists... Done
Building dependency tree
Reading state information... Done
libssl-dev is already the newest version.
The following packages were automatically installed and are no longer required:
linux-headers-2.6.32-21 linux-headers-2.6.32-21-generic
Use 'apt-get autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
...and (just to make sure) after rebuilding my code, ldd still returns the same thing.
Update #2: I added the "-I/usr/local/ssl/include" and "-L/usr/local/ssl/lib" options (suggested by SB) to my makefile, but I'm now getting a bunch of undefined reference compile errors, for example:
/home/dspitzer/foo/foo.cpp:86: undefined reference to `BIO_f_base64'
/home/dspitzer/foo/foo.cpp:86: undefined reference to `BIO_new'
/usr/local/ssl/include/ contains only an openssl directory (which contains numerous .h files), so I also tried "-I/usr/local/ssl/include/openssl" but got the same errors.
Update #3: I tried changing the OpenSSL includes from (for example):
#include <openssl/bio.h>
...to:
#include "openssl/bio.h"
...in the .cpp source file but still get the same undefined reference errors.
Update #4: I now realize those undefined reference errors are linker errors. If I remove the "-L/usr/local/ssl/lib" from my Makefile, I don't get the errors (but it links to OpenSSL 0.9.8). The contents of /usr/local/ssl/lib/ are:
$ ls /usr/local/ssl/lib/
engines libcrypto.a libssl.a pkgconfig
I added -lcrypto, and the errors went away.
Get the 1.0.0a source from here.
# tar -xf openssl-1.0.0a.tar.gz
# cd openssl-1.0.0a
# ./config
# sudo make install
Note: if you have man pages build errors on modern systems, use make install_sw instead of make install.
This puts it in /usr/local/ssl by default
When you build, you need to tell gcc to look for the headers in /usr/local/ssl/include and link with libs in /usr/local/ssl/lib. You can specify this by doing something like:
gcc test.c -o test -I/usr/local/ssl/include -L/usr/local/ssl/lib -lssl -lcrypto
EDIT DO NOT overwrite any system libraries. It's best to keep new libs in /usr/local. Overwriting Ubuntu defaults can be hazardous to your health and break your system.
Additionally, I was wrong about the paths as I just tried this in Ubuntu 10.04 VM. Fixed.
Note, there is no need to change LD_LIBRARY_PATH since the openssl libs you link against by default are static libs (at least by default - there might be a way to configure them as dynamic libs in the ./config step)
You may need to link against libcrypto because you are using some calls that are built and defined in the libcrypto package. Openssl 1.0.0 actually builds two libraries, libcrypto and libssl.
EDIT 2 Added -lcrypto to gcc line.
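One way to verify which OpenSSL your build actually uses is a small check program; this is a sketch for the 1.0.x API (in OpenSSL 1.1.0+ SSLeay_version was renamed OpenSSL_version). Compile it the same way as above, e.g. g++ check.cpp -I/usr/local/ssl/include -L/usr/local/ssl/lib -lssl -lcrypto -o check:

#include <cstdio>
#include <openssl/opensslv.h>   /* OPENSSL_VERSION_TEXT: version of the headers */
#include <openssl/crypto.h>     /* SSLeay_version: version of the linked library */

int main(void)
{
    /* Both lines should report 1.0.0; if the second still says 0.9.8,
       the system library is being linked instead of /usr/local/ssl. */
    printf("compiled against: %s\n", OPENSSL_VERSION_TEXT);
    printf("linked against:   %s\n", SSLeay_version(SSLEAY_VERSION));
    return 0;
}

Because /usr/local/ssl/lib contains only static libraries (libssl.a, libcrypto.a), a successful link against them bakes 1.0.0 into the binary, consistent with the note above about LD_LIBRARY_PATH.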
Instead of:
$ ./config
$ make
$ make test
$ make install
Do:
$ sudo ./config --prefix=/usr
$ sudo make
$ sudo make test
$ sudo make install
This will help you update to openssl 1.0.1g to patch for CVE-2014-0160 (Heartbleed).
OpenSSL Security Advisory [07 Apr 2014]
TLS heartbeat read overrun (CVE-2014-0160)
A missing bounds check in the handling of the TLS heartbeat extension can be
used to reveal up to 64k of memory to a connected client or server.
Only 1.0.1 and 1.0.2-beta releases of OpenSSL are affected including
1.0.1f and 1.0.2-beta1.
Thanks for Neel Mehta of Google Security for discovering this bug and to
Adam Langley and Bodo Moeller for
preparing the fix.
Affected users should upgrade to OpenSSL 1.0.1g. Users unable to immediately
upgrade can alternatively recompile OpenSSL with -DOPENSSL_NO_HEARTBEATS.
1.0.2 will be fixed in 1.0.2-beta2.
Source: https://www.openssl.org/news/secadv_20140407.txt
Here's what solved it for me:
Upgrade latest version OpenSSL on Ubuntu
Transcribing the main information:
Download the OpenSSL v1.0.0g source:
$ wget http://www.openssl.org/source/openssl-1.0.0g.tar.gz
Unpack the archive and install:
$ tar xzvf openssl-1.0.0g.tar.gz
$ cd openssl-1.0.0g
$ ./config
$ make
$ make test
$ sudo make install
All files, including binaries and man pages, are installed under the directory /usr/local/ssl. To ensure users use this version of OpenSSL instead of the previous version, you must update the paths for man pages and binaries.
Edit the file /etc/manpath.config adding the following line before the first MANPATH_MAP:
MANPATH_MAP /usr/local/ssl/bin /usr/local/ssl/man
Update the man database (I honestly can't remember and don't know for sure whether this command was necessary - maybe try without it; at the end, when testing, if the man pages are still the old versions, come back and run mandb):
sudo mandb
Edit the file /etc/environment and insert the path for OpenSSL binaries (/usr/local/ssl/bin) before the path for Ubuntu's version of OpenSSL (/usr/bin). My environment file looks like this:
PATH="/usr/local/sbin:/usr/local/bin:/usr/local/ssl/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"
Logout and login and test:
$ openssl version
OpenSSL 1.0.0g 18 Jan 2012
Also test the man pages by running man openssl and at the very bottom in the left hand corner it should report 1.0.0g.
Note that although users will now automatically use the new version of OpenSSL, existing programs (e.g. Apache) may not, as they are linked against the libraries from the Ubuntu version.