Cannot restore PostgreSQL dump file in AWS - django

I tried to restore a PostgreSQL database from a dump file in AWS, but I got this error:
pg_restore: [archiver] unsupported version (1.14) in file header
What should I do?

The dump was written by a newer pg_dump than your pg_restore understands. Upgrade the PostgreSQL client tools (so that pg_restore is at least as new as the pg_dump that produced the file) and then try again.
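The version in that message comes from the dump file itself: a custom-format pg_dump archive starts with the magic bytes PGDMP followed by the archive-format version, and archive version 1.14 is what pg_dump from PostgreSQL 12 writes, so any older pg_restore rejects it. A minimal sketch for reading the version yourself, assuming a custom-format (-Fc) dump:

```python
def dump_archive_version(path):
    """Return the (major, minor) archive-format version of a
    pg_dump custom-format dump file."""
    with open(path, "rb") as f:
        header = f.read(8)
    # Custom-format archives begin with the magic string "PGDMP",
    # followed by one byte each for the major, minor and revision
    # numbers of the archive format.
    if header[:5] != b"PGDMP":
        raise ValueError("not a pg_dump custom-format archive")
    return header[5], header[6]
```

If the reported version is newer than your pg_restore supports, upgrading the client tools alone is enough: pg_restore only needs to be at least as new as the pg_dump that wrote the file.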

Related

Profile file cannot be null error using DBeaver under Windows10 AWS Athena IAM profile

I checked this question and this thread on GitHub. It works for Mac but doesn't work for Windows, and I'm still getting the error.
It looks like DBeaver doesn't have access to the credentials file.
Any ideas? My settings are below:
Try adding a file called config in the same .aws directory with the following structure:
[your_DataAccess_profile_name]
region = your-aws-region
Pay attention to the file names; they should be
config and
credentials
with no extension, not
config.txt and credentials.txt
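Both files use INI syntax, so one quick way to sanity-check what a tool like DBeaver will actually see is to parse them with Python's configparser. A small sketch (the helper name and the default ~/.aws path are mine; on Windows that expands to %UserProfile%\.aws):

```python
import configparser
import os

def aws_profiles(aws_dir=os.path.expanduser("~/.aws")):
    """List the profile sections found in the config and credentials files."""
    found = {}
    for name in ("config", "credentials"):
        parser = configparser.ConfigParser()
        # parser.read() silently skips files that don't exist, so a
        # misnamed config.txt simply shows up as an empty section list.
        parser.read(os.path.join(aws_dir, name))
        found[name] = parser.sections()
    return found
```

If your profile is missing from the output, the file most likely has a hidden .txt extension or a typo in the [section] header.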

Issue with Heroku Postgres: could not access file "$libdir/postgis-2.1"

I have a project that uses GeoDjango and PostGIS and is deployed on Heroku.
Some info on what I'm using:
Python 2.7.15
Django 1.11.20
Heroku-18 (stack)
Postgres 9.4.20
PostGIS 2.1.8
In the last few months the system has thrown an error every time I try to load the geographic info, i.e. when I execute a geocode query:
ERROR: could not access file "$libdir/postgis-2.1": No such file or
directory
I have searched the web and Stack Overflow for solutions and found some that were really close to my problem, but their fixes didn't work for me.
I tried the ALTER EXTENSION postgis UPDATE; solution, but it throws this error:
ERROR: cannot create temporary table within security-restricted
operation
I tried "back up your DB, drop the database and restore", but when I run the command pg:backups:capture I get:
An error occurred and the backup did not finish.
And when I run pg:backups:info it shows this:
2019-03-02 23:08:31 +0000 pg_dump: [archiver (db)] query failed:
ERROR: could not access file "$libdir/postgis-2.1": No such file or
directory ... (some database code) 2019-03-02 23:08:31 +0000
waiting for pg_dump to complete 2019-03-02 23:08:31 +0000 pg_dump
finished with errors
Then I found this entry:
Update PostGIS extensions on Heroku
It's the same problem I have with Heroku Postgres (though the author is using Ruby), and the author says they were helped by Heroku's support team. Well, I created a ticket and found out that "Technical support for Free applications is provided by the online community" and Stack Overflow, so I tried to add a comment to that user saying something like "Hey, can you share the solution please? I have the same problem", but I don't have enough reputation to do it.
So what can I do?
I found the solution!
Using an old DB backup archive that I had, I reset the database from the Heroku datastores section, and after that I restored it from the backup archive (with pgAdmin III), and the problem is gone.
It seems the error was caused by the PostGIS version: when I had the problem my PostGIS version was 2.1.8, and now, with the error solved, my PostGIS version is 2.4.4.
I hope this is useful to someone.
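For anyone who prefers to script the same recovery instead of clicking through the dashboard and pgAdmin, the Heroku CLI steps look roughly like this. The app name and dump filename are placeholders, and pg:reset destroys all data in the database, so only run this with a known-good backup in hand:

```shell
# Destroy and recreate the database (irreversible; placeholder app name).
heroku pg:reset DATABASE_URL --app your-app --confirm your-app

# Check the freshly provisioned database's details.
heroku pg:info --app your-app

# Restore the old custom-format backup into the fresh database.
pg_restore --verbose --no-owner --no-acl \
  -d "$(heroku config:get DATABASE_URL --app your-app)" old_backup.dump
```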

Structured streaming kafka driver relaunch fails with HDFS file rename errors since new name file already exists

We are testing restarts and failover with structured streaming in Spark 2.1.
We have a stripped down kafka structured streaming driver that only performs an event count. When we relaunch the driver a second time gracefully (i.e. kill driver with yarn application -kill and resubmit with same checkpoint dir), the driver fails due to aborted jobs that cannot commit the state in HDFS with errors like:
"Failed to rename /user/spark/checkpoints/StructuredStreamingSignalCount/ss_signal_count/state/0/11/temp-1769618528278028159 to /user/spark/checkpoints/StructuredStreamingSignalCount/ss_signal_count/state/0/11/128.delta"
When I look in HDFS, 128.delta already existed before the error. HDFS fundamentally does not allow a rename when a file with the target name already exists. Any insight greatly appreciated!
We are using:
spark 2.1.0
HDFS/YARN 2.7.3
Kafka 0.10.1
Heji
This is caused by a bug in Spark: the state store does not delete the existing state file before renaming it (fixed in Spark 2.1.1):
https://issues.apache.org/jira/browse/SPARK-19677
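The fix for SPARK-19677 boils down to removing the stale target before the rename, since an HDFS rename fails when the destination already exists. A minimal local-filesystem sketch of that pattern (the function name is illustrative, not Spark's API):

```python
import os

def commit_state_file(temp_path, final_path):
    """Promote a temp state file to its final delta name.

    An HDFS rename() fails if the destination exists, so a stale
    delta file left over from a killed driver must be removed first;
    this is the gap SPARK-19677 closed in the state store provider.
    """
    if os.path.exists(final_path):
        os.remove(final_path)  # discard the stale delta from the killed run
    os.rename(temp_path, final_path)
```

In practice the right remedy is to upgrade to a Spark release containing the patch rather than deleting checkpoint state files by hand.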

Flyway Database Migrations on Bluehost Shared Hosting

Folks,
I'm trying to use Flyway on a shared Bluehost server and I'm getting a very cryptic error, and I'm not sure how to troubleshoot it. I'm confident that the basic connection/credentials are working, since changing the Flyway config to a wrong password produces a different error.
Note that I'm able to connect to the database from the command line on the server box using 'mysql -u blah -p' just fine.
Any hints on how to troubleshoot the output below?
Flyway config file: http://pastebin.com/8dsWE3W2
Output from ./flyway -X init:
/usr/bin/tput
Flyway (Command-line Tool) v.3.0
DEBUG: Adding location to classpath: /home1/philost2/checkout/cocktailbuilder-master/server/_devtools/flyway/bin/../jars/mysql-connector-java-5.1.30-bin.jar
DEBUG: Adding location to classpath: /home1/philost2/checkout/cocktailbuilder-master/server/_devtools/flyway/bin/../jars/h2-1.3.170.jar
ERROR: Unexpected error
org.flywaydb.core.api.FlywayException: Unable to obtain Jdbc connection from DataSource (jdbc:mysql://localhost/philost2_cocktailbuilder_prod?useUnicode=true&characterEncoding=UTF-8&useFastDateParsing=false) for user '<redacted>'
at org.flywaydb.core.internal.util.jdbc.DriverDataSource.getConnectionFromDriver(DriverDataSource.java:266)
at org.flywaydb.core.internal.util.jdbc.DriverDataSource.getConnection(DriverDataSource.java:226)
at org.flywaydb.core.internal.util.jdbc.JdbcUtils.openConnection(JdbcUtils.java:50)
at org.flywaydb.core.Flyway.execute(Flyway.java:1144)
at org.flywaydb.core.Flyway.init(Flyway.java:970)
at org.flywaydb.commandline.Main.executeOperation(Main.java:118)
at org.flywaydb.commandline.Main.main(Main.java:88)
Caused by: com.mysql.jdbc.exceptions.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '??????????????' at line 1
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1049)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4232)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4164)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2615)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2776)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2832)
at com.mysql.jdbc.ConnectionImpl.configureClientCharacterSet(ConnectionImpl.java:1937)
at com.mysql.jdbc.ConnectionImpl.initializePropsFromServer(ConnectionImpl.java:3720)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2554)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2321)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:832)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:413)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:344)
at org.flywaydb.core.internal.util.jdbc.DriverDataSource.getConnectionFromDriver(DriverDataSource.java:264)

Boto.conf not found

I am running a Flask app on an AWS EC2 server and have been using boto to access data stored in DynamoDB. After accidentally adding boto.conf to a git commit (and pushing, then pulling on the server), I have found that my Python code can no longer locate the boto.conf file. I rolled back the changes with git, but the problem remains.
The Python module and the boto.conf file exist in the same directory, but when the module calls
boto.config.load_credential_file('boto.conf')
I get the Flask error IOError: [Errno 2] No such file or directory: 'boto.conf'.
As per the documentation:
I'm not really sure why you are using boto.config.load_credential_file. In general, boto picks up its configuration from a file called either ~/.boto or /etc/boto.cfg.
You can also look at this question from SO, which also covers how to configure boto: Getting Credentials File in the boto.cfg for Python
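A likely cause of that IOError is that the relative path 'boto.conf' is resolved against the process's current working directory, which for a Flask app under a WSGI server is usually not the module's directory. If you do want to keep the file next to the module, build an absolute path; a small sketch (the helper name is mine, not boto's):

```python
import os

def conf_path_next_to(module_file, name="boto.conf"):
    """Return an absolute path to `name` in the same directory as the
    given module file, independent of the current working directory."""
    return os.path.join(os.path.dirname(os.path.abspath(module_file)), name)

# Usage inside the Flask module (assuming boto is installed):
# boto.config.load_credential_file(conf_path_next_to(__file__))
```

That said, moving the credentials into ~/.boto or /etc/boto.cfg, as the documentation suggests, also keeps them out of git entirely.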