6 Popular PostgreSQL CLI Libraries


  • pgcli – Postgres CLI with autocompletion and syntax highlighting
  • pgsh – Branch your PostgreSQL Database like Git
  • psql – The built-in PostgreSQL CLI client
  • psql2csv – Run a query in psql and output the result as CSV
  • nancy – The Nancy CLI is a unified way to manage automated database experiments either in clouds or on-premise
  • schemaspy – SchemaSpy is a Java JDBC-compliant tool for generating HTML documentation from your database, including Entity Relationship diagrams

What is PostgreSQL?

PostgreSQL is a powerful, open source object-relational database system that uses and extends the SQL language combined with many features that safely store and scale the most complicated data workloads. The origins of PostgreSQL date back to 1986 as part of the POSTGRES project at the University of California at Berkeley and has more than 30 years of active development on the core platform.

PostgreSQL has earned a strong reputation for its proven architecture, reliability, data integrity, robust feature set, extensibility, and the dedication of the open source community behind the software to consistently deliver performant and innovative solutions. PostgreSQL runs on all major operating systems, has been ACID-compliant since 2001, and has powerful add-ons such as the popular PostGIS geospatial database extender. It is no surprise that PostgreSQL has become the open source relational database of choice for many people and organisations.


6 Popular PostgreSQL CLI Libraries

  1. pgcli

Postgres CLI with autocompletion and syntax highlighting

This is a postgres client that does auto-completion and syntax highlighting.

Quick Start

If you already know how to install python packages, then you can simply do:

$ pip install -U pgcli

or

$ sudo apt-get install pgcli # Only on Debian based Linux (e.g. Ubuntu, Mint, etc)
$ brew install pgcli  # Only on macOS

If you don’t know how to install python packages, please check the detailed instructions.

Usage

$ pgcli [database_name]

or

$ pgcli postgresql://[user[:password]@][netloc][:port][/dbname][?extra=value[&other=other-value]]

Examples:

$ pgcli local_database

$ pgcli postgres://amjith:pa$w0rd@example.com:5432/app_db?sslmode=verify-ca&sslrootcert=/myrootcert

View on GitHub


  2. pgsh

Branch your PostgreSQL Database like Git

Finding database migrations painful to work with? Switching contexts a chore? Pull requests piling up? pgsh helps by managing a connection string in your .env file and allows you to branch your database just like you branch with git.


Prerequisites

There are only a couple of requirements:

  • your project reads its database configuration from the environment
  • it uses a .env file to do so in development.

See dotenv for more details, and The Twelve-Factor App for why this is a best practice.

Language / Framework   .env solution   Maturity
javascript             dotenv          high

pgsh can help even more if you use knex for migrations.

Installation

  1. yarn global add pgsh to make the pgsh command available everywhere
  2. pgsh init to create a .pgshrc config file in your project folder, beside your .env file (see src/pgshrc/default.js for further configuration)
  3. You can now run pgsh anywhere in your project directory (try pgsh -a!)
  4. It is recommended to check your .pgshrc into version control. Why?

View on GitHub


  3. psql

psql is a terminal-based front-end to PostgreSQL. It enables you to type in queries interactively, issue them to PostgreSQL, and see the query results. Alternatively, input can be from a file or from command line arguments. In addition, psql provides a number of meta-commands and various shell-like features to facilitate writing scripts and automating a wide variety of tasks.
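As a sketch of those meta-commands and scripting features (the database and file names below are placeholders):

```shell
# Run a single query non-interactively with -c, or a whole script with -f.
psql -d app_db -c 'SELECT count(*) FROM users;'
psql -d app_db -f migration.sql

# Inside the interactive prompt, meta-commands start with a backslash:
#   \dt          list tables
#   \d users     describe the "users" table
#   \x           toggle expanded output
#   \i file.sql  execute a script file
#   \copy users TO 'users.csv' CSV HEADER   # client-side CSV export
```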

View the source


  4. psql2csv

Run a query in psql and output the result as CSV.

Installation

Mac OS X

psql2csv is available on Homebrew.

$ brew install psql2csv

Manual

Grab the file psql2csv, put it somewhere in your $PATH, and make it executable:

$ curl https://raw.githubusercontent.com/fphilipe/psql2csv/master/psql2csv > /usr/local/bin/psql2csv && chmod +x /usr/local/bin/psql2csv

Usage

psql2csv [OPTIONS] < QUERY
psql2csv [OPTIONS] QUERY

Options

The query is assumed to be the contents of STDIN, if present, or the last argument. All other arguments are forwarded to psql except for these:

-?, --help                 show this help, then exit
--delimiter=DELIMITER      set the field delimiter (default: ,)
--quote=QUOTE              set the quote delimiter (default: ")
--escape=ESCAPE            set the escape character (default: QUOTE)
--null=NULL                set the string representing NULL; printed without quotes (default: empty string)
--force-quote=FORCE_QUOTE  set the columns to be force quoted; comma separated list of columns or * for all (default: none)
--encoding=ENCODING        set the output encoding; Excel likes latin1 (default: UTF8)
--no-header                do not output a header
--timezone=TIMEZONE        set the timezone config before running the query
--search-path=SEARCH_PATH  set the search_path config before running the query
--dry-run                  print the COPY statement that would be run without actually running it
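A few usage sketches combining the options above (database, table, and file names are placeholders; the -d flag is simply forwarded to psql):

```shell
# Plain CSV export of a query result
psql2csv -d app_db 'SELECT id, name FROM users' > users.csv

# Excel-friendly output: latin1 encoding, semicolon delimiter, no header row
psql2csv -d app_db --encoding=latin1 --delimiter=';' --no-header \
    'SELECT * FROM orders' > orders.csv

# Show the COPY statement that would run, without executing it
psql2csv -d app_db --dry-run 'SELECT 1'
```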

View on GitHub


  5. Nancy

Nancy helps to conduct automated database experiments.

The Nancy Command Line Interface is a unified way to manage automated database experiments either in clouds or on-premise.

What is a Database Experiment?

A database experiment is a set of actions performed to test:

  • (a) specified SQL queries (“workload”)
  • (b) on specified machine / OS / Postgres version (“environment”)
  • (c) against specified database (“object”)
  • (d) with an optional change – some DDL or config change (“target” or “delta”).

Two main goals for any database experiment:

  • (1) validation – check that the specified workload is valid,
  • (2) benchmark – perform deep SQL query analysis.

Database experiments are needed when you:

  • add or remove indexes;
  • want to validate a new DB schema change and estimate its migration time;
  • want to verify some query optimization ideas;
  • tune database configuration parameters;
  • do capacity planning and want to stress-test your DB in some environment;
  • plan to upgrade your DBMS to a new major version;
  • want to train an ML model related to DB optimization.

View on Gitlab


  6. SchemaSpy

Please support the project by giving it a GitHub star, and share this library with friends on Twitter and everywhere else you can.

If you notice a bug or something not working, please report an issue and we’ll try to fix it as soon as possible. More documentation and features are expected soon. Feel free to contribute.

This is a new code repository for the SchemaSpy tool, initially created and maintained by John Currier. I personally believe that work on SchemaSpy should continue, and that a lot of still-open issues should be resolved. The last release of SchemaSpy was in 2010, and I plan to change this.

I would like to thank John Currier for inventing this database entity-relationship (ER) diagram generator.

My plan is to release a new SchemaSpy version and concentrate on refactoring and improving it.

In the new SchemaSpy 6.0 version you will find:

  • an entirely new look and feel for the generated content
  • easily changeable themes, since all of the presentation logic was moved out of the Java code
  • slightly improved database generation speed
  • HTML page generation switched from plain concatenated text in Java code to the Mustache template engine
  • Markdown support in database comments, which should make reading your database documentation a better experience
  • easy linking to tables and columns from within database comments

More details will come in the next few days, and I plan to start working on all unresolved issues.

View on GitHub

Fix Missing GPG Key Apt Repository Errors (NO_PUBKEY)

You might see a missing public GPG key error (“NO_PUBKEY”) on Debian, Ubuntu or Linux Mint when running apt update / apt-get update. This can happen when you add a repository and forget to add its public key, or when there was a temporary key server error while importing the GPG key.

Source: Fix Missing GPG Key Apt Repository Errors (NO_PUBKEY) – Linux Uprising Blog

Quick’n’dirty solution:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys THE_MISSING_KEY_HERE

sudo apt update 2>&1 1>/dev/null | sed -ne 's/.*NO_PUBKEY //p' | \
while read key; do
  if ! [[ ${keys[*]} =~ "$key" ]]; then
    sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys "$key"
    keys+=("$key")
  fi
done

Yes, I know apt-key is being deprecated…
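Since apt-key is deprecated, a rough modern equivalent (the keyring filename is a placeholder of your choosing) is to fetch the key with gpg and export it into /etc/apt/trusted.gpg.d/ instead:

```shell
# Fetch the missing key into your personal GPG keyring...
gpg --keyserver keyserver.ubuntu.com --recv-keys THE_MISSING_KEY_HERE
# ...then export it in binary format to a file apt will trust.
gpg --export THE_MISSING_KEY_HERE | sudo tee /etc/apt/trusted.gpg.d/myrepo.gpg > /dev/null
```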

Self-Hosted Streaming Server – NGINX + RTMP – YouTube + Facebook Live

Copying this here in case the original goes offline

Self-Hosted Streaming Server – NGINX + RTMP – YouTube + Facebook Live

Published by Anthony on 14th April 2020

I’m writing these notes up, mainly as a reference for me in the future – but they might just help others looking to do similar.

Issue:

During the Coronavirus season, our church, like many, ventured into the world of streaming services, primarily to YouTube but with an eye to multiplexing to both YouTube and Facebook at the same time.

Initially, the plan was to stream from OBS Studio to an intermediate platform, restream.io.

We tested with OBS sending the stream to restream.io and, for the first week, everything worked fine.
Then, in the second week, about 10 minutes before we were due to go live, restream.io suffered an outage. Fortunately, we were quickly able to flip to sending our stream directly to YouTube, and our broadcast went out successfully.
However, for me, an issue with a shared service leaves a sour taste, and so began the journey to discover what would be involved in running our own RTMP server.

It turns out the solution has been relatively straightforward…

Our solution:

Firstly, we signed up for a new VPS with vultr.com.
We opted for a 1 x vCPU, 2GB RAM, High-Frequency machine with 64GB SSD storage and 2TB monthly bandwidth. Research suggested that RTMP streaming/multiplexing is not particularly CPU/RAM intensive and is more about bandwidth consumption.

Once the VPS was provisioned, we selected the Ubuntu 18.04 LTS install image. The machine was duly created, and we connected via SSH to the IP shown in the portal.

Basic Installation:

Update all packages:

root@hostname# sudo apt-get update
root@hostname# sudo apt-get upgrade

Then install NGINX:

sudo apt-get install nginx
sudo apt install libnginx-mod-rtmp

At this point you can check that NGINX has been installed correctly by visiting it in your browser:

http://<your-servers-IP-address>

If successful, you should see the NGINX default page:

Configuration for YouTube

Now, on to adding the RTMP config.

Edit the nginx.conf file as follows:

nano /etc/nginx/nginx.conf

Add a section at the bottom of the file as follows:

rtmp {
     server {
             listen 1935;
             chunk_size 4096;

             application live { 
                        live on; 
                        record off; 
                        # Edit and enable line below to push incoming stream to YouTube 
                        # push rtmp://x.rtmp.youtube.com/live2/<your-Stream-Key-Copied-From-YouTube>; 
             }
      }
}

If you aren’t familiar with YouTube Live stream settings the key can be obtained from the following screens:

Save the nginx.conf file (Ctrl+X) and restart the NGINX service:

root@hostname# systemctl restart nginx

Best practice would suggest installing a firewall package on your VPS to minimise its exposure to the Internet.

For Ubuntu 18.04, UFW is perfectly acceptable and well documented. Vultr have a comprehensive document at https://www.vultr.com/docs/configure-ubuntu-firewall-ufw-on-ubuntu-18-04

For me, I created 4 simple rules permitting inbound SSH from my home ISP’s fixed IP, and a friend’s IP plus equivalent rules permitting inbound TCP1935 from each of our addresses.
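Those four rules might look something like this in UFW (the source IPs are placeholders for your own addresses):

```shell
# SSH (TCP 22) from two trusted addresses only
sudo ufw allow from 203.0.113.10 to any port 22 proto tcp
sudo ufw allow from 203.0.113.20 to any port 22 proto tcp
# RTMP ingest (TCP 1935) from the same addresses
sudo ufw allow from 203.0.113.10 to any port 1935 proto tcp
sudo ufw allow from 203.0.113.20 to any port 1935 proto tcp
sudo ufw enable
```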

We chose to use OBS to create the stream footage, therefore we had to fill in the stream details within OBS Stream settings:

If you then click Start Streaming within OBS, your OBS content should start appearing within your YouTube Studio account (albeit with a roughly 30-second delay).
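If you want to sanity-check the ingest endpoint without OBS, a short ffmpeg push works too (the input file and stream name are placeholders):

```shell
# -re paces the file at real-time speed; RTMP expects an FLV container.
ffmpeg -re -i test.mp4 -c copy -f flv rtmp://<your-servers-IP-address>/live/test
```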

Facebook Live

Facebook made some changes a little while back to accept only RTMPS streams, which NGINX and the RTMP module can’t natively support. So, another jigsaw piece needs to be added to the puzzle. Enter stunnel!

Edit your nginx.conf file to include a new push statement.

push rtmp://127.0.0.1:1936/rtmp/<Facebook-persistent-stream-key>;

Restart NGINX:

systemctl restart nginx

Install Stunnel

apt-get install stunnel4 -y

Edit the stunnel boot configuration as follows (change ENABLED from 0 to 1):

nano /etc/default/stunnel4
ENABLED=1

Edit the stunnel config as follows:

nano /etc/stunnel/stunnel.conf
pid = /var/run/stunnel4/stunnel.pid
output = /var/log/stunnel4/stunnel.log

setuid = stunnel4
setgid = stunnel4

# https://www.stunnel.org/faq.html
socket = r:TCP_NODELAY=1
socket = l:TCP_NODELAY=1

debug = 4

[fb-live]

client = yes
accept = 1936
connect = live-api-s.facebook.com:443
verifyChain = no

Enable Stunnel and start it

systemctl enable stunnel4.service
systemctl restart stunnel4.service
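A quick way to confirm the tunnel is up, assuming the config shown above (including its log path):

```shell
# stunnel should now be listening locally on the port NGINX pushes to
ss -tln | grep 1936

# and its log (path set in stunnel.conf) shows connection attempts
tail -f /var/log/stunnel4/stunnel.log
```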

NTRU – Quantum-resistant cryptography

NTRU – Software

According to Wikipedia

NTRU is an open-source public-key cryptosystem that uses lattice-based cryptography to encrypt and decrypt data.

Unlike other popular public-key cryptosystems, it is resistant to attacks using Shor’s algorithm.

NTRUEncrypt was patented, but it was placed in the public domain in 2017. NTRUSign is patented, but it can be used by software under the GPL.[1][2]

So this quantum-resistant crypto algorithm is either public domain – which is good – or its signing version is patented but usable in GPL-licensed software. I must