Today we are going to walk through some of the preliminary data shaping steps in data science using SQL in Postgres.
Today we are going to finish up by showing how to use that stored model to make predictions on new data. By the way, I did all of the Postgres work for the entire blog series in Crunchy Bridge. I wanted to focus on the data and code and not on how to run PostgreSQL.
Greetings friends! We have finally come to the point in the Postgres for Data Science series where we are done with data preparation. Today we are going to do modeling and prediction of fire occurrence given weather parameters… IN OUR DATABASE!
In our last blog post on using Postgres for statistics, I covered some of the decisions on how to handle calculated columns in PostgreSQL. I chose to go with adding extra columns to the same table and inserting the calculated values into these new columns. Today’s post is going to cover how to implement this solution using PL/pgSQL.
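The approach above can be sketched in a few lines. This is a hypothetical illustration, not the post's actual code: the table name (`fires`) and columns (`temp_c`, `temp_f`) are placeholders, and the PL/pgSQL `DO` block simply adds a column and fills it with a calculated value.

```sql
-- Hypothetical sketch: add a calculated column and populate it
-- inside a PL/pgSQL block. Table and column names are illustrative.
DO $$
BEGIN
  ALTER TABLE fires ADD COLUMN IF NOT EXISTS temp_f numeric;
  UPDATE fires SET temp_f = temp_c * 9.0 / 5.0 + 32;
END;
$$;
```

Keeping the calculated values in extra columns on the same table means later queries can read them directly without recomputing the expression each time.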
For those of you who have a bad taste in your mouth from earlier run-ins with regexes, this post will be more use-case focused, and I will do my best to explain the search patterns I used.
Today we are going to examine methods for calculating z-scores for our data in the database. We want this transformation because, when we carry out logistic regression, we want to be able to compare the effects of the different factors on fire probability.
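In-database, the transformation boils down to subtracting the column mean and dividing by the standard deviation. A minimal sketch, assuming a hypothetical `weather` table with a `wind_speed` column (names are illustrative, not from the post):

```sql
-- Hypothetical sketch: store z-scores in a new column.
-- z = (x - mean) / standard deviation, computed over the whole table.
ALTER TABLE weather ADD COLUMN IF NOT EXISTS wind_speed_z numeric;

UPDATE weather w
SET wind_speed_z = (w.wind_speed - s.mu) / s.sigma
FROM (
  SELECT avg(wind_speed)         AS mu,
         stddev_samp(wind_speed) AS sigma
  FROM weather
) AS s;
```

Because every standardized column is then on the same scale, the fitted logistic regression coefficients become directly comparable across factors.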
How to easily spin up PostGIS in your Kubernetes or OpenShift cluster using the PostgreSQL Operator.
Crunchy Data is proud to announce the initial release of our application developer portal. An awesome team has been working behind the scenes to bring together this website to help application developers find everything they need for Postgres in one place.
How to prepare your Windows computer to talk to the Crunchy PostgreSQL Operator for Kubernetes.