JavaJotter

programming, projects

Interfacing with the Database: PostgREST #

PostgREST serves as our reliable intermediary, establishing a seamless communication channel between our dashboard and our PostgreSQL database. Just like a skilled barista effortlessly brewing your coffee and delivering it right to your desk, PostgREST transforms your PostgreSQL database into a RESTful API, eliminating the need to write and maintain a separate API layer by hand.

Similar to the coffee-making process, which depends on factors like the type of coffee bean and brewing method, the structure and permissions of the PostgreSQL database determine the API endpoints and operations. In other words, the API doesn’t act independently but instead follows the structure and rules predefined within your database.

For example, if your PostgreSQL database has a table named ‘rolls,’ PostgREST automatically generates an endpoint like /rolls. You can use this endpoint with HTTP methods such as GET, POST, PATCH, PUT, and DELETE, which map onto the common CRUD (Create, Read, Update, Delete) operations within the database.
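As a sketch of what those generated endpoints look like, the helper below builds PostgREST-style query URLs for the ‘rolls’ table. The base URL and the `rolls_url` helper are illustrative assumptions, not the project’s actual code; what PostgREST itself defines is the filter syntax, where operator-prefixed values like `value=eq.6` select matching rows.

```python
from urllib.parse import urlencode

# Hypothetical base URL; PostgREST exposes one endpoint per table or view.
BASE_URL = "http://localhost:3000"

def rolls_url(**filters):
    """Build a PostgREST-style query URL for the 'rolls' table.

    PostgREST filters use operator-prefixed values, e.g. value=eq.6
    selects rows where value equals 6, and order=ts.desc sorts them.
    """
    query = urlencode(filters)
    return f"{BASE_URL}/rolls?{query}" if query else f"{BASE_URL}/rolls"
```

A GET request to `rolls_url(value='eq.6')` would then return only the sixes, while the same URL with DELETE would remove them, permissions permitting.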

Furthermore, the permissions configured in your PostgreSQL database control the actions available through the API. If a user lacks the necessary privileges to delete records in the ‘rolls’ table, they won’t be able to execute DELETE operations through the API endpoint, ensuring the security of your data.

In summary, PostgREST acts as a bridge connecting your database to end-users, enabling direct, secure, and efficient interaction with your data. It’s like having a direct line to the barista, ensuring your coffee (or data) arrives just the way you want it, promptly and without any issues.

When it came to building our data dashboard, we considered various technologies, ranging from D3.js to Looker Studio (formerly Google Data Studio). Ultimately, we chose Streamlit due to its cost-effectiveness, simplicity, and seamless integration with Python packages that I’m proficient in. Streamlit is a powerful Python library that empowers developers to create interactive, data-driven web applications with minimal coding effort. Its key advantages include user-friendly operations, rapid prototyping, and the ability to create visually appealing dashboards using built-in data visualization libraries.

The Magic Behind RESTful API #

A RESTful API is an application programming interface built on Representational State Transfer (REST) principles, using HTTP requests to access and manipulate data. Its appeal lies in its scalability, statelessness, and ability to handle a variety of calls, return various data formats, and interact with nearly any programming language. PostgREST, acting as a standalone web server, transforms a PostgreSQL database into a fully functional RESTful API, dynamically offering customizable endpoints for every table in your database.

The Dashboard: Making Data Come Alive #

Streamlit’s built-in date input widget takes care of date selection in the dashboard’s sidebar. Users can set a date range, and the dashboard filters and displays data based on this selection. Our chosen data visualization library, Plotly, enhances the dashboard with interactivity and visual appeal. It allows users to zoom, pan, and hover over specific data points.

The histogram of dice values, our own little roll call, shows how often each outcome from 1 to 6 appears in the dataset. Each user’s roll count is represented as a bar chart, with an individual bar standing tall for each user, its height corresponding to the number of rolls made.
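The counting behind those two charts can be sketched with the standard library alone. `dice_histogram` and `rolls_per_user` are hypothetical helpers for illustration, not the dashboard’s actual functions:

```python
from collections import Counter

def dice_histogram(rolls):
    """Count how often each face (1-6) appears in a list of roll values."""
    counts = Counter(rolls)
    # Include faces with zero occurrences so the chart always spans 1-6.
    return {face: counts.get(face, 0) for face in range(1, 7)}

def rolls_per_user(events):
    """Count rolls per user from (username, value) pairs."""
    return Counter(user for user, _ in events)
```

Either mapping can be handed straight to Plotly as the x/y series of a bar chart.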

The temporal distribution of dice rolls becomes a time series line chart, revealing patterns or trends in the frequency of dice rolls over time. The lowest daily roll gets its own bar chart, with one bar per day whose height indicates that day’s lowest roll.
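The per-day minimum behind the lowest-roll chart reduces to a simple grouping step. This is a minimal sketch; `lowest_daily_roll` is an assumed name, and the real dashboard may well do the same thing with a DataFrame:

```python
from datetime import date

def lowest_daily_roll(rolls):
    """Given (day, value) pairs, return the minimum roll seen on each day."""
    lows = {}
    for day, value in rolls:
        # Keep the smaller of the stored minimum and the new value.
        lows[day] = min(value, lows.get(day, value))
    return lows
```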

The timeline of each user’s rolls sheds light on their regularity of participation, highlighting who prefers to roll the dice more often or on a fixed schedule. Time analytics extend to the distribution of rolls over time, the lowest daily roll, and user participation regularity, aiding in the identification of trends and patterns over time.

The probability of each dice value appearing is calculated by dividing the count of each value by the total number of rolls, and is displayed as a bar chart. The dashboard uncovers patterns such as consistent periods of high activity, low rolls on specific days, or patterns in user participation.
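The probability calculation described above is just count divided by total. A minimal sketch, with `face_probabilities` as an assumed function name:

```python
def face_probabilities(rolls):
    """Empirical probability of each face: count of that face / total rolls."""
    total = len(rolls)
    if total == 0:
        # Avoid division by zero when the selected date range has no rolls.
        return {face: 0.0 for face in range(1, 7)}
    return {face: rolls.count(face) / total for face in range(1, 7)}
```

For a fair die, each of these values should hover near 1/6 as the sample grows.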

Under the hood, Python code powers the dashboard by retrieving and processing data from the PostgreSQL database via the RESTful API, performing calculations, and creating visualizations. Errors are handled using try/except blocks, catching execution errors and preventing unexpected terminations.
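That error-handling pattern amounts to wrapping the fetch in try/except so a failed request degrades gracefully instead of crashing the app. A minimal sketch, assuming the caller supplies the actual fetch callable:

```python
def safe_fetch(fetch, default=None):
    """Run a data-fetching callable; on any error, log it and return a
    default instead of letting the exception terminate the dashboard."""
    try:
        return fetch()
    except Exception as exc:  # broad catch keeps the app responsive
        print(f"Data fetch failed: {exc}")
        return [] if default is None else default
```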

Each function in the Python code serves a specific purpose. For example, the ‘date_to_unix_ms’ function converts a datetime object into a Unix timestamp in milliseconds, which is useful for interfacing with APIs requiring this timestamp format. The ‘fetch_and_process_data’ function queries data from the database, cleans it up, and prepares it for analysis and visualization. The ‘plot’ function uses the processed data to craft various visualizations, including histograms, bar charts, and line charts.
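The post names these functions without showing them; a plausible stdlib-only body for ‘date_to_unix_ms’ might look like this (only the name comes from the project, the implementation is my assumption):

```python
from datetime import datetime, timezone

def date_to_unix_ms(dt):
    """Convert a timezone-aware datetime to a Unix timestamp in milliseconds."""
    return int(dt.timestamp() * 1000)
```

Milliseconds-since-epoch is a common timestamp format for web APIs, since it matches JavaScript’s `Date.now()`.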

Upon clicking the ‘Confirm Dates’ button, the dashboard updates to display data within the selected date range, allowing users to tailor the data view to a specific timeframe. The ‘get_date_of_previous_sunday’ function calculates the preceding Sunday for a given date, useful for aligning weekly data.
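‘get_date_of_previous_sunday’ could be implemented with Python’s weekday arithmetic; this sketch assumes ‘preceding Sunday’ means the most recent Sunday on or before the given date:

```python
from datetime import date, timedelta

def get_date_of_previous_sunday(d):
    """Return the most recent Sunday on or before d.

    weekday() counts Monday=0 ... Sunday=6, so (weekday + 1) % 7 is the
    number of days elapsed since the last Sunday.
    """
    return d - timedelta(days=(d.weekday() + 1) % 7)
```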

This code is adaptable and can be modified to analyze other types of data. While the logic of retrieving, processing, and visualizing data is universal, specific implementation details may need adjustments based on the unique characteristics of new data. The dashboard can be shared by deploying it to a server or a cloud service, enabling users to access it via a provided URL.

In real-world applications, this dashboard can unveil intriguing patterns, such as certain users being more active on specific days, recurring low or high rolls on particular dates, or shifts in dice roll frequency over time. These patterns offer valuable insight into user behavior, help predict future activity, and support informed decision-making.

Conclusion #

Overall, the harmonious fusion of PostgREST as our data communication intermediary and Streamlit as our dashboard development tool has empowered us to create a dynamic and insightful data dashboard. This amalgamation not only simplifies the interaction between our PostgreSQL database and end-users but also provides a user-friendly and visually engaging platform for data exploration.

With PostgREST acting as the bridge, our PostgreSQL database seamlessly transforms into a fully functional RESTful API, offering customizable endpoints for each table, while ensuring data security through defined permissions. This approach streamlines the process, eliminating unnecessary complexities and intermediaries.

Streamlit, on the other hand, emerges as the perfect canvas for our data storytelling. Its simplicity, cost-effectiveness, and integration with Python packages make it a powerful tool for building interactive data-driven web applications. Through its built-in data visualization libraries, it enhances our dashboard with eye-catching visualizations, enabling users to delve into the data effortlessly.

Our dashboard, an embodiment of this synergy, provides a multitude of insights into user behavior, patterns, and trends. From the distribution of dice rolls to user participation regularity, it equips us with the tools to make informed decisions, predict future activities, and understand our data comprehensively.

The flexibility of our code, combined with its adaptability for different data types, makes it a valuable asset for data analysis. Whether it’s revealing user preferences or identifying shifts in data patterns over time, our dashboard and its underlying code pave the way for valuable discoveries.

While the project as a whole could have gone more smoothly at times, it’s been a fun side project over the past few months and has led to a more in-depth look at how the random number generator that powers the dice rolls actually works. My next project and blog posts will cover how I’ve been working on reverse engineering and cracking the dice rolls. Until then, happy brewing!