Data Modelling

Data modelling is an essential practice in data management and analysis that involves creating a conceptual model to represent complex information systems. The discipline is pivotal for structuring and defining the data requirements needed to support business processes and for maintaining data quality and consistency across an organisation's IT assets.

Why is it crucial? In today's data-driven world, effective data modelling simplifies complex data environments, enhances scalability and security, and facilitates data integration and interoperability between systems.

Our Focus: This section explores the nuances of data modelling, offering insights into the best practices, tools, and strategies used to design databases that are robust yet adaptable to changing business needs.

Token Supply, Demand & Capacity - Drafting ICN's First Predictive Model

Walk through the first-pass simulator that links network capacity, token supply and price dynamics, and discover why a seemingly simple 1% monthly inflation cap isn't always enough…


Token Supply, Demand & Capacity - Drafting ICN's First Predictive Model

Introduction

Hook: How many tokens should exist when your network triples in size? The answer shapes everything from node rewards to price stability.

Overview: In mid-2024 we built a Monte-Carlo style simulator to connect three moving parts—supply inflation, storage demand and hardware capacity—for the Inter-Cloud Network (ICN). This article walks through the v0.1 model, shares key graphs, and highlights early lessons in systemic risk.

Objective: Provide a reproducible blueprint for modelling token emissions against real-world adoption metrics so stakeholders can debate policy changes with data instead of gut-feel.

Background

ICN rewards HPs and SPs in ICNT for supplying decentralised storage. The white-paper sets a maximum 1% monthly inflation cap but stops short of linking it to demand shocks. Our task: verify whether that cap keeps inflation, utilisation and price in balance across best- and worst-case growth trajectories.

Challenges & Considerations

Problem statement: Uncapped demand spikes could drive capacity expansion faster than supply issuance, starving node rewards. Conversely, a demand crash could maroon excess tokens and nuke the price. We need to test both edges.

  • ⚖️ Trade-off between predictable emissions and adaptive responsiveness
  • 📉 Impact of sudden utilisation drops on circulating supply and market cap
  • 🛡️ Guard-rails to prevent runaway inflation beyond 2× initial supply
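
As a quick sanity check on that last guard-rail: a flat 1% monthly emission compounds to roughly 1.01^60 ≈ 1.82× the starting supply over the 60-month horizon, so a 2× ceiling only binds if issuance is ever allowed to run above the base cap.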

Methodology

Tools & Tech: Python 3.11, Pandas, Matplotlib, Jupyter Lab, and a scratch notebook, Basic_Model_SSH_Scenarios.ipynb.

Simulation Parameters (60 months):

  • Starting supply: 1,000,000 ICNT
  • Max inflation: 1%/month
  • Starting capacity: 50 PB → expand/shrink ±10% on price triggers
  • Access fee: €2,000/PB/month
  • Rewards: €500 base + €2,200 utilisation bonus (both per PB per month)

We ran three demand paths—Extreme, Normal, and Demand Crash—against three emission schedules—Inflation, Inflation with Upper Limit, and Threshold Inflation. Each run produced monthly time-series for capacity, booked storage, utilisation, price and market cap.
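
For readers who want to poke at the mechanics, here is a minimal sketch of that monthly loop in Python. The variable names, capacity-trigger rule and price proxy are illustrative assumptions, not the notebook's actual implementation (which remains private).

    import pandas as pd

    MONTHS = 60
    START_SUPPLY = 1_000_000               # ICNT
    MAX_INFLATION = 0.01                   # 1% per month
    START_CAPACITY = 50.0                  # PB
    ACCESS_FEE = 2_000                     # EUR per PB per month
    BASE_REWARD, UTIL_BONUS = 500, 2_200   # EUR per PB per month

    def simulate(demand_path, price0=0.10, target_util=0.75):
        """demand_path: booked storage (PB) for each of the 60 months."""
        supply, capacity, price = START_SUPPLY, START_CAPACITY, price0
        rows = []
        for m in range(MONTHS):
            demand = demand_path[m]
            utilisation = min(demand / capacity, 1.0)

            # Emission: flat schedule capped at 1% of current supply per month.
            supply += supply * MAX_INFLATION

            # Capacity expands/shrinks ±10% on triggers (assumed rule:
            # grow when utilisation runs hot, shrink when it collapses).
            if utilisation > target_util:
                capacity *= 1.10
            elif utilisation < target_util * 0.5:
                capacity *= 0.90

            # Toy price proxy: access-fee revenue captured per circulating token.
            revenue_eur = demand * ACCESS_FEE
            rewards_eur = capacity * BASE_REWARD + demand * UTIL_BONUS
            price = max(revenue_eur / supply, 0.001)

            rows.append({"month": m, "capacity_pb": capacity, "booked_pb": demand,
                         "utilisation": utilisation, "supply": supply,
                         "node_revenue_eur": rewards_eur, "price_eur": price,
                         "market_cap_eur": price * supply})
        return pd.DataFrame(rows)

Feeding it a simple growth path, e.g. simulate([40 + 0.5 * m for m in range(60)]), returns a 60-row DataFrame ready to plot with Matplotlib; the real runs use more involved trigger rules and a richer price model than this toy proxy.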

Key Visuals

Figure 1 - Variable map showing causal links between demand, capacity, utilisation and supply.
Figure 2 - Normal-demand run under the simple inflation schedule: note the price trough at month 12 and the steady recovery as capacity tightens.
Figure 3 - Demand-crash run with capped inflation: utilisation whipsaws, but the upper limit prevents hyper-inflation.

Results

Findings:

  1. Normal demand: the 1% cap keeps price within €0.03–€0.15 and utilisation stabilises near the 75% target.
  2. Demand crash: Without an upper supply limit, token price plummets below €0.01 as excess capacity idles; capping supply at 2M ICNT halves the draw-down.
  3. Extreme demand: Capacity lags demand; price spikes attract miners; inflation cap delays supply catch-up by ~6 months, but prevents runaway dilution.

Analysis: Introducing an adaptive supply ceiling (Threshold Inflation) outperformed the static cap, absorbing demand shocks while keeping the circulating-supply-to-demand ratio within a 15% band.
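
To make the Threshold Inflation idea concrete, here is a minimal sketch of the kind of adaptive emission rule involved; the utilisation thresholds and throttle factors below are assumptions for illustration, not the calibrated values behind the runs above.

    def monthly_emission(supply, utilisation, start_supply=1_000_000,
                         base_rate=0.01, hard_cap_multiple=2.0,
                         low_util=0.50, high_util=0.90):
        """Return new tokens to mint this month under a threshold-style rule."""
        hard_cap = hard_cap_multiple * start_supply
        if supply >= hard_cap:
            return 0.0                     # 2x ceiling reached: stop emitting
        if utilisation < low_util:
            rate = base_rate * 0.25        # demand crash: throttle dilution
        elif utilisation > high_util:
            rate = base_rate               # capacity tight: full 1% emission
        else:
            rate = base_rate * 0.5         # steady state: moderate emission
        # Never overshoot the ceiling on the final step.
        return min(supply * rate, hard_cap - supply)

The point is the shape rather than the exact numbers: emission tracks utilisation instead of the calendar, while the hard ceiling bounds total dilution.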

Conclusion

Version 0.1 of the ICN tokenomics simulator confirms that a flat 1% monthly inflation cap is serviceable under steady growth but fragile under demand reversals. An adaptive ceiling tied to network capacity provides smoother price paths and healthier node economics.

Future work: Add Monte-Carlo price noise, integrate staking/locking mechanics, and expose parameters via Streamlit for real-time policy debates.

Call to Action

The simulation code is private for now, but I'm happy to walk interested readers through the mechanics or run bespoke scenarios—just drop me a note via the site's contact form or connect with me on LinkedIn.

Author's Note

Building this model in a week revealed just how intertwined supply policy and hardware economics really are. Feedback is welcome—especially from token-economists with stress-test horror stories!

Streamlining Data Operations: Automated DBT Documentation with Jenkins

Explore how integrating a DBT docs server with Jenkins can transform the documentation workflow in data projects, automating updates and deployment to an S3-hosted website...


Automating Documentation with DBT and Jenkins

Introduction

Hook: Imagine a world where your project's documentation updates itself, seamlessly and accurately, every time your data models change.

Overview: This article explores the powerful combination of DBT docs and Jenkins to automate the generation and deployment of documentation for data projects, ensuring it is always up-to-date and readily available as a hosted website.

Objective: By the end of this guide, you will learn how to set up and utilize a DBT docs server with Jenkins to automatically push updates to an S3 endpoint, effectively hosting your documentation online.

Background

DBT (Data Build Tool) is instrumental in transforming data in-warehouse and documenting the process, making data analytics work more transparent and manageable. Coupled with Jenkins, an automation server, the process of continuous integration and deployment extends to documentation, making it a pivotal part of development workflows.

Relevance: As data environments become increasingly complex, the need for reliable, scalable, and automated documentation systems becomes critical for efficient project management and compliance.

Challenges & Considerations

Problem Statement: Manually updating documentation can be time-consuming and prone to errors. Automating this process helps maintain accuracy but introduces challenges such as setup complexity and integration with existing CI/CD pipelines.

Ethical/Legal Considerations: It's important to ensure that automated processes comply with data governance policies and industry standards to avoid potential legal issues, especially when handling sensitive information.

Methodology

Tools & Technologies: This project utilizes DBT (Data Build Tool) for data transformation, Jenkins for continuous integration and deployment, and AWS S3 for hosting the generated documentation.

Step-by-Step Guide:

  1. Environment Setup: Start by setting up your development environment with the necessary tools including DBT and Jenkins. Ensure Python and the required DBT adapters are installed and configured.
  2. DBT Project Configuration: Configure your DBT project to connect to your data warehouse and set up the models that your documentation will cover. Utilize the dbt command line to run and test your models ensuring they compile and execute correctly.
  3. Automation with Jenkins: Set up a Jenkins job to automate the DBT tasks. This job will trigger the DBT commands to run the transformations, generate the documentation, and ensure everything is up to date.
  4. DBT Docs Generation: Use the 'dbt docs generate' command to create a comprehensive documentation site from your DBT models. This includes schema and data dictionary information, which DBT generates automatically from your model files.
  5. Hosting on AWS S3: Configure an AWS S3 bucket to host your DBT documentation. Set up the bucket for static website hosting and sync the generated DBT documentation to this bucket using Jenkins, which will execute AWS CLI commands to handle the upload (a sketch of such a build script follows this list).
  6. Access and Security: Implement necessary security measures to control access to the documentation. This includes setting up IP whitelisting and possibly integrating SSO (Single Sign-On) for secure and convenient access.
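
To make steps 3 to 5 concrete, below is a minimal sketch of a build script that a Jenkins job could invoke on each run. It assumes the dbt and AWS CLIs are available on the build agent; the bucket name is a placeholder, and a real pipeline might equally implement these steps as shell stages in a Jenkinsfile.

    #!/usr/bin/env python3
    """Build-and-publish script for a Jenkins job (illustrative sketch)."""
    import subprocess
    import sys

    DOCS_BUCKET = "s3://example-dbt-docs-bucket"   # placeholder bucket
    PROJECT_DIR = "."                              # path to the dbt project

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def main():
        # 1. Install packages, then run and test the models so broken docs never ship.
        run(["dbt", "deps", "--project-dir", PROJECT_DIR])
        run(["dbt", "build", "--project-dir", PROJECT_DIR])

        # 2. Generate the static documentation site into target/.
        run(["dbt", "docs", "generate", "--project-dir", PROJECT_DIR])

        # 3. Copy the files the docs site needs to the S3 website bucket
        #    (syncing the whole target/ folder also works).
        for name in ("index.html", "manifest.json", "catalog.json"):
            run(["aws", "s3", "cp", f"{PROJECT_DIR}/target/{name}",
                 f"{DOCS_BUCKET}/{name}"])

    if __name__ == "__main__":
        try:
            main()
        except subprocess.CalledProcessError as exc:
            sys.exit(exc.returncode)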

Tips & Best Practices: Maintain version control of your DBT models and Jenkins configuration so you can roll back changes if needed. Regularly update your documentation to reflect new changes in your data models and transformations. Always ensure that access to the S3 bucket is secure and monitored.

Results

Findings: Post-implementation, project documentation is more dynamic, accurate, and easier to access, significantly reducing manual oversight and updating tasks.

Analysis: The automation of documentation not only saves time but also enhances data model transparency and stakeholder trust.

Conclusion

Integrating DBT docs with Jenkins to automate documentation deployments into S3 has proven to be an effective strategy for maintaining up-to-date project documentation. This setup not only streamlines workflows but also ensures documentation accuracy and accessibility.

Future Directions: Further integration with other CI/CD tools and exploration of cloud-native solutions could enhance scalability and security.

Call to Action

We encourage data professionals and project managers to adopt these practices. Share your experiences or questions in the comments or on professional forums to foster a community of learning.

Author's Note

Personal Insight: Implementing this solution in my projects transformed how my team approaches documentation, making it a less daunting and more rewarding part of our process.

Contact Information: Feel free to connect with me on LinkedIn or via email to discuss this setup or share your insights.