
    Parquet File Format – Everything You Need to Know!



With the amount of data growing exponentially over the last few years, one of the greatest challenges has become finding the most optimal way to store various data flavors. Unlike in the (not so distant) past, when relational databases were considered the only way to go, organizations now want to perform analysis over raw data – think of social media sentiment analysis, audio/video files, and so on – which usually couldn't be stored in a traditional (relational) way, or storing it in a traditional way would require significant time and effort, which increases the overall time-for-analysis.

Another challenge was to somehow stick to a traditional approach of keeping the data stored in a structured way, but without the necessity to design complex and time-consuming ETL workloads to move this data into the enterprise data warehouse. Additionally, what if half of the data professionals in your organization are proficient with, let's say, Python (data scientists, data engineers), and the other half (data engineers, data analysts) with SQL? Would you insist that the "Pythonists" learn SQL? Or, vice versa?

Or, would you prefer a storage option that can play to the strengths of your entire data team? I have good news for you – something like this has already existed since 2013, and it's called Apache Parquet!
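To illustrate that last point, here is a minimal sketch (the file name, column names, and values are made up for illustration): the same Parquet file is written from Python with pandas and then queried with plain SQL through DuckDB, so both camps work against exactly the same data.

```python
import pandas as pd
import duckdb

# The "Pythonist" writes the data as a Parquet file...
sales = pd.DataFrame({
    "Product": ["T-Shirt", "Ball", "Socks"],
    "Country": ["USA", "Germany", "USA"],
    "Amount":  [19.99, 9.99, 4.99],
})
sales.to_parquet("sales.parquet")  # needs pyarrow (or fastparquet) installed

# ...and the SQL person queries the very same file, no ETL in between
result = duckdb.sql(
    "SELECT Country, SUM(Amount) AS total FROM 'sales.parquet' GROUP BY Country"
).df()
print(result)
```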

    Parquet file format in a nutshell

Before I show you the ins and outs of the Parquet file format, there are (at least) five main reasons why Parquet is considered a de facto standard for storing data nowadays:

• Data compression – by applying various encoding and compression algorithms, Parquet files offer reduced memory consumption
• Columnar storage – this is of paramount importance in analytic workloads, where fast data read operations are the key requirement. But, more on that later in the article…
• Language agnostic – as already mentioned previously, developers may use different programming languages to manipulate the data in a Parquet file
• Open-source format – meaning you are not locked in with a specific vendor
• Support for complex data types

    Row-store vs Column-store

We've already mentioned that Parquet is a column-based storage format. However, to understand the benefits of using the Parquet file format, we first need to draw the line between the row-based and column-based ways of storing data.

In traditional, row-based storage, the data is stored as a sequence of rows. Something like this:

Image by author

Now, when we are talking about OLAP scenarios, some of the common questions that your users may ask are:

• How many balls did we sell?
• How many users from the USA bought a T-shirt?
• What is the total amount spent by customer Maria Adams?
• How many sales did we have on January 2nd?

To be able to answer any of these questions, the engine must scan every row from the beginning to the very end! So, to answer the question: how many users from the USA bought a T-shirt, the engine has to do something like this:

Image by author

Essentially, we just need the information from two columns: Product (T-Shirts) and Country (USA), but the engine will scan all five columns! This is not the most efficient solution – I think we can agree on that…

Column store

Let's now examine how the column store works. As you may assume, the approach is 180 degrees different:

Image by author

In this case, each column is a separate entity – meaning, each column is physically separated from the other columns! Going back to our previous business question: the engine can now scan only those columns that are needed by the query (Product and Country), while skipping the scan of the unnecessary columns. And, in most cases, this should improve the performance of analytical queries.
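As a rough sketch of what this means in practice (assuming the sales.parquet file from earlier), a columnar reader such as pyarrow lets you request only the columns the query needs:

```python
import pyarrow.parquet as pq

# Read only the two columns the query actually needs;
# the other columns in the file are never touched
table = pq.read_table("sales.parquet", columns=["Product", "Country"])
print(table.column_names)  # ['Product', 'Country']
```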

Okay, that's fine, but the column store existed before Parquet, and it still exists outside of Parquet as well. So, what is so special about the Parquet format?

Parquet is a columnar format that stores the data in row groups

Wait, what?! Wasn't it complicated enough even before this? Don't worry, it's much easier than it sounds 🙂

Let's go back to our previous example and depict how Parquet will store this same chunk of data:

Image by author

Let's stop for a moment and explain the illustration above, as this is exactly the structure of the Parquet file (some additional things were intentionally omitted, but we will come back to explain them soon as well). Columns are still stored as separate units, but Parquet introduces an additional structure, called a row group.

Why is this additional structure super important?

You'll need to wait for an answer for a bit :). In OLAP scenarios, we are mainly concerned with two concepts: projection and predicate(s). Projection refers to the SELECT statement in SQL – which columns are needed by the query. Back to our previous example, we need only the Product and Country columns, so the engine can skip scanning the remaining ones.

Predicate(s) refer to the WHERE clause in SQL – which rows satisfy the criteria defined in the query. In our case, we are interested in T-Shirts only, so the engine can completely skip scanning Row group 2, where all the values in the Product column equal Socks!

Image by author
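Here is a minimal sketch of both concepts with pyarrow (the column names and the T-Shirt filter mirror the example above): the columns argument handles the projection, while the filters argument lets the reader use row-group statistics to skip row groups that cannot contain matching rows.

```python
import pyarrow.parquet as pq

table = pq.read_table(
    "sales.parquet",
    columns=["Product", "Country"],          # projection: SELECT Product, Country
    filters=[("Product", "=", "T-Shirt")],   # predicate: WHERE Product = 'T-Shirt'
)
# Row groups whose min/max statistics rule out "T-Shirt" are skipped entirely
```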

Let's briefly stop here, as I want you to realize the difference between the various types of storage in terms of the work that needs to be performed by the engine:

• Row store – the engine needs to scan all 5 columns and all 6 rows
• Column store – the engine needs to scan 2 columns and all 6 rows
• Column store with row groups – the engine needs to scan 2 columns and 4 rows

Obviously, this is an oversimplified example, with only 6 rows and 5 columns, where you will definitely not see any difference in performance between these three storage options. However, in real life, when you're dealing with much larger amounts of data, the difference becomes more evident.

Now, the fair question would be: how does Parquet "know" which row group to skip/scan?

A Parquet file contains metadata

This means that every Parquet file contains "data about data" – information such as the minimum and maximum values in a specific column within a certain row group. Furthermore, every Parquet file contains a footer, which keeps the information about the format version, schema information, column metadata, and so on. You can find more details about Parquet metadata types here.
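If you want to inspect this metadata yourself, here is a quick sketch with pyarrow (the file name is the sample file assumed above):

```python
import pyarrow.parquet as pq

pf = pq.ParquetFile("sales.parquet")

# Footer-level information
print(pf.metadata.num_row_groups, "row group(s)")
print(pf.schema_arrow)  # schema stored in the footer

# Per-column statistics of the first row group (min/max, null count, ...)
stats = pf.metadata.row_group(0).column(0).statistics
print(stats.min, stats.max)
```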

Important: In order to optimize performance and eliminate unnecessary data structures (row groups and columns), the engine first needs to "get acquainted" with the data, so it first reads the metadata. It's not a slow operation, but it still requires a certain amount of time. Therefore, if you're querying the data from multiple small Parquet files, query performance can degrade, because the engine has to read the metadata from each file. So, you would be better off merging multiple smaller files into one bigger file (but still not too big :)…

I hear you, I hear you: Nikola, what is "small" and what is "big"? Unfortunately, there is no single "golden" number here, but for example, Microsoft Azure Synapse Analytics recommends that the individual Parquet file should be at least a few hundred MBs in size.
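One way to do that compaction, sketched with pyarrow's dataset API (folder and file names are just placeholders):

```python
import pyarrow.dataset as ds
import pyarrow.parquet as pq

# Read many small Parquet files as one logical dataset...
small_files = ds.dataset("daily_exports/", format="parquet")

# ...and rewrite them as a single, larger file with reasonably sized row groups
pq.write_table(small_files.to_table(), "merged.parquet", row_group_size=1_000_000)
```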

    What else is in there?

    Here’s a simplified, high-level illustration of the Parquet file format:

Image by author

Can it be better than this? Yes, with data compression

Okay, we've explained how skipping the scan of unnecessary data structures (row groups and columns) may benefit your queries and increase overall performance. But, it's not only about that – remember when I told you at the very beginning that one of the main advantages of the Parquet format is the reduced memory footprint of the file? This is achieved by applying various compression algorithms.
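For instance, when writing a Parquet file you can pick the compression codec per file. A quick sketch reusing the sample file from above (the actual savings depend entirely on your data):

```python
import pandas as pd

df = pd.read_parquet("sales.parquet")  # the sample file assumed earlier

# Snappy (the default) favors speed; gzip usually produces smaller files
df.to_parquet("sales_snappy.parquet", compression="snappy")
df.to_parquet("sales_gzip.parquet", compression="gzip")
```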

I've already written about various data compression types in Power BI (and the Tabular model in general) here, so maybe it's a good idea to start by reading that article.

There are two main encoding types that enable Parquet to compress the data and achieve astonishing savings in space:

• Dictionary encoding – Parquet creates a dictionary of the distinct values in the column, and afterward replaces "real" values with index values from the dictionary. Going back to our example, this process looks something like this:
Image by author

You may think: why this overhead, when product names are quite short, right? Okay, but now imagine that you store the detailed description of the product, such as: "Long arm T-Shirt with application on the neck". And now imagine that you have this product sold a million times… Yeah, instead of having the value "Long arm…bla bla" repeated a million times, Parquet will store only the index value (an integer instead of text).
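You can see the effect of dictionary encoding yourself with a small sketch (the long description and the repetition count are made up for illustration; compression is switched off so that only the encoding makes the difference):

```python
import os
import pyarrow as pa
import pyarrow.parquet as pq

# One long description repeated a million times
table = pa.table(
    {"Description": ["Long arm T-Shirt with application on the neck"] * 1_000_000}
)

pq.write_table(table, "dict_on.parquet", use_dictionary=True, compression="NONE")
pq.write_table(table, "dict_off.parquet", use_dictionary=False, compression="NONE")

# The dictionary-encoded file should be dramatically smaller
print(os.path.getsize("dict_on.parquet"), os.path.getsize("dict_off.parquet"))
```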

Can it be better than THIS?! Yes, with the Delta Lake file format

Okay, what the heck is a Delta Lake format now?! This is an article about Parquet, right?

So, to put it in plain English: Delta Lake is nothing else but the Parquet format "on steroids". When I say "steroids", the main one is the versioning of Parquet files. It also stores a transaction log to enable tracking of all changes applied to the Parquet file. This is also known as ACID-compliant transactions.

Since it supports not only ACID transactions, but also time travel (rollbacks, audit trails, and so on) and DML (Data Manipulation Language) statements, such as INSERT, UPDATE and DELETE, you won't be wrong if you think of Delta Lake as a "data warehouse on the data lake" (who said: Lakehouse 😉😉😉). Examining the pros and cons of the Lakehouse concept is out of the scope of this article, but if you're curious to go deeper into this, I suggest you read this article from Databricks.
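If you want to try this without spinning up a Spark cluster, here is a minimal sketch using the open-source delta-rs Python bindings (the deltalake package; paths and data are placeholders):

```python
import pandas as pd
from deltalake import DeltaTable, write_deltalake

# Version 0: initial load
write_deltalake("sales_delta", pd.DataFrame({"Product": ["T-Shirt"], "Amount": [19.99]}))

# Version 1: overwrite with new data - the underlying Parquet files are kept,
# and the transaction log simply records a new table version
write_deltalake(
    "sales_delta",
    pd.DataFrame({"Product": ["Socks"], "Amount": [4.99]}),
    mode="overwrite",
)

# Time travel: read the table as of version 0
print(DeltaTable("sales_delta", version=0).to_pandas())

# The transaction log also doubles as an audit trail
print(DeltaTable("sales_delta").history())
```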

    Conclusion

We evolve! Same as us, the data is also evolving. So, new flavors of data required new ways of storing it. The Parquet file format is one of the most efficient storage options in the current data landscape, as it provides multiple benefits – both in terms of memory consumption, by leveraging various compression algorithms, and fast query processing, by enabling the engine to skip scanning unnecessary data.

Thanks for reading!



