Timescale

Software Development

New York, New York 11,054 followers

Timescale is the modern cloud platform built on PostgreSQL for time series, events, and analytics.

About us

Timescale is addressing one of the largest challenges (and opportunities) in databases for years to come: helping developers, businesses, and society make sense of the data that humans and their machines are generating in copious amounts. TimescaleDB is the only open-source time-series database that natively supports full SQL, combining the power, reliability, and ease of use of a relational database with the scalability typically seen in NoSQL systems. It is built on PostgreSQL and optimized for fast ingest and complex queries. TimescaleDB powers mission-critical applications, including industrial data analysis, complex monitoring systems, operational data warehousing, financial risk management, and geospatial asset tracking, across industries as varied as manufacturing, space, utilities, oil & gas, logistics, mining, ad tech, finance, telecom, and more. Timescale is backed by NEA, Benchmark, Icon Ventures, Redpoint Ventures, Two Sigma Ventures, and Tiger Global.

Documentation: https://docs.timescale.com
GitHub: https://github.com/timescale/timescaledb
Twitter: https://twitter.com/timescaledb

Industry
Software Development
Company size
51-200 employees
Headquarters
New York, New York
Type
Privately Held
Founded
2015
Specialties
RDBMS, OpenTelemetry, Observability, Promscale, Technology, PostgreSQL, SQL, Data Historian, Geospatial Data, Time-Series Data, Databases, IoT, Sensor Data, Metrics, Developer Community, Software Development, Open Source, Software, and Data Management

Locations

  • Primary

    335 Madison Ave.

    Floor 5, Suite E

    New York, New York 10017, US

Updates

  • Timescale

    PostgreSQL and pgvector: now faster than Pinecone, 75% cheaper, and 100% open source. Introducing pgvectorscale, an open-source PostgreSQL extension that builds on pgvector, enabling greater performance and scalability. Here’s how pgvectorscale helps pgvector outperform specialized vector databases like Pinecone: 1️⃣ StreamingDiskANN: a new vector search index that overcomes the limitations of in-memory indexes like HNSW by storing the index on disk, making it more cost-efficient to run and scale as vector workloads grow. Inspired by the DiskANN paper from Microsoft. 2️⃣ Statistical Binary Quantization (SBQ): developed by researchers at Timescale, this technique improves on standard binary quantization by boosting accuracy when quantization is used to reduce the space needed for vector storage. 3️⃣ Written in Rust, giving the PostgreSQL community a new way to contribute to vector support. 📈 The result? On our benchmark of 50 million Cohere embeddings (768 dimensions each), PostgreSQL with pgvector and pgvectorscale achieves 28x lower p95 latency and 16x higher query throughput compared to Pinecone for approximate nearest neighbor queries at 99% recall, all at 75% less cost when self-hosted on AWS EC2. We also tested it against Pinecone’s p2 high-performance index; see the blog post linked at the end of this post for full results (spoiler: it’s just as impressive). Pgvectorscale is open source under the PostgreSQL license and free for you to use on any PostgreSQL database for your AI projects. To get started, see the pgvectorscale GitHub repo: https://lnkd.in/ghXj2e-U Or try it on Timescale Cloud on any new database service. Eager to learn more about pgvectorscale and how it works? Head over to our blog post with all the details: https://lnkd.in/gcMcxrVb

    Pgvector Is Now Faster than Pinecone at 75% Less Cost

    timescale.com
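The binary quantization idea the post builds on can be sketched in a few lines of Python. This is a plain sign-based quantizer plus Hamming distance, not Timescale's Statistical Binary Quantization (which improves accuracy by choosing per-dimension cutoffs statistically); the function names and thresholds are illustrative only:

```python
def binary_quantize(vec, thresholds=None):
    """Collapse each dimension to one bit: 1 if the value exceeds the
    per-dimension threshold (0.0 by default), else 0."""
    if thresholds is None:
        thresholds = [0.0] * len(vec)
    bits = 0
    for x, t in zip(vec, thresholds):
        bits = (bits << 1) | (1 if x > t else 0)
    return bits

def hamming(a, b):
    """Distance between two quantized codes = number of differing bits."""
    return bin(a ^ b).count("1")

# Two similar vectors quantize to nearby codes...
q1 = binary_quantize([0.9, -0.2, 0.4, -0.7])
q2 = binary_quantize([0.8, -0.1, 0.3, -0.9])
# ...while a dissimilar vector lands far away.
q3 = binary_quantize([-0.9, 0.2, -0.4, 0.7])
print(hamming(q1, q2), hamming(q1, q3))  # 0 4
```

Compressing each float to a single bit shrinks storage by ~32x, at the cost of recall; the statistical thresholding the post mentions is one way to claw that accuracy back.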

  • Timescale reposted this

    View profile for 🔥 Avthar Sewrathan

    AI and Developer Product Leader | Product Marketing | Developer Relations | I talk about using AI, vector databases, RAG, search, agents and of course PostgreSQL

    📢[𝐋𝐢𝐯𝐞 𝐬𝐞𝐬𝐬𝐢𝐨𝐧] 𝐁𝐮𝐢𝐥𝐝𝐢𝐧𝐠 𝐀𝐈 𝐚𝐩𝐩𝐥𝐢𝐜𝐚𝐭𝐢𝐨𝐧𝐬 𝐰𝐢𝐭𝐡 𝐏𝐨𝐬𝐭𝐠𝐫𝐞𝐒𝐐𝐋: 𝐀 𝐛𝐮𝐬𝐲 𝐝𝐞𝐯𝐞𝐥𝐨𝐩𝐞𝐫'𝐬 𝐠𝐮𝐢𝐝𝐞 ➡️ Register to attend live or get the recording: https://lu.ma/0v7nwfxd Postgres is all you need to build state-of-the-art AI applications. Join me today for a "state of the union" of developing AI applications with PostgreSQL. For intermediate folks, you'll learn ideas and tactics to improve your AI application. For beginners, you'll learn enough to start building today. It's happening at 12 PM ET / 9 AM PT / 6 PM CET (register even if you can't make it live). Here's a sneak peek of what we'll cover: #pgvector #vectorsearch #rag #aiagents #postgresql

  • Timescale

    🚨 Only 2 hours left! 🚨 🗓 Join us for Building AI Applications with PostgreSQL: A Busy Developer's Guide to discover best practices, tools, and how to avoid common pitfalls in AI development with PostgreSQL! ⏰ When: Today at 13:00 ET | 10:00 PT | 18:00 CET 🎤 Speaker: 🔥 Avthar Sewrathan, PM AI & Vector, Timescale 💡 Learn why PostgreSQL is ideal for AI and watch live demos on Search, RAG, Agents, and Text to SQL! Can’t attend live? No worries—register now to receive the recording via email. https://lu.ma/0v7nwfxd

    Timescale

    PostgreSQL is rapidly becoming the go-to database for AI applications, thanks to its solid relational core and extensions like pgvector, pgvectorscale, and pgai. It's all you need to build cutting-edge AI solutions. 🤖 🤔 But how do you make the most of PostgreSQL for AI? What are the best practices, common pitfalls, and tools to speed up your development? 🌟 Join us for a live webinar on the latest in AI development with PostgreSQL. 👇 ✨ Building AI applications with PostgreSQL: A busy developer's guide ✨ ⏰ When: Thursday, September 19, 2024, at 13:00 ET | 10:00 PT | 18:00 CET 🎤 Speaker: 🔥 Avthar Sewrathan, PM, AI and Vector, Timescale. In this session (plus Q&A), you'll learn: 🤷 Why PostgreSQL is ideal as a vector database 🧑‍💻 AI applications you can build: Search, RAG, Agents, Text to SQL (with live demos) 🔑 Key extensions for building AI applications with PostgreSQL: pgvector, pgvectorscale, and pgai 💪 Can't attend live? Register to receive the recording via email. 🔗 Link in the comments below!

  • Timescale reposted this

    View profile for 🐯 Michael Freedman

    Timescale cofounder & CTO | Princeton CS Professor

    Postgres at petabyte scale, ingesting almost a trillion metrics per day? 😱 Our multi-tenant Timescale Insights product is powered by a standard Timescale database service in our cloud, the exact same as our customers can use. - 800 billion metrics per day - 100 trillion metrics recorded - >1 PB of data With Timescale, build powerful applications on #PostgreSQL. 💥 But how are those storage volumes possible, or cost-effective? Our hybrid cloud-native storage architecture, which tiers data from our high-performance row-columnar engine, hyperstore, to Parquet files on bottomless, low-cost S3. A lot of optimization went into supporting even point queries against this S3 data. More on that soon!

    Timescale

    ⚖️ Scaling PostgreSQL to Petabyte Scale—All on a Single Instance ⚖️ Last year, we launched our Insights product backed by a single Timescale instance handling hundreds of terabytes of data and billions of records daily. Now, a year later, we are still using the same instance, but check out the numbers below! 🧨 The Power Behind Insights Our Insights feature offers detailed query performance stats: timing, memory, I/O usage, and more. This real-time monitoring tool helps users understand their databases better, surfacing the under-performing queries that are dragging their system down. We started out with a dozen metrics per query; we now track over 100 for even deeper analysis and optimization. 🫶 The Growth As customer needs grow and more queries are run, the volume of data we ingest has skyrocketed. But thanks to Timescale's tiered storage architecture, hypertables, compression, and continuous aggregates, our instance has kept up. 📈 The Numbers Last year: 🔹 350+ TB stored 🔹 100 billion metrics/day This year: 🔹 1+ PB stored 🔹 800+ billion metrics/day 🔹 100+ trillion metrics ingested since launch And, yes, all of this runs on a Timescale instance that you could deploy yourself. It's proof that you don't need complex setups to scale PostgreSQL—you just need the right mindset and the right tools. Learn more about scaling PostgreSQL to petabyte scale using Timescale Cloud. 👇👇👇 https://lnkd.in/g7SYMCh5

  • Timescale reposted this

    View profile for Francesco Tisiot

    Field CTO @ Aiven | Data and AI | Open Source | Streaming | Databases

    🗣️ We're just over halfway through the Timescale State of PostgreSQL survey, and there's still time to share your thoughts! The survey closes Sept 30th. 👉 Take the survey: https://lnkd.in/dfHrTBPN #PostgreSQL #CommunitySurvey 🐘 Aiven

    State of PostgreSQL 2024 survey 🐘

    https://typeform.com

  • Timescale

    🏃💨 How We Made PostgreSQL Upserts 3️⃣0️⃣0️⃣✖️ Faster on Compressed Data 🏃💨 At Timescale, we're always listening to our users. When Ndustrial, an industrial energy optimization platform, hit performance bottlenecks with PostgreSQL upserts on compressed data in TimescaleDB, we knew we had to step in. 🤗 🤔 Understanding the Problem Upserts (INSERT ... ON CONFLICT) are powerful but can get tricky with compressed data. Normally, PostgreSQL uses a unique index to check for conflicts during an upsert; if a conflict is found, it updates the existing row instead of inserting a new one. In TimescaleDB, however, compressed data complicates this process. Compressed hypertables store data in smaller, compressed batches, which lack the same indexing as uncompressed tables. For Ndustrial, which was upserting data into a staging table and then batch-writing to compressed tables, performance tanked: they saw delays of up to seven minutes for 10,000 rows with just 10 conflicts, due to the need to decompress large batches. 🧑‍💻 The Technical Solution The answer? Index scans on compressed data. TimescaleDB already creates B-tree indexes on segment_by columns when compressing data, but the upsert process wasn't taking advantage of them; instead, it relied on sequential scans to locate conflicting rows, a slow process for high-cardinality datasets like Ndustrial's. By updating the upsert mechanism to use the existing index, we can quickly identify the relevant compressed batches, drastically cutting the time required for conflict resolution. The system now only decompresses the batches that contain potential conflicts, falling back to sequential scans only if no index is available (a rare scenario). 🤯 The Results: 300x Faster Upserts! The impact of this optimization was dramatic: after the change, Ndustrial saw a 300x speedup in their upserts. A task that previously took over 427 seconds (7+ minutes) now completes in just over 1 second. Here's the comparison: ⏪ Before (v2.14.2): upserting 10,000 rows with 10 conflicts took 427,580 ms. ⏩ After (v2.16): the same operation completes in just 1,149 ms. 🫂 Why It Matters This optimization unlocked a whole new level of efficiency for Ndustrial and highlights how seemingly small changes can have massive performance benefits. By leveraging existing indexes, we've not only improved upsert performance but also future-proofed TimescaleDB for high-cardinality workloads. As Ndustrial put it: "We've definitely appreciated working closely with Timescale on this issue and all the work they've been putting into the enhancements!" A faster, more efficient database equals happy customers and happy developers. 🚀 Read how we made PostgreSQL upserts 300x faster on compressed data: https://lnkd.in/gAmn8H4y

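For readers unfamiliar with the upsert pattern the post describes, INSERT ... ON CONFLICT can be tried in miniature with SQLite, which supports the same PostgreSQL-derived syntax. A toy sketch only: the table and column names are made up, and this shows the conflict-resolution semantics, not TimescaleDB's compression behavior:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE readings (
        sensor_id INTEGER,
        ts        TEXT,
        value     REAL,
        PRIMARY KEY (sensor_id, ts)  -- unique index used to detect conflicts
    )
""")

upsert = """
    INSERT INTO readings (sensor_id, ts, value)
    VALUES (?, ?, ?)
    ON CONFLICT (sensor_id, ts) DO UPDATE SET value = excluded.value
"""
conn.execute(upsert, (1, "2024-09-01T00:00", 10.0))  # plain insert
conn.execute(upsert, (1, "2024-09-01T00:00", 12.5))  # conflict -> update in place

rows = conn.execute("SELECT * FROM readings").fetchall()
print(rows)  # [(1, '2024-09-01T00:00', 12.5)]
```

The unique index on (sensor_id, ts) is what makes conflict detection cheap; the optimization in the post is essentially about giving TimescaleDB's compressed batches access to an analogous index instead of scanning sequentially.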
  • Timescale reposted this

    View profile for 🐯 Ajay Kulkarni

    Co-Founder/CEO at Timescale (timescale.com/careers)

    The Timescale team does it again. 𝗜𝗻𝘁𝗿𝗼𝗱𝘂𝗰𝗶𝗻𝗴 𝗖𝗼𝗺𝗽𝗿𝗲𝘀𝘀𝗶𝗼𝗻 𝗧𝘂𝗽𝗹𝗲 𝗙𝗶𝗹𝘁𝗲𝗿𝗶𝗻𝗴 𝗶𝗻 𝗧𝗶𝗺𝗲𝘀𝗰𝗮𝗹𝗲𝗗𝗕 𝟮.𝟭𝟲 With this release, our columnar compression engine takes another big leap forward. You can now get 500x faster updates and deletes and 10x faster upserts—all while continuing to enjoy the storage savings and performance gains of compression. https://lnkd.in/gN8RccQ4 P.S. Stay tuned—there’s even more to come. #LaunchWeek

