From c718dade35741fb2214df05681b782ccff5e9f6c Mon Sep 17 00:00:00 2001 From: Tudor Golubenco Date: Mon, 11 Nov 2024 19:14:05 +0200 Subject: [PATCH] More removed blog post --- bulletproof-docs-links.mdx | 565 ------------------ ...urces-xata-airbyte-zapier-integrations.mdx | 61 -- hackmamba.mdx | 34 -- jamstack-a-deep-dive.mdx | 94 --- new-starters.mdx | 51 -- next-era-of-databases.mdx | 64 -- 6 files changed, 869 deletions(-) delete mode 100644 bulletproof-docs-links.mdx delete mode 100644 connecting-data-sources-xata-airbyte-zapier-integrations.mdx delete mode 100644 hackmamba.mdx delete mode 100644 jamstack-a-deep-dive.mdx delete mode 100644 new-starters.mdx delete mode 100644 next-era-of-databases.mdx diff --git a/bulletproof-docs-links.mdx b/bulletproof-docs-links.mdx deleted file mode 100644 index db292c60..00000000 --- a/bulletproof-docs-links.mdx +++ /dev/null @@ -1,565 +0,0 @@ ---- -title: 'A journey to bulletproof links in documentation' -description: 'Learn how we solved the issue of broken internal links in our documentation, once and for all.' -image: - src: https://raw.githubusercontent.com/xataio/mdx-blog/main/images/bulletproof-docs-links.png - alt: Xata -author: Fabien Bernard -date: 09-05-2022 -published: true -slug: bulletproof-docs-links ---- - -At Xata, we are currently in closed beta. To support this, we want to have the best documentation we can. This, of course, is easier said than done, so we are experimenting by moving things around, listening to user feedback, and so on. - -Our documentation is written in [Markdown](https://en.wikipedia.org/wiki/Markdown), served by [Next.js](https://nextjs.org). 
This has a few advantages: - -- we have total freedom in the design, -- we can fetch Markdown documents from GitHub to keep the documentation of a given product close to its code, -- we can have custom components: graphs, Twitter cards, etc. - -So far, we had no way to make sure that links between pages were working and that they didn’t just 404. After moving things around, even after checking and rechecking everything manually… **we had 28 broken links in our production documentation.** - - - -This is fine! We all make mistakes, but let’s learn from this together! - -## The plan - -The first step is to list our requirements so we have our goal well defined. We want quick feedback if something gets broken, so we can fix it before merging to production. For that, we need to be able to run the docs consistently in different environments: - -- local -- preview/staging -- production - -With this in mind, one constraint immediately becomes clear: this would be way easier if the link checker didn’t need to run on a server. - -If we want to avoid a server, we need to extract two pieces of information: - -- the internal links in our markdown files -- the directory structure. - -## The Proof of Concept - -To achieve our final goal, we need a reliable way to extract links from our existing documentation and see if each link _actually goes somewhere_. This is a lot of files, with a lot of links. To simplify our problem, let’s omit the real markdown documents for now and focus on the link extraction part. - -> Our docs are at https://xata.io/docs by the way, if you’d like to explore them and use Xata yourself. - -1. First, we need to read, parse, and understand a Markdown document. Our parser of choice is `unified`. It returns an **Abstract Syntax Tree (AST)**. Let’s give it a Markdown document and see what happens. 
- -```tsx
import { describe, expect, it } from 'vitest';

import { unified } from 'unified';
import remarkParse from 'remark-parse';
import remarkStringify from 'remark-stringify';

describe('markdown link checker', () => {
  it('should extract all links from a given document', async () => {
    const document = `
# Title

this is a paragraph with [a link](/rest-api/get)

## Another title

and another [link](/rest-api/delete)
`;

    await unified()
      .use(remarkParse)
      .use(() => {
        return function transform(tree) {
          console.log(tree);
        };
      })
      .use(remarkStringify)
      .process(document);

    // Intentionally failing, so vitest prints the `console.log` output
    expect(true).toEqual(false);
  });
});
``` - -When we `console.log` the output, we get this: - - - -And just like this, we can see our markdown content in a structured way, cool, no? (we can also use [https://astexplorer.net/](https://astexplorer.net/) to explore our AST) - -2. Extract the links - -To do this, we can use `unist-util-visit`, which gives us a very convenient and type-safe way to “visit” our AST. - -What does “visit our AST” mean? Every time we have a node with `type === "link"`, even very deep inside `children`, a function is called with this node as an argument! This function is called a **visitor function**. Good news, this is exactly what we want! 
😀 - -Instead of using our previous raw `tree` object and iterating through every node looking for those `link` objects, we can write this: - -```tsx
import { describe, expect, it } from 'vitest';

import { unified } from 'unified';
import { visit } from 'unist-util-visit';
import remarkParse from 'remark-parse';
import remarkStringify from 'remark-stringify';

describe('markdown link checker', () => {
  it('should extract all links from a given document', async () => {
    const document = `
# Title

this is a paragraph with [a link](/rest-api/get)

## Another title

and another [link](/rest-api/delete)
`;

    const links: string[] = [];

    await unified()
      .use(remarkParse)
      .use(() => {
        return function transform(tree) {
          // Let's extract every link!
          visit(tree, 'link', (linkNode) => {
            links.push(linkNode.url);
          });
        };
      })
      .use(remarkStringify)
      .process(document);

    expect(links).toEqual(['/rest-api/get', '/rest-api/delete']);
  });
});
``` - -That’s essentially all we need for a proof of concept (POC). We are now more confident that we should be able to extract every link from our documentation. Let’s try with some real data! - -## The real deal - -First, we need a way to load all our Markdown files. Luckily, [npm](https://www.npmjs.com) already has a solution for us. 
😀 - -```tsx
import { describe, expect, it } from 'vitest';

import { unified } from 'unified';
import { visit } from 'unist-util-visit';
import remarkParse from 'remark-parse';
import remarkStringify from 'remark-stringify';
import { getAllFiles } from 'get-all-files';
import { readFile } from 'fs/promises';

describe('markdown link checker', async () => {
  for await (const filename of getAllFiles('./content')) {
    if (!filename.endsWith('.md')) continue;
    it(filename, async () => {
      const document = await readFile(filename, 'utf-8');
      const links: string[] = [];

      await unified()
        .use(remarkParse)
        .use(() => {
          return function transform(tree) {
            visit(tree, 'link', (linkNode) => {
              links.push(linkNode.url);
            });
          };
        })
        .use(remarkStringify)
        .process(document);

      // Intentionally failing: the test output will list every extracted link
      expect(links).toEqual([]);
    });
  }
});
``` - -We are iterating on every file from the `/content` directory and reusing our previous logic, replacing the `document` with our actual markdown content! - -And… it works! Not useful yet, but we are going in the right direction! - - - -This is also a nice opportunity for us to see what links we actually have! We can already spot some patterns that we need to deal with as external links (example: [`https://stackoverflow.com/questions/4423061/how-can-i-view-http-headers-in-google-chrome`](https://stackoverflow.com/questions/4423061/how-can-i-view-http-headers-in-google-chrome)) and links with anchors (example: `/cli/getting-started#code-generation`). - -First, let’s exclude external links; they are out of our scope since we’re not interested in _other people’s 404s._ We may add this later since we don’t want to link to broken content in the longer term, but for now **scope hammering keeps us focused**. - -```tsx
if (linkNode.url.startsWith('/')) {
  links.push(linkNode.url);
}
``` - -Let’s improve the output of our test: we want one `describe` per Markdown document, and one `it` per link. 
This is our target output (for now): - - - -So we have: - -```tsx
describe(filename, async () => {
  // ...
  links.forEach((link) => {
    it(`should have "${link}" defined`, () => {
      expect("todo").toBe("done");
    });
  });
});
``` - -And to finish this part, we need to check if the file exists; `fs.existsSync` will do the job: - -```tsx
import { describe, expect, it } from 'vitest';

import { unified } from 'unified';
import { visit } from 'unist-util-visit';
import remarkParse from 'remark-parse';
import remarkStringify from 'remark-stringify';
import { getAllFiles } from 'get-all-files';
import { readFile } from 'fs/promises';
import { existsSync } from 'fs';

describe('markdown link checker', async () => {
  for await (const filename of getAllFiles('./content')) {
    if (!filename.endsWith('.md')) continue;
    describe(filename, async () => {
      const document = await readFile(filename, 'utf-8');
      const links: string[] = [];

      await unified()
        .use(remarkParse)
        .use(() => {
          return function transform(tree) {
            visit(tree, 'link', (linkNode) => {
              if (linkNode.url.startsWith('/')) {
                links.push(linkNode.url);
              }
            });
          };
        })
        .use(remarkStringify)
        .process(document);

      links.forEach((link) => {
        it(`should have "${link}" defined`, () => {
          expect(existsSync(`./content${link}.md`)).toBeTruthy();
        });
      });
    });
  }
});
``` - -### Dealing with Anchors - -Now, this is where, personally, I started by removing the anchor and calling it done. - -```tsx
it(`should have "${link}" defined`, () => {
  const [path] = link.split('#');
  expect(existsSync(`./content${path}.md`)).toBeTruthy();
});
``` - -After a coffee and some time to think, I realized—we can do better! And also, I’m sure we have some broken anchors. (Spoiler alert, I found one 😅) - -The first problem is that we need to have a dictionary of all anchors per document. So far, our AST logic is in the middle of my unit test. Time to refactor! 
- -Let’s extract our current logic into a function: - -```tsx
/**
 * Retrieve useful information from a markdown file.
 *
 * @param path file path
 */
async function parseMarkdown(path: string) {
  const document = await readFile(path, 'utf-8');
  const links = new Set<string>();

  await unified()
    .use(remarkParse)
    .use(() => {
      return function transform(tree) {
        visit(tree, 'link', (linkNode) => {
          if (linkNode.url.startsWith('/')) {
            links.add(linkNode.url);
          }
        });
      };
    })
    .use(remarkStringify)
    .process(document);

  return {
    links
  };
}
``` - -Let’s analyze what this method is doing in plain English: - -- We’re taking a `path: string` as input -- We’re reading the content of the file that has this `path` -- We’re instantiating a `links` set, ready to be filled -- We have our AST visitor that: - - on each node that fulfills the condition `type === "link"` - - if the url starts with `/` (so it’s an internal link), we add it to `links` -- We return the consolidated `links` in an object - -Now, we can extract the `anchors` following the same pattern. - -```tsx {4,16-20,27}
async function parseMarkdown(path: string) {
  const document = await readFile(path, "utf-8");
  const links = new Set<string>();
  const anchors = new Set<string>();

  await unified()
    .use(remarkParse)
    .use(() => {
      return function transform(tree) {
        visit(tree, "link", (linkNode) => {
          if (linkNode.url.startsWith("/")) {
            links.add(linkNode.url);
          }
        });

        visit(tree, "heading", (headingNode) => {
          if (headingNode.children[0].type === "text") {
            anchors.add(kebab(headingNode.children[0].value));
          }
        });
      };
    })
    .use(remarkStringify)
    .process(document);

  return {
    anchors,
    links,
  };
}
``` - -In Markdown, an anchor is any title converted to kebab case. For example, if I have a title `## Create Table` in `/getting-started`, I can point to `/getting-started#create-table`. - -Time to try our brand new function! 
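As a quick aside, the `kebab` helper from the `case` package does the heavy lifting for anchors. Here is a simplified stand-in, just to make the heading-to-anchor conversion concrete (the real implementation also handles camelCase splitting and more punctuation edge cases):

```tsx
// Simplified kebab-case conversion: keep the alphanumeric word runs,
// join them with dashes, and lowercase the result.
function kebab(value: string): string {
  return value
    .trim()
    .split(/[^a-zA-Z0-9]+/) // split on any run of non-alphanumeric characters
    .filter(Boolean) // drop empty fragments caused by leading/trailing separators
    .join('-')
    .toLowerCase();
}

console.log(kebab('Create Table')); // create-table
```

With this, `## Create Table` and `## Create table` both map to the same `create-table` anchor.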
Remember, we are just doing small steps—no need to run. - -```tsx
import slash from 'slash';

type FilePath = string;

describe('markdown link checker', async () => {
  // Collect all the data
  const pages = new Map<FilePath, { anchors: Set<string>; links: Set<string> }>();

  for await (const filename of getAllFiles('./content')) {
    if (!filename.endsWith('.md')) continue;

    pages.set(
      slash(filename)
        .replace(/^\.\/content/, '') // remove `/content`
        .slice(0, -3), // remove `.md`
      await parseMarkdown(filename)
    );
  }

  console.log(pages);
});
``` - -This test yields this output: - - - -Again, since we’re not worried about the entire problem, we can take our time to clean and prepare our object for the next step: - -- Use `slash` so we don’t have those nasty backslashes (windows… my old friend…) -- Remove the `/content` and `.md` from the `path` so we have the same pattern as `anchor` - -We can also spot a little problem here. Did you see it? The `sidebar-position-5\nsidebar-label-api-keys` entry is not a heading! It’s a `yaml` node and needs to be parsed with the `remarkFrontmatter` plugin! - -The source: - -```md
---
sidebar_position: 5
sidebar_label: API Keys
---
# API Keys
``` - -We need to add `remarkFrontmatter` to the stack: - -```tsx
import remarkFrontmatter from 'remark-frontmatter';

// ...
await unified()
  .use(remarkParse)
  .use(remarkFrontmatter) // <- Just here
  .use(() => {
    /* ... */
  });
``` - -## The wrap-up - -This is the final version (the one used in our actual documentation repository!) 
- -```tsx
import { describe, expect, it } from 'vitest';
import { readFile } from 'fs/promises';
import { unified } from 'unified';
import { visit } from 'unist-util-visit';
import remarkParse from 'remark-parse';
import remarkStringify from 'remark-stringify';
import remarkFrontmatter from 'remark-frontmatter';
import { getAllFiles } from 'get-all-files';
import { kebab } from 'case';
import slash from 'slash';

type FilePath = string;

describe('markdown link checker', async () => {
  // 1. Collect data from `/content`
  const pages = new Map<FilePath, { anchors: Set<string>; links: Set<string>; isRef: boolean }>();

  for await (const filename of getAllFiles('./content')) {
    if (!filename.endsWith('.md')) continue;

    pages.set(
      slash(filename)
        .replace(/^\.\/content/, '') // remove `/content`
        .slice(0, -3), // remove `.md`
      await parseMarkdown(filename)
    );
  }

  // 2. Generate unit tests
  Array.from(pages.entries()).forEach(([page, def]) => {
    if (def.links.size === 0) return;
    describe(page, async () => {
      Array.from(def.links.values()).forEach((link) => {
        it(`should have ${link} defined`, async () => {
          const [path, anchor] = link.split('#');
          expect(pages.has(path)).toBeTruthy();
          if (anchor && !pages.get(path)!.isRef) {
            expect(pages.get(path)!.anchors.has(anchor)).toBeTruthy();
          }
        });
      });
    });
  });
});

/**
 * Retrieve useful information from a markdown file.
 *
 * @param path file path
 */
async function parseMarkdown(path: string) {
  const document = await readFile(path, 'utf-8');
  const links = new Set<string>();
  const anchors = new Set<string>();
  let isRef = false; // `true` if the markdown content is `See …`

  await unified()
    .use(remarkParse)
    .use(remarkFrontmatter)
    .use(() => {
      return function transform(tree) {
        visit(tree, 'link', (linkNode) => {
          if (linkNode.url.startsWith('/') && !linkNode.url.startsWith('/api-reference')) {
            links.add(linkNode.url);
          }
        });

        visit(tree, 'heading', (headingNode) => {
          if (headingNode.children[0].type === 'text') {
            anchors.add(kebab(headingNode.children[0].value));
          }
        });

        visit(tree, 'paragraph', (paragraphNode) => {
          if (
            paragraphNode.children.length === 1 &&
            paragraphNode.children[0].type === 'text' &&
            paragraphNode.children[0].value.startsWith('See ')
          ) {
            isRef = true;
          }
        });
      };
    })
    .use(remarkStringify)
    .process(document);

  return {
    anchors,
    links,
    isRef
  };
}
``` - -A few highlights: - -- We don’t need `fs.existsSync` anymore; we already have everything we need! -- We ignore `/api-reference` links: they are generated from an external document at build time, so they’re out of our current scope. -- We added one last concept, `isRef`. This is again a very specific edge case that I don’t want to deal with: we have a part of our documentation fetched from GitHub with the following pattern: - - ```markdown - See \[link] - ``` - -During this journey, I spotted 1 broken link and 1 broken anchor, but more importantly, I’m way more confident that we will not ship broken links in our documentation anymore. - -**This is what’s most important to us as developers: confidence in our code.** Through these tests and this approach, we have a little more of that with our documentation. - -## The Conclusion - -I hope you learned as much as I did! Please borrow ideas & code from this article and iterate on it. 
Unit tests are sometimes seen as something that slows us down or as a boring chore, but I personally had a lot of fun playing with `unified` and Markdown’s AST, and I’m 100% convinced that this was not a waste of time—you can’t imagine how much time I spent on the documentation clicking every link, _and I still missed 2 of them!_ This 100-line implementation is way more efficient than my manual QA 😅. - -Happy hacking! diff --git a/connecting-data-sources-xata-airbyte-zapier-integrations.mdx b/connecting-data-sources-xata-airbyte-zapier-integrations.mdx deleted file mode 100644 index 9909f184..00000000 --- a/connecting-data-sources-xata-airbyte-zapier-integrations.mdx +++ /dev/null @@ -1,61 +0,0 @@ ---- -title: 'Connecting data sources to Xata with Airbyte and Zapier integrations' -description: 'Effortlessly automate data ingestion with Xata integrations' -image: - src: https://raw.githubusercontent.com/xataio/mdx-blog/main/images/zapier-airbyte-xata-integration.png - alt: Xata, Airbyte, and Zapier integration -author: Kostas Botsas -published: true -date: 07-06-2023 -tags: ['integrations', 'airbyte', 'zapier'] -slug: connecting-data-sources-xata-airbyte-zapier-integrations ---- - -We are thrilled to announce the release of two new key integrations for Xata with [Airbyte](https://docs.airbyte.com/integrations/destinations/xata) and [Zapier](https://zapier.com/apps/xata/integrations). Alongside the [TypeScript](/docs/sdk/typescript/overview) and [Python](/docs/sdk/python/overview) SDKs, Xata users now have even more convenient ways to connect a vast number of data sources, making it easy to set up seamless workflows and unlock new possibilities for data management. - -## Simplifying data ingestion with Airbyte - -[Airbyte](https://airbyte.com/), an open-source data integration engine that offers hundreds of connectors for data warehouses and databases, has gained popularity for its seamless integration and data syncing capabilities. 
Xata's integration with Airbyte offers a streamlined data ingestion process from any Airbyte input source directly into your Xata database. - -Whether you want to import data from databases, cloud-based services or APIs, the Airbyte integration simplifies and streamlines the process. In just a few steps, you can configure data connections, replication streams and schedules to effortlessly replicate data into Xata. - - - -## Empowering automated workflows with Zapier - -[Zapier](https://zapier.com/) is a leading automation platform enabling workflows that seamlessly connect a vast number of applications and services. Zapier offers a library of pre-built integrations that enable task automation between different systems, while users can also mix and match Triggers and Actions to create custom workflows that suit their needs. - -With Xata's integration, you can set up automated workflows (Zaps) that move data from other apps you use every day into Xata, without writing any code. For example, you can automatically sync content from Google Docs and Sheets, or even replicate Postgres rows. The possibilities are virtually limitless and the setup process is quick and simple because Zapier provides a user-friendly interface for mapping fields from the input app to the corresponding columns in your Xata database. - - - -## How to get started - -Xata’s documentation pages cover everything you need to jump right into these new integrations: - -- [Airbyte](https://xata.io/docs/integrations/airbyte) -- [Zapier](https://xata.io/docs/integrations/zapier) - -Whether you are a data analyst, a business intelligence professional, or a developer, Xata's Airbyte and Zapier integrations are designed to make your life easier by bringing an automated data management experience. 
This means less time spent on setting up data feeds, allowing you to focus on the value of Xata’s features for [querying](/docs/sdk/get), [searching](/docs/sdk/search) and [asking AI](/docs/sdk/ask) questions on your own data. - -## Next steps with data pipelines in Xata - -We are constantly expanding Xata’s connectors and SDK ecosystems, so watch this space for new partnerships, integrations, language client support and tooling to help with moving data in and out of Xata. - -**A hint on what’s coming next:** While there is already [CSV import](https://xata.io/docs/recipes/import-data#import-a-csv-file) capability in the Xata CLI, our team is working on an improved version that's fully integrated into the Web UI, which will make the CSV data loading experience even smoother. - -To supercharge your data workflows, sign up for [Xata](https://app.xata.io/) today and explore the endless possibilities! If you’d like to chat more or have any questions, come find us on [Discord](https://xata.io/discord) or submit a [support request](https://support.xata.io/hc/en-us/requests/new). diff --git a/hackmamba.mdx b/hackmamba.mdx deleted file mode 100644 index b43fe938..00000000 --- a/hackmamba.mdx +++ /dev/null @@ -1,34 +0,0 @@ ---- -title: 'Hackmamba Jamstack Content Hackathon 2.0' -description: 'Two full weeks of learning, writing, and networking to support the dev community.' -image: - src: https://raw.githubusercontent.com/xataio/mdx-blog/main/images/hackmamba.png - alt: Winners -author: Fabien Bernard -date: 12-12-2022 -tags: ['hackathon'] -published: true -slug: hackmamba ---- - -[Hackmamba](https://hackmamba.io/) is a great resource for creating technical content and strategy. We were excited to partner with them in their 2022 hackathon focused on developing participants’ technical writing skills. - -Learning, writing, and networking for two full weeks! What more could you ask for? This latest Hackmamba Jamstack Content Hackathon 2.0 was quite a journey. 
- -The rules were simple: build whatever you want using Xata and Cloudinary, then write a blog article about it. - -This was a vibrant experience for us at Xata, since we also launched the product the same week as the hackathon. The Hackmamba community and participants were wonderful and built some great apps and content on top of Xata. - -Here are the winners: - -1. Gift Uhiene: https://dev.to/hackmamba/build-a-full-stack-jamstack-application-with-xata-cloudinary-and-nextjs-50pd -2. Johnpaul Eze: https://dev.to/hackmamba/modern-e-commerce-with-xata-and-cloudinary-foc -3. Ubaydah Abdulwasiu: https://dev.to/hackmamba/how-to-build-an-online-library-in-nextjs-with-xata-and-cloudinary-26b4 - -Congratulations to them and all the participants! 🎉 🎉 - -You can find all published posts at https://dev.to/hackmamba. - -These articles are great if you're looking for some good examples to get started with Xata. If you're looking to try Xata out as the year winds down, be sure to enter your app in the [Xata Challenge](https://xata.io/challenge)! - -Chat with us on [Discord](https://xata.io/discord) if you have any questions or simply want to talk data. diff --git a/jamstack-a-deep-dive.mdx b/jamstack-a-deep-dive.mdx deleted file mode 100644 index 8b553110..00000000 --- a/jamstack-a-deep-dive.mdx +++ /dev/null @@ -1,94 +0,0 @@ ---- -title: 'Jamstack: a deep dive' -description: 'Explore Jamstack, its alternatives, and how Xata complements it with a serverless database.' -image: - src: https://raw.githubusercontent.com/xataio/mdx-blog/main/images/jamstack-mern-lamp-stack-comparison.png - alt: Xata -author: Anjalee Sudasinghe -date: 02-28-2022 -tags: ['engineering'] -published: true -slug: jamstack-mern-lamp-stack-comparison ---- - -Compared to the static web pages of the early days, today’s web has come a long way in terms of content served to users and the technology used. 
Websites of the past with a handful of pages delivered via a [Content Delivery Network (CDN)](https://en.wikipedia.org/wiki/Content_delivery_network) have expanded to using backend servers, databases, dynamic content, and myriad other technologies to provide users a better experience while solving problems that arise with ever-increasing demand. With all these changes, **modern web development has grown to be an incredibly sophisticated field**. - -Perhaps that’s one reason why the emergence of Jamstack, which talks about returning to the simplicity of the early days, has become a baffling concept to some developers. Is it possible to bring back the days of delivering static web pages via a CDN when the web has already gotten so complex? How can it provide the performance and functionality of already established stacks like LAMP and MERN (more about these below)? - -To find the answers to these questions, we have to look a little deeper into what Jamstack is and how it compares with traditional stacks like LAMP and MERN on different fronts. - -## What is the Jamstack? - -The _JAM_ in Jamstack refers to combining **JavaScript, APIs, and Markup** as core technologies to develop web apps. However, the term has a more specific meaning, which can be summarized like this: - -> Jamstack is the technology stack used to deliver fast and secure web apps to users by pre-rendering pages and serving them from a CDN, eliminating the need to manage servers. - -As mentioned at the beginning of the article, Jamstack is an attempt to bring back the simplicity of the static web by using two main principles. - -The first is **prerendering**. It refers to the idea of generating as much web page markup as possible during the site build time so that the content can be directly served via a CDN. 
It eliminates the need for setting up backend servers and other complex configurations associated with them—like managing clusters/nodes, implementing security measures, and caching—to serve dynamic content at user request. As a result, Jamstack websites can load faster and serve a better experience to users. The removal of the backend from the picture also makes Jamstack websites more secure simply because it exposes fewer components to the outside world. - -The second main principle behind Jamstack is **decoupling**. With the introduction of powerful frontend frameworks to the market, the line between the frontend and backend is gradually becoming blurry. Inevitably, this has led to increased complexity in frontend development. In the Jamstack, decoupling looks to reestablish this clear divide between the frontend and backend by outsourcing backend tasks to third-party APIs and services. - -Preferring pre-rendered web pages over dynamically-rendered ones, however, doesn’t make Jamstack a bad candidate for building dynamic websites. This is where the JavaScript and API parts of the JAM backronym become important. Jamstack websites use third-party APIs to dynamically load data through requests made via client-side JavaScript or serverless functions. - -### Jamstack Ecosystem - -The reason why Jamstack has been able to bring back the simplicity of earlier websites, without losing the advanced functionality supported by traditional stacks, is the ecosystem that surrounds it. The components of this ecosystem have grown independent from Jamstack in the last decade. But now, combined, they have formed a stack that lets us build faster, more secure, cheaper websites. - -The Jamstack ecosystem primarily consists of the following components: - -- CDNs: Recently, many CDN providers catering to Jamstack websites have emerged, like [Netlify](https://netlify.com) and [Vercel](https://vercel.com). Deploying a website to these CDNs is as simple as connecting the hosted Git repository. 
The integrated CI/CD workflows of these platforms automatically trigger builds whenever new commits are made. -- Static Site Generators (SSG): SSGs like [Next.js](https://nextjs.org/), [Hugo](https://gohugo.io/), [Gatsby](https://www.gatsbyjs.com/), and [Nuxt](https://nuxtjs.org/), which have surged in popularity over the past couple of years, handle pre-rendering web pages at build time. -- Third-party APIs: Mass availability of third-party APIs like [Auth0](https://auth0.com/) for authentication, [Stripe](https://stripe.com/en-de) for processing payments, and [Algolia](https://www.algolia.com/) for searching allows us to outsource previously backend-bound tasks and abstract their complexity. -- FaaS (serverless functions): Serverless functions help Jamstack websites integrate heavy logic without calling for backend servers. - -### How the Jamstack Makes Web Development Better - -- Built with mostly pre-rendered markup, Jamstack sites load faster on the client-side. -- Having no backend servers to protect makes the sites more secure. -- Because they are stateless, the websites can scale on-demand without needing special server configurations or code changes. -- You often only have to pay for the resources you use and nothing more. -- With no backend to configure, you can host your entire website on Git and impose end-to-end version control. - -To understand these benefits even better, let’s compare Jamstack against two of the most popular traditional stacks on the web, LAMP and MERN. - -## Jamstack vs. LAMP stack - -Having stood the test of time, LAMP has been a go-to technology stack for web developers for more than a decade. Even the most famous app on the web, [WordPress](https://wordpress.com/), runs on LAMP. The four letters in LAMP represent four open-source components that make essential contributions to running dynamic web apps. - -- **L**inux operating system. -- **A**pache web server for processing client requests and HTTP routing. 
-- **M**ySQL database for data storage and querying. -- **P**HP for handling internal logic and serving dynamic content. - -Today, LAMP is supported by a mature and extensive ecosystem that supports building robust dynamic websites for any kind of use case. Its completely open-source architecture gives developers full control of their sites with no fear of [vendor lock-in](https://en.wikipedia.org/wiki/Vendor_lock-in). - -Despite these positives that have popularized LAMP in the development community, when compared against newer technology stacks like Jamstack, LAMP comes with too much complexity and too little flexibility for modern-day websites. - -- The LAMP stack has a steep learning curve when it comes to configuring the backend so that everything works together. Beginners, especially, find this process too complex and cumbersome. -- Dynamically processing and rendering content at request time reduces the app's performance from the user's perspective. -- LAMP doesn’t scale easily with high traffic without manual intervention to add more server power, caching layers, and optimized code. -- With all the backend configurations that cannot be automated, implementing CI/CD workflows for the entire website is not practical. - -## Jamstack vs. MERN stack - -Compared to LAMP and Jamstack, MERN is an open-source web development stack still in its early days. However, due to its relative simplicity and scalability, MERN has become immensely popular among developers in the past couple of years. It provides a full-stack solution to web development that consists of four main components: - -- **M**ongoDB, the NoSQL database -- **E**xpress.js, the backend development framework -- **R**eact, the frontend development framework -- **N**ode.js, the JavaScript runtime environment for the backend - -The introduction of Node to the backend gives developers the comfort of working with JavaScript on both frontend and backend without having to learn another language. 
The use of the Express and React frameworks also abstracts away some of the complexity of backend and frontend configuration compared to the LAMP stack.
-
-However, MERN is still tied to the burden of a backend, namely dynamically processing content at request time at the cost of site performance. It’s also an ill fit for working with highly relational data. Though replacing the non-relational MongoDB in this stack with a relational database is a solution to this problem, Node still lacks ORMs that are advanced enough to support complex queries. This results in developers having to write SQL queries by hand, increasing the complexity of the development process.
-
-## Jamstack and Databases
-
-The original philosophy of Jamstack recommends outsourcing database-bound tasks in a web app. Yet, this is not always attainable or practical. It’s easy to find yourself in situations where you need control over how your data is stored and accessed for functional, legal, or security reasons. But using a traditional database with Jamstack diminishes its initial benefits like increased performance, security, zero-configuration setup, and end-to-end version control.
-
-That’s why we developed Xata to bring the best of both worlds to developers. As a connectionless relational database, Xata is delivered as a lightning-fast and highly available API that Jamstack websites can query from the client-side or through serverless functions. **It’s the first-ever database to support the development workflow with version control features like branching, previewing, testing, and merging changes.** With these features, Xata gives back the ability to fully automate the CI/CD workflow as Jamstack promises.
-
-Xata also removes a lot of the inconveniences tied to traditional databases, like scaling, caching, maintenance, security updates, and downtime handling, making it possible even for those without development experience to build and deploy their own apps.
**With the addition of Xata, Jamstack becomes an even more powerful way to build websites that makes the lives of users and developers a lot easier.** It takes you back to the simplicity of the early days of the web, while retaining all the breakthroughs this field has made in the past two decades to serve users a better experience.
-
-We're currently in private beta, so if you'd like to get started with Xata today, sign up on the [home page](/), or [@-us on Twitter](https://twitter.com/xatabase) and we'll be sure to hook you up. People with immediate use cases and nonprofits tend to get early access relatively easily.
diff --git a/new-starters.mdx b/new-starters.mdx
deleted file mode 100644
index d8a42d7b..00000000
--- a/new-starters.mdx
+++ /dev/null
@@ -1,51 +0,0 @@
----
-title: "Take a closer look at Xata's latest starter templates"
-description: 'Begin your Xata journey with our new starters - SolidStart, Astro, and SvelteKit.'
-image:
-  src: https://raw.githubusercontent.com/xataio/mdx-blog/main/images/new-starters-solidstart-sveltekit-astro.png
-  alt: Winners
-author: Fabien Bernard
-date: 12-12-2022
-tags: ['engineering', 'starter']
-published: true
-slug: new-starters-solidstart-sveltekit-astro
----
-
-Diversity is one of the most important aspects of Xata’s culture. You'll see it in everything we build, which is why we continue to add new JavaScript frameworks to our [examples repo](https://github.com/xataio/examples). Here are a few more to help you along on your frontend journey.
-
-Please keep in mind that whatever framework you pick, it needs to support server-side rendering and/or endpoints. Indeed, any call to Xata needs to be server-side to avoid leaking credentials.
-
-Let's take a quick tour of our new starter options.
-
-## SolidStart
-
-[SolidStart](https://start.solidjs.com/getting-started/what-is-solidstart) is a meta-framework built on top of SolidJS and Solid Router. You can use Xata in any server-side function.
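As an illustration of that pattern, here is a minimal sketch of fetching Xata data inside SolidStart's `createServerData$`. To keep the snippet self-contained and runnable, both `createServerData$` (normally imported from `solid-start/server`) and the client generated by `xata codegen` are replaced with local stand-ins; in a real app you would import the real ones instead.

```typescript
// Sketch: server-side data fetching in the SolidStart style.
// NOTE: both `createServerData$` and `xata` below are minimal local
// stand-ins so this snippet runs on its own; in a real app you would
// import createServerData$ from "solid-start/server" and use the client
// generated by `xata codegen`.
type Post = { id: string; title: string };

// Stand-in for the generated Xata client.
const xata = {
  db: {
    posts: {
      getAll: async (): Promise<Post[]> => [
        { id: "rec_1", title: "Hello from the server" },
      ],
    },
  },
};

// Stand-in for SolidStart's createServerData$: wraps a server-only fetcher
// so credentials never reach the browser.
function createServerData$<T>(fetcher: () => Promise<T>): () => Promise<T> {
  return fetcher;
}

// In a route file, this would live in routeData().
const postsData = createServerData$(async () => xata.db.posts.getAll());

postsData().then((posts) => console.log(posts[0].title)); // "Hello from the server"
```

The important design point is simply that the fetcher runs on the server, which is why the Xata API key stays out of client bundles.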
In the SolidStart world, you can use Xata inside `createServerData$` and `createServerAction$` functions.
-
-To try it out easily, you can use the SolidStart starter: `npx degit xataio/examples/apps/starter-solidstart solidstart-sample-app`
-
-## Astro
-
-[Astro](https://astro.build/) works with a lot of JavaScript frameworks out of the box, such as React, SolidJS, or Vue. Astro also has built-in SSR support, and this is precisely what we need to be able to query Xata securely!
-
-This is the starter for Astro: `npx degit xataio/examples/apps/starter-astro astro-sample-app`
-
-A few highlights: the server is set to SSR mode, and you will need a specific adapter depending on where you want to deploy your application.
-
-For example, if I want to deploy to Netlify, I can run `npx astro add netlify`. Please note that you can skip setting up Astro in SSR mode, but then you will need to rebuild your application whenever you want new data. Indeed, without SSR mode enabled, the data is requested and injected at build time!
-
-## SvelteKit
-
-Last but not least on the new starters list: [SvelteKit](https://kit.svelte.dev/)! Another great option, with SSR support and type safety between your frontend and backend (and database, if you are using Xata!).
-Please note that you need to have the dev server running to generate some types (I figured this out the hard way).
-All your database calls need to be in `+page.server.ts` to stay server-side. You can load any data in `load` and react to any action in `actions`; very straightforward!
-
-As always, this is the command to let you play by yourself:
-`npx degit xataio/examples/apps/starter-sveltekit sveltekit-sample-app`
-
-## Final note
-
-Whatever flavor you choose, my advice (and this is just my point of view) is to check that you have end-to-end type safety. This is important because when you run `xata codegen`, TypeScript will either compile or tell you what needs to be adjusted in your application.
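To make that concrete, here is a small self-contained sketch of the idea. The `PostRecord` interface is a hypothetical stand-in for a record type that `xata codegen` might generate; the point is that after a schema change, regenerating the types turns any stale field access into a compile-time error instead of a runtime bug.

```typescript
// Hypothetical stand-in for a record type generated by `xata codegen`.
// If the `title` column were renamed in the schema, regenerating the types
// would make every stale `post.title` access below fail to compile.
interface PostRecord {
  id: string;
  title: string;
  views: number;
}

function summarize(post: PostRecord): string {
  return `${post.title} (${post.views} views)`;
}

const post: PostRecord = { id: "rec_1", title: "Hello Xata", views: 42 };
console.log(summarize(post)); // "Hello Xata (42 views)"
```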
-
-Therefore, you can be more confident that your application doesn’t consume stale data, which means fewer bugs and happier customers in the end :)
-
-If you encounter any issues or you just want to talk, please join us on [Discord](https://xata.io/discord)
diff --git a/next-era-of-databases.mdx b/next-era-of-databases.mdx
deleted file mode 100644
index e8cd87cf..00000000
--- a/next-era-of-databases.mdx
+++ /dev/null
@@ -1,64 +0,0 @@
----
-title: 'The next era of databases is serverless, adaptive, and collaborative'
-description: "Take a look at the ideal database for tomorrow's builders: exceeding application needs, encouraging teamwork, and enhancing developer efficiency"
-image:
-  src: https://raw.githubusercontent.com/xataio/mdx-blog/main/images/next-era-of-databases.png
-  alt: Next era of databases
-author: Alex Francoeur
-date: 06-14-2023
-tags: ['engineering', 'product']
-published: true
-slug: next-era-of-databases-serverless-adaptive-collaborative
----
-
-## Tell me how you really feel about your database
-
-First, let’s get real. Before we dive into where we believe databases are heading, let’s take a step back and level set on where we are today. If you’re operating an application at any kind of scale and have business-critical data in it, you’re likely terrified to make changes to your production database. And rightfully so. Backfilling is a time-intensive process, many things can go wrong, and there are downstream impacts on your application, non-primary data stores, and external services. As a result, you are likely to make only additive changes, never renaming or modifying columns, until the schema becomes a weird puzzle that mirrors the evolution of your company and application.
-
-
-
-Take yourself out of reality for just a second. Doesn’t this feel wrong? You shouldn’t be afraid to make real changes to your database in production.
You and your team should only be worrying about one thing: building the best and most efficient application for your users.
-
-## Most applications have similar data requirements
-
-The story feels as old as time, or at least as old as January 1, 1970. Whether it’s a monolithic application built through the waterfall methodology or a modern app crafted by a multitude of micro-services that deploy 100 times a day, you eventually run into the same problem: history does, in fact, repeat itself.
-
-
-
-As the needs of your users grow, so do the capabilities required to support their new use cases. Search experiences require different backends for recommendations, natural language, and free-text search. Event-driven applications require a time-series data store for real-time analytics and workflows. Machine learning and artificial intelligence models vary by use case. The combination of these different requirements results in applications composed of various data services, exposed through a wide range of developer experiences and pricing models, and demanding new domain expertise within the team. As your business scales, this complexity grows exponentially and execution tends to slow down. Things get sticky when there’s a lot of glue.
-
-## Tomorrow’s builders will expect more from their database
-
-Databases have been around for a while now. The database as we know it today started in the 1970s, over 50 years ago. Since that time we’ve seen the birth of the relational database, the rise of NoSQL, the ever-persistent lingua franca of data (SQL), and the evolution from on-premise to “the cloud”. Over the last few years, serverless technologies have taken off and the database is no exception. Data stores that have been around for decades either already provide serverless offerings or are in the process of doing so.
We’ve shared our thoughts on serverless databases in [this blog](https://xata.io/blog/what-is-a-serverless-database) and now live in a world where you really do not have to worry about configuration, scaling, upgrades, or maintenance of your infrastructure. At the scale most applications operate at, things “just work”, production readiness is assumed, and low latency is an expectation.
-
-Alongside the rise of serverless are a few other emerging technologies: maturing low-code platforms and the explosive innovation surrounding generative AI. Building applications is getting easier, and every developer is assumed to be a full-stack engineer. As these new offerings level up the individual developer, the expectations placed upon them are also increasing. With the current state of the industry, smaller teams are not only being forced, but also empowered, to be more impactful. If you’re interested in why developer experience is important to your business, this [recent blog from GitHub](https://github.blog/2023-06-08-developer-experience-what-is-it-and-why-should-you-care/) provides a great overview of the problem space and shares how generative AI is changing the landscape. Removing friction and providing larger building blocks will allow junior, senior, and non-traditional developers to tackle more complex problems faster.
-
-With this perfect storm of serverless infrastructure, an influx of tools to boost developer efficiency, and more people empowered to build applications, the next generation of builders will look a lot different than they do today. The [99% developers](https://future.com/software-development-building-for-99-developers/) will be more generalist, meaning they will not care to learn the intricacies of a new data store or service just to add some functionality to their application. Most applications eventually need similar flavors of the same things.
The database for tomorrow’s applications is [multi-model](https://en.wikipedia.org/wiki/Multi-model_database), configurable, and built for developer efficiency. At Xata, we believe a [data platform](https://xata.io/blog/database-platforms-trend) will fuel the next wave of applications - one service that provides a complete data layer and evolves with your application needs.
-
-## A data experience for everyone
-
-Now this won’t be easy. Databases are hard, and hard for good reason. Data is at the heart of every business, user experience, and use case. The database is the one thing you don’t mess with. Ideally, it’s battle-tested, resilient, highly available, and performant, which is why it’s so hard to make changes to it in production applications. That being said, there are common data problems that don’t quite yet have standardized solutions. Data replication between services, zero-downtime schema migrations, and horizontal sharding are just a few examples. These are the types of technical challenges that must be solved in order to make a data platform the go-to solution for this next generation of application developers. Databases are hard, but the data experience for modern developers needs to be easy and quick to add value, not a time sink or a system you’re terrified to touch.
-
-As the non-functional requirements tied to the data layer become solved problems and the harder parts of databases become easier, tools will be selected based on the overall experience. We’ve seen this mental shift occur in many industries, technologies, and consumer applications. Data solutions will be chosen based on what they can integrate with, how much more efficient they’ll make your team, and how accessible they are to the broader organization. When availability, scalability, security, and performance are simply expectations, an Amazon-style customer obsession with developer and user experience becomes equally important.
-
-Taking this a step further, there are data paradigms that will never go away: primarily, the familiarity and flexibility of a tabular data store (the spreadsheet) and the logical way to communicate with your data (SQL). These ways to enter, transform, and collaborate on data are understood by most people today and are commonly taught in school as tools to make you successful, regardless of future profession. As application builders become more generalist, they will gravitate towards solutions that not only meet their application needs, but are also easily understood by their colleagues. Not only because this is a good user experience, but because it makes the team more effective. There is less of a learning curve when accessing your data feels familiar.
-
-## Building the next phase of databases
-
-Application builders will need a database that meets all of their application needs and is built for developer efficiency and collaboration with everyone, not just the engineering team. At Xata, we believe this is the way the world is heading, and we’re building a solution to meet the needs of tomorrow’s builders. Since our [launch last November](https://xata.io/blog/xata-public-release), we’re seeing that our vision resonates with our community. Folks are excited that they don’t have to worry about bolting services onto their application and that they don’t have to sacrifice a great development experience for their data layer. We meet them where they are today or where they will be soon.
-
-Looking ahead, you’ll see further investment in our data platform, making hard data problems at scale feel magical and exposing it all through a non-negotiable, premium developer experience. To see what’s on deck, you can view our [roadmap](https://xata.io/roadmap). If you’d like to chat more about where we’re going, come find us on [Discord](https://xata.io/discord) or [book some time to chat](https://calendly.com/d/g2y-zwz-m9w/a-quick-chat-with-xata).
- -If you find the types of problems we’re trying to solve interesting and our vision clicks with you, [we’re also hiring](https://xata.io/careers).