# Clip web data to spreadsheet

If you've ever copied and pasted data from a website into a spreadsheet, this tutorial is for you. We're going to build a scraper that pulls info from any webpage and drops it straight into Google Sheets or Excel, automagically.

No code. Just a few clicks with a prompt and you're done.

***

### What You'll Need

* A free PixieBrix account
* The Chrome extension installed
* A Google Sheet or Excel file ready to go

***

### Step 1: Set Up Your Spreadsheet

Before you do anything in PixieBrix, open Google Sheets (or Excel) and create a new spreadsheet. Add a header row at the top with the fields you want to collect. Think of these as what you're telling PixieBrix to look for on the page.

<figure><img src="https://2274778196-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fq123bF1HPQPV35s5vHa1%2Fuploads%2FH6oES5mXaFYOUwMUrVyT%2FCleanShot%202026-03-31%20at%2014.11.52%402x.png?alt=media&#x26;token=83df10bd-f08f-4002-a9da-bb5f8fbcd3aa" alt=""><figcaption></figcaption></figure>

For example, if you're scraping LinkedIn profiles, your headers might look like:

\| Name | Title | Company | URL |

If you're scraping Eventbrite for events:

\| Event Name | Date | Location | URL |

The header names don't have to be fancy. Just make them clear and be ready to reference them when PixieBrix asks what you want to scrape.

***

### Step 2: Go to the Page You Want to Scrape

Go to the website you want to pull data from. PixieBrix works for scraping on most pages, including:

* [LinkedIn](https://linkedin.com/)
* [Airbnb](https://www.airbnb.com/)
* [G2](https://www.g2.com/)
* [Eventbrite](https://www.eventbrite.com/)

Once you're on the page, open the **PixieBrix Page Editor**. You can do this by hovering over the floating PixieBrix logo on the page and clicking the brick icon.

<figure><img src="https://2274778196-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fq123bF1HPQPV35s5vHa1%2Fuploads%2FUiFQlrVrLaELuPydnt78%2FCleanShot%202026-03-31%20at%2014.34.04%402x.png?alt=media&#x26;token=0107502f-802e-4ec9-8e60-7c22c62f81de" alt=""><figcaption></figcaption></figure>

***

### Step 3: Click "Clip Web Data to Spreadsheet"

Inside the Page Editor, you'll see a list of ready-to-use prompts, but for a custom web scraper, use this one. Copy the following prompt:

```
A context menu that extracts content from a page and sends to spreadsheets.

Ask me the following questions: 
- what page do i want to scrape
- what data do i want to scrape from that page
- if i want to send to google sheets or excel
```

... and click **Generate**.

<figure><img src="https://2274778196-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fq123bF1HPQPV35s5vHa1%2Fuploads%2FkaK1seX0DgyJ80ABRrIR%2FCleanShot%202026-03-31%20at%2014.36.02%402x.png?alt=media&#x26;token=c4335637-ddec-43df-b4c8-e9d03a9ca9b2" alt=""><figcaption></figcaption></figure>

This kicks off a quick setup conversation where PixieBrix asks you a few questions:

* **What site are you scraping?** (e.g., LinkedIn, Airbnb, Trustpilot, Eventbrite)
* **What info do you want to grab?** (e.g., name, company, job title, URL, rating, price)
* **Where do you want to send it?** Google Sheets or Excel

You'll be prompted to connect to your spreadsheet tool of choice. Just follow the steps, then select the spreadsheet you just created.

<figure><img src="https://2274778196-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fq123bF1HPQPV35s5vHa1%2Fuploads%2FRjVTjrR2BkMmPMTDecjF%2FCleanShot%202026-03-31%20at%2014.38.55%402x.png?alt=media&#x26;token=2a15e484-8dbc-4992-b36b-6db7dfbf4296" alt=""><figcaption></figcaption></figure>

***

### Step 4: Open the Mod and Test It

Once it's configured, click **Open Mod**. This opens your custom scraper so you can see what was built.

<figure><img src="https://2274778196-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fq123bF1HPQPV35s5vHa1%2Fuploads%2F63c25VwK2cy3h5ED8Yx1%2FCleanShot%202026-03-31%20at%2014.40.57%402x.png?alt=media&#x26;token=afe76aee-f491-4274-b3fe-07baef65aa31" alt=""><figcaption></figcaption></figure>

Click **Try on \<your site>** and then right-click on the LinkedIn page to see your new scraping menu.

<figure><img src="https://2274778196-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fq123bF1HPQPV35s5vHa1%2Fuploads%2FUKi0qLL3FzQQVFhnooKk%2FCleanShot%202026-03-31%20at%2014.44.55%402x.png?alt=media&#x26;token=c0085778-7cf6-436d-9949-650835ef7412" alt=""><figcaption></figcaption></figure>

PixieBrix will look at the current page, extract the data you asked for, and send it to your spreadsheet.

{% hint style="info" %}
Don't see the **Try on \<your site>** modal? Just click the "**Test**" button at the top of the Page Editor and the mod will run on the connected page; no need to use the context menu.
{% endhint %}

***

### Step 5: Check Your Spreadsheet

Flip over to your spreadsheet. You should see a new row with the data from that page filled in. If everything looks right, you're good to go.

***

### Step 6: Tweak It If You Need To

Not quite right? No problem. Head back to the Page Editor and use the **chat copilot on the left side** to adjust things. You can describe what you want to change in plain language, like "also grab the email address" or "skip the name column."

<figure><img src="https://2274778196-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fq123bF1HPQPV35s5vHa1%2Fuploads%2Fe0OkPn3gXqKjhXPG5Agg%2FCleanShot%202026-03-31%20at%2014.47.19%402x.png?alt=media&#x26;token=58d3986d-cc8e-432d-8151-28a6edf1ac84" alt=""><figcaption></figcaption></figure>

You don't have to dig into this, but if you're curious or want to customize further, here's how the scraper is built:

The **middle panel** shows the steps in your workflow, called bricks. You'll see two main ones:

1. **Extract content from the page** (this is the AI doing the scraping)
2. **Send to Google Sheets** (or Excel, whichever you chose)

Click on any brick and the **right panel** shows you the settings for that step. You can adjust what fields you're pulling, change which spreadsheet it sends to, or add more actions.

For example, you could add a third brick to post the scraped data to Slack, or send yourself a notification. The workflow is yours to build on.

***

### Step 7: Save the Mod to Use It Again

Before you close the Page Editor, click the **Save** button at the top, create a username if prompted, and save the mod. Once saved, you can run it on any page without opening the Page Editor.

<figure><img src="https://2274778196-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fq123bF1HPQPV35s5vHa1%2Fuploads%2FJB4PmSyg66Tphlk3AuwN%2FCleanShot%202026-03-31%20at%2014.51.22%402x.png?alt=media&#x26;token=7b7ebfd0-fcb7-4ea6-ae2e-6ceeb3f65fff" alt=""><figcaption></figcaption></figure>

***

### That's It

You just built a web scraper. No code, no complicated setup, just a prompt and a few clicks. If you want to scrape more pages, just go to the next one and hit the button again. PixieBrix handles the rest.
