feat(website): new documentation (#1048)
shortcuts authored Oct 7, 2021
1 parent 6b4944f commit 9a89cde
Showing 140 changed files with 8,430 additions and 21,812 deletions.
81 changes: 67 additions & 14 deletions README.md
@@ -4,8 +4,8 @@

The easiest way to add search to your documentation – for free.

[![Netlify Status](https://api.netlify.com/api/v1/badges/30eacc09-d4b2-4a53-879b-04d40aaea454/deploy-status)](https://app.netlify.com/sites/docsearch/deploys) [![npm version](https://img.shields.io/npm/v/@docsearch/js/alpha.svg?style=flat-square)](https://www.npmjs.com/package/@docsearch/js/v/alpha) [![License](https://img.shields.io/badge/license-MIT-green.svg?style=flat-square)](./LICENSE)

<p align="center">
<strong>
<a href="https://docsearch.algolia.com">Documentation</a> •
@@ -32,42 +32,95 @@ DocSearch crawls your documentation, pushes the content to an Algolia index and

> Don't have your Algolia credentials yet? [Apply to DocSearch](https://docsearch.algolia.com/apply)!
### JavaScript

#### Installation

```sh
yarn add @docsearch/js@alpha
# or
npm install @docsearch/js@alpha
```

If you don’t want to use a package manager, you can use a standalone endpoint:

```html
<script src="https://cdn.jsdelivr.net/npm/@docsearch/js@alpha"></script>
```

#### Get started

To get started, you need a [`container`](https://docsearch.algolia.com/docs/api#container) for your DocSearch component to go in. If you don’t have one already, you can insert one into your markup:

```html
<div id="docsearch"></div>
```

Then, insert DocSearch into it by calling the [`docsearch`](https://docsearch.algolia.com/docs/api) function and providing the container. It can be a [CSS selector](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Selectors) or an [Element](https://developer.mozilla.org/en-US/docs/Web/API/HTMLElement).

Make sure to provide a [`container`](https://docsearch.algolia.com/docs/api#container) (for example, a `div`), not an `input`. DocSearch generates a fully accessible search box for you.

```js app.js
import docsearch from '@docsearch/js';

import '@docsearch/css';

docsearch({
  container: '#docsearch',
  appId: 'YOUR_APP_ID',
  indexName: 'YOUR_INDEX_NAME',
  apiKey: 'YOUR_SEARCH_API_KEY',
});
```

### React

#### Installation

```bash
yarn add @docsearch/react@alpha
# or
npm install @docsearch/react@alpha
```

If you don’t want to use a package manager, you can use a standalone endpoint:

```html
<script src="https://cdn.jsdelivr.net/npm/@docsearch/react@alpha"></script>
```

#### Get started

DocSearch generates a fully accessible search box for you.

```jsx App.js
import { DocSearch } from '@docsearch/react';

import '@docsearch/css';

function App() {
  return (
    <DocSearch
      appId="YOUR_APP_ID"
      indexName="YOUR_INDEX_NAME"
      apiKey="YOUR_SEARCH_API_KEY"
    />
  );
}

export default App;
```

## Styling

[Read documentation →](https://docsearch.algolia.com/docs/styling)

## Related projects

DocSearch is made of the following repositories:

- **[algolia/docsearch](https://github.com/algolia/docsearch)**: DocSearch source code.
- **[algolia/docsearch/packages/website](https://github.com/algolia/docsearch/tree/next/packages/website)**: DocSearch website and documentation.
- **[algolia/docsearch-configs](https://github.com/algolia/docsearch-configs)**: Configurations for the websites DocSearch powers.
- **[algolia/docsearch-scraper](https://github.com/algolia/docsearch-scraper)**: DocSearch crawler that extracts data from your documentation.

3 changes: 1 addition & 2 deletions package.json
@@ -16,7 +16,7 @@
"build:types": "lerna run build:types --scope @docsearch/*",
"build:umd": "lerna run build:umd --scope @docsearch/*",
"build": "lerna run build --scope @docsearch/*",
"cy:clean": "rimraf cypress/screenshots",
"cy:clean": "rm -rf cypress/screenshots",
"cy:info": "cypress info",
"cy:record": "percy exec -- cypress run --spec 'cypress/integration/**/*' --headless --record",
"cy:run:chrome": "yarn run cy:run --browser chrome",
@@ -84,7 +84,6 @@
"prettier": "2.1.1",
"react": "17.0.2",
"react-dom": "17.0.2",
"rimraf": "3.0.2",
"rollup": "1.31.0",
"rollup-plugin-babel": "4.4.0",
"rollup-plugin-commonjs": "10.1.0",
18 changes: 14 additions & 4 deletions packages/docsearch-css/README.md
@@ -4,14 +4,24 @@ Style package for [DocSearch](http://docsearch.algolia.com/), the best search ex

## Installation

```bash
yarn add @docsearch/css@alpha
# or
npm install @docsearch/css@alpha
```

If you don’t want to use a package manager, you can use a standalone endpoint:

```html
<script src="https://cdn.jsdelivr.net/npm/@docsearch/css@alpha"></script>
```

## Get started

```js
import '@docsearch/css';
```

## Documentation

[Read documentation →](https://docsearch.algolia.com/docs/styling)
2 changes: 1 addition & 1 deletion packages/docsearch-css/package.json
@@ -16,7 +16,7 @@
"unpkg": "dist/style.css",
"jsdelivr": "dist/style.css",
"scripts": {
"build:clean": "rimraf ./dist",
"build:clean": "rm -rf ./dist",
"build:css": "node build-css.js",
"build": "yarn build:clean && mkdir dist && yarn build:css",
"prepare": "yarn build",
33 changes: 32 additions & 1 deletion packages/docsearch-js/README.md
@@ -10,6 +10,37 @@ yarn add @docsearch/js@alpha
npm install @docsearch/js@alpha
```

## Get started

If you don’t want to use a package manager, you can use a standalone endpoint:

```html
<script src="https://cdn.jsdelivr.net/npm/@docsearch/js@alpha"></script>
```

To get started, you need a [`container`](https://docsearch.algolia.com/docs/api#container) for your DocSearch component to go in. If you don’t have one already, you can insert one into your markup:

```html
<div id="docsearch"></div>
```

Then, insert DocSearch into it by calling the [`docsearch`](https://docsearch.algolia.com/docs/api) function and providing the container. It can be a [CSS selector](https://developer.mozilla.org/en-us/docs/web/css/css_selectors) or an [Element](https://developer.mozilla.org/en-us/docs/web/api/htmlelement).

Make sure to provide a [`container`](https://docsearch.algolia.com/docs/api#container) (for example, a `div`), not an `input`. DocSearch generates a fully accessible search box for you.

```js app.js
import docsearch from '@docsearch/js';

import '@docsearch/css';

docsearch({
  container: '#docsearch',
  appId: 'YOUR_APP_ID',
  indexName: 'YOUR_INDEX_NAME',
  apiKey: 'YOUR_SEARCH_API_KEY',
});
```

## Documentation

[Read documentation →](https://docsearch.algolia.com/docs/DocSearch-v3)
2 changes: 1 addition & 1 deletion packages/docsearch-js/package.json
@@ -21,7 +21,7 @@
"unpkg": "dist/umd/index.js",
"jsdelivr": "dist/umd/index.js",
"scripts": {
"build:clean": "rimraf ./dist",
"build:clean": "rm -rf ./dist",
"build:esm": "cross-env BUILD=esm rollup --config",
"build:types": "tsc -p ./tsconfig.declaration.json --outFile ./dist/esm/index.d.ts",
"build:umd": "cross-env BUILD=umd rollup --config",
34 changes: 30 additions & 4 deletions packages/docsearch-react/README.md
@@ -2,16 +2,42 @@

React package for [DocSearch](http://docsearch.algolia.com/), the best search experience for docs.

[![Percy](https://percy.io/static/images/percy-badge.svg)](https://percy.io/DX/DocSearch)

## Installation

```bash
yarn add @docsearch/react@alpha
# or
npm install @docsearch/react@alpha
```

If you don’t want to use a package manager, you can use a standalone endpoint:

```html
<script src="https://cdn.jsdelivr.net/npm/@docsearch/react@alpha"></script>
```

## Get started

DocSearch generates a fully accessible search box for you.

```jsx App.js
import { DocSearch } from '@docsearch/react';

import '@docsearch/css';

function App() {
  return (
    <DocSearch
      appId="YOUR_APP_ID"
      indexName="YOUR_INDEX_NAME"
      apiKey="YOUR_SEARCH_API_KEY"
    />
  );
}

export default App;
```

## Documentation

[Read documentation →](https://docsearch.algolia.com/docs/DocSearch-v3)
22 changes: 6 additions & 16 deletions packages/website/.gitignore
@@ -1,14 +1,14 @@
# dependencies
node_modules/
# Dependencies
/node_modules

# production
build/
# Production
/build

# generated files
# Generated files
.docusaurus
.cache-loader

# misc
# Misc
.DS_Store
.env.local
.env.development.local
@@ -18,13 +18,3 @@ build/
npm-debug.log*
yarn-debug.log*
yarn-error.log*

# legacy
coverage/
dist/
.idea/
.vscode/
dist-es5-module/
tmp
docs/.textlintcache
.netlify
209 changes: 0 additions & 209 deletions packages/website/.textlint.terms.json

This file was deleted.

37 changes: 0 additions & 37 deletions packages/website/.textlintrc.js

This file was deleted.

36 changes: 0 additions & 36 deletions packages/website/CONTRIBUTING.md

This file was deleted.

21 changes: 0 additions & 21 deletions packages/website/LICENSE

This file was deleted.

35 changes: 5 additions & 30 deletions packages/website/README.md
100755 → 100644
@@ -4,47 +4,22 @@ The easiest way to add search to your documentation. For free.

[![Netlify Status][3]][4]

# Installation

1. `yarn install` in the root of this repository (two levels above this directory).
1. In this directory, do `yarn start`.
1. A browser window will open up, pointing to the docs.

# Deployment

Netlify handles the deployment of this website. If you are part of the DocSearch core team, [access the Netlify Dashboard][11].

[1]: ./static/img/docsearch-logo.svg
[2]: https://docsearch.algolia.com/
[3]: https://api.netlify.com/api/v1/badges/30eacc09-d4b2-4a53-879b-04d40aaea454/deploy-status
[4]: https://app.netlify.com/sites/docsearch/deploys
3 changes: 3 additions & 0 deletions packages/website/babel.config.js
@@ -0,0 +1,3 @@
module.exports = {
  presets: [require.resolve('@docusaurus/core/lib/babel/preset')],
};
258 changes: 258 additions & 0 deletions packages/website/docs/DocSearch-v3.mdx
@@ -0,0 +1,258 @@
---
title: DocSearch v3
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

:::info

The following content is for **[DocSearch v3][2]**. If you are using **[DocSearch v2][3]**, see the **[legacy][4]** documentation.

:::

## Introduction

DocSearch v3 is built on top of the latest version of [Algolia Autocomplete][1], which provides better accessibility, increased responsiveness, themability, a better built-in design, and customizability under low-network conditions.

This version has been rewritten in React, and now exposes React components. The vanilla JavaScript version is based on the React version with an alias to Preact.
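
For context, the trick relies on the standard Preact compatibility layer: a bundler alias redirects React imports to `preact/compat`, so the same component code ships without the React runtime. A minimal sketch of the idea (an illustrative webpack config, not the actual DocSearch build setup):

```js
// webpack.config.js — illustrative only; DocSearch's real build may differ.
module.exports = {
  resolve: {
    alias: {
      // Every `import ... from 'react'` resolves to Preact's compat layer.
      react: 'preact/compat',
      'react-dom': 'preact/compat',
    },
  },
};
```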

### Stable version

With the recent release of the stable version of [Algolia Autocomplete][1], and the huge adoption of DocSearch v3, we will start working on a stable release!

Thanks to all the Alpha testers, and to [all the integrations][5] that already support it!

## Installation

DocSearch packages are available on the [npm][10] registry.

<Tabs
  groupId="language"
  defaultValue="js"
  values={[
    { label: 'JavaScript', value: 'js' },
    { label: 'React', value: 'react' },
  ]}
>
<TabItem value="js">


```bash
yarn add @docsearch/js@alpha
# or
npm install @docsearch/js@alpha
```

If you don’t want to use a package manager, you can use a standalone endpoint:

```html
<script src="https://cdn.jsdelivr.net/npm/@docsearch/js@alpha"></script>
```

</TabItem>
<TabItem value="react">


```bash
yarn add @docsearch/react@alpha
# or
npm install @docsearch/react@alpha
```

If you don’t want to use a package manager, you can use a standalone endpoint:

```html
<script src="https://cdn.jsdelivr.net/npm/@docsearch/react@alpha"></script>
```

</TabItem>


</Tabs>


## Get started

<Tabs
  groupId="language"
  defaultValue="js"
  values={[
    { label: 'JavaScript', value: 'js' },
    { label: 'React', value: 'react' },
  ]}
>
<TabItem value="js">


To get started, you need a [`container`][11] for your DocSearch component to go in. If you don’t have one already, you can insert one into your markup:

```html
<div id="docsearch"></div>
```

Then, insert DocSearch into it by calling the [`docsearch`][12] function and providing the container. It can be a [CSS selector][6] or an [Element][7].

Make sure to provide a [`container`][11] (for example, a `div`), not an `input`. DocSearch generates a fully accessible search box for you.

```js app.js
import docsearch from '@docsearch/js';

import '@docsearch/css';

docsearch({
  container: '#docsearch',
  appId: 'YOUR_APP_ID',
  indexName: 'YOUR_INDEX_NAME',
  apiKey: 'YOUR_SEARCH_API_KEY',
});
```

</TabItem>


<TabItem value="react">


DocSearch generates a fully accessible search box for you.

```jsx App.js
import { DocSearch } from '@docsearch/react';

import '@docsearch/css';

function App() {
  return (
    <DocSearch
      appId="YOUR_APP_ID"
      indexName="YOUR_INDEX_NAME"
      apiKey="YOUR_SEARCH_API_KEY"
    />
  );
}

export default App;
```

</TabItem>


</Tabs>


### Testing

If you're eager to test DocSearch v3 and can't wait to receive your credentials, you can use ours!

<Tabs
  groupId="language"
  defaultValue="js"
  values={[
    { label: 'JavaScript', value: 'js' },
    { label: 'React', value: 'react' },
  ]}
>
<TabItem value="js">


```js
docsearch({
  appId: 'BH4D9OD16A',
  apiKey: '25626fae796133dc1e734c6bcaaeac3c',
  indexName: 'docsearch',
});
```

</TabItem>


<TabItem value="react">


```jsx
<DocSearch
  appId="BH4D9OD16A"
  apiKey="25626fae796133dc1e734c6bcaaeac3c"
  indexName="docsearch"
/>
```

</TabItem>


</Tabs>


### Filtering your search

If your website supports [DocSearch meta tags][13] or if you've added [custom variables to your config][14], you'll be able to use the [`facetFilters`][16] option to scope your search results to a [`facet`][15].

This is useful to limit the scope of the search to one language or one version.

<Tabs
  groupId="language"
  defaultValue="js"
  values={[
    { label: 'JavaScript', value: 'js' },
    { label: 'React', value: 'react' },
  ]}
>
<TabItem value="js">


```js
docsearch({
  // ...
  searchParameters: {
    facetFilters: ['language:en', 'version:1.0.0'],
  },
});
```

</TabItem>


<TabItem value="react">


```jsx
<DocSearch
  // ...
  searchParameters={{
    facetFilters: ['language:en', 'version:1.0.0'],
  }}
/>
```

</TabItem>


</Tabs>


## Performance optimization

### Preconnect

By adding this snippet to the `head` of your website, you hint the browser that your website will load data from Algolia, allowing it to preconnect to the DocSearch cluster. This makes the first query faster, especially on mobile.

```html
<link rel="preconnect" href="https://YOUR_APP_ID-dsn.algolia.net" crossorigin />
```

[1]: https://www.algolia.com/doc/ui-libraries/autocomplete/introduction/what-is-autocomplete/
[2]: https://github.com/algolia/docsearch/
[3]: https://github.com/algolia/docsearch/tree/master
[4]: legacy/dropdown
[5]: integrations
[6]: https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Selectors
[7]: https://developer.mozilla.org/en-US/docs/Web/API/HTMLElement
[8]: https://codesandbox.io/s/docsearch-js-v3-playground-z9oxj
[9]: https://codesandbox.io/s/docsearch-react-v3-playground-619yg
[10]: https://www.npmjs.com/
[11]: api#container
[12]: api
[13]: required-configuration#introduce-global-information-as-meta-tags
[14]: record-extractor#with-custom-variables
[16]: https://www.algolia.com/doc/guides/managing-results/refine-results/filtering/#facetfilters
[15]: https://www.algolia.com/doc/guides/managing-results/refine-results/faceting/
225 changes: 225 additions & 0 deletions packages/website/docs/api.mdx
@@ -0,0 +1,225 @@
---
title: API Reference
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

:::info

The following content is for **[DocSearch v3][2]**. If you are using **[DocSearch v2][3]**, see the **[legacy][4]** documentation.

:::

<Tabs
  groupId="language"
  defaultValue="js"
  values={[
    { label: 'JavaScript', value: 'js' },
    { label: 'React', value: 'react' },
  ]}
>
<TabItem value="js">


## `container`

> `type: string | HTMLElement` | **required**
The container for the DocSearch search box. You can either pass a [CSS selector][5] or an [Element][6]. If there are several containers matching the selector, DocSearch picks up the first one.
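
For instance, both forms below target the same element (a minimal sketch; the credentials are placeholders):

```js
docsearch({
  // A CSS selector…
  container: '#docsearch',
  // …or an element reference works equally well:
  // container: document.querySelector('#docsearch'),
  appId: 'YOUR_APP_ID',
  indexName: 'YOUR_INDEX_NAME',
  apiKey: 'YOUR_SEARCH_API_KEY',
});
```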

## `environment`

> `type: typeof window` | `default: window` | **optional**
The environment in which your application is running.

This is useful if you’re using DocSearch in a different context than `window`.
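
As a sketch, assuming a hypothetical page that embeds its docs in a same-origin iframe and wants DocSearch to operate on that frame's globals:

```js
// Hypothetical setup: `#docs-frame` is a same-origin iframe hosting the docs.
const docsFrame = document.querySelector('#docs-frame');

docsearch({
  container: '#docsearch',
  appId: 'YOUR_APP_ID',
  indexName: 'YOUR_INDEX_NAME',
  apiKey: 'YOUR_SEARCH_API_KEY',
  // Defaults to the global `window` when omitted.
  environment: docsFrame.contentWindow,
});
```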

## `appId`

> `type: string` | **required**
Your Algolia application ID.

## `apiKey`

> `type: string` | **required**
Your Algolia Search API key.

## `indexName`

> `type: string` | **required**
Your Algolia index name.

## `placeholder`

> `type: string` | `default: "Search docs"` | **optional**
The placeholder of the input of the DocSearch pop-up modal.

## `searchParameters`

> `type: SearchParameters` | **optional**
The [Algolia Search Parameters][7].

## `transformItems`

> `type: function` | `default: items => items` | **optional**
Receives the items from the search response, and is called before displaying them. Should return a new array with the same shape as the original array. Useful for mapping over the items to transform, remove, or reorder them.

```js
docsearch({
  // ...
  transformItems(items) {
    return items.map((item) => ({
      ...item,
      content: item.content.toUpperCase(),
    }));
  },
});
```

## `hitComponent`

> `type: ({ hit, children }) => JSX.Element` | `default: Hit` | **optional**
The component to display each item.

See the [default implementation][8].

## `transformSearchClient`

> `type: function` | `default: searchClient => searchClient` | **optional**
Useful for transforming the [Algolia Search Client][10], for example to [debounce search queries][9].
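
As a sketch, a debounced client can be built by wrapping the client's `search` method (the `debouncePromise` helper below is illustrative, not part of DocSearch):

```js
// Illustrative helper: only the last call within `time` ms actually resolves.
function debouncePromise(fn, time) {
  let timer;
  return (...args) =>
    new Promise((resolve) => {
      clearTimeout(timer);
      timer = setTimeout(() => resolve(fn(...args)), time);
    });
}

docsearch({
  // ...
  transformSearchClient(searchClient) {
    return {
      ...searchClient,
      // Avoid firing one Algolia request per keystroke.
      search: debouncePromise(searchClient.search.bind(searchClient), 300),
    };
  },
});
```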

## `disableUserPersonalization`

> `type: boolean` | `default: false` | **optional**
Disable saving recent searches and favorites to the local storage.

## `initialQuery`

> `type: string` | **optional**
The search input initial query.

## `navigator`

> `type: Navigator` | **optional**
An implementation of [Algolia Autocomplete][1]’s Navigator API to redirect the user when opening a link.

Learn more on the [Navigator API][11] documentation.
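
For instance, a single-page app could route internally instead of triggering a full page load (a minimal sketch; `router` stands in for whatever client-side router you use):

```js
docsearch({
  // ...
  navigator: {
    // Called when the user presses Enter on a result.
    navigate({ itemUrl }) {
      router.push(itemUrl); // hypothetical router call
    },
  },
});
```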

</TabItem>


<TabItem value="react">


## `appId`

> `type: string` | **required**
Your Algolia application ID.

## `apiKey`

> `type: string` | **required**
Your Algolia Search API key.

## `indexName`

> `type: string` | **required**
Your Algolia index name.

## `placeholder`

> `type: string` | `default: "Search docs"` | **optional**
The placeholder of the input of the DocSearch pop-up modal.

## `searchParameters`

> `type: SearchParameters` | **optional**
The [Algolia Search Parameters][7].

## `transformItems`

> `type: function` | `default: items => items` | **optional**
Receives the items from the search response, and is called before displaying them. Should return a new array with the same shape as the original array. Useful for mapping over the items to transform, remove, or reorder them.

```jsx
<DocSearch
  // ...
  transformItems={(items) => {
    return items.map((item) => ({
      ...item,
      content: item.content.toUpperCase(),
    }));
  }}
/>
```

## `hitComponent`

> `type: ({ hit, children }) => JSX.Element` | `default: Hit` | **optional**
The component to display each item.

See the [default implementation][8].
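
For instance, a sketch that renders every hit as a plain link (`hit.url` is part of the records DocSearch generates):

```jsx
<DocSearch
  // ...
  hitComponent={({ hit, children }) => (
    <a href={hit.url}>{children}</a>
  )}
/>
```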

## `transformSearchClient`

> `type: function` | `default: searchClient => searchClient` | **optional**
Useful for transforming the [Algolia Search Client][10], for example to [debounce search queries][9].

## `disableUserPersonalization`

> `type: boolean` | `default: false` | **optional**
Disable saving recent searches and favorites to the local storage.

## `initialQuery`

> `type: string` | **optional**
The search input initial query.

## `navigator`

> `type: Navigator` | **optional**
An implementation of [Algolia Autocomplete][1]’s Navigator API to redirect the user when opening a link.

Learn more on the [Navigator API][11] documentation.

</TabItem>


</Tabs>


[1]: https://www.algolia.com/doc/ui-libraries/autocomplete/introduction/what-is-autocomplete/
[2]: https://github.com/algolia/docsearch/
[3]: https://github.com/algolia/docsearch/tree/master
[4]: legacy/dropdown
[5]: https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Selectors
[6]: https://developer.mozilla.org/en-US/docs/Web/API/HTMLElement
[7]: https://www.algolia.com/doc/api-reference/search-api-parameters/
[8]: https://github.com/algolia/docsearch/blob/next/packages/docsearch-react/src/Hit.tsx
[9]: https://codesandbox.io/s/docsearch-v3-debounced-search-gnx87
[10]: https://www.algolia.com/doc/api-client/getting-started/what-is-the-api-client/javascript/?client=javascript
[11]: https://www.algolia.com/doc/ui-libraries/autocomplete/core-concepts/keyboard-navigation/
12 changes: 0 additions & 12 deletions packages/website/docs/apply.mdx

This file was deleted.

680 changes: 0 additions & 680 deletions packages/website/docs/config-file.md

This file was deleted.

173 changes: 38 additions & 135 deletions packages/website/docs/faq.md
@@ -2,187 +2,90 @@
title: FAQ
---

If you're not finding the answer to your question on this website, this page will help you. If you're still unsure, don't hesitate to send [your question to us][1] directly.

You can also read our [Crawler FAQ][17] to understand how it behaves:

- [One of my pages was not crawled][15]
- [Why are my pages skipped?][16]

## How often will you crawl my website?

Crawls are scheduled at a random time once a week. You can also trigger new ones directly from [the Crawler interface][2].

## What do I need to install on my side?

Nothing.

DocSearch leverages the [Algolia Crawler][3], which offers an [interface][2] to create, monitor, edit, and start your Crawlers.

## How much does it cost?

Nothing.

We know that paying for search infrastructure is a cost not all open source projects can afford. That's why we decided to keep DocSearch free for everyone. All we ask in exchange is that you keep the "Search by [Algolia][4]" logo displayed next to the search results.

If this is not possible for you, you're free to [open your own Algolia account][5] and run [DocSearch on your own][6] without this limitation. In that case, though, depending on the size of your documentation, you might need a paid account (free accounts can hold as much as 10k records).

## What data are you collecting?

We save the data we extract from your website markup, which we put in a custom JSON format instead of HTML. This is the data we put in the Algolia DocSearch index. The selectors in your config define what data to scrape.
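
As an illustration, a record for a subsection typically looks roughly like this (field values are made up; the exact shape depends on your config):

```json
{
  "type": "lvl2",
  "url": "https://example.org/docs/getting-started#installation",
  "anchor": "installation",
  "hierarchy": {
    "lvl0": "Documentation",
    "lvl1": "Getting started",
    "lvl2": "Installation"
  },
  "content": null
}
```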

As the website owner, we also give you access to your own Algolia application. This will let you see how your website is indexed in Algolia, get detailed analytics about the anonymized searches on your website, manage your team, and more!

## Where is my data hosted?

We host the DocSearch data on Algolia's servers, with replications around the globe. You can find more details about the actual [server specs here][7], and more complete information in our [privacy policy][8].

## Can I use DocSearch on non-doc pages?

The free DocSearch we provide will **only** crawl the documentation pages or technical blogs of open-source projects. To use it on other parts of your website, you'll need to create your own Algolia account and either:

- Run the [DocSearch crawler][6] on your own
- Use one of our other [framework integrations or API clients][9]

## Can you index code samples?

Yes, but we do not recommend it.

Code samples are a great way for humans to understand how people use a specific method. It often requires boilerplate code though, repeated across examples, which adds noise to the results.

## Why do I have duplicate content in my results?

This can happen when you have more than one URL pointing to the same content, for example with `./docs`, `./docs/` and `./docs/index.html`.

We recommend configuring canonical URLs on your website. You can read more in the ["Consolidate duplicate URLs" guide by Google](https://developers.google.com/search/docs/advanced/crawling/consolidate-duplicate-urls) and use our [`ignoreCanonicalTo`](https://www.algolia.com/doc/tools/crawler/apis/configuration/ignore-canonical-to/) option directly in your crawler config.

Ultimately, it is possible to set the [`exclusionPatterns`][10] option to all the patterns you want to exclude.
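
As a sketch, both options live in your crawler config (the URL patterns below are examples, not defaults):

```js
new Crawler({
  // ...
  // Index pages even when they declare a canonical URL pointing elsewhere.
  ignoreCanonicalTo: true,
  // Skip URL patterns that only duplicate other pages.
  exclusionPatterns: ['**/index.html', 'https://YOUR_WEBSITE.io/drafts/**'],
});
```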

## A documentation website I like does not use DocSearch. What can I do?

We'd love to help!




If one of your favorite tool documentation websites is missing DocSearch, we encourage you to file an issue in their repository explaining how DocSearch could help. Feel free to [send us an email][1] as well, and we'll provide all the help we can.

## How did we build this website?


We built this website with [Docusaurus v2][11]. We were helped by a great man who inspired us a lot, Endi. We want [to pay a tribute to this exceptional human being, who will always be part of the DocSearch project][12]. Rest in peace, mate!

## Can I share the `apiKey` in my repo?

The `apiKey` the DocSearch team provides is [a search-only key][13] and can be safely shared publicly. You can track it in your version control system (e.g. git). If you are running the scraper on your own, please make sure to create a search-only key and [do not share your Admin key][14].

[1]: mailto:docsearch@algolia.com
[2]: https://crawler.algolia.com/
[3]: https://www.algolia.com/products/search-and-discovery/crawler/
[4]: https://www.algolia.com/
[5]: https://www.algolia.com/pricing
[6]: legacy/run-your-own
[7]: https://www.algolia.com/doc/guides/infrastructure/servers/
[8]: https://www.algolia.com/policies/privacy
[9]: https://www.algolia.com/doc/api-client/getting-started/install/javascript/?client=javascript
[10]: https://www.algolia.com/doc/tools/crawler/apis/configuration/exclusion-patterns/
[11]: https://docusaurus.io/
[12]: https://docusaurus.io/blog/2020/01/07/tribute-to-endi
[13]: https://www.algolia.com/doc/guides/security/api-keys/#search-only-api-key
[14]: https://www.algolia.com/doc/guides/security/api-keys/#admin-api-key
[15]: https://www.algolia.com/doc/tools/crawler/troubleshooting/faq/#one-of-my-pages-was-not-crawled
[16]: https://www.algolia.com/doc/tools/crawler/troubleshooting/faq/#when-are-pages-skipped-or-ignored
[17]: https://www.algolia.com/doc/tools/crawler/troubleshooting/faq
62 changes: 27 additions & 35 deletions packages/website/docs/how-does-it-work.mdx
@@ -4,10 +4,7 @@ title: How does it work?

import useBaseUrl from '@docusaurus/useBaseUrl';

Getting up and ready with DocSearch is a straightforward process that requires three steps: you apply, we configure the crawler and the Algolia app for you, and you integrate our UI in your frontend. You only need to copy and paste a JavaScript snippet.

<img
src={useBaseUrl('img/assets/docsearch-how-it-works.png')}
@@ -16,44 +13,39 @@ snippet.

## You apply

The first thing you'll need to do is to apply for DocSearch by [filling out the form on this page][1] (double-check first that [you qualify][2]). We receive a lot of requests, so this form makes sure we won't forget anyone.

We guarantee that we will answer every request, but as we receive a lot of applications, please give us a couple of days to get back to you :)

## We create your Algolia application and a dedicated crawler

Once we receive [your application][1], we'll have a look at your website, create an Algolia application and a dedicated [crawler][5] for it. Your crawler comes with [a configuration file][6] which defines which URLs we should crawl or ignore, as well as the specific CSS selectors to use for selecting headers, subheaders, etc.

This step still requires some manual work and a human brain, but thanks to the more than 4,000 configs we've already created, we're able to automate most of it. Once this creation finishes, we'll run a first indexing of your website and have it run automatically at a random time of the week.

**With the Crawler comes [a dedicated interface][8] for you to:**

- Start, schedule and monitor your crawls
- Edit and test your config file directly with [DocSearch v3][7]

**With the Algolia application comes access to the dashboard for you to:**

- Browse your index and see how your content is indexed
- Various analytics to understand how your search performs and ensure that your users are able to find what they’re searching for
- Trials for other Algolia features
- Team management

## You update your website

We'll then get back to you with the JavaScript snippet you'll need to add to your website. This will bind your [DocSearch component][7] to display results from your Algolia index on each keystroke in a pop-up modal.

Now that DocSearch is set, you don't have anything else to do. We'll keep crawling your website and update your search results automatically. All we ask is that you keep the "Search by Algolia" logo next to your search results.

[1]: apply
[2]: who-can-apply
[4]: styling
[5]: https://www.algolia.com/products/search-and-discovery/crawler/
[6]: https://www.algolia.com/doc/tools/crawler/apis/configuration/
[7]: DocSearch-v3
[8]: https://crawler.algolia.com/
70 changes: 0 additions & 70 deletions packages/website/docs/inside-the-engine.md

This file was deleted.

51 changes: 28 additions & 23 deletions packages/website/docs/integrations.md
@@ -2,32 +2,37 @@
title: Supported Integrations
---

We worked with **documentation website generators** to have DocSearch directly embedded as a first-class citizen in the websites they produce.

## Our great integrations

So, if you're using one of the following tools, check out their documentation to see how to enable DocSearch on your website:

- [Docusaurus v1][1] - [How to enable search][2]
- [Docusaurus v2][3] - [Using Algolia DocSearch][4]
- [VuePress][5] - [Algolia Search][6]
- [pkgdown][7] - [DocSearch indexing][8]
- [LaRecipe][9] - [Algolia Search][10]
- [Orchid][11] - [Algolia Search][12]
- [Smooth DOC][13] - [DocSearch][14]
- [Docsy][15] - [Configure Algolia DocSearch][16]

If you're maintaining a similar tool and want us to add you to the list, [get in touch with us][17]. We'd be happy to help.

[1]: https://v1.docusaurus.io/
[2]: https://v1.docusaurus.io/docs/en/search
[3]: https://docusaurus.io/
[4]: https://docusaurus.io/docs/search#using-algolia-docsearch
[5]: https://vuepress.vuejs.org/
[6]: https://vuepress.vuejs.org/theme/default-theme-config.html#algolia-search
[7]: https://pkgdown.r-lib.org/
[8]: https://pkgdown.r-lib.org/articles/search.html
[9]: https://larecipe.binarytorch.com.my/docs/2.2/overview
[10]: https://larecipe.binarytorch.com.my/docs/2.2/configurations#search
[11]: https://orchid.run
[12]: https://orchid.run/plugins/orchidsearch#algolia-docsearch
[13]: https://smooth-doc.com/
[14]: https://smooth-doc.com/docs/docsearch/
[15]: https://www.docsy.dev/
[16]: https://www.docsy.dev/docs/adding-content/navigation/#configure-algolia-docsearch
[17]: mailto:docsearch@algolia.com
51 changes: 51 additions & 0 deletions packages/website/docs/manage-your-crawls.mdx
@@ -0,0 +1,51 @@
---
title: Manage your crawls
---

import useBaseUrl from '@docusaurus/useBaseUrl';

DocSearch comes with the [Algolia Crawler web interface](https://crawler.algolia.com/) that allows you to configure how and when your Algolia index will be populated.

## Trigger a new crawl

Head over to the `Overview` section to `start`, `restart` or `pause` your crawls and view a real-time summary.

<div className="uil-ta-center">
  <img
    src={useBaseUrl('img/assets/crawler-overview.png')}
    alt="Algolia Crawler overview page"
  />
</div>

## Monitor your crawls

The `monitoring` section helps you find crawl errors or improve your search results.

<div className="uil-ta-center">
  <img
    src={useBaseUrl('img/assets/crawler-monitoring.png')}
    alt="Algolia Crawler monitoring page"
  />
</div>

## Update your config

The live editor allows you to update your config file and test your URLs (`URL tester`).

<div className="uil-ta-center">
  <img
    src={useBaseUrl('img/assets/crawler-editor.png')}
    alt="Algolia Crawler config editor"
  />
</div>

## Search preview

From the [`editor`](#update-your-config), you have access to a `Search preview` tab to browse search results with [`DocSearch v3`](/docs/docsearch-v3).

<div className="uil-ta-center">
  <img
    src={useBaseUrl('img/assets/crawler-search-preview.png')}
    alt="Algolia Crawler search preview"
  />
</div>
113 changes: 113 additions & 0 deletions packages/website/docs/migrating-from-legacy.md
@@ -0,0 +1,113 @@
---
title: Migrating from legacy
---

## Introduction

With this new version of the [DocSearch UI][1], we wanted to go further and provide better tooling for you to create and maintain your config file, and some extra Algolia features that you all have been requesting for a long time!

## What's new?

### Scraper

The DocSearch infrastructure now leverages the [Algolia Crawler][2]. We've teamed up with our friends and created a new [DocSearch helper][4] that extracts records as we were previously doing with our beloved [DocSearch scraper][3]!

The best part is that you no longer need to install any tooling on your side if you want to maintain or update your index!

We now provide [a web interface][7] that will allow you to:

- Start, schedule and monitor your crawls
- Edit your config file from our live editor
- Test your results directly with [DocSearch v3][1]

### Algolia application and credentials

We've received a lot of requests asking for:

- A way to manage team members
- A way to browse and see how Algolia records are indexed
- A way to see and subscribe to other Algolia features

They are now all available, in **your own Algolia application**, for free :D

## Config file key mapping

Below are the keys that can be found in the [`legacy` DocSearch configs][14] and their translation to an [Algolia Crawler config][16]; a short migration sketch follows the table. More detailed documentation of the Algolia Crawler can be found in [the official documentation][15].

| `legacy` | `current` | description |
| --- | --- | --- |
| `start_urls` | [`startUrls`][20] | Now accepts URLs only, see [`helpers.docsearch`][30] to handle custom variables |
| `page_rank` | [`pageRank`][31] | Can be added to the `recordProps` in [`helpers.docsearch`][30] |
| `js_render` | [`renderJavaScript`][21] | Unchanged |
| `js_wait` | [`renderJavascript.waitTime`][22] | See documentation of [`renderJavaScript`][21] |
| `index_name` | **removed**, see [`actions`][23] | Handled directly in the [`actions`][23] |
| `sitemap_urls` | [`sitemaps`][24] | Unchanged |
| `stop_urls` | [`exclusionPatterns`][25] | Supports [`micromatch`][27] |
| `selectors_exclude` | **removed** | Should be handled in the [`recordExtractor`][28] and [`helpers.docsearch`][29] |
| `custom_settings` | [`initialIndexSettings`][26] | Unchanged |
| `scrape_start_urls` | **removed** | Can be handled with [`exclusionPatterns`][25] |
| `strip_chars` | **removed** | `#` are removed automatically from anchor links, edge cases should be handled in the [`recordExtractor`][28] and [`helpers.docsearch`][29] |
| `conversation_id` | **removed** | Not needed anymore |
| `nb_hits` | **removed** | Not needed anymore |
| `sitemap_alternate_links` | **removed** | Not needed anymore |
| `stop_content` | **removed** | Should be handled in the [`recordExtractor`][28] and [`helpers.docsearch`][29] |
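
As a sketch, here is how a minimal legacy config maps onto a Crawler config (`example.io` and the index name are placeholders):

```js
// Legacy JSON (abridged):
// {
//   "index_name": "example",
//   "start_urls": ["https://example.io/docs/"],
//   "sitemap_urls": ["https://example.io/sitemap.xml"],
//   "stop_urls": ["/docs/legacy/"]
// }

// Equivalent Crawler config:
new Crawler({
  startUrls: ['https://example.io/docs/'],
  sitemaps: ['https://example.io/sitemap.xml'],
  exclusionPatterns: ['**/docs/legacy/**'],
  actions: [
    {
      // `index_name` now lives on the action.
      indexName: 'example',
      pathsToMatch: ['https://example.io/docs/**'],
      recordExtractor: ({ helpers }) =>
        helpers.docsearch({ recordProps: { /* see Record Extractor */ } }),
    },
  ],
});
```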

## FAQ

### Migration seems to have started, but I haven't received any email

Due to the large number of indices DocSearch has, we need to migrate configs in small incremental batches.

If you have not received a migration mail yet, don't worry, your turn will come!

### What do I need to do to migrate?

Nothing!

We handle all the migration on our side, [your existing config file][11] will be migrated to an [Algolia Crawler config][12], crawls will be started and scheduled for you, your Algolia application will be ready to go, and your Algolia index populated with your website content!

### What do I need to update to make the migration work?

We've tried to make the migration as seamless as possible for you, so **all you need to update is your frontend integration** with the new credentials you've received by mail, or directly from the [Algolia dashboard][13]!

### What should I do with my legacy config and credentials?

Your [legacy config][11] will be parsed to a [Crawler config][12], please use [the dedicated web interface][7] to make any changes if you already received your access!

Your credentials will remain available, but **once all the existing configs have been migrated, we will stop the daily crawl jobs**.

### Are the [`docsearch-scraper`][8] and [`docsearch-configs`][9] repositories still maintained?

At the time you are reading this, the migration hasn't been completed, so yes, they are still maintained.

**Once the migration has been completed:**

- The [`docsearch-scraper`][8] will be archived and no longer maintained in favor of our [Algolia Crawler][2], but you'll still be able to use our [run your own][3] solution if you want!
- The [`docsearch-configs`][9] repository will be archived and will host **all** of [the existing and active **legacy** DocSearch config files][11], and [their parsed version][12]. You can get a preview [on this branch][10].

[1]: DocSearch-v3
[2]: https://www.algolia.com/products/search-and-discovery/crawler/
[3]: legacy/run-your-own
[4]: record-extractor
[7]: https://crawler.algolia.com/
[8]: https://github.com/algolia/docsearch-scraper
[9]: https://github.com/algolia/docsearch-configs
[10]: https://github.com/algolia/docsearch-configs/tree/feat/crawler
[11]: https://github.com/algolia/docsearch-configs
[12]: https://github.com/algolia/docsearch-configs/tree/feat/crawler/crawler-configs
[13]: https://www.algolia.com/dashboard
[14]: /docs/legacy/config-file
[15]: https://www.algolia.com/doc/tools/crawler/getting-started/overview/
[16]: https://www.algolia.com/doc/tools/crawler/apis/configuration/
[20]: https://www.algolia.com/doc/tools/crawler/apis/configuration/start-urls/
[21]: https://www.algolia.com/doc/tools/crawler/apis/configuration/render-java-script/
[22]: https://www.algolia.com/doc/tools/crawler/apis/configuration/render-java-script/#parameter-param-waittime
[23]: https://www.algolia.com/doc/tools/crawler/apis/configuration/actions/#parameter-param-indexname
[24]: https://www.algolia.com/doc/tools/crawler/apis/configuration/sitemaps/
[25]: https://www.algolia.com/doc/tools/crawler/apis/configuration/exclusion-patterns/
[26]: https://www.algolia.com/doc/tools/crawler/apis/configuration/initial-index-settings/
[27]: https://github.com/micromatch/micromatch
[28]: https://www.algolia.com/doc/tools/crawler/apis/configuration/actions/#parameter-param-recordextractor
[29]: record-extractor
[30]: record-extractor#with-custom-variables
[31]: record-extractor#pagerank
78 changes: 0 additions & 78 deletions packages/website/docs/migration-guide.mdx

This file was deleted.

47 changes: 0 additions & 47 deletions packages/website/docs/modal.md

This file was deleted.

240 changes: 240 additions & 0 deletions packages/website/docs/record-extractor.md
@@ -0,0 +1,240 @@
---
title: Record Extractor
---

:::info

The following content is for **[DocSearch v3][2]** and **[its new infrastructure][5]**. If you are using **[DocSearch v2][3]** or the **[docsearch-scraper][6]**, see the **[legacy][4]** documentation.

:::

## Introduction

:::info

This documentation will only contain information regarding the **helpers.docsearch** method, see **[Algolia Crawler Documentation][7]** for more information on the **[Algolia Crawler][8]**.

:::

Pages are extracted by a [`recordExtractor`][9]. These extractors are assigned to [`actions`][12] via the [`recordExtractor`][9] parameter. This parameter links to a function that returns the data you want to index, organized in an array of JSON objects.

_The helpers are a collection of functions to help you extract content and generate Algolia records._

### Useful links

- [Extracting records with the Algolia Crawler][11]
- [`recordExtractor` parameters][10]

## Usage

The most common way to use the DocSearch helper is to return its result to the [`recordExtractor`][9] function.

```js
recordExtractor: ({ helpers }) => {
  return helpers.docsearch({
    recordProps: {
      lvl0: {
        selectors: "header h1",
      },
      lvl1: "article h2",
      lvl2: "article h3",
      lvl3: "article h4",
      lvl4: "article h5",
      lvl5: "article h6",
      content: "main p, main li",
    },
  });
},
```

## Complex extractors

### Using the Cheerio instance (`$`)

You can also use the provided [`Cheerio instance ($)`][14] to exclude content from the DOM:

```js
recordExtractor: ({ $, helpers }) => {
  // Removing DOM elements we don't want to crawl
  $(".my-warning-message").remove();

  return helpers.docsearch({
    recordProps: {
      lvl0: {
        selectors: "header h1",
      },
      lvl1: "article h2",
      lvl2: "article h3",
      lvl3: "article h4",
      lvl4: "article h5",
      lvl5: "article h6",
      content: "main p, main li",
    },
  });
},
```

### With fallback DOM selectors

Each `lvlX` and `content` supports fallback selectors as an array of strings, which allows for robust config files:

```js
recordExtractor: ({ $, helpers }) => {
  return helpers.docsearch({
    recordProps: {
      // `.exists h1` will be selected if `.exists-probably h1` does not exist.
      lvl0: {
        selectors: [".exists-probably h1", ".exists h1"],
      },
      lvl1: "article h2",
      lvl2: "article h3",
      lvl3: "article h4",
      lvl4: "article h5",
      lvl5: "article h6",
      // `.exists p, .exists li` will be selected.
      content: [
        ".does-not-exists p, .does-not-exists li",
        ".exists p, .exists li",
      ],
    },
  });
},
```

### With custom variables

Custom variables are useful to filter content in the frontend (`version`, `lang`, etc.).

_These selectors also support [`defaultValue`](#with-raw-text-defaultvalue) and [fallback selectors](#with-fallback-dom-selectors)_

```js
recordExtractor: ({ helpers }) => {
  return helpers.docsearch({
    recordProps: {
      lvl0: {
        selectors: "header h1",
      },
      lvl1: "article h2",
      lvl2: "article h3",
      lvl3: "article h4",
      lvl4: "article h5",
      lvl5: "article h6",
      content: "main p, main li",
      // The variables below can be used to filter your search
      foo: ".bar",
      language: {
        // It also supports the fallback DOM selectors syntax!
        selectors: ".does-not-exists",
        // Since custom variables are used for filtering, we allow sending
        // multiple raw values
        defaultValue: ["en", "en-US"],
      },
      version: {
        // You can send raw values without `selectors`
        defaultValue: ["latest", "stable"],
      },
    },
  });
},
```

The `foo`, `language`, and `version` attributes of these records will be:

```json
{
  "foo": "valueFromBarSelector",
  "language": ["en", "en-US"],
  "version": ["latest", "stable"]
}
```

You can now use them to [filter your search in the frontend][16].
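
For example, a minimal frontend sketch matching the records above:

```js
docsearch({
  // ...
  searchParameters: {
    facetFilters: ['language:en', 'version:latest'],
  },
});
```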

### With raw text (`defaultValue`)

The `lvl0` and [custom variables][13] selectors also accept a fallback raw value:

```js
recordExtractor: ({ $, helpers }) => {
  return helpers.docsearch({
    recordProps: {
      lvl0: {
        // It also supports the fallback DOM selectors syntax!
        selectors: ".exists-probably h1",
        defaultValue: "myRawTextIfDoesNotExists",
      },
      lvl1: "article h2",
      lvl2: "article h3",
      lvl3: "article h4",
      lvl4: "article h5",
      lvl5: "article h6",
      content: "main p, main li",
      // The variables below can be used to filter your search
      language: {
        // It also supports the fallback DOM selectors syntax!
        selectors: ".exists-probably .language",
        // Since custom variables are used for filtering, we allow sending
        // multiple raw values
        defaultValue: ["en", "en-US"],
      },
    },
  });
},
```

## `recordProps` API Reference

### `lvl0`

> `type: Lvl0` | **required**

```ts
type Lvl0 = {
selectors: string | string[];
defaultValue?: string;
};
```

### `lvl1`, `content`

> `type: string | string[]` | **required**

### `lvl2`, `lvl3`, `lvl4`, `lvl5`, `lvl6`

> `type: string | string[]` | **optional**

### `pageRank`

> `type: string` | **optional**

### Custom variables (`[k: string]`)

> `type: string | string[] | CustomVariable` | **optional**

```ts
type CustomVariable =
| {
defaultValue: string | string[];
}
| {
selectors: string | string[];
defaultValue?: string | string[];
};
```

Contains values that can be used as [`facetFilters`][15].
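
Note that before a custom variable can be used as a facet filter, it likely needs to be declared in your index settings. A sketch, assuming hypothetical `language` and `version` custom variables and the `initialIndexSettings` block from the crawler config:

```js
initialIndexSettings: {
  YOUR_INDEX_NAME: {
    // `type` comes from the default template; append your custom variables
    attributesForFaceting: ["type", "language", "version"],
  },
},
```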

[1]: DocSearch-v3
[2]: https://github.com/algolia/docsearch/
[3]: https://github.com/algolia/docsearch/tree/master
[4]: /docs/legacy/dropdown
[5]: /docs/migrating-from-legacy
[6]: /docs/legacy/run-your-own
[7]: https://www.algolia.com/doc/tools/crawler/getting-started/overview/
[8]: https://www.algolia.com/products/search-and-discovery/crawler/
[9]: https://www.algolia.com/doc/tools/crawler/apis/configuration/actions/#parameter-param-recordextractor
[10]: https://www.algolia.com/doc/tools/crawler/apis/configuration/actions/#parameter-param-recordextractor-2
[11]: https://www.algolia.com/doc/tools/crawler/guides/extracting-data/#extracting-records
[12]: https://www.algolia.com/doc/tools/crawler/apis/configuration/actions/
[13]: /docs/record-extractor#with-custom-variables
[14]: https://cheerio.js.org/
[15]: https://www.algolia.com/doc/guides/managing-results/refine-results/faceting/
[16]: DocSearch-v3#filtering-your-search
172 changes: 0 additions & 172 deletions packages/website/docs/required-configuration.md

This file was deleted.

208 changes: 208 additions & 0 deletions packages/website/docs/required-configuration.mdx
@@ -0,0 +1,208 @@
---
title: Required configuration
---

This section gives you best practices to optimize our crawl. Adopting the following specification is required to let our crawler build the best experience from your website. You will need to update your website to follow these rules.

:::info

If your website is generated with one of [our supported tools][1], you do not need to change anything: it is already compliant with our requirements.

:::

## The generic configuration example

You can find the default DocSearch config template below and tweak it with examples from our [complex extractors section][12].

If you are using one of [our integrations][13], please see [the templates page][11].

<details><summary>docsearch-default.js</summary>
<div>


```js
new Crawler({
appId: 'YOUR_APP_ID',
apiKey: 'YOUR_API_KEY',
startUrls: ['https://YOUR_START_URL.io/'],
sitemaps: ['https://YOUR_START_URL.io/sitemap.xml'],
actions: [
{
indexName: 'YOUR_INDEX_NAME',
pathsToMatch: ['https://YOUR_START_URL.io/**'],
recordExtractor: ({ helpers }) => {
return helpers.docsearch({
recordProps: {
lvl0: {
selectors: '',
defaultValue: 'Documentation',
},
lvl1: 'header h1',
lvl2: 'article h2',
lvl3: 'article h3',
lvl4: 'article h4',
lvl5: 'article h5',
lvl6: 'article h6',
content: 'article p, article li',
},
});
},
},
],
initialIndexSettings: {
YOUR_INDEX_NAME: {
attributesForFaceting: ['type'],
attributesToRetrieve: [
'hierarchy',
'content',
'anchor',
'url',
'url_without_anchor',
'type',
],
attributesToHighlight: ['hierarchy', 'hierarchy_camel', 'content'],
attributesToSnippet: ['content:10'],
camelCaseAttributes: ['hierarchy', 'hierarchy_radio', 'content'],
searchableAttributes: [
'unordered(hierarchy_radio_camel.lvl0)',
'unordered(hierarchy_radio.lvl0)',
'unordered(hierarchy_radio_camel.lvl1)',
'unordered(hierarchy_radio.lvl1)',
'unordered(hierarchy_radio_camel.lvl2)',
'unordered(hierarchy_radio.lvl2)',
'unordered(hierarchy_radio_camel.lvl3)',
'unordered(hierarchy_radio.lvl3)',
'unordered(hierarchy_radio_camel.lvl4)',
'unordered(hierarchy_radio.lvl4)',
'unordered(hierarchy_radio_camel.lvl5)',
'unordered(hierarchy_radio.lvl5)',
'unordered(hierarchy_radio_camel.lvl6)',
'unordered(hierarchy_radio.lvl6)',
'unordered(hierarchy_camel.lvl0)',
'unordered(hierarchy.lvl0)',
'unordered(hierarchy_camel.lvl1)',
'unordered(hierarchy.lvl1)',
'unordered(hierarchy_camel.lvl2)',
'unordered(hierarchy.lvl2)',
'unordered(hierarchy_camel.lvl3)',
'unordered(hierarchy.lvl3)',
'unordered(hierarchy_camel.lvl4)',
'unordered(hierarchy.lvl4)',
'unordered(hierarchy_camel.lvl5)',
'unordered(hierarchy.lvl5)',
'unordered(hierarchy_camel.lvl6)',
'unordered(hierarchy.lvl6)',
'content',
],
distinct: true,
attributeForDistinct: 'url',
customRanking: [
'desc(weight.pageRank)',
'desc(weight.level)',
'asc(weight.position)',
],
ranking: [
'words',
'filters',
'typo',
'attribute',
'proximity',
'exact',
'custom',
],
highlightPreTag: '<span class="algolia-docsearch-suggestion--highlight">',
highlightPostTag: '</span>',
minWordSizefor1Typo: 3,
minWordSizefor2Typos: 7,
allowTyposOnNumericTokens: false,
minProximity: 1,
ignorePlurals: true,
advancedSyntax: true,
attributeCriteriaComputedByMinProximity: true,
removeWordsIfNoResults: 'allOptional',
separatorsToIndex: '_',
},
},
});
```

</div>
</details>


### Overview of a clear layout

A website implementing these good practices will look simple and crystal clear. It could look like the following:

<img
src="https://docsearch.algolia.com/img/assets/recommended-layout.png"
alt="Recommended layout for your page"
/>

The main blue element will be your `.DocSearch-content` container. More details in the following guidelines.

### Use the right classes as [`recordProps`][2]

You can add specific static classes to help us identify the role of your content. These classes do not involve any style changes; they act as semantic markers. These dedicated classes will help us create a great learn-as-you-type experience from your documentation.

- Add a static class `DocSearch-content` to the main container of your textual content. Most of the time, this tag is a `<main>` or an `<article>` HTML element.

- Every searchable `lvl` element outside this main documentation container (for instance in a sidebar) must be a `global` selector. They will be globally picked up and injected into every record built from your page. Be careful: the level value matters, and every matching element must have an increasing level along the HTML flow. A level `X` (for `lvlX`) should appear after a level `Y` when `X > Y`.

- `lvlX` selectors should use the standard heading tags like `h1`, `h2`, `h3`, etc. You can also use static classes. Set a unique `id` or `name` attribute on these elements, as detailed below.

- Every DOM element matching the `lvlX` selectors must have a unique `id` or `name` attribute. This allows the redirect to scroll directly to the exact location of the matching element. These attributes define the right anchor to use.

- Every textual element (recordProps `content`) must be wrapped in a `<p>` or `<li>` tag. This content must be atomic and split into small entities. Be careful to never nest one matching element into another one as it will create duplicates.

- Stay consistent: the structure needs to remain coherent along the HTML flow.

## Introduce global information as meta tags

Our crawler automatically extracts information from our DocSearch-specific meta tags:

```html
<meta name="docsearch:language" content="en" />
<meta name="docsearch:version" content="1.0.0" />
```

The crawler adds the `content` value of these `meta` tags to all records extracted from the page. The meta tag `name` must follow the `docsearch:$NAME` pattern, where `$NAME` is the name of the attribute set on all records.

The `docsearch:version` meta tag can be a set of [comma-separated tokens][5], each of which is a version relevant to the page. These tokens must be compliant with [the SemVer specification][6] or only contain alphanumeric characters (e.g. `latest`, `next`, etc.). As facet filters, these version tokens are case-insensitive.

For example, all records extracted from a page with the following meta tag:

```html
<meta name="docsearch:version" content="2.0.0-alpha.62,latest" />
```

The `version` attribute of these records will be:

```json
{
  "version": ["2.0.0-alpha.62", "latest"]
}
```

You can then [transform these attributes as `facetFilters`][3] to [filter over them from the UI][10].

## Nice to have

- Your website should have [an updated sitemap][7]. This is key to letting our crawler know what should be updated (see the sketch after this list). Do not worry, we will still crawl your website and discover embedded hyperlinks to find your great content.

- Every page needs to have its full context available. Using global elements might help (see above).

- Make sure your documentation content is also available without JavaScript rendering on the client side. If you absolutely need JavaScript turned on, you need to [set `renderJavaScript: true` in your configuration][8], as sketched below.
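
As a sketch, both of the options mentioned above live at the top level of the crawler config (the URL is a placeholder):

```js
new Crawler({
  // ...
  // Point the crawler at your sitemap:
  sitemaps: ['https://YOUR_START_URL.io/sitemap.xml'],
  // Enable only if your content requires client-side rendering;
  // crawls are typically slower with it on:
  renderJavaScript: true,
});
```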

Any questions? [Send us an email][9].

[1]: integrations
[2]: record-extractor#recordProps-api-reference
[3]: https://www.algolia.com/doc/guides/managing-results/refine-results/faceting/
[5]: https://html.spec.whatwg.org/dev/common-microsyntaxes.html#comma-separated-tokens
[6]: https://semver.org/
[7]: https://www.sitemaps.org/
[8]: https://www.algolia.com/doc/tools/crawler/apis/configuration/render-java-script/
[9]: mailto:DocSearch@algolia.com
[10]: DocSearch-v3#filtering-your-search
[11]: templates
[12]: record-extractor#complex-extractors
[13]: integrations
174 changes: 0 additions & 174 deletions packages/website/docs/run-your-own.md

This file was deleted.

34 changes: 0 additions & 34 deletions packages/website/docs/scraper-overview.md

This file was deleted.

54 changes: 54 additions & 0 deletions packages/website/docs/styling.md
@@ -0,0 +1,54 @@
---
title: Styling
---

:::info

The following content is for **[DocSearch v3][2]**. If you are using **[DocSearch v2][3]**, see the **[legacy][4]** documentation.

:::

:::caution

This documentation is in progress.

:::

## Introduction

DocSearch v3 comes with a theme package called `@docsearch/css`, which offers a sleek out-of-the-box theme!

:::note

You don't need to install this package if you already have [`@docsearch/js`][1] or [`@docsearch/react`][1] installed!

:::

## Installation

```bash
yarn add @docsearch/css@alpha
# or
npm install @docsearch/css@alpha
```

If you don’t want to use a package manager, you can use a standalone endpoint:

```html
<script src="https://cdn.jsdelivr.net/npm/@docsearch/css@alpha"></script>
```

## Files

```
@docsearch/css
├── dist/style.css # all styles
├── dist/_variables.css # CSS variables
├── dist/button.css # CSS for the button
└── dist/modal.css # CSS for the modal
```
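
If you don't need all styles, you can likely import only the files you need instead of the whole `style.css`; a sketch, assuming your bundler handles CSS imports:

```js
// Instead of importing everything with `import '@docsearch/css';`,
// pick the individual files listed above:
import '@docsearch/css/dist/_variables.css';
import '@docsearch/css/dist/button.css';
import '@docsearch/css/dist/modal.css';
```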

[1]: DocSearch-v3
[2]: https://github.com/algolia/docsearch/
[3]: https://github.com/algolia/docsearch/tree/master
[4]: legacy/dropdown
91 changes: 0 additions & 91 deletions packages/website/docs/styling.mdx

This file was deleted.

819 changes: 819 additions & 0 deletions packages/website/docs/templates.mdx

Large diffs are not rendered by default.

113 changes: 27 additions & 86 deletions packages/website/docs/tips.md
@@ -2,129 +2,70 @@
title: Tips for a good search
---

DocSearch can work with almost any website, but we've found that some site structures yield more relevant results or faster indexing time. In this page we'll share some tips on how you can make the most out of DocSearch.

## Use a `sitemap.xml`

If you provide a sitemap in your configuration, DocSearch will use it to directly browse the pages to index. Pages are still crawled, which means we extract every compliant link.

We highly recommend you add a `sitemap.xml` to your website if you don't have one already. This will make the indexing faster and give you more control over which pages to include in the indexing.

Sitemaps are also considered good practice for other aspects, including SEO ([more information on sitemaps][1]).

## Structure the hierarchy of information

DocSearch works better on structured documentation. Relevance of results is based on the structural hierarchy of content. In simpler terms, it means that we read the `<h1>`, ..., `<h6>` headings of your page to guess the hierarchy of information. This hierarchy brings contextual information to your records.

Documentation starts by explaining generic concepts first and then goes deeper into specifics. This is represented in your HTML markup by the hierarchy of headings you're using. For example, concepts discussed under an `<h4>` are more specific than concepts discussed under an `<h2>` on the same page. The sooner the information comes up within the page, the higher it is ranked.

DocSearch uses this structure to fine-tune the relevance of results as well as to provide potential filtering. Documentation that follows this pattern often has better relevance in its search results.

Finding the right depth for your documentation tree and deciding how to split up your content are two of the most complex tasks. We recommend at least three different levels, and four (from `lvl0` to `lvl3`) for large pages.
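
As a rough sketch, a four-level `recordProps` mapping could look like this (the selectors are assumptions to adapt to your own markup):

```js
recordProps: {
  lvl0: {
    selectors: 'header h1',
    defaultValue: 'Documentation',
  },
  lvl1: 'article h2',
  lvl2: 'article h3',
  lvl3: 'article h4',
  content: 'article p, article li',
},
```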

_Note that you don't have to use `<hX>` tags and can use classes instead (e.g., `<span class="title-X">`)._
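
In that case, the class names simply become your selectors. A sketch, assuming hypothetical `title-X` classes:

```js
recordProps: {
  lvl0: { selectors: '.title-0' },
  lvl1: '.title-1',
  lvl2: '.title-2',
  content: 'article p, article li',
},
```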

## Set a unique class to the element holding the content

DocSearch extracts content based on the HTML structure. We recommend that you add a custom `class` to the HTML element wrapping all your textual content. This will help narrow selectors to the relevant content.

Having such a unique identifier will make your configuration more robust as it will make sure indexed content is relevant content. We found that this is the most reliable way to exclude content in headers, sidebars, and footers that are not relevant to the search.

## Add anchors to headings

When using headings (as mentioned above), you should also try to add a custom anchor to each of them. Anchors are specified by HTML attributes (`name` or `id`) added to headers that allow browsers to directly scroll to the right position in the page. They're accessible by clicking a link with `#` followed by the anchor.

DocSearch will honor such anchors and automatically bring your users to the anchor closest to the search result they selected.

## Marking the active page(s) in the navigation

If you're using a multi-level navigation, we recommend that you mark each active level with a custom CSS class. This will make it easier for DocSearch to know _where_ the current page fits in the website hierarchy.

For example, if your `troubleshooting.html` page is located under the "Installation" menu in your sidebar, we recommend that you add a custom CSS class to the "Installation" and "Troubleshooting" links in your sidebar.

The name of the CSS class does not matter, as long as it's something that can be used as part of a CSS selector.

## Consistency of your content

Consistency is a pillar of meaningful documentation. It increases the **intelligibility** of a document and shortens the time required for a user to find the coveted information. The document **topic** should be **identifiable** and its **outline** should be demarcated.

The hierarchy should always have the same depth. Try to **avoid orphan records** such as the introduction/conclusion, or asides. The selectors must be efficient for **every document** and highlight the proper hierarchy. They need to match the coveted elements depending on their level. Be careful to avoid **edge effects** caused by matching unexpected, **superfluous elements**.

Selectors should match information from **real document web pages** and stay ineffective for other ones (e.g., landing page, table of contents, etc.). We urge the maintainer to define a **dedicated class** for the **main DOM container** that holds the actual document content, such as `.DocSearch-content`.

Since documentation should be **interactive**, it is a key point to **verbalize concepts with standardized words**. This **redundancy**, empowered with the **search experience** (dropdown), will even enable the **learn-as-you-type experience**. The **way to find the information** plays a key role in **leading** the user to the **retrieved knowledge**. You can also use the **synonym feature**.

## Avoid duplicates by promoting uniqueness

The more time-consuming documentation is to read, the more painful and reluctant its use will be. You must avoid hazy points and catch-alls. Besides being unhelpful, a catch-all document may be **confusing** and **counterproductive**.

Duplicates introduce noise and mislead users. This is why you should always focus on the relevant content and avoid duplicating content within your site (for example, a landing page that sums up all the information). If duplicates are expected because they belong to multiple datasets (for example, a different version), you should use [facets][3].

## Conciseness

What is clearly thought out is clearly and concisely expressed.

We highly recommend that you read this blog post about [how to build a helpful search for technical documentation][2].

[1]: https://www.sitemaps.org/index.html
[2]: https://blog.algolia.com/how-to-build-a-helpful-search-for-technical-documentation-the-laravel-example/
[3]: https://www.algolia.com/doc/guides/searching/faceting/