(1) Create a new semantic library from your own content or third-party sources. These can be websites or other proprietary content you own, such as ebooks and digital papers.
(2) A dedicated search model is trained for your library every time you ingest new content.
(3) Run semantic searches to extract relevant text fragments from your library, then supply them as context to any general-purpose AI language model or writing tool to generate unique content grounded in your sources.
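The three steps above can be sketched in a few lines of Python. This is a minimal, self-contained illustration: the in-memory library, the word-overlap similarity score, and the prompt template are stand-ins for the real, embedding-based search model, not our actual API.

```python
# Illustrative sketch of the workflow: ingest -> search -> build a prompt.
# The overlap score below is a toy stand-in for semantic similarity.

def ingest(library, source_id, text):
    """Step 1: add a content source to the semantic library."""
    library[source_id] = text

def search(library, query, top_k=2):
    """Step 2: rank sources by a toy word-overlap score and
    return the best-matching fragments."""
    query_words = set(query.lower().split())
    scored = sorted(
        library.items(),
        key=lambda kv: len(query_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]

def build_prompt(task, fragments):
    """Step 3: append the fragments as context for a language model."""
    context = "\n".join(f"- {frag}" for frag in fragments)
    return f"Using only this context:\n{context}\n\nWrite about: {task}"

library = {}
ingest(library, "ebook", "Semantic search ranks text by meaning, not keywords.")
ingest(library, "site", "Our blog covers cooking recipes and travel tips.")
fragments = search(library, "semantic search", top_k=1)
prompt = build_prompt("semantic search", fragments)
```

The final `prompt` string is what you would hand to a general-purpose language model in step 3.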
We currently support a wide variety of sources from which you can import content, including websites, ebooks, papers, audio and video files, and YouTube. Reach out if you need a particular source that is not yet supported.
We put a lot of effort into making our ingestors as robust and flexible as possible. Parsing content from varied sources is not an easy task: every website is different, and books come in multiple formats. We know that the better our parsers are, the higher the quality of your imported content will be.
Whenever you ingest a new content source, we take its text and fragment it into smaller pieces that are indexed so you can search them using natural language. How this fragmentation is done matters, and we put a lot of effort into it: if fragments are too large, searches become too broad; if they are too small, the index lacks the context it needs to judge whether a fragment is relevant to the search term.
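A simple way to picture the trade-off is a sliding window with overlap, so that context spanning a fragment boundary is not lost. The character-based windowing and the specific sizes below are illustrative assumptions; production chunkers typically split on sentence or paragraph boundaries and tune sizes empirically.

```python
def fragment(text, size=40, overlap=10):
    """Split text into overlapping character windows.

    Each fragment shares `overlap` characters with the previous one,
    so meaning that straddles a boundary still appears whole in at
    least one fragment. Sizes here are toy values for illustration.
    """
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

# 100 characters -> three 40-char fragments, each overlapping its
# neighbor by 10 characters.
text = "".join(str(i % 10) for i in range(100))
chunks = fragment(text)
```

Larger `size` values make each fragment match more queries (too broad); smaller ones starve the index of context, which is exactly the balance described above.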
Simply run a semantic query on your library using a prompt similar to the one you use to generate your content, then append the retrieved fragments to your generation prompt to create unique content. You can also choose which fragments go into the generation phase.
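Choosing which fragments reach the generation phase often comes down to a context budget. The sketch below assumes fragments arrive ranked by relevance and uses a simple character cap as the budget; both the cap and the prompt template are illustrative choices, not a prescribed format.

```python
def assemble_prompt(task, fragments, max_chars=500):
    """Append ranked fragments to a generation prompt until a simple
    character budget (an illustrative stand-in for a token limit)
    is exhausted. Returns the prompt and the fragments it used."""
    selected, used = [], 0
    for frag in fragments:  # assumed ranked by relevance, best first
        if used + len(frag) > max_chars:
            break
        selected.append(frag)
        used += len(frag)
    context = "\n\n".join(selected)
    return f"Context:\n{context}\n\nTask: {task}", selected

frags = ["x" * 200, "y" * 200, "z" * 200]
prompt, chosen = assemble_prompt("write a summary", frags, max_chars=450)
```

Dropping the lowest-ranked fragments first keeps the most relevant context inside the model's window, which is why the input order matters here.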