Perplexity.ai, Most Transparent Artificial Intelligence Search Engine with Citations

AI is becoming mainstream, permeating almost all aspects of our lives and positioning itself to challenge not only traditional businesses but also newer technology companies. With powerful large language models (LLMs) containing billions of parameters and trained on trillions of tokens, AI systems have access to more information than any of us could ever hold, enabling truly world-changing products.

One such transformational AI product has been developed by Perplexity.ai. Perplexity Labs has built two large language models, pplx-7b-online and pplx-70b-online, launched in Nov. 2023. With them, Perplexity Labs is trying to address two major concerns with the most commonly used LLMs today:

  1. Up-to-Date Information: Most LLMs are trained on data gathered only up to a particular cutoff date, so they cannot answer questions about anything more recent.
  2. Inaccurate Results: Most LLMs, in trying to be creative, tend to lose out on factual accuracy and can confidently produce incorrect answers.

Both PPLX models address these shortcomings by providing factual and up-to-date information, coming close to what Google has achieved with its recently launched LLM, Google Gemini v1.0.

Both LLMs are online models, meaning they pull factual and up-to-date information from the internet instead of relying solely on pre-trained knowledge. They have been built on top of the open-source models mentioned in our previous article: Mistral-7b and Llama2-70b.

Key Features of Perplexity.ai:

  • Thread control over a topic: Perplexity.ai is not a one-size-fits-all AI tool. You can dive deeper into specific parts of your discovery journey by asking follow-up questions in the same thread, making the answers more relevant and less repetitive. Perplexity.ai then intelligently adjusts its answers to better match your interests.
  • Community Curation: The “discover” feature of Perplexity is not an independent aspect of this AI tool. You can see community-generated knowledge graphs, as well as topic summaries, that offer different perspectives and insights. This human touch helps filter the vastness of AI-generated information.
  • Citation and Sourcing: The most important and unique feature of the Perplexity AI models is how they build trust and transparency. For every answer, perplexity.ai carefully cites its sources so you can verify the information and dig deeper. This not only encourages critical thinking, but also helps combat the echo chamber of untrusted knowledge through verifiability, accountability, and complete transparency.
  • User Experience and Interaction: Though Perplexity’s LLMs are conversational just like ChatGPT, they focus more on factual information and accuracy, with source reliability and transparency. This comes at the cost of being weaker in areas like storytelling, dialogue, or book writing. In this respect, Perplexity’s LLMs resemble Google Gemini, unlike ChatGPT, which is more willing to bend toward the artistic side for creative work.
  • Learning and Adaptability: Most deep learning LLMs, including Perplexity, ChatGPT, and Google Gemini, can learn from user experience and feedback. However, Perplexity’s interactive learning is designed to quickly adapt to user feedback within the context of providing fact-based answers. This implies a steeper learning curve for the model and possibly faster improvement in answer quality over time.
  • Integration and Personalization: Perplexity.ai provides APIs for developers to embed its functionality into other programs. While OpenAI also offers API access to ChatGPT, Perplexity’s personalization features, especially its AI personality, represent a novel approach to user interaction.
  • Perplexity Playground: Unlike ChatGPT, which uses only its in-house LLMs, the Perplexity Playground lets you choose from different LLMs such as pplx-7b-online, pplx-70b-online, pplx-7b-chat, pplx-70b-chat, mistral-7b-instruct, codellama-34b-instruct, llama-2-70b-chat, llava-7b-chat, mixtral-8x7b-instruct, and mistral-medium, including the open-source models on which the Perplexity LLMs were built.
  • Rate Limit:

Perplexity Labs limits usage per model: requests are throttled once a user’s request rate or token usage rate hits any of the limits for that model:

| Model | Request rate limit | Token rate limit |
| --- | --- | --- |
| mistral-7b-instruct | 10 / 5 sec, 50 / min, 500 / hour | 8000 / 10 sec, 80000 / min, 256000 / 10 min |
| mixtral-8x7b-instruct | 4 / 5 sec, 12 / min, 120 / hour | 8000 / min, 32000 / 10 min |
| codellama-34b-instruct | 10 / 5 sec, 30 / min, 300 / hour | 20000 / min, 80000 / 10 min |
| llama-2-70b-chat | 4 / 5 sec, 12 / min, 120 / hour | 8000 / min, 32000 / 10 min |
| pplx-7b-chat | 4 / 5 sec, 12 / min, 120 / hour | 8000 / min, 32000 / 10 min |
| pplx-70b-chat | 4 / 5 sec, 12 / min, 120 / hour | 8000 / min, 32000 / 10 min |
| pplx-7b-online | 10 / min | N/A |
| pplx-70b-online | 10 / min | N/A |
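
To stay under caps like these, a client can throttle itself before each call. Below is a minimal sliding-window limiter sketch (a hypothetical client-side helper, not part of the Perplexity API), configured here for the 10 requests/minute limit of the pplx-7b-online model:

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `max_calls` within any `window`-second span."""

    def __init__(self, max_calls, window):
        self.max_calls = max_calls
        self.window = window
        self.calls = deque()  # timestamps of calls still inside the window

    def acquire(self, now=None):
        """Return True (and record the call) if a request may proceed now."""
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False

# pplx-7b-online allows 10 requests per minute:
limiter = SlidingWindowLimiter(max_calls=10, window=60.0)
```

A caller would check `limiter.acquire()` before each request and sleep briefly when it returns False. The same class covers the per-5-second and per-hour tiers by stacking one limiter per tier.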

Standing out from Other LLMs:

  • ChatGPT: ChatGPT and Perplexity both use conversational language models, but ChatGPT leans more toward open-ended conversation and creative generation, while Perplexity focuses on factual accuracy and knowledge sourcing, with research and deep exploration designed to quickly adapt to user feedback within the context of fact-based answers.
  • Google Gemini: Gemini is a sibling project at Google DeepMind. It is similar to Perplexity in terms of factual accuracy and information retrieval, but it is more focused on research and deeper exploration. Perplexity’s unique combination of AI and community-curated content gives it an edge in user experience and discovery.

Pricing of Perplexity.ai:

With competitive pricing at $5/month, Perplexity is going to give tough competition to ChatGPT, though its closest cousin, Google Gemini, is currently accessible for free in version 1.0.

Vanilla language models are priced on input and output tokens based on the size of the model.

| Model Parameter Count | $/1M input tokens | $/1M output tokens |
| --- | --- | --- |
| 7B | $0.07 | $0.28 |
| 34B | $0.35 | $1.40 |
| 70B | $0.70 | $2.80 |

For -online models, input tokens are free. Instead, a flat $5 is charged per thousand requests, in addition to the output token charge.

| Online Model Parameter Count | $/1000 requests | $/1M output tokens |
| --- | --- | --- |
| 7B | $5 | $0.28 |
| 70B | $5 | $2.80 |
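
Putting the two tables together, a rough per-job cost can be estimated. The sketch below hard-codes the prices listed above (a hypothetical helper; verify the numbers against the official pricing page before relying on them):

```python
# Prices copied from the tables above, in USD.
VANILLA_PRICES = {  # model size -> ($/1M input tokens, $/1M output tokens)
    "7B": (0.07, 0.28),
    "34B": (0.35, 1.40),
    "70B": (0.70, 2.80),
}
ONLINE_PRICES = {  # model size -> ($/1000 requests, $/1M output tokens)
    "7B": (5.00, 0.28),
    "70B": (5.00, 2.80),
}

def vanilla_cost(size, input_tokens, output_tokens):
    """Cost of a vanilla model: priced on both input and output tokens."""
    in_rate, out_rate = VANILLA_PRICES[size]
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

def online_cost(size, requests, output_tokens):
    """Cost of an -online model: input is free, flat fee per 1000 requests."""
    per_thousand, out_rate = ONLINE_PRICES[size]
    return requests / 1000 * per_thousand + output_tokens / 1e6 * out_rate
```

For example, one million input and one million output tokens on the 70B vanilla model cost $0.70 + $2.80 = $3.50, while 1000 requests to pplx-7b-online producing one million output tokens cost $5.00 + $0.28 = $5.28.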

With its powerful LLMs and transparent sourcing and citations, Perplexity aims to be the best search engine, providing users a seamless and advertisement-free experience. Sourcing and citation are particularly useful for students and for research.
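
As a concrete illustration of the API integration mentioned earlier, the sketch below builds a chat-completions request in the OpenAI-style format that Perplexity’s API docs describe. The endpoint URL, field names, and system prompt here are assumptions based on those docs; verify them against the getting-started guide linked below before use.

```python
import json
import urllib.request

# Assumed endpoint, per Perplexity's API documentation.
API_URL = "https://api.perplexity.ai/chat/completions"

def build_request(api_key, model, question):
    """Build an HTTP request asking `model` a single question."""
    body = {
        "model": model,  # e.g. "pplx-7b-online"
        "messages": [
            {"role": "system", "content": "Be precise and cite sources."},
            {"role": "user", "content": question},
        ],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Usage (requires a real API key):
# resp = urllib.request.urlopen(build_request(key, "pplx-7b-online", "Latest AI news?"))
# print(json.load(resp)["choices"][0]["message"]["content"])
```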

For more information about Perplexity LLM Models: https://blog.perplexity.ai/blog/introducing-pplx-online-llms?utm_source=labs&utm_medium=labs&utm_campaign=online-llms

For more information on API, Supported LLM Models and Pricing: https://docs.perplexity.ai/docs/getting-started