Post summary
The post summary component condenses a post's content and its associated metadata into a single, highly configurable block.
Classes
| Class | Parent | Description |
|---|---|---|
| .s-post-summary | N/A | Base parent container for a post summary |
| .s-post-summary__answered | .s-post-summary | Adds the styling for a question with an accepted answer |
| .s-post-summary__deleted | .s-post-summary | Adds the styling for a deleted post |
| .s-post-summary--sm-hide | .s-post-summary | Hides the stats container on small screens |
| .s-post-summary--sm-show | .s-post-summary | Shows the stats container on small screens |
| .s-post-summary--answers | .s-post-summary | Container for the post summary answers |
| .s-post-summary--answer | .s-post-summary--answers | Container for a single post summary answer |
| .s-post-summary--answer__accepted | .s-post-summary--answer | Adds the styling for an accepted answer |
| .s-post-summary--content | .s-post-summary | Container for the post summary content |
| .s-post-summary--content-meta | .s-post-summary--content | Container for post metadata, such as tags and user cards |
| .s-post-summary--content-type | .s-post-summary--content | Container for the post summary content type |
| .s-post-summary--excerpt | .s-post-summary--content | Container for the post summary excerpt |
| .s-post-summary--stats | .s-post-summary | Container for the post summary stats |
| .s-post-summary--stats-answers | .s-post-summary--stats | Container for the answer count stat |
| .s-post-summary--stats-bounty | .s-post-summary--stats | Container for the post summary bounty |
| .s-post-summary--stats-item | .s-post-summary--stats | A generic container for views, comments, read time, and other metadata; prepends a separator icon |
| .s-post-summary--stats-votes | .s-post-summary--stats | Container for the post summary votes |
| .s-post-summary--tags | .s-post-summary | Container for the post summary tags |
| .s-post-summary--title | .s-post-summary | Container for the post summary title |
| .s-post-summary--title-link | .s-post-summary--title | Link styling for the post summary title |
| .s-post-summary--title-icon | .s-post-summary--title | Icon styling for the post summary title |
Examples
Use the post summary component to provide a concise summary of a question, article, or other content.
<div class="s-post-summary">
<div class="s-post-summary--stats s-post-summary--sm-hide">
<div class="s-post-summary--stats-votes">…</div>
<div class="s-post-summary--stats-answers">…</div>
</div>
<div class="s-post-summary--content">
<div class="s-post-summary--content-meta">
<div class="s-user-card s-user-card__sm">
<div class="s-user-card--group">
<a class="s-avatar" href="…">
<img class="s-avatar--image" src="…">
<span class="v-visible-sr">…</span>
</a>
<span class="s-user-card--username">…</span>
<ul class="s-user-card--group">
<li class="s-user-card--rep">
<span class="s-bling s-bling__rep s-bling__sm">
<span class="v-visible-sr">reputation bling</span>
</span>
…
</li>
</ul>
<span>
<a class="s-user-card--time" title="…" data-controller="s-tooltip" href="…">
<time>…</time>
</a>
</span>
</div>
</div>
<div class="s-post-summary--stats s-post-summary--sm-show">
<div class="s-post-summary--stats-votes">
{% icon "Vote16Up" %}
…
</div>
<div class="s-post-summary--stats-answers">
{% icon "Answer16" %}
…
<span class="v-visible-sr">answers</span>
</div>
</div>
<div class="s-post-summary--stats-item">… views</div>
</div>
<div class="s-post-summary--title">
<a class="s-post-summary--title-link" href="…">…</a>
</div>
<p class="s-post-summary--excerpt v-truncate3">…</p>
<div class="s-post-summary--tags">
<a class="s-tag" href="…">…</a>
…
</div>
</div>
</div>
Answered
Add the .s-post-summary__answered modifier class to indicate that the post has an accepted answer.
<div class="s-post-summary s-post-summary__answered">
…
</div>
Bountied
Include the .s-post-summary--stats-bounty element to indicate that the post has a bounty.
<div class="s-post-summary">
<div class="s-post-summary--stats s-post-summary--sm-hide">
<div class="s-post-summary--stats-votes">…</div>
<div class="s-post-summary--stats-answers">…</div>
<div class="s-post-summary--stats-bounty">
+50 <span class="v-visible-sr">bounty</span>
</div>
</div>
<div class="s-post-summary--content">
…
<div class="s-post-summary--content-meta">
<div class="s-user-card s-user-card__sm">…</div>
<div class="s-post-summary--stats s-post-summary--sm-show">
<div class="s-post-summary--stats-votes">…</div>
<div class="s-post-summary--stats-answers">…</div>
<div class="s-post-summary--stats-bounty">
+50 <span class="v-visible-sr">bounty</span>
</div>
</div>
</div>
…
</div>
…
</div>
Ignored
Including an ignored tag automatically applies custom ignored styling to the post summary.
<div class="s-post-summary">
…
<div class="s-post-summary--content">
…
<div class="s-post-summary--tags">
<a class="s-tag s-tag__ignored" href="…">…</a>
…
</div>
</div>
</div>
Watched
Including a watched tag automatically applies custom watched styling to the post summary.
<div class="s-post-summary">
…
<div class="s-post-summary--content">
…
<div class="s-post-summary--tags">
<a class="s-tag s-tag__watched" href="…">…</a>
…
</div>
</div>
</div>
Deleted
Add the .s-post-summary__deleted modifier class to apply custom deleted styling to the post summary.
<div class="s-post-summary s-post-summary__deleted">
…
</div>
State badges
Include the appropriate state badge to indicate the current state of the post.
<!-- Draft -->
<div class="s-post-summary">
<div class="s-post-summary--stats s-post-summary--sm-hide">…</div>
<div class="s-post-summary--content">
<div class="s-post-summary--sm-show">
<span class="s-badge s-badge__sm s-badge__info">
{% icon "Compose" %} Draft
</span>
</div>
<div class="s-post-summary--content-meta">
<div class="s-user-card s-user-card__sm">…</div>
<div class="s-post-summary--stats s-post-summary--sm-show">…</div>
<div class="s-post-summary--stats-item">… views</div>
<span class="s-badge s-badge__info ml-auto s-post-summary--sm-hide">
{% icon "Compose" %} Draft
</span>
</div>
…
</div>
</div>
<!-- Review -->
<div class="s-post-summary">
<div class="s-post-summary--stats s-post-summary--sm-hide">…</div>
<div class="s-post-summary--content">
<div class="s-post-summary--sm-show">
<span class="s-badge s-badge__sm s-badge__warning">
{% icon "Eye" %} Review
</span>
</div>
<div class="s-post-summary--content-meta">
<div class="s-user-card s-user-card__sm">…</div>
<div class="s-post-summary--stats s-post-summary--sm-show">…</div>
<div class="s-post-summary--stats-item">… views</div>
<span class="s-badge s-badge__warning ml-auto s-post-summary--sm-hide">
{% icon "Eye" %} Review
</span>
</div>
…
</div>
</div>
<!-- Closed -->
<div class="s-post-summary">
<div class="s-post-summary--stats s-post-summary--sm-hide">…</div>
<div class="s-post-summary--content">
<div class="s-post-summary--sm-show">
<span class="s-badge s-badge__sm s-badge__danger">
{% icon "Flag" %} Closed
</span>
</div>
<div class="s-post-summary--content-meta">
<div class="s-user-card s-user-card__sm">…</div>
<div class="s-post-summary--stats s-post-summary--sm-show">…</div>
<div class="s-post-summary--stats-item">… views</div>
<span class="s-badge s-badge__danger ml-auto s-post-summary--sm-hide">
{% icon "Flag" %} Closed
</span>
</div>
…
</div>
</div>
<!-- Archived -->
<div class="s-post-summary">
<div class="s-post-summary--stats s-post-summary--sm-hide">…</div>
<div class="s-post-summary--content">
<div class="s-post-summary--sm-show">
<span class="s-badge s-badge__sm">
{% icon "Document" %} Archived
</span>
</div>
<div class="s-post-summary--content-meta">
<div class="s-user-card s-user-card__sm">…</div>
<div class="s-post-summary--stats s-post-summary--sm-show">…</div>
<div class="s-post-summary--stats-item">… views</div>
<span class="s-badge ml-auto s-post-summary--sm-hide">
{% icon "Document" %} Archived
</span>
</div>
…
</div>
</div>
<!-- Pinned -->
<div class="s-post-summary">
<div class="s-post-summary--stats s-post-summary--sm-hide">…</div>
<div class="s-post-summary--content">
<div class="s-post-summary--sm-show">
<span class="s-badge s-badge__sm s-badge__tonal">
{% icon "Key" %} Pinned
</span>
</div>
<div class="s-post-summary--content-meta">
<div class="s-user-card s-user-card__sm">…</div>
<div class="s-post-summary--stats s-post-summary--sm-show">…</div>
<div class="s-post-summary--stats-item">… views</div>
<span class="s-badge s-badge__tonal ml-auto s-post-summary--sm-hide">
{% icon "Key" %} Pinned
</span>
</div>
…
</div>
</div>
Content types
Include the appropriate content type badge to indicate the type of content the post represents.
<!-- Announcement -->
<div class="s-post-summary">
…
<div class="s-post-summary--content">
…
<div class="s-post-summary--tags">
<a class="s-post-summary--content-type" href="#">
{% icon "Document" %} Announcement
</a> …
</div>
</div>
</div>
<!-- How-to guide -->
<div class="s-post-summary">
…
<div class="s-post-summary--content">
…
<div class="s-post-summary--tags">
<a class="s-post-summary--content-type" href="#">
{% icon "Document" %} How-to guide
</a> …
</div>
</div>
</div>
<!-- Knowledge article -->
<div class="s-post-summary">
…
<div class="s-post-summary--content">
…
<div class="s-post-summary--tags">
<a class="s-post-summary--content-type" href="#">
{% icon "Document" %} Knowledge article
</a> …
</div>
</div>
</div>
<!-- Policy -->
<div class="s-post-summary">
…
<div class="s-post-summary--content">
…
<div class="s-post-summary--tags">
<a class="s-post-summary--content-type" href="#">
{% icon "Document" %} Policy
</a> …
</div>
</div>
</div>
Excerpt sizes
Classes

Post summaries can be shown without an excerpt, or with an excerpt of one, two, or three lines of text. Exclude the excerpt container to hide the excerpt, or apply the appropriate truncation class to the excerpt container. See also Truncation.

| Class | Description |
|---|---|
| .v-truncate1 | Truncates the excerpt to one line of text. |
| .v-truncate2 | Truncates the excerpt to two lines of text. |
| .v-truncate3 | Truncates the excerpt to three lines of text. |
Examples
<!-- No excerpt -->
<div class="s-post-summary">
<div class="s-post-summary--stats s-post-summary--sm-hide">…</div>
<div class="s-post-summary--content">…</div>
</div>
<!-- Small excerpt -->
<div class="s-post-summary">
<div class="s-post-summary--stats s-post-summary--sm-hide">…</div>
<div class="s-post-summary--content">
…
<p class="s-post-summary--excerpt v-truncate1">…</p>
…
</div>
</div>
<!-- Medium excerpt -->
<div class="s-post-summary">
<div class="s-post-summary--stats s-post-summary--sm-hide">…</div>
<div class="s-post-summary--content">
…
<p class="s-post-summary--excerpt v-truncate2">…</p>
…
</div>
</div>
<!-- Large excerpt -->
<div class="s-post-summary">
<div class="s-post-summary--stats s-post-summary--sm-hide">…</div>
<div class="s-post-summary--content">
…
<p class="s-post-summary--excerpt v-truncate3">…</p>
…
</div>
</div>
Small container
Post summaries adapt to their container size. When rendered in a container narrower than 448px, the post summary switches to a compact layout.
<div class="s-post-summary">…</div>
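One way to exercise the compact layout is to render the component inside a width-constrained wrapper. The inline max-width below is purely illustrative (not a Stacks class); any container narrower than the 448px threshold triggers the compact layout:

```html
<!-- Illustrative narrow wrapper to demonstrate the compact layout -->
<div style="max-width: 400px;">
  <div class="s-post-summary">…</div>
</div>
```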
Answers
Answers to a question can be shown in a post summary. Include the .s-post-summary--answers container to show the answers.
For accepted answers, add the .s-post-summary--answer__accepted modifier class and display the Accepted answer text and icon as shown in the example below.
<div class="s-post-summary">
<div class="s-post-summary--stats s-post-summary--sm-hide">…</div>
<div class="s-post-summary--content">
<div class="s-post-summary--content-meta">…</div>
<div class="s-post-summary--title">…</div>
<p class="s-post-summary--excerpt v-truncate3">…</p>
<div class="s-post-summary--tags">…</div>
<div class="s-post-summary--answers">
<div class="s-post-summary--answer s-post-summary--answer__accepted">
<div class="s-post-summary--content-meta">
<div class="s-user-card s-user-card__sm">
<div class="s-user-card--group">
<a class="s-avatar" href="…">
<img class="s-avatar--image" src="…">
<span class="v-visible-sr">…</span>
</a>
<span class="s-user-card--username">…</span>
</div>
<ul class="s-user-card--group">
<li class="s-user-card--rep">
<span class="s-bling s-bling__rep s-bling__sm">
<span class="v-visible-sr">reputation bling</span>
</span>
…
</li>
</ul>
<span>
<a class="s-user-card--time" title="…" data-controller="s-tooltip" href="…">
<time>…</time>
</a>
</span>
</div>
<div class="s-post-summary--stats">
<div class="s-post-summary--stats-votes">
{% icon "Vote16Up" %}
…
</div>
<div class="s-post-summary--stats-answers">
{% icon "Answer16Fill" %}
Accepted answer
</div>
</div>
</div>
<p class="s-post-summary--excerpt">…</p>
</div>
<div class="s-post-summary--answer">
<div class="s-post-summary--content-meta">
<div class="s-user-card s-user-card__sm">
<div class="s-user-card--group">
<a class="s-avatar" href="…">
<img class="s-avatar--image" src="…">
<span class="v-visible-sr">…</span>
</a>
<span class="s-user-card--username">…</span>
</div>
<ul class="s-user-card--group">
<li class="s-user-card--rep">
<span class="s-bling s-bling__rep s-bling__sm">
<span class="v-visible-sr">reputation bling</span>
</span>
…
</li>
</ul>
<span>
<a class="s-user-card--time" title="…" data-controller="s-tooltip" href="…">
<time>…</time>
</a>
</span>
</div>
<div class="s-post-summary--stats">
<div class="s-post-summary--stats-votes">
{% icon "Vote16Up" %}
…
</div>
</div>
</div>
<p class="s-post-summary--excerpt">…</p>
</div>
</div>
</div>
</div>