From 7e526262133fdbd34df89beb5e8ac2e0c4d92202 Mon Sep 17 00:00:00 2001
From: brandon
Date: Wed, 21 Aug 2024 16:09:42 -0700
Subject: [PATCH] removing Sharmila's blog post and redirecting to new location

---
 docs/blog/2024-03-20-unclear-blog.md | 72 ----------------------------
 docs/docusaurus.config.js            |  6 ++-
 2 files changed, 5 insertions(+), 73 deletions(-)
 delete mode 100644 docs/blog/2024-03-20-unclear-blog.md

diff --git a/docs/blog/2024-03-20-unclear-blog.md b/docs/blog/2024-03-20-unclear-blog.md
deleted file mode 100644
index 85dd2b56..00000000
--- a/docs/blog/2024-03-20-unclear-blog.md
+++ /dev/null
@@ -1,72 +0,0 @@
----
-title: "Navigating Ambiguity with Groundlight AI Detectors"
-description: Let's talk more about ambiguous image queries
-slug: dealing-with-unclear-images
-authors:
-  - name: Sharmila Reddy Nangi
-    title: Applied ML Scientist
-    image_url: https://a-us.storyblok.com/f/1015187/1000x1000/b66d1cddeb/nangis.jpg
-tags: [unclears, real-world ambiguity]
-image: ./images/unclear_blog/unclear_label.png
-hide_table_of_contents: false
----
-
-When you first explore the capabilities of our Groundlight AI detectors, you'll quickly notice that they excel at answering binary questions. These are queries expecting a straightforward "Yes" or "No" response. However, the world around us rarely conforms to such black-and-white distinctions, particularly when analyzing images. In reality, many scenarios present challenges that defy a simple binary answer.
-
-## Exploring the Gray Areas: Real-World Examples
-
-Consider the following scenarios that highlight the complexity of interpreting real-world images:
-
-1. **The Case of the Hidden Oven**: Imagine asking, "Is the oven light turned on?" only to find the view partially blocked by a person. With the contents on the other side hidden from view, providing a definitive "Yes" or "No" becomes impossible. Such instances are best described as "Unclear."
-[Figure: Oven is hidden from the camera view]
-
-2. **The Locked Garage Door Dilemma**: When faced with a query like, "Is the garage door locked?" accompanied by an image shrouded in darkness or blurred beyond recognition, identifying the status of the door lock is a challenge. In these circumstances, clarity eludes us, leaving us unable to confidently answer.
-[Figure: Dark images make it difficult to answer the query]
-
-3. **Irrelevant Imagery**: At times, the images presented may bear no relation to the question posed. These irrelevant scenes further underscore the limitations of binary responses in complex situations. For instance, responding to the question "Is there a black jacket on the coat hanger?" with the following image (that doesn't even include a coat hanger) exemplifies how such imagery can be off-topic and fail to address the query appropriately.
-[Figure: Images unrelated to the query lead to ambiguity]
-
-## Strategies for Navigating Ambiguity
-
-Although encountering unclear images might seem like a setback, it actually opens up avenues for improvement and customization. Our detectors are designed to identify and flag these ambiguous cases, empowering you to steer their interpretation. Here are some strategies you can employ to enhance the process:
-
-1. **Clarify your queries**: It's crucial to formulate your questions to the system with precision, avoiding any vagueness. For instance, instead of asking, “Is the light ON?” opt for a more detailed inquiry such as, “Can you clearly see the red LED on the right panel turned ON?” This approach ensures your queries are direct and specific.
-2. **Customize Yes/No classifications**: You can specify how the model should interpret and deal with unclear images by reframing your queries and notes. For instance, by specifying “If the garage door is not visible, mark it as a NO” in your notes, you can make the detector sort unclear images into the “NO” class. You can refer to our [previous blog post](https://code.groundlight.ai/python-sdk/blog/best-practices) for best practices while refining your queries and notes.
-3. **Flagging “Unclear” images**: Should you prefer to classify an obstructed view or irrelevant imagery as “Unclear”, simply add a couple of labels as “UNCLEAR” or provide instructions in the notes. Groundlight's machine learning systems will adapt to your preference and continue to flag them as "Unclear" for you.
-[Figure: Marking an image query as “Unclear” in the data review page]
-
-
-The strategies outlined above will significantly improve your ability to navigate through unclear
-scenarios. However, there exist many other situations, such as borderline classifications or cases where there's insufficient information for a definitive answer. Recognizing and managing the inherent uncertainty in these tasks is crucial as we progress. We are committed to building more tools that empower you to deal with these challenges.
-
diff --git a/docs/docusaurus.config.js b/docs/docusaurus.config.js
index 58cd2e0e..7c2ce6a5 100644
--- a/docs/docusaurus.config.js
+++ b/docs/docusaurus.config.js
@@ -222,10 +222,14 @@ const config = {
             to: "https://www.groundlight.ai/blog/groundlight-ai-achieves-soc-2-type-2-compliance", // new marketing site route
             from: "/blog/groundlight-ai-achieves-soc-2-type-2-compliance", // old blog route
           },
+          {
+            to: "https://www.groundlight.ai/blog/navigating-ambiguity-with-groundlight-ai-detectors", // new marketing site route
+            from: "/blog/dealing-with-unclear-images", // old blog route
+          },
         ],
       },
     ],
-  ],
+  ],
 };
 
 module.exports = config;
\ No newline at end of file
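For readers following the new redirect, the strategies in the removed post boil down to ordinary Groundlight Python SDK usage. Below is a minimal sketch, not part of this patch: the detector name, query wording, image path, wait time, and confidence threshold are illustrative assumptions, and submitting an "UNCLEAR" label through the SDK is flagged as an assumption in the comments.

```python
# Minimal sketch of the removed post's strategies, using the Groundlight Python SDK.
from groundlight import Groundlight

gl = Groundlight()  # reads GROUNDLIGHT_API_TOKEN from the environment

# Strategy 1: ask a precise, visually answerable question rather than a vague one.
detector = gl.get_or_create_detector(
    name="garage-door-locked",  # hypothetical detector name
    query="Can you clearly see the garage door latch in the locked position?",
    confidence_threshold=0.9,
)

# Strategy 2: notes such as "If the garage door is not visible, answer NO" are added
# alongside the query (for example, from the detector's page in the Groundlight web app).

# Submit a frame and wait up to 30 seconds for a confident answer.
iq = gl.submit_image_query(detector=detector, image="garage.jpg", wait=30)
print(iq.result.label)  # e.g. YES, NO, or UNCLEAR

# Strategy 3: a few human labels teach the system how you want obstructed or
# irrelevant frames handled. ("UNCLEAR" as an SDK label value is an assumption here;
# the post describes applying it from the data review page.)
gl.add_label(iq, "UNCLEAR")
```

The reworded query is strategy 1 in action: it asks about a specific, visible cue (the latch position) instead of an abstract state ("locked"), which lets the detector return a plain YES or NO more often and reserve "Unclear" for genuinely ambiguous frames.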