From 66672ab3575a59aafa1903d868158d5acd52b55c Mon Sep 17 00:00:00 2001
From: Michael Ilie
Date: Sun, 9 Jun 2024 11:28:02 -0400
Subject: [PATCH 1/6] cleaning up unused files and writing more docs

---
 Prompt_Systematic_Review_Dataset              |     1 -
 README.md                                     |    38 +-
 data/model_citation_counts.csv                |    29 -
 data/new_cleaned_merged_paper_references.json |  9832 ----------
 data/prompt_engineering_arxiv.csv             | 15116 ----------------
 data/prompt_engineering_reviewed.csv          |    21 -
 .../few_shot/vanilla_five-shot_STEM.txt       |    39 -
 .../few_shot/vanilla_five-shot_humanities.txt |    42 -
 .../few_shot/vanilla_five-shot_other.txt      |    47 -
 .../vanilla_five-shot_social_sciences.txt     |    39 -
 data/prompts/zero_shot/CoT_zero_shot.txt      |     1 -
 .../zero_shot/plan_and_solve_zero_shot.txt    |     1 -
 data/prompts/zero_shot/thread_of_thoughts.txt |     1 -
 .../prompts/zero_shot/vanilla_zero_shot_1.txt |     1 -
 .../prompts/zero_shot/vanilla_zero_shot_2.txt |     1 -
 .../prompts/zero_shot/vanilla_zero_shot_3.txt |     1 -
 .../semantic_scholar_4_to_6_.csv              |   192 -
 .../semantic_scholar_gpt_relevant.csv         |  1272 --
 ...c_scholar_human_review_papers_with_pdf.csv |   136 -
 ...cholar_human_review_papers_without_pdf.csv |    59 -
 .../semantic_scholar_papers_above_3.csv       |  1463 --
 ...antic_scholar_relevant_papers_with_pdf.csv |   946 -
 ...ic_scholar_relevant_papers_without_pdf.csv |   329 -
 data/semantic_scholar_data_cleaned.csv        |  4255 -----
 .../semantic_scholar_data_doubled_cleaned.csv |  3937 ----
 example.env                                   |     3 +
 requirements.txt                              |     7 +-
 setup.cfg                                     |    25 +-
 .../collect_papers.py                         |     6 +-
 .../get_papers/download_arxiv.py              |     2 +-
 30 files changed, 57 insertions(+), 37785 deletions(-)
 delete mode 160000 Prompt_Systematic_Review_Dataset
 delete mode 100644 data/model_citation_counts.csv
 delete mode 100644 data/new_cleaned_merged_paper_references.json
 delete mode 100644 data/prompt_engineering_arxiv.csv
 delete mode 100644 data/prompt_engineering_reviewed.csv
 delete mode 100644 data/prompts/few_shot/vanilla_five-shot_STEM.txt
 delete mode 100644 data/prompts/few_shot/vanilla_five-shot_humanities.txt
 delete mode 100644 data/prompts/few_shot/vanilla_five-shot_other.txt
 delete mode 100644 data/prompts/few_shot/vanilla_five-shot_social_sciences.txt
 delete mode 100644 data/prompts/zero_shot/CoT_zero_shot.txt
 delete mode 100644 data/prompts/zero_shot/plan_and_solve_zero_shot.txt
 delete mode 100644 data/prompts/zero_shot/thread_of_thoughts.txt
 delete mode 100644 data/prompts/zero_shot/vanilla_zero_shot_1.txt
 delete mode 100644 data/prompts/zero_shot/vanilla_zero_shot_2.txt
 delete mode 100644 data/prompts/zero_shot/vanilla_zero_shot_3.txt
 delete mode 100644 data/semantic_scholar_data/semantic_scholar_4_to_6_.csv
 delete mode 100644 data/semantic_scholar_data/semantic_scholar_gpt_relevant.csv
 delete mode 100644 data/semantic_scholar_data/semantic_scholar_human_review_papers_with_pdf.csv
 delete mode 100644 data/semantic_scholar_data/semantic_scholar_human_review_papers_without_pdf.csv
 delete mode 100644 data/semantic_scholar_data/semantic_scholar_papers_above_3.csv
 delete mode 100644 data/semantic_scholar_data/semantic_scholar_relevant_papers_with_pdf.csv
 delete mode 100644 data/semantic_scholar_data/semantic_scholar_relevant_papers_without_pdf.csv
 delete mode 100644 data/semantic_scholar_data_cleaned.csv
 delete mode 100644 data/semantic_scholar_data_doubled_cleaned.csv
 create mode 100644 example.env

diff --git a/Prompt_Systematic_Review_Dataset b/Prompt_Systematic_Review_Dataset
deleted file mode 160000
index 7d8eb4c..0000000
--- a/Prompt_Systematic_Review_Dataset
+++ /dev/null
@@ -1 +0,0 @@
-Subproject commit 7d8eb4c6a999db2999766af5a279449c20e688f5
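(The hunk for the newly created `example.env` is not included in this excerpt. Given the three keys the README changes below ask for, its three added lines are presumably placeholders along these lines; this is a sketch, not the verbatim file:)

OPENAI_API_KEY=sk-...
SEMANTIC_SCHOLAR_API_KEY=...
HF_TOKEN=...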
diff --git a/README.md b/README.md
index 290d3fa..2811d6d 100644
--- a/README.md
+++ b/README.md
@@ -4,18 +4,20 @@
 
 after cloning, run `pip install -r requirements.txt` from root
 
-## Set up API keys
+## Setting up API keys
 
 Make a file at root called `.env`.
 
-For HF: https://huggingface.co/docs/hub/security-tokens, also run `huggingface-cli login`
+For OpenAI: https://platform.openai.com/docs/quickstart
+For Hugging Face: https://huggingface.co/docs/hub/security-tokens, also run `huggingface-cli login`
+For Semantic Scholar: https://www.semanticscholar.org/product/api#api-key
 
-Put your key in like:
-
-`OPENAI_API_KEY=sk-...`
+Use the reference `example.env` file to fill in your API keys/tokens.
+`OPENAI_API_KEY=sk-...`
 `SEMANTIC_SCHOLAR_API_KEY=...`
 `HF_TOKEN=...`
 
+## Setting up keys for running tests
 Then to load the .env file, type:
 
 pip install pytest-dotenv
@@ -29,17 +31,29 @@
 env_files =
     .test.env
     .deploy.env
 
+## Structure of the Repository
+The script `main.py` calls the necessary functions to download all the papers, deduplicate and filter them, and then run all the experiments.
+
+The core of the repository is in `src/prompt_systematic_review`. The `config_data.py` script contains configuration options that are important for running experiments and saving time. You can see in `main.py` how some of these options are used.
+
+The source folder is divided into 4 main sections: 3 scripts (`automated_review.py`, `collect_papers.py`, `config_data.py`) that deal with collecting the data and running the automated review, the `utils` folder that contains utility functions used throughout the repository, the `get_papers` folder that contains the scripts to download the papers, and the `experiments` folder that contains the scripts to run the experiments.
+
+At the root, there is a `data` folder. It comes pre-loaded with some data that is used for the experiments; however, the bulk of the dataset can either be generated by running `main.py` or downloaded from Hugging Face. The results of the experiments are saved in `data/experiments_output`.
+
+Notably, the keywords used in the automated review/scraping process are in `src/prompt_systematic_review/utils/keywords.py`. Anyone who wishes to run the automated review can adjust these keywords to their liking in that file.
+
+## Running the code
+Running `main.py` will download the papers, run the automated review, and run the experiments.
+However, if you wish to save time and only run the experiments, you can download the data from Hugging Face and move the papers folder into the data folder (it should look like `data/papers/*.pdf`). Adjust `main.py` accordingly.
+
+Every experiment script has a `run_experiment` function that is called in `main.py`. The `run_experiment` function is responsible for running the experiment and saving the results. However, each script can also be run individually with `python src/prompt_systematic_review/experiments/.py` from root (see the sketch after this diff).
+
+
 ## blacklist.csv
-Papers do not include due to them being poorly written or AI generated (or simply irrelevant).
+Papers not to include due to being poorly written, AI generated, or simply irrelevant.
 
 ## Notes
 
 - Sometimes a paper title may appear differently on the arXiv API.
 For example, "Visual Attention-Prompted Prediction and Learning" (arXiv:2310.08420), according to arXiv API is titled "A visual encoding model based on deep neural networks and transfer learning"
-- When testing APIs, there may be latency and aborted connections
-
-- Publication dates of papers from IEEE are missing the day about half the time. They also may come in any of the following formats
-  - "April 1988"
-  - "2-4 April 2002"
-  - "29 Nov.-2 Dec. 2022"
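(For reference: the `env_files` lines in the hunk above belong to the pytest-dotenv plugin's configuration. The full stanza, e.g. in `setup.cfg` (which this patch also modifies), presumably looks like the sketch below; the `.env` entry is assumed from the "load the .env file" instruction, not shown in the hunk:)

[tool:pytest]
env_files =
    .env
    .test.env
    .deploy.env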
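(The `run_experiment` convention added to the README above can be illustrated with a minimal sketch. The experiment name, metric, and output filename here are hypothetical, not taken from the repository:)

# Minimal sketch of the per-experiment convention described in the README above.
# The metric and output filename are placeholders, not the repo's actual values.
import json
import os

OUTPUT_DIR = "data/experiments_output"  # where the README says results are saved


def run_experiment():
    """Run this experiment and save its results; main.py calls this function."""
    results = {"example_metric": 0.0}  # placeholder: real scripts compute real metrics
    os.makedirs(OUTPUT_DIR, exist_ok=True)
    with open(os.path.join(OUTPUT_DIR, "example_experiment.json"), "w") as f:
        json.dump(results, f, indent=2)


if __name__ == "__main__":
    # Lets the script be run on its own from root, e.g.
    # `python src/prompt_systematic_review/experiments/example_experiment.py`
    run_experiment()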
diff --git a/data/model_citation_counts.csv b/data/model_citation_counts.csv
deleted file mode 100644
index 07ab0df..0000000
--- a/data/model_citation_counts.csv
+++ /dev/null
@@ -1,29 +0,0 @@
-model_name,count,list_of_papers
-GPT-3,535,"['multiparty goal tracking with llms comparing pretraining, finetuning, and prompt engineering.pdf', 'selfpolish enhance reasoning in large language models via problem refinement.pdf', 'complementary explanations for effective incontext learning.pdf', 'plugmed improving specificity in patientcentered medical dialogue generation using incontext learning.pdf', 'multilingual llms are better crosslingual incontext learners with alignment.pdf', 'efficient open domain multihop question answering with fewshot data synthesis.pdf', 'coveragebased example selection for incontext learning.pdf', 'casteist but not racist quantifying disparities in large language model bias between india and the west.pdf', 'game of tones faculty detection of gpt4 generated content in university assessments.pdf', 'crosslingual retrieval augmented incontext learning for bangla.pdf', 'identifying and extracting rare disease phenotypes with large language models.pdf', 'leveraging large language models for exploiting asr uncertainty.pdf', 'a tale of pronouns interpretability informs gender bias mitigation for fairer instructiontuned machine translation.pdf', 'mathprompter mathematical reasoning using large language models.pdf', 'attack prompt generation for red teaming and defending large language models.pdf', 'automated fewshot classification with instructionfinetuned language models.pdf', 'text classification via large language models.pdf', 'covid vaccine is against covid but oxford vaccine is made at oxford!
semantic interpretation of proper noun compounds.pdf', 'autotrial prompting language models for clinical trial design.pdf', ""a wolf in sheep's clothing generalized nested jailbreak prompts can fool large language models easily.pdf"", 'prompt engineering for students of medicine and their teachers.pdf', 'measuring inductive biases of incontext learning with underspecified demonstrations.pdf', 'an empirical study on fewshot knowledge probing for pretrained language models.pdf', 'unlocking the potential of chatgpt a comprehensive exploration of its applications, advantages, limitations, and future directions in natural language processing.pdf', 'pretraining to learn in context.pdf', 'exploring the relationship between model architecture and incontext learning ability.pdf', 'developing prompts from large language model for extracting clinical information from pathology and ultrasound reports in breast cancer.pdf', 'can incontext learners learn a reasoning concept from demonstrations.pdf', 'towards unified prompt tuning for fewshot text classification.pdf', 'large language models are zeroshot rankers for recommender systems.pdf', 'affect recognition in conversations using large language models.pdf', 'winodict probing language models for incontext word acquisition.pdf', 'consprompt easily exploiting contrastive samples for fewshot prompt learning.pdf', 'large language models for failure mode classification an investigation.pdf', 'toward unified controllable text generation via regular expression instruction.pdf', 'structured prompting scaling incontext learning to 1,000 examples.pdf', 'retrievalaugmented code generation for universal information extraction.pdf', 'the utility of large language models and generative ai for education research.pdf', 'discern and answer mitigating the impact of misinformation in retrievalaugmented models with discriminators.pdf', 'beyond yes and no improving zeroshot llm rankers via scoring finegrained relevance labels.pdf', 'inducing anxiety in large language models increases exploration and bias.pdf', ""optimizing machine translation through prompt engineering an investigation into chatgpt's customizability.pdf"", 'udapdr unsupervised domain adaptation via llm prompting and distillation of rerankers.pdf', 'large language models can be used to effectively scale spear phishing campaigns.pdf', 'towards zerolabel language learning.pdf', 'sociocultural norm similarities and differences via situational alignment and explainable textual entailment.pdf', 'emotionconditioned text generation through automatic prompt optimization.pdf', 'what makes pretrained language models better zeroshot learners.pdf', 'the adaio system at the bea2023 shared task on generating ai teacher responses in educational dialogues.pdf', 'can large language models write good propertybased tests.pdf', 'building emotional support chatbots in the era of llms.pdf', 'llmlingua compressing prompts for accelerated inference of large language models.pdf', 'differentiable entailment for parameter efficient few shot learning.pdf', 'systematic rectification of language models via deadend analysis.pdf', 'zeroshot information extraction from radiological reports using chatgpt.pdf', 'multidimensional evaluation of text summarization with incontext learning.pdf', 'lowresource authorship style transfer can nonfamous authors be imitated.pdf', 'knowledge crosswords geometric reasoning over structured knowledge with large language models.pdf', 'unified demonstration retriever for incontext learning.pdf', 'aicopilot 
for business optimisation a framework and a case study in production scheduling.pdf', 'are humangenerated demonstrations necessary for incontext learning.pdf', 'prompting large language models with chainofthought for fewshot knowledge base question generation.pdf', 'gpt takes the bar exam.pdf', 'chatgpthealthprompt harnessing the power of xai in promptbased healthcare decision support using chatgpt.pdf', 'a languageagent approach to formal theoremproving.pdf', 'exploring the integration of large language models into automatic speech recognition systems an empirical study.pdf', 'reranking for natural language generation from logical forms a study based on large language models.pdf', 'autoplan automatic planning of interactive decisionmaking tasks with large language models.pdf', 'probing llms for hate speech detection strengths and vulnerabilities.pdf', 'learning to retrieve incontext examples for large language models.pdf', 'sqlprompt incontext texttosql with minimal labeled data.pdf', 'salmon selfalignment with principlefollowing reward models.pdf', 'sqlpalm improved large language model adaptation for texttosql.pdf', 'large language model prompt chaining for long legal document classification.pdf', 'neural machine translation models can learn to be fewshot learners.pdf', 'selfprompting large language models for zeroshot opendomain qa.pdf', 'large language models are stateoftheart evaluators of translation quality.pdf', 'llm4dv using large language models for hardware test stimuli generation.pdf', 'aligning language models to user opinions.pdf', 'using large language models to generate engaging captions for data visualizations.pdf', 'let me check the examples enhancing demonstration learning via explicit imitation.pdf', 'fireact toward language agent finetuning.pdf', 'boosted prompt ensembles for large language models.pdf', 'continual training of language models for fewshot learning.pdf', 'sentiment analysis in the era of large language models a reality check.pdf', 'do we still need clinical language models.pdf', 'how to prompt opportunities and challenges of zero and fewshot learning for humanai interaction in creative applications of generative models.pdf', 'boosting crosslingual transferability in multilingual models via incontext learning.pdf', 'getting sick after seeing a doctor diagnosing and mitigating knowledge conflicts in event temporal reasoning.pdf', 'metricbased incontext learning a case study in text simplification.pdf', 'are large language models ready for healthcare a comparative study on clinical language understanding.pdf', 'cotbert enhancing unsupervised sentence representation through chainofthought.pdf', 'incontext learning for knowledge base question answering for unmanned systems based on large language models.pdf', ""harnessing large language models' empathetic response generation capabilities for online mental health counselling support.pdf"", 'codeie large code generation models are better fewshot information extractors.pdf', 'annollm making large language models to be better crowdsourced annotators.pdf', 'adaptivesolver framework for dynamic strategy selection in large language model reasoning.pdf', 'a latent space theory for emergent abilities in large language models.pdf', 'epa easy prompt augmentation on large language models via multiple sources and multiple targets.pdf', 'towards using fewshot prompt learning for automating model completion.pdf', 'controlling personality style in dialogue with zeroshot promptbased learning.pdf', ""language models don't 
always say what they think unfaithful explanations in chainofthought prompting.pdf"", 'small models are valuable plugins for large language models.pdf', 'rtllm an opensource benchmark for design rtl generation with large language model.pdf', 'pair programming with large language models for sampling and estimation of copulas.pdf', 'dialogstudio towards richest and most diverse unified dataset collection for conversational ai.pdf', 'using chatgpt for entity matching.pdf', 'large language models as tax attorneys a case study in legal capabilities emergence.pdf', 'autonomous treesearch ability of large language models.pdf', 'evaluation of chatgpt family of models for biomedical reasoning and classification.pdf', 'towards making the most of chatgpt for machine translation.pdf', 'easynlp a comprehensive and easytouse toolkit for natural language processing.pdf', 'fewshot anaphora resolution in scientific protocols via mixtures of incontext experts.pdf', 'zerotop zeroshot taskoriented semantic parsing using large language models.pdf', 'conqx semantic expansion of spoken queries for intent detection based on conditioned text generation.pdf', 'boosting language models reasoning with chainofknowledge prompting.pdf', 'logicllm exploring selfsupervised logicenhanced training for large language models.pdf', 'benchmarking cognitive biases in large language models as evaluators.pdf', 'detecting hate speech with gpt3.pdf', 'simple llm prompting is stateoftheart for robust and multilingual dialogue evaluation.pdf', 'defending against alignmentbreaking attacks via robustly aligned llm.pdf', 'measuring and mitigating constraint violations of incontext learning for utterancetoapi semantic parsing.pdf', 'learning new tasks from a few examples with softlabel prototypes.pdf', 'towards explainable conversational recommender systems.pdf', 'prompting a large language model to generate diverse motivational messages a comparison with humanwritten messages.pdf', 'tree of clarifications answering ambiguous questions with retrievalaugmented large language models.pdf', 'enhancing incontext learning with answer feedback for multispan question answering.pdf', 'masterkey automated jailbreak across multiple large language model chatbots.pdf', 'xricl crosslingual retrievalaugmented incontext learning for crosslingual texttosql semantic parsing.pdf', 'explainable claim verification via knowledgegrounded reasoning with large language models.pdf', 'short answer grading using oneshot prompting and text similarity scoring model.pdf', 'chatrec towards interactive and explainable llmsaugmented recommender system.pdf', 'compositional semantic parsing with large language models.pdf', 'reasoning before responding integrating commonsensebased causality explanation for empathetic response generation.pdf', 'on bilingual lexicon induction with large language models.pdf', 'textbooks are all you need ii phi15 technical report.pdf', ""two timin' repairing smart contracts with a twolayered approach.pdf"", 'is chatgpt the ultimate programming assistant how far is it.pdf', 'large language models in fault localisation.pdf', 'incontext fewshot relation extraction via pretrained language models.pdf', 'time travel in llms tracing data contamination in large language models.pdf', 'wanglab at mediqachat 2023 clinical note generation from doctorpatient conversations using large language models.pdf', 'posqa probe the world models of llms with size comparisons.pdf', 'ul2 unifying language learning paradigms.pdf', 'boosting incontext learning with factual 
knowledge.pdf', 'apiassisted code generation for question answering on varied table structures.pdf', 'sentiment analysis through llm negotiations.pdf', 'oneshot labeling for automatic relevance estimation.pdf', 'chainforge a visual toolkit for prompt engineering and llm hypothesis testing.pdf', 'toxicity detection with generative promptbased inference.pdf', 'benchmarking large language model capabilities for conditional generation.pdf', 'query2doc query expansion with large language models.pdf', 'instanceaware prompt learning for language understanding and generation.pdf', 'meal stable and active learning for fewshot prompting.pdf', 'solving and generating npr sunday puzzles with large language models.pdf', 'tabllm fewshot classification of tabular data with large language models.pdf', 'llm4dyg can large language models solve problems on dynamic graphs.pdf', 'autodan generating stealthy jailbreak prompts on aligned large language models.pdf', 'noisy exemplars make large language models more robust a domainagnostic behavioral analysis.pdf', 'automatic data transformation using large language model an experimental study on building energy data.pdf', ""do large language models know what they don't know.pdf"", 'expertprompting instructing large language models to be distinguished experts.pdf', 'efficient blackbox adversarial attacks on neural text detectors.pdf', 'prompt engineering in medical education.pdf', 'are hard examples also harder to explain a study with human and modelgenerated explanations.pdf', 'the limits of chatgpt in extracting aspectcategoryopinionsentiment quadruples a comparative analysis.pdf', 'autoclip autotuning zeroshot classifiers for visionlanguage models.pdf', 'using natural language explanations to improve robustness of incontext learning for natural language inference.pdf', 'injecting a structural inductive bias into a seq2seq model by simulation.pdf', 'crosscodebench benchmarking crosstask generalization of source code models.pdf', 'do gpts produce less literal translations.pdf', 'ambiguityaware incontext learning with large language models.pdf', 'investigating the fairness of large language models for predictions on tabular data.pdf', 'beyond factuality a comprehensive evaluation of large language models as knowledge generators.pdf', 'towards legally enforceable hate speech detection for public forums.pdf', 'using incontext learning to improve dialogue safety.pdf', 'automated extraction and visualization of metabolic networks from biomedical literature using a large language model.pdf', 'user simulation with large language models for evaluating taskoriented dialogue.pdf', 'development of metaprompts for large language models to screen titles and abstracts for diagnostic test accuracy reviews.pdf', 'retrieving supporting evidence for generative question answering.pdf', 'true fewshot learning with language models.pdf', 'calibrating llmbased evaluator.pdf', 'true fewshot learning with prompts a realworld perspective.pdf', 'discrete prompt optimization via constrained generation for zeroshot reranker.pdf', 'rcot detecting and rectifying factual inconsistency in reasoning by reversing chainofthought.pdf', 'grips gradientfree, editbased instruction search for prompting large language models.pdf', 'robust prompt optimization for large language models against distribution shifts.pdf', 'prompts should not be seen as secrets systematically measuring prompt extraction attack success.pdf', 'review of large vision models and visual prompt engineering.pdf', 'the impact of 
symbolic representations on incontext learning for fewshot reasoning.pdf', 'metaincontext learning in large language models.pdf', 'unihd at tsar2022 shared task is compute all we need for lexical simplification.pdf', 'allure auditing and improving llmbased evaluation of text using iterative incontextlearning.pdf', 'incontext learning with iterative demonstration selection.pdf', 'calibrate before use improving fewshot performance of language models.pdf', 'knowledge graph completion models are fewshot learners an empirical study of relation labeling in ecommerce with llms.pdf', 'plum prompt learning using metaheuristic.pdf', 'humanintheloop machine translation with large language model.pdf', 'check your facts and try again improving large language models with external knowledge and automated feedback.pdf', 'extractive summarization via chatgpt for faithful summary generation.pdf', 'latent jailbreak a benchmark for evaluating text safety and output robustness of large language models.pdf', 'entity matching using large language models.pdf', 'product information extraction using chatgpt.pdf', 'ai chains transparent and controllable humanai interaction by chaining large language model prompts.pdf', 'language model cascades.pdf', 'askit unified programming interface for programming with large language models.pdf', 'towards fewshot identification of morality frames using incontext learning.pdf', 'autoconv automatically generating informationseeking conversations with large language models.pdf', 'chitchat or deep talk prompt engineering for process mining.pdf', 'zara improving fewshot selfrationalization for small language models.pdf', 'contrastive distillation is a sampleefficient selfsupervised loss policy for transfer learning.pdf', 'masakhanews news topic classification for african languages.pdf', 'an explanation of incontext learning as implicit bayesian inference.pdf', 'generative type inference for python.pdf', 's3 socialnetwork simulation system with large language modelempowered agents.pdf', 'chainofdictionary prompting elicits translation in large language models.pdf', 'generate rather than retrieve large language models are strong context generators.pdf', 'selective annotation makes language models better fewshot learners.pdf', 'mitigating label biases for incontext learning.pdf', 'pretrained tokenreplaced detection model as fewshot learner.pdf', 'language models are fewshot learners for prognostic prediction.pdf', 'unleashing the creative mind language model as hierarchical policy for improved exploration on challenging problem solving.pdf', 'genegpt augmenting large language models with domain tools for improved access to biomedical information.pdf', 'post hoc explanations of language models can improve language models.pdf', 'backdooring instructiontuned large language models with virtual prompt injection.pdf', 'gps genetic prompt search for efficient fewshot learning.pdf', 'using large language models for cybersecurity capturetheflag challenges and certification questions.pdf', 'leveraging large language models to generate answer set programs.pdf', 'a communication theory perspective on prompting engineering methods for large language models.pdf', 'a closer look at incontext learning under distribution shifts.pdf', 'connecting large language models with evolutionary algorithms yields powerful prompt optimizers.pdf', 'domain knowledge distillation from large language model an empirical study in the autonomous driving domain.pdf', 'a simple baseline for knowledgebased visual question 
answering.pdf', 'prompt programming for large language models beyond the fewshot paradigm.pdf', 'resources and fewshot learners for incontext learning in slavic languages.pdf', 'demonstrations of the potential of aibased political issue polling.pdf', 'large language models can implement policy iteration.pdf', 'divide and prompt chain of thought prompting for texttosql.pdf', 'global constraints with prompting for zeroshot event argument classification.pdf', ""what's in a measurement using gpt3 on semeval 2021 task 8 measeval.pdf"", 'chat2vis generating data visualisations via natural language using chatgpt, codex and gpt3 large language models.pdf', 'linking microblogging sentiments to stock price movement an application of gpt4.pdf', 'framing the newsfrom human perception to large language model inferences.pdf', 'codecot and beyond learning to program and test like a developer.pdf', 'improving fewshot generalization of safety classifiers via data augmented parameterefficient finetuning.pdf', 'a practical survey on zeroshot prompt design for incontext learning.pdf', 'chatgpt for robotics design principles and model abilities.pdf', 'how far are large language models from agents with theoryofmind.pdf', 'how to prompt llms for texttosql a study in zeroshot, singledomain, and crossdomain settings.pdf', 'dricl demonstrationretrieved incontext learning.pdf', 'legal prompting teaching a language model to think like a lawyer.pdf', 'multistage collaborative knowledge distillation from large language models.pdf', 'openicl an opensource framework for incontext learning.pdf', 'linguist language model instruction tuning to generate annotated utterances for intent classification and slot tagging.pdf', 'honest students from untrusted teachers learning an interpretable questionanswering pipeline from a pretrained language model.pdf', 'adversarial robustness of promptbased fewshot learning for natural language understanding.pdf', 'the mystery and fascination of llms a comprehensive survey on the interpretation and analysis of emergent abilities.pdf', 'large language modelaware incontext learning for code generation.pdf', 'incontext learning user simulators for taskoriented dialog systems.pdf', 'can gpt3 perform statutory reasoning.pdf', 'an empirical study on using large language models to analyze software supply chain security failures.pdf', 'benchmarking arabic ai with large language models.pdf', 'making large language models better data creators.pdf', 'incontext exemplars as clues to retrieving from large associative memory.pdf', 'how many demonstrations do you need for incontext learning.pdf', 'upar a kantianinspired prompting framework for enhancing large language model capabilities.pdf', 'large language models meet openworld intent discovery and recognition an evaluation of chatgpt.pdf', 'harnessing explanations llmtolm interpreter for enhanced textattributed graph representation learning.pdf', 'can large language models capture public opinion about global warming an empirical assessment of algorithmic fidelity and bias.pdf', 'fixing hardware security bugs with large language models.pdf', 'ten quick tips for harnessing the power of chatgptgpt4 in computational biology.pdf', 'selective demonstrations for crossdomain texttosql.pdf', 'batch prompting efficient inference with large language model apis.pdf', 'exploring effectiveness of gpt3 in grammatical error correction a study on performance and controllability in promptbased methods.pdf', 'can language models solve graph problems in natural 
language.pdf', 'choice over control how users write with large language models using diegetic and nondiegetic prompting.pdf', 'dcc help generating contextaware compiler error explanations with large language models.pdf', 'chatgpt as a mapping assistant a novel method to enrich maps with generative ai and content derived from streetlevel photographs.pdf', 'evaluating large language models on graphs performance insights and comparative analysis.pdf', 'generating training data with language models towards zeroshot language understanding.pdf', 'prompting large language models with the socratic method.pdf', 'atlas fewshot learning with retrieval augmented language models.pdf', 'abscribe rapid exploration of multiple writing variations in humanai cowriting tasks using large language models.pdf', 'algo synthesizing algorithmic programs with generated oracle verifiers.pdf', 'the end of the policy analyst testing the capability of artificial intelligence to generate plausible, persuasive, and useful policy analysis.pdf', 'chatgpt for arabic grammatical error correction.pdf', 'diverse demonstrations improve incontext compositional generalization.pdf', 'fewshot reranking for multihop qa via language model prompting.pdf', 'large language models for aspectbased sentiment analysis.pdf', 'mastering the task of open information extraction with large language models and consistent reasoning environment.pdf', 'humans in humans out on gpt converging toward common sense in both success and failure.pdf', 'cohortgpt an enhanced gpt for participant recruitment in clinical study.pdf', 'zicl zeroshot incontext learning with pseudodemonstrations.pdf', 'retrievalaugmented gpt35based texttosql framework with sampleaware prompting and dynamic revision chain.pdf', 'unveiling the potential of large language models in generating semantic and crosslanguage clones.pdf', 'towards zeroshot and fewshot table question answering using gpt3.pdf', 'adaptive machine translation with large language models.pdf', 'a search for prompts generating structured answers from contracts.pdf', 'are large language models post hoc explainers.pdf', 'fewshot queryfocused summarization with prefixmerging.pdf', 'the scope of incontext learning for the extraction of medical temporal constraints.pdf', 'continuous prompt tuning based textual entailment model for ecommerce entity typing.pdf', 'fewshot training llms for projectspecific codesummarization.pdf', 'instruction distillation makes large language models efficient zeroshot rankers.pdf', 'llmebench a flexible framework for accelerating llms benchmarking.pdf', 'the unreliability of explanations in fewshot prompting for textual reasoning.pdf', 'selfcheckgpt zeroresource blackbox hallucination detection for generative large language models.pdf', 'grammar prompting for domainspecific language generation with large language models.pdf', 'a chat about boring problems studying gptbased text normalization.pdf', 'retrievalaugmented generation to improve math questionanswering tradeoffs between groundedness and human preference.pdf', 'llmintheloop leveraging large language model for thematic analysis.pdf', 'mondrian prompt abstraction attack against large language models for cheaper api pricing.pdf', 'large language models and prompt engineering for biomedical query focused multidocument summarisation.pdf', 'enhancing small medical learners with privacypreserving contextual prompting.pdf', 'what incontext learning learns incontext disentangling task recognition and task learning.pdf', 'qualifying chinese 
medical licensing examination with knowledge enhanced generative pretraining model.pdf', 'boosting theoryofmind performance in large language models via prompting.pdf', 'mixture of soft prompts for controllable data generation.pdf', 'does correction remain a problem for large language models.pdf', 'unleashing the potential of prompt engineering in large language models a comprehensive review.pdf', 'prompt injection attacks and defenses in llmintegrated applications.pdf', 'large language models for propaganda detection.pdf', 'gptclonebench a comprehensive benchmark of semantic clones and crosslanguage clones using gpt3 model and semanticclonebench.pdf', 'ccprompt counterfactual contrastive prompttuning for manyclass classification.pdf', 'studenteval a benchmark of studentwritten prompts for large language models of code.pdf', 'semantic parsing by large language models for intricate updating strategies of zeroshot dialogue state tracking.pdf', 'joint foundation model caching and inference of generative ai services for edge intelligence.pdf', 'can language models learn from explanations in context.pdf', 'automatic multilabel prompting simple and interpretable fewshot classification.pdf', 'ensembleinstruct generating instructiontuning data with a heterogeneous mixture of lms.pdf', 'exploring parameterefficient finetuning techniques for code generation with large language models.pdf', 'cins comprehensive instruction for fewshot learning in taskoriented dialog systems.pdf', 'an exploration of incontext learning for speech language model.pdf', 'street a multitask structured reasoning and explanation benchmark.pdf', 'booookscore a systematic exploration of booklength summarization in the era of llms.pdf', 'corrpus codebased structured prompting for neurosymbolic story understanding.pdf', 'a systematic survey of prompt engineering on visionlanguage foundation models.pdf', 'chainofthought prompting for responding to indepth dialogue questions with llm.pdf', 'a theory of emergent incontext learning as implicit structure induction.pdf', 'towards informative fewshot prompt with maximum information gain for incontext learning.pdf', 'leveraging pretrained language models for conversational information seeking from text.pdf', 'optr exploring the role of explanations in finetuning and prompting for reasoning skills of large language models.pdf', 'a unified framework for multiintent spoken language understanding with prompting.pdf', 'an empirical evaluation of prompting strategies for large language models in zeroshot clinical natural language processing.pdf', 'zeroshot approach to overcome perturbation sensitivity of prompts.pdf', 'teler a general taxonomy of llm prompts for benchmarking complex tasks.pdf', 'selfprompted chainofthought on large language models for opendomain multihop reasoning.pdf', 'detecting natural language biases with promptbased learning.pdf', 'toolkengpt augmenting frozen language models with massive tools via tool embeddings.pdf', 'hypothesis search inductive reasoning with language models.pdf', 'adaprompt adaptive model training for promptbased nlp.pdf', 'adelt transpilation between deep learning frameworks.pdf', 'how to unleash the power of large language models for fewshot relation extraction.pdf', 'list lite prompted selftraining makes parameterefficient fewshot learners.pdf', 'rationaleaugmented ensembles in language models.pdf', 'incontext instruction learning.pdf', 'how to design translation prompts for chatgpt an empirical study.pdf', 'extracting accurate materials data from 
research papers with conversational language models and prompt engineering.pdf', 'language quantized autoencoders towards unsupervised textimage alignment.pdf', 'unsupervised human activity recognition through twostage prompting with chatgpt.pdf', 'better patching using llm prompting, via selfconsistency.pdf', 'events realm event reasoning of entity states via language models.pdf', 'blackbox prompt optimization aligning large language models without model training.pdf', 'distractor generation for multiplechoice questions with predictive prompting and large language models.pdf', 'the formai dataset generative ai in software security through the lens of formal verification.pdf', 'glam efficient scaling of language models with mixtureofexperts.pdf', 'learning incontext learning for named entity recognition.pdf', 'narrowing the gap between zero and fewshot machine translation by matching styles.pdf', 'mgpt fewshot learners go multilingual.pdf', 'will it blend mixing training paradigms & prompting for argument quality prediction.pdf', 'exnet efficient incontext learning for dataless text classification.pdf', 'large language models in the workplace a case study on prompt engineering for job type classification.pdf', 'how good are commercial large language models on african languages.pdf', 'small language models improve giants by rewriting their outputs.pdf', 'prompt to be consistent is better than selfconsistent fewshot and zeroshot fact verification with pretrained language models.pdf', 'hicl hashtagdriven incontext learning for social media natural language understanding.pdf', 'memoryefficient finetuning of compressed large language models via sub4bit integer quantization.pdf', 'generating medicallyaccurate summaries of patientprovider dialogue a multistage approach using large language models.pdf', 'improving open information extraction with large language models a study on demonstration uncertainty.pdf', 'generating efficient training data via llmbased attribute manipulation.pdf', 'a minimalist dataset for systematic generalization of perception, syntax, and semantics.pdf', 'contextfaithful prompting for large language models.pdf', 'harnessing the power of large language models for empathetic response generation empirical investigations and improvements.pdf', 'art automatic multistep reasoning and tooluse for large language models.pdf', 'enhancing clip with gpt4 harnessing visual descriptions as prompts.pdf', 'finetuning language models with just forward passes.pdf', 'promptda labelguided data augmentation for promptbased fewshot learners.pdf', 'limits of an ai program for solving college math problems.pdf', 'lfpt5 a unified framework for lifelong fewshot language learning based on prompt tuning of t5.pdf', 'stt soft template tuning for fewshot adaptation.pdf', 'spec a soft promptbased calibration on mitigating performance variability in clinical notes summarization.pdf', 'cheapfake detection with llm using prompt engineering.pdf', 'efficient prompting via dynamic incontext learning.pdf', 'rethinking the role of demonstrations what makes incontext learning work.pdf', 'multistage large language model correction for speech recognition.pdf', 'prompt injection attack against llmintegrated applications.pdf', 'purr efficiently editing language model hallucinations by denoising language model corruptions.pdf', 'unraveling chatgpt a critical analysis of aigenerated goaloriented dialogues and annotations.pdf', 'leveraging training data in fewshot prompting for numerical reasoning.pdf', 'flex unifying 
evaluation for fewshot nlp.pdf', 'large language models are biased to overestimate profoundness.pdf', 'ecologically valid explanations for label variation in nli.pdf', 'prototypical verbalizer for promptbased fewshot tuning.pdf', ""exploring generative ai assisted feedback writing for students' written responses to a physics conceptual question with prompt engineering and fewshot learning.pdf"", 'towards llmbased fact verification on news claims with a hierarchical stepbystep prompting method.pdf', 'legal prompt engineering for multilingual legal judgement prediction.pdf', 'automatic prompt optimization with gradient descent and beam search.pdf', 'what makes good incontext examples for gpt$3$.pdf', 'are chatbots ready for privacysensitive applications an investigation into input regurgitation and promptinduced sanitization.pdf', 'unified lowresource sequence labeling by sampleaware dynamic sparse finetuning.pdf', 'can we edit factual knowledge by incontext learning.pdf', 'selficl zeroshot incontext learning with selfgenerated demonstrations.pdf', 'stprompt semanticguided and taskdriven prompts for effective fewshot classification.pdf', 'tram benchmarking temporal reasoning for large language models.pdf', 'stabilized incontext learning with pretrained language models for few shot dialogue state tracking.pdf', 'evaluating the instructionfollowing robustness of large language models to prompt injection.pdf', 'prompt engineering or fine tuning an empirical assessment of large language models in automated software engineering tasks.pdf', 'instructed language models with retrievers are powerful entity linkers.pdf', 'from web catalogs to google a retrospective study of web search engines sustainable development.pdf', 'language models are weak learners.pdf', 'fixing rust compilation errors using llms.pdf', 'right to be forgotten in the era of large language models implications, challenges, and solutions.pdf', 'understanding the effectiveness of very large language models on dialog evaluation.pdf', 'can ai moderate online communities.pdf', 'impressiongpt an iterative optimizing framework for radiology report summarization with chatgpt.pdf', 'chatgpt4pcg competition characterlike level generation for science birds.pdf', 'what do llms know about financial markets a case study on reddit market sentiment analysis.pdf', 'artificial intelligence for health message generation theory, method, and an empirical study using prompt engineering.pdf', 'dspy compiling declarative language model calls into selfimproving pipelines.pdf', 'crowd score a method for the evaluation of jokes using large language model ai voters as judges.pdf', 'zeroshot learning with minimum instruction to extract social determinants and family history from clinical notes using gpt model.pdf', 'automating governing knowledge commons and contextual integrity (gkcci) privacy policy annotations with large language models.pdf', 'a lightweight framework for highquality code generation.pdf', 'contextual stance classification using prompt engineering.pdf', 'marked personas using natural language prompts to measure stereotypes in language models.pdf', 'understanding stereotypes in language models towards robust measurement and zeroshot debiasing.pdf', 'explicit knowledge transfer for weaklysupervised code generation.pdf', 'pearl prompting large language models to plan and execute actions over long documents.pdf', 'llm4vv developing llmdriven testsuite for compiler validation.pdf', 'lmcanvas objectoriented interaction to personalize large 
language modelpowered writing environments.pdf', 'actsql incontext learning for texttosql with automaticallygenerated chainofthought.pdf', 'the potential and pitfalls of using a large language model such as chatgpt or gpt4 as a clinical assistant.pdf', 'do emergent abilities exist in quantized large language models an empirical study.pdf', 'alexatm 20b fewshot learning using a largescale multilingual seq2seq model.pdf', 'active example selection for incontext learning.pdf', 'making large language models better reasoners with stepaware verifier.pdf', 'gembamqm detecting translation quality error spans with gpt4.pdf', 'adaplanner adaptive planning from feedback with language models.pdf', 'diverse retrievalaugmented incontext learning for dialogue state tracking.pdf', 'tempera testtime prompting via reinforcement learning.pdf', 'chils zeroshot image classification with hierarchical label sets.pdf', 's$^3$hqa a threestage approach for multihop texttable hybrid question answering.pdf', 'multilevel finetuning, data augmentation, and fewshot learning for specialized cyber threat intelligence.pdf', 'a survey of large language models for autonomous driving.pdf', 'learning performanceimproving code edits.pdf', 'selfexplanation prompting improves dialogue understanding in large language models.pdf', 'how understanding large language models can inform their use in physics education.pdf', 'looking for a handsome carpenter! debiasing gpt3 job advertisements.pdf', 'folio natural language reasoning with firstorder logic.pdf', 'cocomo computational consciousness modeling for generative and ethical ai.pdf', 'selfgenerated incontext learning leveraging autoregressive language models as a demonstration generator.pdf', 'dail data augmentation for incontext learning via selfparaphrase.pdf', 'can large language models design accurate label functions.pdf', 'sensitivity and robustness of large language models to prompt template in japanese text classification tasks.pdf', 'stance detection with supervised, zeroshot, and fewshot applications.pdf', 'what makes good incontext demonstrations for code intelligence tasks with llms.pdf', 'retrieverewriteanswer a kgtotext enhanced llms framework for knowledge graph question answering.pdf', 'can chatgpt detect intent evaluating large language models for spoken language understanding.pdf', 'instruction induction from few examples to natural language task descriptions.pdf', 'prompt2model generating deployable models from natural language instructions.pdf', 'not all demonstration examples are equally beneficial reweighting demonstration examples for incontext learning.pdf', 'healthprompt a zeroshot learning paradigm for clinical natural language processing.pdf', 'how does prompt engineering affect chatgpt performance on unsupervised entity resolution.pdf', 'large language models as sous chefs revising recipes with gpt3.pdf', 'larger language models do incontext learning differently.pdf', 'robut a systematic study of table qa robustness against humanannotated adversarial perturbations.pdf', 'make a choice! 
knowledge base question answering with incontext learning.pdf', 'causallm is not optimal for incontext learning.pdf', 'chatgpt for plcdcs control logic generation.pdf', 'knowledgeprompted estimator a novel approach to explainable machine translation assessment.pdf', 'toward reproducing network research results using large language models.pdf', 'jailbreaking chatgpt via prompt engineering an empirical study.pdf', 'large language models are zeroshot reasoners.pdf', 'cyclealign iterative distillation from blackbox llm to whitebox models for better human alignment.pdf', 'mindwatch a smart cloudbased ai solution for suicide ideation detection leveraging large language models.pdf', 'promptandrerank a method for zeroshot and fewshot arbitrary textual style transfer with small language models.pdf', 'llmaugmented preference learning from natural language.pdf', 'a benchmark for learning to translate a new language from one grammar book.pdf', 'argumentative stance prediction an exploratory study on multimodality and fewshot learning.pdf', 'scalable approach to medical wearable postmarket surveillance.pdf', 'raft a realworld fewshot text classification benchmark.pdf', 'prompt position really matters in fewshot and zeroshot nlu tasks.pdf', 'ideal influencedriven selective annotations empower incontext learners in large language models.pdf', 'terminologyaware translation with constrained decoding and large language model prompting.pdf', 'a multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity.pdf', 'promptbased length controlled generation with reinforcement learning.pdf', 'text2cohort democratizing the nci imaging data commons with natural language cohort discovery.pdf', 'understanding how model size affects fewshot instruction prompting.pdf', 'an incontext schema understanding method for knowledge base question answering.pdf', 'a new dataset and empirical study for sentence simplification in chinese.pdf', 'a promptbased fewshot learning approach to software conflict detection.pdf', 'developing a scalable benchmark for assessing large language models in knowledge graph engineering.pdf', 'incontext learning with many demonstration examples.pdf', ""impossible triangle what's next for pretrained language models.pdf"", 'transformers are efficient incontext estimators for wireless communication.pdf', 'just tell me prompt engineering in business process management.pdf', 'lpml llmprompting markup language for mathematical reasoning.pdf', 'can chatgpt understand causal language in science claims.pdf', 'large language models as data preprocessors.pdf', 'jurassic is (almost) all you need fewshot meaningtotext generation for opendomain dialogue.pdf', 'metareasoning semanticssymbol deconstruction for large language models.pdf', 'evaluating llms for privilegeescalation scenarios.pdf', 'prompts matter insights and strategies for prompt engineering in automated software traceability.pdf', 'fill in the blank exploring and enhancing llm capabilities for backward reasoning in math word problems.pdf', 'fewshot adaptation for parsing contextual utterances with llms.pdf', 'sweeping heterogeneity with smart mops mixture of prompts for llm task adaptation.pdf', 'prompting palm for translation assessing strategies and performance.pdf', 'gptfinre incontext learning for financial relation extraction using large language models.pdf', 'automatic chain of thought prompting in large language models.pdf', 'accelerated materials language processing enabled by gpt.pdf', 'gpt is 
becoming a turing machine here are some ways to program it.pdf', 'the devil is in the errors leveraging large language models for finegrained machine translation evaluation.pdf', 'evaluation of gpt35 and gpt4 for supporting realworld information needs in healthcare delivery.pdf', 'the learnability of incontext learning.pdf', 'cosmic data efficient instructiontuning for speech incontext learning.pdf', 'omniscientdb a large language modelaugmented dbms that knows what other dbmss do not know.pdf', 'menucraft interactive menu system design with large language models.pdf', 'dissecting incontext learning of translations in gpts.pdf', 'breaking the bank with chatgpt fewshot text classification for finance.pdf', 'can large language models truly understand prompts a case study with negated prompts.pdf', 'dynamar dynamic prompt with mask token representation.pdf', 'stress testing chainofthought prompting for large language models.pdf', 'echoprompt instructing the model to rephrase queries for improved incontext learning.pdf', 'mixpro simple yet effective data augmentation for promptbased learning.pdf', 'coaudit tools to help humans doublecheck aigenerated content.pdf']" -GPT-4,257,"['multiparty goal tracking with llms comparing pretraining, finetuning, and prompt engineering.pdf', 'game of tones faculty detection of gpt4 generated content in university assessments.pdf', 'identifying and extracting rare disease phenotypes with large language models.pdf', 'leveraging large language models for exploiting asr uncertainty.pdf', 'jailbreaking gpt4v via selfadversarial attacks with system prompts.pdf', 'boosting logical reasoning in large language models through a new framework the graph of thought.pdf', ""a wolf in sheep's clothing generalized nested jailbreak prompts can fool large language models easily.pdf"", 'prompt engineering for students of medicine and their teachers.pdf', 'unlocking the potential of chatgpt a comprehensive exploration of its applications, advantages, limitations, and future directions in natural language processing.pdf', 'exploring the relationship between model architecture and incontext learning ability.pdf', 'developing prompts from large language model for extracting clinical information from pathology and ultrasound reports in breast cancer.pdf', 'large language models are zeroshot rankers for recommender systems.pdf', 'affect recognition in conversations using large language models.pdf', 'large language models for failure mode classification an investigation.pdf', 'a chain of aibased solutions for resolving fqns and fixing syntax errors in partial code.pdf', 'the utility of large language models and generative ai for education research.pdf', 'beyond yes and no improving zeroshot llm rankers via scoring finegrained relevance labels.pdf', ""optimizing machine translation through prompt engineering an investigation into chatgpt's customizability.pdf"", 'large language models can be used to effectively scale spear phishing campaigns.pdf', 'sociocultural norm similarities and differences via situational alignment and explainable textual entailment.pdf', 'can large language models write good propertybased tests.pdf', 'can large language models be good path planners a benchmark and investigation on spatialtemporal reasoning.pdf', 'llmlingua compressing prompts for accelerated inference of large language models.pdf', 'large language models can accomplish business process management tasks.pdf', 'knowledge crosswords geometric reasoning over structured knowledge with large language 
models.pdf', 'aicopilot for business optimisation a framework and a case study in production scheduling.pdf', 'gpt takes the bar exam.pdf', 'chatgpthealthprompt harnessing the power of xai in promptbased healthcare decision support using chatgpt.pdf', 'a languageagent approach to formal theoremproving.pdf', 'exploring the integration of large language models into automatic speech recognition systems an empirical study.pdf', 'autoplan automatic planning of interactive decisionmaking tasks with large language models.pdf', 'salmon selfalignment with principlefollowing reward models.pdf', 'sqlpalm improved large language model adaptation for texttosql.pdf', 'large language model prompt chaining for long legal document classification.pdf', 'selfprompting large language models for zeroshot opendomain qa.pdf', 'large language models are stateoftheart evaluators of translation quality.pdf', 'fireact toward language agent finetuning.pdf', 'sentiment analysis in the era of large language models a reality check.pdf', ""don't stop pretraining make promptbased finetuning powerful learner.pdf"", 'are large language models ready for healthcare a comparative study on clinical language understanding.pdf', ""harnessing large language models' empathetic response generation capabilities for online mental health counselling support.pdf"", 'adaptivesolver framework for dynamic strategy selection in large language model reasoning.pdf', 'small models are valuable plugins for large language models.pdf', 'rtllm an opensource benchmark for design rtl generation with large language model.pdf', 'pair programming with large language models for sampling and estimation of copulas.pdf', 'large language models as tax attorneys a case study in legal capabilities emergence.pdf', 'autonomous treesearch ability of large language models.pdf', 'evaluation of chatgpt family of models for biomedical reasoning and classification.pdf', 'interact exploring the potentials of chatgpt as a cooperative agent.pdf', 'logicllm exploring selfsupervised logicenhanced training for large language models.pdf', 'alpacafarm a simulation framework for methods that learn from human feedback.pdf', 'benchmarking cognitive biases in large language models as evaluators.pdf', 'simple llm prompting is stateoftheart for robust and multilingual dialogue evaluation.pdf', 'defending against alignmentbreaking attacks via robustly aligned llm.pdf', 'exploring automated distractor and feedback generation for math multiplechoice questions via incontext learning.pdf', 'prompting a large language model to generate diverse motivational messages a comparison with humanwritten messages.pdf', 'little giants exploring the potential of small llms as evaluation metrics in summarization in the eval4nlp 2023 shared task.pdf', 'enhancing incontext learning with answer feedback for multispan question answering.pdf', 'masterkey automated jailbreak across multiple large language model chatbots.pdf', 'promptbased extraction of social determinants of health using fewshot learning.pdf', 'textbooks are all you need ii phi15 technical report.pdf', 'blsp bootstrapping languagespeech pretraining via behavior alignment of continuation writing.pdf', 'selfevolve a code evolution framework via large language models.pdf', 'is chatgpt the ultimate programming assistant how far is it.pdf', 'large language models in fault localisation.pdf', 'incontext fewshot relation extraction via pretrained language models.pdf', 'time travel in llms tracing data contamination in large language models.pdf', 
'wanglab at mediqachat 2023 clinical note generation from doctorpatient conversations using large language models.pdf', 'sentiment analysis through llm negotiations.pdf', 'dialogue for prompting a policygradientbased discrete prompt optimization for fewshot learning.pdf', 'chainforge a visual toolkit for prompt engineering and llm hypothesis testing.pdf', 'query2doc query expansion with large language models.pdf', 'zeroshot temporal relation extraction with chatgpt.pdf', 'zero and fewshot prompting with llms a comparative study with finetuned models for bangla sentiment analysis.pdf', 'bytesized32 a corpus and challenge task for generating taskspecific world models expressed as text games.pdf', 'i was blind but now i see implementing visionenabled dialogue in social robots.pdf', 'autodan generating stealthy jailbreak prompts on aligned large language models.pdf', 'automatic data transformation using large language model an experimental study on building energy data.pdf', ""do large language models know what they don't know.pdf"", 'prompt engineering in medical education.pdf', 'the limits of chatgpt in extracting aspectcategoryopinionsentiment quadruples a comparative analysis.pdf', 'beyond factuality a comprehensive evaluation of large language models as knowledge generators.pdf', 'do physicians know how to prompt the need for automatic prompt optimization help in clinical note generation.pdf', 'towards legally enforceable hate speech detection for public forums.pdf', 'using incontext learning to improve dialogue safety.pdf', 'automated extraction and visualization of metabolic networks from biomedical literature using a large language model.pdf', 'development of metaprompts for large language models to screen titles and abstracts for diagnostic test accuracy reviews.pdf', 'calibrating llmbased evaluator.pdf', 'prompts should not be seen as secrets systematically measuring prompt extraction attack success.pdf', 'review of large vision models and visual prompt engineering.pdf', 'a prefrontal cortexinspired architecture for planning in large language models.pdf', 'exploring automatic evaluation methods based on a decoderbased llm for text generation.pdf', 'selfplanning code generation with large language models.pdf', 'metaincontext learning in large language models.pdf', 'is chatgpt a good recommender a preliminary study.pdf', 'allure auditing and improving llmbased evaluation of text using iterative incontextlearning.pdf', 'incontext learning with iterative demonstration selection.pdf', 'inductivebias learning generating code models with large language model.pdf', 'multilingual mathematical autoformalization.pdf', 'entity matching using large language models.pdf', 'askit unified programming interface for programming with large language models.pdf', 'chitchat or deep talk prompt engineering for process mining.pdf', 'masakhanews news topic classification for african languages.pdf', 'camoscio an italian instructiontuned llama.pdf', 'autohint automatic prompt optimization with hint generation.pdf', 'revisiting prompt engineering via declarative crowdsourcing.pdf', 'enable language models to implicitly learn selfimprovement from data.pdf', 'unleashing the creative mind language model as hierarchical policy for improved exploration on challenging problem solving.pdf', 'genegpt augmenting large language models with domain tools for improved access to biomedical information.pdf', 'backdooring instructiontuned large language models with virtual prompt injection.pdf', 'democratizing llms for 
lowresource languages by leveraging their english dominant abilities with linguisticallydiverse prompts.pdf', 'leveraging large language models to generate answer set programs.pdf', 'a communication theory perspective on prompting engineering methods for large language models.pdf', 'a closer look at incontext learning under distribution shifts.pdf', 'connecting large language models with evolutionary algorithms yields powerful prompt optimizers.pdf', 'do llms possess a personality making the mbti test an amazing evaluation for large language models.pdf', 'linking microblogging sentiments to stock price movement an application of gpt4.pdf', 'codecot and beyond learning to program and test like a developer.pdf', 'how far are large language models from agents with theoryofmind.pdf', 'comparative analysis of gpt4 and human graders in evaluating praise given to students in synthetic dialogues.pdf', 'the mystery and fascination of llms a comprehensive survey on the interpretation and analysis of emergent abilities.pdf', 'incontext learning user simulators for taskoriented dialog systems.pdf', 'can gpt3 perform statutory reasoning.pdf', 'an empirical study on using large language models to analyze software supply chain security failures.pdf', 'benchmarking arabic ai with large language models.pdf', 'simulating hp lovecraft horror literature with the chatgpt large language model.pdf', 'mededit model editing for medical question answering with external knowledge bases.pdf', 'upar a kantianinspired prompting framework for enhancing large language model capabilities.pdf', 'prompt engineering through the lens of optimal control.pdf', 'can large language models capture public opinion about global warming an empirical assessment of algorithmic fidelity and bias.pdf', 'ten quick tips for harnessing the power of chatgptgpt4 in computational biology.pdf', 'selective demonstrations for crossdomain texttosql.pdf', 'promptengineering and transformerbased question generation and evaluation.pdf', 'batch prompting efficient inference with large language model apis.pdf', 'exploring effectiveness of gpt3 in grammatical error correction a study on performance and controllability in promptbased methods.pdf', 'can language models solve graph problems in natural language.pdf', 'evaluating large language models on graphs performance insights and comparative analysis.pdf', 'comparative analysis of gpt4 and human graders in evaluating human tutors giving praise to students.pdf', 'abscribe rapid exploration of multiple writing variations in humanai cowriting tasks using large language models.pdf', 'algo synthesizing algorithmic programs with generated oracle verifiers.pdf', 'chatgpt for arabic grammatical error correction.pdf', 'large language models for aspectbased sentiment analysis.pdf', 'humans in humans out on gpt converging toward common sense in both success and failure.pdf', 'cohortgpt an enhanced gpt for participant recruitment in clinical study.pdf', 'diagnosing infeasible optimization problems using large language models.pdf', 'retrievalaugmented gpt35based texttosql framework with sampleaware prompting and dynamic revision chain.pdf', 'adaptive machine translation with large language models.pdf', 'a search for prompts generating structured answers from contracts.pdf', 'datadriven approach for formalitysensitive machine translation languagespecific handling and synthetic data generation.pdf', 'are large language models post hoc explainers.pdf', 'instruction distillation makes large language models efficient 
zeroshot rankers.pdf', 'llmebench a flexible framework for accelerating llms benchmarking.pdf', 'grammar prompting for domainspecific language generation with large language models.pdf', 'a chat about boring problems studying gptbased text normalization.pdf', 'retrievalaugmented generation to improve math questionanswering tradeoffs between groundedness and human preference.pdf', 'llmintheloop leveraging large language model for thematic analysis.pdf', 'llmfuncmapper function identification for interpreting complex clauses in building codes via llm.pdf', 'large language models and prompt engineering for biomedical query focused multidocument summarisation.pdf', 'qualifying chinese medical licensing examination with knowledge enhanced generative pretraining model.pdf', 'boosting theoryofmind performance in large language models via prompting.pdf', 'mixture of soft prompts for controllable data generation.pdf', 'unleashing the potential of prompt engineering in large language models a comprehensive review.pdf', 'prompt injection attacks and defenses in llmintegrated applications.pdf', 'large language models for propaganda detection.pdf', 'joint foundation model caching and inference of generative ai services for edge intelligence.pdf', 'ensembleinstruct generating instructiontuning data with a heterogeneous mixture of lms.pdf', 'exploring parameterefficient finetuning techniques for code generation with large language models.pdf', 'booookscore a systematic exploration of booklength summarization in the era of llms.pdf', 'corrpus codebased structured prompting for neurosymbolic story understanding.pdf', 'a systematic survey of prompt engineering on visionlanguage foundation models.pdf', 'augmented embeddings for custom retrievals.pdf', 'selfprompted chainofthought on large language models for opendomain multihop reasoning.pdf', 'toolkengpt augmenting frozen language models with massive tools via tool embeddings.pdf', 'hypothesis search inductive reasoning with language models.pdf', 'how to unleash the power of large language models for fewshot relation extraction.pdf', 'extracting accurate materials data from research papers with conversational language models and prompt engineering.pdf', 'unsupervised human activity recognition through twostage prompting with chatgpt.pdf', 'blackbox prompt optimization aligning large language models without model training.pdf', 'distractor generation for multiplechoice questions with predictive prompting and large language models.pdf', 'the formai dataset generative ai in software security through the lens of formal verification.pdf', 'large language models in the workplace a case study on prompt engineering for job type classification.pdf', 'protect your prompts protocols for ip protection in llm applications.pdf', 'generating efficient training data via llmbased attribute manipulation.pdf', 'harnessing the power of large language models for empathetic response generation empirical investigations and improvements.pdf', 'enhancing clip with gpt4 harnessing visual descriptions as prompts.pdf', 'adversarial demonstration attacks on large language models.pdf', 'baseline defenses for adversarial attacks against aligned language models.pdf', 'spec a soft promptbased calibration on mitigating performance variability in clinical notes summarization.pdf', 'exploring chainofthought style prompting for texttosql.pdf', 'algorithm of thoughts enhancing exploration of ideas in large language models.pdf', 'multistage large language model correction for speech 
recognition.pdf', 'prompt injection attack against llmintegrated applications.pdf', 'large language models are biased to overestimate profoundness.pdf', 'ecologically valid explanations for label variation in nli.pdf', 'cyber sentinel exploring conversational agents in streamlining security tasks with gpt4.pdf', 'benchmarking a foundation llm on its ability to relabel structure names in accordance with the aapm tg263 report.pdf', 'automatic prompt optimization with gradient descent and beam search.pdf', 'tram benchmarking temporal reasoning for large language models.pdf', 'evaluating the instructionfollowing robustness of large language models to prompt injection.pdf', 'prompt engineering or fine tuning an empirical assessment of large language models in automated software engineering tasks.pdf', 'instructed language models with retrievers are powerful entity linkers.pdf', 'language models are weak learners.pdf', 'fixing rust compilation errors using llms.pdf', 'is gpt4 a good trader.pdf', 'impressiongpt an iterative optimizing framework for radiology report summarization with chatgpt.pdf', 'dspy compiling declarative language model calls into selfimproving pipelines.pdf', 'zeroshot learning with minimum instruction to extract social determinants and family history from clinical notes using gpt model.pdf', 'chatgpt evaluation on sentence level relations a focus on temporal, causal, and discourse relations.pdf', 'automating governing knowledge commons and contextual integrity (gkcci) privacy policy annotations with large language models.pdf', 'a lightweight framework for highquality code generation.pdf', 'nexus at araieval shared task finetuning arabic language models for propaganda and disinformation detection.pdf', 'marked personas using natural language prompts to measure stereotypes in language models.pdf', 'pearl prompting large language models to plan and execute actions over long documents.pdf', 'llm4vv developing llmdriven testsuite for compiler validation.pdf', 'actsql incontext learning for texttosql with automaticallygenerated chainofthought.pdf', 'the potential and pitfalls of using a large language model such as chatgpt or gpt4 as a clinical assistant.pdf', 'do emergent abilities exist in quantized large language models an empirical study.pdf', 'gembamqm detecting translation quality error spans with gpt4.pdf', 'prompt, condition, and generate classification of unsupported claims with incontext learning.pdf', 'a survey of large language models for autonomous driving.pdf', 'learning performanceimproving code edits.pdf', 'llamarec twostage recommendation using large language models for ranking.pdf', 'selfexplanation prompting improves dialogue understanding in large language models.pdf', 'how understanding large language models can inform their use in physics education.pdf', 'cocomo computational consciousness modeling for generative and ethical ai.pdf', 'can large language models design accurate label functions.pdf', 'sensitivity and robustness of large language models to prompt template in japanese text classification tasks.pdf', 'stance detection with supervised, zeroshot, and fewshot applications.pdf', 'what makes good incontext demonstrations for code intelligence tasks with llms.pdf', 'tallrec an effective and efficient tuning framework to align large language model with recommendation.pdf', 'how does prompt engineering affect chatgpt performance on unsupervised entity resolution.pdf', 'chatgpt for plcdcs control logic generation.pdf', 'loke linked open knowledge extraction 
for automated knowledge graph construction.pdf', 'knowledgeprompted estimator a novel approach to explainable machine translation assessment.pdf', 'jailbreaking chatgpt via prompt engineering an empirical study.pdf', 'cyclealign iterative distillation from blackbox llm to whitebox models for better human alignment.pdf', 'llmaugmented preference learning from natural language.pdf', 'decomposed prompting for machine translation between related languages using large language models.pdf', 's3dst structured opendomain dialogue segmentation and state tracking in the era of llms.pdf', 'a multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity.pdf', 'promptbased length controlled generation with reinforcement learning.pdf', 'developing a scalable benchmark for assessing large language models in knowledge graph engineering.pdf', 'chatgpt opens a new door for bioinformatics.pdf', 'large language models as data preprocessors.pdf', 'evaluating llms for privilegeescalation scenarios.pdf', 'procedural text mining with large language models.pdf', 'prompts matter insights and strategies for prompt engineering in automated software traceability.pdf', 'trained transformers learn linear models incontext.pdf', 'fill in the blank exploring and enhancing llm capabilities for backward reasoning in math word problems.pdf', 'prompting palm for translation assessing strategies and performance.pdf', 'gptfinre incontext learning for financial relation extraction using large language models.pdf', 'gpt is becoming a turing machine here are some ways to program it.pdf', 'the devil is in the errors leveraging large language models for finegrained machine translation evaluation.pdf', 'evaluation of gpt35 and gpt4 for supporting realworld information needs in healthcare delivery.pdf', 'overprompt enhancing chatgpt capabilities through an efficient incontext learning approach.pdf', 'breaking the bank with chatgpt fewshot text classification for finance.pdf', 'coaudit tools to help humans doublecheck aigenerated content.pdf']"
-InstructGPT,69,"['multiparty goal tracking with llms comparing pretraining, finetuning, and prompt engineering.pdf', 'complementary explanations for effective incontext learning.pdf', 'a study on prompt design, advantages and limitations of chatgpt for deep learning program repair.pdf', 'text classification via large language models.pdf', 'measuring inductive biases of incontext learning with underspecified demonstrations.pdf', 'affect recognition in conversations using large language models.pdf', 'factchecking complex claims with programguided reasoning.pdf', 'gpt takes the bar exam.pdf', 'salmon selfalignment with principlefollowing reward models.pdf', 'lorahub efficient crosstask generalization via dynamic lora composition.pdf', 'selfprompting large language models for zeroshot opendomain qa.pdf', 'making language models better tool learners with execution feedback.pdf', 'codeie large code generation models are better fewshot information extractors.pdf', 'instructeval systematic evaluation of instruction selection methods.pdf', 'pair programming with large language models for sampling and estimation of copulas.pdf', 'inboxbart get instructions into biomedical multitask learning.pdf', 'interact exploring the potentials of chatgpt as a cooperative agent.pdf', 'alpacafarm a simulation framework for methods that learn from human feedback.pdf', 'benchmarking cognitive biases in large language models as evaluators.pdf', 'compositional semantic parsing with
large language models.pdf', 'on bilingual lexicon induction with large language models.pdf', 'is chatgpt the ultimate programming assistant how far is it.pdf', 'posqa probe the world models of llms with size comparisons.pdf', 'sentiment analysis through llm negotiations.pdf', 'benchmarking large language model capabilities for conditional generation.pdf', ""do large language models know what they don't know.pdf"", 'are hard examples also harder to explain a study with human and modelgenerated explanations.pdf', 'crosscodebench benchmarking crosstask generalization of source code models.pdf', 'beyond factuality a comprehensive evaluation of large language models as knowledge generators.pdf', 'grips gradientfree, editbased instruction search for prompting large language models.pdf', 'latent jailbreak a benchmark for evaluating text safety and output robustness of large language models.pdf', 'large language models are pretty good zeroshot video game bug detectors.pdf', 'masakhanews news topic classification for african languages.pdf', 'chainofdictionary prompting elicits translation in large language models.pdf', 'generate rather than retrieve large language models are strong context generators.pdf', 'post hoc explanations of language models can improve language models.pdf', 'democratizing llms for lowresource languages by leveraging their english dominant abilities with linguisticallydiverse prompts.pdf', 'generative speech recognition error correction with large language models and taskactivating prompting.pdf', 'large language models meet openworld intent discovery and recognition an evaluation of chatgpt.pdf', 'chatgpt for arabic grammatical error correction.pdf', 'the unreliability of explanations in fewshot prompting for textual reasoning.pdf', 'temporal knowledge graph forecasting without knowledge using incontext learning.pdf', 'enhancing small medical learners with privacypreserving contextual prompting.pdf', 'a systematic survey of prompt engineering on visionlanguage foundation models.pdf', 'a theory of emergent incontext learning as implicit structure induction.pdf', 'selfprompted chainofthought on large language models for opendomain multihop reasoning.pdf', 'how to unleash the power of large language models for fewshot relation extraction.pdf', 'language quantized autoencoders towards unsupervised textimage alignment.pdf', 'distractor generation for multiplechoice questions with predictive prompting and large language models.pdf', 'prompt to be consistent is better than selfconsistent fewshot and zeroshot fact verification with pretrained language models.pdf', 'memoryefficient finetuning of compressed large language models via sub4bit integer quantization.pdf', 'contextfaithful prompting for large language models.pdf', 'art automatic multistep reasoning and tooluse for large language models.pdf', 'unraveling chatgpt a critical analysis of aigenerated goaloriented dialogues and annotations.pdf', 'synthetic prompting generating chainofthought demonstrations for large language models.pdf', 'automatic prompt optimization with gradient descent and beam search.pdf', 'selficl zeroshot incontext learning with selfgenerated demonstrations.pdf', 'evaluating the instructionfollowing robustness of large language models to prompt injection.pdf', 'right to be forgotten in the era of large language models implications, challenges, and solutions.pdf', 'understanding the effectiveness of very large language models on dialog evaluation.pdf', 'chatgpt4pcg competition characterlike level generation 
for science birds.pdf', 'instruction induction from few examples to natural language task descriptions.pdf', 'larger language models do incontext learning differently.pdf', 'large language models are zeroshot reasoners.pdf', 'cyclealign iterative distillation from blackbox llm to whitebox models for better human alignment.pdf', 'a multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity.pdf', 'promptbased length controlled generation with reinforcement learning.pdf', 'understanding how model size affects fewshot instruction prompting.pdf', 'can large language models truly understand prompts a case study with negated prompts.pdf']"
-BERT,475,"['multiparty goal tracking with llms comparing pretraining, finetuning, and prompt engineering.pdf', 'complementary explanations for effective incontext learning.pdf', 'plugmed improving specificity in patientcentered medical dialogue generation using incontext learning.pdf', 'buffet benchmarking large language models for fewshot crosslingual transfer.pdf', 'multilingual llms are better crosslingual incontext learners with alignment.pdf', 'efficient open domain multihop question answering with fewshot data synthesis.pdf', 'coveragebased example selection for incontext learning.pdf', 'casteist but not racist quantifying disparities in large language model bias between india and the west.pdf', 'crosslingual retrieval augmented incontext learning for bangla.pdf', 'identifying and extracting rare disease phenotypes with large language models.pdf', 'a tale of pronouns interpretability informs gender bias mitigation for fairer instructiontuned machine translation.pdf', 'automated fewshot classification with instructionfinetuned language models.pdf', 'text classification via large language models.pdf', 'covid vaccine is against covid but oxford vaccine is made at oxford!
semantic interpretation of proper noun compounds.pdf', 'measuring inductive biases of incontext learning with underspecified demonstrations.pdf', 'red teaming language model detectors with language models.pdf', 'an empirical study on fewshot knowledge probing for pretrained language models.pdf', 'unlocking the potential of chatgpt a comprehensive exploration of its applications, advantages, limitations, and future directions in natural language processing.pdf', 'fewshot joint multimodal aspectsentiment analysis based on generative multimodal prompt.pdf', 'pretraining to learn in context.pdf', 'exploring the relationship between model architecture and incontext learning ability.pdf', 'towards unified prompt tuning for fewshot text classification.pdf', 'fewshot instruction prompts for pretrained language models to detect social biases.pdf', 'large language models are zeroshot rankers for recommender systems.pdf', 'consprompt easily exploiting contrastive samples for fewshot prompt learning.pdf', 'effective test generation using pretrained large language models and mutation testing.pdf', 'naturalspeech 2 latent diffusion models are natural and zeroshot speech and singing synthesizers.pdf', 'an empirical evaluation of using large language models for automated unit test generation.pdf', 'patchtoken aligned bayesian prompt learning for visionlanguage models.pdf', 'lambada backward chaining for automated reasoning in natural language.pdf', 'toward unified controllable text generation via regular expression instruction.pdf', 'acecoder utilizing existing code to enhance code generation.pdf', 'a chain of aibased solutions for resolving fqns and fixing syntax errors in partial code.pdf', 'the utility of large language models and generative ai for education research.pdf', 'discern and answer mitigating the impact of misinformation in retrievalaugmented models with discriminators.pdf', 'beyond yes and no improving zeroshot llm rankers via scoring finegrained relevance labels.pdf', 'udapdr unsupervised domain adaptation via llm prompting and distillation of rerankers.pdf', 'factchecking complex claims with programguided reasoning.pdf', 'towards zerolabel language learning.pdf', 'sociocultural norm similarities and differences via situational alignment and explainable textual entailment.pdf', 'emotionconditioned text generation through automatic prompt optimization.pdf', 'what makes pretrained language models better zeroshot learners.pdf', 'the adaio system at the bea2023 shared task on generating ai teacher responses in educational dialogues.pdf', 'inverse is better! 
fast and accurate prompt for fewshot slot tagging.pdf', 'llmlingua compressing prompts for accelerated inference of large language models.pdf', 'differentiable entailment for parameter efficient few shot learning.pdf', 'systematic rectification of language models via deadend analysis.pdf', 'zeroshot information extraction from radiological reports using chatgpt.pdf', 'hqp a humanannotated dataset for detecting online propaganda.pdf', 'lowresource authorship style transfer can nonfamous authors be imitated.pdf', 'unified demonstration retriever for incontext learning.pdf', 'are humangenerated demonstrations necessary for incontext learning.pdf', 'arguments to key points mapping with promptbased learning.pdf', 'prompting large language models with chainofthought for fewshot knowledge base question generation.pdf', 'gpt takes the bar exam.pdf', 'chatgpthealthprompt harnessing the power of xai in promptbased healthcare decision support using chatgpt.pdf', 'exploring the integration of large language models into automatic speech recognition systems an empirical study.pdf', 'reranking for natural language generation from logical forms a study based on large language models.pdf', 'probing llms for hate speech detection strengths and vulnerabilities.pdf', 'learning to retrieve incontext examples for large language models.pdf', 'salmon selfalignment with principlefollowing reward models.pdf', 'dialog2api taskoriented dialogue with api description and example programs.pdf', 'selfprompting large language models for zeroshot opendomain qa.pdf', 'large language models are stateoftheart evaluators of translation quality.pdf', 'graphprompt biomedical entity normalization using graphbased prompt templates.pdf', 'automatic label sequence generation for prompting sequencetosequence models.pdf', 'ctqscorer combining multiple features for incontext example selection for machine translation.pdf', 'log parsing with promptbased fewshot learning.pdf', 'let me check the examples enhancing demonstration learning via explicit imitation.pdf', 'continual training of language models for fewshot learning.pdf', 'sentiment analysis in the era of large language models a reality check.pdf', 'pre visionlanguage prompt learning with reparameterization encoder.pdf', 'do we still need clinical language models.pdf', 'psg promptbased sequence generation for acronym extraction.pdf', 'scalable prompt generation for semisupervised learning with language models.pdf', 'boosting crosslingual transferability in multilingual models via incontext learning.pdf', 'getting sick after seeing a doctor diagnosing and mitigating knowledge conflicts in event temporal reasoning.pdf', 'metricbased incontext learning a case study in text simplification.pdf', ""don't stop pretraining make promptbased finetuning powerful learner.pdf"", 'are large language models ready for healthcare a comparative study on clinical language understanding.pdf', 'cotbert enhancing unsupervised sentence representation through chainofthought.pdf', 'code generation tools (almost) for free a study of fewshot, pretrained language models on code.pdf', 'incontext learning for knowledge base question answering for unmanned systems based on large language models.pdf', 'flocks of stochastic parrots differentially private prompt learning for large language models.pdf', 'towards using fewshot prompt learning for automating model completion.pdf', 'controlling personality style in dialogue with zeroshot promptbased learning.pdf', 'discrete prompt compression with reinforcement 
learning.pdf', 'bbtv2 towards a gradientfree future with large language models.pdf', 'small models are valuable plugins for large language models.pdf', 'rtllm an opensource benchmark for design rtl generation with large language model.pdf', 'what makes datatotext generation hard for pretrained language models.pdf', 'pair programming with large language models for sampling and estimation of copulas.pdf', 'inboxbart get instructions into biomedical multitask learning.pdf', 'attempt parameterefficient multitask tuning via attentional mixtures of soft prompts.pdf', 'using chatgpt for entity matching.pdf', 'cplnovid contextaware promptbased learning for norm violation detection in online communities.pdf', 'evaluation of chatgpt family of models for biomedical reasoning and classification.pdf', 'towards making the most of chatgpt for machine translation.pdf', 'easynlp a comprehensive and easytouse toolkit for natural language processing.pdf', 'fewshot anaphora resolution in scientific protocols via mixtures of incontext experts.pdf', 'zerotop zeroshot taskoriented semantic parsing using large language models.pdf', 'conqx semantic expansion of spoken queries for intent detection based on conditioned text generation.pdf', 'boosting language models reasoning with chainofknowledge prompting.pdf', 'logicllm exploring selfsupervised logicenhanced training for large language models.pdf', 'detecting hate speech with gpt3.pdf', 'simple llm prompting is stateoftheart for robust and multilingual dialogue evaluation.pdf', 'improved compositional generalization by generating demonstrations for metalearning.pdf', 'measuring and mitigating constraint violations of incontext learning for utterancetoapi semantic parsing.pdf', 'learning new tasks from a few examples with softlabel prototypes.pdf', 'can language models be biomedical knowledge bases.pdf', 'exploring automated distractor and feedback generation for math multiplechoice questions via incontext learning.pdf', 'tree of clarifications answering ambiguous questions with retrievalaugmented large language models.pdf', 'xricl crosslingual retrievalaugmented incontext learning for crosslingual texttosql semantic parsing.pdf', 'short answer grading using oneshot prompting and text similarity scoring model.pdf', 'promptbased extraction of social determinants of health using fewshot learning.pdf', 'steering large language models for machine translation with finetuning and incontext learning.pdf', 'hintenhanced incontext learning wakes large language models up for knowledgeintensive tasks.pdf', 'a weak supervision approach for fewshot aspect based sentiment.pdf', 'reasoning before responding integrating commonsensebased causality explanation for empathetic response generation.pdf', 'on bilingual lexicon induction with large language models.pdf', 'a study on promptbased fewshot learning methods for belief state tracking in taskoriented dialog systems.pdf', 'blsp bootstrapping languagespeech pretraining via behavior alignment of continuation writing.pdf', 'is chatgpt the ultimate programming assistant how far is it.pdf', 'large language models in fault localisation.pdf', 'incontext fewshot relation extraction via pretrained language models.pdf', 'wanglab at mediqachat 2023 clinical note generation from doctorpatient conversations using large language models.pdf', 'ul2 unifying language learning paradigms.pdf', 'boosting incontext learning with factual knowledge.pdf', 'sentiment analysis through llm negotiations.pdf', 'dialogue for prompting a policygradientbased 
discrete prompt optimization for fewshot learning.pdf', 'oneshot labeling for automatic relevance estimation.pdf', 'rethinking the event coding pipeline with prompt entailment.pdf', 'benchmarking large language model capabilities for conditional generation.pdf', 'query2doc query expansion with large language models.pdf', 'zeroshot temporal relation extraction with chatgpt.pdf', 'instanceaware prompt learning for language understanding and generation.pdf', 'meal stable and active learning for fewshot prompting.pdf', 'zero and fewshot prompting with llms a comparative study with finetuned models for bangla sentiment analysis.pdf', 'an evaluation of gpt models for phenotype concept recognition.pdf', 'tabllm fewshot classification of tabular data with large language models.pdf', 'causal interventionsbased fewshot named entity recognition.pdf', 'automatic data transformation using large language model an experimental study on building energy data.pdf', 'efficient blackbox adversarial attacks on neural text detectors.pdf', 'are hard examples also harder to explain a study with human and modelgenerated explanations.pdf', 'the limits of chatgpt in extracting aspectcategoryopinionsentiment quadruples a comparative analysis.pdf', 'mind the instructions a holistic evaluation of consistency and interactions in promptbased learning.pdf', 'using natural language explanations to improve robustness of incontext learning for natural language inference.pdf', 'fewclue a chinese fewshot learning evaluation benchmark.pdf', 'building a role specified opendomain dialogue system leveraging largescale language models.pdf', 'do gpts produce less literal translations.pdf', 'investigating the fairness of large language models for predictions on tabular data.pdf', 'beyond factuality a comprehensive evaluation of large language models as knowledge generators.pdf', 'promptner prompt locating and typing for named entity recognition.pdf', 'towards legally enforceable hate speech detection for public forums.pdf', 'using incontext learning to improve dialogue safety.pdf', 'automated extraction and visualization of metabolic networks from biomedical literature using a large language model.pdf', 'is chatgpt a good causal reasoner a comprehensive evaluation.pdf', 'development of metaprompts for large language models to screen titles and abstracts for diagnostic test accuracy reviews.pdf', 'retrieving supporting evidence for generative question answering.pdf', 'true fewshot learning with language models.pdf', 'calibrating llmbased evaluator.pdf', 'true fewshot learning with prompts a realworld perspective.pdf', 'discrete prompt optimization via constrained generation for zeroshot reranker.pdf', 'prompts should not be seen as secrets systematically measuring prompt extraction attack success.pdf', 'review of large vision models and visual prompt engineering.pdf', 'exploring automatic evaluation methods based on a decoderbased llm for text generation.pdf', 'revisiting automated prompting are we actually doing better.pdf', 'the impact of symbolic representations on incontext learning for fewshot reasoning.pdf', 'relationprompt leveraging prompts to generate synthetic data for zeroshot relation triplet extraction.pdf', 'fewshot stance detection via targetaware prompt distillation.pdf', 'is chatgpt a good recommender a preliminary study.pdf', 'allure auditing and improving llmbased evaluation of text using iterative incontextlearning.pdf', 'incontext learning with iterative demonstration selection.pdf', 'calibrate before use 
improving fewshot performance of language models.pdf', 'knowledgedriven cot exploring faithful reasoning in llms for knowledgeintensive question answering.pdf', 'vitaclip video and text adaptive clip via multimodal prompting.pdf', 'a mechanism for solving relational tasks in transformer language models.pdf', 'plum prompt learning using metaheuristic.pdf', 'good examples make a faster learner simple demonstrationbased learning for lowresource ner.pdf', 'humanintheloop machine translation with large language model.pdf', 'check your facts and try again improving large language models with external knowledge and automated feedback.pdf', 'extractive summarization via chatgpt for faithful summary generation.pdf', 'latent jailbreak a benchmark for evaluating text safety and output robustness of large language models.pdf', 'entity matching using large language models.pdf', 'product information extraction using chatgpt.pdf', 'a mlllm pairing for better code comment classification.pdf', 'towards fewshot identification of morality frames using incontext learning.pdf', 'contextual biasing of namedentities with large language models.pdf', 'zara improving fewshot selfrationalization for small language models.pdf', 'masakhanews news topic classification for african languages.pdf', 'an explanation of incontext learning as implicit bayesian inference.pdf', 'generative type inference for python.pdf', 'iienlpnut at semeval2020 task 4 guiding plm with prompt template reconstruction strategy for comve.pdf', 's3 socialnetwork simulation system with large language modelempowered agents.pdf', 'narrowing the gap between supervised and unsupervised sentence representation learning with large language model.pdf', 'exploring promptbased fewshot learning for grounded dialog generation.pdf', 'generate rather than retrieve large language models are strong context generators.pdf', 'selective annotation makes language models better fewshot learners.pdf', 'mitigating label biases for incontext learning.pdf', 'camoscio an italian instructiontuned llama.pdf', 'autohint automatic prompt optimization with hint generation.pdf', 'enable language models to implicitly learn selfimprovement from data.pdf', 'transferring procedural knowledge across commonsense tasks.pdf', 'the inductive bias of incontext learning rethinking pretraining example design.pdf', 'pretrained tokenreplaced detection model as fewshot learner.pdf', 'language models are fewshot learners for prognostic prediction.pdf', 'unleashing the creative mind language model as hierarchical policy for improved exploration on challenging problem solving.pdf', 'rethink the effectiveness of text data augmentation an empirical analysis.pdf', 'chemical identification and indexing in pubmed articles via bert and texttotext approaches.pdf', 'post hoc explanations of language models can improve language models.pdf', 'what does the failure to reason with respectively in zerofewshot settings tell us about language models.pdf', 'gps genetic prompt search for efficient fewshot learning.pdf', 'using large language models for cybersecurity capturetheflag challenges and certification questions.pdf', 'extracting multivalued relations from language models.pdf', 'leveraging large language models to generate answer set programs.pdf', 'a communication theory perspective on prompting engineering methods for large language models.pdf', 'generative speech recognition error correction with large language models and taskactivating prompting.pdf', 'a simple baseline for knowledgebased visual 
question answering.pdf', 'prompt programming for large language models beyond the fewshot paradigm.pdf', 'prompting to distill boosting datafree knowledge distillation via reinforced prompt.pdf', 'transfer learning for power outage detection task with limited training data.pdf', 'demonstrations of the potential of aibased political issue polling.pdf', 'divide and prompt chain of thought prompting for texttosql.pdf', 'converser fewshot conversational dense retrieval with synthetic data generation.pdf', 'understanding incontext learning via supportive pretraining data.pdf', 'global constraints with prompting for zeroshot event argument classification.pdf', 'convolutional bypasses are better vision transformer adapters.pdf', ""what's in a measurement using gpt3 on semeval 2021 task 8 measeval.pdf"", 'chat2vis generating data visualisations via natural language using chatgpt, codex and gpt3 large language models.pdf', 'linking microblogging sentiments to stock price movement an application of gpt4.pdf', 'framing the newsfrom human perception to large language model inferences.pdf', 'codecot and beyond learning to program and test like a developer.pdf', 'improving fewshot generalization of safety classifiers via data augmented parameterefficient finetuning.pdf', 'a practical survey on zeroshot prompt design for incontext learning.pdf', 'promptbased learning for thread structure prediction in cybersecurity forums.pdf', 'chatgpt for robotics design principles and model abilities.pdf', 'how to prompt llms for texttosql a study in zeroshot, singledomain, and crossdomain settings.pdf', 'dricl demonstrationretrieved incontext learning.pdf', 'legal prompting teaching a language model to think like a lawyer.pdf', 'multistage collaborative knowledge distillation from large language models.pdf', 'a fewshot approach to resume information extraction via prompts.pdf', 'linguist language model instruction tuning to generate annotated utterances for intent classification and slot tagging.pdf', 'adversarial robustness of promptbased fewshot learning for natural language understanding.pdf', 'relation extraction as openbook examination retrievalenhanced prompt tuning.pdf', 'towards answering openended ethical quandary questions.pdf', 'automatic prompt rewriting for personalized text generation.pdf', 'the mystery and fascination of llms a comprehensive survey on the interpretation and analysis of emergent abilities.pdf', 'large language modelaware incontext learning for code generation.pdf', 'incontext learning user simulators for taskoriented dialog systems.pdf', 'can gpt3 perform statutory reasoning.pdf', 'benchmarking arabic ai with large language models.pdf', 'mededit model editing for medical question answering with external knowledge bases.pdf', 'making large language models better data creators.pdf', 'large language models meet openworld intent discovery and recognition an evaluation of chatgpt.pdf', 'harnessing explanations llmtolm interpreter for enhanced textattributed graph representation learning.pdf', 'selective demonstrations for crossdomain texttosql.pdf', 'promptengineering and transformerbased question generation and evaluation.pdf', 'choice over control how users write with large language models using diegetic and nondiegetic prompting.pdf', 'generating training data with language models towards zeroshot language understanding.pdf', 'gpachov at checkthat! 
2023 a diverse multiapproach ensemble for subjectivity detection in news articles.pdf', 'templatefree prompt tuning for fewshot ner.pdf', 'atlas fewshot learning with retrieval augmented language models.pdf', 'the end of the policy analyst testing the capability of artificial intelligence to generate plausible, persuasive, and useful policy analysis.pdf', 'chatgpt for arabic grammatical error correction.pdf', 'diverse demonstrations improve incontext compositional generalization.pdf', 'fewshot reranking for multihop qa via language model prompting.pdf', 'disentangle and remerge interventional knowledge distillation for fewshot object detection from a conditional causal perspective.pdf', 'large language models for aspectbased sentiment analysis.pdf', 'mastering the task of open information extraction with large language models and consistent reasoning environment.pdf', 'cohortgpt an enhanced gpt for participant recruitment in clinical study.pdf', 'retrievalaugmented gpt35based texttosql framework with sampleaware prompting and dynamic revision chain.pdf', 'towards zeroshot and fewshot table question answering using gpt3.pdf', 'adaptive machine translation with large language models.pdf', 'a search for prompts generating structured answers from contracts.pdf', 'proqa structural promptbased pretraining for unified question answering.pdf', 'instructionner a multitask instructionbased generative framework for fewshot ner.pdf', 'reordering examples helps during primingbased fewshot learning.pdf', 'lowresource multigranularity academic function recognition based on multiple prompt knowledge.pdf', 'continuous prompt tuning based textual entailment model for ecommerce entity typing.pdf', 'fewshot nested named entity recognition.pdf', 'fewshot training llms for projectspecific codesummarization.pdf', 'adapting prompt for fewshot tabletotext generation.pdf', 'the unreliability of explanations in fewshot prompting for textual reasoning.pdf', 'selfcheckgpt zeroresource blackbox hallucination detection for generative large language models.pdf', 'grammar prompting for domainspecific language generation with large language models.pdf', 'retrievalaugmented generation to improve math questionanswering tradeoffs between groundedness and human preference.pdf', 'llmfuncmapper function identification for interpreting complex clauses in building codes via llm.pdf', 'mondrian prompt abstraction attack against large language models for cheaper api pricing.pdf', 'large language models and prompt engineering for biomedical query focused multidocument summarisation.pdf', 'empower textattributed graphs learning with large language models (llms).pdf', 'enhancing small medical learners with privacypreserving contextual prompting.pdf', 'qualifying chinese medical licensing examination with knowledge enhanced generative pretraining model.pdf', 'a reinforcement learningbased offensive semantics censorship system for chatbots.pdf', 'mixture of soft prompts for controllable data generation.pdf', 'does correction remain a problem for large language models.pdf', 'unleashing the potential of prompt engineering in large language models a comprehensive review.pdf', 'jampatoisnli a jamaican patois natural language inference dataset.pdf', 'robust retrieval augmented generation for zeroshot slot filling.pdf', 'large language models for propaganda detection.pdf', 'ccprompt counterfactual contrastive prompttuning for manyclass classification.pdf', 'joint foundation model caching and inference of generative ai services for edge 
intelligence.pdf', 'automatic multilabel prompting simple and interpretable fewshot classification.pdf', 'low resource pipeline for spoken language understanding via weak supervision.pdf', 'better integrating vision and semantics for improving fewshot classification.pdf', 'visualizing linguistic diversity of text datasets synthesized by large language models.pdf', 'exploring parameterefficient finetuning techniques for code generation with large language models.pdf', 'cins comprehensive instruction for fewshot learning in taskoriented dialog systems.pdf', 'an exploration of incontext learning for speech language model.pdf', 'street a multitask structured reasoning and explanation benchmark.pdf', 'a systematic survey of prompt engineering on visionlanguage foundation models.pdf', 'chainofthought prompting for responding to indepth dialogue questions with llm.pdf', 'towards informative fewshot prompt with maximum information gain for incontext learning.pdf', 'leveraging pretrained language models for conversational information seeking from text.pdf', 'reticl sequential retrieval of incontext examples with reinforcement learning.pdf', 'augmented embeddings for custom retrievals.pdf', 'a unified framework for multiintent spoken language understanding with prompting.pdf', 'zeroshot approach to overcome perturbation sensitivity of prompts.pdf', 'multimodal prompt learning for product title generation with extremely limited labels.pdf', 'selfprompted chainofthought on large language models for opendomain multihop reasoning.pdf', 'detecting natural language biases with promptbased learning.pdf', 'toolkengpt augmenting frozen language models with massive tools via tool embeddings.pdf', 'adaprompt adaptive model training for promptbased nlp.pdf', 'adelt transpilation between deep learning frameworks.pdf', 'how to unleash the power of large language models for fewshot relation extraction.pdf', 'list lite prompted selftraining makes parameterefficient fewshot learners.pdf', 'statistical depth for ranking and characterizing transformerbased text embeddings.pdf', 'language quantized autoencoders towards unsupervised textimage alignment.pdf', 'events realm event reasoning of entity states via language models.pdf', 'distractor generation for multiplechoice questions with predictive prompting and large language models.pdf', 'glam efficient scaling of language models with mixtureofexperts.pdf', 'learning incontext learning for named entity recognition.pdf', 'dataefficient goaloriented conversation with dialogue knowledge transfer networks.pdf', 'mgpt fewshot learners go multilingual.pdf', 'will it blend mixing training paradigms & prompting for argument quality prediction.pdf', 'exnet efficient incontext learning for dataless text classification.pdf', 'large language models in the workplace a case study on prompt engineering for job type classification.pdf', 'how good are commercial large language models on african languages.pdf', 'prompt to be consistent is better than selfconsistent fewshot and zeroshot fact verification with pretrained language models.pdf', 'hicl hashtagdriven incontext learning for social media natural language understanding.pdf', 'gpts at factify 2022 prompt aided factverification.pdf', 'improving open information extraction with large language models a study on demonstration uncertainty.pdf', 'generating efficient training data via llmbased attribute manipulation.pdf', 'are structural concepts universal in transformer language models towards interpretable crosslingual 
generalization.pdf', 'contextfaithful prompting for large language models.pdf', 'harnessing the power of large language models for empathetic response generation empirical investigations and improvements.pdf', 'discrete and soft prompting for multilingual models.pdf', 'information extraction from documents question answering vs token classification in realworld setups.pdf', 'finetuning language models with just forward passes.pdf', 'promptda labelguided data augmentation for promptbased fewshot learners.pdf', 'do language models learn about legal entity types during pretraining.pdf', 'baseline defenses for adversarial attacks against aligned language models.pdf', 'lfpt5 a unified framework for lifelong fewshot language learning based on prompt tuning of t5.pdf', 'stt soft template tuning for fewshot adaptation.pdf', 'cheapfake detection with llm using prompt engineering.pdf', 'efficient prompting via dynamic incontext learning.pdf', 'rethinking the role of demonstrations what makes incontext learning work.pdf', 'claret pretraining a correlationaware contexttoevent transformer for eventcentric generation and classification.pdf', 'unraveling chatgpt a critical analysis of aigenerated goaloriented dialogues and annotations.pdf', 'large language models are biased to overestimate profoundness.pdf', 'unsupervised contrastconsistent ranking with language models.pdf', 'prototypical verbalizer for promptbased fewshot tuning.pdf', ""exploring generative ai assisted feedback writing for students' written responses to a physics conceptual question with prompt engineering and fewshot learning.pdf"", 'cyber sentinel exploring conversational agents in streamlining security tasks with gpt4.pdf', 'towards llmbased fact verification on news claims with a hierarchical stepbystep prompting method.pdf', 'the cultivated practices of texttoimage generation.pdf', 'legal prompt engineering for multilingual legal judgement prediction.pdf', 'synthetic prompting generating chainofthought demonstrations for large language models.pdf', 'a survey on fewshot knowledge graph completion with structural and commonsense knowledge.pdf', 'what makes good incontext examples for gpt$3$.pdf', 'compositional exemplars for incontext learning.pdf', 'unified lowresource sequence labeling by sampleaware dynamic sparse finetuning.pdf', 'can we edit factual knowledge by incontext learning.pdf', 'allsh active learning guided by local sensitivity and hardness.pdf', 'stprompt semanticguided and taskdriven prompts for effective fewshot classification.pdf', 'tram benchmarking temporal reasoning for large language models.pdf', 'stabilized incontext learning with pretrained language models for few shot dialogue state tracking.pdf', 'prompt engineering or fine tuning an empirical assessment of large language models in automated software engineering tasks.pdf', 'instructed language models with retrievers are powerful entity linkers.pdf', 'from web catalogs to google a retrospective study of web search engines sustainable development.pdf', 'right to be forgotten in the era of large language models implications, challenges, and solutions.pdf', 'codestyle incontext learning for knowledgebased question answering.pdf', 'understanding the effectiveness of very large language models on dialog evaluation.pdf', 'impressiongpt an iterative optimizing framework for radiology report summarization with chatgpt.pdf', 'what do llms know about financial markets a case study on reddit market sentiment analysis.pdf', 'artificial intelligence for health message 
generation theory, method, and an empirical study using prompt engineering.pdf', 'dspy compiling declarative language model calls into selfimproving pipelines.pdf', 'crowd score a method for the evaluation of jokes using large language model ai voters as judges.pdf', 'chatgpt evaluation on sentence level relations a focus on temporal, causal, and discourse relations.pdf', 'automating governing knowledge commons and contextual integrity (gkcci) privacy policy annotations with large language models.pdf', 'a lightweight framework for highquality code generation.pdf', 'nexus at araieval shared task finetuning arabic language models for propaganda and disinformation detection.pdf', 'contextual stance classification using prompt engineering.pdf', 'understanding stereotypes in language models towards robust measurement and zeroshot debiasing.pdf', 'actsql incontext learning for texttosql with automaticallygenerated chainofthought.pdf', 'the potential and pitfalls of using a large language model such as chatgpt or gpt4 as a clinical assistant.pdf', 'citeprompt using prompts to identify citation intent in scientific papers.pdf', 'paraamr a largescale syntactically diverse paraphrase dataset by amr backtranslation.pdf', 'alexatm 20b fewshot learning using a largescale multilingual seq2seq model.pdf', 'active example selection for incontext learning.pdf', 'megatts 2 zeroshot texttospeech with arbitrary length speech prompts.pdf', 'making large language models better reasoners with stepaware verifier.pdf', 'gembamqm detecting translation quality error spans with gpt4.pdf', 'prompt, condition, and generate classification of unsupported claims with incontext learning.pdf', 'tempera testtime prompting via reinforcement learning.pdf', 'knowledgegrounded dialog state tracking.pdf', 's$^3$hqa a threestage approach for multihop texttable hybrid question answering.pdf', 'multilevel finetuning, data augmentation, and fewshot learning for specialized cyber threat intelligence.pdf', 'distillation of encoderdecoder transformers for sequence labelling.pdf', 'fiat fusing learning paradigms with instructionaccelerated tuning.pdf', 'a survey of large language models for autonomous driving.pdf', 'llamarec twostage recommendation using large language models for ranking.pdf', 'demonstrations are all you need advancing offensive content paraphrasing using incontext learning.pdf', 'sociocultural knowledge is needed for selection of shots in hate speech detection tasks.pdf', 'looking for a handsome carpenter! 
debiasing gpt3 job advertisements.pdf', 'folio natural language reasoning with firstorder logic.pdf', 'can large language models design accurate label functions.pdf', 'sensitivity and robustness of large language models to prompt template in japanese text classification tasks.pdf', 'stance detection with supervised, zeroshot, and fewshot applications.pdf', 'what makes good incontext demonstrations for code intelligence tasks with llms.pdf', 'tallrec an effective and efficient tuning framework to align large language model with recommendation.pdf', 'can chatgpt detect intent evaluating large language models for spoken language understanding.pdf', 'instruction induction from few examples to natural language task descriptions.pdf', 'prompt2model generating deployable models from natural language instructions.pdf', 'healthprompt a zeroshot learning paradigm for clinical natural language processing.pdf', 'how does prompt engineering affect chatgpt performance on unsupervised entity resolution.pdf', 'robut a systematic study of table qa robustness against humanannotated adversarial perturbations.pdf', 'make a choice! knowledge base question answering with incontext learning.pdf', 'causallm is not optimal for incontext learning.pdf', 'automatic short math answer grading via incontext metalearning.pdf', 'chatgpt for plcdcs control logic generation.pdf', 'knowledgeprompted estimator a novel approach to explainable machine translation assessment.pdf', 'prodigy enabling incontext learning over graphs.pdf', 'better fewshot relation extraction with label prompt dropout.pdf', 'large language models are zeroshot reasoners.pdf', 'mindwatch a smart cloudbased ai solution for suicide ideation detection leveraging large language models.pdf', 'promptandrerank a method for zeroshot and fewshot arbitrary textual style transfer with small language models.pdf', 'llmaugmented preference learning from natural language.pdf', 'argumentative stance prediction an exploratory study on multimodality and fewshot learning.pdf', 'scalable approach to medical wearable postmarket surveillance.pdf', 'raft a realworld fewshot text classification benchmark.pdf', 'memobert pretraining model with promptbased learning for multimodal emotion recognition.pdf', 's3dst structured opendomain dialogue segmentation and state tracking in the era of llms.pdf', 'prompt position really matters in fewshot and zeroshot nlu tasks.pdf', 'ideal influencedriven selective annotations empower incontext learners in large language models.pdf', 'positionbased prompting for health outcome generation.pdf', 'a multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity.pdf', 'promptbased length controlled generation with reinforcement learning.pdf', 'finding support examples for incontext learning.pdf', 'cup curriculum learning based prompt tuning for implicit event argument extraction.pdf', 'prefinetuning for fewshot emotional speech recognition.pdf', 'a promptbased fewshot learning approach to software conflict detection.pdf', 'controlled text generation with natural language instructions.pdf', ""impossible triangle what's next for pretrained language models.pdf"", 'just tell me prompt engineering in business process management.pdf', 'zeroshot domain adaptation for neural machine translation with retrieved phraselevel prompts.pdf', 'alt towards finegrained alignment between language and ctr models for clickthrough rate prediction.pdf', 'can chatgpt understand causal language in science claims.pdf', 
'clickprompt ctr models are strong prompt generators for adapting language models to ctr prediction.pdf', 'large language models as data preprocessors.pdf', 'evaluating llms for privilegeescalation scenarios.pdf', 'prompts matter insights and strategies for prompt engineering in automated software traceability.pdf', 'fewshot adaptation for parsing contextual utterances with llms.pdf', 'prompting palm for translation assessing strategies and performance.pdf', 'gptfinre incontext learning for financial relation extraction using large language models.pdf', 'automatic chain of thought prompting in large language models.pdf', 'accelerated materials language processing enabled by gpt.pdf', 'evaluation of gpt35 and gpt4 for supporting realworld information needs in healthcare delivery.pdf', 'modelling latent translations for crosslingual transfer.pdf', 'the learnability of incontext learning.pdf', 'parameterefficient crosslingual transfer of vision and language models via translationbased alignment.pdf', 'cosmic data efficient instructiontuning for speech incontext learning.pdf', 'semanticoriented unlabeled priming for largescale language models.pdf', 'breaking the bank with chatgpt fewshot text classification for finance.pdf', 'dynamar dynamic prompt with mask token representation.pdf', 'mixpro simple yet effective data augmentation for promptbased learning.pdf']" -Codex,101,"['complementary explanations for effective incontext learning.pdf', 'a study on prompt design, advantages and limitations of chatgpt for deep learning program repair.pdf', 'coveragebased example selection for incontext learning.pdf', 'unlocking the potential of chatgpt a comprehensive exploration of its applications, advantages, limitations, and future directions in natural language processing.pdf', 'effective test generation using pretrained large language models and mutation testing.pdf', 'an empirical evaluation of using large language models for automated unit test generation.pdf', 'acecoder utilizing existing code to enhance code generation.pdf', 'factchecking complex claims with programguided reasoning.pdf', 'can large language models write good propertybased tests.pdf', 'unified demonstration retriever for incontext learning.pdf', 'aicopilot for business optimisation a framework and a case study in production scheduling.pdf', 'prompting large language models with chainofthought for fewshot knowledge base question generation.pdf', 'gpt takes the bar exam.pdf', 'reranking for natural language generation from logical forms a study based on large language models.pdf', 'sqlprompt incontext texttosql with minimal labeled data.pdf', 'dialog2api taskoriented dialogue with api description and example programs.pdf', 'selfprompting large language models for zeroshot opendomain qa.pdf', 'llm4dv using large language models for hardware test stimuli generation.pdf', 'boosted prompt ensembles for large language models.pdf', 'how to prompt opportunities and challenges of zero and fewshot learning for humanai interaction in creative applications of generative models.pdf', 'code generation tools (almost) for free a study of fewshot, pretrained language models on code.pdf', 'codeie large code generation models are better fewshot information extractors.pdf', 'annollm making large language models to be better crowdsourced annotators.pdf', 'pair programming with large language models for sampling and estimation of copulas.pdf', 'zerotop zeroshot taskoriented semantic parsing using large language models.pdf', 'benchmarking cognitive biases 
in large language models as evaluators.pdf', 'measuring and mitigating constraint violations of incontext learning for utterancetoapi semantic parsing.pdf', 'exploring automated distractor and feedback generation for math multiplechoice questions via incontext learning.pdf', 'xricl crosslingual retrievalaugmented incontext learning for crosslingual texttosql semantic parsing.pdf', 'compositional semantic parsing with large language models.pdf', 'selfevolve a code evolution framework via large language models.pdf', 'is chatgpt the ultimate programming assistant how far is it.pdf', 'large language models in fault localisation.pdf', 'apiassisted code generation for question answering on varied table structures.pdf', 'automatic data transformation using large language model an experimental study on building energy data.pdf', 'crosscodebench benchmarking crosstask generalization of source code models.pdf', 'selfplanning code generation with large language models.pdf', 'multilingual mathematical autoformalization.pdf', 'askit unified programming interface for programming with large language models.pdf', 'structured chainofthought prompting for code generation.pdf', 'generate rather than retrieve large language models are strong context generators.pdf', 'selective annotation makes language models better fewshot learners.pdf', 'transferring procedural knowledge across commonsense tasks.pdf', 'genegpt augmenting large language models with domain tools for improved access to biomedical information.pdf', 'domain knowledge distillation from large language model an empirical study in the autonomous driving domain.pdf', 'large language models can implement policy iteration.pdf', 'divide and prompt chain of thought prompting for texttosql.pdf', 'chat2vis generating data visualisations via natural language using chatgpt, codex and gpt3 large language models.pdf', 'chatgpt for robotics design principles and model abilities.pdf', 'how to prompt llms for texttosql a study in zeroshot, singledomain, and crossdomain settings.pdf', 'large language modelaware incontext learning for code generation.pdf', 'benchmarking arabic ai with large language models.pdf', 'incontext exemplars as clues to retrieving from large associative memory.pdf', 'fixing hardware security bugs with large language models.pdf', 'selective demonstrations for crossdomain texttosql.pdf', 'batch prompting efficient inference with large language model apis.pdf', 'dcc help generating contextaware compiler error explanations with large language models.pdf', 'algo synthesizing algorithmic programs with generated oracle verifiers.pdf', 'diverse demonstrations improve incontext compositional generalization.pdf', 'retrievalaugmented gpt35based texttosql framework with sampleaware prompting and dynamic revision chain.pdf', 'unveiling the potential of large language models in generating semantic and crosslanguage clones.pdf', 'towards zeroshot and fewshot table question answering using gpt3.pdf', 'fewshot training llms for projectspecific codesummarization.pdf', 'grammar prompting for domainspecific language generation with large language models.pdf', 'studenteval a benchmark of studentwritten prompts for large language models of code.pdf', 'semantic parsing by large language models for intricate updating strategies of zeroshot dialogue state tracking.pdf', 'exploring parameterefficient finetuning techniques for code generation with large language models.pdf', 'corrpus codebased structured prompting for neurosymbolic story understanding.pdf', 'reticl 
sequential retrieval of incontext examples with reinforcement learning.pdf', 'adelt transpilation between deep learning frameworks.pdf', 'how to unleash the power of large language models for fewshot relation extraction.pdf', 'better patching using llm prompting, via selfconsistency.pdf', 'small language models improve giants by rewriting their outputs.pdf', 'art automatic multistep reasoning and tooluse for large language models.pdf', 'limits of an ai program for solving college math problems.pdf', 'exploring chainofthought style prompting for texttosql.pdf', 'overthinking the truth understanding how language models process false demonstrations.pdf', 'leveraging training data in fewshot prompting for numerical reasoning.pdf', 'compositional exemplars for incontext learning.pdf', 'unified lowresource sequence labeling by sampleaware dynamic sparse finetuning.pdf', 'stabilized incontext learning with pretrained language models for few shot dialogue state tracking.pdf', 'prompt engineering or fine tuning an empirical assessment of large language models in automated software engineering tasks.pdf', 'instructed language models with retrievers are powerful entity linkers.pdf', 'fixing rust compilation errors using llms.pdf', 'codestyle incontext learning for knowledgebased question answering.pdf', 'a lightweight framework for highquality code generation.pdf', 'explicit knowledge transfer for weaklysupervised code generation.pdf', 'llm4vv developing llmdriven testsuite for compiler validation.pdf', 'actsql incontext learning for texttosql with automaticallygenerated chainofthought.pdf', 'diverse retrievalaugmented incontext learning for dialogue state tracking.pdf', 'selfexplanation prompting improves dialogue understanding in large language models.pdf', 'folio natural language reasoning with firstorder logic.pdf', 'what makes good incontext demonstrations for code intelligence tasks with llms.pdf', 'can chatgpt detect intent evaluating large language models for spoken language understanding.pdf', 'larger language models do incontext learning differently.pdf', 's3dst structured opendomain dialogue segmentation and state tracking in the era of llms.pdf', 'large language models as data preprocessors.pdf', 'metareasoning semanticssymbol deconstruction for large language models.pdf', 'automatic chain of thought prompting in large language models.pdf', 'gpt is becoming a turing machine here are some ways to program it.pdf', 'coaudit tools to help humans doublecheck aigenerated content.pdf']" -OPT,97,"['complementary explanations for effective incontext learning.pdf', 'efficient open domain multihop question answering with fewshot data synthesis.pdf', 'measuring inductive biases of incontext learning with underspecified demonstrations.pdf', 'unlocking the potential of chatgpt a comprehensive exploration of its applications, advantages, limitations, and future directions in natural language processing.pdf', 'llmlingua compressing prompts for accelerated inference of large language models.pdf', 'lowresource authorship style transfer can nonfamous authors be imitated.pdf', 'aicopilot for business optimisation a framework and a case study in production scheduling.pdf', 'a languageagent approach to formal theoremproving.pdf', 'learning to retrieve incontext examples for large language models.pdf', 'lorahub efficient crosstask generalization via dynamic lora composition.pdf', 'selfprompting large language models for zeroshot opendomain qa.pdf', 'incontext learning for knowledge base question answering for 
unmanned systems based on large language models.pdf', 'flocks of stochastic parrots differentially private prompt learning for large language models.pdf', 'epa easy prompt augmentation on large language models via multiple sources and multiple targets.pdf', 'instructeval systematic evaluation of instruction selection methods.pdf', 'dialogstudio towards richest and most diverse unified dataset collection for conversational ai.pdf', 'fewshot anaphora resolution in scientific protocols via mixtures of incontext experts.pdf', 'satisfiabilityaided language models using declarative prompting.pdf', 'alpacafarm a simulation framework for methods that learn from human feedback.pdf', 'textbooks are all you need ii phi15 technical report.pdf', 'hint hypernetwork instruction tuning for efficient zero & fewshot generalisation.pdf', 'ul2 unifying language learning paradigms.pdf', 'boosting incontext learning with factual knowledge.pdf', 'learn to explore on bootstrapping interactive data exploration with metalearning.pdf', 'mind the instructions a holistic evaluation of consistency and interactions in promptbased learning.pdf', 'using incontext learning to improve dialogue safety.pdf', 'user simulation with large language models for evaluating taskoriented dialogue.pdf', 'discrete prompt optimization via constrained generation for zeroshot reranker.pdf', 'grips gradientfree, editbased instruction search for prompting large language models.pdf', 'unihd at tsar2022 shared task is compute all we need for lexical simplification.pdf', 'incontext learning with iterative demonstration selection.pdf', 'check your facts and try again improving large language models with external knowledge and automated feedback.pdf', 'genderspecific machine translation with large language models.pdf', 'large language models are pretty good zeroshot video game bug detectors.pdf', 'autoconv automatically generating informationseeking conversations with large language models.pdf', 'chainofdictionary prompting elicits translation in large language models.pdf', 'generate rather than retrieve large language models are strong context generators.pdf', 'selective annotation makes language models better fewshot learners.pdf', 'mitigating label biases for incontext learning.pdf', 'extracting multivalued relations from language models.pdf', 'a communication theory perspective on prompting engineering methods for large language models.pdf', 'connecting large language models with evolutionary algorithms yields powerful prompt optimizers.pdf', 'a simple baseline for knowledgebased visual question answering.pdf', 'resources and fewshot learners for incontext learning in slavic languages.pdf', 'large language models can implement policy iteration.pdf', 'demystifying prompts in language models via perplexity estimation.pdf', 'understanding incontext learning via supportive pretraining data.pdf', 'codecot and beyond learning to program and test like a developer.pdf', 'dricl demonstrationretrieved incontext learning.pdf', 'the mystery and fascination of llms a comprehensive survey on the interpretation and analysis of emergent abilities.pdf', 'can language models solve graph problems in natural language.pdf', 'fewshot reranking for multihop qa via language model prompting.pdf', 'the unreliability of explanations in fewshot prompting for textual reasoning.pdf', 'selfcheckgpt zeroresource blackbox hallucination detection for generative large language models.pdf', 'mondrian prompt abstraction attack against large language models for cheaper api 
pricing.pdf', 'what incontext learning learns incontext disentangling task recognition and task learning.pdf', 'mixture of soft prompts for controllable data generation.pdf', 'unleashing the potential of prompt engineering in large language models a comprehensive review.pdf', 'selfadaptive incontext learning an information compression perspective for incontext example selection and ordering.pdf', 'large language models can be lazy learners analyze shortcuts in incontext learning.pdf', 'exploring parameterefficient finetuning techniques for code generation with large language models.pdf', 'a systematic survey of prompt engineering on visionlanguage foundation models.pdf', 'optr exploring the role of explanations in finetuning and prompting for reasoning skills of large language models.pdf', 'how to unleash the power of large language models for fewshot relation extraction.pdf', 'rationaleaugmented ensembles in language models.pdf', 'incontext instruction learning.pdf', 'learning incontext learning for named entity recognition.pdf', 'dictionarybased phraselevel prompting of large language models for machine translation.pdf', 'mgpt fewshot learners go multilingual.pdf', 'prompt to be consistent is better than selfconsistent fewshot and zeroshot fact verification with pretrained language models.pdf', 'memoryefficient finetuning of compressed large language models via sub4bit integer quantization.pdf', 'finetuning language models with just forward passes.pdf', 'baseline defenses for adversarial attacks against aligned language models.pdf', 'efficient prompting via dynamic incontext learning.pdf', 'prompt injection attack against llmintegrated applications.pdf', 'exploring the intersection of large language models and agentbased modeling via prompt engineering.pdf', 'synthetic prompting generating chainofthought demonstrations for large language models.pdf', 'can we edit factual knowledge by incontext learning.pdf', 'instructed language models with retrievers are powerful entity linkers.pdf', 'understanding the effectiveness of very large language models on dialog evaluation.pdf', ""what's the magic word a control theory of llm prompting.pdf"", 'dspy compiling declarative language model calls into selfimproving pipelines.pdf', 'marked personas using natural language prompts to measure stereotypes in language models.pdf', 'alexatm 20b fewshot learning using a largescale multilingual seq2seq model.pdf', 'active example selection for incontext learning.pdf', 'learning performanceimproving code edits.pdf', 'folio natural language reasoning with firstorder logic.pdf', 'tallrec an effective and efficient tuning framework to align large language model with recommendation.pdf', 'can chatgpt detect intent evaluating large language models for spoken language understanding.pdf', 'chatgpt for plcdcs control logic generation.pdf', 'large language models are zeroshot reasoners.pdf', 'finding support examples for incontext learning.pdf', 'understanding how model size affects fewshot instruction prompting.pdf', 'controlled text generation with natural language instructions.pdf', 'incontext learning with many demonstration examples.pdf', 'chatgpt opens a new door for bioinformatics.pdf', 'can large language models truly understand prompts a case study with negated prompts.pdf']" -BLOOM,51,"['plugmed improving specificity in patientcentered medical dialogue generation using incontext learning.pdf', 'buffet benchmarking large language models for fewshot crosslingual transfer.pdf', 'crosslingual retrieval augmented 
incontext learning for bangla.pdf', 'measuring inductive biases of incontext learning with underspecified demonstrations.pdf', 'structured prompting scaling incontext learning to 1,000 examples.pdf', 'the adaio system at the bea2023 shared task on generating ai teacher responses in educational dialogues.pdf', 'multidimensional evaluation of text summarization with incontext learning.pdf', 'lowresource authorship style transfer can nonfamous authors be imitated.pdf', 'neural machine translation models can learn to be fewshot learners.pdf', 'ctqscorer combining multiple features for incontext example selection for machine translation.pdf', 'boosting crosslingual transferability in multilingual models via incontext learning.pdf', 'instructeval systematic evaluation of instruction selection methods.pdf', 'using chatgpt for entity matching.pdf', 'tabllm fewshot classification of tabular data with large language models.pdf', 'user simulation with large language models for evaluating taskoriented dialogue.pdf', 'grips gradientfree, editbased instruction search for prompting large language models.pdf', 'unihd at tsar2022 shared task is compute all we need for lexical simplification.pdf', 'a mechanism for solving relational tasks in transformer language models.pdf', 'entity matching using large language models.pdf', 'product information extraction using chatgpt.pdf', 'multimethod selftraining improving code generation with text, and vice versa.pdf', 'genderspecific machine translation with large language models.pdf', 'autoconv automatically generating informationseeking conversations with large language models.pdf', 'masakhanews news topic classification for african languages.pdf', 'chainofdictionary prompting elicits translation in large language models.pdf', 'revisiting nonenglish text simplification a unified multilingual benchmark.pdf', 'mitigating label biases for incontext learning.pdf', 'camoscio an italian instructiontuned llama.pdf', 'democratizing llms for lowresource languages by leveraging their english dominant abilities with linguisticallydiverse prompts.pdf', 'generative speech recognition error correction with large language models and taskactivating prompting.pdf', 'resources and fewshot learners for incontext learning in slavic languages.pdf', 'framing the newsfrom human perception to large language model inferences.pdf', 'dricl demonstrationretrieved incontext learning.pdf', 'the mystery and fascination of llms a comprehensive survey on the interpretation and analysis of emergent abilities.pdf', 'benchmarking arabic ai with large language models.pdf', 'adaptive machine translation with large language models.pdf', 'llmebench a flexible framework for accelerating llms benchmarking.pdf', 'selfadaptive incontext learning an information compression perspective for incontext example selection and ordering.pdf', 'teler a general taxonomy of llm prompts for benchmarking complex tasks.pdf', 'dictionarybased phraselevel prompting of large language models for machine translation.pdf', 'mgpt fewshot learners go multilingual.pdf', 'will it blend mixing training paradigms & prompting for argument quality prediction.pdf', 'understanding the effectiveness of very large language models on dialog evaluation.pdf', 'marked personas using natural language prompts to measure stereotypes in language models.pdf', 'alexatm 20b fewshot learning using a largescale multilingual seq2seq model.pdf', 'prompt, condition, and generate classification of unsupported claims with incontext learning.pdf', 
'argumentative stance prediction an exploratory study on multimodality and fewshot learning.pdf', 'decomposed prompting for machine translation between related languages using large language models.pdf', 'incontext learning with many demonstration examples.pdf', 'dissecting incontext learning of translations in gpts.pdf', 'towards effective disambiguation for machine translation with large language models.pdf']" -BLOOMZ,7,"['plugmed improving specificity in patientcentered medical dialogue generation using incontext learning.pdf', 'buffet benchmarking large language models for fewshot crosslingual transfer.pdf', 'crosslingual retrieval augmented incontext learning for bangla.pdf', 'benchmarking arabic ai with large language models.pdf', 'adaptive machine translation with large language models.pdf', 'llmebench a flexible framework for accelerating llms benchmarking.pdf', 'towards effective disambiguation for machine translation with large language models.pdf']" -BioBERT,21,"['plugmed improving specificity in patientcentered medical dialogue generation using incontext learning.pdf', 'identifying and extracting rare disease phenotypes with large language models.pdf', 'graphprompt biomedical entity normalization using graphbased prompt templates.pdf', 'do we still need clinical language models.pdf', 'are large language models ready for healthcare a comparative study on clinical language understanding.pdf', 'inboxbart get instructions into biomedical multitask learning.pdf', 'evaluation of chatgpt family of models for biomedical reasoning and classification.pdf', 'can language models be biomedical knowledge bases.pdf', 'an evaluation of gpt models for phenotype concept recognition.pdf', 'automated extraction and visualization of metabolic networks from biomedical literature using a large language model.pdf', 'language models are fewshot learners for prognostic prediction.pdf', 'chemical identification and indexing in pubmed articles via bert and texttotext approaches.pdf', 'mededit model editing for medical question answering with external knowledge bases.pdf', 'cohortgpt an enhanced gpt for participant recruitment in clinical study.pdf', 'lowresource multigranularity academic function recognition based on multiple prompt knowledge.pdf', 'impressiongpt an iterative optimizing framework for radiology report summarization with chatgpt.pdf', 'artificial intelligence for health message generation theory, method, and an empirical study using prompt engineering.pdf', 'multilevel finetuning, data augmentation, and fewshot learning for specialized cyber threat intelligence.pdf', 'healthprompt a zeroshot learning paradigm for clinical natural language processing.pdf', 'positionbased prompting for health outcome generation.pdf', 'can chatgpt understand causal language in science claims.pdf']" -BART,129,"['plugmed improving specificity in patientcentered medical dialogue generation using incontext learning.pdf', 'multilingual llms are better crosslingual incontext learners with alignment.pdf', 'automated fewshot classification with instructionfinetuned language models.pdf', 'unlocking the potential of chatgpt a comprehensive exploration of its applications, advantages, limitations, and future directions in natural language processing.pdf', 'fewshot joint multimodal aspectsentiment analysis based on generative multimodal prompt.pdf', 'effective test generation using pretrained large language models and mutation testing.pdf', 'an empirical evaluation of using large language models for automated unit test 
generation.pdf', 'toward unified controllable text generation via regular expression instruction.pdf', 'acecoder utilizing existing code to enhance code generation.pdf', 'the unreasonable effectiveness of fewshot learning for machine translation.pdf', 'what makes pretrained language models better zeroshot learners.pdf', 'can large language models be good path planners a benchmark and investigation on spatialtemporal reasoning.pdf', 'inverse is better! fast and accurate prompt for fewshot slot tagging.pdf', 'multidimensional evaluation of text summarization with incontext learning.pdf', ""reframing instructional prompts to gptk's language.pdf"", 'utilizing language models for energy load forecasting.pdf', 'arguments to key points mapping with promptbased learning.pdf', 'reranking for natural language generation from logical forms a study based on large language models.pdf', 'large language model prompt chaining for long legal document classification.pdf', 'selfprompting large language models for zeroshot opendomain qa.pdf', 'scalable prompt generation for semisupervised learning with language models.pdf', 'metricbased incontext learning a case study in text simplification.pdf', 'incontext learning for knowledge base question answering for unmanned systems based on large language models.pdf', 'bbtv2 towards a gradientfree future with large language models.pdf', 'what makes datatotext generation hard for pretrained language models.pdf', 'inboxbart get instructions into biomedical multitask learning.pdf', 'towards making the most of chatgpt for machine translation.pdf', 'zerotop zeroshot taskoriented semantic parsing using large language models.pdf', 'logicllm exploring selfsupervised logicenhanced training for large language models.pdf', 'simple llm prompting is stateoftheart for robust and multilingual dialogue evaluation.pdf', 'towards explainable conversational recommender systems.pdf', 'xricl crosslingual retrievalaugmented incontext learning for crosslingual texttosql semantic parsing.pdf', 'compositional semantic parsing with large language models.pdf', 'reasoning before responding integrating commonsensebased causality explanation for empathetic response generation.pdf', 'on bilingual lexicon induction with large language models.pdf', 'is chatgpt the ultimate programming assistant how far is it.pdf', 'incontext fewshot relation extraction via pretrained language models.pdf', 'wanglab at mediqachat 2023 clinical note generation from doctorpatient conversations using large language models.pdf', 'boosting incontext learning with factual knowledge.pdf', 'benchmarking large language model capabilities for conditional generation.pdf', 'crosscodebench benchmarking crosstask generalization of source code models.pdf', 'promptner prompt locating and typing for named entity recognition.pdf', 'user simulation with large language models for evaluating taskoriented dialogue.pdf', 'retrieving supporting evidence for generative question answering.pdf', 'calibrating llmbased evaluator.pdf', 'true fewshot learning with prompts a realworld perspective.pdf', 'exploring automatic evaluation methods based on a decoderbased llm for text generation.pdf', 'relationprompt leveraging prompts to generate synthetic data for zeroshot relation triplet extraction.pdf', 'allure auditing and improving llmbased evaluation of text using iterative incontextlearning.pdf', 'plum prompt learning using metaheuristic.pdf', 'good examples make a faster learner simple demonstrationbased learning for lowresource ner.pdf', 'check 
your facts and try again improving large language models with external knowledge and automated feedback.pdf', 'towards fewshot identification of morality frames using incontext learning.pdf', 'zara improving fewshot selfrationalization for small language models.pdf', 'exploring promptbased fewshot learning for grounded dialog generation.pdf', 'the student becomes the master matching gpt3 on scientific factual error correction.pdf', 'camoscio an italian instructiontuned llama.pdf', 'mutual reinforcement effects in japanese sentence classification and named entity recognition tasks.pdf', 'instruction tuning for fewshot aspectbased sentiment analysis.pdf', 'extracting multivalued relations from language models.pdf', 'divide and prompt chain of thought prompting for texttosql.pdf', 'global constraints with prompting for zeroshot event argument classification.pdf', 'a practical survey on zeroshot prompt design for incontext learning.pdf', 'linguist language model instruction tuning to generate annotated utterances for intent classification and slot tagging.pdf', 'comparative analysis of gpt4 and human graders in evaluating praise given to students in synthetic dialogues.pdf', 'large language modelaware incontext learning for code generation.pdf', 'sentence simplification via large language models.pdf', 'causal interventionbased prompt debiasing for event argument extraction.pdf', 'choice over control how users write with large language models using diegetic and nondiegetic prompting.pdf', 'generating training data with language models towards zeroshot language understanding.pdf', 'templatefree prompt tuning for fewshot ner.pdf', 'comparative analysis of gpt4 and human graders in evaluating human tutors giving praise to students.pdf', 'chatgpt for arabic grammatical error correction.pdf', 'proqa structural promptbased pretraining for unified question answering.pdf', 'datadriven approach for formalitysensitive machine translation languagespecific handling and synthetic data generation.pdf', 'instructionner a multitask instructionbased generative framework for fewshot ner.pdf', 'fewshot queryfocused summarization with prefixmerging.pdf', 'fewshot training llms for projectspecific codesummarization.pdf', 'adapting prompt for fewshot tabletotext generation.pdf', 'selfcheckgpt zeroresource blackbox hallucination detection for generative large language models.pdf', 'mondrian prompt abstraction attack against large language models for cheaper api pricing.pdf', 'mixture of soft prompts for controllable data generation.pdf', 'does correction remain a problem for large language models.pdf', 'unleashing the potential of prompt engineering in large language models a comprehensive review.pdf', 'jampatoisnli a jamaican patois natural language inference dataset.pdf', 'robust retrieval augmented generation for zeroshot slot filling.pdf', 'better integrating vision and semantics for improving fewshot classification.pdf', 'cins comprehensive instruction for fewshot learning in taskoriented dialog systems.pdf', 'corrpus codebased structured prompting for neurosymbolic story understanding.pdf', 'a systematic survey of prompt engineering on visionlanguage foundation models.pdf', 'a unified framework for multiintent spoken language understanding with prompting.pdf', 'multimodal prompt learning for product title generation with extremely limited labels.pdf', 'adaprompt adaptive model training for promptbased nlp.pdf', 'learning incontext learning for named entity recognition.pdf', 'mgpt fewshot learners go 
multilingual.pdf', 'hicl hashtagdriven incontext learning for social media natural language understanding.pdf', 'generating medicallyaccurate summaries of patientprovider dialogue a multistage approach using large language models.pdf', 'harnessing the power of large language models for empathetic response generation empirical investigations and improvements.pdf', 'discrete and soft prompting for multilingual models.pdf', 'spec a soft promptbased calibration on mitigating performance variability in clinical notes summarization.pdf', 'rethinking the role of demonstrations what makes incontext learning work.pdf', 'claret pretraining a correlationaware contexttoevent transformer for eventcentric generation and classification.pdf', 'towards llmbased fact verification on news claims with a hierarchical stepbystep prompting method.pdf', 'what makes good incontext examples for gpt$3$.pdf', 'unified lowresource sequence labeling by sampleaware dynamic sparse finetuning.pdf', 'stabilized incontext learning with pretrained language models for few shot dialogue state tracking.pdf', 'prompt engineering or fine tuning an empirical assessment of large language models in automated software engineering tasks.pdf', 'instructed language models with retrievers are powerful entity linkers.pdf', 'impressiongpt an iterative optimizing framework for radiology report summarization with chatgpt.pdf', 'llm4vv developing llmdriven testsuite for compiler validation.pdf', 'paraamr a largescale syntactically diverse paraphrase dataset by amr backtranslation.pdf', 'alexatm 20b fewshot learning using a largescale multilingual seq2seq model.pdf', 's$^3$hqa a threestage approach for multihop texttable hybrid question answering.pdf', 'demonstrations are all you need advancing offensive content paraphrasing using incontext learning.pdf', 'what makes good incontext demonstrations for code intelligence tasks with llms.pdf', 'retrieverewriteanswer a kgtotext enhanced llms framework for knowledge graph question answering.pdf', 'scifix outperforming gpt3 on scientific factual error correction.pdf', 'prompt2model generating deployable models from natural language instructions.pdf', 'healthprompt a zeroshot learning paradigm for clinical natural language processing.pdf', 'robut a systematic study of table qa robustness against humanannotated adversarial perturbations.pdf', 'fewfedweight fewshot federated learning framework across multiple nlp tasks.pdf', 'raft a realworld fewshot text classification benchmark.pdf', 'decomposed prompting for machine translation between related languages using large language models.pdf', 'a multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity.pdf', 'cup curriculum learning based prompt tuning for implicit event argument extraction.pdf', 'prefinetuning for fewshot emotional speech recognition.pdf', 'modelling latent translations for crosslingual transfer.pdf', 'parameterefficient crosslingual transfer of vision and language models via translationbased alignment.pdf', 'towards effective disambiguation for machine translation with large language models.pdf']" -RoBERTa,179,"['multilingual llms are better crosslingual incontext learners with alignment.pdf', 'efficient open domain multihop question answering with fewshot data synthesis.pdf', 'casteist but not racist quantifying disparities in large language model bias between india and the west.pdf', 'automated fewshot classification with instructionfinetuned language models.pdf', 'text classification via 
large language models.pdf', 'measuring inductive biases of incontext learning with underspecified demonstrations.pdf', 'red teaming language model detectors with language models.pdf', 'unlocking the potential of chatgpt a comprehensive exploration of its applications, advantages, limitations, and future directions in natural language processing.pdf', 'pretraining to learn in context.pdf', 'towards unified prompt tuning for fewshot text classification.pdf', 'consprompt easily exploiting contrastive samples for fewshot prompt learning.pdf', 'the utility of large language models and generative ai for education research.pdf', 'udapdr unsupervised domain adaptation via llm prompting and distillation of rerankers.pdf', 'factchecking complex claims with programguided reasoning.pdf', 'towards zerolabel language learning.pdf', 'emotionconditioned text generation through automatic prompt optimization.pdf', 'what makes pretrained language models better zeroshot learners.pdf', 'hqp a humanannotated dataset for detecting online propaganda.pdf', 'unified demonstration retriever for incontext learning.pdf', 'arguments to key points mapping with promptbased learning.pdf', 'reranking for natural language generation from logical forms a study based on large language models.pdf', 'automatic label sequence generation for prompting sequencetosequence models.pdf', 'ctqscorer combining multiple features for incontext example selection for machine translation.pdf', 'log parsing with promptbased fewshot learning.pdf', 'let me check the examples enhancing demonstration learning via explicit imitation.pdf', 'continual training of language models for fewshot learning.pdf', 'do we still need clinical language models.pdf', 'scalable prompt generation for semisupervised learning with language models.pdf', 'boosting crosslingual transferability in multilingual models via incontext learning.pdf', 'getting sick after seeing a doctor diagnosing and mitigating knowledge conflicts in event temporal reasoning.pdf', 'metricbased incontext learning a case study in text simplification.pdf', ""don't stop pretraining make promptbased finetuning powerful learner.pdf"", 'are large language models ready for healthcare a comparative study on clinical language understanding.pdf', 'cotbert enhancing unsupervised sentence representation through chainofthought.pdf', 'incontext learning for knowledge base question answering for unmanned systems based on large language models.pdf', 'flocks of stochastic parrots differentially private prompt learning for large language models.pdf', 'towards using fewshot prompt learning for automating model completion.pdf', 'bbtv2 towards a gradientfree future with large language models.pdf', 'small models are valuable plugins for large language models.pdf', 'pair programming with large language models for sampling and estimation of copulas.pdf', 'attempt parameterefficient multitask tuning via attentional mixtures of soft prompts.pdf', 'using chatgpt for entity matching.pdf', 'easynlp a comprehensive and easytouse toolkit for natural language processing.pdf', 'fewshot anaphora resolution in scientific protocols via mixtures of incontext experts.pdf', 'zerotop zeroshot taskoriented semantic parsing using large language models.pdf', 'conqx semantic expansion of spoken queries for intent detection based on conditioned text generation.pdf', 'logicllm exploring selfsupervised logicenhanced training for large language models.pdf', 'simple llm prompting is stateoftheart for robust and multilingual dialogue 
evaluation.pdf', 'can language models be biomedical knowledge bases.pdf', 'tree of clarifications answering ambiguous questions with retrievalaugmented large language models.pdf', 'hintenhanced incontext learning wakes large language models up for knowledgeintensive tasks.pdf', 'reasoning before responding integrating commonsensebased causality explanation for empathetic response generation.pdf', 'incontext fewshot relation extraction via pretrained language models.pdf', 'boosting incontext learning with factual knowledge.pdf', 'sentiment analysis through llm negotiations.pdf', 'dialogue for prompting a policygradientbased discrete prompt optimization for fewshot learning.pdf', 'rethinking the event coding pipeline with prompt entailment.pdf', 'zeroshot temporal relation extraction with chatgpt.pdf', 'instanceaware prompt learning for language understanding and generation.pdf', 'meal stable and active learning for fewshot prompting.pdf', 'zero and fewshot prompting with llms a comparative study with finetuned models for bangla sentiment analysis.pdf', 'efficient blackbox adversarial attacks on neural text detectors.pdf', 'are hard examples also harder to explain a study with human and modelgenerated explanations.pdf', 'mind the instructions a holistic evaluation of consistency and interactions in promptbased learning.pdf', 'fewclue a chinese fewshot learning evaluation benchmark.pdf', 'beyond factuality a comprehensive evaluation of large language models as knowledge generators.pdf', 'promptner prompt locating and typing for named entity recognition.pdf', 'towards legally enforceable hate speech detection for public forums.pdf', 'using incontext learning to improve dialogue safety.pdf', 'is chatgpt a good causal reasoner a comprehensive evaluation.pdf', 'true fewshot learning with prompts a realworld perspective.pdf', 'exploring automatic evaluation methods based on a decoderbased llm for text generation.pdf', 'revisiting automated prompting are we actually doing better.pdf', 'the impact of symbolic representations on incontext learning for fewshot reasoning.pdf', 'fewshot stance detection via targetaware prompt distillation.pdf', 'knowledgedriven cot exploring faithful reasoning in llms for knowledgeintensive question answering.pdf', 'plum prompt learning using metaheuristic.pdf', 'latent jailbreak a benchmark for evaluating text safety and output robustness of large language models.pdf', 'entity matching using large language models.pdf', 'towards fewshot identification of morality frames using incontext learning.pdf', 'contextual biasing of namedentities with large language models.pdf', 'zara improving fewshot selfrationalization for small language models.pdf', 'masakhanews news topic classification for african languages.pdf', 'an explanation of incontext learning as implicit bayesian inference.pdf', 'iienlpnut at semeval2020 task 4 guiding plm with prompt template reconstruction strategy for comve.pdf', 'narrowing the gap between supervised and unsupervised sentence representation learning with large language model.pdf', 'selective annotation makes language models better fewshot learners.pdf', 'mitigating label biases for incontext learning.pdf', 'transferring procedural knowledge across commonsense tasks.pdf', 'the inductive bias of incontext learning rethinking pretraining example design.pdf', 'pretrained tokenreplaced detection model as fewshot learner.pdf', 'rethink the effectiveness of text data augmentation an empirical analysis.pdf', 'what does the failure to reason with 
respectively in zerofewshot settings tell us about language models.pdf', 'extracting multivalued relations from language models.pdf', 'understanding incontext learning via supportive pretraining data.pdf', 'global constraints with prompting for zeroshot event argument classification.pdf', 'linking microblogging sentiments to stock price movement an application of gpt4.pdf', 'framing the newsfrom human perception to large language model inferences.pdf', 'a practical survey on zeroshot prompt design for incontext learning.pdf', 'promptbased learning for thread structure prediction in cybersecurity forums.pdf', 'dricl demonstrationretrieved incontext learning.pdf', 'legal prompting teaching a language model to think like a lawyer.pdf', 'a fewshot approach to resume information extraction via prompts.pdf', 'adversarial robustness of promptbased fewshot learning for natural language understanding.pdf', 'relation extraction as openbook examination retrievalenhanced prompt tuning.pdf', 'the mystery and fascination of llms a comprehensive survey on the interpretation and analysis of emergent abilities.pdf', 'mededit model editing for medical question answering with external knowledge bases.pdf', 'making large language models better data creators.pdf', 'selective demonstrations for crossdomain texttosql.pdf', 'generating training data with language models towards zeroshot language understanding.pdf', 'gpachov at checkthat! 2023 a diverse multiapproach ensemble for subjectivity detection in news articles.pdf', 'fewshot reranking for multihop qa via language model prompting.pdf', 'towards zeroshot and fewshot table question answering using gpt3.pdf', 'reordering examples helps during primingbased fewshot learning.pdf', 'lowresource multigranularity academic function recognition based on multiple prompt knowledge.pdf', 'the unreliability of explanations in fewshot prompting for textual reasoning.pdf', 'selfcheckgpt zeroresource blackbox hallucination detection for generative large language models.pdf', 'retrievalaugmented generation to improve math questionanswering tradeoffs between groundedness and human preference.pdf', 'mixture of soft prompts for controllable data generation.pdf', 'jampatoisnli a jamaican patois natural language inference dataset.pdf', 'large language models for propaganda detection.pdf', 'ccprompt counterfactual contrastive prompttuning for manyclass classification.pdf', 'automatic multilabel prompting simple and interpretable fewshot classification.pdf', 'low resource pipeline for spoken language understanding via weak supervision.pdf', 'a unified framework for multiintent spoken language understanding with prompting.pdf', 'zeroshot approach to overcome perturbation sensitivity of prompts.pdf', 'detecting natural language biases with promptbased learning.pdf', 'toolkengpt augmenting frozen language models with massive tools via tool embeddings.pdf', 'adelt transpilation between deep learning frameworks.pdf', 'how to unleash the power of large language models for fewshot relation extraction.pdf', 'list lite prompted selftraining makes parameterefficient fewshot learners.pdf', 'statistical depth for ranking and characterizing transformerbased text embeddings.pdf', 'language quantized autoencoders towards unsupervised textimage alignment.pdf', 'events realm event reasoning of entity states via language models.pdf', 'will it blend mixing training paradigms & prompting for argument quality prediction.pdf', 'how good are commercial large language models on african languages.pdf', 
'prompt to be consistent is better than selfconsistent fewshot and zeroshot fact verification with pretrained language models.pdf', 'hicl hashtagdriven incontext learning for social media natural language understanding.pdf', 'gpts at factify 2022 prompt aided factverification.pdf', 'generating efficient training data via llmbased attribute manipulation.pdf', 'contextfaithful prompting for large language models.pdf', 'discrete and soft prompting for multilingual models.pdf', 'finetuning language models with just forward passes.pdf', 'promptda labelguided data augmentation for promptbased fewshot learners.pdf', 'do language models learn about legal entity types during pretraining.pdf', 'stt soft template tuning for fewshot adaptation.pdf', 'cheapfake detection with llm using prompt engineering.pdf', 'claret pretraining a correlationaware contexttoevent transformer for eventcentric generation and classification.pdf', 'prototypical verbalizer for promptbased fewshot tuning.pdf', 'what makes good incontext examples for gpt$3$.pdf', 'stprompt semanticguided and taskdriven prompts for effective fewshot classification.pdf', 'tram benchmarking temporal reasoning for large language models.pdf', 'understanding the effectiveness of very large language models on dialog evaluation.pdf', 'chatgpt evaluation on sentence level relations a focus on temporal, causal, and discourse relations.pdf', 'active example selection for incontext learning.pdf', 'making large language models better reasoners with stepaware verifier.pdf', 'tempera testtime prompting via reinforcement learning.pdf', 'multilevel finetuning, data augmentation, and fewshot learning for specialized cyber threat intelligence.pdf', 'sociocultural knowledge is needed for selection of shots in hate speech detection tasks.pdf', 'folio natural language reasoning with firstorder logic.pdf', 'can large language models design accurate label functions.pdf', 'stance detection with supervised, zeroshot, and fewshot applications.pdf', 'healthprompt a zeroshot learning paradigm for clinical natural language processing.pdf', 'prodigy enabling incontext learning over graphs.pdf', 'mindwatch a smart cloudbased ai solution for suicide ideation detection leveraging large language models.pdf', 'promptandrerank a method for zeroshot and fewshot arbitrary textual style transfer with small language models.pdf', 'llmaugmented preference learning from natural language.pdf', 'argumentative stance prediction an exploratory study on multimodality and fewshot learning.pdf', 'prefinetuning for fewshot emotional speech recognition.pdf', 'a promptbased fewshot learning approach to software conflict detection.pdf', 'controlled text generation with natural language instructions.pdf', 'alt towards finegrained alignment between language and ctr models for clickthrough rate prediction.pdf', 'clickprompt ctr models are strong prompt generators for adapting language models to ctr prediction.pdf', 'fewshot adaptation for parsing contextual utterances with llms.pdf', 'prompting palm for translation assessing strategies and performance.pdf', 'modelling latent translations for crosslingual transfer.pdf', 'semanticoriented unlabeled priming for largescale language models.pdf', 'breaking the bank with chatgpt fewshot text classification for finance.pdf', 'dynamar dynamic prompt with mask token representation.pdf']" -LLaMA,132,"['efficient open domain multihop question answering with fewshot data synthesis.pdf', 'coveragebased example selection for incontext learning.pdf', 'casteist but 
not racist quantifying disparities in large language model bias between india and the west.pdf', 'game of tones faculty detection of gpt4 generated content in university assessments.pdf', 'identifying and extracting rare disease phenotypes with large language models.pdf', 'leveraging large language models for exploiting asr uncertainty.pdf', 'attack prompt generation for red teaming and defending large language models.pdf', 'red teaming language model detectors with language models.pdf', 'large language models are zeroshot rankers for recommender systems.pdf', 'affect recognition in conversations using large language models.pdf', 'large language models for failure mode classification an investigation.pdf', 'acecoder utilizing existing code to enhance code generation.pdf', 'large language models can be used to effectively scale spear phishing campaigns.pdf', 'what makes pretrained language models better zeroshot learners.pdf', 'building emotional support chatbots in the era of llms.pdf', 'llmlingua compressing prompts for accelerated inference of large language models.pdf', 'learning to retrieve incontext examples for large language models.pdf', 'salmon selfalignment with principlefollowing reward models.pdf', 'lorahub efficient crosstask generalization via dynamic lora composition.pdf', 'selfprompting large language models for zeroshot opendomain qa.pdf', 'making language models better tool learners with execution feedback.pdf', 'sentiment analysis in the era of large language models a reality check.pdf', 'incontext learning for knowledge base question answering for unmanned systems based on large language models.pdf', 'annollm making large language models to be better crowdsourced annotators.pdf', 'instructeval systematic evaluation of instruction selection methods.pdf', 'discrete prompt compression with reinforcement learning.pdf', 'autonomous treesearch ability of large language models.pdf', 'logicllm exploring selfsupervised logicenhanced training for large language models.pdf', 'alpacafarm a simulation framework for methods that learn from human feedback.pdf', 'benchmarking cognitive biases in large language models as evaluators.pdf', 'steering large language models for machine translation with finetuning and incontext learning.pdf', 'hintenhanced incontext learning wakes large language models up for knowledgeintensive tasks.pdf', 'on bilingual lexicon induction with large language models.pdf', 'blsp bootstrapping languagespeech pretraining via behavior alignment of continuation writing.pdf', 'query2doc query expansion with large language models.pdf', 'solving and generating npr sunday puzzles with large language models.pdf', 'llm4dyg can large language models solve problems on dynamic graphs.pdf', 'i was blind but now i see implementing visionenabled dialogue in social robots.pdf', 'noisy exemplars make large language models more robust a domainagnostic behavioral analysis.pdf', ""do large language models know what they don't know.pdf"", 'expertprompting instructing large language models to be distinguished experts.pdf', 'mind the instructions a holistic evaluation of consistency and interactions in promptbased learning.pdf', 'using natural language explanations to improve robustness of incontext learning for natural language inference.pdf', 'beyond factuality a comprehensive evaluation of large language models as knowledge generators.pdf', 'do physicians know how to prompt the need for automatic prompt optimization help in clinical note generation.pdf', 'using incontext learning to 
improve dialogue safety.pdf', 'user simulation with large language models for evaluating taskoriented dialogue.pdf', 'is chatgpt a good causal reasoner a comprehensive evaluation.pdf', 'rcot detecting and rectifying factual inconsistency in reasoning by reversing chainofthought.pdf', 'prompts should not be seen as secrets systematically measuring prompt extraction attack success.pdf', 'review of large vision models and visual prompt engineering.pdf', 'a prefrontal cortexinspired architecture for planning in large language models.pdf', 'exploring automatic evaluation methods based on a decoderbased llm for text generation.pdf', 'allure auditing and improving llmbased evaluation of text using iterative incontextlearning.pdf', 'knowledgedriven cot exploring faithful reasoning in llms for knowledgeintensive question answering.pdf', 'latent jailbreak a benchmark for evaluating text safety and output robustness of large language models.pdf', 'multilingual mathematical autoformalization.pdf', 'genderspecific machine translation with large language models.pdf', 'contextual biasing of namedentities with large language models.pdf', 's3 socialnetwork simulation system with large language modelempowered agents.pdf', 'structured chainofthought prompting for code generation.pdf', 'camoscio an italian instructiontuned llama.pdf', 'what does the failure to reason with respectively in zerofewshot settings tell us about language models.pdf', 'democratizing llms for lowresource languages by leveraging their english dominant abilities with linguisticallydiverse prompts.pdf', 'generative speech recognition error correction with large language models and taskactivating prompting.pdf', 'a simple baseline for knowledgebased visual question answering.pdf', 'converser fewshot conversational dense retrieval with synthetic data generation.pdf', 'the mystery and fascination of llms a comprehensive survey on the interpretation and analysis of emergent abilities.pdf', 'incontext learning user simulators for taskoriented dialog systems.pdf', 'poe process of elimination for multiple choice reasoning.pdf', 'mededit model editing for medical question answering with external knowledge bases.pdf', 'large language models meet openworld intent discovery and recognition an evaluation of chatgpt.pdf', 'harnessing explanations llmtolm interpreter for enhanced textattributed graph representation learning.pdf', 'ten quick tips for harnessing the power of chatgptgpt4 in computational biology.pdf', 'selective demonstrations for crossdomain texttosql.pdf', 'promptengineering and transformerbased question generation and evaluation.pdf', 'abscribe rapid exploration of multiple writing variations in humanai cowriting tasks using large language models.pdf', 'chatgpt for arabic grammatical error correction.pdf', 'cohortgpt an enhanced gpt for participant recruitment in clinical study.pdf', 'diagnosing infeasible optimization problems using large language models.pdf', 'adaptive machine translation with large language models.pdf', 'a search for prompts generating structured answers from contracts.pdf', 'selfcheckgpt zeroresource blackbox hallucination detection for generative large language models.pdf', 'llmintheloop leveraging large language model for thematic analysis.pdf', 'llmfuncmapper function identification for interpreting complex clauses in building codes via llm.pdf', 'what incontext learning learns incontext disentangling task recognition and task learning.pdf', 'qualifying chinese medical licensing examination with knowledge 
enhanced generative pretraining model.pdf', 'unleashing the potential of prompt engineering in large language models a comprehensive review.pdf', 'prompt injection attacks and defenses in llmintegrated applications.pdf', 'large language models for propaganda detection.pdf', 'ensembleinstruct generating instructiontuning data with a heterogeneous mixture of lms.pdf', 'booookscore a systematic exploration of booklength summarization in the era of llms.pdf', 'teler a general taxonomy of llm prompts for benchmarking complex tasks.pdf', 'selfprompted chainofthought on large language models for opendomain multihop reasoning.pdf', 'toolkengpt augmenting frozen language models with massive tools via tool embeddings.pdf', 'blackbox prompt optimization aligning large language models without model training.pdf', 'prompt to be consistent is better than selfconsistent fewshot and zeroshot fact verification with pretrained language models.pdf', 'memoryefficient finetuning of compressed large language models via sub4bit integer quantization.pdf', 'are structural concepts universal in transformer language models towards interpretable crosslingual generalization.pdf', 'contextfaithful prompting for large language models.pdf', 'harnessing the power of large language models for empathetic response generation empirical investigations and improvements.pdf', 'adversarial demonstration attacks on large language models.pdf', 'baseline defenses for adversarial attacks against aligned language models.pdf', 'prompt injection attack against llmintegrated applications.pdf', 'think before you speak cultivating communication skills of large language models via inner monologue.pdf', 'evaluating the instructionfollowing robustness of large language models to prompt injection.pdf', 'prompt engineering or fine tuning an empirical assessment of large language models in automated software engineering tasks.pdf', 'instructed language models with retrievers are powerful entity linkers.pdf', 'fixing rust compilation errors using llms.pdf', 'right to be forgotten in the era of large language models implications, challenges, and solutions.pdf', 'impressiongpt an iterative optimizing framework for radiology report summarization with chatgpt.pdf', 'do emergent abilities exist in quantized large language models an empirical study.pdf', 'can large language models design accurate label functions.pdf', 'sensitivity and robustness of large language models to prompt template in japanese text classification tasks.pdf', 'retrieverewriteanswer a kgtotext enhanced llms framework for knowledge graph question answering.pdf', 'tallrec an effective and efficient tuning framework to align large language model with recommendation.pdf', 'knowledgeprompted estimator a novel approach to explainable machine translation assessment.pdf', 'jailbreaking chatgpt via prompt engineering an empirical study.pdf', 'cyclealign iterative distillation from blackbox llm to whitebox models for better human alignment.pdf', 'llmaugmented preference learning from natural language.pdf', 'a benchmark for learning to translate a new language from one grammar book.pdf', 'argumentative stance prediction an exploratory study on multimodality and fewshot learning.pdf', 'prompt position really matters in fewshot and zeroshot nlu tasks.pdf', 'promptbased length controlled generation with reinforcement learning.pdf', 'an incontext schema understanding method for knowledge base question answering.pdf', 'a new dataset and empirical study for sentence simplification in chinese.pdf', 
'llm self defense by self examination, llms know they are being tricked.pdf', 'large language models as data preprocessors.pdf', 'the devil is in the errors leveraging large language models for finegrained machine translation evaluation.pdf', 'cosmic data efficient instructiontuning for speech incontext learning.pdf', 'breaking the bank with chatgpt fewshot text classification for finance.pdf', 'towards effective disambiguation for machine translation with large language models.pdf']" -PaLM,153,"['efficient open domain multihop question answering with fewshot data synthesis.pdf', 'mathprompter mathematical reasoning using large language models.pdf', 'red teaming language model detectors with language models.pdf', 'lambada backward chaining for automated reasoning in natural language.pdf', 'structured prompting scaling incontext learning to 1,000 examples.pdf', 'beyond yes and no improving zeroshot llm rankers via scoring finegrained relevance labels.pdf', 'large language models can be used to effectively scale spear phishing campaigns.pdf', 'the unreasonable effectiveness of fewshot learning for machine translation.pdf', 'llmlingua compressing prompts for accelerated inference of large language models.pdf', 'sqlprompt incontext texttosql with minimal labeled data.pdf', 'salmon selfalignment with principlefollowing reward models.pdf', 'sqlpalm improved large language model adaptation for texttosql.pdf', 'neural machine translation models can learn to be fewshot learners.pdf', 'selfprompting large language models for zeroshot opendomain qa.pdf', 'fireact toward language agent finetuning.pdf', 'boosted prompt ensembles for large language models.pdf', 'sentiment analysis in the era of large language models a reality check.pdf', 'cotbert enhancing unsupervised sentence representation through chainofthought.pdf', 'incontext learning for knowledge base question answering for unmanned systems based on large language models.pdf', ""harnessing large language models' empathetic response generation capabilities for online mental health counselling support.pdf"", 'annollm making large language models to be better crowdsourced annotators.pdf', 'pair programming with large language models for sampling and estimation of copulas.pdf', 'attempt parameterefficient multitask tuning via attentional mixtures of soft prompts.pdf', 'using chatgpt for entity matching.pdf', 'autonomous treesearch ability of large language models.pdf', 'evaluation of chatgpt family of models for biomedical reasoning and classification.pdf', 'easynlp a comprehensive and easytouse toolkit for natural language processing.pdf', 'interact exploring the potentials of chatgpt as a cooperative agent.pdf', 'logicllm exploring selfsupervised logicenhanced training for large language models.pdf', 'satisfiabilityaided language models using declarative prompting.pdf', 'tree of clarifications answering ambiguous questions with retrievalaugmented large language models.pdf', 'explainable claim verification via knowledgegrounded reasoning with large language models.pdf', 'promptbased extraction of social determinants of health using fewshot learning.pdf', 'on bilingual lexicon induction with large language models.pdf', 'textbooks are all you need ii phi15 technical report.pdf', ""two timin' repairing smart contracts with a twolayered approach.pdf"", 'blsp bootstrapping languagespeech pretraining via behavior alignment of continuation writing.pdf', 'bootstrapping multilingual semantic parsers using large language models.pdf', 'selfevolve a code 
evolution framework via large language models.pdf', 'ul2 unifying language learning paradigms.pdf', 'chainforge a visual toolkit for prompt engineering and llm hypothesis testing.pdf', 'benchmarking large language model capabilities for conditional generation.pdf', 'query2doc query expansion with large language models.pdf', 'jen1 textguided universal music generation with omnidirectional diffusion models.pdf', ""do large language models know what they don't know.pdf"", 'do gpts produce less literal translations.pdf', 'ambiguityaware incontext learning with large language models.pdf', 'beyond factuality a comprehensive evaluation of large language models as knowledge generators.pdf', 'do physicians know how to prompt the need for automatic prompt optimization help in clinical note generation.pdf', 'automated extraction and visualization of metabolic networks from biomedical literature using a large language model.pdf', 'rcot detecting and rectifying factual inconsistency in reasoning by reversing chainofthought.pdf', 'prompts should not be seen as secrets systematically measuring prompt extraction attack success.pdf', 'review of large vision models and visual prompt engineering.pdf', 'exploring automatic evaluation methods based on a decoderbased llm for text generation.pdf', 'the impact of symbolic representations on incontext learning for fewshot reasoning.pdf', 'selfplanning code generation with large language models.pdf', 'knowledge graph completion models are fewshot learners an empirical study of relation labeling in ecommerce with llms.pdf', 'check your facts and try again improving large language models with external knowledge and automated feedback.pdf', 'latent jailbreak a benchmark for evaluating text safety and output robustness of large language models.pdf', 'entity matching using large language models.pdf', 'product information extraction using chatgpt.pdf', 'language model cascades.pdf', 's3 socialnetwork simulation system with large language modelempowered agents.pdf', 'chainofdictionary prompting elicits translation in large language models.pdf', 'generate rather than retrieve large language models are strong context generators.pdf', 'mitigating label biases for incontext learning.pdf', 'enable language models to implicitly learn selfimprovement from data.pdf', 'genegpt augmenting large language models with domain tools for improved access to biomedical information.pdf', 'what does the failure to reason with respectively in zerofewshot settings tell us about language models.pdf', 'using large language models for cybersecurity capturetheflag challenges and certification questions.pdf', 'a closer look at incontext learning under distribution shifts.pdf', 'divide and prompt chain of thought prompting for texttosql.pdf', 'codecot and beyond learning to program and test like a developer.pdf', 'improving fewshot generalization of safety classifiers via data augmented parameterefficient finetuning.pdf', 'a practical survey on zeroshot prompt design for incontext learning.pdf', 'how far are large language models from agents with theoryofmind.pdf', 'dricl demonstrationretrieved incontext learning.pdf', 'honest students from untrusted teachers learning an interpretable questionanswering pipeline from a pretrained language model.pdf', 'automatic prompt rewriting for personalized text generation.pdf', 'the mystery and fascination of llms a comprehensive survey on the interpretation and analysis of emergent abilities.pdf', 'large language models meet openworld intent discovery and 
recognition an evaluation of chatgpt.pdf', 'harnessing explanations llmtolm interpreter for enhanced textattributed graph representation learning.pdf', 'selective demonstrations for crossdomain texttosql.pdf', 'promptengineering and transformerbased question generation and evaluation.pdf', 'batch prompting efficient inference with large language model apis.pdf', 'atlas fewshot learning with retrieval augmented language models.pdf', 'abscribe rapid exploration of multiple writing variations in humanai cowriting tasks using large language models.pdf', 'diverse demonstrations improve incontext compositional generalization.pdf', 'humans in humans out on gpt converging toward common sense in both success and failure.pdf', 'retrievalaugmented gpt35based texttosql framework with sampleaware prompting and dynamic revision chain.pdf', 'adaptive machine translation with large language models.pdf', 'a search for prompts generating structured answers from contracts.pdf', 'the unreliability of explanations in fewshot prompting for textual reasoning.pdf', 'selfcheckgpt zeroresource blackbox hallucination detection for generative large language models.pdf', 'grammar prompting for domainspecific language generation with large language models.pdf', 'mondrian prompt abstraction attack against large language models for cheaper api pricing.pdf', 'enhancing small medical learners with privacypreserving contextual prompting.pdf', 'qualifying chinese medical licensing examination with knowledge enhanced generative pretraining model.pdf', 'unleashing the potential of prompt engineering in large language models a comprehensive review.pdf', 'prompt injection attacks and defenses in llmintegrated applications.pdf', 'visualizing linguistic diversity of text datasets synthesized by large language models.pdf', 'teler a general taxonomy of llm prompts for benchmarking complex tasks.pdf', 'selfprompted chainofthought on large language models for opendomain multihop reasoning.pdf', 'rationaleaugmented ensembles in language models.pdf', 'how good are commercial large language models on african languages.pdf', 'small language models improve giants by rewriting their outputs.pdf', 'prompt to be consistent is better than selfconsistent fewshot and zeroshot fact verification with pretrained language models.pdf', 'memoryefficient finetuning of compressed large language models via sub4bit integer quantization.pdf', 'harnessing the power of large language models for empathetic response generation empirical investigations and improvements.pdf', 'art automatic multistep reasoning and tooluse for large language models.pdf', 'efficient prompting via dynamic incontext learning.pdf', 'prompt injection attack against llmintegrated applications.pdf', 'unraveling chatgpt a critical analysis of aigenerated goaloriented dialogues and annotations.pdf', 'leveraging training data in fewshot prompting for numerical reasoning.pdf', 'synthetic prompting generating chainofthought demonstrations for large language models.pdf', 'compositional exemplars for incontext learning.pdf', 'selficl zeroshot incontext learning with selfgenerated demonstrations.pdf', 'tram benchmarking temporal reasoning for large language models.pdf', 'prompt engineering or fine tuning an empirical assessment of large language models in automated software engineering tasks.pdf', 'fixing rust compilation errors using llms.pdf', 'right to be forgotten in the era of large language models implications, challenges, and solutions.pdf', 'impressiongpt an iterative optimizing 
framework for radiology report summarization with chatgpt.pdf', 'what do llms know about financial markets a case study on reddit market sentiment analysis.pdf', 'dspy compiling declarative language model calls into selfimproving pipelines.pdf', 'zeroshot learning with minimum instruction to extract social determinants and family history from clinical notes using gpt model.pdf', 'explicit knowledge transfer for weaklysupervised code generation.pdf', 'alexatm 20b fewshot learning using a largescale multilingual seq2seq model.pdf', 'making large language models better reasoners with stepaware verifier.pdf', 'gembamqm detecting translation quality error spans with gpt4.pdf', 'fiat fusing learning paradigms with instructionaccelerated tuning.pdf', 'dail data augmentation for incontext learning via selfparaphrase.pdf', 'tallrec an effective and efficient tuning framework to align large language model with recommendation.pdf', 'can chatgpt detect intent evaluating large language models for spoken language understanding.pdf', 'larger language models do incontext learning differently.pdf', 'causallm is not optimal for incontext learning.pdf', 'large language models are zeroshot reasoners.pdf', 'llmaugmented preference learning from natural language.pdf', 'decomposed prompting for machine translation between related languages using large language models.pdf', 'terminologyaware translation with constrained decoding and large language model prompting.pdf', 'incontext learning with many demonstration examples.pdf', ""impossible triangle what's next for pretrained language models.pdf"", 'metareasoning semanticssymbol deconstruction for large language models.pdf', 'fill in the blank exploring and enhancing llm capabilities for backward reasoning in math word problems.pdf', 'prompting palm for translation assessing strategies and performance.pdf', 'gpt is becoming a turing machine here are some ways to program it.pdf', 'the devil is in the errors leveraging large language models for finegrained machine translation evaluation.pdf', 'evaluation of gpt35 and gpt4 for supporting realworld information needs in healthcare delivery.pdf', 'cosmic data efficient instructiontuning for speech incontext learning.pdf', 'omniscientdb a large language modelaugmented dbms that knows what other dbmss do not know.pdf', 'menucraft interactive menu system design with large language models.pdf', 'dissecting incontext learning of translations in gpts.pdf', 'towards effective disambiguation for machine translation with large language models.pdf', 'mixpro simple yet effective data augmentation for promptbased learning.pdf']" -Lambda,10,"['coveragebased example selection for incontext learning.pdf', 'an empirical evaluation of using large language models for automated unit test generation.pdf', 'unified demonstration retriever for incontext learning.pdf', 'code generation tools (almost) for free a study of fewshot, pretrained language models on code.pdf', 'xricl crosslingual retrievalaugmented incontext learning for crosslingual texttosql semantic parsing.pdf', 'generative type inference for python.pdf', 'a prompt pattern catalog to enhance prompt engineering with chatgpt.pdf', 'learning incontext learning for named entity recognition.pdf', 'tempera testtime prompting via reinforcement learning.pdf', 'large language models are zeroshot reasoners.pdf']" -FLAN,46,"['a tale of pronouns interpretability informs gender bias mitigation for fairer instructiontuned machine translation.pdf', 'can incontext learners learn a reasoning 
concept from demonstrations.pdf', 'toward unified controllable text generation via regular expression instruction.pdf', 'beyond yes and no improving zeroshot llm rankers via scoring finegrained relevance labels.pdf', 'factchecking complex claims with programguided reasoning.pdf', 'sociocultural norm similarities and differences via situational alignment and explainable textual entailment.pdf', 'lowresource authorship style transfer can nonfamous authors be imitated.pdf', 'learning to retrieve incontext examples for large language models.pdf', 'sqlprompt incontext texttosql with minimal labeled data.pdf', 'salmon selfalignment with principlefollowing reward models.pdf', 'sqlpalm improved large language model adaptation for texttosql.pdf', 'large language model prompt chaining for long legal document classification.pdf', 'lorahub efficient crosstask generalization via dynamic lora composition.pdf', 'selfprompting large language models for zeroshot opendomain qa.pdf', ""don't stop pretraining make promptbased finetuning powerful learner.pdf"", 'discrete prompt compression with reinforcement learning.pdf', 'inboxbart get instructions into biomedical multitask learning.pdf', 'logicllm exploring selfsupervised logicenhanced training for large language models.pdf', 'wanglab at mediqachat 2023 clinical note generation from doctorpatient conversations using large language models.pdf', 'ul2 unifying language learning paradigms.pdf', 'jen1 textguided universal music generation with omnidirectional diffusion models.pdf', 'mind the instructions a holistic evaluation of consistency and interactions in promptbased learning.pdf', 'crosscodebench benchmarking crosstask generalization of source code models.pdf', 'beyond factuality a comprehensive evaluation of large language models as knowledge generators.pdf', 'is chatgpt a good causal reasoner a comprehensive evaluation.pdf', 'grips gradientfree, editbased instruction search for prompting large language models.pdf', 'generate rather than retrieve large language models are strong context generators.pdf', 'what does the failure to reason with respectively in zerofewshot settings tell us about language models.pdf', 'resources and fewshot learners for incontext learning in slavic languages.pdf', 'linguist language model instruction tuning to generate annotated utterances for intent classification and slot tagging.pdf', 'incontext learning user simulators for taskoriented dialog systems.pdf', 'poe process of elimination for multiple choice reasoning.pdf', 'instruction distillation makes large language models efficient zeroshot rankers.pdf', 'mixture of soft prompts for controllable data generation.pdf', 'low resource pipeline for spoken language understanding via weak supervision.pdf', 'ensembleinstruct generating instructiontuning data with a heterogeneous mixture of lms.pdf', 'incontext instruction learning.pdf', 'glam efficient scaling of language models with mixtureofexperts.pdf', 'efficient prompting via dynamic incontext learning.pdf', 'evaluating the instructionfollowing robustness of large language models to prompt injection.pdf', 'understanding the effectiveness of very large language models on dialog evaluation.pdf', 'fiat fusing learning paradigms with instructionaccelerated tuning.pdf', 'causallm is not optimal for incontext learning.pdf', 'scalable approach to medical wearable postmarket surveillance.pdf', 'incontext learning with many demonstration examples.pdf', 'the devil is in the errors leveraging large language models for finegrained machine 
translation evaluation.pdf']" -LLaVA,1,['jailbreaking gpt4v via selfadversarial attacks with system prompts.pdf'] -Flamingo,10,"['jailbreaking gpt4v via selfadversarial attacks with system prompts.pdf', 'can prompt learning benefit radiology report generation.pdf', 'blsp bootstrapping languagespeech pretraining via behavior alignment of continuation writing.pdf', 'is chatgpt the ultimate programming assistant how far is it.pdf', 'llm4dyg can large language models solve problems on dynamic graphs.pdf', 'review of large vision models and visual prompt engineering.pdf', 'a simple baseline for knowledgebased visual question answering.pdf', 'adapting languageaudio models as fewshot audio learners.pdf', 'a systematic survey of prompt engineering on visionlanguage foundation models.pdf', 'language quantized autoencoders towards unsupervised textimage alignment.pdf']" -CLIP,43,"['fewshot joint multimodal aspectsentiment analysis based on generative multimodal prompt.pdf', 'prototypeformer learning to explore prototype relationships for fewshot image classification.pdf', 'patchtoken aligned bayesian prompt learning for visionlanguage models.pdf', 'actionclip a new paradigm for video action recognition.pdf', 'pre visionlanguage prompt learning with reparameterization encoder.pdf', 'do we still need clinical language models.pdf', 'noise2music textconditioned music generation with diffusion models.pdf', 'improved compositional generalization by generating demonstrations for metalearning.pdf', 'blsp bootstrapping languagespeech pretraining via behavior alignment of continuation writing.pdf', 'localized latent updates for finetuning visionlanguage models.pdf', 'learning from taxonomy multilabel fewshot classification for everyday sound recognition.pdf', 'autoclip autotuning zeroshot classifiers for visionlanguage models.pdf', 'true fewshot learning with language models.pdf', 'a simple zeroshot prompt weighting technique to improve prompt ensembling in textimage models.pdf', 'review of large vision models and visual prompt engineering.pdf', 'clara multilingual contrastive learning for audio representation acquisition.pdf', 'vitaclip video and text adaptive clip via multimodal prompting.pdf', 'large language models are pretty good zeroshot video game bug detectors.pdf', 'a simple baseline for knowledgebased visual question answering.pdf', 'convolutional bypasses are better vision transformer adapters.pdf', 'chatgpt for robotics design principles and model abilities.pdf', 'anovl adapting visionlanguage models for unified zeroshot anomaly localization.pdf', 'adapting languageaudio models as fewshot audio learners.pdf', 'choice over control how users write with large language models using diegetic and nondiegetic prompting.pdf', 'abscribe rapid exploration of multiple writing variations in humanai cowriting tasks using large language models.pdf', 'disentangle and remerge interventional knowledge distillation for fewshot object detection from a conditional causal perspective.pdf', 'joint foundation model caching and inference of generative ai services for edge intelligence.pdf', 'better integrating vision and semantics for improving fewshot classification.pdf', 'a systematic survey of prompt engineering on visionlanguage foundation models.pdf', 'multimodal prompt learning for product title generation with extremely limited labels.pdf', 'language quantized autoencoders towards unsupervised textimage alignment.pdf', 'enhancing clip with gpt4 harnessing visual descriptions as prompts.pdf', 'baseline defenses for 
adversarial attacks against aligned language models.pdf', 'the cultivated practices of texttoimage generation.pdf', 'artificial intelligence for health message generation theory, method, and an empirical study using prompt engineering.pdf', 'chils zeroshot image classification with hierarchical label sets.pdf', 'a survey of large language models for autonomous driving.pdf', 'a multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity.pdf', 'promptbased length controlled generation with reinforcement learning.pdf', 'alt towards finegrained alignment between language and ctr models for clickthrough rate prediction.pdf', 'clickprompt ctr models are strong prompt generators for adapting language models to ctr prediction.pdf', 'evaluation of gpt35 and gpt4 for supporting realworld information needs in healthcare delivery.pdf', 'parameterefficient crosslingual transfer of vision and language models via translationbased alignment.pdf']" -VLP,3,"['fewshot joint multimodal aspectsentiment analysis based on generative multimodal prompt.pdf', 'patchtoken aligned bayesian prompt learning for visionlanguage models.pdf', 'a systematic survey of prompt engineering on visionlanguage foundation models.pdf']" -SAM,34,"['pretraining to learn in context.pdf', 'can large language models be good path planners a benchmark and investigation on spatialtemporal reasoning.pdf', 'prompt engineering guiding the way to effective large language models.pdf', 'salmon selfalignment with principlefollowing reward models.pdf', 'boosted prompt ensembles for large language models.pdf', 'epa easy prompt augmentation on large language models via multiple sources and multiple targets.pdf', 'promisepromptdriven 3d medical image segmentation using pretrained image foundation models.pdf', 'dialogstudio towards richest and most diverse unified dataset collection for conversational ai.pdf', 'fewshot anaphora resolution in scientific protocols via mixtures of incontext experts.pdf', ""two timin' repairing smart contracts with a twolayered approach.pdf"", 'time travel in llms tracing data contamination in large language models.pdf', 'ul2 unifying language learning paradigms.pdf', 'instanceaware prompt learning for language understanding and generation.pdf', 'investigating the fairness of large language models for predictions on tabular data.pdf', 'using incontext learning to improve dialogue safety.pdf', 'review of large vision models and visual prompt engineering.pdf', 'fit parameter efficient fewshot transfer learning for personalized and federated image classification.pdf', 'clara multilingual contrastive learning for audio representation acquisition.pdf', 'askit unified programming interface for programming with large language models.pdf', 'generate rather than retrieve large language models are strong context generators.pdf', 'selective annotation makes language models better fewshot learners.pdf', 'connecting large language models with evolutionary algorithms yields powerful prompt optimizers.pdf', 'domain knowledge distillation from large language model an empirical study in the autonomous driving domain.pdf', 'the scope of incontext learning for the extraction of medical temporal constraints.pdf', 'a systematic survey of prompt engineering on visionlanguage foundation models.pdf', 'leveraging pretrained language models for conversational information seeking from text.pdf', 'assessing testtime variability for interactive 3d medical image segmentation with diverse point prompts.pdf', 
'lfpt5 a unified framework for lifelong fewshot language learning based on prompt tuning of t5.pdf', 'think before you speak cultivating communication skills of large language models via inner monologue.pdf', 'chatgpt4pcg competition characterlike level generation for science birds.pdf', 'dspy compiling declarative language model calls into selfimproving pipelines.pdf', 'explicit knowledge transfer for weaklysupervised code generation.pdf', 'active example selection for incontext learning.pdf', 'a multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity.pdf']" -CoCoOp,7,"['patchtoken aligned bayesian prompt learning for visionlanguage models.pdf', 'pre visionlanguage prompt learning with reparameterization encoder.pdf', 'localized latent updates for finetuning visionlanguage models.pdf', 'a simple zeroshot prompt weighting technique to improve prompt ensembling in textimage models.pdf', 'convolutional bypasses are better vision transformer adapters.pdf', 'a systematic survey of prompt engineering on visionlanguage foundation models.pdf', 'enhancing clip with gpt4 harnessing visual descriptions as prompts.pdf']" -Codellama,2,"['fireact toward language agent finetuning.pdf', 'llm4vv developing llmdriven testsuite for compiler validation.pdf']" -GatorTron,1,['do we still need clinical language models.pdf'] -Vision Transformer,7,"['incontext convergence of transformers.pdf', 'blsp bootstrapping languagespeech pretraining via behavior alignment of continuation writing.pdf', 'review of large vision models and visual prompt engineering.pdf', 'convolutional bypasses are better vision transformer adapters.pdf', 'a systematic survey of prompt engineering on visionlanguage foundation models.pdf', 'chatgpt4pcg competition characterlike level generation for science birds.pdf', 'neural finetuning search for fewshot learning.pdf']" -BLIP-2,4,"['blsp bootstrapping languagespeech pretraining via behavior alignment of continuation writing.pdf', 'chatgpt as a mapping assistant a novel method to enrich maps with generative ai and content derived from streetlevel photographs.pdf', 'a systematic survey of prompt engineering on visionlanguage foundation models.pdf', 'efficient prompting via dynamic incontext learning.pdf']" -Grounding DINO,1,['review of large vision models and visual prompt engineering.pdf'] -FinBERT,2,"['linking microblogging sentiments to stock price movement an application of gpt4.pdf', 'what do llms know about financial markets a case study on reddit market sentiment analysis.pdf']" -DreamFusion,1,['a systematic survey of prompt engineering on visionlanguage foundation models.pdf'] diff --git a/data/new_cleaned_merged_paper_references.json b/data/new_cleaned_merged_paper_references.json deleted file mode 100644 index c90d37e..0000000 --- a/data/new_cleaned_merged_paper_references.json +++ /dev/null @@ -1,9832 +0,0 @@ -{ - "1104d766527dead44a40532e8a89444d9cef5c65": [ - "6987c95f7054d2653178ac93df52aa3c0b99fcf5", - "fc50a6202e2f675604543c1ae4ef22ec74f61ad5", - "661e8ac4908a9d2a85835245ea99b6a314cc4a60", - "7ec9e9ec1c26f7977f54dd7830d970101e3a683e", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "8fdd34153d1035d09dd4a6efa9cb0c91d23d0045" - ], - "3c784cd3150a359e269c70cfbadd18774d66055d": [ - "fc50a6202e2f675604543c1ae4ef22ec74f61ad5", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617" - ], - "3e30a7ac4886b28eb50151f58e14a1d698cccd0e": [ - "19443d48399d4fe89a4b0a96917c50c6fd9c5af1", - 
"1104d766527dead44a40532e8a89444d9cef5c65", - "cb6cc7d28d06a0d7c0d3f0d7ee551bbc86dbc3aa", - "705e49afd92130f2bc1e0d4d0b1f6cb14e88803f" - ], - "ace98e1e58bcc364afbb2feff6d136232f5f47da": [ - "fc50a6202e2f675604543c1ae4ef22ec74f61ad5", - "705e49afd92130f2bc1e0d4d0b1f6cb14e88803f", - "2afb07359e9c67499e1f373ac6f1520d3ea9c46a" - ], - "cd29c25c489562b409a60f83365f93f33ee1a0a1": [ - "34f9c825ba24889fa5e164ba9f99bfe4fc2f3e61", - "1104d766527dead44a40532e8a89444d9cef5c65", - "661e8ac4908a9d2a85835245ea99b6a314cc4a60", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e64df7e9448f7a9a4cb5d22c21c460134c8646ac": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f3f23f7f9f5369aade19f20bc5d028cce7b9c9aa": [ - "3e30a7ac4886b28eb50151f58e14a1d698cccd0e", - "6987c95f7054d2653178ac93df52aa3c0b99fcf5", - "fc50a6202e2f675604543c1ae4ef22ec74f61ad5", - "661e8ac4908a9d2a85835245ea99b6a314cc4a60", - "5d49c7401c5f2337c4cc88d243ae39ed659afe64" - ], - "fc50a6202e2f675604543c1ae4ef22ec74f61ad5": [ - "1bc9974780230573bfe9f89789115cb4fbf8bfc6", - "08b85bce712168998004ee80ce4e475390413c74", - "c879413103f8950bdd414c7f60a39bd7748c9be8", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0" - ], - "07955e96cbd778d0ae2a68f09d073b866dd84c2a": [ - "e070ff286709db28312e08b52b05539debe88146", - "e3a9af420cd2c0c8241856da92374027fefb87be", - "b17cc18e4130505b939f7d527082eb6be2a7fd5b", - "50b0c6ee2b3d53ba5af69d6c00b5d60888a9026f", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0afb64ce430c5f26752c8aed246ead6820b02049": [], - "0c110794ae91b4c165b0de3ff11fc841e2455bdb": [ - "914254fac74a2da051cccf6ca16afcaad416a079", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "204fd6c5e247c477d607f507ee01d94a8dbd408f": [], - "243ac5656c4f8ed6e1eb757b7145fb12b837c166": [ - "663a41c866d49ce052801fbc88947d39764cad29", - "cfc12c38a4d848ff3c4225488a2c72e7d4300f4b", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "24dd96da6f700f57132713aeb5e9b06905abab5d": [], - "2bb4fe9bc10dbf1ea70135e52452f9f63bb10671": [ - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "d318e0169f649656c71f02a1f84194a734fe1962", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df" - ], - "2f75de70511fa9f5c7a1e7f61f2d7928d121adbf": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "370cea8b4220917f45a69358c0303df71f5063c7": [ - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "663a41c866d49ce052801fbc88947d39764cad29", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "e3a9af420cd2c0c8241856da92374027fefb87be", - "b17cc18e4130505b939f7d527082eb6be2a7fd5b", - "50b0c6ee2b3d53ba5af69d6c00b5d60888a9026f", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3b27092740a489a63589cdcf40fad6a0e093daa0": [ - "eda54452d8a8a412c2a985ef11572cb468906b1f", - "70da4fb798a86cbe8cad96c27ced0415885bbd9d", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "40c9280d87059c0cc28f2a08d46a7045fa3e9736": [ - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - 
"154493f69d7db3d49da0e51df0192c6ad5f1724a", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4895d443c36bd136a818be2db34442354ba408d1": [ - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4950bf6f873ba1409a7bbad25cf5c93c8f833453": [ - "c879413103f8950bdd414c7f60a39bd7748c9be8", - "515cf674fcdced5a7d5bb156dd5fcc1f5290e79b" - ], - "4b091d92f793161046b483ee93df244bf93bb508": [ - "08b85bce712168998004ee80ce4e475390413c74", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "605c32428861eb26b8631617b8f6c97a850d6a04", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4cf527e9e0d68e3fc16d39fbcdb3869cd3ccf60f": [ - "e418bddc14666671c4df6a9747f39f0f522a1bad", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "4d17732d90440682b0500f4e209c6cc4fac20e0e", - "341bdbcfc3febef7691a97c216ad394653211095" - ], - "4ee96f0757e517928590a2300af5d40ba768a5a7": [ - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "bda605928d6ebe4db906e69ab5d343df75918727", - "62176de125738e3b95850d1227bac81fd646b78e", - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "e070ff286709db28312e08b52b05539debe88146", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "5437e8adab596d7294124c0e798708e050e25321", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5db0f55332839c408e3049cea1a6ad48fefba70c": [ - "17170575aa8b4fa4e3eef5d366ada706a94dd836", - "0cfdd655100055f234fd23ebecd915504b8e00e3" - ], - "65fe385a665480b41fafc56d76a3bd72e92e8886": [ - "fd80f7f3673fc6ca02f192d5d73426f11a4be659", - "cb6cc7d28d06a0d7c0d3f0d7ee551bbc86dbc3aa" - ], - "6be6fe206f8ca735f8df26758bf877572abb10d3": [ - "40047a74b707743157051d38f76061ba5ff9aab4", - "0d6bb585493e34975f0437faa3179db3a02f6ae8", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "4da1cfb77084ef04a1bb6924de11eac05daa381b" - ], - "72fb75f7c38a83424308c8205bb36cd88995494b": [ - "daf9e24adbba3d1aead91cbac26502d3043db069", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "7d87fbdfbf5038a4e0ff09801b6d3b8a2e0c613a": [], - "8d17234680db76f99efd22fbcb169f45d2d79d93": [ - "c589ddc6c6fb07189af7c1212f6eb15c5ff72cde", - "c76dd4a70361c3afd2e19d046343e2dedd16ecc3", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "0f30612423381eb5d271c4ca4f4254149b0d22fa", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "8d9ca1e2c703e2752a4904c967a65d45d0bef5f6": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221" - ], - "9141480721653789597b6e537ee0eeab401f3e60": [ - "1e45200057eb4e47817634f183cb882677be1a14", - "0a2ac054c533314c0659f3b139388527df0d42f3", - "1dd344ce28f1e5a078f9d8396b5078388e555d99", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "96d6bb5d6abdeda9b2db9af6296527200ba7aa32": [ - "0ea7fc93d4947d9024ccaa202987a2070683bc1f", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a21de70160c91dcf9b1e7a93fbb32f4b2687860a": [ - 
"bad6fa523ecf782c837a2eecaaffa4e1f7477c24", - "c879413103f8950bdd414c7f60a39bd7748c9be8", - "e070ff286709db28312e08b52b05539debe88146", - "e86009d9f9b1cdf083a48d087552bc4153784451", - "b2542a738b75ee9b7ce1a13d8b78f9095d212412", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "0d6bb585493e34975f0437faa3179db3a02f6ae8", - "5437e8adab596d7294124c0e798708e050e25321", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "0adec918885dff698acf359988ed79a543157f80", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a3509cef906a4517238c1764676cf637efcd1d5e": [ - "0f733817e82026f7c29909a51cb4df7d2685f0e7" - ], - "b43e9b674d4572e1aba8b40a28056ab118ad5e83": [ - "53e7475a3ed0caee37122a9dbdb53d1da0691a33" - ], - "bc70af9248d210663edf22e5fc84ca9313c697b0": [ - "097dc73d5d422b3c09286e72d16b2561ae5fb395", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c03fa01fbb9c77fe3d10609ba5f1dee33a723867": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e0867e9f3a715851a90d17423f7f3b33a2a66bb1": [], - "f64e49d76048c902cc02e8ae27dcd4ac0dbcb97f": [ - "69619a2a47faee7a29ec596db13172e2a42ff921", - "23c265ba884b92ecbd9d18641078d964697e4590", - "c2a79e2a65b721d4de5f6d4806323174b9f8f393", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f743287be3ced6757de7ecb26d03815b22cd737b": [ - "bf8491bef353df126e2306ad2fe4b898697b906a", - "03532123ccffae8d411264320e8a5ae2b6eddea0", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "e070ff286709db28312e08b52b05539debe88146", - "b2542a738b75ee9b7ce1a13d8b78f9095d212412", - "398e4061dde8f5c80606869cebfa2031de7b5b74", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f8b5ee53c3410f20049e7def47bd52403fa388e3": [ - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "33729913908d187dc0db6e41073c35643324fe4f": [ - "e070ff286709db28312e08b52b05539debe88146", - "e86009d9f9b1cdf083a48d087552bc4153784451", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3436ff7a1dd4c6547ba78968d3eec2545a6dccb9": [ - "3fc3460c4554a28e489a0ea6ef067b79b7d301d9", - "3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e", - "6d951d939d3f27054215f2606a0cf89ed21550e9", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "49b499598a8864eee55ab264fc16a5bf8d2f87ef": [ - "0f733817e82026f7c29909a51cb4df7d2685f0e7", - "d3640eb3b542eaf36fee2261f037a6bf0d8eac9c", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5581bf85386737bd3378eec68189759a05280bea": [ - "e3a9af420cd2c0c8241856da92374027fefb87be", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "53c0abe83fe9b4fdaf2208295d8504fcf5241694", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "64ce6ef1f5cf227bf2bf917c87273386ae16256f": [ - "515cf674fcdced5a7d5bb156dd5fcc1f5290e79b", - "197ba7bbfdbb052b0770088815c110774220f397", - "80c0416048614be75362c2c332d22dd1d2b22f65", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc", - 
"1403e6b9adf7712c35ae56327d52fe54603b87e1" - ], - "6af986a2cab884fbd30ad6da2928dc19c12d83a7": [ - "7fa85f9c0fe44f1bf9e58a55f0f009296578c2f0", - "3ec5f0da304a606c5989de5b00e1246ee64b3e46", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "d03a9b2a0e090cc9fd2ba0a457ecea35372f1018", - "515cf674fcdced5a7d5bb156dd5fcc1f5290e79b", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "46d64d0c1dd240f5035b1af57e738b3f70850ca2", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "4e5f7cd537a1bbcd090f9887b1b59f39a3715dba", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "70b73e272621562c6261f86d2ebf814703b760ed": [ - "6a483cd1cbecd66150c9bbcd01606723950281bc", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "71d68782c3da41b77866c2fd0cb65726f60b3af1": [ - "7dc928f41e15f65f1267bd87b0fcfcc7e715cb56", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "4988b3d378b79eb8669112620baf1ff4e3e536fd", - "341bdbcfc3febef7691a97c216ad394653211095", - "5f19ae1135a9500940978104ec15a5b8751bc7d2" - ], - "73397ec77081b46f5e49a4e7486129fe2ffe7adf": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "8da6e4537122af618c36563caef5863f8728d789": [ - "d3640eb3b542eaf36fee2261f037a6bf0d8eac9c", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9573e2025440219a1d3393664b3c80bda51ac8f4": [ - "70da4fb798a86cbe8cad96c27ced0415885bbd9d", - "a8e510680ecbf5ad1fa32a486b3135f9886a6c2f", - "30873c32db5a219a58be928d5692cce48be1d3a0" - ], - "9ea3d90a172a0b5799c13287484f7406946f7311": [ - "d03a9b2a0e090cc9fd2ba0a457ecea35372f1018", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a4929de687f3c6937dabbf733258af635781d3c4": [ - "2577d053f8aab912d29b424e1f09133d83740fd2", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df" - ], - "b2542a738b75ee9b7ce1a13d8b78f9095d212412": [ - "398e4061dde8f5c80606869cebfa2031de7b5b74", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "c70eb74e09c41e8fcc71dd59e3b4d631f657f7cd", - "80d0116d77beeded0c23cf48946d9d10d4faee14", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b9c263500281e05fddfe1f84839491f605815230": [ - "0942bd8fad71282994ff4e9a779c09745da68edc", - "d53e70d834243d3d8d4b621c0c52dfec26081155", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d318e0169f649656c71f02a1f84194a734fe1962": [ - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "341bdbcfc3febef7691a97c216ad394653211095", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d75d11d2c89c01cd284383546ae057cb827dc272": [ - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "5437e8adab596d7294124c0e798708e050e25321", - 
"5f19ae1135a9500940978104ec15a5b8751bc7d2", - "23c265ba884b92ecbd9d18641078d964697e4590", - "166b64f2ae8e52f5779682fab756cbd617a6e74b", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "775514b8f5a320b8772f93a3168701ad0c9eeebb" - ], - "e1dafedfbb55cd2200411841c2ec40e7ea827773": [ - "8d17234680db76f99efd22fbcb169f45d2d79d93", - "f8a2dca1e8fe56e698984c077f7ff58d8ca867e9", - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "0f30612423381eb5d271c4ca4f4254149b0d22fa", - "cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a" - ], - "fed7e4a0e8c798777f3f1613be62a2dfb776b462": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0894585294c67193ff3190240554677b56fd79a0": [ - "db4cf9f6a653d5c15973e836c800ea47743251ae", - "661e8ac4908a9d2a85835245ea99b6a314cc4a60", - "705e49afd92130f2bc1e0d4d0b1f6cb14e88803f", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1c475acaa1060c8318a625f24bfd88c12f367516": [ - "53c0abe83fe9b4fdaf2208295d8504fcf5241694", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2427527c1a1bc61b32c28a107192c3e22ed629bb": [ - "fa4ea8b773ffd6706ba5cf427f9671f81b04dcdd", - "1104d766527dead44a40532e8a89444d9cef5c65", - "f4d543ff431359947bf41152ac01233b8062221f", - "286c3587f2616839286748461cbc90261ea49caf", - "da3aca9d7b50da823f669c983edeb60445720fe0", - "4d17732d90440682b0500f4e209c6cc4fac20e0e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "0adec918885dff698acf359988ed79a543157f80", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "8c035150f883007b5af9e5bb753b78d9c0b75a55": [ - "70da4fb798a86cbe8cad96c27ced0415885bbd9d", - "ec27f85979899a4193a8ec3b932ddb677c59be62", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc", - "8fdd34153d1035d09dd4a6efa9cb0c91d23d0045" - ], - "9be0dea0d6b892a2162490fb02712efaf10c0c87": [ - "1104d766527dead44a40532e8a89444d9cef5c65", - "0894585294c67193ff3190240554677b56fd79a0", - "db4cf9f6a653d5c15973e836c800ea47743251ae", - "705e49afd92130f2bc1e0d4d0b1f6cb14e88803f" - ], - "db4cf9f6a653d5c15973e836c800ea47743251ae": [ - "fc50a6202e2f675604543c1ae4ef22ec74f61ad5", - "c7a3f9cc61cfafdc307f8ae24430b6b1121f9b2c", - "be2b0396de9431bae931642516a1d3e4906329f5", - "ac7771c332da42b29a913b116bd6ef622cbf89cf", - "0d42221038c05cee8443c5b5af838505ee137dc3", - "7c1707db9aafd209aa93db3251e7ebd593d55876", - "705e49afd92130f2bc1e0d4d0b1f6cb14e88803f" - ], - "04892382200a9d48ad5f8d3cb3cd3d63a8206a01": [ - "17170575aa8b4fa4e3eef5d366ada706a94dd836", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a" - ], - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc": [ - "cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "5d49c7401c5f2337c4cc88d243ae39ed659afe64", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0ad677b4172e5aef8b18bc6832145d1a03e11da4": [ - "d3ca116177369bf6fbe27de64506a2f401aca996", - "f8a2dca1e8fe56e698984c077f7ff58d8ca867e9", - "b5e9406a65de7384af041c357ca5481489345b73", - "4780d0a027c5c5a8e01d7cf697f6296880ffc945", - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - 
"bda605928d6ebe4db906e69ab5d343df75918727", - "c76dd4a70361c3afd2e19d046343e2dedd16ecc3", - "4d81c33b295c092016ac236cfd32020a5bb70b97", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0da5adf32fe7501a5b98eb6549b2c42af08ee094": [], - "1e8403af2e1e7a8f803d8df9e8daac584f99c2a0": [], - "294b4613b21abf1e9ba499de274569360093b107": [ - "0392d58335ce674a70f5e58ac8c438de296a0e6a", - "0968f1592f9401d72bf0d97e740496818c1a3135", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "1a2e90dff605dad7dbefeed121e6d295c7a77d62", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2e588fe7e07948cb9112c37d5e9dcc3a13b1bd0f": [ - "0f733817e82026f7c29909a51cb4df7d2685f0e7", - "166b64f2ae8e52f5779682fab756cbd617a6e74b", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3120c2763edab339b937ddbe76991ebdfe0e01e6": [], - "515cf674fcdced5a7d5bb156dd5fcc1f5290e79b": [ - "914254fac74a2da051cccf6ca16afcaad416a079", - "7a25155364476839b6d1fc0653cd8611327ab9ba", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "0adec918885dff698acf359988ed79a543157f80", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "838e1317454724a9bb758d05d97e6058e11a8251": [ - "bda605928d6ebe4db906e69ab5d343df75918727", - "c76dd4a70361c3afd2e19d046343e2dedd16ecc3", - "4d81c33b295c092016ac236cfd32020a5bb70b97", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "4e5f7cd537a1bbcd090f9887b1b59f39a3715dba", - "5437e8adab596d7294124c0e798708e050e25321", - "d3640eb3b542eaf36fee2261f037a6bf0d8eac9c", - "0adec918885dff698acf359988ed79a543157f80" - ], - "b0b237dd905f12b23e3fc48ac7139e275158a007": [ - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "732627c703a9dbc78d9384f1be4c791c3a554391", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b349f3dd5b764168cba57bb4ad3fc240c2b3eddf": [ - "4d81c33b295c092016ac236cfd32020a5bb70b97", - "0d0dbfb1b315a43216020abaf74d289456198219", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c76dd4a70361c3afd2e19d046343e2dedd16ecc3": [ - "4d81c33b295c092016ac236cfd32020a5bb70b97", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "37f0f1f55f44bff84aac27a346dd47d0c6c136e3", - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "4e5f7cd537a1bbcd090f9887b1b59f39a3715dba", - "cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e", - "0adec918885dff698acf359988ed79a543157f80", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a" - ], - "d61f0820943a667917fb6d32225826aa5279f694": [ - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ef5cd0eb266e3df3eb64aec18e1854fe0244d228": [ - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "a221def07d3cd4f29c38234e271faf8a523e0f5a", - "b21670e8061a06ab97e7d6052c9345a326e84ff8", - 
"6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f8a2dca1e8fe56e698984c077f7ff58d8ca867e9": [ - "8d17234680db76f99efd22fbcb169f45d2d79d93", - "e418bddc14666671c4df6a9747f39f0f522a1bad", - "c76dd4a70361c3afd2e19d046343e2dedd16ecc3", - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "3841234dd49250c4fcbba79eed6593d3b57932c1", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "37f0f1f55f44bff84aac27a346dd47d0c6c136e3", - "663a41c866d49ce052801fbc88947d39764cad29", - "4988b3d378b79eb8669112620baf1ff4e3e536fd", - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a" - ], - "ff96527c03fbea7c3bb7d44d1d656d875ddba75e": [ - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "020e473d8c987dcfb03fcfffeb87b17812447031": [ - "89184ab496b2a1ae31e068e628479b4cd8f4b9d2", - "3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e", - "6052486bc9144dc1730c12bf35323af3792a1fd0", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0270ec4bc946b59c5cf6204be2553682dee0346c": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0392d58335ce674a70f5e58ac8c438de296a0e6a": [ - "9ba50f992ccd92f428503ea6246157260a26cd77", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "040ec58865ab50b5e6d91a355ffc146ec5034e9f": [], - "06ab0710c8a7315e70c15c0d7eb1aa50210d945c": [ - "08b85bce712168998004ee80ce4e475390413c74", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722" - ], - "06d8562831c32844285a691c5250d04726df3c61": [ - "3b6179c293df29e31d31cea46476f104ab6950f2", - "6a2d96d2a7adde6349f15c1e680b67d114e7b67c", - "7ec9e9ec1c26f7977f54dd7830d970101e3a683e", - "48abfc41a0abf023d2037ebb2f274835e0d322d0", - "03fb95e6be583ca954c3d00812a9e9a40f118e51", - "d2cfabde7383704e876373e9da9891714b0bd62b", - "09b7338021fff3200c2098b19824aecc83a66cb5", - "20cb40199d03395d63615854863f9eda9c7863e2", - "e070ff286709db28312e08b52b05539debe88146", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "0d0dbfb1b315a43216020abaf74d289456198219", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "398e4061dde8f5c80606869cebfa2031de7b5b74", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "732627c703a9dbc78d9384f1be4c791c3a554391", - "d6c73f758b05f38529c1a96cab7e908a2047dabd", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "1a2e90dff605dad7dbefeed121e6d295c7a77d62", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "073972fa0de48db1304509041e877e568c94e7de": [ - 
"50d40d05598e456188a3be42983b8daabd3f04f7" - ], - "079be8c8a93fc80274ff22251a3dac9804bec66a": [], - "0809c278fcdec2ce297da3a9d6e031fc192263f6": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "08b85bce712168998004ee80ce4e475390413c74": [ - "3599a236f285af48782fc30b1341d13ec7320735", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "ef3a38b9f15e9dcb5652cb3f86f19b845cdaaef7", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "50b0c6ee2b3d53ba5af69d6c00b5d60888a9026f", - "5437e8adab596d7294124c0e798708e050e25321", - "0968f1592f9401d72bf0d97e740496818c1a3135", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0" - ], - "0968f1592f9401d72bf0d97e740496818c1a3135": [ - "ac3cdb50606f7770eef8e4cd951840a4f71287a0" - ], - "0a0d6a98bd246a82aaaa9d33ec0eadf4ceae69dc": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0a61802b71aa044cf1fe0e81befec148e0d5001b": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0ba581718f294db1d7b3dbc159cc3d3380f74606": [ - "25425e299101b13ec2872417a14f961f4f8aa18e", - "c03fa01fbb9c77fe3d10609ba5f1dee33a723867", - "41531594d7e0f3b2e138ae43e0a0f6e24a9b014c", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0c8446eedfe083e0ee32f5c4f793e5435904014a": [ - "5f19ae1135a9500940978104ec15a5b8751bc7d2" - ], - "0e1ae0bdcc8469db99a4f8008288e20f285f1c6d": [ - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0e8e3d2a2f4413808c7aff7bee6e8e11ec2700d7": [ - "0968f1592f9401d72bf0d97e740496818c1a3135", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0f6fe87afd1a3571f77c790893b03717e5d0422a": [ - "8793066d170b6a742c4fcdb478d4f100c1e4bf17", - "7c1707db9aafd209aa93db3251e7ebd593d55876", - "e5c72b92c48d68594b290c84a8904da7c8335554", - "b2542a738b75ee9b7ce1a13d8b78f9095d212412", - "142ebbf4760145f591166bde2564ac70c001e927" - ], - "0fb8f3f86476e9ab8fa4679620acb7d525b222a8": [ - "08b85bce712168998004ee80ce4e475390413c74", - "0ba581718f294db1d7b3dbc159cc3d3380f74606", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1059b79598d6e08121503093f45d50fa963d2843": [ - "b159dffadb69940e14693e812bdaa32e3957717f", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "10e8dc07ea256c6a88d7043cf135417402ed38f4": [ - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "13fafa40eb7b15813cdf6c2ead1e1032e7b085f0": [ - "f8a2dca1e8fe56e698984c077f7ff58d8ca867e9", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "14dcafae548d578f6b8c683d0972531bc46423ca": [ - "7bc9607c5cf3fc817675d46844f529097d579514" - ], - "16877baf3874038233279e07e330f891455fd880": [ - "bf8491bef353df126e2306ad2fe4b898697b906a", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0" - ], - "1696e03a35f1bcc724ed9bfe69bb028b789415e8": [], - "16acd2d2faa236dfe5f6ab67a0b94a9ed1b1de57": [ - "08b85bce712168998004ee80ce4e475390413c74", - "0392d58335ce674a70f5e58ac8c438de296a0e6a" - ], - "186e96fe036927182ec963b63f9dd7f8ff650158": [ - "dc385646887a3669ae0ee506a263d592f4f7c7a6", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1bc9974780230573bfe9f89789115cb4fbf8bfc6": [ - 
"fccf8776d7525627c518a56a1f4db367a4d7120b", - "b559f253e245d4306b1ee3e9ec972d9f43a8ecd6", - "6b5d1e50894b1f28e4798cf20e9ffa88b9ec011a", - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "b66c1c7617b42f3814d516faf7d2ca3b771a0c9e", - "e5d30a65cb267dc185770c40cce732f9770cbd27", - "0968f1592f9401d72bf0d97e740496818c1a3135", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0" - ], - "1e5743366625128e225879dbcfb568f6b8f1bcdc": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1fc89ce338b94f6a46e41b9a13aa99366a762eea": [ - "e0e1fcdbc5b41fcd1cd15001b4861a738411c910", - "327e0290fd71609bfc1a30478a95f690668fe622", - "2d765d953efd738034782f9afdb311e3ba015edd", - "cc0f0cb09a73f82ed44d900f5ca710bec784acc1", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "59641c10ed7431a3cf841f308367dc2dc0281b74" - ], - "20d448a8712238ea34d9a18287e3bf05bc61dd2c": [ - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "663a41c866d49ce052801fbc88947d39764cad29", - "142ebbf4760145f591166bde2564ac70c001e927", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0" - ], - "20db2ac68c0a0daa8417696cced923e518c07681": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "221a72a3631ebf8b555c27bc864338390611feb1": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "25ec4e51e515548cb55e0270f449ac55f3b0840c": [ - "fca92fe287c44c9ec79ca1f2762b0bf2e5e8df2b", - "aade40af0d85b0b4fe15c97f6222d5c2e4d6d9b3", - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "26f560e592419891c9de1b25d0e4d4d16014d54e": [], - "279c798fd53c8dc84044273d08b6a060dbe9f702": [ - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722" - ], - "27c16cca907aa43397cc226a182b73b396c5cf66": [ - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "341bdbcfc3febef7691a97c216ad394653211095", - "0adec918885dff698acf359988ed79a543157f80", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "29203f0b8b9be7fd70d99bf7390c6a78b68a9289": [ - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2af6a21a1b682ceb585165359d3605e89f4cf6b0": [ - "0392d58335ce674a70f5e58ac8c438de296a0e6a", - "e5d30a65cb267dc185770c40cce732f9770cbd27", - "0968f1592f9401d72bf0d97e740496818c1a3135" - ], - "2afb07359e9c67499e1f373ac6f1520d3ea9c46a": [ - "5d49c7401c5f2337c4cc88d243ae39ed659afe64", - "2430cd1ac1c480894f2ef9368b8962a3a73e7b57", - "30873c32db5a219a58be928d5692cce48be1d3a0", - "c78bb96ee119950b4f6a9dc0155199826b0bc8c8", - "20da8033ed8b696e2e27ec40b1aa8a0ab82b964c" - ], - "2bb34cfe22d0d46394dd91ba8934e525563e1274": [ - "08b85bce712168998004ee80ce4e475390413c74", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "6b5d1e50894b1f28e4798cf20e9ffa88b9ec011a", - "0392d58335ce674a70f5e58ac8c438de296a0e6a", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0" - ], - "2c66f49e328ca5815c13dda106abc2c326d4f28b": [ - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2ed64d90670177bf58cdce6bda04a48a8731a18f": [ - "a2c8d1c5470435176185bf891c76711a9b44808a", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "0392d58335ce674a70f5e58ac8c438de296a0e6a", - "0f733817e82026f7c29909a51cb4df7d2685f0e7", - 
"d3640eb3b542eaf36fee2261f037a6bf0d8eac9c" - ], - "3034d8571e16e25c6a839bf492f20daf855d04a0": [ - "b33577b6624da43765d215d9954531e3c7e48e52", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "31e04aec55f749dc560afe1d8673112f9b32f46b": [ - "94bcf0390d5acb1b92323bd15cc1dc311314122c" - ], - "3352d4bb5756a8a6bfcc1cde169b6aa9fd94497d": [ - "a38e0f993e4805ba8a9beae4c275c91ffcec01df" - ], - "344f801663a76aa15e0dd13344261d8648c382a2": [ - "1bc9974780230573bfe9f89789115cb4fbf8bfc6", - "08b85bce712168998004ee80ce4e475390413c74", - "6b5d1e50894b1f28e4798cf20e9ffa88b9ec011a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "34f9c825ba24889fa5e164ba9f99bfe4fc2f3e61": [ - "4b0b56be0ae9479d2bd5c2f0943db1906343c10f", - "3e30a7ac4886b28eb50151f58e14a1d698cccd0e", - "ace98e1e58bcc364afbb2feff6d136232f5f47da", - "fc50a6202e2f675604543c1ae4ef22ec74f61ad5", - "705e49afd92130f2bc1e0d4d0b1f6cb14e88803f", - "30c0cdc414f68211d5d0514df027cec22e005174", - "7715ba5e75f5256e1061c7473afe61bb0dbb9065", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "34fd95dd4dd32e704d4284fc31165e85b303bb1e": [ - "142ebbf4760145f591166bde2564ac70c001e927", - "732627c703a9dbc78d9384f1be4c791c3a554391", - "f3ce9ba3fcec362b70263a7ed63d9404975496a0", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a" - ], - "377d4d6c1be01b9df32edfd94b2c5946971b0108": [ - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3784fd84b61d482b52f7ef72aac66bcb886b892b": [ - "aade40af0d85b0b4fe15c97f6222d5c2e4d6d9b3", - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "df2beaae63e4d68ef8e762bcd4704c9f11f856d9", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "398e4061dde8f5c80606869cebfa2031de7b5b74", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc", - "a37153a5f42ee2951ad8a2c9ec86b52c4bf81c77" - ], - "385376b8aa48c25403f17d6206db7c09b67e1314": [ - "a7f8fd45fbcdd81449cb7a1a6a2b2c18b38f8151", - "cff26bda86237d113ed01c812ad8bedd0afbe070", - "08b85bce712168998004ee80ce4e475390413c74", - "f4cba0db34aa0c389cec267ca1f3ba5255ea2645", - "6ad26eb2d2aa6679d16d9c16fb75cd2cbe1127bc", - "6052486bc9144dc1730c12bf35323af3792a1fd0", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "c2903ea606e409d49994c801bb5aab321f623e5c", - "2c12d24c5ba5ad3bb3994635fcfcb9f8caac31d0", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "d53e70d834243d3d8d4b621c0c52dfec26081155", - "bf22ef16a6a912763780aea454198edc3e2bb3c9", - "0968f1592f9401d72bf0d97e740496818c1a3135", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3a733c27bff68259b17dc4f835b0d192ac8fab70": [], - "3c4f1244301577cffff9affc73690669725e7e08": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3e0a691277183a6704310af3e4e9e271400612bc": [ - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3e1ca026052d30e3b9677e363616fae23f6616df": [ - "27f0ce04403158b61328716ae4aaab5840c0d123", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3e4991bd206214f596a10e9932cd441fe5bd1f8c": [ - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - 
"08b85bce712168998004ee80ce4e475390413c74", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "0f733817e82026f7c29909a51cb4df7d2685f0e7", - "d3640eb3b542eaf36fee2261f037a6bf0d8eac9c", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "407a8d6227ece351d9870f96576d4c287a746166": [ - "ed5ebed7ff668fd7362d531a40b49b3aea33b3a9", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "c70eb74e09c41e8fcc71dd59e3b4d631f657f7cd", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "42219b26a503d03bf70e9953edc3af94c255cb2a": [ - "a01a9c4a114fbf201540268f928ccf77bc3f9357", - "42ea55edb46395469aee1b760829657e65ab6577", - "43a55dbd95c9d5cd82de8db276f41adeec4a937d", - "26d31d641116b656826737335b2accb802ac9931", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4279a38a098d1d359881b73c6a88a112fe93443a": [ - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3" - ], - "43a55dbd95c9d5cd82de8db276f41adeec4a937d": [], - "458147b5f7242c998ec4f33798a59b7c48867329": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4591f6cea22b66eccda0103b83002be45e8216b6": [ - "08b85bce712168998004ee80ce4e475390413c74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "45c46687bc8d2dbdea6f92fc14d4dc7a548ddd12": [ - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722": [ - "663a41c866d49ce052801fbc88947d39764cad29", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "4e5f7cd537a1bbcd090f9887b1b59f39a3715dba", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "47d04bcfe0f1bed72d03c68cce76b4cf4be03f11": [ - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4a99a85f071e67bf15ae4bc53ec37af28b650ec4": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4b6df5f9885c9dc0ce3125791fd01824e3cf37b7": [], - "4d21debb0f5fec315181e0912b5105c6ce4fc67f": [ - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4d81c33b295c092016ac236cfd32020a5bb70b97": [ - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "e5d30a65cb267dc185770c40cce732f9770cbd27", - "0968f1592f9401d72bf0d97e740496818c1a3135", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4dd461b2392a6983d36618744d2384349c4170f9": [], - "4e96d7fa9f27857523d786230294fbcc6060212c": [ - "2577d053f8aab912d29b424e1f09133d83740fd2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4ff0fccc922f9da7c818c86c8a13aef23ea08345": [ - "3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "50aaac5fdc2b5a33bfd3ba93cdf4e5e302f34297": [ - "69619a2a47faee7a29ec596db13172e2a42ff921", - 
"102e4c860e39a2bfd7bf3f03b9ad69aac7bf3b5f", - "398e4061dde8f5c80606869cebfa2031de7b5b74", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "50bbca86de82d6b72d92bba0ec988b58e644dac3": [], - "50d40d05598e456188a3be42983b8daabd3f04f7": [ - "08b85bce712168998004ee80ce4e475390413c74", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "0968f1592f9401d72bf0d97e740496818c1a3135", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc", - "07cd498aacfb4d39fa2e0e8d8a9c8ad881257300" - ], - "521ccc898395a2818fced22b4cf371b0e5121f94": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "531678c18fd2c5a9620b68f3550131fc3fd3636c": [ - "f66aeec98816c3a52685e570a04fa8f2bd53dfb4", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "53e7475a3ed0caee37122a9dbdb53d1da0691a33": [ - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "0968f1592f9401d72bf0d97e740496818c1a3135", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "56a9c96a29f4047be8465244576d731f0df2d9df": [ - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "57404bd8c71e2b17fce63b49229b278b6a66bf13": [ - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "50b0c6ee2b3d53ba5af69d6c00b5d60888a9026f", - "9ffefdf1fcd780cb71450b0a7a29247c66aa87be", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "57a4f8f69908d3474565d3cd6f58b1ca651ff673": [ - "aade40af0d85b0b4fe15c97f6222d5c2e4d6d9b3", - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "57bb978b8075fd5701a61770c5ee7244c414e8fd": [ - "28a7ced384549eaae74ea9ad3ee21189a0625afe", - "930e86d49477c9d3305cd1f9d01b93749f85bb8b", - "717392dac099d1506b766787382d61b277863163", - "dd889342b0de45f7434cdfa7543e3bd46ec824cb", - "7fa85f9c0fe44f1bf9e58a55f0f009296578c2f0", - "3ec5f0da304a606c5989de5b00e1246ee64b3e46", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "6a483cd1cbecd66150c9bbcd01606723950281bc", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "d53e70d834243d3d8d4b621c0c52dfec26081155", - "cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5ce2a1dc9dfa8b4f1368220ac7f7d30a395ffca9": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5d49c7401c5f2337c4cc88d243ae39ed659afe64": [ - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5de60d53bce194b34dae1e531876af9acffba1a3": [ - "6b5d1e50894b1f28e4798cf20e9ffa88b9ec011a" - ], - "5e8dd82419f78025093acbec3ba2e345fff85d11": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "615962d8969c8e0ffe43319689dce6c50cbf1f29": [ - "ac7771c332da42b29a913b116bd6ef622cbf89cf", - "0ba581718f294db1d7b3dbc159cc3d3380f74606", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - 
"6b85b63579a916f705a8e10a49bd8d849d91b1fc", - "8fdd34153d1035d09dd4a6efa9cb0c91d23d0045" - ], - "615ef4518f9a41a10881b66ce10f0eb490e2d75c": [ - "d6c73f758b05f38529c1a96cab7e908a2047dabd", - "96ea07447d2f9adefe03852a878517a2a6d45b96" - ], - "632dc69c2e504d693533fc434b8122a2a8a42844": [], - "6474370fe46e38896288305c35d3058a403b1db2": [ - "0ba581718f294db1d7b3dbc159cc3d3380f74606" - ], - "661e8ac4908a9d2a85835245ea99b6a314cc4a60": [ - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6862113724aa1a578c5d4e0ec7f1d9bed4288241": [ - "385376b8aa48c25403f17d6206db7c09b67e1314", - "08b85bce712168998004ee80ce4e475390413c74", - "0392d58335ce674a70f5e58ac8c438de296a0e6a" - ], - "6b8f26678785ebd7b7b27984af3cb9a273b722b0": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "71020779c6eeb9c76fe0a0eb2155d1d4f7d29ff9": [ - "d2cfabde7383704e876373e9da9891714b0bd62b" - ], - "732627c703a9dbc78d9384f1be4c791c3a554391": [ - "d6c73f758b05f38529c1a96cab7e908a2047dabd", - "f3ce9ba3fcec362b70263a7ed63d9404975496a0", - "dc05240a06326b5b1664f7e8c95c330b08cd0349", - "96ea07447d2f9adefe03852a878517a2a6d45b96" - ], - "7619a98ef077c8f75e0bfb98953457649209e07e": [ - "385376b8aa48c25403f17d6206db7c09b67e1314", - "a7f8fd45fbcdd81449cb7a1a6a2b2c18b38f8151", - "cff26bda86237d113ed01c812ad8bedd0afbe070", - "a171780f04780f1dca6965e6c451915b8ff5458f", - "0d0dbfb1b315a43216020abaf74d289456198219", - "50a260631a28bfed18eccf8ebfc75ff34917518f", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "dc05240a06326b5b1664f7e8c95c330b08cd0349", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "76bb8f753c40d66f435015f2776c672b3999d8b5": [ - "632ab7663e6d64578ceda1d1df9ec525b503bacb", - "e5d30a65cb267dc185770c40cce732f9770cbd27", - "0968f1592f9401d72bf0d97e740496818c1a3135" - ], - "790c247dabe004f022ef9330fb59c36a77bdbbb2": [], - "7919cb1a1dcf70ed7803c43a71d43dba696ef149": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "793eb805800c4af0b06260079e178efa0377b9d7": [ - "385376b8aa48c25403f17d6206db7c09b67e1314", - "cff26bda86237d113ed01c812ad8bedd0afbe070", - "08b85bce712168998004ee80ce4e475390413c74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "7aad760762c4a10dfbc2d3391eb8bdb28c80b236": [], - "7d3d98707182e0733d8cf5ee763314c60d638f4a": [ - "75ce9634d281cc12cbe434f86c737df8e10796fa", - "3fa956ed52c7038e099223429900e9ca5baeab21", - "486a8c8655b81c7f87ff257141466ec1186d4aea", - "bb58f2f63888456a3e04a56a18996ab8dacdb257", - "c6808575096a6e4f3cbdc5f893384bc5a01cc6f8" - ], - "80785017029cab501fcdb90b98985cd2b36e1fb8": [ - "0ba581718f294db1d7b3dbc159cc3d3380f74606", - "3599a236f285af48782fc30b1341d13ec7320735", - "6052486bc9144dc1730c12bf35323af3792a1fd0", - "197ba7bbfdbb052b0770088815c110774220f397", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "831fd0c18d10e42330cca36e0c5769762fb419e7": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "877e27a1d89095fcf686ab675f62a8432d3285ee": [ - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "d53e70d834243d3d8d4b621c0c52dfec26081155", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "8a1a8290f7d42b0ce60445a4c0130ef737b3ff69": [ - "c76dd4a70361c3afd2e19d046343e2dedd16ecc3", - "4f53020119eba43891d4566df28466a92229b8fb", - 
"c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "8a419947c46b8fa491ec613664372e376eb9f0c6": [ - "aa2fa431ce1d5a8d56d138e3330d3df381d36e3a", - "ec27f85979899a4193a8ec3b932ddb677c59be62", - "bf22ef16a6a912763780aea454198edc3e2bb3c9", - "0968f1592f9401d72bf0d97e740496818c1a3135", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "8a8ac2467aee4d70866a1b2410e59565ef6ae292": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "8c52b3bbe5897ba3f42b38c5bfc33bbd48f9a1f2": [ - "4e3c65511292a800b17be6653bd057e7a545a0b0", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "8c90bfe05c06fd47eaec0f5b1662e06862572afe": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "8ca384547bb4b21b7f38d478119bf3168eb9c9cd": [ - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "8efcdc15c5f028f968d6a004a64593245c49927b": [], - "8f93f95e093aab16e594b4a246a205007e107c7a": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "90b1baf2cf299ef0e0ef7611a12311bd6cab3ed7": [ - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "91a291780103b328f65e700896ae6fa2230ec2e7": [ - "96ea07447d2f9adefe03852a878517a2a6d45b96" - ], - "975da5bb7fdd800ba577535d8c6ee5a5bc835d52": [ - "bf8491bef353df126e2306ad2fe4b898697b906a", - "ef3a38b9f15e9dcb5652cb3f86f19b845cdaaef7", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "97717368b5f7e6a544f0a1c73a441bdcb4b6a046": [ - "763eb8d43e2f8a5d9da26269a4985efd1c099a5b", - "08b85bce712168998004ee80ce4e475390413c74", - "663a41c866d49ce052801fbc88947d39764cad29" - ], - "994a6040fab375669a92cab0e67fb2fd203cd67f": [ - "cfc12c38a4d848ff3c4225488a2c72e7d4300f4b", - "1dd344ce28f1e5a078f9d8396b5078388e555d99" - ], - "05fab50acb26203a944a955131a2388c9731a8f7": [], - "0704a96e1c57c12031f1c3ca492a91dbed1f61ce": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "fa133b4200729a57db96ae50aff8c4a5ff819f43", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "075e16a0774b1a9d44a7d512c50b7f997e16befe": [ - "544ecfd78567dabc8684b79e5cc3fb9c126a494d", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0783c214623c18f6a8ad96b8eaf4a67a382e68ee": [ - "e86009d9f9b1cdf083a48d087552bc4153784451", - "769f8b6d73fcb5b5013ae2c1c48b94c801c88ba3", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc", - "1403e6b9adf7712c35ae56327d52fe54603b87e1" - ], - "07bc02bd16f6fe78a7ea3bb8d966fcc6e3893195": [ - "914254fac74a2da051cccf6ca16afcaad416a079", - "b21670e8061a06ab97e7d6052c9345a326e84ff8", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "089f6328085066263fedc083952624ca121ebbf3": [ - "a7f8fd45fbcdd81449cb7a1a6a2b2c18b38f8151", - "020e473d8c987dcfb03fcfffeb87b17812447031", - "cff26bda86237d113ed01c812ad8bedd0afbe070", - "3fc3460c4554a28e489a0ea6ef067b79b7d301d9", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0942bd8fad71282994ff4e9a779c09745da68edc": [ - "1c0c13edd4442ceb7eac70bbcaebaf84512f9a3c", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "a74b7301d2df228c266c6405dceb547d07a022fa", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - 
"ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0953ada119f384f328b6102e6b7963b3bde7cc9e": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "09b7338021fff3200c2098b19824aecc83a66cb5": [ - "3268da152f1149675b5f1cfd03f97026128b9e09", - "d6c73f758b05f38529c1a96cab7e908a2047dabd", - "f3ce9ba3fcec362b70263a7ed63d9404975496a0", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a" - ], - "0b413633f14ec7f96948067abf1d4ca930fa38a1": [], - "0b71af0bf02ab58b8d8e342c1c803322cfede603": [ - "a221def07d3cd4f29c38234e271faf8a523e0f5a", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "769f8b6d73fcb5b5013ae2c1c48b94c801c88ba3", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0c75cda2bb0812217bf0e5460e910212ad512944": [], - "0d09c569477457c32637f9e866727cc4623b1165": [ - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "104c878d17a179e86ba094b221993cfdd3277943": [ - "914254fac74a2da051cccf6ca16afcaad416a079", - "cfc12c38a4d848ff3c4225488a2c72e7d4300f4b", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "10717aefce06cc41465619ec8c956f4b0b0fa6e1": [ - "732627c703a9dbc78d9384f1be4c791c3a554391", - "605c32428861eb26b8631617b8f6c97a850d6a04" - ], - "1dd344ce28f1e5a078f9d8396b5078388e555d99": [ - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "21e46f11898748778a31b5b2fe2f60128eb66ba1": [ - "30873c32db5a219a58be928d5692cce48be1d3a0", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "59ef1b67c5f238d5d6d175d84fb6b239b4221a97": [ - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "30873c32db5a219a58be928d5692cce48be1d3a0", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "632ab7663e6d64578ceda1d1df9ec525b503bacb": [ - "166b64f2ae8e52f5779682fab756cbd617a6e74b", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "7db7653c581d7823cb9c328f2d742ec70d7a0ce4": [ - "66242baf48b0f6b828e7547ac39ffaa5e1b2cb3e", - "0f4ab3fe492ececbfd38be9682047371e2e9b8c6", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "80c0416048614be75362c2c332d22dd1d2b22f65": [ - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9ecf603dbebbfbdd9858d21903c77074d12518b4": [ - "85e7d63f75c0916bd350a229e040c5fbb1472e7a" - ], - "a07701abd506f67368cb75ef2b649dd51df7abd4": [], - "a29a0e679e626e8961dc217081eae2a6c63a15ad": [ - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a45bdbbf9a197a21ef97291c60b77de47bc51db2": [ - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a81470aa3721f6cd8a61139f9c4c60923bee093f": [ - 
"22d5459d1f47341b355feeb1becc37208d6ec365", - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "6052486bc9144dc1730c12bf35323af3792a1fd0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ab346a9d9a71bc59671e52cae96eabba16c24eeb": [ - "d6c73f758b05f38529c1a96cab7e908a2047dabd", - "96ea07447d2f9adefe03852a878517a2a6d45b96" - ], - "ac7e270fcd365c84b29a710d58bf1243e850df4c": [ - "0100785773b8217c44606ab260e3212f93b0a4fd", - "f64e49d76048c902cc02e8ae27dcd4ac0dbcb97f", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc", - "eaa20a9090861a778d754a6d50dbb0396ae09e80" - ], - "b159dffadb69940e14693e812bdaa32e3957717f": [ - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a" - ], - "b1d5c08a6fb6a5ee5b6b6693e10a587733ca05ed": [ - "65d88194a902332b78dd5a7b919fa577bfa7ee9f" - ], - "bad6fa523ecf782c837a2eecaaffa4e1f7477c24": [ - "c879413103f8950bdd414c7f60a39bd7748c9be8", - "197ba7bbfdbb052b0770088815c110774220f397", - "5437e8adab596d7294124c0e798708e050e25321", - "d3640eb3b542eaf36fee2261f037a6bf0d8eac9c", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c10ab4733b43f19547308c15ca231a668181a36c": [], - "d235a9085e0543fcbe502fbc269f9a8ee01dcbab": [ - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d96997265f8146e93b4c9350f19d55e46d1317f0": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "db0d67057b41927b5b51d3a393c250be64a405ae": [ - "93d6fa92d60938b5bd0e405e159832b91332f169", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "eb36681fc4c5dfce4f3e05540fc92b007de278ca": [ - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "34d24b2d9f116f8f652c112d4ac924afcf11bd0d", - "e5c72b92c48d68594b290c84a8904da7c8335554", - "2e6fa3095df1d1ed041dfb4f5a18e31d4b7bd7bb", - "b115c1e1e9e51f8ad7d47b745bc04e29a654b84d", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "398e4061dde8f5c80606869cebfa2031de7b5b74", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f4cba0db34aa0c389cec267ca1f3ba5255ea2645": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f7ccf8ecd508e0b2d423169588dd1c1a82dd3b4d": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "fb1d85fe28b5e92e22d084eca674d4a2b48cdc5a": [], - "002cfed5d4d9bf2fdaddb11d32f14751f2250e0c": [ - "4d17732d90440682b0500f4e209c6cc4fac20e0e", - "40047a74b707743157051d38f76061ba5ff9aab4", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0040dac7a1bf7a1eeb01c86ddb993f331f35b158": [ - "341bdbcfc3febef7691a97c216ad394653211095", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "03d8b1e78d124a561f3c2a67d3199472ee73228d": [ - "9c39e942b87cbada41a4a52364f996915c7c2d98", - "914254fac74a2da051cccf6ca16afcaad416a079", - "0d6bb585493e34975f0437faa3179db3a02f6ae8", - "bd16d3d5f4f237bd76395b17e56cde3f01a41584", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - 
"03fb95e6be583ca954c3d00812a9e9a40f118e51": [ - "4d17732d90440682b0500f4e209c6cc4fac20e0e", - "663a41c866d49ce052801fbc88947d39764cad29", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "5581bf85386737bd3378eec68189759a05280bea", - "5437e8adab596d7294124c0e798708e050e25321", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "04526876688e5a56106629229309fae272da1c79": [ - "0ae12d63f77f40b430f17c791a5191ff5fee5086", - "cc0f0cb09a73f82ed44d900f5ca710bec784acc1", - "dc385646887a3669ae0ee506a263d592f4f7c7a6", - "b6207fe49e29c77402f8dbab052e949990949609", - "48abfc41a0abf023d2037ebb2f274835e0d322d0", - "3d5922d71a370f32b7f232a596def914f67eebd1", - "34ff1da13770908ef0bf389365cdde743d3c9db1", - "097dc73d5d422b3c09286e72d16b2561ae5fb395", - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "46d64d0c1dd240f5035b1af57e738b3f70850ca2", - "7ffb212356df9980347b3d3b9910dfba75a5d0c7", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "b645e706651391eca1f692e7f560051c21b3dea4", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "04e838c16f3d1fb8d69d34fe0a0a92c59717875b": [ - "4780d0a027c5c5a8e01d7cf697f6296880ffc945", - "2e6b6de08f459e2165b11ed8d2103916966b0fcf", - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e", - "7715ba5e75f5256e1061c7473afe61bb0dbb9065", - "4d17732d90440682b0500f4e209c6cc4fac20e0e", - "4988b3d378b79eb8669112620baf1ff4e3e536fd", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "50b0c6ee2b3d53ba5af69d6c00b5d60888a9026f", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "088ba3cfb904ccd0aa1993a1e30c725b061aad7e": [ - "40047a74b707743157051d38f76061ba5ff9aab4", - "6e10343767ab09dde83cf99ea3442907402a9810", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0adec918885dff698acf359988ed79a543157f80": [ - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0ba5fb80d2c3ea3a8505415e32d954b4e4eea170": [ - "711d5e8ddbb840ad31a9ffa3d38590603ba69a92", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "0968f1592f9401d72bf0d97e740496818c1a3135", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0d42221038c05cee8443c5b5af838505ee137dc3": [ - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "663a41c866d49ce052801fbc88947d39764cad29", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "e070ff286709db28312e08b52b05539debe88146", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0d6bb585493e34975f0437faa3179db3a02f6ae8": [ - "2430cd1ac1c480894f2ef9368b8962a3a73e7b57", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - 
"0f0a973c6457bcaf7255f891f9b34d658a0a84ae": [ - "097dc73d5d422b3c09286e72d16b2561ae5fb395", - "d3640eb3b542eaf36fee2261f037a6bf0d8eac9c", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "107aa1e3b1ce604d953475baf98674e92a723bda": [ - "f8a2dca1e8fe56e698984c077f7ff58d8ca867e9", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1786a2f9140ed7211b21302977de64e948b92308": [ - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "197ba7bbfdbb052b0770088815c110774220f397": [ - "64ce6ef1f5cf227bf2bf917c87273386ae16256f", - "3b16a709a5b18e52b0b6741cbc3c0e68a03ecd8e", - "9689acb6cb760e8bc21c16f368368b37dee977f9", - "bad6fa523ecf782c837a2eecaaffa4e1f7477c24", - "c879413103f8950bdd414c7f60a39bd7748c9be8", - "515cf674fcdced5a7d5bb156dd5fcc1f5290e79b", - "0392d58335ce674a70f5e58ac8c438de296a0e6a", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "355b66a65aee97822eb7404183ee72b18cb648de", - "0adec918885dff698acf359988ed79a543157f80", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1ed5d06c4dc46e6a983597b740ab0a31d0ce22ad": [ - "17170575aa8b4fa4e3eef5d366ada706a94dd836" - ], - "1f0dfbbc13ac31de8709bbb4d0f6478aa1222cef": [ - "6d951d939d3f27054215f2606a0cf89ed21550e9", - "176ec99005b5085d5d9a34fb770d75d34166c9f5", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1f86bf1e334200ec0481349255559fbfe7a33caa": [ - "a8fd9c1625011741f74401ff9bdc1c584e25c86d", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2069aaaa281eb13bcd9330fc4d43f24f6b436a53": [ - "8d17234680db76f99efd22fbcb169f45d2d79d93", - "f8a2dca1e8fe56e698984c077f7ff58d8ca867e9", - "c76dd4a70361c3afd2e19d046343e2dedd16ecc3", - "cc0f0cb09a73f82ed44d900f5ca710bec784acc1", - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "69619a2a47faee7a29ec596db13172e2a42ff921", - "03532123ccffae8d411264320e8a5ae2b6eddea0", - "f208ea909fa7f54fea82def9a92fd81dfc758c39", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "711d5e8ddbb840ad31a9ffa3d38590603ba69a92", - "66242baf48b0f6b828e7547ac39ffaa5e1b2cb3e", - "e070ff286709db28312e08b52b05539debe88146", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "398e4061dde8f5c80606869cebfa2031de7b5b74", - "e3a9af420cd2c0c8241856da92374027fefb87be", - "b17cc18e4130505b939f7d527082eb6be2a7fd5b", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "c70eb74e09c41e8fcc71dd59e3b4d631f657f7cd", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2522410b1cac0c14fa656a0aaeaff08bacb358a9": [ - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f" - ], - "2577d053f8aab912d29b424e1f09133d83740fd2": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2a99239f09e95f4dbccec572d66f4519206762f9": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" 
- ], - "32426b96ff3c680125bde3b835bfa931288b8ade": [ - "c2391a8c8e24a450f00810ecb441e26413ea3791", - "53661ff6fdbfb8557c5b19895fad151792c62da7", - "7ffb212356df9980347b3d3b9910dfba75a5d0c7", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3566e1245bfc90096fe0cdb8b18674da6519c8d6": [ - "e86009d9f9b1cdf083a48d087552bc4153784451", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3599a236f285af48782fc30b1341d13ec7320735": [], - "3841234dd49250c4fcbba79eed6593d3b57932c1": [ - "e5d30a65cb267dc185770c40cce732f9770cbd27", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "0adec918885dff698acf359988ed79a543157f80", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3886f3bd2a0af9e75bf9fa5b7db4224969dbf346": [ - "1abfc211793c683972ded8d3268475e3ee7a88b0", - "6cd26d124ffeb6ce301ef351aada27fa0852f81b", - "3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321" - ], - "3b88526a0f0337e3a6b632b4af8fd0882eb4b470": [ - "d96997265f8146e93b4c9350f19d55e46d1317f0", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3d7d385d9ee75a286e8da27f7d3cf9f12651c899": [ - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "41531594d7e0f3b2e138ae43e0a0f6e24a9b014c": [ - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "166b64f2ae8e52f5779682fab756cbd617a6e74b", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "446fb5dead075a1a08862662738f462e9a0e91c8": [ - "b458fc5261595f44b36325e5eaea1f874d65138f", - "62176de125738e3b95850d1227bac81fd646b78e", - "0d42221038c05cee8443c5b5af838505ee137dc3", - "e05483a41e8002e7024d39457e55a3fe533f5835", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "b626560f19f815808a289ef5c24a17c57320da70", - "03532123ccffae8d411264320e8a5ae2b6eddea0", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "e070ff286709db28312e08b52b05539debe88146", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "b2542a738b75ee9b7ce1a13d8b78f9095d212412", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "c70eb74e09c41e8fcc71dd59e3b4d631f657f7cd", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4988b3d378b79eb8669112620baf1ff4e3e536fd": [], - "4e1a4d6804c7983c659feb7e41d49ad8c21aaa43": [ - "39bca01efce8765f0a5d3a8981bc30d56f196b96" - ], - "53addc28b106440a3c306b2cff8e259ad63d6d53": [ - "930e86d49477c9d3305cd1f9d01b93749f85bb8b", - "717392dac099d1506b766787382d61b277863163", - "dc385646887a3669ae0ee506a263d592f4f7c7a6", - "3d5922d71a370f32b7f232a596def914f67eebd1", - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "587352c3b95c90de6d37f061c8e117f42be0b575": [ - 
"41531594d7e0f3b2e138ae43e0a0f6e24a9b014c", - "e7ad08848d5d7c5c47673ffe0da06af443643bda" - ], - "5df5ebcaed745a5252b4fae64dc1d7ca90e68ff6": [ - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5e3675bdbe898cb28a0fc3c2f72a578a97fe64bb": [ - "30873c32db5a219a58be928d5692cce48be1d3a0", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5f5253fb15ac382e96ade0335baf1cfaa240fb1d": [ - "458147b5f7242c998ec4f33798a59b7c48867329", - "cc43306e22dbfd5bc35251ab8c8ba37e4fc2a1b3", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "cfc12c38a4d848ff3c4225488a2c72e7d4300f4b", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5f88b907cb6b79ce22e826832f05c0471ecb095e": [ - "03fb95e6be583ca954c3d00812a9e9a40f118e51", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "61bbdbf481a6d3519c22513ebe8d6c3cd381851e": [ - "7dc928f41e15f65f1267bd87b0fcfcc7e715cb56", - "91bc42852997bc774467e9ef8cda19a85507f663", - "cc0f0cb09a73f82ed44d900f5ca710bec784acc1", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "0adec918885dff698acf359988ed79a543157f80", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "663a41c866d49ce052801fbc88947d39764cad29": [ - "e66f0f822d4c4853b39b27daaafa2993005fd55e", - "40047a74b707743157051d38f76061ba5ff9aab4", - "4988b3d378b79eb8669112620baf1ff4e3e536fd", - "142ebbf4760145f591166bde2564ac70c001e927", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "0d6bb585493e34975f0437faa3179db3a02f6ae8", - "5437e8adab596d7294124c0e798708e050e25321", - "341bdbcfc3febef7691a97c216ad394653211095", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "67daf8c4fe1958d20ebdf95c2a36dd490c73836f": [ - "fc50a6202e2f675604543c1ae4ef22ec74f61ad5", - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "bbb2fc6e95d24fb58ab6c25b216b14ac49a32fbe", - "0ba581718f294db1d7b3dbc159cc3d3380f74606", - "e070ff286709db28312e08b52b05539debe88146", - "41531594d7e0f3b2e138ae43e0a0f6e24a9b014c", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "68040213e9a83408cdc491ed3e235b52b537eed1": [ - "47c3b8dd2c8a9326249ac98900b2c3fc71f46ab1", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "40047a74b707743157051d38f76061ba5ff9aab4", - "c03fa01fbb9c77fe3d10609ba5f1dee33a723867", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7": [ - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "663a41c866d49ce052801fbc88947d39764cad29", - "e070ff286709db28312e08b52b05539debe88146", - 
"4988b3d378b79eb8669112620baf1ff4e3e536fd", - "e3a9af420cd2c0c8241856da92374027fefb87be", - "b17cc18e4130505b939f7d527082eb6be2a7fd5b", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "700da3f3758e053c379f905bebee261ba69f1073": [ - "03532123ccffae8d411264320e8a5ae2b6eddea0", - "4d17732d90440682b0500f4e209c6cc4fac20e0e", - "663a41c866d49ce052801fbc88947d39764cad29", - "e070ff286709db28312e08b52b05539debe88146", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "30873c32db5a219a58be928d5692cce48be1d3a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "711d5e8ddbb840ad31a9ffa3d38590603ba69a92": [ - "398e4061dde8f5c80606869cebfa2031de7b5b74", - "142ebbf4760145f591166bde2564ac70c001e927", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "5d49c7401c5f2337c4cc88d243ae39ed659afe64", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "72491b96d8a614d1a9a099707d44593d4b5a8f49": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "755853c6b30f5a186131e23a63c68a3f2737068e": [ - "0ba581718f294db1d7b3dbc159cc3d3380f74606", - "c03fa01fbb9c77fe3d10609ba5f1dee33a723867", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "75ce9634d281cc12cbe434f86c737df8e10796fa": [ - "214fbadc57e954e325dc055fee5ac0e224dfde11", - "62176de125738e3b95850d1227bac81fd646b78e", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "b17cc18e4130505b939f7d527082eb6be2a7fd5b", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "d53e70d834243d3d8d4b621c0c52dfec26081155", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "7655f05cd394da6cb0f707068203c9ff05d8f05a": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "7beec352ac2597c3cd3dc7aceb2f8cd068b72d15": [ - "748a2700ec11f51560a69ec05c67ca9f97014be7", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "7cf4f8cb8b4a373d869e785b79160dda7a49a250": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "7dc928f41e15f65f1267bd87b0fcfcc7e715cb56": [ - "3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e", - "097dc73d5d422b3c09286e72d16b2561ae5fb395", - "663a41c866d49ce052801fbc88947d39764cad29", - "0f4ab3fe492ececbfd38be9682047371e2e9b8c6", - "4988b3d378b79eb8669112620baf1ff4e3e536fd", - "50b0c6ee2b3d53ba5af69d6c00b5d60888a9026f", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "9ffefdf1fcd780cb71450b0a7a29247c66aa87be", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "9ba50f992ccd92f428503ea6246157260a26cd77" - ], - "7df3595bdb4003589e8ca1757cc39ec03a39a2ff": [ - "aad167be3c902388ea625da4117fcae4325b8b7d", - "9ffefdf1fcd780cb71450b0a7a29247c66aa87be", - "341bdbcfc3febef7691a97c216ad394653211095", 
- "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "2ee03e28208a9310a9be4032c2b04ebdddb83cc7", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a" - ], - "815c6ca281536d18ec0eb408b6e46e72a0826163": [ - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "53c0abe83fe9b4fdaf2208295d8504fcf5241694", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "82beb8a86d438e85a134182128d47607b1b04004": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221" - ], - "895f3c9e452ae51fb02786de424ce6d2bba11c3b": [ - "9ba50f992ccd92f428503ea6246157260a26cd77", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "8ab27849799286459465d2262f926354093b20a9": [], - "8f84dcbad8cd3b5b4d9229c56bc95f24be859a35": [ - "41531594d7e0f3b2e138ae43e0a0f6e24a9b014c" - ], - "927fc7652e033c9eb17296df087e3e6491112bb0": [ - "df2beaae63e4d68ef8e762bcd4704c9f11f856d9", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "97782a67971c4ff1a74bf07e82fe20b2c4bf86c4": [ - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "0adec918885dff698acf359988ed79a543157f80", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9a9b1e2968302eb882870537d4af6e2c722dfd1a": [ - "3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e", - "4d17732d90440682b0500f4e209c6cc4fac20e0e", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "0adec918885dff698acf359988ed79a543157f80", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9b9fb973e5d3b413baa90648d9eb0743bd889747": [ - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9bf587d032e3764720cccd5beaf941f5c32880bc": [ - "458147b5f7242c998ec4f33798a59b7c48867329", - "03532123ccffae8d411264320e8a5ae2b6eddea0", - "ec27f85979899a4193a8ec3b932ddb677c59be62", - "cc43306e22dbfd5bc35251ab8c8ba37e4fc2a1b3", - "b2542a738b75ee9b7ce1a13d8b78f9095d212412", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "0f733817e82026f7c29909a51cb4df7d2685f0e7", - "d3640eb3b542eaf36fee2261f037a6bf0d8eac9c" - ], - "9c01786f8195d53ad3902fc8d0872784b059adf3": [ - "e418bddc14666671c4df6a9747f39f0f522a1bad", - "587352c3b95c90de6d37f061c8e117f42be0b575", - "b626560f19f815808a289ef5c24a17c57320da70", - "5f5253fb15ac382e96ade0335baf1cfaa240fb1d", - "41531594d7e0f3b2e138ae43e0a0f6e24a9b014c", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a04883d1d780b438de6c127caf7ebe3d9233e193": [ - "732627c703a9dbc78d9384f1be4c791c3a554391", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a3a241e9397fe29b37f96cb5e8f4b8bebed3d3da": [ - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a6a0963fcf21ed47a2616ca3980f8f4f21e6d5ad": [ - "1c0c13edd4442ceb7eac70bbcaebaf84512f9a3c", - "96d6bb5d6abdeda9b2db9af6296527200ba7aa32", - "e5c72b92c48d68594b290c84a8904da7c8335554", - "3fc3460c4554a28e489a0ea6ef067b79b7d301d9", - "886499f0ab825a266f953f952dccda4b721e80f7", - "3f83582c08a62e5bd02398fafc93f7eaf1e4b84e", - 
"e070ff286709db28312e08b52b05539debe88146", - "1c1ca2392155ddf30408a442e6b504b5d60d4f2a" - ], - "aad167be3c902388ea625da4117fcae4325b8b7d": [ - "0783c214623c18f6a8ad96b8eaf4a67a382e68ee", - "0f4ab3fe492ececbfd38be9682047371e2e9b8c6", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "521ccc898395a2818fced22b4cf371b0e5121f94", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ac3cdb50606f7770eef8e4cd951840a4f71287a0": [ - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b3d6fec3f1a878b0c612f0ffed820b045c2c46d8": [ - "6052486bc9144dc1730c12bf35323af3792a1fd0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b4170009de40c1c46adea6a314734434ecd4b0dc": [ - "197ba7bbfdbb052b0770088815c110774220f397", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b6bea98ca29267acbebca6cdf64eb07a5671e000": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b6e5855b6a4e425ba251a93516f2bccffe5ba403": [ - "3b16a709a5b18e52b0b6741cbc3c0e68a03ecd8e", - "c879413103f8950bdd414c7f60a39bd7748c9be8", - "515cf674fcdced5a7d5bb156dd5fcc1f5290e79b", - "197ba7bbfdbb052b0770088815c110774220f397", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "914254fac74a2da051cccf6ca16afcaad416a079", - "b21670e8061a06ab97e7d6052c9345a326e84ff8" - ], - "b70075b496c1f519093884945be5670c32cbceed": [ - "e418bddc14666671c4df6a9747f39f0f522a1bad", - "94bcf0390d5acb1b92323bd15cc1dc311314122c", - "1fc21645ccc8e99eb8162e5f91407148b7f77e3d", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "0ba581718f294db1d7b3dbc159cc3d3380f74606", - "41531594d7e0f3b2e138ae43e0a0f6e24a9b014c", - "5437e8adab596d7294124c0e798708e050e25321", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "be177300487b6d0f25e6cade9a31900454b13281": [ - "03532123ccffae8d411264320e8a5ae2b6eddea0", - "e070ff286709db28312e08b52b05539debe88146", - "b2542a738b75ee9b7ce1a13d8b78f9095d212412", - "398e4061dde8f5c80606869cebfa2031de7b5b74", - "c70eb74e09c41e8fcc71dd59e3b4d631f657f7cd", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c1647923704251875f4160e91b59afbbdc58483e": [ - "f2cd02c03d0169374442d9bc227c9aed178f4b20", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "515cf674fcdced5a7d5bb156dd5fcc1f5290e79b", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "0adec918885dff698acf359988ed79a543157f80", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c2391a8c8e24a450f00810ecb441e26413ea3791": [ - "53661ff6fdbfb8557c5b19895fad151792c62da7", - "7ffb212356df9980347b3d3b9910dfba75a5d0c7", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c5fa70db839fd05b1111f3586a601d8a93e78d0c": [ - "08b85bce712168998004ee80ce4e475390413c74", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722" - ], - "c70eb74e09c41e8fcc71dd59e3b4d631f657f7cd": [ - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ca7bd64d372e3bcb3f4633ca4a20291ff57de3c3": [ - "0cfdd655100055f234fd23ebecd915504b8e00e3", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "cc43306e22dbfd5bc35251ab8c8ba37e4fc2a1b3": [ - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ccc772d88c231275f24c4fac9b28bbe0942e1107": [ - 
"e070ff286709db28312e08b52b05539debe88146", - "b2542a738b75ee9b7ce1a13d8b78f9095d212412", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "cd77ea482d9245f3fcaeb670261a00c3fb5cabbd": [ - "293499319bdd460cb3fca1f0f5eb330e64bf3ff9", - "c879413103f8950bdd414c7f60a39bd7748c9be8", - "197ba7bbfdbb052b0770088815c110774220f397", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ce0154d9251f67c262512b6e598f3aa3ba9fe9a4": [ - "aad167be3c902388ea625da4117fcae4325b8b7d", - "4d17732d90440682b0500f4e209c6cc4fac20e0e", - "663a41c866d49ce052801fbc88947d39764cad29", - "42e1790c7979796634d15920b4a08990e847243e", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "341bdbcfc3febef7691a97c216ad394653211095" - ], - "d4fc988c6510420a5290dfe8d1a991ca4878d696": [ - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "3fc3460c4554a28e489a0ea6ef067b79b7d301d9", - "5f19ae1135a9500940978104ec15a5b8751bc7d2" - ], - "d589c49e1cd1dd3b994dcac01b4c6e7fb8eef161": [ - "62176de125738e3b95850d1227bac81fd646b78e", - "c6808575096a6e4f3cbdc5f893384bc5a01cc6f8", - "9a31b2ce43fe198ab1fd046ca4ec70fded154aee", - "e6210c6f37bda0a9add8f01acc98ebd2e370814a" - ], - "d5a6fc6aa139066e3b66ba63002e7d84c109aebc": [ - "385376b8aa48c25403f17d6206db7c09b67e1314", - "08b85bce712168998004ee80ce4e475390413c74", - "bf22ef16a6a912763780aea454198edc3e2bb3c9", - "87e02a265606f31e65986f3c1c448a3e3a3a066e" - ], - "dca6c3927ade6481a1ae080f5c24decbfeced1be": [ - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "b17cc18e4130505b939f7d527082eb6be2a7fd5b", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "dda0f7f086fc875d583604f8b0cf4a8678bc4de4": [ - "914254fac74a2da051cccf6ca16afcaad416a079", - "5437e8adab596d7294124c0e798708e050e25321", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "355b66a65aee97822eb7404183ee72b18cb648de", - "0adec918885dff698acf359988ed79a543157f80", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e69684fb06a7b1fe621d7ef0c97fc2ca0e122c43": [ - "70da4fb798a86cbe8cad96c27ced0415885bbd9d", - "ee805f55c98920f74d0182aaf136330a97b4123f", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "eda54452d8a8a412c2a985ef11572cb468906b1f": [ - "1fb5a5298747b8c7d60f98640a543f20d42ab053", - "3b27092740a489a63589cdcf40fad6a0e093daa0", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc", - "1403e6b9adf7712c35ae56327d52fe54603b87e1" - ], - "f00e7326baa9600e46b3a8e7077dc3a349f90a01": [ - "34ff1da13770908ef0bf389365cdde743d3c9db1", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f834aed32f5531bfa426faab71878c549572500e": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "386bd4d25043516f076ea7b2296a1ebec84f43ce": [ - 
"cff26bda86237d113ed01c812ad8bedd0afbe070", - "6052486bc9144dc1730c12bf35323af3792a1fd0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4d3a49d1439a0b8fbb0e9f588970ad0f1d70dec8": [ - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "e66f0f822d4c4853b39b27daaafa2993005fd55e", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d1aa858644154af50e36860e6761ae52ae655bd3": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "034f1d77d832460a239072c81b5bb178b93c1e9f": [ - "214fbadc57e954e325dc055fee5ac0e224dfde11" - ], - "0786c88990235414611478099e43611542d973b0": [ - "f8a2dca1e8fe56e698984c077f7ff58d8ca867e9", - "e070ff286709db28312e08b52b05539debe88146", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "99752e255a866484291866a5ff5cf94e96d6bdc4", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0820a7ec1b7cac3470836161a92da7d59f626d14": [], - "118802f91718ea2c566f2eaf1b4e25c439459f4d": [], - "19b43ff57e5d8f8a99da4110fbc30b4ecc39a527": [ - "b626560f19f815808a289ef5c24a17c57320da70", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "b17cc18e4130505b939f7d527082eb6be2a7fd5b", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1ad735714ad2e4ee5b94ce26c976e5ee5c7cde3b": [ - "e418bddc14666671c4df6a9747f39f0f522a1bad", - "587352c3b95c90de6d37f061c8e117f42be0b575", - "94bcf0390d5acb1b92323bd15cc1dc311314122c", - "4780d0a027c5c5a8e01d7cf697f6296880ffc945", - "c03fa01fbb9c77fe3d10609ba5f1dee33a723867", - "41531594d7e0f3b2e138ae43e0a0f6e24a9b014c", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1fc0e5b30bfede1b78389d00f8c41bacd29ecd7f": [ - "3599a236f285af48782fc30b1341d13ec7320735", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "20cb4e0bd8871d33d82fc72ea82a0aa1dd922810": [ - "1a55d16c14587edda62dc9c9ff09e0b531dd169c", - "9e93ab728e3e174ec1492009055885a9123d434f", - "7c1707db9aafd209aa93db3251e7ebd593d55876", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "6052486bc9144dc1730c12bf35323af3792a1fd0", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "1c1ca2392155ddf30408a442e6b504b5d60d4f2a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "26d31d641116b656826737335b2accb802ac9931": [ - "42780f9c7f73d73d7a887e2f787af0e079703d40", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "29965a1efc21a637e03a5e0a869d77eca77f5085": [ - "96ea07447d2f9adefe03852a878517a2a6d45b96" - ], - "2a663560b669a0b8d975675b3ac2546cc7386f3a": [ - "f4cba0db34aa0c389cec267ca1f3ba5255ea2645", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2bd1b8990db73b6495c11082bea2d5f925c5226f": [ - "4d3a49d1439a0b8fbb0e9f588970ad0f1d70dec8", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3" - ], - "352bcafbcc95a84d96019688955cab5c43eb23f0": [ - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74" - ], - "3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e": [ - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "663a41c866d49ce052801fbc88947d39764cad29", - "e070ff286709db28312e08b52b05539debe88146", - 
"07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "40047a74b707743157051d38f76061ba5ff9aab4", - "4988b3d378b79eb8669112620baf1ff4e3e536fd", - "5581bf85386737bd3378eec68189759a05280bea", - "e3a9af420cd2c0c8241856da92374027fefb87be", - "b17cc18e4130505b939f7d527082eb6be2a7fd5b", - "316206a2f89eb94ce02a81fba1dc304586f21b39", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3e4afde5a9de2c1801da99b8aff5ae05923f256b": [ - "4a7530bbaee7563ee244f3ffed6b706bd96f08a8", - "70ece7b4ba8f3b67f5a797daed544fb6a0b627bf", - "da3aca9d7b50da823f669c983edeb60445720fe0", - "0ea7fc93d4947d9024ccaa202987a2070683bc1f", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "e7cfc3362dd85b17c747e9f9636749696f87a88b", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "42780f9c7f73d73d7a887e2f787af0e079703d40": [ - "6052486bc9144dc1730c12bf35323af3792a1fd0", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4a6d7b11c4aba5a23f68856989366dd4311e960b": [ - "5437e8adab596d7294124c0e798708e050e25321", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4b99e8273227fd05f2be20248050d81e97ab4f4e": [ - "ddc9aeac18638575bbb90ede4c6829ec15c2947e", - "1a2e90dff605dad7dbefeed121e6d295c7a77d62" - ], - "4d17732d90440682b0500f4e209c6cc4fac20e0e": [ - "4988b3d378b79eb8669112620baf1ff4e3e536fd", - "b17cc18e4130505b939f7d527082eb6be2a7fd5b", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "50b0c6ee2b3d53ba5af69d6c00b5d60888a9026f", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4edd2d2770729380eda23826af1b78298b334a23": [ - "09b7338021fff3200c2098b19824aecc83a66cb5", - "20cb40199d03395d63615854863f9eda9c7863e2", - "0d0dbfb1b315a43216020abaf74d289456198219", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5076bbbf831a92174c9cc1b347bd0584560435fc": [ - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "62176de125738e3b95850d1227bac81fd646b78e", - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "7715ba5e75f5256e1061c7473afe61bb0dbb9065", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "50b0c6ee2b3d53ba5af69d6c00b5d60888a9026f", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2" - ], - "50e8ab900d2ca4d83da120bbfe5338ee93dbe741": [ - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "511ad6b37cb028bdfbd6096e6d20aa4b8b34fafc": [ - "6d951d939d3f27054215f2606a0cf89ed21550e9", - "0d0dbfb1b315a43216020abaf74d289456198219", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "f3ce9ba3fcec362b70263a7ed63d9404975496a0", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "0adec918885dff698acf359988ed79a543157f80" - ], - "55e3fe05598be7c3dd357d51166869f6571b824f": [ 
- "e3a9af420cd2c0c8241856da92374027fefb87be", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5ba1e498665d2b3536cb436f0cf484dce03459fe": [ - "99752e255a866484291866a5ff5cf94e96d6bdc4", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "a29a0e679e626e8961dc217081eae2a6c63a15ad", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "657e364ec6932558f426583dc31953e547bf6575": [ - "dca6c3927ade6481a1ae080f5c24decbfeced1be", - "b17cc18e4130505b939f7d527082eb6be2a7fd5b", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "67455478e77c8672d0dd08f89735a8813bbfec65": [ - "486a8c8655b81c7f87ff257141466ec1186d4aea", - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "b626560f19f815808a289ef5c24a17c57320da70", - "08b85bce712168998004ee80ce4e475390413c74" - ], - "674c5ec7b144aea1f6b143baeb17cc839f52416e": [], - "69619a2a47faee7a29ec596db13172e2a42ff921": [ - "0942bd8fad71282994ff4e9a779c09745da68edc", - "097dc73d5d422b3c09286e72d16b2561ae5fb395", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "663a41c866d49ce052801fbc88947d39764cad29", - "66242baf48b0f6b828e7547ac39ffaa5e1b2cb3e", - "e070ff286709db28312e08b52b05539debe88146", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "40047a74b707743157051d38f76061ba5ff9aab4", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "521ccc898395a2818fced22b4cf371b0e5121f94", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "7307ee3c819c34b7c93ccbbd330a4c889956b36f": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "748a2700ec11f51560a69ec05c67ca9f97014be7": [ - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "8db1dcae055842f43ccac04182957b20d15bbe6b": [ - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "7715ba5e75f5256e1061c7473afe61bb0dbb9065", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "8efc20988021ce3b4b05dd44b13e27260ee9b99b": [ - "1f86bf1e334200ec0481349255559fbfe7a33caa", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "99070fb6df9e8d11e30f7aaefcc9f0b0c5a73789": [ - "6b5d1e50894b1f28e4798cf20e9ffa88b9ec011a", - "b17cc18e4130505b939f7d527082eb6be2a7fd5b", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "d3640eb3b542eaf36fee2261f037a6bf0d8eac9c", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - 
], - "a01a9c4a114fbf201540268f928ccf77bc3f9357": [ - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "d6c73f758b05f38529c1a96cab7e908a2047dabd", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a86e12654376323b712dd3d39d5ff22283f87a7b": [ - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b626560f19f815808a289ef5c24a17c57320da70": [ - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ba4aa83248a1d08b521392eb971e47d10b7c74e1": [ - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "0d42221038c05cee8443c5b5af838505ee137dc3", - "d7386e8859b22e05ce9c4a972613d4b1e1e44198", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "0392d58335ce674a70f5e58ac8c438de296a0e6a", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "0f733817e82026f7c29909a51cb4df7d2685f0e7" - ], - "c20b18d6b919695a69e416debf8bf1ffeac03992": [ - "8793066d170b6a742c4fcdb478d4f100c1e4bf17", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "e070ff286709db28312e08b52b05539debe88146" - ], - "c218cd1772999517b137bbbc9872c4f67e540b7f": [ - "95c11cc5820ba32c60d5f2671f6567b9914a4978", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "521ccc898395a2818fced22b4cf371b0e5121f94", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d1bd7ae97588eccfbcd31ffce4fc924d12a5de4d": [ - "4161ad2d2495d8af1d62dc5e71882bde642cd1c1", - "197ba7bbfdbb052b0770088815c110774220f397", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ddc9aeac18638575bbb90ede4c6829ec15c2947e": [ - "50b0c6ee2b3d53ba5af69d6c00b5d60888a9026f", - "d53e70d834243d3d8d4b621c0c52dfec26081155", - "4c5f4ddc68be643fb34ea969bf2c105ff7538995", - "6fc4a39bb4697a21286bb1cf503ecf17407aeae2", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "def24fb1e977db69f4b1b866b807f9ab9bad5227": [], - "e61a96cf602ebff6683929aaf916e25614a475bc": [ - "f8a2dca1e8fe56e698984c077f7ff58d8ca867e9", - "fca92fe287c44c9ec79ca1f2762b0bf2e5e8df2b", - "aade40af0d85b0b4fe15c97f6222d5c2e4d6d9b3", - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "0d42221038c05cee8443c5b5af838505ee137dc3", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e", - "b115c1e1e9e51f8ad7d47b745bc04e29a654b84d", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "663a41c866d49ce052801fbc88947d39764cad29", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "e3a9af420cd2c0c8241856da92374027fefb87be", - "142ebbf4760145f591166bde2564ac70c001e927", - "50b0c6ee2b3d53ba5af69d6c00b5d60888a9026f", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ed5ebed7ff668fd7362d531a40b49b3aea33b3a9": [ - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - 
"e7ad08848d5d7c5c47673ffe0da06af443643bda", - "cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f330f502bf1e92fabf7f246597fa9320d956c0c8": [ - "705e49afd92130f2bc1e0d4d0b1f6cb14e88803f", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f669d7a6fab0147253178a6fc854e05e3d92fb3f": [ - "aade40af0d85b0b4fe15c97f6222d5c2e4d6d9b3", - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "fc9bd3642df2a378c11131362b27deecbd02b70a": [ - "0d0dbfb1b315a43216020abaf74d289456198219", - "96ea07447d2f9adefe03852a878517a2a6d45b96" - ], - "fd80f7f3673fc6ca02f192d5d73426f11a4be659": [ - "4d74a5048b884e8bb3842240abf98915c619c8f8", - "4161ad2d2495d8af1d62dc5e71882bde642cd1c1", - "197ba7bbfdbb052b0770088815c110774220f397", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9e93ab728e3e174ec1492009055885a9123d434f": [ - "293499319bdd460cb3fca1f0f5eb330e64bf3ff9", - "08b85bce712168998004ee80ce4e475390413c74", - "3599a236f285af48782fc30b1341d13ec7320735", - "ef3a38b9f15e9dcb5652cb3f86f19b845cdaaef7", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a7d8a6d8c04bd4554da4219be0f9d3bf87e2e56b": [ - "e7ad08848d5d7c5c47673ffe0da06af443643bda" - ], - "08fd45ac85916b95f734cc75af8660cff73c33ca": [ - "385376b8aa48c25403f17d6206db7c09b67e1314", - "26f560e592419891c9de1b25d0e4d4d16014d54e", - "08b85bce712168998004ee80ce4e475390413c74" - ], - "0f71c1e2acf286951544d3bd9eb5d85acfba5af1": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "14d81c84662a1de7b5605a5a68bb0f63d6e293e5": [ - "03532123ccffae8d411264320e8a5ae2b6eddea0", - "e86009d9f9b1cdf083a48d087552bc4153784451", - "398e4061dde8f5c80606869cebfa2031de7b5b74", - "1cf50a2e906dc89463d7eab827de9a3c371e7c53", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "19c63eade265d8a47d160098d97194b3b83d3770": [ - "a86e12654376323b712dd3d39d5ff22283f87a7b", - "27c16cca907aa43397cc226a182b73b396c5cf66", - "1bc9974780230573bfe9f89789115cb4fbf8bfc6", - "d2cfabde7383704e876373e9da9891714b0bd62b", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "341bdbcfc3febef7691a97c216ad394653211095", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1c1b83df13de4334e48a4c2039bc7ddfa374c486": [ - "08b85bce712168998004ee80ce4e475390413c74", - "3599a236f285af48782fc30b1341d13ec7320735" - ], - "1fc21645ccc8e99eb8162e5f91407148b7f77e3d": [ - "0ba581718f294db1d7b3dbc159cc3d3380f74606", - "c03fa01fbb9c77fe3d10609ba5f1dee33a723867", - "41531594d7e0f3b2e138ae43e0a0f6e24a9b014c" - ], - "27d6d02e24de259e3aa38e556a81f89ec505816e": [ - "e66f0f822d4c4853b39b27daaafa2993005fd55e", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2cdff023cd4b185bb452f3c7399580db2d0fdfcd": [ - "0392d58335ce674a70f5e58ac8c438de296a0e6a", - "d3640eb3b542eaf36fee2261f037a6bf0d8eac9c", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2f2a430ba6c93bcfaf4818316ff8a27b1e034b1a": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "30f0abb793772c15f2cdfec97c994685348177c1": [], - "33d944de189d6edf3a510ea195803a381c5a3bab": [ - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - 
"3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e", - "66242baf48b0f6b828e7547ac39ffaa5e1b2cb3e", - "e070ff286709db28312e08b52b05539debe88146", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "b2542a738b75ee9b7ce1a13d8b78f9095d212412", - "142ebbf4760145f591166bde2564ac70c001e927", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "9ffefdf1fcd780cb71450b0a7a29247c66aa87be", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3bd83ff979f3c0e9470f23c360a18333593dc5a1": [ - "b458fc5261595f44b36325e5eaea1f874d65138f", - "8e37dc1215681aa153a51c07078ba8befd6a6e01", - "7919cb1a1dcf70ed7803c43a71d43dba696ef149", - "c7a3f9cc61cfafdc307f8ae24430b6b1121f9b2c", - "ac7771c332da42b29a913b116bd6ef622cbf89cf", - "0d42221038c05cee8443c5b5af838505ee137dc3", - "0ba581718f294db1d7b3dbc159cc3d3380f74606", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "398e4061dde8f5c80606869cebfa2031de7b5b74", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321" - ], - "3dc1b657bf821b731c5ed0396823b67c10d54ba1": [ - "f8a2dca1e8fe56e698984c077f7ff58d8ca867e9", - "3e4afde5a9de2c1801da99b8aff5ae05923f256b", - "e5c72b92c48d68594b290c84a8904da7c8335554", - "398e4061dde8f5c80606869cebfa2031de7b5b74", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b" - ], - "50bdea5132ef4b8cf25b0d9f3ac2ee0d09bf18cb": [ - "9141480721653789597b6e537ee0eeab401f3e60", - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "f2cd02c03d0169374442d9bc227c9aed178f4b20", - "f4cba0db34aa0c389cec267ca1f3ba5255ea2645" - ], - "53e8d327e7ceda6f4efd321752da57edbaee6257": [ - "0ba581718f294db1d7b3dbc159cc3d3380f74606", - "3b239d232ebb0fdb0515f41fd439e54ed4e8f86a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5645502d73c6907f1671923638773152e55bfb00": [ - "08b85bce712168998004ee80ce4e475390413c74", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "e070ff286709db28312e08b52b05539debe88146", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "4988b3d378b79eb8669112620baf1ff4e3e536fd", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "80d0116d77beeded0c23cf48946d9d10d4faee14", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "864cb3a725ae829cbfb675761cd2313897b1b7a8": [], - "8e37dc1215681aa153a51c07078ba8befd6a6e01": [ - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "c03fa01fbb9c77fe3d10609ba5f1dee33a723867", - "41531594d7e0f3b2e138ae43e0a0f6e24a9b014c", - "5437e8adab596d7294124c0e798708e050e25321", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b099104d1a065cbc1432af22e6085b1a44dbc839": [ - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "bcefc74b20649fd41ea05d87a3fa512d2559fc8d": [ - "5882dd04d95c9c88cdec389059fcf44d56cbb789", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "cb6cc7d28d06a0d7c0d3f0d7ee551bbc86dbc3aa": [ - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "49b499598a8864eee55ab264fc16a5bf8d2f87ef", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "da0a170656a336f82fa8cf00289d1cc944d9b630": [ - "3d8e6358968c8bd5e97f21fead73bf4ba0c2a8d7", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e5c72b92c48d68594b290c84a8904da7c8335554": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - 
"c70eb74e09c41e8fcc71dd59e3b4d631f657f7cd", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e86009d9f9b1cdf083a48d087552bc4153784451": [ - "398e4061dde8f5c80606869cebfa2031de7b5b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ec56f49bef8925dc8931cc261ab3aca4dd36ad2d": [ - "ac7771c332da42b29a913b116bd6ef622cbf89cf", - "e5c72b92c48d68594b290c84a8904da7c8335554", - "30c0cdc414f68211d5d0514df027cec22e005174", - "8add69e155596bc128df70e1ddd5a41c68698399", - "53c0abe83fe9b4fdaf2208295d8504fcf5241694", - "30873c32db5a219a58be928d5692cce48be1d3a0", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f53a4f34757d1f237446b4d887d5323f2a17ed02": [ - "c76dd4a70361c3afd2e19d046343e2dedd16ecc3", - "dca6c3927ade6481a1ae080f5c24decbfeced1be", - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "e826ac71dad8c4ce36d82fb7add43e3d306bb7e1", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69" - ], - "f7842099bbde74dc5aec70bb6af85b88de08ed13": [ - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0f733817e82026f7c29909a51cb4df7d2685f0e7": [ - "d3640eb3b542eaf36fee2261f037a6bf0d8eac9c", - "0adec918885dff698acf359988ed79a543157f80", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2d30d800e946d3699d9c41bb95c36a6db63676e7": [], - "a0d83f9e15e722f23c14eb83cb2f87c1d1ea6400": [ - "cb6cc7d28d06a0d7c0d3f0d7ee551bbc86dbc3aa", - "a2c8d1c5470435176185bf891c76711a9b44808a", - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "ee805f55c98920f74d0182aaf136330a97b4123f", - "0392d58335ce674a70f5e58ac8c438de296a0e6a", - "49b499598a8864eee55ab264fc16a5bf8d2f87ef", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "d3640eb3b542eaf36fee2261f037a6bf0d8eac9c", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b8ba16a107621f760e7830ddaab8c3d5c5ff06b0": [ - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "46d64d0c1dd240f5035b1af57e738b3f70850ca2", - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "d53e70d834243d3d8d4b621c0c52dfec26081155", - "cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d3640eb3b542eaf36fee2261f037a6bf0d8eac9c": [ - "0f733817e82026f7c29909a51cb4df7d2685f0e7", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df", - "0adec918885dff698acf359988ed79a543157f80", - "098370508aaf56f718a472511987ac2072d0f917", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e90d30148ecf633db3bbabdcfa3a0ec06236e0d1": [ - "64ce6ef1f5cf227bf2bf917c87273386ae16256f", - "9689acb6cb760e8bc21c16f368368b37dee977f9", - "c879413103f8950bdd414c7f60a39bd7748c9be8", - "197ba7bbfdbb052b0770088815c110774220f397" - ], - "31d8bdef7b81e107bf04f226d877fd5aa2f51d34": [ - "dedfe929d182cc3537a9ed765d589b4735ce062a", - 
"4780d0a027c5c5a8e01d7cf697f6296880ffc945", - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4161ad2d2495d8af1d62dc5e71882bde642cd1c1": [ - "197ba7bbfdbb052b0770088815c110774220f397", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9c39e942b87cbada41a4a52364f996915c7c2d98": [ - "914254fac74a2da051cccf6ca16afcaad416a079", - "0d6bb585493e34975f0437faa3179db3a02f6ae8", - "bd16d3d5f4f237bd76395b17e56cde3f01a41584", - "9ba50f992ccd92f428503ea6246157260a26cd77" - ], - "a8a71f9b10b281e796fdc2ee7aaec40067739574": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c879413103f8950bdd414c7f60a39bd7748c9be8": [ - "515cf674fcdced5a7d5bb156dd5fcc1f5290e79b", - "197ba7bbfdbb052b0770088815c110774220f397", - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "5437e8adab596d7294124c0e798708e050e25321", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "d53e70d834243d3d8d4b621c0c52dfec26081155", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc", - "1403e6b9adf7712c35ae56327d52fe54603b87e1" - ], - "cd7d770eabb4dab6894d9f91d2c3bc337e94a4e1": [ - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "cfc12c38a4d848ff3c4225488a2c72e7d4300f4b", - "cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d0e3af5f20a451c04770929979d7a8406a1a2466": [], - "e7d21ad4da122bf1db19e4fda57bf94c1dfa24a4": [ - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ed40889e11e812ef33578506844be06d713f6092": [ - "aade40af0d85b0b4fe15c97f6222d5c2e4d6d9b3", - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "e070ff286709db28312e08b52b05539debe88146", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "fe425e341cf646689e42adead17f14eeac5d03e6": [ - "7dc928f41e15f65f1267bd87b0fcfcc7e715cb56", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "27f0ce04403158b61328716ae4aaab5840c0d123", - "0942bd8fad71282994ff4e9a779c09745da68edc", - "34ff1da13770908ef0bf389365cdde743d3c9db1", - "d03a9b2a0e090cc9fd2ba0a457ecea35372f1018", - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "663a41c866d49ce052801fbc88947d39764cad29", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "4988b3d378b79eb8669112620baf1ff4e3e536fd", - "dcbf62f17dad0f4554f91c822d141fb92f78429a", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "d53e70d834243d3d8d4b621c0c52dfec26081155", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "0adec918885dff698acf359988ed79a543157f80", - 
"ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0088c9f4d50706c7ab71efa13bcb4b42cf2058e2": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0095acc4f2c3255cf38fdf844003c97858adb418": [ - "2430cd1ac1c480894f2ef9368b8962a3a73e7b57", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "00c367427d9135209d84008e6cb5e90f0adba881": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "03532123ccffae8d411264320e8a5ae2b6eddea0": [ - "711d5e8ddbb840ad31a9ffa3d38590603ba69a92", - "e070ff286709db28312e08b52b05539debe88146", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "fb1f7ffef65341aa5aee2bb0b240d4ef51680fce", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "398e4061dde8f5c80606869cebfa2031de7b5b74", - "e3a9af420cd2c0c8241856da92374027fefb87be", - "b17cc18e4130505b939f7d527082eb6be2a7fd5b", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "c70eb74e09c41e8fcc71dd59e3b4d631f657f7cd", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0366177b44ed13d86b9d704a3a82ea3750e5abed": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "06edda0310b4ec7c5012d012349252a3a77521b6": [ - "316206a2f89eb94ce02a81fba1dc304586f21b39", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "070b91f80ac118b910c1d2ab5be9f65f685979fe": [ - "cc0f0cb09a73f82ed44d900f5ca710bec784acc1", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df" - ], - "0744783bbefc12b2b1383bed137e8a80061274b7": [ - "f208ea909fa7f54fea82def9a92fd81dfc758c39", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "5437e8adab596d7294124c0e798708e050e25321", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "355b66a65aee97822eb7404183ee72b18cb648de", - "0adec918885dff698acf359988ed79a543157f80", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "097dc73d5d422b3c09286e72d16b2561ae5fb395": [ - "4d17732d90440682b0500f4e209c6cc4fac20e0e", - "e070ff286709db28312e08b52b05539debe88146", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "b17cc18e4130505b939f7d527082eb6be2a7fd5b", - "50b0c6ee2b3d53ba5af69d6c00b5d60888a9026f", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6e10343767ab09dde83cf99ea3442907402a9810", - "5437e8adab596d7294124c0e798708e050e25321", - "6a483cd1cbecd66150c9bbcd01606723950281bc", - "9ffefdf1fcd780cb71450b0a7a29247c66aa87be", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - 
"6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "09a85806442373f167e45eaf662a7914df048b10": [ - "3b16a709a5b18e52b0b6741cbc3c0e68a03ecd8e", - "9689acb6cb760e8bc21c16f368368b37dee977f9", - "197ba7bbfdbb052b0770088815c110774220f397", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0a2ac054c533314c0659f3b139388527df0d42f3": [ - "355b66a65aee97822eb7404183ee72b18cb648de", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0a67a5e3f4125445ed84f2db3c92429010aad68a": [ - "e070ff286709db28312e08b52b05539debe88146", - "ddc9aeac18638575bbb90ede4c6829ec15c2947e", - "5437e8adab596d7294124c0e798708e050e25321", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "7a25155364476839b6d1fc0653cd8611327ab9ba", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "1dd344ce28f1e5a078f9d8396b5078388e555d99", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc", - "1403e6b9adf7712c35ae56327d52fe54603b87e1" - ], - "0aa5940fda7c994675d08c41eca2a6909eb6d205": [ - "daf9e24adbba3d1aead91cbac26502d3043db069", - "142ebbf4760145f591166bde2564ac70c001e927" - ], - "0ae12d63f77f40b430f17c791a5191ff5fee5086": [ - "34ff1da13770908ef0bf389365cdde743d3c9db1", - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "40047a74b707743157051d38f76061ba5ff9aab4", - "6e10343767ab09dde83cf99ea3442907402a9810", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0cfdd655100055f234fd23ebecd915504b8e00e3": [ - "b2542a738b75ee9b7ce1a13d8b78f9095d212412", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0d0dbfb1b315a43216020abaf74d289456198219": [ - "3268da152f1149675b5f1cfd03f97026128b9e09", - "732627c703a9dbc78d9384f1be4c791c3a554391", - "96ea07447d2f9adefe03852a878517a2a6d45b96" - ], - "0ea7fc93d4947d9024ccaa202987a2070683bc1f": [ - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "29bd550d0ab53296790ceba31dfe0a06754bcdde", - "d03a9b2a0e090cc9fd2ba0a457ecea35372f1018", - "663a41c866d49ce052801fbc88947d39764cad29", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "341bdbcfc3febef7691a97c216ad394653211095", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90" - ], - "0f45608ddc01b3e192f3490330f4c4b8de074f79": [ - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "6001dce1c8f63350263e013e0e6ff69816f0a9af", - "3ec5f0da304a606c5989de5b00e1246ee64b3e46", - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "dcbf62f17dad0f4554f91c822d141fb92f78429a", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "341bdbcfc3febef7691a97c216ad394653211095", - "d53e70d834243d3d8d4b621c0c52dfec26081155", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - 
"0f4ab3fe492ececbfd38be9682047371e2e9b8c6": [ - "e3a9af420cd2c0c8241856da92374027fefb87be", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "d3640eb3b542eaf36fee2261f037a6bf0d8eac9c" - ], - "102e4c860e39a2bfd7bf3f03b9ad69aac7bf3b5f": [], - "10955e63aa49fab146267949f8ebc9ebe8275183": [ - "9efa81ec4954b0859c47dad8f42edfaf8bced69b", - "6289de84a02f0c27734f295ada565603ac958948", - "b9d75f361b5310c6ddcddfe7858bb0416eb78de4", - "e5c72b92c48d68594b290c84a8904da7c8335554", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "b115c1e1e9e51f8ad7d47b745bc04e29a654b84d", - "e070ff286709db28312e08b52b05539debe88146", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "135ae2ea7a2c966815e85a232469a0a14b4d8d67": [ - "5d49c7401c5f2337c4cc88d243ae39ed659afe64", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "154493f69d7db3d49da0e51df0192c6ad5f1724a": [ - "4988b3d378b79eb8669112620baf1ff4e3e536fd", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "15fcd80193d1c446bc3d37fcc30f5475b9ebd5b0": [ - "8add69e155596bc128df70e1ddd5a41c68698399", - "30873c32db5a219a58be928d5692cce48be1d3a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "16aacf48048ac128a07fe2c0761439e1d7211492": [ - "50b0c6ee2b3d53ba5af69d6c00b5d60888a9026f", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "18143a4c2da37444e06feed04cc9efeb0856352d": [ - "f7212245d3787c66b8dc1e9fa4bc48349cef1155", - "48abfc41a0abf023d2037ebb2f274835e0d322d0", - "0942bd8fad71282994ff4e9a779c09745da68edc", - "d03a9b2a0e090cc9fd2ba0a457ecea35372f1018", - "663a41c866d49ce052801fbc88947d39764cad29", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "4988b3d378b79eb8669112620baf1ff4e3e536fd", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "355b66a65aee97822eb7404183ee72b18cb648de", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "18bd959aaa8a83b5b2192282224d700da7459857": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "19443d48399d4fe89a4b0a96917c50c6fd9c5af1": [ - "5d49c7401c5f2337c4cc88d243ae39ed659afe64" - ], - "197022486b2e2584302bd9b6442e44d15bf3e351": [ - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "663a41c866d49ce052801fbc88947d39764cad29", - "5437e8adab596d7294124c0e798708e050e25321", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1a01c982aa20c1a1ad1ad94866e3197da99a52a2": [ - "097dc73d5d422b3c09286e72d16b2561ae5fb395", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1a55d16c14587edda62dc9c9ff09e0b531dd169c": [ - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1a62bc8ed9732bcdb6893a11f5cf239640883f87": [ - 
"fb1f7ffef65341aa5aee2bb0b240d4ef51680fce", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1abfc211793c683972ded8d3268475e3ee7a88b0": [ - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "46d64d0c1dd240f5035b1af57e738b3f70850ca2", - "dcbf62f17dad0f4554f91c822d141fb92f78429a", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1b9fc8268b392742ea43c2c017a767cf62386139": [ - "186e96fe036927182ec963b63f9dd7f8ff650158", - "f4cba0db34aa0c389cec267ca1f3ba5255ea2645", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "52bea123acbbefac6ebb3b40b6482465845ef014" - ], - "1d75f8de31bf47ec46fa5586056420ec8bc97e86": [ - "515cf674fcdced5a7d5bb156dd5fcc1f5290e79b", - "6e10343767ab09dde83cf99ea3442907402a9810", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "341bdbcfc3febef7691a97c216ad394653211095", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "80d0116d77beeded0c23cf48946d9d10d4faee14", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "0adec918885dff698acf359988ed79a543157f80", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1ddeb500dd88d4b860b32bec1e2a85f8a53910d6": [ - "0100785773b8217c44606ab260e3212f93b0a4fd", - "30c0cdc414f68211d5d0514df027cec22e005174", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "0adec918885dff698acf359988ed79a543157f80", - "1a2e90dff605dad7dbefeed121e6d295c7a77d62", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1fb5a5298747b8c7d60f98640a543f20d42ab053": [ - "515cf674fcdced5a7d5bb156dd5fcc1f5290e79b", - "38e1a9c5599fc7597b7c5ffd37951ba5f528094c", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc", - "1403e6b9adf7712c35ae56327d52fe54603b87e1" - ], - "20177a85f632a34d085bcf645507e461733fcc96": [ - "711d5e8ddbb840ad31a9ffa3d38590603ba69a92", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "316206a2f89eb94ce02a81fba1dc304586f21b39", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "214fbadc57e954e325dc055fee5ac0e224dfde11": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2392b6d3a5cad9e5cf349169eaeee848266adf6a": [ - "663a41c866d49ce052801fbc88947d39764cad29", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a" - ], - "2447d22655803bfacb880f117cc34d2ac5ac7e74": [ - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "24df244bf7a6e8c93c5f183d3f62d39c0f773c68": [ - "f2ba9e7d9624bd94a786ea5e3161a9425a21a475", - "cb6cc7d28d06a0d7c0d3f0d7ee551bbc86dbc3aa", - "2e6b6de08f459e2165b11ed8d2103916966b0fcf", - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "663a41c866d49ce052801fbc88947d39764cad29", - "142ebbf4760145f591166bde2564ac70c001e927", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - 
], - "65d88194a902332b78dd5a7b919fa577bfa7ee9f": [], - "11e3efa08b5db1a8958dfe8119593a4d3f18796a": [ - "8b5f4b383008bfb365cee72e5301ee04a24221f7", - "483757dff12df441c6991dd5e7408d922fe01c3d", - "96ea07447d2f9adefe03852a878517a2a6d45b96" - ], - "159d2980566fa00bc752e180471ee46d7899d66e": [], - "185e79641a8e7b18ac5a73b8c3cb82fdee3a0c6d": [ - "8b5f4b383008bfb365cee72e5301ee04a24221f7", - "483757dff12df441c6991dd5e7408d922fe01c3d", - "0d0dbfb1b315a43216020abaf74d289456198219", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "25425e299101b13ec2872417a14f961f4f8aa18e": [ - "15cbccf71d1cd3f886ae9b0f3cc001d14577d264", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "37d91ebd5ec969e2b81027e05f886febf09d2504": [ - "4237cbebe788a97174f48dc398082739bbffe95b", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a" - ], - "483757dff12df441c6991dd5e7408d922fe01c3d": [ - "0d0dbfb1b315a43216020abaf74d289456198219", - "c10ab4733b43f19547308c15ca231a668181a36c", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "534675abb9d72fc0c08d080d4f73335ceb75902c": [], - "6c925427841ea4a776a578d438f9e47a64c3014e": [], - "8b5f4b383008bfb365cee72e5301ee04a24221f7": [ - "d6c73f758b05f38529c1a96cab7e908a2047dabd", - "f3ce9ba3fcec362b70263a7ed63d9404975496a0", - "dc05240a06326b5b1664f7e8c95c330b08cd0349", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "93565fe6db3948c9c414af1d1edccf4aff5e2e10": [ - "8f84dcbad8cd3b5b4d9229c56bc95f24be859a35", - "41531594d7e0f3b2e138ae43e0a0f6e24a9b014c", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9dbb39eccbcd31b8f6b4ff0a2c96f61a7c34e54b": [ - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "25425e299101b13ec2872417a14f961f4f8aa18e", - "4e5f7cd537a1bbcd090f9887b1b59f39a3715dba", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "5f19ae1135a9500940978104ec15a5b8751bc7d2" - ], - "befcb92f313030632717a74a2afd651a1445a745": [ - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e4abc33cbb84934029af6d50360f7ad3bba3df3c": [], - "fd7082630257b03771c72a926a64b13eb16e00af": [ - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a" - ], - "0213827d882ec34aa9935f2b03a80362af806778": [], - "1c89d8672a3742672850fa46f1e8ec51f3261019": [ - "458147b5f7242c998ec4f33798a59b7c48867329" - ], - "1e25118f99e03ffecf79412b46dda8a2966752c8": [ - "197022486b2e2584302bd9b6442e44d15bf3e351", - "4f53020119eba43891d4566df28466a92229b8fb", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "4e5f7cd537a1bbcd090f9887b1b59f39a3715dba" - ], - "34d24b2d9f116f8f652c112d4ac924afcf11bd0d": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3d71d4097a3dcc1289b709872d7523a035e6986f": [ - "53c0abe83fe9b4fdaf2208295d8504fcf5241694", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "9ba50f992ccd92f428503ea6246157260a26cd77" - ], - "4e33c5756aa18d248cf50fef9382acda1e0f65da": [ - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - 
"5dbc2b2ee6e65e39fa3fc4bd5030be7a4a9f9a76": [], - "5df422fc18974d687febd171adcac35b3012c50a": [ - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "88a3abf671d922ebd61a34007908a5f6b6978bd4": [ - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "99f121a70fa683487bb0da3678a8144f57f65c60": [ - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "7cf4f8cb8b4a373d869e785b79160dda7a49a250", - "5d49c7401c5f2337c4cc88d243ae39ed659afe64" - ], - "a71207f1d036969bf92959ea56cf146d5d8eb297": [ - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0" - ], - "a7f8fd45fbcdd81449cb7a1a6a2b2c18b38f8151": [ - "cff26bda86237d113ed01c812ad8bedd0afbe070", - "a171780f04780f1dca6965e6c451915b8ff5458f", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc", - "b378e54c88d241aa917131beb65c96be3730f40c" - ], - "a7ff4d1a89baa5007b3c9ee46492aaf88dfc257f": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "afa0188e454495c08bfaecf29596f01efb468b9a": [ - "7c1707db9aafd209aa93db3251e7ebd593d55876", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc", - "8fdd34153d1035d09dd4a6efa9cb0c91d23d0045" - ], - "cb5cfc2dd4965262d2ce302362b1f2dbfa4a5419": [ - "914254fac74a2da051cccf6ca16afcaad416a079", - "c2a79e2a65b721d4de5f6d4806323174b9f8f393" - ], - "cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "355b66a65aee97822eb7404183ee72b18cb648de", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e4282cab4a435d5249fc8db49fc1c9268438fedb": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f2ba9e7d9624bd94a786ea5e3161a9425a21a475": [ - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f" - ], - "fb30166c218bef3597b0d9789ad340defc3989ca": [ - "cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e", - "0f733817e82026f7c29909a51cb4df7d2685f0e7", - "a29a0e679e626e8961dc217081eae2a6c63a15ad", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "53c0abe83fe9b4fdaf2208295d8504fcf5241694", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "d3640eb3b542eaf36fee2261f037a6bf0d8eac9c", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "0adec918885dff698acf359988ed79a543157f80", - "1403e6b9adf7712c35ae56327d52fe54603b87e1" - ], - "0386711d1f9c4240ded4de56026ca18e475b507a": [ - "f4cba0db34aa0c389cec267ca1f3ba5255ea2645", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "12bad2032f3efa5a142d7dd25712960a4f9ca5a7": [ - "d7386e8859b22e05ce9c4a972613d4b1e1e44198", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1467ced85b3ae2d695079a1557063a445c43988a": [ - "d235a9085e0543fcbe502fbc269f9a8ee01dcbab", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "171412ef2410fad3f9a09238ad9e272c4e31aed4": [ - 
"a29a0e679e626e8961dc217081eae2a6c63a15ad", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1a2e90dff605dad7dbefeed121e6d295c7a77d62": [ - "0adec918885dff698acf359988ed79a543157f80", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "20cb40199d03395d63615854863f9eda9c7863e2": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "236375f49e3deb8ee7918c1f5e65175e453deb2e": [ - "85e7d63f75c0916bd350a229e040c5fbb1472e7a" - ], - "2c12d24c5ba5ad3bb3994635fcfcb9f8caac31d0": [ - "4c5f4ddc68be643fb34ea969bf2c105ff7538995", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2d7a6a52264e8f875105cfb34c6c901bfd1f3229": [ - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2e403ad2cd02409e1fdc15839da0a3f89886a990": [ - "cc893dfaa8693c5a7cbe37035a0fcc56e669e0ca", - "544ecfd78567dabc8684b79e5cc3fb9c126a494d", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2ee1f98649ff27378fc341cae907eb89aba8fba4": [], - "316206a2f89eb94ce02a81fba1dc304586f21b39": [ - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "35d2276749c2c31290d2ff410a305112e742da71": [ - "544ecfd78567dabc8684b79e5cc3fb9c126a494d", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "40fba1fc70e23abf9a3ea428f186dd44e57723fb": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4683d3d6cb31111cf4499a199c0b036662b3eb32": [ - "0cfdd655100055f234fd23ebecd915504b8e00e3", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4c5f4ddc68be643fb34ea969bf2c105ff7538995": [], - "5d5b6b6c033c36a8b730042392cd29da84b67481": [ - "85e7d63f75c0916bd350a229e040c5fbb1472e7a" - ], - "68ee8a53f0b1ff146194980337dd6d533b17c59b": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6b87c9700b8de4912fe7c361574640b5dc536ca9": [ - "cfc12c38a4d848ff3c4225488a2c72e7d4300f4b", - "bf22ef16a6a912763780aea454198edc3e2bb3c9", - "2122ed4a82bc7d8affc5f7ae5026d174ea34ea52", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6c1a53c05f1b1a024af740df84e530d79400ab86": [], - "6c4d35d67f843e7de6ec00c088e339b2237d222c": [ - "08b85bce712168998004ee80ce4e475390413c74", - "e7ad08848d5d7c5c47673ffe0da06af443643bda" - ], - "6f05be4a0045cee3575fb39e88fc361d96f2cc4f": [ - "01b5412f3d17e90e09226d7c40ad4d4468a1414d" - ], - "743dcf234cffd54c4e096a10a284dd81572b16ea": [ - "23c265ba884b92ecbd9d18641078d964697e4590", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0" - ], - "781f4f7dd871c0eea0ce71692bcbc1283df6b550": [ - "97717368b5f7e6a544f0a1c73a441bdcb4b6a046", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "819f477065088220a6f706cd9ef76dbcb4b4c134": [], - "850b8f31a1bb762544bd35163923784a664b315a": [ - "293499319bdd460cb3fca1f0f5eb330e64bf3ff9", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - 
"6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "8c2dbf98b75a01f7e93b68a9407f00b1728b66af": [ - "ac3cdb50606f7770eef8e4cd951840a4f71287a0" - ], - "8eeb6cf85e6bf305fb761a6e6a22de20f09909de": [], - "94db2ba208a3ab2e469a5a65d6192f4dd04ef0bf": [], - "99bd3e04b6b65abf3f03de69654059c3710d03e8": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9d81ec931b85d6c6cf3453126670cd7a30a689e7": [ - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "9c36c8f398a074801d6098287c4353bcf87a1d6c", - "2afb07359e9c67499e1f373ac6f1520d3ea9c46a", - "8fdd34153d1035d09dd4a6efa9cb0c91d23d0045" - ], - "a2c8d1c5470435176185bf891c76711a9b44808a": [ - "5ba1e498665d2b3536cb436f0cf484dce03459fe", - "0392d58335ce674a70f5e58ac8c438de296a0e6a", - "0adec918885dff698acf359988ed79a543157f80", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "aa207668318fec38d60b79f407fb64982e46fce9": [ - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "96ea07447d2f9adefe03852a878517a2a6d45b96" - ], - "b0f915c8e33afdf3829af71f189ddc34077dcc8e": [ - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b6499bcc10d4a70c3ca8b84995270cfd0d29de4c": [ - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "a2412fdebd53bd25476f834ae2b8aa8cb44cb1e1", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b7d643503f03dd0a23278932daa4fe01076e9ce6": [ - "544ecfd78567dabc8684b79e5cc3fb9c126a494d", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "baf63d7cf115d674a8c8da3a3d789aa84521977a": [ - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "faca71d01e3dd3379f4027176e8cf1f02d31c03c", - "4237cbebe788a97174f48dc398082739bbffe95b", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "bd2c32285e8ad5b6e322391cca5d475de4f84169": [ - "b159dffadb69940e14693e812bdaa32e3957717f", - "0a2ac054c533314c0659f3b139388527df0d42f3", - "1dd344ce28f1e5a078f9d8396b5078388e555d99", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a" - ], - "c1372b08e382030e905d1c8751a7794ee91e9d31": [], - "c2903ea606e409d49994c801bb5aab321f623e5c": [ - "85e7d63f75c0916bd350a229e040c5fbb1472e7a" - ], - "c6808575096a6e4f3cbdc5f893384bc5a01cc6f8": [ - "bf8491bef353df126e2306ad2fe4b898697b906a", - "53661ff6fdbfb8557c5b19895fad151792c62da7" - ], - "c79852e9c9cc6734c9150847deb5449e489354ea": [ - "55a250868627de2d202d06e7cb3f6cbcd3a66f88", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "d235a9085e0543fcbe502fbc269f9a8ee01dcbab", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "cb3379177c6e119dca0d32d41fa0c9b9fce172c8": [ - "c963c505ffc4cc8b33315eb967784d0a466b3910", - "1dd344ce28f1e5a078f9d8396b5078388e555d99", - "1a2e90dff605dad7dbefeed121e6d295c7a77d62", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d3ca116177369bf6fbe27de64506a2f401aca996": [ - 
"fca92fe287c44c9ec79ca1f2762b0bf2e5e8df2b", - "507acddb0b7f36b83fd7c8bff2f121eb506ac8fb", - "5bac7d00035bc1e246a34f9ee3152b290f97bb92", - "8e37dc1215681aa153a51c07078ba8befd6a6e01", - "70ece7b4ba8f3b67f5a797daed544fb6a0b627bf", - "da3aca9d7b50da823f669c983edeb60445720fe0", - "29bd550d0ab53296790ceba31dfe0a06754bcdde", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d40430275383ef8a453eefb693c44cbc686008e0": [ - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3" - ], - "d7386e8859b22e05ce9c4a972613d4b1e1e44198": [ - "3d7d385d9ee75a286e8da27f7d3cf9f12651c899", - "50b0c6ee2b3d53ba5af69d6c00b5d60888a9026f", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "daa34ae46c82e6980ac1daaf2dd9716ef3718f21": [ - "aa207668318fec38d60b79f407fb64982e46fce9", - "96ea07447d2f9adefe03852a878517a2a6d45b96" - ], - "dd568e6838903ad7c381f13c1268c94c5db08b02": [ - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e92f4ff44def2273d9fcb02921b257dcbe3c9626": [], - "e96be7c55d139965b15bc0527d6d528b225f9a61": [ - "3487c12512fa41d3a4d64f00cb842525a8590ad3", - "ca7bd64d372e3bcb3f4633ca4a20291ff57de3c3", - "9dcee248452d84b6bf26911ba6726ae5ce1a46f3" - ], - "f7d57f223154965e6e5584d3a51561aaea7ca13b": [ - "e5d30a65cb267dc185770c40cce732f9770cbd27" - ], - "f84d6d6d58b836a64c4a96b062bfff769d08a595": [ - "f4cba0db34aa0c389cec267ca1f3ba5255ea2645", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "5dbc2b2ee6e65e39fa3fc4bd5030be7a4a9f9a76", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "fdbdcc3a65dfd6f258c533fd12d58bbfcab15bc3": [ - "92c9ec1ef7511f3e3a355bae28a4d1555ebeeb4f", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a" - ], - "fe583403c95c3e9b4148d6276f04bda5ace33660": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ff7f75989d125a3356fdb5ad76f504037cc27d5c": [ - "4e3c65511292a800b17be6653bd057e7a545a0b0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "bfc0e3e651cd4b715272fe68add8a180a112293c": [ - "f3f23f7f9f5369aade19f20bc5d028cce7b9c9aa", - "fc50a6202e2f675604543c1ae4ef22ec74f61ad5", - "705e49afd92130f2bc1e0d4d0b1f6cb14e88803f", - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "5d49c7401c5f2337c4cc88d243ae39ed659afe64" - ], - "6987c95f7054d2653178ac93df52aa3c0b99fcf5": [ - "fc50a6202e2f675604543c1ae4ef22ec74f61ad5", - "c76dd4a70361c3afd2e19d046343e2dedd16ecc3", - "7c1707db9aafd209aa93db3251e7ebd593d55876", - "1bc9974780230573bfe9f89789115cb4fbf8bfc6", - "705e49afd92130f2bc1e0d4d0b1f6cb14e88803f", - "08b85bce712168998004ee80ce4e475390413c74", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0" - ], - "d4177489596748e43aa571f59556097f2cc4c8be": [ - "6987c95f7054d2653178ac93df52aa3c0b99fcf5", - "db4cf9f6a653d5c15973e836c800ea47743251ae", - "fc50a6202e2f675604543c1ae4ef22ec74f61ad5", - "b5e9406a65de7384af041c357ca5481489345b73", - "705e49afd92130f2bc1e0d4d0b1f6cb14e88803f", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "aa29ef31f9e907d186240d6bee665ca6e8d82761": [ - "c589ddc6c6fb07189af7c1212f6eb15c5ff72cde", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "dd22e2eb52b835f2c0abe8895b5a355c0eefda94": [ - "a38e0f993e4805ba8a9beae4c275c91ffcec01df" - ], - "0d4ebbd22abcd0549d437dd64c2f7d3f7e57dee9": [ - 
"47c3b8dd2c8a9326249ac98900b2c3fc71f46ab1", - "6d951d939d3f27054215f2606a0cf89ed21550e9", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df" - ], - "2d2b05f0969568ac3fd3c2cca5df04c4136c5416": [ - "df2beaae63e4d68ef8e762bcd4704c9f11f856d9", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9ffefdf1fcd780cb71450b0a7a29247c66aa87be": [ - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "341bdbcfc3febef7691a97c216ad394653211095", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "04fa3ebc4c0c0b4a2d2d1a3fc612134a05057696": [ - "3e30a7ac4886b28eb50151f58e14a1d698cccd0e", - "0894585294c67193ff3190240554677b56fd79a0", - "db4cf9f6a653d5c15973e836c800ea47743251ae", - "705e49afd92130f2bc1e0d4d0b1f6cb14e88803f", - "d03a9b2a0e090cc9fd2ba0a457ecea35372f1018", - "b21670e8061a06ab97e7d6052c9345a326e84ff8", - "0adec918885dff698acf359988ed79a543157f80", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e25b4bdfc3f5a8293ea6cd687a0203e446594188": [ - "1104d766527dead44a40532e8a89444d9cef5c65", - "ace98e1e58bcc364afbb2feff6d136232f5f47da", - "f330f502bf1e92fabf7f246597fa9320d956c0c8", - "db4cf9f6a653d5c15973e836c800ea47743251ae", - "705e49afd92130f2bc1e0d4d0b1f6cb14e88803f", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "705e49afd92130f2bc1e0d4d0b1f6cb14e88803f": [ - "ac7771c332da42b29a913b116bd6ef622cbf89cf", - "2da3a84e72a2973504cd9cf1c0a377f5a5a91f09", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "37665dd5ae7245f087d663785c17eef068578676": [ - "c4561fd08636b5f5f6b9f3f6d89f3cee39e678b0", - "db4cf9f6a653d5c15973e836c800ea47743251ae", - "705e49afd92130f2bc1e0d4d0b1f6cb14e88803f", - "e7ad08848d5d7c5c47673ffe0da06af443643bda" - ], - "34fd5aeae91078cd7b23dc145ee04e6ae0005d29": [], - "0ce1e3ba7cb96dab6db15ee7b4add29cab5d0efb": [ - "3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e", - "8fdd34153d1035d09dd4a6efa9cb0c91d23d0045" - ], - "3b0c49ca5ac0f441c302c9ca4def4804253552d5": [ - "8f936af93fb2b52b9678ff8f17c1ebe8de236a88", - "c76dd4a70361c3afd2e19d046343e2dedd16ecc3", - "9af3b6b3f8dcedbb02b88936c428e1cd02503a8a", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0887b3324d64c4c9ee35593a2260002089658572": [ - "5df422fc18974d687febd171adcac35b3012c50a", - "ff96527c03fbea7c3bb7d44d1d656d875ddba75e", - "4d81c33b295c092016ac236cfd32020a5bb70b97", - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1eb1a8c7f88de27af224153f43ecdd41774600f2": [ - "2392b6d3a5cad9e5cf349169eaeee848266adf6a", - "8d17234680db76f99efd22fbcb169f45d2d79d93", - "f8a2dca1e8fe56e698984c077f7ff58d8ca867e9", - "c7a3f9cc61cfafdc307f8ae24430b6b1121f9b2c", - 
"c76dd4a70361c3afd2e19d046343e2dedd16ecc3", - "2da3a84e72a2973504cd9cf1c0a377f5a5a91f09", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "37f0f1f55f44bff84aac27a346dd47d0c6c136e3", - "663a41c866d49ce052801fbc88947d39764cad29", - "370cea8b4220917f45a69358c0303df71f5063c7", - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e" - ], - "e327ef8d46ea0413316c80ee1404453834d84f05": [ - "663a41c866d49ce052801fbc88947d39764cad29", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1944edf86dd46dbc92e88d296b270e8bf5fa3a87": [ - "09b7338021fff3200c2098b19824aecc83a66cb5", - "96ea07447d2f9adefe03852a878517a2a6d45b96" - ], - "7283d616e40d7ab7422e3697218f3fc42f292bf2": [ - "f27f6d1d521d189e78f5623098ced0deea613d33", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "711d5e8ddbb840ad31a9ffa3d38590603ba69a92", - "e070ff286709db28312e08b52b05539debe88146", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7" - ], - "7d083d654f66f763302d8a5f0678beb753f6507b": [ - "4b0b56be0ae9479d2bd5c2f0943db1906343c10f", - "e7c85d7d58d4b1fde4be8a8f166e46c995dc0f1b", - "aade40af0d85b0b4fe15c97f6222d5c2e4d6d9b3", - "71d68782c3da41b77866c2fd0cb65726f60b3af1", - "6af986a2cab884fbd30ad6da2928dc19c12d83a7", - "994a6040fab375669a92cab0e67fb2fd203cd67f", - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "bda605928d6ebe4db906e69ab5d343df75918727", - "1358f90705b05cdb20ebe6799b02196205e7e9f0", - "256ef1f8d0ea2982cc50d3e85e5f1b4920f037fe", - "4d17732d90440682b0500f4e209c6cc4fac20e0e", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "e826ac71dad8c4ce36d82fb7add43e3d306bb7e1", - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "c70eb74e09c41e8fcc71dd59e3b4d631f657f7cd", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "0adec918885dff698acf359988ed79a543157f80", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e1d66b33f3ce596c7f34a4cb649709a446a32d6d": [ - "142ebbf4760145f591166bde2564ac70c001e927", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d651380f8c99f2522ead2d86d60cb4af4413abfa": [ - "e4282cab4a435d5249fc8db49fc1c9268438fedb", - "e2ffb7b4215cbe7b5d06be4a37aacedc8762fd50", - "763eb8d43e2f8a5d9da26269a4985efd1c099a5b", - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "bda605928d6ebe4db906e69ab5d343df75918727", - "08b85bce712168998004ee80ce4e475390413c74", - "d7386e8859b22e05ce9c4a972613d4b1e1e44198", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "59e0e0c1aa06d51430792eb5d8308911a1b0110f": [ - "50bdea5132ef4b8cf25b0d9f3ac2ee0d09bf18cb", - "375a571174ea59b1f4aa62ad2619e9593fc03436", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "eedb8159755c317ad2f9f07dfb667c62d72a3181": [ - "0392d58335ce674a70f5e58ac8c438de296a0e6a", - 
"341bdbcfc3febef7691a97c216ad394653211095", - "d3640eb3b542eaf36fee2261f037a6bf0d8eac9c", - "0968f1592f9401d72bf0d97e740496818c1a3135", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "06826c7023303af28b5f362f4286ea4d14c2531a": [], - "aa2fa431ce1d5a8d56d138e3330d3df381d36e3a": [ - "08b85bce712168998004ee80ce4e475390413c74", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f35f55d2e8a478884ba5c5e1e5f7fc69014a8f9f": [], - "e5d30a65cb267dc185770c40cce732f9770cbd27": [ - "0270ec4bc946b59c5cf6204be2553682dee0346c", - "b66c1c7617b42f3814d516faf7d2ca3b771a0c9e", - "0968f1592f9401d72bf0d97e740496818c1a3135", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0" - ], - "dd36ff1bbd79eb9990e3429f5050d76fa10d6a2a": [ - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f7f7a0b473d3a345db11330dd6b9e53c4041c90e": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ec27f85979899a4193a8ec3b932ddb677c59be62": [ - "7a25155364476839b6d1fc0653cd8611327ab9ba", - "0adec918885dff698acf359988ed79a543157f80", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b559f253e245d4306b1ee3e9ec972d9f43a8ecd6": [ - "0968f1592f9401d72bf0d97e740496818c1a3135", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "07cd498aacfb4d39fa2e0e8d8a9c8ad881257300" - ], - "bb5757418d3962668a836a879a289e4a338e9713": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ea982b0cdd588340c1b6e2abc2dae51bf5463b21": [ - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "711d5e8ddbb840ad31a9ffa3d38590603ba69a92", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "afe4ee7ba225d8965cd96056b3e1e76a31451a75": [ - "4d81c33b295c092016ac236cfd32020a5bb70b97", - "0392d58335ce674a70f5e58ac8c438de296a0e6a", - "e5d30a65cb267dc185770c40cce732f9770cbd27", - "0f733817e82026f7c29909a51cb4df7d2685f0e7", - "d3640eb3b542eaf36fee2261f037a6bf0d8eac9c", - "0968f1592f9401d72bf0d97e740496818c1a3135", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc", - "07cd498aacfb4d39fa2e0e8d8a9c8ad881257300" - ], - "50b59143bf3469f082b2308fa394bb6d55091a41": [ - "c03fa01fbb9c77fe3d10609ba5f1dee33a723867", - "41531594d7e0f3b2e138ae43e0a0f6e24a9b014c", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "8a1792a03c4b0ced9650940ca6e9ea495922ace2": [ - "aade40af0d85b0b4fe15c97f6222d5c2e4d6d9b3", - "831fd0c18d10e42330cca36e0c5769762fb419e7", - "4780d0a027c5c5a8e01d7cf697f6296880ffc945", - "2e6b6de08f459e2165b11ed8d2103916966b0fcf", - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "b626560f19f815808a289ef5c24a17c57320da70", - "7715ba5e75f5256e1061c7473afe61bb0dbb9065", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "50b0c6ee2b3d53ba5af69d6c00b5d60888a9026f", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "d53e70d834243d3d8d4b621c0c52dfec26081155", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "31b092d23154d3467d4a92065d77e8b441dfa440": [ - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "657945a83b062679887c49334f9b47687f6e1d64", - "0d42221038c05cee8443c5b5af838505ee137dc3", - "1358f90705b05cdb20ebe6799b02196205e7e9f0", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "e070ff286709db28312e08b52b05539debe88146", - 
"90350aa626bed47b02d0c162462e5b0ca82be6b2", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "40047a74b707743157051d38f76061ba5ff9aab4", - "b17cc18e4130505b939f7d527082eb6be2a7fd5b", - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "d235a9085e0543fcbe502fbc269f9a8ee01dcbab", - "d3640eb3b542eaf36fee2261f037a6bf0d8eac9c", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d2cfabde7383704e876373e9da9891714b0bd62b": [], - "b67184a99e78cc53c2bcb50a73ac8c3c873f29f4": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6345f3ac2dd5ebe6592be9f9f8e249a74c2e9efe": [], - "e0ea1433901cab691c3bd8dca24439101da321e5": [ - "0968f1592f9401d72bf0d97e740496818c1a3135", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a" - ], - "d5816ec83fdf5b9f4ccd4ad71914ff8e3b4579bf": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a55d0bc16c859b2417158dfac41fb9756a77b3ec": [ - "3fc3460c4554a28e489a0ea6ef067b79b7d301d9", - "c03fa01fbb9c77fe3d10609ba5f1dee33a723867", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0" - ], - "64bed4a665d3f51de76ea55388ff0a04e6f2db11": [ - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e21899907bc9f861b5997f67965a6b136285aad6": [ - "7c1707db9aafd209aa93db3251e7ebd593d55876", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "53661ff6fdbfb8557c5b19895fad151792c62da7", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "96ea07447d2f9adefe03852a878517a2a6d45b96": [ - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6fc4a39bb4697a21286bb1cf503ecf17407aeae2": [ - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f7987fa2aadc0b368c185dc4d2fdb1337a202c32": [ - "ac3cdb50606f7770eef8e4cd951840a4f71287a0" - ], - "d2a1008594eb1ab856def3b143c2df407cc87e0e": [ - "c10ab4733b43f19547308c15ca231a668181a36c", - "769f8b6d73fcb5b5013ae2c1c48b94c801c88ba3", - "1403e6b9adf7712c35ae56327d52fe54603b87e1" - ], - "ad14599aa96cb5c3524c8bf31b4bca06eb7cb78e": [ - "0968f1592f9401d72bf0d97e740496818c1a3135" - ], - "b66c1c7617b42f3814d516faf7d2ca3b771a0c9e": [ - "0270ec4bc946b59c5cf6204be2553682dee0346c", - "e5d30a65cb267dc185770c40cce732f9770cbd27", - "0968f1592f9401d72bf0d97e740496818c1a3135", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b17cc18e4130505b939f7d527082eb6be2a7fd5b": [ - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "50b0c6ee2b3d53ba5af69d6c00b5d60888a9026f", - "341bdbcfc3febef7691a97c216ad394653211095", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "eda36cdf6dbe28624bfad6482ca8e1575ab76d99": [ - "ddc9aeac18638575bbb90ede4c6829ec15c2947e", - 
"d53e70d834243d3d8d4b621c0c52dfec26081155", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f66aeec98816c3a52685e570a04fa8f2bd53dfb4": [ - "96ea07447d2f9adefe03852a878517a2a6d45b96" - ], - "af444ae0ba12c6506a02118cf4036a9262c4aa3c": [ - "50b0c6ee2b3d53ba5af69d6c00b5d60888a9026f", - "9ffefdf1fcd780cb71450b0a7a29247c66aa87be", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f7212245d3787c66b8dc1e9fa4bc48349cef1155": [ - "48abfc41a0abf023d2037ebb2f274835e0d322d0", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "80d0116d77beeded0c23cf48946d9d10d4faee14", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f7c21f11dca84d443304e8909c9b87eebda0017c": [ - "d6c73f758b05f38529c1a96cab7e908a2047dabd", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5861df95084cf739a6ca3185d6523dd702bd1f10": [ - "5437e8adab596d7294124c0e798708e050e25321", - "5d49c7401c5f2337c4cc88d243ae39ed659afe64", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d6d6b17601074b124b9cdad3e119e38467545f1a": [], - "b458fc5261595f44b36325e5eaea1f874d65138f": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "c70eb74e09c41e8fcc71dd59e3b4d631f657f7cd", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "48385ded07af641da331c05f6ea3f93694a08425": [ - "b17cc18e4130505b939f7d527082eb6be2a7fd5b", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "1a2e90dff605dad7dbefeed121e6d295c7a77d62", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e85ffdf9a797dc1584838ad00330f98ebfe8e50c": [ - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722" - ], - "a91e4ebfd7e2bc66d9c0c0197bc218f68bc42052": [ - "08b85bce712168998004ee80ce4e475390413c74", - "0968f1592f9401d72bf0d97e740496818c1a3135", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b4043ebf549954522c80c47e35421f2b14ce03d7": [ - "b559f253e245d4306b1ee3e9ec972d9f43a8ecd6", - "0270ec4bc946b59c5cf6204be2553682dee0346c", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "b66c1c7617b42f3814d516faf7d2ca3b771a0c9e", - "e5d30a65cb267dc185770c40cce732f9770cbd27", - "0968f1592f9401d72bf0d97e740496818c1a3135", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "aafcd6a4dbf775333b93d3d5b3b21cd6e84ea7b6": [ - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "d53e70d834243d3d8d4b621c0c52dfec26081155", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a2a53a3d328b8b72672626a4c4d50f721418e6d7": [ - "08b85bce712168998004ee80ce4e475390413c74", - "c879413103f8950bdd414c7f60a39bd7748c9be8", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "0968f1592f9401d72bf0d97e740496818c1a3135", - "098370508aaf56f718a472511987ac2072d0f917", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "eae18cf748729ab740dbb8c11a28ea377dc41db9": [ - "0ba581718f294db1d7b3dbc159cc3d3380f74606" - ], - "ba622a5136681f25ceead6815c932fbcfbde0429": [ - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "bfeb828af35c552409fcc8854418321f9afac17a": [ - "cd77ea482d9245f3fcaeb670261a00c3fb5cabbd", - "9689acb6cb760e8bc21c16f368368b37dee977f9" - ], - "b4115b608b934627bcdfb0750f0e5305ea7351f9": [ - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - 
"0ba581718f294db1d7b3dbc159cc3d3380f74606", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d2c67422e8cb9a9ebbc9b5b5a08e6ccf7ee37597": [ - "ba4aa83248a1d08b521392eb971e47d10b7c74e1", - "c589ddc6c6fb07189af7c1212f6eb15c5ff72cde", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "842f79c5acab440f8d7a592201738a3e854a5186": [], - "c39ae3e539eb18f4e5fb06c3f1d71f484a48409e": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d6eedd8f08e929f7d11b6e20c31730d11f8f4297": [ - "a2a53a3d328b8b72672626a4c4d50f721418e6d7", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b9e0806dee01f91ca42803d1c5bbe729c716094b": [ - "ad454e24bd32408559512b4bac4cd5237794210f", - "39bca01efce8765f0a5d3a8981bc30d56f196b96", - "aad167be3c902388ea625da4117fcae4325b8b7d", - "0783c214623c18f6a8ad96b8eaf4a67a382e68ee", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d71a5eb690e1bc120844cbe062be4dd98d166f80": [ - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "cf9318ab2b86926d5435d73b715e71e8ffc5f769": [ - "6052486bc9144dc1730c12bf35323af3792a1fd0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9c39115c5e767f49523b77feadb88cffa7ae5840": [ - "0ba581718f294db1d7b3dbc159cc3d3380f74606", - "c03fa01fbb9c77fe3d10609ba5f1dee33a723867" - ], - "ac2cce2c9b8208e9a37933b38d8ffbb4c6ec8bf6": [ - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "08b85bce712168998004ee80ce4e475390413c74", - "5f19ae1135a9500940978104ec15a5b8751bc7d2" - ], - "5cadd5902f0335767e4cd95abb91fbf73d2d431c": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "729fc01274cc26798654a318d1a95e73c61f99a3": [ - "da061a6e0016d6b625a8e86d64a797ca8ddb92a5", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "2d30d800e946d3699d9c41bb95c36a6db63676e7", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "65896b638de95d436f1e75fa4967b9d2763edd67": [ - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a7e23394b15501f903c0847516e06754710eff9f": [], - "675e079cc3c11f9234f8f70bab9f763911b97955": [ - "de11dd9386518012fec7d6f564755b6e6cdbd241", - "3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e", - "03fb95e6be583ca954c3d00812a9e9a40f118e51", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "9ffefdf1fcd780cb71450b0a7a29247c66aa87be", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "751855a18fceb0c613d7c0366824560d2077eb14": [ - "4683d3d6cb31111cf4499a199c0b036662b3eb32", - "3487c12512fa41d3a4d64f00cb842525a8590ad3", - "e86009d9f9b1cdf083a48d087552bc4153784451", - "b2542a738b75ee9b7ce1a13d8b78f9095d212412", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69" - ], - "813370072963a32c6a3a371fd5000bdbf777eca3": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "68c6b65b127158df2e74f36757117613b9ae9146": [ - "9ba50f992ccd92f428503ea6246157260a26cd77", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1702e7a52a5367e5b5f267ff77e3e67b17d09c3f": [ - "08b85bce712168998004ee80ce4e475390413c74", - "0968f1592f9401d72bf0d97e740496818c1a3135", - "0adec918885dff698acf359988ed79a543157f80", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - 
], - "dc05240a06326b5b1664f7e8c95c330b08cd0349": [], - "d6c73f758b05f38529c1a96cab7e908a2047dabd": [ - "96ea07447d2f9adefe03852a878517a2a6d45b96" - ], - "a9ab78ff9424794cd4de4bd1ff5a87e721a79ac4": [ - "96ea07447d2f9adefe03852a878517a2a6d45b96" - ], - "e6296cf7c2c7b4578f1ae644edae4ceee5a5faea": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d422d79726461806c74ec4bb9a89e3408d6c4a75": [ - "dc05240a06326b5b1664f7e8c95c330b08cd0349", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ef3a38b9f15e9dcb5652cb3f86f19b845cdaaef7": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "7ec9e9ec1c26f7977f54dd7830d970101e3a683e": [ - "4d81c33b295c092016ac236cfd32020a5bb70b97", - "e5d30a65cb267dc185770c40cce732f9770cbd27", - "0968f1592f9401d72bf0d97e740496818c1a3135" - ], - "edfb5696b5431bd20eb57964c083e9118e153e97": [ - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "44e8b1aba0b1d1366b74993593d39bafe8c1b4ac": [ - "08b85bce712168998004ee80ce4e475390413c74" - ], - "5d879530c443dd06d3686f31d32cfe34c7ade9bc": [ - "bf8491bef353df126e2306ad2fe4b898697b906a", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "be2b0396de9431bae931642516a1d3e4906329f5": [ - "ac7771c332da42b29a913b116bd6ef622cbf89cf", - "08b85bce712168998004ee80ce4e475390413c74", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "0f733817e82026f7c29909a51cb4df7d2685f0e7", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "bb58f2f63888456a3e04a56a18996ab8dacdb257": [ - "53661ff6fdbfb8557c5b19895fad151792c62da7", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "dae6d5439f5a7664377c9c10de66321372ef22de": [ - "458147b5f7242c998ec4f33798a59b7c48867329", - "e1227daa4877599e13de41a5207a222e1b197456", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "de11dd9386518012fec7d6f564755b6e6cdbd241": [], - "496dab67b98785b46867173f0d777eaa9a32ca9c": [], - "45f1646c3d2e23af7ad11a3f60b85a95fc279c03": [], - "b81bb773019607a1c2fb0e956fb8ce0f49c1aaa2": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f43b8a87a96f8abc2467b90538b643a6061416e9": [ - "c6808575096a6e4f3cbdc5f893384bc5a01cc6f8", - "470754e17de89081f63dde4719922fe9b63251d5", - "34d24b2d9f116f8f652c112d4ac924afcf11bd0d", - "4e3c65511292a800b17be6653bd057e7a545a0b0", - "c879413103f8950bdd414c7f60a39bd7748c9be8", - "55e3fe05598be7c3dd357d51166869f6571b824f" - ], - "219e33388cacbcbf4b063ebf60c0bee48936fe37": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f6df21a3cc9941eae7c032bd44ebe4243c477d4b": [ - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "aa0b5821458e34d35d96ba0878b595950ec8bb8e": [ - "293499319bdd460cb3fca1f0f5eb330e64bf3ff9", - "4161ad2d2495d8af1d62dc5e71882bde642cd1c1", - "08b85bce712168998004ee80ce4e475390413c74", - "e6210c6f37bda0a9add8f01acc98ebd2e370814a", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b71ae06365dc81bc4b4e513a9a2a67fdc204a7e0": [ - "877e27a1d89095fcf686ab675f62a8432d3285ee" - ], - "208b93a39802466785169494caa7f2a8995ea39f": [ - 
"221a72a3631ebf8b555c27bc864338390611feb1", - "49b499598a8864eee55ab264fc16a5bf8d2f87ef" - ], - "49f302e9a76eb39c88fcd861bdd0d954bd3d76b0": [ - "f8a2dca1e8fe56e698984c077f7ff58d8ca867e9", - "bb58f2f63888456a3e04a56a18996ab8dacdb257", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "3841234dd49250c4fcbba79eed6593d3b57932c1", - "e070ff286709db28312e08b52b05539debe88146", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e" - ], - "bcdc44ef48ffadbdaa3bd5cacfe9ddb9b9f48750": [ - "0392d58335ce674a70f5e58ac8c438de296a0e6a", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7" - ], - "f79ad114fa68b58c59b16339c88b06d7b285baa5": [], - "439f1aacbcb32cba6da8f75bd9b15f7ea35e9d4d": [ - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "341bdbcfc3febef7691a97c216ad394653211095", - "23c265ba884b92ecbd9d18641078d964697e4590", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "80d0116d77beeded0c23cf48946d9d10d4faee14", - "c2a79e2a65b721d4de5f6d4806323174b9f8f393", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3031eedb7cf3ef20f1911c2902f3a8e5aeeb2c3f": [], - "d3e4553f0a1fd465ae358701f1bdc2e8265308d6": [ - "1dd344ce28f1e5a078f9d8396b5078388e555d99" - ], - "bf8491bef353df126e2306ad2fe4b898697b906a": [ - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "30873c32db5a219a58be928d5692cce48be1d3a0" - ], - "eeab466d688b94133950788eaf8d1bf33bf037cb": [ - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "96ea07447d2f9adefe03852a878517a2a6d45b96" - ], - "b33577b6624da43765d215d9954531e3c7e48e52": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b96a4bd6c65c32f881401cfe191b16a8d73c91ea": [], - "9fa3596d561f2d9de632030253c8d05c787d9e53": [ - "4161ad2d2495d8af1d62dc5e71882bde642cd1c1", - "08b85bce712168998004ee80ce4e475390413c74", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "99070fb6df9e8d11e30f7aaefcc9f0b0c5a73789", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "0968f1592f9401d72bf0d97e740496818c1a3135", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ccac57515a8fedc0631de58879f886e827e725ad": [ - "0f733817e82026f7c29909a51cb4df7d2685f0e7", - "d3640eb3b542eaf36fee2261f037a6bf0d8eac9c", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "8a2c3f2cd5083d888d2a7d44759b23341a3984b7": [ - "e194f641c554cb00cdd4bd6993f14c2dff8c3c03", - "e1227daa4877599e13de41a5207a222e1b197456", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4237cbebe788a97174f48dc398082739bbffe95b": [ - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "605c32428861eb26b8631617b8f6c97a850d6a04": [ - "e1227daa4877599e13de41a5207a222e1b197456", - "faca71d01e3dd3379f4027176e8cf1f02d31c03c", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "2ee03e28208a9310a9be4032c2b04ebdddb83cc7", - "4237cbebe788a97174f48dc398082739bbffe95b", - "355b66a65aee97822eb7404183ee72b18cb648de", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "94f09dd93d278e80e342168eee9973830af6fd5c": [ - "92c9ec1ef7511f3e3a355bae28a4d1555ebeeb4f", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - 
"6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "355b66a65aee97822eb7404183ee72b18cb648de": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "7f3bc301ae0e2bbb78a0d42f074865e87d908f9a": [ - "544ecfd78567dabc8684b79e5cc3fb9c126a494d", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "23c265ba884b92ecbd9d18641078d964697e4590", - "c2a79e2a65b721d4de5f6d4806323174b9f8f393", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "20da8033ed8b696e2e27ec40b1aa8a0ab82b964c": [ - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c78bb96ee119950b4f6a9dc0155199826b0bc8c8": [ - "20da8033ed8b696e2e27ec40b1aa8a0ab82b964c", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6071b89c257d36490fd6f0877174adabad265d92": [ - "1a2e90dff605dad7dbefeed121e6d295c7a77d62", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "eaa20a9090861a778d754a6d50dbb0396ae09e80" - ], - "fd5a9dfe39e1918b631e0519e272d2643d8e6bca": [ - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6b5d1e50894b1f28e4798cf20e9ffa88b9ec011a": [ - "d3640eb3b542eaf36fee2261f037a6bf0d8eac9c", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df", - "0adec918885dff698acf359988ed79a543157f80", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "eceb0085f2c529838e7c2ce15bc300a4a0ca5e67": [ - "2ee03e28208a9310a9be4032c2b04ebdddb83cc7", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ca3d845ee518365ca73a3f7a7143c2a1875591a7": [ - "2ee03e28208a9310a9be4032c2b04ebdddb83cc7", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9822153f31934faa216f8f0af17b51929f2eb93d": [ - "0f30612423381eb5d271c4ca4f4254149b0d22fa", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "30873c32db5a219a58be928d5692cce48be1d3a0": [ - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "c78bb96ee119950b4f6a9dc0155199826b0bc8c8", - "20da8033ed8b696e2e27ec40b1aa8a0ab82b964c", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "355b66a65aee97822eb7404183ee72b18cb648de", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "166b64f2ae8e52f5779682fab756cbd617a6e74b": [], - "098370508aaf56f718a472511987ac2072d0f917": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b58d8579ece27a60432e667bfbdb750590fa65d9": [ - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1cf50a2e906dc89463d7eab827de9a3c371e7c53": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "01b5412f3d17e90e09226d7c40ad4d4468a1414d": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b645e706651391eca1f692e7f560051c21b3dea4": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - 
"53c0abe83fe9b4fdaf2208295d8504fcf5241694", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "30873c32db5a219a58be928d5692cce48be1d3a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9c36c8f398a074801d6098287c4353bcf87a1d6c": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "521ccc898395a2818fced22b4cf371b0e5121f94", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "37f0f1f55f44bff84aac27a346dd47d0c6c136e3": [ - "cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1e983fccd65cbd39712fa360b92235e5497d81b6": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e6210c6f37bda0a9add8f01acc98ebd2e370814a": [ - "a29a0e679e626e8961dc217081eae2a6c63a15ad", - "1dd344ce28f1e5a078f9d8396b5078388e555d99", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a" - ], - "2f1555f7f601c3826165fa5f9db1dd5c717d1c60": [ - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "e194f641c554cb00cdd4bd6993f14c2dff8c3c03", - "55a250868627de2d202d06e7cb3f6cbcd3a66f88", - "b0f915c8e33afdf3829af71f189ddc34077dcc8e", - "544ecfd78567dabc8684b79e5cc3fb9c126a494d", - "605c32428861eb26b8631617b8f6c97a850d6a04", - "1dd344ce28f1e5a078f9d8396b5078388e555d99", - "e1227daa4877599e13de41a5207a222e1b197456", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a" - ], - "5393f75c0e2432f1b64fad221a0cb8c49fa155cb": [ - "a671866bad8b298fbbfb00be004df0b6b7daff26", - "aa2fa431ce1d5a8d56d138e3330d3df381d36e3a", - "08b85bce712168998004ee80ce4e475390413c74", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1403e6b9adf7712c35ae56327d52fe54603b87e1": [ - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "769f8b6d73fcb5b5013ae2c1c48b94c801c88ba3", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3": [ - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "5d49c7401c5f2337c4cc88d243ae39ed659afe64", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "7ffb212356df9980347b3d3b9910dfba75a5d0c7": [ - "a38e0f993e4805ba8a9beae4c275c91ffcec01df", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "769f8b6d73fcb5b5013ae2c1c48b94c801c88ba3": [ - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6328c11081558487a4029388e9a5ebb45194b1b7": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "85e7d63f75c0916bd350a229e040c5fbb1472e7a": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2122ed4a82bc7d8affc5f7ae5026d174ea34ea52": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "775514b8f5a320b8772f93a3168701ad0c9eeebb": [ - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - 
"6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d42a82a3c62f2adbbe130e869c541f6c3b9e19f4": [ - "85e7d63f75c0916bd350a229e040c5fbb1472e7a" - ], - "6584892e7d0f0055ee9adabf03f36f2fa74319e5": [ - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "732627c703a9dbc78d9384f1be4c791c3a554391", - "cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e1334c3b6a4d128053a0ce9381576a15c5aa09c7": [ - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a4c0144062d8e36485bad438968894cbf49ab998": [ - "7cf4f8cb8b4a373d869e785b79160dda7a49a250", - "2afb07359e9c67499e1f373ac6f1520d3ea9c46a", - "605c32428861eb26b8631617b8f6c97a850d6a04", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a" - ], - "53c0abe83fe9b4fdaf2208295d8504fcf5241694": [ - "a38e0f993e4805ba8a9beae4c275c91ffcec01df", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "96bb0beb0599c361dd6e560bd9581c2635dad285": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f1d791b9dd32577609ddd48e6001e46f1780062c": [], - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69": [ - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "eea7bca03bda3ee2448cd012bbcb2b33822861d8": [ - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1bd44ce763d246b44426a146fd5239b34b852c3d": [ - "0adec918885dff698acf359988ed79a543157f80", - "098370508aaf56f718a472511987ac2072d0f917", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2ee03e28208a9310a9be4032c2b04ebdddb83cc7": [ - "e1227daa4877599e13de41a5207a222e1b197456", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2afde58474acb35f1091614f189d731e4d47861f": [ - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9ba50f992ccd92f428503ea6246157260a26cd77": [ - "341bdbcfc3febef7691a97c216ad394653211095", - "cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "2ee03e28208a9310a9be4032c2b04ebdddb83cc7", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "76a134e245367d2a1d0fc35801a549d47ec98d0a": [], - "49a328730d3c6397820b733bbac903545568cd9c": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "62b4845be5a4b8ccc7ac3896d03c023193208e95": [ - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "23c265ba884b92ecbd9d18641078d964697e4590": [ - "c2a79e2a65b721d4de5f6d4806323174b9f8f393", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4da1cfb77084ef04a1bb6924de11eac05daa381b": [], - "52bea123acbbefac6ebb3b40b6482465845ef014": 
[], - "92c9ec1ef7511f3e3a355bae28a4d1555ebeeb4f": [ - "faca71d01e3dd3379f4027176e8cf1f02d31c03c", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "544ecfd78567dabc8684b79e5cc3fb9c126a494d": [ - "2ee03e28208a9310a9be4032c2b04ebdddb83cc7", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "cc893dfaa8693c5a7cbe37035a0fcc56e669e0ca": [ - "1dd344ce28f1e5a078f9d8396b5078388e555d99", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5d79d41b74999a286de2621ce732a928727efb3f": [ - "098370508aaf56f718a472511987ac2072d0f917", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "403fcde5c36533036e7f29705221376d80dd1dba": [ - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a8e510680ecbf5ad1fa32a486b3135f9886a6c2f": [ - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6d19a6ba6b1e89fb81ce100452c069d011ac8e40": [ - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "182e296d349662376d91f2b2ee1f05c5d577ea6e": [], - "375c41566e8710c7b6cbf12c1bf6347f5aa23ab8": [ - "6d19a6ba6b1e89fb81ce100452c069d011ac8e40", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c963c505ffc4cc8b33315eb967784d0a466b3910": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2f211fc941a215ce2f2ddbd0444e06bb74f5e5c9": [ - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69" - ], - "6a483cd1cbecd66150c9bbcd01606723950281bc": [ - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "23c265ba884b92ecbd9d18641078d964697e4590", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "c2a79e2a65b721d4de5f6d4806323174b9f8f393", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0f30612423381eb5d271c4ca4f4254149b0d22fa": [ - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3268da152f1149675b5f1cfd03f97026128b9e09": [ - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "15cbccf71d1cd3f886ae9b0f3cc001d14577d264": [ - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "53661ff6fdbfb8557c5b19895fad151792c62da7": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "50a260631a28bfed18eccf8ebfc75ff34917518f": [ - "3268da152f1149675b5f1cfd03f97026128b9e09", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "32ff43704a04191cf0b90a1dac7cbbc8f5df12a3": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e3a9af420cd2c0c8241856da92374027fefb87be": [ - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - 
"0f733817e82026f7c29909a51cb4df7d2685f0e7", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "15bacb240e2598457af4ded3039b6988aa9706f0": [ - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "2ee03e28208a9310a9be4032c2b04ebdddb83cc7", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "961f8cecba607f9116009310cda3138e90d17f64": [], - "398e4061dde8f5c80606869cebfa2031de7b5b74": [ - "c70eb74e09c41e8fcc71dd59e3b4d631f657f7cd", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "53b3b15e8cbe2baf82cd0de3709e0a1e6f677415": [ - "166b64f2ae8e52f5779682fab756cbd617a6e74b", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e194f641c554cb00cdd4bd6993f14c2dff8c3c03": [ - "605c32428861eb26b8631617b8f6c97a850d6a04", - "e1227daa4877599e13de41a5207a222e1b197456", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4989c08930e42d322b3bfed167d7ea434a698f2c": [ - "5437e8adab596d7294124c0e798708e050e25321", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e053be7f36a0772b68eaaa14f15650c14071e4ab": [ - "fa133b4200729a57db96ae50aff8c4a5ff819f43", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c1614ab718dad97018ee34fd57864bb58b6ecaba": [], - "b287a2765e5bceb732de39dafdf70594dc9cd664": [], - "faf73f722cb72f6fd4a0ebf9646f5e3407a72609": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "de217f5a555ba0eaf938ed938daa2e2321d8bdfc": [ - "9ba50f992ccd92f428503ea6246157260a26cd77", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b8a69be1ea8b1e6cad89d939a5234f0f8ce11c61": [ - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3e9abcaf919bfa8698d5458d20d577eff6b558a0": [ - "76a134e245367d2a1d0fc35801a549d47ec98d0a" - ], - "cc7462b76d5e8acb28e61ea1f57e17905540a415": [ - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1e45200057eb4e47817634f183cb882677be1a14": [ - "cc893dfaa8693c5a7cbe37035a0fcc56e669e0ca", - "1dd344ce28f1e5a078f9d8396b5078388e555d99" - ], - "6d951d939d3f27054215f2606a0cf89ed21550e9": [ - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "356633ba77851f4a9f8e2ba713ca83a4ef2c140e": [], - "75c3a39b080abfb8381e1ce1085573413b239dd8": [], - "d03a9b2a0e090cc9fd2ba0a457ecea35372f1018": [ - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ff4f21978fca01bd6bcbe7aac92bcec3cca295b7": [ - "96ea07447d2f9adefe03852a878517a2a6d45b96" - ], - "95c11cc5820ba32c60d5f2671f6567b9914a4978": [ - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - 
"b17cc18e4130505b939f7d527082eb6be2a7fd5b", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "521ccc898395a2818fced22b4cf371b0e5121f94", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "83ae44d08ee5c1348b0fd5fa3ed9b4236f98ca95": [], - "f9f49d9e8ff142c4dcbff82dfe27448f45408dea": [ - "6071b89c257d36490fd6f0877174adabad265d92", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9af3b6b3f8dcedbb02b88936c428e1cd02503a8a": [ - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "fa133b4200729a57db96ae50aff8c4a5ff819f43", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "886499f0ab825a266f953f952dccda4b721e80f7": [ - "544ecfd78567dabc8684b79e5cc3fb9c126a494d", - "0adec918885dff698acf359988ed79a543157f80", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a" - ], - "236445f0a3b1e30b2542e5e64616ff6a8af7e3ea": [ - "9dcee248452d84b6bf26911ba6726ae5ce1a46f3", - "cfc12c38a4d848ff3c4225488a2c72e7d4300f4b", - "2122ed4a82bc7d8affc5f7ae5026d174ea34ea52", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b8bea82e7a8028fbe79942c4331546701976dd36": [ - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "23c265ba884b92ecbd9d18641078d964697e4590", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "357613ea0e90bd41fb942fd65f39498e71e2dbc3": [ - "6e10343767ab09dde83cf99ea3442907402a9810", - "55a250868627de2d202d06e7cb3f6cbcd3a66f88", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "c2a79e2a65b721d4de5f6d4806323174b9f8f393", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "58e1eced542996c0a0e771a52880d612a3a401cb": [ - "a8fd9c1625011741f74401ff9bdc1c584e25c86d", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "30809168fff23c852867ad359baaebfae532f0a7": [ - "6052486bc9144dc1730c12bf35323af3792a1fd0", - "0968f1592f9401d72bf0d97e740496818c1a3135", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "14876d2d95e8ede7910465d836fe1f524109c7b1": [ - "99070fb6df9e8d11e30f7aaefcc9f0b0c5a73789", - "5437e8adab596d7294124c0e798708e050e25321", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f93d5d62a227d0c4ae85c08d7de07d7c2ce28a28": [ - "50b0c6ee2b3d53ba5af69d6c00b5d60888a9026f", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5ee2d218ce3bdeca8bf76462cc9f305978dff29f": [ - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "db1c83ef73d2f7731b0dd255835f2f26db749e17": [ - "4d9a59d4253cd54fda6090d11867a7c048e0d4ca", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "f3ce9ba3fcec362b70263a7ed63d9404975496a0", - "dc05240a06326b5b1664f7e8c95c330b08cd0349", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a1a7c730fe9bf6da1fb83170d840f1387d31f444": [ - "e194f641c554cb00cdd4bd6993f14c2dff8c3c03", - "769f8b6d73fcb5b5013ae2c1c48b94c801c88ba3", - "0adec918885dff698acf359988ed79a543157f80", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "657945a83b062679887c49334f9b47687f6e1d64": [ - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - 
"59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c61dab079efc1a296b53c5ae164c49b394570f8f": [], - "e55e905cc02cfc9b844d91c53fdd23b5f0f6ddd2": [ - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "15ee6be20da1223c19873d4f7f50a4924d1f01cf": [], - "41c271154cb8be6e36ce97ca06cbbadf15c82538": [ - "0f30612423381eb5d271c4ca4f4254149b0d22fa", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "589b3d8a310d45f9e33b3474c777f83ec50d048e": [ - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a4d9686d2faa48c3d279c459d82640b1f7f36f7d": [], - "73b879dfb0367a3910612f080f2fb6e7ec04a26e": [ - "7df3595bdb4003589e8ca1757cc39ec03a39a2ff", - "aad167be3c902388ea625da4117fcae4325b8b7d", - "03532123ccffae8d411264320e8a5ae2b6eddea0", - "f208ea909fa7f54fea82def9a92fd81dfc758c39", - "0783c214623c18f6a8ad96b8eaf4a67a382e68ee", - "e070ff286709db28312e08b52b05539debe88146", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "5437e8adab596d7294124c0e798708e050e25321", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c82e95e282e85f649f901f16e3cbf434b582ba74": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4780d0a027c5c5a8e01d7cf697f6296880ffc945": [ - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "142ebbf4760145f591166bde2564ac70c001e927", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0" - ], - "83a252b2adfc5aaaff1e0ffc04bfc89855df19ab": [ - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "45f02191fc0fd83a2104bb71d30d465d46681d52": [], - "237032b6256087766e6d366a47227aef980fd2b7": [ - "6584892e7d0f0055ee9adabf03f36f2fa74319e5", - "83ae44d08ee5c1348b0fd5fa3ed9b4236f98ca95", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "96ea07447d2f9adefe03852a878517a2a6d45b96" - ], - "f0d172b41055b0e3d6c5ac2d4f880d037dc10387": [ - "09b7338021fff3200c2098b19824aecc83a66cb5", - "0d0dbfb1b315a43216020abaf74d289456198219", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "541a214557b95aa35c5d564cee89d38c92c2da53": [ - "9ba50f992ccd92f428503ea6246157260a26cd77", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1a3ba6662ef1c5aebd3b343d3d9f77a8543e474d": [ - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "50b0c6ee2b3d53ba5af69d6c00b5d60888a9026f" - ], - "705604661576a12a2a4fb0ec57b7cde67e6559eb": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "45c32573ca63698a704132f99bcc5680726f22fd": [ - "6328c11081558487a4029388e9a5ebb45194b1b7" - ], - "cc89798f85653bc57224692693cd1de29901e5ab": [ - "aad167be3c902388ea625da4117fcae4325b8b7d", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "97d9d728f924c1f6cc085844136a481cac07c4b0": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "160900e8b976dabe1453c7629227342bdd9a27ce": [ - 
"c79852e9c9cc6734c9150847deb5449e489354ea", - "d235a9085e0543fcbe502fbc269f9a8ee01dcbab", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a" - ], - "692fd8ea468071ed73221ab1a7d3e1f25360387f": [ - "40cfe84ce5ce0d24f4ac189f8aac441992ab3233", - "3268da152f1149675b5f1cfd03f97026128b9e09" - ], - "745cde3c527d834124ed9c219135285be907b83f": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "30873c32db5a219a58be928d5692cce48be1d3a0", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc", - "1403e6b9adf7712c35ae56327d52fe54603b87e1" - ], - "341d516cc858dc92ba14a788ef40d0559b5a2b26": [ - "e66f0f822d4c4853b39b27daaafa2993005fd55e", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f227e955e40084f2cee68a5148e404c94d4f1490": [ - "80d0116d77beeded0c23cf48946d9d10d4faee14", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "bd39fe3a93813a75c32a987db09b3516483a9d56": [ - "01b5412f3d17e90e09226d7c40ad4d4468a1414d" - ], - "2dae1ad9b0c5cafc870ca2c300d639fa6047fb00": [ - "34ff1da13770908ef0bf389365cdde743d3c9db1", - "097dc73d5d422b3c09286e72d16b2561ae5fb395", - "b645e706651391eca1f692e7f560051c21b3dea4", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "53c0abe83fe9b4fdaf2208295d8504fcf5241694", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "30873c32db5a219a58be928d5692cce48be1d3a0", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "7c2c35ad8ad3f146f07521aa00c2747c6560bffb": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1c0c13edd4442ceb7eac70bbcaebaf84512f9a3c": [ - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "316206a2f89eb94ce02a81fba1dc304586f21b39", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "95241ac76c3e8df8fc6ad7ba7f6ad0d42ffee370": [ - "87e02a265606f31e65986f3c1c448a3e3a3a066e" - ], - "851bd5d845ecdb153927c67c2914438d335836c6": [ - "3d7d385d9ee75a286e8da27f7d3cf9f12651c899", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "718941f22cb8f3c1564777fca6588a7aba3c76eb": [], - "8d36390a430845849b62646a8c8be4c79f2b3d62": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "521ccc898395a2818fced22b4cf371b0e5121f94", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "64a257d1e5d8a18a0fbef5b25fe19413d87ef4d9": [], - "3fecb6df989c51592cb1a1a737f259c5e3b0eab1": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6729422d35054c0bccf752e6df306638e9bf1401": [ - "e194f641c554cb00cdd4bd6993f14c2dff8c3c03" - ], - "92d2aca10f1aa68ca580e51ca732e517daeec102": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "82731ea473d36f0ba9113dbcdba20943d9c9f302": [ - "a8fd9c1625011741f74401ff9bdc1c584e25c86d", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "0adec918885dff698acf359988ed79a543157f80", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2e9f3a96d6f6011ff1b4eab541ef1df12cde042f": [ - "6e10343767ab09dde83cf99ea3442907402a9810", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df" - ], - 
"811f451f1991ec5508e67d00375ca4f5d05e0eeb": [ - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "42780f9c7f73d73d7a887e2f787af0e079703d40", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2883358201b604b86b4d11fa5d00653e244bedac": [ - "5d79d41b74999a286de2621ce732a928727efb3f" - ], - "355e4bbb1c36bd5f57175f2a509c809aa47509e5": [ - "96ea07447d2f9adefe03852a878517a2a6d45b96" - ], - "fa2a052567f94f37c78be2ae56d430d5b00d8f6e": [], - "d706448bcf8123344d21245dbf7bdaea7380fcc6": [ - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "8738be33dcae524a10ccbe29afa84a870c5f6cec": [ - "9dcee248452d84b6bf26911ba6726ae5ce1a46f3", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "61530d10ac9c86c6315541e6ffb810400eec979c": [ - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2be910eb19f2f8f2e8038d2a835bc48f868ccbf1": [ - "5db0f55332839c408e3049cea1a6ad48fefba70c", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e9147d13c90682aaecab5aefc7c6132dca6483f7": [ - "4e33c5756aa18d248cf50fef9382acda1e0f65da" - ], - "85bb4acab1d2a169472f85477eff4ef0a4047582": [ - "2430cd1ac1c480894f2ef9368b8962a3a73e7b57", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d628f3c65e0c8c3024d9ff0678d44383fc9fe27c": [ - "3fb0731538c59f8520a309996a0567b58965f0fe", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d0ffb09a00b67365efb9e217c3fd45d804733810": [ - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "7a079dd04a9c4709fe2abefd4fbdf6dee8e6704d": [ - "b6499bcc10d4a70c3ca8b84995270cfd0d29de4c", - "e053be7f36a0772b68eaaa14f15650c14071e4ab", - "e070ff286709db28312e08b52b05539debe88146", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "36d6d614c7b36144c59e3892a6812b2a68af2c93": [ - "c589ddc6c6fb07189af7c1212f6eb15c5ff72cde" - ], - "598a297ac28e323db917ffb19e5481209e588430": [ - "c60116a51bf66bc363d11b797d97eba84b13cfd7" - ], - "baa727294fd380959118abae5e7a985e4975f857": [], - "22b485136af2da87618d0ca5253b9f54ab2d3457": [], - "59641c10ed7431a3cf841f308367dc2dc0281b74": [ - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ef76276f9ef929496f03282fa85ae1bbcdc69767": [], - "61dc10fec277e4e1c328f690db923a35d05c4ca5": [], - "eaa20a9090861a778d754a6d50dbb0396ae09e80": [], - "4767eeb232918ab5585ac52a910f96327f58f424": [], - "5a19b3e8dcdef157f002bca2839268c4933b7b7b": [], - "dd3a59227dbc55273edaed0a9f0de7105c5f8ef6": [], - 
"9e8bad21221b88b516edd28fe902e591ac08efa5": [], - "cec495c9338929c20f6991ae80a97d6300d1c242": [], - "e1227daa4877599e13de41a5207a222e1b197456": [ - "2ee03e28208a9310a9be4032c2b04ebdddb83cc7", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "fa133b4200729a57db96ae50aff8c4a5ff819f43": [ - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "47df3fd32d00220c85c2c51a571254fd99b2ecc7": [ - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6c15605b4b77f970975757a875d349ba240f4caf": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f3ce9ba3fcec362b70263a7ed63d9404975496a0": [ - "dc05240a06326b5b1664f7e8c95c330b08cd0349", - "96ea07447d2f9adefe03852a878517a2a6d45b96" - ], - "3b239d232ebb0fdb0515f41fd439e54ed4e8f86a": [], - "3d39a73d32bb9f72ec21fe3218e7f190d623ccd0": [ - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "7a25155364476839b6d1fc0653cd8611327ab9ba": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "4237cbebe788a97174f48dc398082739bbffe95b", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc", - "1403e6b9adf7712c35ae56327d52fe54603b87e1" - ], - "13a0d8bb38f739990c8cd65a44061c6534f17221": [ - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "098370508aaf56f718a472511987ac2072d0f917", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f2eb62c997cc8334ec734abdbc4586666f012d48": [ - "62b4845be5a4b8ccc7ac3896d03c023193208e95", - "1a2e90dff605dad7dbefeed121e6d295c7a77d62", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a221def07d3cd4f29c38234e271faf8a523e0f5a": [ - "6d19a6ba6b1e89fb81ce100452c069d011ac8e40", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "1dd344ce28f1e5a078f9d8396b5078388e555d99", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "769f8b6d73fcb5b5013ae2c1c48b94c801c88ba3", - "1a2e90dff605dad7dbefeed121e6d295c7a77d62", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0376b7ff6bd5fd3df5dc766cb24f9ca8736ea34e": [ - "d3e4553f0a1fd465ae358701f1bdc2e8265308d6", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "176ec99005b5085d5d9a34fb770d75d34166c9f5": [ - "375c41566e8710c7b6cbf12c1bf6347f5aa23ab8", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "1a2e90dff605dad7dbefeed121e6d295c7a77d62", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1b2bf55e8432c2d42bc94c2abb868ff3bb23f175": [ - "59641c10ed7431a3cf841f308367dc2dc0281b74" - ], - "55a250868627de2d202d06e7cb3f6cbcd3a66f88": [ - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - 
"e826ac71dad8c4ce36d82fb7add43e3d306bb7e1": [ - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "341bdbcfc3febef7691a97c216ad394653211095", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a8fd9c1625011741f74401ff9bdc1c584e25c86d": [ - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "80d0116d77beeded0c23cf48946d9d10d4faee14", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5945baeb4c52ad091e7e2a2c351424073154250a": [ - "50a260631a28bfed18eccf8ebfc75ff34917518f", - "3268da152f1149675b5f1cfd03f97026128b9e09", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "8c389a029c58926202bbfdcdd788974787fe945f": [], - "914254fac74a2da051cccf6ca16afcaad416a079": [ - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "80d0116d77beeded0c23cf48946d9d10d4faee14", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc", - "1403e6b9adf7712c35ae56327d52fe54603b87e1" - ], - "be2672ba4b68a5ebf69ce7c2f6024bd60f1d75c4": [], - "b74f11455b2a61059ae3ee56e86b8799abbf7f86": [], - "848228a5df1c7cec594148230648bd7edccdbd8a": [ - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4d9a59d4253cd54fda6090d11867a7c048e0d4ca": [ - "96ea07447d2f9adefe03852a878517a2a6d45b96" - ], - "d4b8b03d5e301b23de5180e7f630d53fbd45a5b5": [ - "fdb1cbb3ea42a47c8b6eecc94817f5276a4d0ea8", - "f3ce9ba3fcec362b70263a7ed63d9404975496a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "66242baf48b0f6b828e7547ac39ffaa5e1b2cb3e": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ce9d803168d64cd7caf0340041cb7bb8b0ac07f7": [ - "e1227daa4877599e13de41a5207a222e1b197456", - "2ee03e28208a9310a9be4032c2b04ebdddb83cc7", - "4237cbebe788a97174f48dc398082739bbffe95b", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "63fa4d16c3319f3e661a5512cc56ab7469eccdd2": [ - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "840028b328612a7eb7c3d5e60f5cb779065d7b4c": [ - "a38e0f993e4805ba8a9beae4c275c91ffcec01df", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e7cfc3362dd85b17c747e9f9636749696f87a88b": [ - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "0adec918885dff698acf359988ed79a543157f80", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "fdb1cbb3ea42a47c8b6eecc94817f5276a4d0ea8": [ - "f3ce9ba3fcec362b70263a7ed63d9404975496a0" - ], - "341546ab4ff1945b004d18300749419c3896c6c9": [ - "10717aefce06cc41465619ec8c956f4b0b0fa6e1", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "732627c703a9dbc78d9384f1be4c791c3a554391", - "605c32428861eb26b8631617b8f6c97a850d6a04", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3585e08f2491859679b761eae8444afe7ec62f74": [ - "faca71d01e3dd3379f4027176e8cf1f02d31c03c", - "2ee03e28208a9310a9be4032c2b04ebdddb83cc7", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - 
"6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "54a4517022703a27e1670d3b84214521882f0108": [ - "e070ff286709db28312e08b52b05539debe88146", - "398e4061dde8f5c80606869cebfa2031de7b5b74", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "341bdbcfc3febef7691a97c216ad394653211095" - ], - "40cfe84ce5ce0d24f4ac189f8aac441992ab3233": [], - "681f05c1952ce3e1dc48a4ceb230a6033092be68": [], - "6ad26eb2d2aa6679d16d9c16fb75cd2cbe1127bc": [ - "5437e8adab596d7294124c0e798708e050e25321", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "29bd550d0ab53296790ceba31dfe0a06754bcdde": [ - "36f7bc27c9a37eb337c35df4ae86f148e13d4e9a", - "70ece7b4ba8f3b67f5a797daed544fb6a0b627bf", - "0ea7fc93d4947d9024ccaa202987a2070683bc1f", - "34bc28087e1d6f047e2736791f79d769293f447c", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "5437e8adab596d7294124c0e798708e050e25321", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d3bc7ba19e274bb6fb5e055a3f1b62924c731432": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "341bdbcfc3febef7691a97c216ad394653211095", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3b16a709a5b18e52b0b6741cbc3c0e68a03ecd8e": [ - "515cf674fcdced5a7d5bb156dd5fcc1f5290e79b", - "197ba7bbfdbb052b0770088815c110774220f397", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc", - "1403e6b9adf7712c35ae56327d52fe54603b87e1" - ], - "782f3d43b37790a83c98d5fd3ef142b296f20616": [ - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "717d4cc5188e06791ef2043045e6e570ae764091": [ - "a8fd9c1625011741f74401ff9bdc1c584e25c86d", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc", - "a37153a5f42ee2951ad8a2c9ec86b52c4bf81c77" - ], - "a171780f04780f1dca6965e6c451915b8ff5458f": [ - "a221def07d3cd4f29c38234e271faf8a523e0f5a", - "faca71d01e3dd3379f4027176e8cf1f02d31c03c", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f06c2565fe4b4cabeed1835eb9dbf0a9cea36959": [], - "dea7d906dfc2a34c016fa0420d2f94515ac3d373": [ - "85e7d63f75c0916bd350a229e040c5fbb1472e7a" - ], - "83205781287d3c8b974027da782d9a42d436f1cf": [ - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "73f6927da9258a9637c7acffc9b201a9c0db38e4": [ - "3268da152f1149675b5f1cfd03f97026128b9e09" - ], - "bc931ed644c3ac7b89c480966f91f525d4075b29": [ - "a7f8fd45fbcdd81449cb7a1a6a2b2c18b38f8151", - "cff26bda86237d113ed01c812ad8bedd0afbe070", - "6052486bc9144dc1730c12bf35323af3792a1fd0", - 
"90350aa626bed47b02d0c162462e5b0ca82be6b2", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "316206a2f89eb94ce02a81fba1dc304586f21b39", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c589ddc6c6fb07189af7c1212f6eb15c5ff72cde": [ - "bf8491bef353df126e2306ad2fe4b898697b906a", - "b21670e8061a06ab97e7d6052c9345a326e84ff8", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3fa956ed52c7038e099223429900e9ca5baeab21": [ - "0a0d6a98bd246a82aaaa9d33ec0eadf4ceae69dc", - "e6210c6f37bda0a9add8f01acc98ebd2e370814a", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc", - "1403e6b9adf7712c35ae56327d52fe54603b87e1" - ], - "1161fc662f9a2f1c8dba084b01a08cf86208850f": [ - "d6c73f758b05f38529c1a96cab7e908a2047dabd", - "96ea07447d2f9adefe03852a878517a2a6d45b96" - ], - "a964f5b3b56baa43dee1ec24bc2682b1236f302d": [ - "96ea07447d2f9adefe03852a878517a2a6d45b96" - ], - "b34862afacf36e7011d40c67bb67c5ee9cf7da22": [ - "bf8491bef353df126e2306ad2fe4b898697b906a", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "20a81a7f60002e4e0c28fbd818622cf27c278cc3": [ - "a29a0e679e626e8961dc217081eae2a6c63a15ad" - ], - "b39e28597aee25ddde24020424a55bbeaee9dff3": [ - "763eb8d43e2f8a5d9da26269a4985efd1c099a5b", - "93d6fa92d60938b5bd0e405e159832b91332f169", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c60116a51bf66bc363d11b797d97eba84b13cfd7": [ - "ad454e24bd32408559512b4bac4cd5237794210f", - "a55dfc1482c9fa32859d1e8e8c5813f5a22982cc", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "30c0cdc414f68211d5d0514df027cec22e005174", - "097dc73d5d422b3c09286e72d16b2561ae5fb395", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "59641c10ed7431a3cf841f308367dc2dc0281b74" - ], - "4c9eb09bfe9524574c0d4cd9614789f25f533623": [ - "8ce6ad6d8a73757309d3b9f525cf15cb68e32397", - "0cfdd655100055f234fd23ebecd915504b8e00e3", - "1358f90705b05cdb20ebe6799b02196205e7e9f0", - "3fc3460c4554a28e489a0ea6ef067b79b7d301d9", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "e194f641c554cb00cdd4bd6993f14c2dff8c3c03", - "398e4061dde8f5c80606869cebfa2031de7b5b74", - "914254fac74a2da051cccf6ca16afcaad416a079", - "e826ac71dad8c4ce36d82fb7add43e3d306bb7e1", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "80d0116d77beeded0c23cf48946d9d10d4faee14", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "558f81f3fb2284d9561482143aba363c624de559": [ - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69" - ], - "b31a5884a8ebe96b6300839b28608b97f8f8ef76": [ - "398e4061dde8f5c80606869cebfa2031de7b5b74" - ], - "6c04cd18fd87aad37c1c04450dce781f02bf2d1b": [ - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9a0de1abf99a312553e02544b56174b6af7517cb": [ - "386bd4d25043516f076ea7b2296a1ebec84f43ce", - "80785017029cab501fcdb90b98985cd2b36e1fb8" - ], - 
"1f8d74abcb89ee21bf01e7133cea503d8c99fef7": [ - "b626560f19f815808a289ef5c24a17c57320da70", - "166b64f2ae8e52f5779682fab756cbd617a6e74b", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "8205d2fe15dfac7b6d09946f065d422fca3b31f5": [], - "368fb35a07076eba01c2e4700499323cd4524513": [ - "3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e", - "398e4061dde8f5c80606869cebfa2031de7b5b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1f3d3aaa4e33aee616ee7fbf1bb7ad40ca7ff5f1": [ - "9dcee248452d84b6bf26911ba6726ae5ce1a46f3", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69" - ], - "4c0428917aeee6aa7bd434f337d039f35996b736": [ - "2392b6d3a5cad9e5cf349169eaeee848266adf6a", - "b31a5884a8ebe96b6300839b28608b97f8f8ef76", - "3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e", - "3d5922d71a370f32b7f232a596def914f67eebd1" - ], - "dfddbee50f2da60ac0dde58361cf11fbcf02b2fa": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b217b6bc340af9a10bebbf8acc36ea30871769bd": [ - "aafa168f9756f42c4ff707f6577cdd2eccc62b12", - "18143a4c2da37444e06feed04cc9efeb0856352d", - "ff2a0fb125e7f03428420230c6ecbeafd4cf07a8", - "7fa85f9c0fe44f1bf9e58a55f0f009296578c2f0", - "6a2d96d2a7adde6349f15c1e680b67d114e7b67c", - "57be0448d168e8d6d0b6e0d1a4405fb5fbaa1b56", - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "dc385646887a3669ae0ee506a263d592f4f7c7a6", - "3fc3460c4554a28e489a0ea6ef067b79b7d301d9", - "b6207fe49e29c77402f8dbab052e949990949609", - "48abfc41a0abf023d2037ebb2f274835e0d322d0", - "1d75f8de31bf47ec46fa5586056420ec8bc97e86", - "29bd550d0ab53296790ceba31dfe0a06754bcdde", - "34ff1da13770908ef0bf389365cdde743d3c9db1", - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "b17cc18e4130505b939f7d527082eb6be2a7fd5b", - "316206a2f89eb94ce02a81fba1dc304586f21b39", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c8a940f2015afad576d35e7a6916cc1a0cec169d": [ - "c589ddc6c6fb07189af7c1212f6eb15c5ff72cde", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "84d99893ee24fc825e359598d44d602c45c4865e": [ - "eae18cf748729ab740dbb8c11a28ea377dc41db9", - "c5b2243baf88a00db2d4e4f9edb33cde08eb153f", - "6584892e7d0f0055ee9adabf03f36f2fa74319e5", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a38e0f993e4805ba8a9beae4c275c91ffcec01df": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e7ad08848d5d7c5c47673ffe0da06af443643bda": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "0adec918885dff698acf359988ed79a543157f80", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4e3c65511292a800b17be6653bd057e7a545a0b0": [ - "7ffb212356df9980347b3d3b9910dfba75a5d0c7", - 
"ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a43a3fadc9190e61b34f59a913f1716e443519e4": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "958530b2a6267fd0c251a7c82e0267c21dca9cdf": [ - "4e3c65511292a800b17be6653bd057e7a545a0b0", - "53661ff6fdbfb8557c5b19895fad151792c62da7", - "7ffb212356df9980347b3d3b9910dfba75a5d0c7", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9e6e6e3b9680e4d4ee15fa6ed5a3a178371f4cf4": [ - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d847ab7f4109d0a4c640d5ee34b510a76002fddb": [ - "e194f641c554cb00cdd4bd6993f14c2dff8c3c03", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "faca71d01e3dd3379f4027176e8cf1f02d31c03c": [ - "4237cbebe788a97174f48dc398082739bbffe95b", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "df4c580b37f54c11eb76922a67b2dd5a6672a93d": [ - "c78bb96ee119950b4f6a9dc0155199826b0bc8c8" - ], - "7b4e0aee93666bfa4bc1f2837036a916b25d8826": [], - "c96a8150c82a0ce9c8c1e069590f534939a30038": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "17a6116e5bbd8b87082cbb2e795885567300c483": [ - "c76dd4a70361c3afd2e19d046343e2dedd16ecc3", - "08b85bce712168998004ee80ce4e475390413c74", - "d03a9b2a0e090cc9fd2ba0a457ecea35372f1018", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "4e5f7cd537a1bbcd090f9887b1b59f39a3715dba", - "cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e", - "0adec918885dff698acf359988ed79a543157f80", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a" - ], - "2677645b0f96c8c055b83c904d531cfe22b2e623": [ - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "355b66a65aee97822eb7404183ee72b18cb648de", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "7bc9607c5cf3fc817675d46844f529097d579514": [ - "6b5d1e50894b1f28e4798cf20e9ffa88b9ec011a", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c2a79e2a65b721d4de5f6d4806323174b9f8f393": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "33285e02758788b681754d283df20971fef6e31f": [ - "59641c10ed7431a3cf841f308367dc2dc0281b74" - ], - "ebb3d299213bae89b5d302cc3dfc36573ec83956": [ - "097dc73d5d422b3c09286e72d16b2561ae5fb395", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "fcf4dcd27ae9a26c0719d493f7b90d1ec3d620ea": [ - "9ffefdf1fcd780cb71450b0a7a29247c66aa87be", - "341bdbcfc3febef7691a97c216ad394653211095", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "133777180e326dfa53523bf53b0a969bbdccb0ee": [ - "da061a6e0016d6b625a8e86d64a797ca8ddb92a5", - "cc0f0cb09a73f82ed44d900f5ca710bec784acc1", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "41531594d7e0f3b2e138ae43e0a0f6e24a9b014c", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c49fd6cac5382cdbc2bc31be195e42bc28dc615d": [ - "03532123ccffae8d411264320e8a5ae2b6eddea0", - 
"142ebbf4760145f591166bde2564ac70c001e927" - ], - "281f8411654a8bbca3b15dd95fdba42e03d6d666": [ - "b4170009de40c1c46adea6a314734434ecd4b0dc", - "3b16a709a5b18e52b0b6741cbc3c0e68a03ecd8e", - "316206a2f89eb94ce02a81fba1dc304586f21b39", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "1403e6b9adf7712c35ae56327d52fe54603b87e1" - ], - "4f94d8c270c57ea697d3d9e15796cba8352347a6": [ - "0cfdd655100055f234fd23ebecd915504b8e00e3", - "9ba50f992ccd92f428503ea6246157260a26cd77" - ], - "d1e10a539cc83d3df1b612fa098ceea1be63cc29": [], - "bd16d3d5f4f237bd76395b17e56cde3f01a41584": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "bad49c894e1607fa839d4a3ce07dc1fb7939f915": [ - "7ffb212356df9980347b3d3b9910dfba75a5d0c7", - "0adec918885dff698acf359988ed79a543157f80", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "716178841e169f5c02a1fd5da241825699501248": [ - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5b8f647e1a02cace17ed0896eb9b37be9a9fa45e": [ - "96ea07447d2f9adefe03852a878517a2a6d45b96" - ], - "9af2b40a6d9edeb3d53d7e612018bdbff993ffd2": [ - "3b16a709a5b18e52b0b6741cbc3c0e68a03ecd8e", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6036f424468a5be5dd9b427ae266b72cb8468b5f": [ - "64ce6ef1f5cf227bf2bf917c87273386ae16256f", - "197ba7bbfdbb052b0770088815c110774220f397", - "7a25155364476839b6d1fc0653cd8611327ab9ba", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3df2b2a90ed3e3db6abc6c0d36165d1778cdcb37": [], - "f120490d06d1d30c389ed60b634b8bf69cd64efd": [], - "e31a820dd9324791ab294f89528455ac380d0d87": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1fa0a012a83348c02a113ec18e1a165e273402f2": [ - "5d879530c443dd06d3686f31d32cfe34c7ade9bc", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "7ca2e9f768311f29c8abf58e5ec6acf7cf268d9a": [ - "2577d053f8aab912d29b424e1f09133d83740fd2", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "40047a74b707743157051d38f76061ba5ff9aab4": [ - "6e10343767ab09dde83cf99ea3442907402a9810", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "338da2650fbee5b35d1e37b16c2603b466eea962": [ - "4161ad2d2495d8af1d62dc5e71882bde642cd1c1" - ], - "00c2aea466034c563b7aa3cd8eadb1fc46b119fa": [], - "696c6337ca3ded52e6135b9f13b77db3eca1c7c6": [], - "94beb9f249d6d2f1c00d8edfa2db861633aee6f9": [ - "2577d053f8aab912d29b424e1f09133d83740fd2", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e2ffb7b4215cbe7b5d06be4a37aacedc8762fd50": [ - "496dab67b98785b46867173f0d777eaa9a32ca9c", - "bb58f2f63888456a3e04a56a18996ab8dacdb257", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc", - "8fdd34153d1035d09dd4a6efa9cb0c91d23d0045" - ], - "25e48d36c34d958c89244953501638cfcb7a2839": [ - 
"6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9d21467c22b1709ca5a7f6c21cbbcbf5a5c4c9a9": [ - "102e4c860e39a2bfd7bf3f03b9ad69aac7bf3b5f", - "3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "bb70fdb77293197cd0b372aac5ebf57eb1fe0949": [], - "52f18f95b6e92019bcc83e2e4aa17ee4bdd75d31": [], - "a676a587b0e079096f7f292b4a886d5d3dbcdb53": [ - "bcefc74b20649fd41ea05d87a3fa512d2559fc8d", - "d1bd7ae97588eccfbcd31ffce4fc924d12a5de4d", - "4161ad2d2495d8af1d62dc5e71882bde642cd1c1", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "7d9b662031fa785b493ed8255cd6ad19d2d8ea1e": [ - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ef0679f8b3114c339bdf5a0c202403a08d160a88": [ - "17170575aa8b4fa4e3eef5d366ada706a94dd836", - "ca7bd64d372e3bcb3f4633ca4a20291ff57de3c3" - ], - "4a99be7d5e0fbbdb28914bd5e96df26949ecb75e": [], - "b5cccb9a2a0c1e2c22fd1efe1cda7f9beb57bcbe": [], - "0e87f4c721c2a5302e9cf7e2b3a6ceacfaceb469": [ - "df2beaae63e4d68ef8e762bcd4704c9f11f856d9", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ae5767106f8e6b1d6a2da3992e3c4faaf6dee31c": [ - "fd80f7f3673fc6ca02f192d5d73426f11a4be659", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "4161ad2d2495d8af1d62dc5e71882bde642cd1c1", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4ea9ee6ff16e4c7da58d10f8a2322e6a5aaccdf5": [ - "fca92fe287c44c9ec79ca1f2762b0bf2e5e8df2b", - "aade40af0d85b0b4fe15c97f6222d5c2e4d6d9b3", - "507acddb0b7f36b83fd7c8bff2f121eb506ac8fb", - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "bda605928d6ebe4db906e69ab5d343df75918727", - "2da3a84e72a2973504cd9cf1c0a377f5a5a91f09", - "4d17732d90440682b0500f4e209c6cc4fac20e0e", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "166b64f2ae8e52f5779682fab756cbd617a6e74b", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e814deb54d154aad19ae2b72a2e4dd3376175bb5": [ - "8e37dc1215681aa153a51c07078ba8befd6a6e01", - "41531594d7e0f3b2e138ae43e0a0f6e24a9b014c", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "d53e70d834243d3d8d4b621c0c52dfec26081155", - "0adec918885dff698acf359988ed79a543157f80", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "13f62693ab4483566dca1d818d0122a7b08eef98": [ - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722" - ], - "f64bff444f4ca1287496e8a0c23316b02e2f1586": [], - "35b45a852ff24e789f9406b96170cfcbbaed1781": [ - "34ff1da13770908ef0bf389365cdde743d3c9db1" - ], - "69f0c3a693d5f7f1512f2fcb4104692e4ae36184": [ - "f8a2dca1e8fe56e698984c077f7ff58d8ca867e9", - "89184ab496b2a1ae31e068e628479b4cd8f4b9d2", - "6052486bc9144dc1730c12bf35323af3792a1fd0", - "7f3bc301ae0e2bbb78a0d42f074865e87d908f9a", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "d3e4553f0a1fd465ae358701f1bdc2e8265308d6", - "23c265ba884b92ecbd9d18641078d964697e4590", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "1a2e90dff605dad7dbefeed121e6d295c7a77d62", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6cd26d124ffeb6ce301ef351aada27fa0852f81b": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "33633c0da2dd0a514fb281841b5e8668d52380c8": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - 
"85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "286c3587f2616839286748461cbc90261ea49caf": [ - "27c16cca907aa43397cc226a182b73b396c5cf66", - "9dcee248452d84b6bf26911ba6726ae5ce1a46f3", - "711d5e8ddbb840ad31a9ffa3d38590603ba69a92", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "166b64f2ae8e52f5779682fab756cbd617a6e74b", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "31e28e7f32645e844dd543d5daeeeb9215443698": [ - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90": [ - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "34bc28087e1d6f047e2736791f79d769293f447c": [ - "42e1790c7979796634d15920b4a08990e847243e", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f32fb177deec417b825eb7b2b64fdf08357ecede": [ - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "42e1790c7979796634d15920b4a08990e847243e", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3a899fbda8071af3fce011ae2c1f6c00264c070f": [ - "3c8444cc4e96bdbe6853b886caf032afd1ee1d20", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9": [ - "341bdbcfc3febef7691a97c216ad394653211095", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3fb0731538c59f8520a309996a0567b58965f0fe": [ - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "6a483cd1cbecd66150c9bbcd01606723950281bc", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "a2412fdebd53bd25476f834ae2b8aa8cb44cb1e1", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "76c8e90dfd0f1e78e6a94d702a5b14b3e7206003": [ - 
"03532123ccffae8d411264320e8a5ae2b6eddea0", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "398e4061dde8f5c80606869cebfa2031de7b5b74", - "b21670e8061a06ab97e7d6052c9345a326e84ff8", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "af365d54e237fb213d980b2dc0c2ef1a4280bbd7": [ - "3e4afde5a9de2c1801da99b8aff5ae05923f256b", - "135ae2ea7a2c966815e85a232469a0a14b4d8d67", - "70c3d5ab03a54281be91709b19e3f50a2e4be0e3", - "dd889342b0de45f7434cdfa7543e3bd46ec824cb", - "7dc928f41e15f65f1267bd87b0fcfcc7e715cb56", - "0ea7fc93d4947d9024ccaa202987a2070683bc1f", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "142ebbf4760145f591166bde2564ac70c001e927", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2e6fa3095df1d1ed041dfb4f5a18e31d4b7bd7bb": [ - "3d5922d71a370f32b7f232a596def914f67eebd1", - "1c475acaa1060c8318a625f24bfd88c12f367516", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "da3aca9d7b50da823f669c983edeb60445720fe0": [ - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "341bdbcfc3febef7691a97c216ad394653211095", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "a2412fdebd53bd25476f834ae2b8aa8cb44cb1e1", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "7fc133b3a61e88338ae15a2bf72f08fdc2beb504": [ - "34bc28087e1d6f047e2736791f79d769293f447c", - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "316206a2f89eb94ce02a81fba1dc304586f21b39", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "cfc12c38a4d848ff3c4225488a2c72e7d4300f4b", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "dcbf62f17dad0f4554f91c822d141fb92f78429a": [ - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b8bd29a6104d26a16687400049a4e7e026ae6258": [ - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - 
"6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5b37aa7ee3f517380f92fdac67753adbaa58514f": [ - "b115c1e1e9e51f8ad7d47b745bc04e29a654b84d", - "4d17732d90440682b0500f4e209c6cc4fac20e0e", - "50b0c6ee2b3d53ba5af69d6c00b5d60888a9026f", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "57be0448d168e8d6d0b6e0d1a4405fb5fbaa1b56": [ - "02540ae926814f4b7972d3fa4dd33932fdc4b58b", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "8f936af93fb2b52b9678ff8f17c1ebe8de236a88": [ - "06edda0310b4ec7c5012d012349252a3a77521b6", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4d74a5048b884e8bb3842240abf98915c619c8f8": [ - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "27343da734be32fa5347f367730cacaa6d9b3c01": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ff3dbed244fc5f1787a9f92a6b91be27855d4adc": [ - "327e0290fd71609bfc1a30478a95f690668fe622", - "2d765d953efd738034782f9afdb311e3ba015edd", - "cc0f0cb09a73f82ed44d900f5ca710bec784acc1", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a884d6284f330a38fbdb3ecc147d48482da28867": [ - "d628f3c65e0c8c3024d9ff0678d44383fc9fe27c", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "cfc12c38a4d848ff3c4225488a2c72e7d4300f4b": [ - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "605c32428861eb26b8631617b8f6c97a850d6a04", - "2122ed4a82bc7d8affc5f7ae5026d174ea34ea52", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9cef5a098486aeab6ed3700c5e3d29488488d16f": [ - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "e070ff286709db28312e08b52b05539debe88146", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "dcbf62f17dad0f4554f91c822d141fb92f78429a", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "0adec918885dff698acf359988ed79a543157f80", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "69bfa665e507fcee4a8d003933998eb89f336c9f": [ - "9bb3deca32af8d632e0d916c587cca6c185a6576", - "0ea7fc93d4947d9024ccaa202987a2070683bc1f", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "b626560f19f815808a289ef5c24a17c57320da70", - "0942bd8fad71282994ff4e9a779c09745da68edc", - "4d17732d90440682b0500f4e209c6cc4fac20e0e", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "341bdbcfc3febef7691a97c216ad394653211095", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", 
- "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f4d543ff431359947bf41152ac01233b8062221f": [ - "dd889342b0de45f7434cdfa7543e3bd46ec824cb", - "7fa85f9c0fe44f1bf9e58a55f0f009296578c2f0", - "0ea7fc93d4947d9024ccaa202987a2070683bc1f", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "42e1790c7979796634d15920b4a08990e847243e", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "50073733a6f8ddafd7ef9a8221cd940fa910b6e2": [ - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "f3ce9ba3fcec362b70263a7ed63d9404975496a0", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "30b45e1d990c3e73fb328bbcfd0f616ac51f60b1": [ - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "6a2d96d2a7adde6349f15c1e680b67d114e7b67c", - "62176de125738e3b95850d1227bac81fd646b78e", - "48abfc41a0abf023d2037ebb2f274835e0d322d0", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "882394bee14cb882cb4b685e6e4641a2627cb9cf": [ - "da3aca9d7b50da823f669c983edeb60445720fe0", - "08b85bce712168998004ee80ce4e475390413c74", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "355b66a65aee97822eb7404183ee72b18cb648de", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ae10d1462e63fd361550546d2470510e9a93fe75": [ - "bc70af9248d210663edf22e5fc84ca9313c697b0", - "ad454e24bd32408559512b4bac4cd5237794210f", - "1fb5a5298747b8c7d60f98640a543f20d42ab053", - "a74b7301d2df228c266c6405dceb547d07a022fa", - "769f8b6d73fcb5b5013ae2c1c48b94c801c88ba3", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "48abfc41a0abf023d2037ebb2f274835e0d322d0": [ - "34ff1da13770908ef0bf389365cdde743d3c9db1", - "515cf674fcdced5a7d5bb156dd5fcc1f5290e79b", - "097dc73d5d422b3c09286e72d16b2561ae5fb395", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "6e10343767ab09dde83cf99ea3442907402a9810", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "0adec918885dff698acf359988ed79a543157f80", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f9838a3be5c94bb2674a0e224de349b50e18f3c4": [ - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "80d0116d77beeded0c23cf48946d9d10d4faee14", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "a2412fdebd53bd25476f834ae2b8aa8cb44cb1e1", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a74b7301d2df228c266c6405dceb547d07a022fa": [ - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "355b66a65aee97822eb7404183ee72b18cb648de", - "0adec918885dff698acf359988ed79a543157f80", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - 
"85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "34ff1da13770908ef0bf389365cdde743d3c9db1": [ - "097dc73d5d422b3c09286e72d16b2561ae5fb395", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "40047a74b707743157051d38f76061ba5ff9aab4", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "6e10343767ab09dde83cf99ea3442907402a9810", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ad5573cb25fd403f7620332f363ae87327c69a49": [ - "4d17732d90440682b0500f4e209c6cc4fac20e0e", - "e070ff286709db28312e08b52b05539debe88146", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3d5922d71a370f32b7f232a596def914f67eebd1": [ - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "316206a2f89eb94ce02a81fba1dc304586f21b39", - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "d53e70d834243d3d8d4b621c0c52dfec26081155", - "cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "355b66a65aee97822eb7404183ee72b18cb648de", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6e41a4cbb34c4d403efb73d74f5be5556b1f13d6": [ - "f0676c081f12c9395cd0e920d137a90a9ceb2c4a", - "2f2a430ba6c93bcfaf4818316ff8a27b1e034b1a", - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "91bc42852997bc774467e9ef8cda19a85507f663": [ - "d03a9b2a0e090cc9fd2ba0a457ecea35372f1018", - "515cf674fcdced5a7d5bb156dd5fcc1f5290e79b", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc", - "1403e6b9adf7712c35ae56327d52fe54603b87e1" - ], - "8dbb29f93292d8b1b861c322d232fe087b2ef7b1": [ - "a55dfc1482c9fa32859d1e8e8c5813f5a22982cc", - "48abfc41a0abf023d2037ebb2f274835e0d322d0", - "3d5922d71a370f32b7f232a596def914f67eebd1", - "34ff1da13770908ef0bf389365cdde743d3c9db1", - "515cf674fcdced5a7d5bb156dd5fcc1f5290e79b", - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "7ce0c89a452e3c2917b63847495533865697c79c", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "316206a2f89eb94ce02a81fba1dc304586f21b39", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - 
"5bb69bb7eadf1344b3cb8849855b23ddf28a1528": [ - "f2cd02c03d0169374442d9bc227c9aed178f4b20", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "48abfc41a0abf023d2037ebb2f274835e0d322d0", - "3d5922d71a370f32b7f232a596def914f67eebd1", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "36ad72488276cdf4d308cdd433a293445d2b3913": [ - "e3519e59a1dd374535d162bf400887e2be7429ab", - "0139e689add40a61c9454674edac4e93702aa5fc", - "398e4061dde8f5c80606869cebfa2031de7b5b74", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "423cbdde746d691deb23263005e587ec087d2cdc": [ - "18bd959aaa8a83b5b2192282224d700da7459857", - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "521ccc898395a2818fced22b4cf371b0e5121f94", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "43b6f7081eac15bf05906a6de8af5f2f4c03f1a5": [ - "59641c10ed7431a3cf841f308367dc2dc0281b74" - ], - "6fd2752be2673688eb3efe2a0c1f6b97f3559185": [ - "8d17234680db76f99efd22fbcb169f45d2d79d93", - "6a2d96d2a7adde6349f15c1e680b67d114e7b67c", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "03532123ccffae8d411264320e8a5ae2b6eddea0", - "d03a9b2a0e090cc9fd2ba0a457ecea35372f1018", - "e070ff286709db28312e08b52b05539debe88146", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "87e02a265606f31e65986f3c1c448a3e3a3a066e": [ - "4988b3d378b79eb8669112620baf1ff4e3e536fd", - "316206a2f89eb94ce02a81fba1dc304586f21b39", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "fb1f7ffef65341aa5aee2bb0b240d4ef51680fce": [ - "dd36ff1bbd79eb9990e3429f5050d76fa10d6a2a", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "80d0116d77beeded0c23cf48946d9d10d4faee14", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9689acb6cb760e8bc21c16f368368b37dee977f9": [ - "c879413103f8950bdd414c7f60a39bd7748c9be8", - "30c0cdc414f68211d5d0514df027cec22e005174", - "515cf674fcdced5a7d5bb156dd5fcc1f5290e79b", - "197ba7bbfdbb052b0770088815c110774220f397", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ee805f55c98920f74d0182aaf136330a97b4123f": [ - "515cf674fcdced5a7d5bb156dd5fcc1f5290e79b", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "49b499598a8864eee55ab264fc16a5bf8d2f87ef", - "0f733817e82026f7c29909a51cb4df7d2685f0e7", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - 
"d3640eb3b542eaf36fee2261f037a6bf0d8eac9c", - "0adec918885dff698acf359988ed79a543157f80", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "70c483ea8e6fc6fd128ddff951419dd8d1b763bf": [ - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "0adec918885dff698acf359988ed79a543157f80", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6a2d96d2a7adde6349f15c1e680b67d114e7b67c": [ - "73207b9fd2dcfeead7fe086cfdb097e4929a7b44", - "3ec5f0da304a606c5989de5b00e1246ee64b3e46", - "dc385646887a3669ae0ee506a263d592f4f7c7a6", - "515cf674fcdced5a7d5bb156dd5fcc1f5290e79b", - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "38e1a9c5599fc7597b7c5ffd37951ba5f528094c", - "46d64d0c1dd240f5035b1af57e738b3f70850ca2", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "0adec918885dff698acf359988ed79a543157f80", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "8ce6ad6d8a73757309d3b9f525cf15cb68e32397": [ - "2e6fa3095df1d1ed041dfb4f5a18e31d4b7bd7bb", - "142ebbf4760145f591166bde2564ac70c001e927", - "316206a2f89eb94ce02a81fba1dc304586f21b39", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e5754bb65a648f319a02d47c356df0db1e936b7f": [ - "7dc928f41e15f65f1267bd87b0fcfcc7e715cb56", - "663a41c866d49ce052801fbc88947d39764cad29", - "341bdbcfc3febef7691a97c216ad394653211095", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "aafa168f9756f42c4ff707f6577cdd2eccc62b12": [ - "dca6c3927ade6481a1ae080f5c24decbfeced1be", - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a" - ], - "4b0f05fb3b6c3fd9828a60dea0c716119f416c7a": [ - "d03a9b2a0e090cc9fd2ba0a457ecea35372f1018", - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "d53e70d834243d3d8d4b621c0c52dfec26081155", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c6693cb9be95038ff3eedc3ce1b6a061a1fc25dc": [ - "d53e70d834243d3d8d4b621c0c52dfec26081155", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ab643f5b02786fc4772b662adcdb558c557d3bf6": [ - "cbec8bf16a459b0ae38856f604a6a14cd1343477", - "4a7530bbaee7563ee244f3ffed6b706bd96f08a8", - "f4d543ff431359947bf41152ac01233b8062221f", - "70c3d5ab03a54281be91709b19e3f50a2e4be0e3", - "70ece7b4ba8f3b67f5a797daed544fb6a0b627bf", - "da3aca9d7b50da823f669c983edeb60445720fe0", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "29bd550d0ab53296790ceba31dfe0a06754bcdde", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - 
"6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ebe31c5080dda7f99e935b43e573fe0aa25c93f8": [ - "18143a4c2da37444e06feed04cc9efeb0856352d", - "94beb9f249d6d2f1c00d8edfa2db861633aee6f9", - "7ca2e9f768311f29c8abf58e5ec6acf7cf268d9a", - "47c3b8dd2c8a9326249ac98900b2c3fc71f46ab1", - "34ff1da13770908ef0bf389365cdde743d3c9db1", - "2577d053f8aab912d29b424e1f09133d83740fd2", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "7ffb212356df9980347b3d3b9910dfba75a5d0c7", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "46d64d0c1dd240f5035b1af57e738b3f70850ca2": [ - "316206a2f89eb94ce02a81fba1dc304586f21b39", - "6a483cd1cbecd66150c9bbcd01606723950281bc", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "775c439186b037c09cd9f95b9daf81d23ca21b54": [ - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "0adec918885dff698acf359988ed79a543157f80", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "89744cbaa080c82785b1cb8d54710bbbca32f8ed": [ - "b6207fe49e29c77402f8dbab052e949990949609", - "d03a9b2a0e090cc9fd2ba0a457ecea35372f1018", - "097dc73d5d422b3c09286e72d16b2561ae5fb395", - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "46d64d0c1dd240f5035b1af57e738b3f70850ca2", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "316206a2f89eb94ce02a81fba1dc304586f21b39", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "30c0cdc414f68211d5d0514df027cec22e005174": [ - "2ccac575a4899144a875a817b46e4423192a7ac5", - "ff2a0fb125e7f03428420230c6ecbeafd4cf07a8", - "dd889342b0de45f7434cdfa7543e3bd46ec824cb", - "0088c9f4d50706c7ab71efa13bcb4b42cf2058e2", - "8ce6ad6d8a73757309d3b9f525cf15cb68e32397", - "3fb0731538c59f8520a309996a0567b58965f0fe", - "7fa85f9c0fe44f1bf9e58a55f0f009296578c2f0", - "8dbb29f93292d8b1b861c322d232fe087b2ef7b1", - "ca0c955699f552e1c2fbda747bd41faf8a2513ce", - "1fb5a5298747b8c7d60f98640a543f20d42ab053", - "0ae12d63f77f40b430f17c791a5191ff5fee5086", - "6a2d96d2a7adde6349f15c1e680b67d114e7b67c", - "f2cd02c03d0169374442d9bc227c9aed178f4b20", - "57be0448d168e8d6d0b6e0d1a4405fb5fbaa1b56", - "cc0f0cb09a73f82ed44d900f5ca710bec784acc1", - "9cef5a098486aeab6ed3700c5e3d29488488d16f", - "0ea7fc93d4947d9024ccaa202987a2070683bc1f", - "197022486b2e2584302bd9b6442e44d15bf3e351", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "a55dfc1482c9fa32859d1e8e8c5813f5a22982cc", - "4f53020119eba43891d4566df28466a92229b8fb", - "dc385646887a3669ae0ee506a263d592f4f7c7a6", - "b6207fe49e29c77402f8dbab052e949990949609", - "48abfc41a0abf023d2037ebb2f274835e0d322d0", - "2e6fa3095df1d1ed041dfb4f5a18e31d4b7bd7bb", - "1d75f8de31bf47ec46fa5586056420ec8bc97e86", - "06edda0310b4ec7c5012d012349252a3a77521b6", - "34ff1da13770908ef0bf389365cdde743d3c9db1", - "d03a9b2a0e090cc9fd2ba0a457ecea35372f1018", - 
"4d17732d90440682b0500f4e209c6cc4fac20e0e", - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "711d5e8ddbb840ad31a9ffa3d38590603ba69a92", - "663a41c866d49ce052801fbc88947d39764cad29", - "e070ff286709db28312e08b52b05539debe88146", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "46d64d0c1dd240f5035b1af57e738b3f70850ca2", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "dcbf62f17dad0f4554f91c822d141fb92f78429a", - "a8fd9c1625011741f74401ff9bdc1c584e25c86d", - "316206a2f89eb94ce02a81fba1dc304586f21b39", - "4e5f7cd537a1bbcd090f9887b1b59f39a3715dba", - "5437e8adab596d7294124c0e798708e050e25321", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "d53e70d834243d3d8d4b621c0c52dfec26081155", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b13668fe9c944b8ad441edb473c218d4cf303de8": [ - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a55dfc1482c9fa32859d1e8e8c5813f5a22982cc": [ - "48abfc41a0abf023d2037ebb2f274835e0d322d0", - "34ff1da13770908ef0bf389365cdde743d3c9db1", - "515cf674fcdced5a7d5bb156dd5fcc1f5290e79b", - "e070ff286709db28312e08b52b05539debe88146", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "5437e8adab596d7294124c0e798708e050e25321", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc", - "1403e6b9adf7712c35ae56327d52fe54603b87e1" - ], - "9079997b918a4f24d55a1c70216d5e010dfd02a2": [ - "89184ab496b2a1ae31e068e628479b4cd8f4b9d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f2cd02c03d0169374442d9bc227c9aed178f4b20": [ - "0a67a5e3f4125445ed84f2db3c92429010aad68a", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "cfc12c38a4d848ff3c4225488a2c72e7d4300f4b", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "7445ca3b53cf597b1c81b347b3e954e70b71dee9": [], - "ff2a0fb125e7f03428420230c6ecbeafd4cf07a8": [ - "236445f0a3b1e30b2542e5e64616ff6a8af7e3ea", - "711d5e8ddbb840ad31a9ffa3d38590603ba69a92", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - 
"52350cfb03f4cccdbe141727334a5083bd613222": [ - "48abfc41a0abf023d2037ebb2f274835e0d322d0", - "3d5922d71a370f32b7f232a596def914f67eebd1", - "34ff1da13770908ef0bf389365cdde743d3c9db1", - "515cf674fcdced5a7d5bb156dd5fcc1f5290e79b", - "097dc73d5d422b3c09286e72d16b2561ae5fb395", - "40047a74b707743157051d38f76061ba5ff9aab4", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "6e10343767ab09dde83cf99ea3442907402a9810", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d198b0b155313afe350e91a77c3d73cffa39d2a9": [ - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "fd42031baa3fe8690a00767c2fdf52dbcf945713": [ - "3599a236f285af48782fc30b1341d13ec7320735", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ae22f7c57916562e2729a1a7f34298e4220b77a7": [ - "18143a4c2da37444e06feed04cc9efeb0856352d", - "6a2d96d2a7adde6349f15c1e680b67d114e7b67c", - "f7212245d3787c66b8dc1e9fa4bc48349cef1155", - "48abfc41a0abf023d2037ebb2f274835e0d322d0", - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "cf7368f38cc1f0861d4b35db1307776c7f3f237d": [ - "dd889342b0de45f7434cdfa7543e3bd46ec824cb", - "7fa85f9c0fe44f1bf9e58a55f0f009296578c2f0", - "70ece7b4ba8f3b67f5a797daed544fb6a0b627bf", - "da3aca9d7b50da823f669c983edeb60445720fe0", - "0ea7fc93d4947d9024ccaa202987a2070683bc1f", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "dc385646887a3669ae0ee506a263d592f4f7c7a6", - "d03a9b2a0e090cc9fd2ba0a457ecea35372f1018", - "515cf674fcdced5a7d5bb156dd5fcc1f5290e79b", - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "42e1790c7979796634d15920b4a08990e847243e", - "46d64d0c1dd240f5035b1af57e738b3f70850ca2", - "142ebbf4760145f591166bde2564ac70c001e927", - "316206a2f89eb94ce02a81fba1dc304586f21b39", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "82d93c0a9fb8bfb77330280924247638c2aed9c4": [ - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3c8444cc4e96bdbe6853b886caf032afd1ee1d20": [ - "4a7530bbaee7563ee244f3ffed6b706bd96f08a8", - "3099d6f4965b4d73aa1e2b2880522ec89ed2dc0a", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "b21670e8061a06ab97e7d6052c9345a326e84ff8", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2ccac575a4899144a875a817b46e4423192a7ac5": [ - "b17cc18e4130505b939f7d527082eb6be2a7fd5b", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - 
"0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d4fba10db7b4c8912cea3aa9a7fbdeb1587f1092": [ - "f32fb177deec417b825eb7b2b64fdf08357ecede", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ff5eed73dca84fcc7b98142055c35dd7b8724c80": [ - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b5baedd5b7c270903e6861bebbfda81b10d59419": [ - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "e194f641c554cb00cdd4bd6993f14c2dff8c3c03", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f0676c081f12c9395cd0e920d137a90a9ceb2c4a": [ - "2f2a430ba6c93bcfaf4818316ff8a27b1e034b1a", - "84cde4abc973e3bb508f03506a7fa946222f4e6b", - "6e41a4cbb34c4d403efb73d74f5be5556b1f13d6", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "74acec7abaa6193107c6e3dfc630758ad9474af5": [ - "27d6d02e24de259e3aa38e556a81f89ec505816e", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2" - ], - "fa4ea8b773ffd6706ba5cf427f9671f81b04dcdd": [ - "7dc928f41e15f65f1267bd87b0fcfcc7e715cb56", - "30c0cdc414f68211d5d0514df027cec22e005174", - "6052486bc9144dc1730c12bf35323af3792a1fd0", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a3cfcd731331dc81884be01e28a617fcf77b2fec": [ - "6a2d96d2a7adde6349f15c1e680b67d114e7b67c", - "3ec5f0da304a606c5989de5b00e1246ee64b3e46", - "dc385646887a3669ae0ee506a263d592f4f7c7a6", - "29bd550d0ab53296790ceba31dfe0a06754bcdde", - "d03a9b2a0e090cc9fd2ba0a457ecea35372f1018", - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "dcbf62f17dad0f4554f91c822d141fb92f78429a", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "d53e70d834243d3d8d4b621c0c52dfec26081155", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "380ba3def262f92e44c95ea9ff0eac755db34c59": [ - "dc385646887a3669ae0ee506a263d592f4f7c7a6", - "0942bd8fad71282994ff4e9a779c09745da68edc", - "d03a9b2a0e090cc9fd2ba0a457ecea35372f1018", - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "316206a2f89eb94ce02a81fba1dc304586f21b39", - "d53e70d834243d3d8d4b621c0c52dfec26081155", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "605c32428861eb26b8631617b8f6c97a850d6a04", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "2ee03e28208a9310a9be4032c2b04ebdddb83cc7", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - 
"a7277d5aff39ca6d2c2c9880fc4e75d9c3ca0e3b": [ - "cbec8bf16a459b0ae38856f604a6a14cd1343477", - "4c60ce3e5116037390b3b92866f43df83f3e9c6f", - "4a7530bbaee7563ee244f3ffed6b706bd96f08a8", - "70c3d5ab03a54281be91709b19e3f50a2e4be0e3", - "29bd550d0ab53296790ceba31dfe0a06754bcdde", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "bd2bbaa226be8fe6564e878e262942dd54c4fdad": [ - "28a7ced384549eaae74ea9ad3ee21189a0625afe", - "04526876688e5a56106629229309fae272da1c79", - "286c3587f2616839286748461cbc90261ea49caf", - "dd889342b0de45f7434cdfa7543e3bd46ec824cb", - "7fa85f9c0fe44f1bf9e58a55f0f009296578c2f0", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "4f53020119eba43891d4566df28466a92229b8fb", - "3841234dd49250c4fcbba79eed6593d3b57932c1", - "29bd550d0ab53296790ceba31dfe0a06754bcdde", - "316206a2f89eb94ce02a81fba1dc304586f21b39", - "6a483cd1cbecd66150c9bbcd01606723950281bc", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0577ca1b6f8d9cddbad7f76ea7f82dc71b5af043": [ - "0ae12d63f77f40b430f17c791a5191ff5fee5086", - "775c439186b037c09cd9f95b9daf81d23ca21b54", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "03613effe356d2a8815f899027d6a5868822fd93": [ - "4a7530bbaee7563ee244f3ffed6b706bd96f08a8", - "70c3d5ab03a54281be91709b19e3f50a2e4be0e3", - "dca6c3927ade6481a1ae080f5c24decbfeced1be", - "3ec5f0da304a606c5989de5b00e1246ee64b3e46", - "3436ff7a1dd4c6547ba78968d3eec2545a6dccb9", - "0ea7fc93d4947d9024ccaa202987a2070683bc1f", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "dc385646887a3669ae0ee506a263d592f4f7c7a6", - "1358f90705b05cdb20ebe6799b02196205e7e9f0", - "3fc3460c4554a28e489a0ea6ef067b79b7d301d9", - "b6207fe49e29c77402f8dbab052e949990949609", - "34bc28087e1d6f047e2736791f79d769293f447c", - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "46d64d0c1dd240f5035b1af57e738b3f70850ca2", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "316206a2f89eb94ce02a81fba1dc304586f21b39", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "76e287f89bcfa45389293c007e86c46f878a7f9e": [ - "0cfdd655100055f234fd23ebecd915504b8e00e3", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "0adec918885dff698acf359988ed79a543157f80", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - 
"6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "453277762bc0e794667c97880110a0c4feabc374": [ - "7fa85f9c0fe44f1bf9e58a55f0f009296578c2f0", - "48abfc41a0abf023d2037ebb2f274835e0d322d0", - "6052486bc9144dc1730c12bf35323af3792a1fd0", - "d03a9b2a0e090cc9fd2ba0a457ecea35372f1018", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "316206a2f89eb94ce02a81fba1dc304586f21b39", - "d53e70d834243d3d8d4b621c0c52dfec26081155", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "86d0d3855f94105e25d81cab9f3d269c6062a9c4": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b6207fe49e29c77402f8dbab052e949990949609": [ - "b115c1e1e9e51f8ad7d47b745bc04e29a654b84d", - "d03a9b2a0e090cc9fd2ba0a457ecea35372f1018", - "cc7462b76d5e8acb28e61ea1f57e17905540a415", - "46d64d0c1dd240f5035b1af57e738b3f70850ca2", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "355b66a65aee97822eb7404183ee72b18cb648de", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a994b90a982e622dfb473a9c7a51b1993f12f511": [ - "e66f0f822d4c4853b39b27daaafa2993005fd55e", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6e10343767ab09dde83cf99ea3442907402a9810": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "53c0abe83fe9b4fdaf2208295d8504fcf5241694", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "8add69e155596bc128df70e1ddd5a41c68698399": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "b645e706651391eca1f692e7f560051c21b3dea4", - "30873c32db5a219a58be928d5692cce48be1d3a0", - "c2a79e2a65b721d4de5f6d4806323174b9f8f393", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "38e1a9c5599fc7597b7c5ffd37951ba5f528094c": [ - "53c0abe83fe9b4fdaf2208295d8504fcf5241694", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e05483a41e8002e7024d39457e55a3fe533f5835": [ - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "4988b3d378b79eb8669112620baf1ff4e3e536fd", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4586258d63ccbef4e30f0adc22d56d112afa3be7": [], - 
"05e003a34148d4663734d3f39deefa0979d2a0e6": [ - "6052486bc9144dc1730c12bf35323af3792a1fd0", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "cc0f0cb09a73f82ed44d900f5ca710bec784acc1": [ - "e66f0f822d4c4853b39b27daaafa2993005fd55e", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0139e689add40a61c9454674edac4e93702aa5fc": [ - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "4d17732d90440682b0500f4e209c6cc4fac20e0e", - "663a41c866d49ce052801fbc88947d39764cad29", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "341bdbcfc3febef7691a97c216ad394653211095", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6001dce1c8f63350263e013e0e6ff69816f0a9af": [ - "f2cd02c03d0169374442d9bc227c9aed178f4b20", - "f208ea909fa7f54fea82def9a92fd81dfc758c39", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "9ffefdf1fcd780cb71450b0a7a29247c66aa87be", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "341bdbcfc3febef7691a97c216ad394653211095", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ca0c955699f552e1c2fbda747bd41faf8a2513ce": [ - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "7fa85f9c0fe44f1bf9e58a55f0f009296578c2f0": [ - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "34bc28087e1d6f047e2736791f79d769293f447c", - "316206a2f89eb94ce02a81fba1dc304586f21b39", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c9f83c0fa1425d61c5b16aadc4492ad53e4fbda2": [ - "0ea7fc93d4947d9024ccaa202987a2070683bc1f", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a7a2161a25f3e4e1eaadcce8e63242211d8fcdb7": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "cfc12c38a4d848ff3c4225488a2c72e7d4300f4b", - "b159dffadb69940e14693e812bdaa32e3957717f", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "0a2ac054c533314c0659f3b139388527df0d42f3", - "1dd344ce28f1e5a078f9d8396b5078388e555d99", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4280acb444242ab708e36c817d4d6682dad70373": [ - "1d75f8de31bf47ec46fa5586056420ec8bc97e86", - "6052486bc9144dc1730c12bf35323af3792a1fd0", - "8add69e155596bc128df70e1ddd5a41c68698399" - ], - "c7a3f9cc61cfafdc307f8ae24430b6b1121f9b2c": [ - "05e003a34148d4663734d3f39deefa0979d2a0e6", - 
"ac7771c332da42b29a913b116bd6ef622cbf89cf", - "0d42221038c05cee8443c5b5af838505ee137dc3", - "b115c1e1e9e51f8ad7d47b745bc04e29a654b84d", - "458147b5f7242c998ec4f33798a59b7c48867329", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "370cea8b4220917f45a69358c0303df71f5063c7", - "c03fa01fbb9c77fe3d10609ba5f1dee33a723867", - "166b64f2ae8e52f5779682fab756cbd617a6e74b", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "dd889342b0de45f7434cdfa7543e3bd46ec824cb": [ - "7fa85f9c0fe44f1bf9e58a55f0f009296578c2f0", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "711d5e8ddbb840ad31a9ffa3d38590603ba69a92", - "42e1790c7979796634d15920b4a08990e847243e", - "9ffefdf1fcd780cb71450b0a7a29247c66aa87be", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "341bdbcfc3febef7691a97c216ad394653211095", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ad454e24bd32408559512b4bac4cd5237794210f": [ - "bf8491bef353df126e2306ad2fe4b898697b906a", - "197ba7bbfdbb052b0770088815c110774220f397", - "0783c214623c18f6a8ad96b8eaf4a67a382e68ee", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "0adec918885dff698acf359988ed79a543157f80", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5fe9c0b37d6e3231fda9fea581b3dc360f99078e": [ - "bf5fbd690f24f873df86d1b0a06579cf42f7dc36", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "20d7965c0b282a0cd7f990e435d0f6bc9535bbc6": [ - "80785017029cab501fcdb90b98985cd2b36e1fb8", - "5b37aa7ee3f517380f92fdac67753adbaa58514f", - "34ff1da13770908ef0bf389365cdde743d3c9db1", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "28a7ced384549eaae74ea9ad3ee21189a0625afe": [ - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "d53e70d834243d3d8d4b621c0c52dfec26081155", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4c60ce3e5116037390b3b92866f43df83f3e9c6f": [ - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "36f7bc27c9a37eb337c35df4ae86f148e13d4e9a": [ - "3fb0731538c59f8520a309996a0567b58965f0fe", - "7fa85f9c0fe44f1bf9e58a55f0f009296578c2f0", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "30c0cdc414f68211d5d0514df027cec22e005174", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - 
"787f3c102342efb14e3691d310e77ab3c07b5343": [ - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "5437e8adab596d7294124c0e798708e050e25321", - "341bdbcfc3febef7691a97c216ad394653211095", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9806cac2d36feda043dcdfe0f4de2608127da27c": [ - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "097dc73d5d422b3c09286e72d16b2561ae5fb395", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "eea5d85160b256c20d314c2bb34a831c6953facc": [ - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9" - ], - "c04cad3c37697a23682019e78975ed1b4eacac2d": [ - "18143a4c2da37444e06feed04cc9efeb0856352d", - "6a2d96d2a7adde6349f15c1e680b67d114e7b67c", - "f7212245d3787c66b8dc1e9fa4bc48349cef1155", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "0942bd8fad71282994ff4e9a779c09745da68edc", - "515cf674fcdced5a7d5bb156dd5fcc1f5290e79b", - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "38e1a9c5599fc7597b7c5ffd37951ba5f528094c", - "42e1790c7979796634d15920b4a08990e847243e", - "316206a2f89eb94ce02a81fba1dc304586f21b39", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "355b66a65aee97822eb7404183ee72b18cb648de", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a5d27bf7a2155d4ca016565a78b52ee90f81624c": [ - "d47524cd5c3c4b57af2e5a29f6f91c420310f236", - "3099d6f4965b4d73aa1e2b2880522ec89ed2dc0a", - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "c5b2243baf88a00db2d4e4f9edb33cde08eb153f", - "1d75f8de31bf47ec46fa5586056420ec8bc97e86", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d4c1517ca5e550ce43515b54e475082fba80bd56": [ - "f4d543ff431359947bf41152ac01233b8062221f", - "70c3d5ab03a54281be91709b19e3f50a2e4be0e3", - "f32fb177deec417b825eb7b2b64fdf08357ecede", - "0ea7fc93d4947d9024ccaa202987a2070683bc1f", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "0adec918885dff698acf359988ed79a543157f80", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a44dd81e42c690f6b0fe86f6142722491ae36278": [ - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "357613ea0e90bd41fb942fd65f39498e71e2dbc3", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "341bdbcfc3febef7691a97c216ad394653211095", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - 
"10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "419a7e9df13a3a03234aba0f05f48291261b8e59": [ - "4c60ce3e5116037390b3b92866f43df83f3e9c6f", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "375a571174ea59b1f4aa62ad2619e9593fc03436": [ - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a2412fdebd53bd25476f834ae2b8aa8cb44cb1e1": [ - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4e5f7cd537a1bbcd090f9887b1b59f39a3715dba": [ - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "80d0116d77beeded0c23cf48946d9d10d4faee14", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e66f0f822d4c4853b39b27daaafa2993005fd55e": [ - "40047a74b707743157051d38f76061ba5ff9aab4", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "53c0abe83fe9b4fdaf2208295d8504fcf5241694", - "80d0116d77beeded0c23cf48946d9d10d4faee14", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9cd329e3b86e6869e73a91c467459b1947655b07": [ - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "b2542a738b75ee9b7ce1a13d8b78f9095d212412", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "68f1b94bbc900d2b5c60192a7e9eea4b046dd18a": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "27f0ce04403158b61328716ae4aaab5840c0d123": [ - "515cf674fcdced5a7d5bb156dd5fcc1f5290e79b", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "dc385646887a3669ae0ee506a263d592f4f7c7a6": [ - "48abfc41a0abf023d2037ebb2f274835e0d322d0", - "89744cbaa080c82785b1cb8d54710bbbca32f8ed", - "34ff1da13770908ef0bf389365cdde743d3c9db1", - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "7f3bc301ae0e2bbb78a0d42f074865e87d908f9a", - "38e1a9c5599fc7597b7c5ffd37951ba5f528094c", - "46d64d0c1dd240f5035b1af57e738b3f70850ca2", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - 
"f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4f53020119eba43891d4566df28466a92229b8fb": [ - "0942bd8fad71282994ff4e9a779c09745da68edc", - "4988b3d378b79eb8669112620baf1ff4e3e536fd", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "47c3b8dd2c8a9326249ac98900b2c3fc71f46ab1": [ - "a4c0144062d8e36485bad438968894cbf49ab998", - "68ad0ed1e21fd9fd7b2bd58769d8bec88c996b01", - "7dc928f41e15f65f1267bd87b0fcfcc7e715cb56", - "b115c1e1e9e51f8ad7d47b745bc04e29a654b84d", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "5437e8adab596d7294124c0e798708e050e25321", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2da3a84e72a2973504cd9cf1c0a377f5a5a91f09": [ - "0d42221038c05cee8443c5b5af838505ee137dc3", - "3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "663a41c866d49ce052801fbc88947d39764cad29", - "e070ff286709db28312e08b52b05539debe88146", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "370cea8b4220917f45a69358c0303df71f5063c7", - "b17cc18e4130505b939f7d527082eb6be2a7fd5b", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "93d6fa92d60938b5bd0e405e159832b91332f169": [ - "293499319bdd460cb3fca1f0f5eb330e64bf3ff9", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "197ba7bbfdbb052b0770088815c110774220f397", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "70ece7b4ba8f3b67f5a797daed544fb6a0b627bf": [ - "0ea7fc93d4947d9024ccaa202987a2070683bc1f", - "da3aca9d7b50da823f669c983edeb60445720fe0", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f29b145e1facc3986cdbe77b2dfeb5224088378a": [ - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a671866bad8b298fbbfb00be004df0b6b7daff26": [], - "742a4098ddc25fd3c53d4ebdfc845d1e5d2bbff7": [ - "c879413103f8950bdd414c7f60a39bd7748c9be8", - "30c0cdc414f68211d5d0514df027cec22e005174", - "769f8b6d73fcb5b5013ae2c1c48b94c801c88ba3", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "97992c13baa6185c03d9e672f53185bc59822596": [ - "293499319bdd460cb3fca1f0f5eb330e64bf3ff9", - 
"64ce6ef1f5cf227bf2bf917c87273386ae16256f", - "3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e", - "515cf674fcdced5a7d5bb156dd5fcc1f5290e79b", - "197ba7bbfdbb052b0770088815c110774220f397", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "33727cfa2710e9f502480b7eb9ac1925cb3bc06b": [ - "6052486bc9144dc1730c12bf35323af3792a1fd0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2d765d953efd738034782f9afdb311e3ba015edd": [ - "cc0f0cb09a73f82ed44d900f5ca710bec784acc1", - "34ff1da13770908ef0bf389365cdde743d3c9db1", - "38e1a9c5599fc7597b7c5ffd37951ba5f528094c", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "8793066d170b6a742c4fcdb478d4f100c1e4bf17": [ - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "e070ff286709db28312e08b52b05539debe88146", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d4d9274a8968b97c95d94dd14c41c860a1ac9a1f": [], - "daf9e24adbba3d1aead91cbac26502d3043db069": [ - "bf8491bef353df126e2306ad2fe4b898697b906a", - "711d5e8ddbb840ad31a9ffa3d38590603ba69a92", - "171412ef2410fad3f9a09238ad9e272c4e31aed4", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "b2542a738b75ee9b7ce1a13d8b78f9095d212412", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "de0593223973eccfee04d598a68bc55784c7fc17": [ - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "663a41c866d49ce052801fbc88947d39764cad29", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e3519e59a1dd374535d162bf400887e2be7429ab": [ - "0139e689add40a61c9454674edac4e93702aa5fc", - "0100785773b8217c44606ab260e3212f93b0a4fd", - "515cf674fcdced5a7d5bb156dd5fcc1f5290e79b", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "341bdbcfc3febef7691a97c216ad394653211095", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "429eabec241809edcf70cb437d56501394d99253": [ - "d03a9b2a0e090cc9fd2ba0a457ecea35372f1018", - "515cf674fcdced5a7d5bb156dd5fcc1f5290e79b", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e81466aab95dfba46b27f5d24dd3d2860cad45cd": [ - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "85b5068d3e1364b44ec9f46b0930b521b4089df6": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "763eb8d43e2f8a5d9da26269a4985efd1c099a5b": [ - "3ec5f0da304a606c5989de5b00e1246ee64b3e46", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5d77cd554909a12e8ae6660b24e5903074c56ba5": [ - "186e96fe036927182ec963b63f9dd7f8ff650158", - "12c826f4195da172b212a529f8fcf10cc79e35da", - "f64e49d76048c902cc02e8ae27dcd4ac0dbcb97f" - ], - "a460d28507b63b7265461cd62badd3dc095f600f": [ - "7a25155364476839b6d1fc0653cd8611327ab9ba", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "823fb3163a5600b5be957fc8337a9f7cdd177fef": [ - "34bc28087e1d6f047e2736791f79d769293f447c" - ], - "3ed538484f8ded6b2ffd29bcd19972504909cebf": [ - "9689acb6cb760e8bc21c16f368368b37dee977f9", - "c879413103f8950bdd414c7f60a39bd7748c9be8", - "515cf674fcdced5a7d5bb156dd5fcc1f5290e79b", - 
"197ba7bbfdbb052b0770088815c110774220f397" - ], - "72986cada4a8269041f1b153b08c51f8707afd91": [], - "af705d648b5b16daa3dcc593bc593f2574d76c07": [ - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "f27f6d1d521d189e78f5623098ced0deea613d33", - "ac7771c332da42b29a913b116bd6ef622cbf89cf", - "0d42221038c05cee8443c5b5af838505ee137dc3", - "48abfc41a0abf023d2037ebb2f274835e0d322d0", - "b115c1e1e9e51f8ad7d47b745bc04e29a654b84d", - "34ff1da13770908ef0bf389365cdde743d3c9db1", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "663a41c866d49ce052801fbc88947d39764cad29", - "e3a9af420cd2c0c8241856da92374027fefb87be", - "6e10343767ab09dde83cf99ea3442907402a9810", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3cebb93c399db7e1434741338b0a24db19786b15": [ - "50b0c6ee2b3d53ba5af69d6c00b5d60888a9026f", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "0adec918885dff698acf359988ed79a543157f80", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a" - ], - "da061a6e0016d6b625a8e86d64a797ca8ddb92a5": [ - "e070ff286709db28312e08b52b05539debe88146", - "41531594d7e0f3b2e138ae43e0a0f6e24a9b014c", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3" - ], - "ba3a814f8d75a2eb6b326aaada829a19333b2b31": [ - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "8568d7bd9dfb5ba0b91940b938b44a88fafdf95b": [ - "3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e", - "b115c1e1e9e51f8ad7d47b745bc04e29a654b84d", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "95016925f9401d7b983d1aa95939ccc2e6944eb7": [ - "bf8491bef353df126e2306ad2fe4b898697b906a", - "30873c32db5a219a58be928d5692cce48be1d3a0", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "880bf72f9e2c6a99a759d3be702046e57ce2b423": [ - "02540ae926814f4b7972d3fa4dd33932fdc4b58b" - ], - "fca92fe287c44c9ec79ca1f2762b0bf2e5e8df2b": [ - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "bda605928d6ebe4db906e69ab5d343df75918727", - "7dc928f41e15f65f1267bd87b0fcfcc7e715cb56", - "69619a2a47faee7a29ec596db13172e2a42ff921", - "b115c1e1e9e51f8ad7d47b745bc04e29a654b84d", - "663a41c866d49ce052801fbc88947d39764cad29", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "40047a74b707743157051d38f76061ba5ff9aab4", - "142ebbf4760145f591166bde2564ac70c001e927", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f77c114553bc851aab4fff535570a9e83a18227e": [], - "64f6efeed573a0862f4cf66897f0ec34ad7fc95d": [ - "e194f641c554cb00cdd4bd6993f14c2dff8c3c03", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "fb2719aa3245a1757144d273be0a9b3a96d43a3f": [ - "1fb5a5298747b8c7d60f98640a543f20d42ab053", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - 
"3b16a709a5b18e52b0b6741cbc3c0e68a03ecd8e", - "9689acb6cb760e8bc21c16f368368b37dee977f9", - "bad6fa523ecf782c837a2eecaaffa4e1f7477c24", - "c879413103f8950bdd414c7f60a39bd7748c9be8", - "30c0cdc414f68211d5d0514df027cec22e005174", - "515cf674fcdced5a7d5bb156dd5fcc1f5290e79b", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6049f92e687e5db4ea509a83df4372099c516fd8": [ - "f4cba0db34aa0c389cec267ca1f3ba5255ea2645" - ], - "f8b9929fde93c170fd284b17ea812a9031be8858": [ - "30c0cdc414f68211d5d0514df027cec22e005174", - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3c89fdc20c1aa8eda4cc49dffc1f806c238c8077": [ - "e86009d9f9b1cdf083a48d087552bc4153784451", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "eaf01348d644f95d338ac61f5f26db5cb2bc1871": [ - "1a55d16c14587edda62dc9c9ff09e0b531dd169c", - "b6499bcc10d4a70c3ca8b84995270cfd0d29de4c", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "815736291d5de45958a5f2ddb692db2aab8e43e5": [ - "f9838a3be5c94bb2674a0e224de349b50e18f3c4" - ], - "ee1c4b8549d6d94ddd6b25741eaeac27931b72d3": [ - "48f4f65df8eb4c435dcb14851f876c270ce2cfd5", - "80c0416048614be75362c2c332d22dd1d2b22f65", - "cb5cfc2dd4965262d2ce302362b1f2dbfa4a5419", - "4988b3d378b79eb8669112620baf1ff4e3e536fd", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "b21670e8061a06ab97e7d6052c9345a326e84ff8", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "914a0f5e7eb98842f220a5082dba4f9382086f27": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5ca0a9e4c0277e379740f889f00c79ddf507569c": [ - "b4170009de40c1c46adea6a314734434ecd4b0dc", - "4161ad2d2495d8af1d62dc5e71882bde642cd1c1", - "9689acb6cb760e8bc21c16f368368b37dee977f9", - "bad6fa523ecf782c837a2eecaaffa4e1f7477c24", - "c879413103f8950bdd414c7f60a39bd7748c9be8", - "197ba7bbfdbb052b0770088815c110774220f397", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4f9ebbb53e93fa5821b599d00e9beff6322821ef": [ - "e3519e59a1dd374535d162bf400887e2be7429ab", - "286c3587f2616839286748461cbc90261ea49caf", - "a55dfc1482c9fa32859d1e8e8c5813f5a22982cc", - "4d17732d90440682b0500f4e209c6cc4fac20e0e", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "866dfa1e5487b871894cee44605cd50107916a7a": [ - "214fbadc57e954e325dc055fee5ac0e224dfde11", - "c589ddc6c6fb07189af7c1212f6eb15c5ff72cde", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2d12f95dd521101f3092cc3bb04e7e88aba8f562": [ - "cb6cc7d28d06a0d7c0d3f0d7ee551bbc86dbc3aa", - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "4161ad2d2495d8af1d62dc5e71882bde642cd1c1", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69" - ], - "c234381686e782987a556e44aed061aaedd8c2de": [ - "3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "dfab0f3ee6f47e36cccee145794cd117773e6f73": [ - "bf8491bef353df126e2306ad2fe4b898697b906a", - "e070ff286709db28312e08b52b05539debe88146", - "5437e8adab596d7294124c0e798708e050e25321", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d25abd12a3453180900bd6194a6f30f6ed893f94": [ - "3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - 
"59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "62e633f4b5cf8bc573e496602d3aa6e5919bbe61": [ - "d47524cd5c3c4b57af2e5a29f6f91c420310f236", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "57d39574c14565b377e522f7afeb1c9dc19ac301": [ - "aade40af0d85b0b4fe15c97f6222d5c2e4d6d9b3", - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820" - ], - "f67b030ae52797622413a433fcdae0367aef2a4c": [ - "a7a2161a25f3e4e1eaadcce8e63242211d8fcdb7", - "f2cd02c03d0169374442d9bc227c9aed178f4b20", - "f4cba0db34aa0c389cec267ca1f3ba5255ea2645", - "4e5f7cd537a1bbcd090f9887b1b59f39a3715dba", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e47d08f5bb01acb56ab998e987d44d9d85dee1ba": [ - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6cfcdf82ee19afb9cb5cfa9452d195bd03e3c3be": [ - "04526876688e5a56106629229309fae272da1c79", - "68ad0ed1e21fd9fd7b2bd58769d8bec88c996b01", - "327e0290fd71609bfc1a30478a95f690668fe622", - "2d765d953efd738034782f9afdb311e3ba015edd", - "cc0f0cb09a73f82ed44d900f5ca710bec784acc1", - "34ff1da13770908ef0bf389365cdde743d3c9db1", - "38e1a9c5599fc7597b7c5ffd37951ba5f528094c", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6b135e922a0c673aeb0b05c5aeecdb6c794791c6": [ - "3e4afde5a9de2c1801da99b8aff5ae05923f256b", - "3e30a7ac4886b28eb50151f58e14a1d698cccd0e", - "1abfc211793c683972ded8d3268475e3ee7a88b0", - "3ec5f0da304a606c5989de5b00e1246ee64b3e46", - "29bd550d0ab53296790ceba31dfe0a06754bcdde", - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "4e5f7cd537a1bbcd090f9887b1b59f39a3715dba", - "5d49c7401c5f2337c4cc88d243ae39ed659afe64", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8" - ], - "3f13324da3856cba6f66ccbb81faa0fed4e78b28": [ - "08b85bce712168998004ee80ce4e475390413c74", - "cc43306e22dbfd5bc35251ab8c8ba37e4fc2a1b3", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6084f6c90b29ed7fce630c3b8a2e0e3a8d0289b1": [ - "90022c80ea85a41d8d1a7765fd95824bf3a9830f", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3ba5024cd95c1baefdd219c4e709653776884cd1": [ - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2cbdd4ef5a14dbf3446e1d5b0ecf4b222ded820d": [ - "90022c80ea85a41d8d1a7765fd95824bf3a9830f", - "b645e706651391eca1f692e7f560051c21b3dea4", - "53c0abe83fe9b4fdaf2208295d8504fcf5241694", - "30873c32db5a219a58be928d5692cce48be1d3a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4bee662733b3299b54fd65b52fd395efdcb8097d": [ - "f2cd02c03d0169374442d9bc227c9aed178f4b20", - "1ddeb500dd88d4b860b32bec1e2a85f8a53910d6", - "f4cba0db34aa0c389cec267ca1f3ba5255ea2645", - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5fe9f91ad322cc94047b59f9d28e13e6f18fbd48": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c2b13c2a04a9aeee1b77371ff1708f494989fca5": [ - 
"e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5d49c7401c5f2337c4cc88d243ae39ed659afe64", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "468defe97b8b590bf88b44db60e2e3390cc74c18": [ - "db4cf9f6a653d5c15973e836c800ea47743251ae", - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "705e49afd92130f2bc1e0d4d0b1f6cb14e88803f", - "e7ad08848d5d7c5c47673ffe0da06af443643bda" - ], - "fd4a67189e64d9ec5b1f6466b4e4bdfb6b88ab2d": [ - "020e473d8c987dcfb03fcfffeb87b17812447031", - "89744cbaa080c82785b1cb8d54710bbbca32f8ed", - "46d64d0c1dd240f5035b1af57e738b3f70850ca2", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e314d182fd9d35a05870b38a56ee38eb3149b47d": [ - "705e49afd92130f2bc1e0d4d0b1f6cb14e88803f", - "30c0cdc414f68211d5d0514df027cec22e005174", - "5d49c7401c5f2337c4cc88d243ae39ed659afe64", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ed8306efe54cd507f299f3f7e8fb8b5cd9ba2cd4": [ - "1fb5a5298747b8c7d60f98640a543f20d42ab053", - "0a67a5e3f4125445ed84f2db3c92429010aad68a", - "ddc9aeac18638575bbb90ede4c6829ec15c2947e", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc", - "f1d791b9dd32577609ddd48e6001e46f1780062c" - ], - "c3068c7947ca290496c4d0280904686ba0b2903f": [ - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "711d5e8ddbb840ad31a9ffa3d38590603ba69a92", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "316206a2f89eb94ce02a81fba1dc304586f21b39", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "7962e99f184ce45add505baecfb02fb1450c31b8": [ - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a793e007fab0d675668f11ae98761673ed4ee879": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9eae005a2810f6b8fffc04109542fd798f79a5ac": [ - "8793066d170b6a742c4fcdb478d4f100c1e4bf17", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1ab033093df6a92fd93d52d09fb6322bb6e306a5": [ - "219e33388cacbcbf4b063ebf60c0bee48936fe37", - "a6a0963fcf21ed47a2616ca3980f8f4f21e6d5ad", - "5f5253fb15ac382e96ade0335baf1cfaa240fb1d", - "3d5922d71a370f32b7f232a596def914f67eebd1", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4c7bb94b7e3764d71a567ae38cc2673b74d8544b": [ - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "1cf50a2e906dc89463d7eab827de9a3c371e7c53", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a" - ], - "588ee5b7f2441754cdf3f9adbbe50d03d9c21b7a": [], - "e1770838ec0667cad48729a81764ed9964d6a8e6": [ - "663a41c866d49ce052801fbc88947d39764cad29", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5c6e22b8ed1c4cf224f19c3c14560a9a8a12ea6a": [ - "14876d2d95e8ede7910465d836fe1f524109c7b1", - "99070fb6df9e8d11e30f7aaefcc9f0b0c5a73789", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b0a0f36abc5a63ae886b11cb3eff8d40e6f746f0": [ - "28a7ced384549eaae74ea9ad3ee21189a0625afe", - "f27f6d1d521d189e78f5623098ced0deea613d33", - 
"1358f90705b05cdb20ebe6799b02196205e7e9f0", - "663a41c866d49ce052801fbc88947d39764cad29", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "8e3e7deb95d2a984cba615ec847e64f354626cdf": [ - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "c03fa01fbb9c77fe3d10609ba5f1dee33a723867", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a32a461ba94367157bd15c1bfa03a37a9449e5da": [ - "bda605928d6ebe4db906e69ab5d343df75918727", - "748a2700ec11f51560a69ec05c67ca9f97014be7", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "0f733817e82026f7c29909a51cb4df7d2685f0e7", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "7f48fbb13c5a31529ab4bc2bf53adeb4bd213825": [ - "3e4afde5a9de2c1801da99b8aff5ae05923f256b", - "36f7bc27c9a37eb337c35df4ae86f148e13d4e9a", - "4c60ce3e5116037390b3b92866f43df83f3e9c6f", - "8f936af93fb2b52b9678ff8f17c1ebe8de236a88", - "0ae12d63f77f40b430f17c791a5191ff5fee5086", - "7dc928f41e15f65f1267bd87b0fcfcc7e715cb56", - "da3aca9d7b50da823f669c983edeb60445720fe0", - "0ea7fc93d4947d9024ccaa202987a2070683bc1f", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "3599a236f285af48782fc30b1341d13ec7320735", - "30c0cdc414f68211d5d0514df027cec22e005174", - "663a41c866d49ce052801fbc88947d39764cad29", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "316206a2f89eb94ce02a81fba1dc304586f21b39", - "b21670e8061a06ab97e7d6052c9345a326e84ff8", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "bac133a9cb14eadea94c55ec15ea3bb866bf6c03": [ - "c879413103f8950bdd414c7f60a39bd7748c9be8", - "197ba7bbfdbb052b0770088815c110774220f397", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6974eeca4aa3e2fc904364ac7b3eca9cbdf9ca7c": [], - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617": [ - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc", - "eaa20a9090861a778d754a6d50dbb0396ae09e80" - ], - "80d0116d77beeded0c23cf48946d9d10d4faee14": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "341bdbcfc3febef7691a97c216ad394653211095": [ - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5217f9d7ffd515424dfda13db8c58b2c2def90ee": [ - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c4fff3bdbbdcff01545f1f53ec6290b3556c41ac": [ - "46d64d0c1dd240f5035b1af57e738b3f70850ca2", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "15cbccf71d1cd3f886ae9b0f3cc001d14577d264", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "42e1790c7979796634d15920b4a08990e847243e": [ - 
"6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a08086808517c1f1274a0df592cab1528797cc79": [ - "17170575aa8b4fa4e3eef5d366ada706a94dd836", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a37153a5f42ee2951ad8a2c9ec86b52c4bf81c77": [ - "a8fd9c1625011741f74401ff9bdc1c584e25c86d", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "53c0abe83fe9b4fdaf2208295d8504fcf5241694", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ad2149957cd288a5626adcce48f9981a2ab59184": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9cffc161896ce2b8d1a3083ab4f293bc166134ce": [ - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "341bdbcfc3febef7691a97c216ad394653211095", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "0adec918885dff698acf359988ed79a543157f80", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ddfa4bc33d96ff98275718a68a85f68845b73c1b": [ - "e070ff286709db28312e08b52b05539debe88146", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5bb3bd2ec1e99b11a84ccd0e4dce4bdb2a776a5e": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "bf5fbd690f24f873df86d1b0a06579cf42f7dc36": [ - "a38e0f993e4805ba8a9beae4c275c91ffcec01df" - ], - "70d89d380ca5d20564e1dd8ed2f4c59f5c7b3656": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "980e55d9226cac302d0fae7732da4e67b8bc952c": [ - "6a483cd1cbecd66150c9bbcd01606723950281bc", - "e1227daa4877599e13de41a5207a222e1b197456", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d293d19168200aefc4dc9eaadeecc6c177114101": [ - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "89184ab496b2a1ae31e068e628479b4cd8f4b9d2": [ - "6052486bc9144dc1730c12bf35323af3792a1fd0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c5b2243baf88a00db2d4e4f9edb33cde08eb153f": [ - "c03fa01fbb9c77fe3d10609ba5f1dee33a723867", - "a8fd9c1625011741f74401ff9bdc1c584e25c86d", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "80d0116d77beeded0c23cf48946d9d10d4faee14", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "293499319bdd460cb3fca1f0f5eb330e64bf3ff9": [ - "cd77ea482d9245f3fcaeb670261a00c3fb5cabbd", - "4161ad2d2495d8af1d62dc5e71882bde642cd1c1", - "30c0cdc414f68211d5d0514df027cec22e005174", - "197ba7bbfdbb052b0770088815c110774220f397", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3ec5f0da304a606c5989de5b00e1246ee64b3e46": [ - "398e4061dde8f5c80606869cebfa2031de7b5b74", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - 
"6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9a31b2ce43fe198ab1fd046ca4ec70fded154aee": [ - "34ff1da13770908ef0bf389365cdde743d3c9db1", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "53661ff6fdbfb8557c5b19895fad151792c62da7", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "b645e706651391eca1f692e7f560051c21b3dea4", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "0adec918885dff698acf359988ed79a543157f80", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5b4fbc167ac3d0045e362282745d298af63ae664": [ - "3599a236f285af48782fc30b1341d13ec7320735", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "48f4f65df8eb4c435dcb14851f876c270ce2cfd5": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3487c12512fa41d3a4d64f00cb842525a8590ad3": [ - "2ee1f98649ff27378fc341cae907eb89aba8fba4", - "0cfdd655100055f234fd23ebecd915504b8e00e3", - "ccc772d88c231275f24c4fac9b28bbe0942e1107", - "34d24b2d9f116f8f652c112d4ac924afcf11bd0d", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc", - "1403e6b9adf7712c35ae56327d52fe54603b87e1" - ], - "619c3ab944d7786db3ec4078a6b074a0f70a1987": [ - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "df2beaae63e4d68ef8e762bcd4704c9f11f856d9": [ - "ef5f7cd21b5d34797636239a7b9c8ba6af440aab", - "3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e", - "4d17732d90440682b0500f4e209c6cc4fac20e0e", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6fc4a39bb4697a21286bb1cf503ecf17407aeae2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9048b1dd06c5c6ab563d4c9dd8524d26c5ea5c6d": [ - "5b4fbc167ac3d0045e362282745d298af63ae664", - "3599a236f285af48782fc30b1341d13ec7320735", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "327e0290fd71609bfc1a30478a95f690668fe622": [ - "30c0cdc414f68211d5d0514df027cec22e005174", - "3d5922d71a370f32b7f232a596def914f67eebd1", - "d03a9b2a0e090cc9fd2ba0a457ecea35372f1018", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "e66f0f822d4c4853b39b27daaafa2993005fd55e", - "e070ff286709db28312e08b52b05539debe88146", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "d53e70d834243d3d8d4b621c0c52dfec26081155", - "53c0abe83fe9b4fdaf2208295d8504fcf5241694", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "0adec918885dff698acf359988ed79a543157f80", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "68ad0ed1e21fd9fd7b2bd58769d8bec88c996b01": [ - "cc0f0cb09a73f82ed44d900f5ca710bec784acc1", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "64d803276417158d5d459084dfae391a23143aa3": [ - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "0adec918885dff698acf359988ed79a543157f80", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "930e86d49477c9d3305cd1f9d01b93749f85bb8b": [ - "717392dac099d1506b766787382d61b277863163", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - 
"0942bd8fad71282994ff4e9a779c09745da68edc", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "663a41c866d49ce052801fbc88947d39764cad29", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "142ebbf4760145f591166bde2564ac70c001e927", - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6a483cd1cbecd66150c9bbcd01606723950281bc", - "b21670e8061a06ab97e7d6052c9345a326e84ff8", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "84cde4abc973e3bb508f03506a7fa946222f4e6b": [ - "6e41a4cbb34c4d403efb73d74f5be5556b1f13d6", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ad4b365630f1c13d74d78f0f5d8cee87ef356d41": [ - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "0f30612423381eb5d271c4ca4f4254149b0d22fa", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e", - "0adec918885dff698acf359988ed79a543157f80", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "eb971944bccf9793ac463c3e2f4d4251d4e8e071": [ - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "142ebbf4760145f591166bde2564ac70c001e927", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d4af12327385260116dfd68ed1ec6d0602d26d1f": [ - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e0e1fcdbc5b41fcd1cd15001b4861a738411c910": [ - "d1e10a539cc83d3df1b612fa098ceea1be63cc29", - "cc0f0cb09a73f82ed44d900f5ca710bec784acc1", - "5437e8adab596d7294124c0e798708e050e25321", - "b21670e8061a06ab97e7d6052c9345a326e84ff8", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "42ea55edb46395469aee1b760829657e65ab6577": [ - "f3ce9ba3fcec362b70263a7ed63d9404975496a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d47524cd5c3c4b57af2e5a29f6f91c420310f236": [ - "96d104dfe727f78a35faaafe81481f3672b485ee", - "b458fc5261595f44b36325e5eaea1f874d65138f", - "cb5cfc2dd4965262d2ce302362b1f2dbfa4a5419", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3" - ], - "98670c679e888f4c97f4a5e29b93eb3a2c77ab15": [ - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "fae57797d357bfa3b39b220336d1a2e8deba5318": [ - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "fd366479f49343024544e93c90f80b387d8cdd16": [ - "6b5d1e50894b1f28e4798cf20e9ffa88b9ec011a", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "0f733817e82026f7c29909a51cb4df7d2685f0e7", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", 
- "d3640eb3b542eaf36fee2261f037a6bf0d8eac9c", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f2f5c0d00e6a4ccaf099c11a9790aa0afefe611f": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3b6179c293df29e31d31cea46476f104ab6950f2": [ - "a8fd9c1625011741f74401ff9bdc1c584e25c86d", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3" - ], - "5bac7d00035bc1e246a34f9ee3152b290f97bb92": [ - "da3aca9d7b50da823f669c983edeb60445720fe0", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "15cbccf71d1cd3f886ae9b0f3cc001d14577d264", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e040d2b971717e9e6b789d0c5496e36e1ea243e4": [], - "a9e00c216ce69325a15fd139da0624978e54058a": [ - "00c367427d9135209d84008e6cb5e90f0adba881", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "376f494126d1ea4f571ea0263c43ac2b6331800a": [ - "d3bc7ba19e274bb6fb5e055a3f1b62924c731432", - "815c6ca281536d18ec0eb408b6e46e72a0826163", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "db0b45458c4c6aa1dab9e56afd60a479c77494ad": [ - "3ec5f0da304a606c5989de5b00e1246ee64b3e46", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "cbec8bf16a459b0ae38856f604a6a14cd1343477": [ - "4a7530bbaee7563ee244f3ffed6b706bd96f08a8", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e418bddc14666671c4df6a9747f39f0f522a1bad": [ - "4cf527e9e0d68e3fc16d39fbcdb3869cd3ccf60f", - "5bac7d00035bc1e246a34f9ee3152b290f97bb92", - "7fa85f9c0fe44f1bf9e58a55f0f009296578c2f0", - "d318e0169f649656c71f02a1f84194a734fe1962", - "663a41c866d49ce052801fbc88947d39764cad29", - "42e1790c7979796634d15920b4a08990e847243e", - "c03fa01fbb9c77fe3d10609ba5f1dee33a723867", - "41531594d7e0f3b2e138ae43e0a0f6e24a9b014c", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "15cbccf71d1cd3f886ae9b0f3cc001d14577d264", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6f1499cf3d836ca763040074a52bb0d59aef1093": [ - "00c367427d9135209d84008e6cb5e90f0adba881", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c48fc69b62c749e78928d6a3bae98ffe278f761a": [ - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4703cbd3743ff81297c64007db0109d96dec98c0": [ - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "53661ff6fdbfb8557c5b19895fad151792c62da7", - "55a250868627de2d202d06e7cb3f6cbcd3a66f88", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3f459219d75de63b5b7a26a8c6447ec1e79a985c": [ - "9822153f31934faa216f8f0af17b51929f2eb93d", - "80d0116d77beeded0c23cf48946d9d10d4faee14", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - 
"7d78238a9bad60433d616abdd93c735087d99670": [ - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69" - ], - "33de773be1733347a01cb07a5bb1b6cdfa956a47": [ - "02540ae926814f4b7972d3fa4dd33932fdc4b58b" - ], - "d6d0fd994f37b630f35945736b5e1bb198148404": [ - "bf8491bef353df126e2306ad2fe4b898697b906a", - "80d0116d77beeded0c23cf48946d9d10d4faee14", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3bab38ba17a9fea40b0862befa0d1ac247d7ea03": [ - "cbec8bf16a459b0ae38856f604a6a14cd1343477", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9a7b9515b66bf83c9c808626206eabe9a8837c22": [ - "375a571174ea59b1f4aa62ad2619e9593fc03436", - "6052486bc9144dc1730c12bf35323af3792a1fd0", - "2577d053f8aab912d29b424e1f09133d83740fd2", - "53661ff6fdbfb8557c5b19895fad151792c62da7", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc", - "1403e6b9adf7712c35ae56327d52fe54603b87e1" - ], - "3ac3c10e1317fe8419f794cf30ce3227e95e1f54": [ - "286c3587f2616839286748461cbc90261ea49caf", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d0a7f7fe31e0e0c42b471b4c47a313bd8c8e5206": [], - "c11dad59cbca5cc4875391ebf5360f945aec933a": [ - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "66242baf48b0f6b828e7547ac39ffaa5e1b2cb3e", - "5d49c7401c5f2337c4cc88d243ae39ed659afe64" - ], - "f2ec0182c6646d3128afa5100f37d9de7b533463": [ - "aa207668318fec38d60b79f407fb64982e46fce9" - ], - "f6a752ec416df387895db0f9ae8bbdbca2eb27d3": [ - "5437e8adab596d7294124c0e798708e050e25321", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e26888285436bc7998e5c95102a9beb60144be5e": [ - "a38e0f993e4805ba8a9beae4c275c91ffcec01df" - ], - "9bb3deca32af8d632e0d916c587cca6c185a6576": [ - "3c8444cc4e96bdbe6853b886caf032afd1ee1d20", - "cf7368f38cc1f0861d4b35db1307776c7f3f237d", - "cbec8bf16a459b0ae38856f604a6a14cd1343477", - "4c60ce3e5116037390b3b92866f43df83f3e9c6f", - "4a7530bbaee7563ee244f3ffed6b706bd96f08a8", - "70c3d5ab03a54281be91709b19e3f50a2e4be0e3", - "42e1790c7979796634d15920b4a08990e847243e", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5798d2efb64925117bd5bfe6f328240d3158c590": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6988596f88276920a4e555cbe624e1431bc8a9f7": [ - "7fa85f9c0fe44f1bf9e58a55f0f009296578c2f0", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "1403e6b9adf7712c35ae56327d52fe54603b87e1" - ], - "460fc460695bce86e66da0f1e4f7a7a7a3b1481a": [ - "2af6a21a1b682ceb585165359d3605e89f4cf6b0", - "b8bd29a6104d26a16687400049a4e7e026ae6258", - 
"de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "8d727ce66eeef021462c14ab7afbbc1110495b01": [ - "4c60ce3e5116037390b3b92866f43df83f3e9c6f", - "39bca01efce8765f0a5d3a8981bc30d56f196b96", - "64ce6ef1f5cf227bf2bf917c87273386ae16256f", - "197ba7bbfdbb052b0770088815c110774220f397", - "663a41c866d49ce052801fbc88947d39764cad29", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d55ed10e6a77e8f0a2359eb92221915f56481843": [ - "4161ad2d2495d8af1d62dc5e71882bde642cd1c1", - "3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c57c2b454747021eb11412bfbac6f2d668c9a644": [], - "58219d9826f9ddde448c73e7ecc690111f5698f4": [ - "aade40af0d85b0b4fe15c97f6222d5c2e4d6d9b3", - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "355b66a65aee97822eb7404183ee72b18cb648de", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "55367fbade73f96181ffcf52169d0471d4c014a2": [ - "2d2b05f0969568ac3fd3c2cca5df04c4136c5416", - "df2beaae63e4d68ef8e762bcd4704c9f11f856d9", - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "41531594d7e0f3b2e138ae43e0a0f6e24a9b014c", - "e3a9af420cd2c0c8241856da92374027fefb87be", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6a75460afd98e7479bd7a77cf1c8cfb2d017f08d": [ - "aa0b5821458e34d35d96ba0878b595950ec8bb8e", - "ddf2e20427e24b422cc11f58a27458b75e1d3cca", - "d589c49e1cd1dd3b994dcac01b4c6e7fb8eef161", - "4780d0a027c5c5a8e01d7cf697f6296880ffc945", - "e5c72b92c48d68594b290c84a8904da7c8335554", - "e6210c6f37bda0a9add8f01acc98ebd2e370814a", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "42b877f7423fc727bff5b6e173ad336da33079a9": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b1e097be0e7fe37a90507b66abfbb6f22fef38b5": [ - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d75387fcf6a839f2aa8af5778aa6931eea5ec969": [ - "8793066d170b6a742c4fcdb478d4f100c1e4bf17", - "e5c72b92c48d68594b290c84a8904da7c8335554", - "03fb95e6be583ca954c3d00812a9e9a40f118e51", - "f208ea909fa7f54fea82def9a92fd81dfc758c39", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "e070ff286709db28312e08b52b05539debe88146", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c59b23ee87ba01942a7bab79728f85e66ba34ec7": [ - "50bdea5132ef4b8cf25b0d9f3ac2ee0d09bf18cb", - "5e8dd82419f78025093acbec3ba2e345fff85d11", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "da7bc9c47bc06ed2742142540cc94b918c1fe723": [ - "e5754bb65a648f319a02d47c356df0db1e936b7f", - "9dcee248452d84b6bf26911ba6726ae5ce1a46f3", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "94db6ae7ffb90d64a6c4d971654f75687cb0057d": [ - 
"3b88526a0f0337e3a6b632b4af8fd0882eb4b470", - "6052486bc9144dc1730c12bf35323af3792a1fd0", - "5f19ae1135a9500940978104ec15a5b8751bc7d2" - ], - "bb2ef694e8b5a99e1f7ceb014968b4d1dc2e122a": [], - "eafc97d979e790a84329f0a49d1b627bd5979499": [ - "62176de125738e3b95850d1227bac81fd646b78e", - "0139e689add40a61c9454674edac4e93702aa5fc", - "3fc3460c4554a28e489a0ea6ef067b79b7d301d9", - "c879413103f8950bdd414c7f60a39bd7748c9be8", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "663a41c866d49ce052801fbc88947d39764cad29", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "0adec918885dff698acf359988ed79a543157f80", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "a9d460f8eb9001b1bed11b7fb2af555185c70fcf": [ - "f8a2dca1e8fe56e698984c077f7ff58d8ca867e9", - "36f7bc27c9a37eb337c35df4ae86f148e13d4e9a", - "4c60ce3e5116037390b3b92866f43df83f3e9c6f", - "4a7530bbaee7563ee244f3ffed6b706bd96f08a8", - "7fa85f9c0fe44f1bf9e58a55f0f009296578c2f0", - "da3aca9d7b50da823f669c983edeb60445720fe0", - "0ea7fc93d4947d9024ccaa202987a2070683bc1f", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "bcb197654f39bb9312d8d0333646b71254d29239": [ - "25425e299101b13ec2872417a14f961f4f8aa18e", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "836906c334159d46a691dfc5466c33caa3d22f65": [ - "9cef5a098486aeab6ed3700c5e3d29488488d16f", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "4edd2d2770729380eda23826af1b78298b334a23", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d395c771f6537259610497ba218cce5b9bfc2c50": [ - "368fb35a07076eba01c2e4700499323cd4524513", - "3fb0731538c59f8520a309996a0567b58965f0fe", - "12c826f4195da172b212a529f8fcf10cc79e35da", - "3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "a2412fdebd53bd25476f834ae2b8aa8cb44cb1e1", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc", - "a37153a5f42ee2951ad8a2c9ec86b52c4bf81c77" - ], - "271ff9dbeacd05276f1f192453793e772ad91b69": [ - "a2a53a3d328b8b72672626a4c4d50f721418e6d7", - "4a7530bbaee7563ee244f3ffed6b706bd96f08a8", - "70c3d5ab03a54281be91709b19e3f50a2e4be0e3", - "06edda0310b4ec7c5012d012349252a3a77521b6", - "89744cbaa080c82785b1cb8d54710bbbca32f8ed", - "34bc28087e1d6f047e2736791f79d769293f447c", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "dcbf62f17dad0f4554f91c822d141fb92f78429a", - "316206a2f89eb94ce02a81fba1dc304586f21b39", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "0adec918885dff698acf359988ed79a543157f80", - "59641c10ed7431a3cf841f308367dc2dc0281b74" - ], - "6ca16c1c2c60ceda87242c8f8e522d12cc4a13bc": [ - 
"8d17234680db76f99efd22fbcb169f45d2d79d93", - "94bcf0390d5acb1b92323bd15cc1dc311314122c", - "d318e0169f649656c71f02a1f84194a734fe1962", - "c03fa01fbb9c77fe3d10609ba5f1dee33a723867", - "41531594d7e0f3b2e138ae43e0a0f6e24a9b014c", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df" - ], - "0aaf7a76507248d80f65b6a49e200d2370bcb2c9": [ - "e070ff286709db28312e08b52b05539debe88146", - "b2542a738b75ee9b7ce1a13d8b78f9095d212412", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2" - ], - "1ca0b2975010c3804ee90be36292f2212443b8e2": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "12aa2b1e9556c20752e37e8b18d0e396c0cea1c5": [ - "b21670e8061a06ab97e7d6052c9345a326e84ff8" - ], - "5ee4e42aa20eadb86fce62bd37a41f9ec1188a59": [ - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "0942bd8fad71282994ff4e9a779c09745da68edc", - "9dcee248452d84b6bf26911ba6726ae5ce1a46f3", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "a74b7301d2df228c266c6405dceb547d07a022fa", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "96d104dfe727f78a35faaafe81481f3672b485ee": [], - "b13947c7598aa91992cf04048afa19c7cfe69795": [ - "1c0c13edd4442ceb7eac70bbcaebaf84512f9a3c", - "7fa85f9c0fe44f1bf9e58a55f0f009296578c2f0", - "da3aca9d7b50da823f669c983edeb60445720fe0", - "0ea7fc93d4947d9024ccaa202987a2070683bc1f", - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "316206a2f89eb94ce02a81fba1dc304586f21b39", - "4e5f7cd537a1bbcd090f9887b1b59f39a3715dba", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d5c358380b2b2fb9c0e8be6212158e0a01dacb16": [ - "c589ddc6c6fb07189af7c1212f6eb15c5ff72cde", - "398e4061dde8f5c80606869cebfa2031de7b5b74", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1ded7b62f6946ab27390ffd3c2feb386abf4bebf": [ - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "53661ff6fdbfb8557c5b19895fad151792c62da7", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c9a6aae4bedf6fd7a85b359e76137848265d4d1e": [ - "aade40af0d85b0b4fe15c97f6222d5c2e4d6d9b3", - "8236010c2ecc94d826be6010ff187fdc000e7df6", - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e", - "4d17732d90440682b0500f4e209c6cc4fac20e0e", - "711d5e8ddbb840ad31a9ffa3d38590603ba69a92", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "eaa70e42a10364ffe87e84656df435cd58fb430e": [ - "0f45608ddc01b3e192f3490330f4c4b8de074f79", - "c589ddc6c6fb07189af7c1212f6eb15c5ff72cde", - "4780d0a027c5c5a8e01d7cf697f6296880ffc945", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "0adec918885dff698acf359988ed79a543157f80", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c1aebd69e8bf8deac8b61e8ba699c78704a90828": [], - "316aaca2b691c350aba317a392a603f304d7926f": [ - "e418bddc14666671c4df6a9747f39f0f522a1bad", - 
"94bcf0390d5acb1b92323bd15cc1dc311314122c", - "0ba581718f294db1d7b3dbc159cc3d3380f74606", - "c03fa01fbb9c77fe3d10609ba5f1dee33a723867", - "41531594d7e0f3b2e138ae43e0a0f6e24a9b014c" - ], - "92bfcbad75a4fffaf662fb3b1177e5728dc54327": [ - "0139e689add40a61c9454674edac4e93702aa5fc", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "305bbb460af5f27a279c85d7f57231fd1871f64f": [ - "97782a67971c4ff1a74bf07e82fe20b2c4bf86c4", - "bbb2fc6e95d24fb58ab6c25b216b14ac49a32fbe", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9218965ff29c4747033c26098c598ee1e9bfa8e1": [ - "e66f0f822d4c4853b39b27daaafa2993005fd55e", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b21670e8061a06ab97e7d6052c9345a326e84ff8": [ - "663a41c866d49ce052801fbc88947d39764cad29", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "53c0abe83fe9b4fdaf2208295d8504fcf5241694", - "80d0116d77beeded0c23cf48946d9d10d4faee14", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "bfe6fd05f09647b001c7eb6e333a95c881c88344": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "cff26bda86237d113ed01c812ad8bedd0afbe070": [ - "c234381686e782987a556e44aed061aaedd8c2de", - "089f6328085066263fedc083952624ca121ebbf3", - "a7f8fd45fbcdd81449cb7a1a6a2b2c18b38f8151", - "08b85bce712168998004ee80ce4e475390413c74", - "a171780f04780f1dca6965e6c451915b8ff5458f", - "f4cba0db34aa0c389cec267ca1f3ba5255ea2645", - "3599a236f285af48782fc30b1341d13ec7320735", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ac7771c332da42b29a913b116bd6ef622cbf89cf": [ - "0d42221038c05cee8443c5b5af838505ee137dc3", - "0ba581718f294db1d7b3dbc159cc3d3380f74606", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "41531594d7e0f3b2e138ae43e0a0f6e24a9b014c", - "c70eb74e09c41e8fcc71dd59e3b4d631f657f7cd", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "83b8e18488d8f31dd017ec0b26531cef4b635b36": [ - "3099d6f4965b4d73aa1e2b2880522ec89ed2dc0a" - ], - "470754e17de89081f63dde4719922fe9b63251d5": [ - "1786a2f9140ed7211b21302977de64e948b92308", - "097dc73d5d422b3c09286e72d16b2561ae5fb395", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2e6b6de08f459e2165b11ed8d2103916966b0fcf": [ - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "39bca01efce8765f0a5d3a8981bc30d56f196b96": [ - "ad454e24bd32408559512b4bac4cd5237794210f", - "40047a74b707743157051d38f76061ba5ff9aab4", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69" - ], - "a10843d1349fff8d2a7d9722f800802187fef67f": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "53c0abe83fe9b4fdaf2208295d8504fcf5241694", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3099d6f4965b4d73aa1e2b2880522ec89ed2dc0a": [ - "154493f69d7db3d49da0e51df0192c6ad5f1724a", - "4d3a49d1439a0b8fbb0e9f588970ad0f1d70dec8", - "9b9fb973e5d3b413baa90648d9eb0743bd889747", - "5437e8adab596d7294124c0e798708e050e25321", - "b21670e8061a06ab97e7d6052c9345a326e84ff8", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - "80d0116d77beeded0c23cf48946d9d10d4faee14", - 
"6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "70c3d5ab03a54281be91709b19e3f50a2e4be0e3": [], - "256d20b96fa0ec65a373bfe64f128eb56b4ea508": [ - "663a41c866d49ce052801fbc88947d39764cad29", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "99752e255a866484291866a5ff5cf94e96d6bdc4", - "4e5f7cd537a1bbcd090f9887b1b59f39a3715dba", - "5437e8adab596d7294124c0e798708e050e25321", - "fb30166c218bef3597b0d9789ad340defc3989ca", - "a29a0e679e626e8961dc217081eae2a6c63a15ad", - "605c32428861eb26b8631617b8f6c97a850d6a04", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "70d89d380ca5d20564e1dd8ed2f4c59f5c7b3656" - ], - "eaa7853facb9b49444b48a96192cb4be66b62671": [ - "8e37dc1215681aa153a51c07078ba8befd6a6e01", - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "c03fa01fbb9c77fe3d10609ba5f1dee33a723867", - "99070fb6df9e8d11e30f7aaefcc9f0b0c5a73789", - "41531594d7e0f3b2e138ae43e0a0f6e24a9b014c", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "94bcf0390d5acb1b92323bd15cc1dc311314122c": [ - "d318e0169f649656c71f02a1f84194a734fe1962", - "0ba581718f294db1d7b3dbc159cc3d3380f74606", - "8f84dcbad8cd3b5b4d9229c56bc95f24be859a35", - "c03fa01fbb9c77fe3d10609ba5f1dee33a723867", - "41531594d7e0f3b2e138ae43e0a0f6e24a9b014c", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4a7530bbaee7563ee244f3ffed6b706bd96f08a8": [ - "f4d543ff431359947bf41152ac01233b8062221f", - "70c3d5ab03a54281be91709b19e3f50a2e4be0e3", - "f32fb177deec417b825eb7b2b64fdf08357ecede", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90" - ], - "bfd2b76998a0521c12903ef5ced517adf70ad2ba": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ddf2e20427e24b422cc11f58a27458b75e1d3cca": [ - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "13c2ae7831c0f1579bc8c6f1a31c9aa8689e24a8": [ - "619c3ab944d7786db3ec4078a6b074a0f70a1987", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9c0102443a1b5adc0c2235fab23a80bf8122ce72": [ - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2eecb2218a742a8d9cf0b18a2f4803710a9875bb": [ - "cbec8bf16a459b0ae38856f604a6a14cd1343477", - "4a7530bbaee7563ee244f3ffed6b706bd96f08a8", - "f4d543ff431359947bf41152ac01233b8062221f", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b6e993684951e6041a32e79caee39d0dec3c74d2": [ - "aa207668318fec38d60b79f407fb64982e46fce9", - "09b7338021fff3200c2098b19824aecc83a66cb5", - "96ea07447d2f9adefe03852a878517a2a6d45b96" - ], - "7231a4cc87e6a9c6c1a800662c9beea1eaca52e7": [ - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "54a65fb8c740f96c83efb4181c4311474a7835c2": [], - "0c7ce5898dab92da540457b754254d72b8592fc2": [ - "0d0dbfb1b315a43216020abaf74d289456198219", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3", - 
"87e02a265606f31e65986f3c1c448a3e3a3a066e", - "96ea07447d2f9adefe03852a878517a2a6d45b96", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b": [ - "355b66a65aee97822eb7404183ee72b18cb648de", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6052486bc9144dc1730c12bf35323af3792a1fd0": [ - "142ebbf4760145f591166bde2564ac70c001e927", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "341bdbcfc3febef7691a97c216ad394653211095", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "80d0116d77beeded0c23cf48946d9d10d4faee14", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "bbb2fc6e95d24fb58ab6c25b216b14ac49a32fbe": [ - "743dcf234cffd54c4e096a10a284dd81572b16ea", - "a29a0e679e626e8961dc217081eae2a6c63a15ad", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2430cd1ac1c480894f2ef9368b8962a3a73e7b57": [ - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b4583fe942ec2514dc21ce63a697453063f74fb5": [ - "cff26bda86237d113ed01c812ad8bedd0afbe070", - "a171780f04780f1dca6965e6c451915b8ff5458f", - "e7ad08848d5d7c5c47673ffe0da06af443643bda" - ], - "bf22ef16a6a912763780aea454198edc3e2bb3c9": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "21bbbd300f73db57650dfbec6a8c2bbf5103a0e2": [ - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a" - ], - "7772380e6c6501c522974302389056a9c9320bf0": [], - "0e1a7f453976d7aa74eed46686c943bc6e630b56": [], - "da9b8b4073e6ad44b3da66e1e117cb1ddbf8836d": [], - "b1d2a29860e69c6ce9987ddefbe112feb1efa16a": [ - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "521ccc898395a2818fced22b4cf371b0e5121f94", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "28456f63013fcbb43766563584efc7a8cb8a0275": [ - "743dcf234cffd54c4e096a10a284dd81572b16ea", - "1dd344ce28f1e5a078f9d8396b5078388e555d99", - "1a2e90dff605dad7dbefeed121e6d295c7a77d62", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6ac0ad894e2af0bb2b0b3d8c2878faf41f77eb0b": [], - "2c23a8c8b65c3dfe3bdbe93e60e04637fee48e2b": [ - "f4cba0db34aa0c389cec267ca1f3ba5255ea2645", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "0a2ac054c533314c0659f3b139388527df0d42f3", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "90022c80ea85a41d8d1a7765fd95824bf3a9830f": [ - "c70eb74e09c41e8fcc71dd59e3b4d631f657f7cd", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3bbd9a5a0fccf2e18041db9118fff7807501876c": [ - "eb971944bccf9793ac463c3e2f4d4251d4e8e071", - "bd2c32285e8ad5b6e322391cca5d475de4f84169", - "0a67a5e3f4125445ed84f2db3c92429010aad68a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3b3b1aba98388dead7c1cf964eff34de85b50af7": [ - "4683d3d6cb31111cf4499a199c0b036662b3eb32", - "ca7bd64d372e3bcb3f4633ca4a20291ff57de3c3", - "0cfdd655100055f234fd23ebecd915504b8e00e3", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9426ed538470ad98b881a20fd9725bf8536a674f": [ - "3487c12512fa41d3a4d64f00cb842525a8590ad3", - "ca7bd64d372e3bcb3f4633ca4a20291ff57de3c3", - 
"0100785773b8217c44606ab260e3212f93b0a4fd", - "9dcee248452d84b6bf26911ba6726ae5ce1a46f3", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0100785773b8217c44606ab260e3212f93b0a4fd": [ - "97782a67971c4ff1a74bf07e82fe20b2c4bf86c4", - "ac7e270fcd365c84b29a710d58bf1243e850df4c", - "bbb2fc6e95d24fb58ab6c25b216b14ac49a32fbe", - "f64e49d76048c902cc02e8ae27dcd4ac0dbcb97f", - "f4cba0db34aa0c389cec267ca1f3ba5255ea2645", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "b2542a738b75ee9b7ce1a13d8b78f9095d212412", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "50b0c6ee2b3d53ba5af69d6c00b5d60888a9026f", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "cfc12c38a4d848ff3c4225488a2c72e7d4300f4b", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "0adec918885dff698acf359988ed79a543157f80", - "1a2e90dff605dad7dbefeed121e6d295c7a77d62", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "023f0045686f86332a26856f8d8c3203566925ad": [ - "b626560f19f815808a289ef5c24a17c57320da70", - "ad5573cb25fd403f7620332f363ae87327c69a49", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "663a41c866d49ce052801fbc88947d39764cad29", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "1a2e90dff605dad7dbefeed121e6d295c7a77d62", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0630a18fe3fe4765132ad52a591f9776cf3284bf": [ - "08b85bce712168998004ee80ce4e475390413c74", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "66242baf48b0f6b828e7547ac39ffaa5e1b2cb3e", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "16707317eb7f71b1b4d47f27d703a2cdb5142baf": [ - "7ffb212356df9980347b3d3b9910dfba75a5d0c7", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "16ecaa7cf142605331fc21c9be73c7b13e8c1acd": [ - "587352c3b95c90de6d37f061c8e117f42be0b575", - "0ba581718f294db1d7b3dbc159cc3d3380f74606" - ], - "191e300e381d4128b749d16fe3d83c8643a3bd1f": [ - "e0e1fcdbc5b41fcd1cd15001b4861a738411c910", - "cc0f0cb09a73f82ed44d900f5ca710bec784acc1", - "5437e8adab596d7294124c0e798708e050e25321", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "19e5b780a2dd1ffa1962e392976308b9fe644c7f": [ - "94bcf0390d5acb1b92323bd15cc1dc311314122c", - "4780d0a027c5c5a8e01d7cf697f6296880ffc945", - "d318e0169f649656c71f02a1f84194a734fe1962", - "c03fa01fbb9c77fe3d10609ba5f1dee33a723867", - "41531594d7e0f3b2e138ae43e0a0f6e24a9b014c" - ], - "1ee8c8dd9d04247515b33775532b72df7b8ec0f3": [ - "3b6179c293df29e31d31cea46476f104ab6950f2", - "4279a38a098d1d359881b73c6a88a112fe93443a", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "26218bdcc3945c7edae7aa2adbfba4cd820a2df3" - ], - "22d5459d1f47341b355feeb1becc37208d6ec365": [ - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "1358f90705b05cdb20ebe6799b02196205e7e9f0", - "3fc3460c4554a28e489a0ea6ef067b79b7d301d9", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "9ffefdf1fcd780cb71450b0a7a29247c66aa87be", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - 
"6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2ef1c2438c3a4552db9e7080e15d8c51bc071f58": [ - "03532123ccffae8d411264320e8a5ae2b6eddea0", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "d3640eb3b542eaf36fee2261f037a6bf0d8eac9c", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2ff69c238e26c473a6d8bcbb9292ded74d7fd1c2": [ - "3e0a691277183a6704310af3e4e9e271400612bc", - "f7c21f11dca84d443304e8909c9b87eebda0017c", - "0d0dbfb1b315a43216020abaf74d289456198219", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "96ea07447d2f9adefe03852a878517a2a6d45b96" - ], - "34b35c89e192b5aa3118f667ce0a3cc0d89d82c3": [ - "d3640eb3b542eaf36fee2261f037a6bf0d8eac9c", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "35d855c49334ef1b8f945f13e9bc84868dab55c9": [ - "385376b8aa48c25403f17d6206db7c09b67e1314", - "aa2fa431ce1d5a8d56d138e3330d3df381d36e3a" - ], - "45ee010607cad91728ae7fbad6cce3d805b93526": [ - "7aad760762c4a10dfbc2d3391eb8bdb28c80b236", - "7283d616e40d7ab7422e3697218f3fc42f292bf2", - "6d951d939d3f27054215f2606a0cf89ed21550e9", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "486a8c8655b81c7f87ff257141466ec1186d4aea": [ - "be2b0396de9431bae931642516a1d3e4906329f5", - "ac7771c332da42b29a913b116bd6ef622cbf89cf", - "03fb95e6be583ca954c3d00812a9e9a40f118e51", - "2677645b0f96c8c055b83c904d531cfe22b2e623", - "c03fa01fbb9c77fe3d10609ba5f1dee33a723867", - "0f733817e82026f7c29909a51cb4df7d2685f0e7" - ], - "4a8fe7ecf225e5bada08642fcd77d3cbb322b967": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "52136f813243ac3de8e277906112a41590a376d4": [ - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5882dd04d95c9c88cdec389059fcf44d56cbb789": [ - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "62176de125738e3b95850d1227bac81fd646b78e": [ - "4ee96f0757e517928590a2300af5d40ba768a5a7", - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "663a41c866d49ce052801fbc88947d39764cad29", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "e070ff286709db28312e08b52b05539debe88146", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "70916fbeb446ab7dc811ab74b193365d789bf1eb": [ - "17170575aa8b4fa4e3eef5d366ada706a94dd836", - "0cfdd655100055f234fd23ebecd915504b8e00e3", - "dc385646887a3669ae0ee506a263d592f4f7c7a6", - "1358f90705b05cdb20ebe6799b02196205e7e9f0", - "3fc3460c4554a28e489a0ea6ef067b79b7d301d9", - "3599a236f285af48782fc30b1341d13ec7320735", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - 
"70da4fb798a86cbe8cad96c27ced0415885bbd9d": [ - "f4cba0db34aa0c389cec267ca1f3ba5255ea2645", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "74b94891f8f7ac8d73d9df817b6720e1cb792bcc": [ - "6052486bc9144dc1730c12bf35323af3792a1fd0", - "b2542a738b75ee9b7ce1a13d8b78f9095d212412", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "76f54657eb0893a0b203da57dcf0b4fffeebfc2c": [ - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "7c1707db9aafd209aa93db3251e7ebd593d55876": [ - "142ebbf4760145f591166bde2564ac70c001e927", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "7ce0c89a452e3c2917b63847495533865697c79c": [ - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "80ae1347b2dda02748f8f09da8a738121f5edfb5": [ - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "8fdd34153d1035d09dd4a6efa9cb0c91d23d0045": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9efa81ec4954b0859c47dad8f42edfaf8bced69b": [ - "eb971944bccf9793ac463c3e2f4d4251d4e8e071", - "7fa85f9c0fe44f1bf9e58a55f0f009296578c2f0", - "48abfc41a0abf023d2037ebb2f274835e0d322d0", - "b115c1e1e9e51f8ad7d47b745bc04e29a654b84d", - "3d5922d71a370f32b7f232a596def914f67eebd1", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "711d5e8ddbb840ad31a9ffa3d38590603ba69a92", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "b17cc18e4130505b939f7d527082eb6be2a7fd5b", - "e826ac71dad8c4ce36d82fb7add43e3d306bb7e1", - "316206a2f89eb94ce02a81fba1dc304586f21b39", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "9ffefdf1fcd780cb71450b0a7a29247c66aa87be", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ab90169f7213482efff246cc5f5f057351265f18": [ - "1c1ca2392155ddf30408a442e6b504b5d60d4f2a", - "d53e70d834243d3d8d4b621c0c52dfec26081155", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b8d06dd769f89d08bdd9997d7bd363c89ede845b": [ - "40047a74b707743157051d38f76061ba5ff9aab4", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c4f9f0cc8c138047a61bdb11b1a352e3d1aed035": [ - "b21670e8061a06ab97e7d6052c9345a326e84ff8", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c91f6eb320c70e2f64b6fb935494978a8699f06a": [ - "08b85bce712168998004ee80ce4e475390413c74" - ], - "cb2954127a7fce8ab84486765392ce95dcdd8175": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d44031f253668c61ac6d68b95bbe9cac57730d51": [ - "e86009d9f9b1cdf083a48d087552bc4153784451", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "dedfe929d182cc3537a9ed765d589b4735ce062a": [ - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "02540ae926814f4b7972d3fa4dd33932fdc4b58b": [ - 
"6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "12c826f4195da172b212a529f8fcf10cc79e35da": [ - "03532123ccffae8d411264320e8a5ae2b6eddea0", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "711d5e8ddbb840ad31a9ffa3d38590603ba69a92", - "398e4061dde8f5c80606869cebfa2031de7b5b74", - "c70eb74e09c41e8fcc71dd59e3b4d631f657f7cd", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "17170575aa8b4fa4e3eef5d366ada706a94dd836": [ - "49a328730d3c6397820b733bbac903545568cd9c", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "19da40fd01c711fb2b3b0b19b3956b86b75f575d": [ - "142ebbf4760145f591166bde2564ac70c001e927", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "216555443355ac615598a99d2949711726a1c36f": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "22b39e38e2fd52591ca23904b474eb19dc17b610": [], - "300b01dc726fe8acbededd805501811d427920bd": [], - "31ae42394959fb1a336886379a5527bec5c9c9c4": [ - "4d17732d90440682b0500f4e209c6cc4fac20e0e", - "4988b3d378b79eb8669112620baf1ff4e3e536fd", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3d8e6358968c8bd5e97f21fead73bf4ba0c2a8d7": [ - "c03fa01fbb9c77fe3d10609ba5f1dee33a723867", - "41531594d7e0f3b2e138ae43e0a0f6e24a9b014c" - ], - "437cfee2a7f7beadf09ad712f71b3265740e44a0": [ - "34fd95dd4dd32e704d4284fc31165e85b303bb1e", - "96ea07447d2f9adefe03852a878517a2a6d45b96" - ], - "4f9e7eb2f009e30f15eca18f4e540915b637b603": [ - "e070ff286709db28312e08b52b05539debe88146", - "0968f1592f9401d72bf0d97e740496818c1a3135", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5ece96203cd1dc9ff3f99867faa451939d86d545": [ - "763eb8d43e2f8a5d9da26269a4985efd1c099a5b" - ], - "6384921f1bd1059c6b4c37ac3c4e4f19e45d40c1": [ - "f8a2dca1e8fe56e698984c077f7ff58d8ca867e9", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "e7ad08848d5d7c5c47673ffe0da06af443643bda" - ], - "803a3dd98d72a9fe730f082f3364f9b1f9a0029a": [ - "b458fc5261595f44b36325e5eaea1f874d65138f", - "c7a3f9cc61cfafdc307f8ae24430b6b1121f9b2c", - "8dbb29f93292d8b1b861c322d232fe087b2ef7b1", - "62176de125738e3b95850d1227bac81fd646b78e", - "d3640eb3b542eaf36fee2261f037a6bf0d8eac9c", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "8a4320fd903677a3ea2bf606a6537b59885b1108": [ - "01b5412f3d17e90e09226d7c40ad4d4468a1414d", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "90350aa626bed47b02d0c162462e5b0ca82be6b2": [ - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "b17cc18e4130505b939f7d527082eb6be2a7fd5b", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "eea7bca03bda3ee2448cd012bbcb2b33822861d8", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "91099bbb96133c70db091041900ecff502a5e3a8": [ - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "6bd91a3183ddb844641acb9f3fe9faec6a9ff617", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "9dcee248452d84b6bf26911ba6726ae5ce1a46f3": [ - "544ecfd78567dabc8684b79e5cc3fb9c126a494d", - 
"87e02a265606f31e65986f3c1c448a3e3a3a066e", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b378e54c88d241aa917131beb65c96be3730f40c": [ - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "3268da152f1149675b5f1cfd03f97026128b9e09", - "0f733817e82026f7c29909a51cb4df7d2685f0e7", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b" - ], - "b5e9406a65de7384af041c357ca5481489345b73": [ - "3599a236f285af48782fc30b1341d13ec7320735", - "5d49c7401c5f2337c4cc88d243ae39ed659afe64" - ], - "b9d75f361b5310c6ddcddfe7858bb0416eb78de4": [ - "e070ff286709db28312e08b52b05539debe88146", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "47df3fd32d00220c85c2c51a571254fd99b2ecc7", - "521ccc898395a2818fced22b4cf371b0e5121f94", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c4561fd08636b5f5f6b9f3f6d89f3cee39e678b0": [ - "aad167be3c902388ea625da4117fcae4325b8b7d", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ca60126b2b534a3f1cd8007ba84fdbd163968770": [ - "bf8491bef353df126e2306ad2fe4b898697b906a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "d53945d4afb4528590d79e20de52883d29037e86": [], - "da5fcb26c830663b79c9aa1c550ae62e7725fcad": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "e1decb86f2a6aba8682d2fc4e427424b0b49e0d0": [], - "e4c466cf3df4887e0121561be90e0bac78d3e1cb": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ee025d7030d4767062af2bcd32a4d586737d30bf": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f1bb5051965a3a4c9288f0123dd03c26a08e1378": [ - "b5e9406a65de7384af041c357ca5481489345b73", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f208ea909fa7f54fea82def9a92fd81dfc758c39": [ - "03532123ccffae8d411264320e8a5ae2b6eddea0", - "e070ff286709db28312e08b52b05539debe88146", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "b2542a738b75ee9b7ce1a13d8b78f9095d212412", - "398e4061dde8f5c80606869cebfa2031de7b5b74", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "b21670e8061a06ab97e7d6052c9345a326e84ff8", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "f27f6d1d521d189e78f5623098ced0deea613d33": [ - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "a3a241e9397fe29b37f96cb5e8f4b8bebed3d3da", - "b115c1e1e9e51f8ad7d47b745bc04e29a654b84d", - "d03a9b2a0e090cc9fd2ba0a457ecea35372f1018", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "b2542a738b75ee9b7ce1a13d8b78f9095d212412", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "fccf8776d7525627c518a56a1f4db367a4d7120b": [ - "0392d58335ce674a70f5e58ac8c438de296a0e6a", - "0f733817e82026f7c29909a51cb4df7d2685f0e7", - "d3640eb3b542eaf36fee2261f037a6bf0d8eac9c", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2c2b40b4f1967dc1fb640c7c4bec140110dbf2cf": [], - "2e536dcd013be93dc1841dd0e7a0a87b2846f341": [], - "439c2a5c4883b421ca316617b1306583cc1d706c": [ - "6052486bc9144dc1730c12bf35323af3792a1fd0" - ], - 
"93e09c5feb9b2ffc8926b4edff13b3d8e02e41de": [], - "98090bbc7b784a1f64d4522c5e1987b196863fd0": [], - "07cd498aacfb4d39fa2e0e8d8a9c8ad881257300": [ - "0968f1592f9401d72bf0d97e740496818c1a3135", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "08e0e696732103e585fd629e23888fd4acbb22df": [ - "0968f1592f9401d72bf0d97e740496818c1a3135" - ], - "0b94b999fdd9488e1a0914d37f8fb3ea7e9ea0fd": [ - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691" - ], - "221e801f9a39ff055773b2a20d91e3efadbea921": [], - "27d80545d142ced9b921290b5b2798cabd55468b": [ - "020e473d8c987dcfb03fcfffeb87b17812447031", - "4f53020119eba43891d4566df28466a92229b8fb", - "bf8491bef353df126e2306ad2fe4b898697b906a", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "cfc12c38a4d848ff3c4225488a2c72e7d4300f4b", - "0adec918885dff698acf359988ed79a543157f80", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2d90460431c093757fcf651e333bc0da5f5404c2": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3159478fbc81e562c812b9d5dc1891271b21f0c4": [ - "385376b8aa48c25403f17d6206db7c09b67e1314", - "040ec58865ab50b5e6d91a355ffc146ec5034e9f" - ], - "358d1d9eed69a6eadcda9996b3f13b0e0a356b88": [], - "3613299c54bbea66dd6db1b00573f7ade021a5a9": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3da79f3fe4e0ff1bb59efb34c8baa2bcf632c2b9": [], - "513b96c7d5d1f9a74afd9d946d5a7c83fe592869": [], - "579ee305d538a679d72b808ffe8322680561a177": [], - "59266e06cdb867c2541603f9d94e13f67d55938f": [ - "142ebbf4760145f591166bde2564ac70c001e927", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5e01b8383e9260b2e251274a6bad89677cb1bbd3": [], - "6b80c6e220ca2e2434f5a80b2eb5e8b645e97ae1": [ - "3fc3460c4554a28e489a0ea6ef067b79b7d301d9", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "7de25ad5ac7433e4d4071f450461b03fd2a39b8d": [], - "c4ff1be5c254b60b96b7455eefcc4ec9583f82ed": [ - "cd29c25c489562b409a60f83365f93f33ee1a0a1", - "3e30a7ac4886b28eb50151f58e14a1d698cccd0e", - "fc50a6202e2f675604543c1ae4ef22ec74f61ad5", - "661e8ac4908a9d2a85835245ea99b6a314cc4a60", - "70da4fb798a86cbe8cad96c27ced0415885bbd9d", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "5d49c7401c5f2337c4cc88d243ae39ed659afe64" - ], - "18a8b97d75a87e8fef07542d8875d4a62b553744": [ - "9fa3596d561f2d9de632030253c8d05c787d9e53", - "d4177489596748e43aa571f59556097f2cc4c8be", - "6987c95f7054d2653178ac93df52aa3c0b99fcf5", - "f330f502bf1e92fabf7f246597fa9320d956c0c8", - "fc50a6202e2f675604543c1ae4ef22ec74f61ad5" - ], - "2e7cc95145665bae4fa98b7f81b9d551f1b1c021": [ - "52350cfb03f4cccdbe141727334a5083bd613222", - "48abfc41a0abf023d2037ebb2f274835e0d322d0", - "34ff1da13770908ef0bf389365cdde743d3c9db1", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "722aa3bb6e426afc40f05c42a2fc0623adb51af9": [ - "ac7771c332da42b29a913b116bd6ef622cbf89cf", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a" - ], - "39f0d1b894130852ee9f39a5df58905a09645c81": [ - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - 
"c874aa93efe663ed31f2ec72d45a5dd4b4cdffba": [ - "d4177489596748e43aa571f59556097f2cc4c8be", - "8d17234680db76f99efd22fbcb169f45d2d79d93", - "1358f90705b05cdb20ebe6799b02196205e7e9f0", - "3fc3460c4554a28e489a0ea6ef067b79b7d301d9", - "37f0f1f55f44bff84aac27a346dd47d0c6c136e3", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "0f30612423381eb5d271c4ca4f4254149b0d22fa", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "355b66a65aee97822eb7404183ee72b18cb648de", - "0adec918885dff698acf359988ed79a543157f80", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0544cad023bf49bbf51d69f44f8280dc63b20f57": [ - "1eb1a8c7f88de27af224153f43ecdd41774600f2", - "04892382200a9d48ad5f8d3cb3cd3d63a8206a01", - "8d17234680db76f99efd22fbcb169f45d2d79d93", - "838e1317454724a9bb758d05d97e6058e11a8251", - "c76dd4a70361c3afd2e19d046343e2dedd16ecc3", - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "4c5f4ddc68be643fb34ea969bf2c105ff7538995", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "394ade86b0aee8f2b0ab09128eb8743bac1193d1": [ - "08b85bce712168998004ee80ce4e475390413c74", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "9ba50f992ccd92f428503ea6246157260a26cd77" - ], - "e3f1de20342370ef29c4e2b45ab5b67157bd8b57": [], - "077e8f6d633c2ee7a7ba82579ac3d1fb98740785": [ - "9ba50f992ccd92f428503ea6246157260a26cd77", - "0adec918885dff698acf359988ed79a543157f80", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0ab79543d98e375b9de1354766c024e165cc2369": [ - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "dcbf62f17dad0f4554f91c822d141fb92f78429a", - "316206a2f89eb94ce02a81fba1dc304586f21b39", - "0adec918885dff698acf359988ed79a543157f80", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "82c39d297d4ca723e6faa4bfe0dd7cc9918d623f": [], - "20487deedce041069721992efc574d84837c8106": [], - "95dae66002ebe9e560c01f6aa1ad412d7362c15c": [ - "e7c85d7d58d4b1fde4be8a8f166e46c995dc0f1b", - "aade40af0d85b0b4fe15c97f6222d5c2e4d6d9b3", - "507acddb0b7f36b83fd7c8bff2f121eb506ac8fb", - "8236010c2ecc94d826be6010ff187fdc000e7df6", - "4780d0a027c5c5a8e01d7cf697f6296880ffc945", - "9a9b1e2968302eb882870537d4af6e2c722dfd1a", - "22d5459d1f47341b355feeb1becc37208d6ec365", - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "62176de125738e3b95850d1227bac81fd646b78e", - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "b626560f19f815808a289ef5c24a17c57320da70", - "3fc3460c4554a28e489a0ea6ef067b79b7d301d9", - "e070ff286709db28312e08b52b05539debe88146", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "5581bf85386737bd3378eec68189759a05280bea", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ac113be880a94ed0f3108fbe0476824320b27b88": [], - "a33b11069721c8912e03f8555cb4aee71f0461b4": [ - "3159478fbc81e562c812b9d5dc1891271b21f0c4", - "344f801663a76aa15e0dd13344261d8648c382a2" - ], - 
"b1fea3027cc5414265ee8b9add782db91a70aa28": [ - "4a7530bbaee7563ee244f3ffed6b706bd96f08a8", - "823fb3163a5600b5be957fc8337a9f7cdd177fef", - "da3aca9d7b50da823f669c983edeb60445720fe0", - "0ea7fc93d4947d9024ccaa202987a2070683bc1f", - "08b85bce712168998004ee80ce4e475390413c74", - "29bd550d0ab53296790ceba31dfe0a06754bcdde", - "89744cbaa080c82785b1cb8d54710bbbca32f8ed", - "d03a9b2a0e090cc9fd2ba0a457ecea35372f1018", - "e7cfc3362dd85b17c747e9f9636749696f87a88b", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "46d64d0c1dd240f5035b1af57e738b3f70850ca2", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "9ba50f992ccd92f428503ea6246157260a26cd77", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "93a10102cdd501fdc1ccc79868416313fe719e01": [ - "b2542a738b75ee9b7ce1a13d8b78f9095d212412", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "90c0759c2004e5c2d6f9a7fb913ccb1bea548809": [ - "f79ad114fa68b58c59b16339c88b06d7b285baa5" - ], - "56f77cc79143aecd677e52429fcfe50c8f47f01a": [], - "a607f5fa6b7f9606584eb93a2afd5225b71a88c8": [ - "8d9ca1e2c703e2752a4904c967a65d45d0bef5f6", - "5db0f55332839c408e3049cea1a6ad48fefba70c", - "4780d0a027c5c5a8e01d7cf697f6296880ffc945", - "49b499598a8864eee55ab264fc16a5bf8d2f87ef" - ], - "1f21b90600045d41c5f584843d7dd97a88773708": [], - "3f83582c08a62e5bd02398fafc93f7eaf1e4b84e": [ - "341bdbcfc3febef7691a97c216ad394653211095", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6b85b63579a916f705a8e10a49bd8d849d91b1fc": [], - "cfd7550d6c25f4771549726b289665c8aa5ae1be": [ - "4c0428917aeee6aa7bd434f337d039f35996b736", - "c589ddc6c6fb07189af7c1212f6eb15c5ff72cde", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1c1ca2392155ddf30408a442e6b504b5d60d4f2a": [ - "5d49c7401c5f2337c4cc88d243ae39ed659afe64", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "256ef1f8d0ea2982cc50d3e85e5f1b4920f037fe": [ - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "11b95e33f1a61079105e06090984b9dd8742887e": [ - "8236010c2ecc94d826be6010ff187fdc000e7df6", - "7dc928f41e15f65f1267bd87b0fcfcc7e715cb56", - "e070ff286709db28312e08b52b05539debe88146", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "c88cafa3e980765a64febe369ceb7c2aa7261d2a": [ - "663a41c866d49ce052801fbc88947d39764cad29", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "b17cc18e4130505b939f7d527082eb6be2a7fd5b", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "c70eb74e09c41e8fcc71dd59e3b4d631f657f7cd", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3fc3460c4554a28e489a0ea6ef067b79b7d301d9": [ - "711d5e8ddbb840ad31a9ffa3d38590603ba69a92", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "40047a74b707743157051d38f76061ba5ff9aab4", - 
"86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "73207b9fd2dcfeead7fe086cfdb097e4929a7b44": [ - "bda605928d6ebe4db906e69ab5d343df75918727", - "6a2d96d2a7adde6349f15c1e680b67d114e7b67c", - "7c1707db9aafd209aa93db3251e7ebd593d55876", - "dc385646887a3669ae0ee506a263d592f4f7c7a6", - "236445f0a3b1e30b2542e5e64616ff6a8af7e3ea", - "48abfc41a0abf023d2037ebb2f274835e0d322d0", - "69619a2a47faee7a29ec596db13172e2a42ff921", - "0942bd8fad71282994ff4e9a779c09745da68edc", - "34ff1da13770908ef0bf389365cdde743d3c9db1", - "515cf674fcdced5a7d5bb156dd5fcc1f5290e79b", - "38e1a9c5599fc7597b7c5ffd37951ba5f528094c", - "e070ff286709db28312e08b52b05539debe88146", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "142ebbf4760145f591166bde2564ac70c001e927", - "b17cc18e4130505b939f7d527082eb6be2a7fd5b", - "316206a2f89eb94ce02a81fba1dc304586f21b39", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "b21670e8061a06ab97e7d6052c9345a326e84ff8", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "341bdbcfc3febef7691a97c216ad394653211095", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "59641c10ed7431a3cf841f308367dc2dc0281b74" - ], - "e070ff286709db28312e08b52b05539debe88146": [ - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "99752e255a866484291866a5ff5cf94e96d6bdc4", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "1358f90705b05cdb20ebe6799b02196205e7e9f0": [ - "b8bd29a6104d26a16687400049a4e7e026ae6258", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "86d0d3855f94105e25d81cab9f3d269c6062a9c4", - "b17cc18e4130505b939f7d527082eb6be2a7fd5b", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "341bdbcfc3febef7691a97c216ad394653211095", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "6289de84a02f0c27734f295ada565603ac958948": [ - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "99752e255a866484291866a5ff5cf94e96d6bdc4": [ - "3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5437e8adab596d7294124c0e798708e050e25321": [ - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "d3640eb3b542eaf36fee2261f037a6bf0d8eac9c", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820": [ - "62176de125738e3b95850d1227bac81fd646b78e", - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "50b0c6ee2b3d53ba5af69d6c00b5d60888a9026f", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "bda605928d6ebe4db906e69ab5d343df75918727": [ - "1358f90705b05cdb20ebe6799b02196205e7e9f0", - 
"4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "166b64f2ae8e52f5779682fab756cbd617a6e74b", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "507acddb0b7f36b83fd7c8bff2f121eb506ac8fb": [ - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "bda605928d6ebe4db906e69ab5d343df75918727", - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "b115c1e1e9e51f8ad7d47b745bc04e29a654b84d", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "5581bf85386737bd3378eec68189759a05280bea", - "e826ac71dad8c4ce36d82fb7add43e3d306bb7e1", - "5437e8adab596d7294124c0e798708e050e25321", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "80d0116d77beeded0c23cf48946d9d10d4faee14", - "d3640eb3b542eaf36fee2261f037a6bf0d8eac9c", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "aade40af0d85b0b4fe15c97f6222d5c2e4d6d9b3": [ - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "bda605928d6ebe4db906e69ab5d343df75918727", - "62176de125738e3b95850d1227bac81fd646b78e", - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "1358f90705b05cdb20ebe6799b02196205e7e9f0", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "e3a9af420cd2c0c8241856da92374027fefb87be", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "0f733817e82026f7c29909a51cb4df7d2685f0e7", - "166b64f2ae8e52f5779682fab756cbd617a6e74b", - "d3640eb3b542eaf36fee2261f037a6bf0d8eac9c", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "ef5f7cd21b5d34797636239a7b9c8ba6af440aab": [ - "07955e96cbd778d0ae2a68f09d073b866dd84c2a", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321" - ], - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691": [ - "62176de125738e3b95850d1227bac81fd646b78e", - "0d42221038c05cee8443c5b5af838505ee137dc3", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "663a41c866d49ce052801fbc88947d39764cad29", - "e66f0f822d4c4853b39b27daaafa2993005fd55e", - "d96997265f8146e93b4c9350f19d55e46d1317f0", - "40047a74b707743157051d38f76061ba5ff9aab4", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "87e02a265606f31e65986f3c1c448a3e3a3a066e", - "80d0116d77beeded0c23cf48946d9d10d4faee14", - "10bd4160b44803ada6a3d2e366c44b7e2a4ffe90", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "b115c1e1e9e51f8ad7d47b745bc04e29a654b84d": [ - "7dc928f41e15f65f1267bd87b0fcfcc7e715cb56", - "dca6c3927ade6481a1ae080f5c24decbfeced1be", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "50b0c6ee2b3d53ba5af69d6c00b5d60888a9026f", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "3d473cbb7a377cf960abff31748a1a39bb6c7d7c": [ - "aade40af0d85b0b4fe15c97f6222d5c2e4d6d9b3", - "507acddb0b7f36b83fd7c8bff2f121eb506ac8fb", - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "e070ff286709db28312e08b52b05539debe88146", - "e826ac71dad8c4ce36d82fb7add43e3d306bb7e1", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - 
"d53e70d834243d3d8d4b621c0c52dfec26081155": [ - "b58d8579ece27a60432e667bfbdb750590fa65d9", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5f19ae1135a9500940978104ec15a5b8751bc7d2": [ - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "9ffefdf1fcd780cb71450b0a7a29247c66aa87be", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "85e7d63f75c0916bd350a229e040c5fbb1472e7a", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "5f66d1a667eec13b5d337c3fc5619bcef95092bd": [ - "f8a2dca1e8fe56e698984c077f7ff58d8ca867e9", - "815c6ca281536d18ec0eb408b6e46e72a0826163", - "e826ac71dad8c4ce36d82fb7add43e3d306bb7e1", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "a38e0f993e4805ba8a9beae4c275c91ffcec01df" - ], - "142ebbf4760145f591166bde2564ac70c001e927": [ - "5f19ae1135a9500940978104ec15a5b8751bc7d2" - ], - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f": [ - "e5c72b92c48d68594b290c84a8904da7c8335554", - "1786a2f9140ed7211b21302977de64e948b92308", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "e070ff286709db28312e08b52b05539debe88146", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "7715ba5e75f5256e1061c7473afe61bb0dbb9065": [ - "29bd550d0ab53296790ceba31dfe0a06754bcdde", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "baf63d7cf115d674a8c8da3a3d789aa84521977a", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "8236010c2ecc94d826be6010ff187fdc000e7df6": [ - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "a3a241e9397fe29b37f96cb5e8f4b8bebed3d3da", - "3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e", - "b115c1e1e9e51f8ad7d47b745bc04e29a654b84d", - "6c943670dca38bfc7c8b477ae7c2d1fba1ad3691", - "4d17732d90440682b0500f4e209c6cc4fac20e0e", - "711d5e8ddbb840ad31a9ffa3d38590603ba69a92", - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "40047a74b707743157051d38f76061ba5ff9aab4", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "13a0d8bb38f739990c8cd65a44061c6534f17221", - "341bdbcfc3febef7691a97c216ad394653211095", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "4b0b56be0ae9479d2bd5c2f0943db1906343c10f": [ - "8236010c2ecc94d826be6010ff187fdc000e7df6", - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "7c1707db9aafd209aa93db3251e7ebd593d55876", - "e5c72b92c48d68594b290c84a8904da7c8335554", - "66242baf48b0f6b828e7547ac39ffaa5e1b2cb3e", - "e070ff286709db28312e08b52b05539debe88146", - "142ebbf4760145f591166bde2564ac70c001e927", - "5f19ae1135a9500940978104ec15a5b8751bc7d2" - ], - "50b0c6ee2b3d53ba5af69d6c00b5d60888a9026f": [ - "341bdbcfc3febef7691a97c216ad394653211095", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "521ccc898395a2818fced22b4cf371b0e5121f94", - "0adec918885dff698acf359988ed79a543157f80", - "56fa0b9cba4d9aee5ccc327365b3b3a721031c69", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "01286de359fa9dd3d8f78c48157a3e929533d94e": [ - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "30c0cdc414f68211d5d0514df027cec22e005174", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "663a41c866d49ce052801fbc88947d39764cad29", - "de32da8f5c6a50a6c311e9357ba16aa7d05a1bc9", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "4e5f7cd537a1bbcd090f9887b1b59f39a3715dba" - ], 
- "e7c85d7d58d4b1fde4be8a8f166e46c995dc0f1b": [ - "aade40af0d85b0b4fe15c97f6222d5c2e4d6d9b3", - "4780d0a027c5c5a8e01d7cf697f6296880ffc945", - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "7dc928f41e15f65f1267bd87b0fcfcc7e715cb56", - "62176de125738e3b95850d1227bac81fd646b78e", - "ccc772d88c231275f24c4fac9b28bbe0942e1107", - "5f5253fb15ac382e96ade0335baf1cfaa240fb1d", - "b115c1e1e9e51f8ad7d47b745bc04e29a654b84d", - "6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7", - "663a41c866d49ce052801fbc88947d39764cad29", - "c88cafa3e980765a64febe369ceb7c2aa7261d2a", - "e826ac71dad8c4ce36d82fb7add43e3d306bb7e1", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "0aa150619e07fa41492517368beaaf8ae56fe061": [ - "3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2" - ], - "717392dac099d1506b766787382d61b277863163": [ - "90350aa626bed47b02d0c162462e5b0ca82be6b2", - "4988b3d378b79eb8669112620baf1ff4e3e536fd", - "07759a84f27e43cfa5bc8d579f8227c96e6ae1dc", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "f9838a3be5c94bb2674a0e224de349b50e18f3c4", - "59641c10ed7431a3cf841f308367dc2dc0281b74", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ], - "850538c1759c56a9f2dab8e84ec63801c41d6396": [ - "5aa834e8088818e6ba138d5a1b7b66f69d619555", - "4b0b56be0ae9479d2bd5c2f0943db1906343c10f", - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e", - "e070ff286709db28312e08b52b05539debe88146", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321" - ], - "5aa834e8088818e6ba138d5a1b7b66f69d619555": [ - "ed40889e11e812ef33578506844be06d713f6092", - "2f3822eb380b5e753a6d579f31dfc3ec4c4a0820", - "7dc928f41e15f65f1267bd87b0fcfcc7e715cb56", - "c76dd4a70361c3afd2e19d046343e2dedd16ecc3", - "385376b8aa48c25403f17d6206db7c09b67e1314", - "3aaf6a2cbad5850ad81ab5c163599cb3d523436f", - "08b85bce712168998004ee80ce4e475390413c74", - "69619a2a47faee7a29ec596db13172e2a42ff921", - "4610ffb1b016acaa82a2065ffd1a3adbae1ce722", - "66242baf48b0f6b828e7547ac39ffaa5e1b2cb3e", - "e070ff286709db28312e08b52b05539debe88146", - "50b0c6ee2b3d53ba5af69d6c00b5d60888a9026f", - "e7ad08848d5d7c5c47673ffe0da06af443643bda", - "5437e8adab596d7294124c0e798708e050e25321", - "5f19ae1135a9500940978104ec15a5b8751bc7d2", - "d53e70d834243d3d8d4b621c0c52dfec26081155", - "ac3cdb50606f7770eef8e4cd951840a4f71287a0", - "6b85b63579a916f705a8e10a49bd8d849d91b1fc" - ] -} \ No newline at end of file diff --git a/data/prompt_engineering_arxiv.csv b/data/prompt_engineering_arxiv.csv deleted file mode 100644 index f526d5b..0000000 --- a/data/prompt_engineering_arxiv.csv +++ /dev/null @@ -1,15116 +0,0 @@ -Title,Link -"Prompting AI Art: An Investigation into the Creative Skill of Prompt - Engineering",http://arxiv.org/abs/2303.13534v1 -"Unleashing the potential of prompt engineering in Large Language Models: - a comprehensive review",http://arxiv.org/abs/2310.14735v1 -"Prompt Engineering or Fine Tuning: An Empirical Assessment of Large - Language Models in Automated Software Engineering Tasks",http://arxiv.org/abs/2310.10508v1 -Prompt Engineering For Students of Medicine and Their Teachers,http://arxiv.org/abs/2308.11628v1 -"An Empirical Evaluation of Prompting Strategies for Large Language - Models in Zero-Shot Clinical Natural Language 
Processing",http://arxiv.org/abs/2309.08008v1 -"A Systematic Survey of Prompt Engineering on Vision-Language Foundation - Models",http://arxiv.org/abs/2307.12980v1 -"PACE: Improving Prompt with Actor-Critic Editing for Large Language - Model",http://arxiv.org/abs/2308.10088v1 -"Prompt Space Optimizing Few-shot Reasoning Success with Large Language - Models",http://arxiv.org/abs/2306.03799v1 -Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study,http://arxiv.org/abs/2305.13860v1 -A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT,http://arxiv.org/abs/2302.11382v1 -"LoGoPrompt: Synthetic Text Images Can Be Good Visual Prompts for - Vision-Language Models",http://arxiv.org/abs/2309.01155v2 -"A Simple Zero-shot Prompt Weighting Technique to Improve Prompt - Ensembling in Text-Image Models",http://arxiv.org/abs/2302.06235v2 -"PromptMagician: Interactive Prompt Engineering for Text-to-Image - Creation",http://arxiv.org/abs/2307.09036v2 -Optimizing Prompts for Text-to-Image Generation,http://arxiv.org/abs/2212.09611v1 -Structured Chain-of-Thought Prompting for Code Generation,http://arxiv.org/abs/2305.06599v3 -"QaNER: Prompting Question Answering Models for Few-shot Named Entity - Recognition",http://arxiv.org/abs/2203.01543v2 -"Toward Human Readable Prompt Tuning: Kubrick's The Shining is a good - movie, and a good prompt too?",http://arxiv.org/abs/2212.10539v1 -A Brief History of Prompt: Leveraging Language Models,http://arxiv.org/abs/2310.04438v1 -Patch-Token Aligned Bayesian Prompt Learning for Vision-Language Models,http://arxiv.org/abs/2303.09100v1 -Prompt position really matters in few-shot and zero-shot NLU tasks,http://arxiv.org/abs/2305.14493v2 -PRE: Vision-Language Prompt Learning with Reparameterization Encoder,http://arxiv.org/abs/2309.07760v1 -Review of Large Vision Models and Visual Prompt Engineering,http://arxiv.org/abs/2307.00855v1 -"An Information-theoretic Approach to Prompt Engineering Without Ground - Truth Labels",http://dx.doi.org/10.18653/v1/2022.acl-long.60 -Unsupervised Prompt Learning for Vision-Language Models,http://arxiv.org/abs/2204.03649v2 -"User-friendly Image Editing with Minimal Text Input: Leveraging - Captioning and Injection Techniques",http://arxiv.org/abs/2306.02717v1 -"Interactive and Visual Prompt Engineering for Ad-hoc Task Adaptation - with Large Language Models",http://arxiv.org/abs/2208.07852v1 -Prompt Stealing Attacks Against Text-to-Image Generation Models,http://arxiv.org/abs/2302.09923v1 -Large Language Models Are Human-Level Prompt Engineers,http://arxiv.org/abs/2211.01910v2 -"Prompts Matter: Insights and Strategies for Prompt Engineering in - Automated Software Traceability",http://arxiv.org/abs/2308.00229v1 -Revisiting Prompt Engineering via Declarative Crowdsourcing,http://arxiv.org/abs/2308.03854v1 -Prompt Performance Prediction for Generative IR,http://arxiv.org/abs/2306.08915v1 -"Cases of EFL Secondary Students' Prompt Engineering Pathways to Complete - a Writing Task with ChatGPT",http://dx.doi.org/10.13140/RG.2.2.31464.85762 -Prompt Engineering for Healthcare: Methodologies and Applications,http://arxiv.org/abs/2304.14670v1 -"Prompt Sapper: LLM-Empowered Software Engineering Infrastructure for - AI-Native Services",http://arxiv.org/abs/2306.02230v1 -A Taxonomy of Prompt Modifiers for Text-To-Image Generation,http://arxiv.org/abs/2204.13988v3 -Protect Your Prompts: Protocols for IP Protection in LLM Applications,http://arxiv.org/abs/2306.06297v1 -"Optimizing Mobile-Edge AI-Generated Everything (AIGX) Services by 
Prompt - Engineering: Fundamental, Framework, and Case Study",http://arxiv.org/abs/2309.01065v1 -"PEACE: Prompt Engineering Automation for CLIPSeg Enhancement in Aerial - Robotics",http://arxiv.org/abs/2310.00085v1 -Manipulating Embeddings of Stable Diffusion Prompts,http://arxiv.org/abs/2308.12059v1 -Rationale-Augmented Ensembles in Language Models,http://arxiv.org/abs/2207.00747v1 -"Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language - Models",http://arxiv.org/abs/2209.07511v1 -"Promptify: Text-to-Image Generation through Interactive Prompt - Exploration with Large Language Models",http://arxiv.org/abs/2304.09337v1 -"StudentEval: A Benchmark of Student-Written Prompts for Large Language - Models of Code",http://arxiv.org/abs/2306.04556v1 -Can Prompt Learning Benefit Radiology Report Generation?,http://arxiv.org/abs/2308.16269v1 -"ChainForge: A Visual Toolkit for Prompt Engineering and LLM Hypothesis - Testing",http://arxiv.org/abs/2309.09128v1 -"Understanding prompt engineering may not require rethinking - generalization",http://arxiv.org/abs/2310.03957v1 -Prompt Sapper: A LLM-Empowered Production Tool for Building AI Chains,http://arxiv.org/abs/2306.12028v1 -"Making Pre-trained Language Models End-to-end Few-shot Learners with - Contrastive Prompt Tuning",http://arxiv.org/abs/2204.00166v1 -"IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image - Diffusion Models",http://arxiv.org/abs/2308.06721v1 -Prompt Engineering and Calibration for Zero-Shot Commonsense Reasoning,http://arxiv.org/abs/2304.06962v1 -Time lag between prompt optical emission and gamma-rays in GRBs,http://dx.doi.org/10.1051/0004-6361:20065547 -"Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with - Language Models",http://arxiv.org/abs/2106.13353v2 -Polyglot Prompt: Multilingual Multitask PrompTraining,http://arxiv.org/abs/2204.14264v2 -Controllable Image Captioning via Prompting,http://arxiv.org/abs/2212.01803v1 -"Prompting the Hidden Talent of Web-Scale Speech Models for Zero-Shot - Task Generalization",http://arxiv.org/abs/2305.11095v3 -ContrastNER: Contrastive-based Prompt Tuning for Few-shot NER,http://dx.doi.org/10.1109/COMPSAC57700.2023.00038 -SAMAug: Point Prompt Augmentation for Segment Anything Model,http://arxiv.org/abs/2307.01187v1 -On Conditional and Compositional Language Model Differentiable Prompting,http://arxiv.org/abs/2307.01446v1 -Prompt-Enhanced Software Vulnerability Detection Using ChatGPT,http://arxiv.org/abs/2308.12697v1 -A Practical Survey on Zero-shot Prompt Design for In-context Learning,http://dx.doi.org/10.26615/978-954-452-092-2_069 -VPA: Fully Test-Time Visual Prompt Adaptation,http://arxiv.org/abs/2309.15251v1 -SPELL: Semantic Prompt Evolution based on a LLM,http://arxiv.org/abs/2310.01260v1 -Investigating Prompt Engineering in Diffusion Models,http://arxiv.org/abs/2211.15462v1 -"DirectGPT: A Direct Manipulation Interface to Interact with Large - Language Models",http://arxiv.org/abs/2310.03691v1 -"Tailored Visions: Enhancing Text-to-Image Generation with Personalized - Prompt Rewriting",http://arxiv.org/abs/2310.08129v1 -"CoPrompt: Supporting Prompt Sharing and Referring in Collaborative - Natural Language Programming",http://arxiv.org/abs/2310.09235v1 -"Testing the Curvature Effect and Internal Origin of Gamma-Ray Burst - Prompt Emissions and X-ray Flares with Swift Data",http://dx.doi.org/10.1086/504684 -"Large Language Models in the Workplace: A Case Study on Prompt - Engineering for Job Type 
Classification",http://arxiv.org/abs/2303.07142v3 -"Learning to Prompt for Open-Vocabulary Object Detection with - Vision-Language Model",http://arxiv.org/abs/2203.14940v1 -"No More Fine-Tuning? An Experimental Evaluation of Prompt Tuning in Code - Intelligence",http://dx.doi.org/10.1145/3540250.3549113 -"Not what you've signed up for: Compromising Real-World LLM-Integrated - Applications with Indirect Prompt Injection",http://arxiv.org/abs/2302.12173v2 -Prompt Injection attack against LLM-integrated Applications,http://arxiv.org/abs/2306.05499v1 -Reverse Stable Diffusion: What prompt was used to generate this image?,http://arxiv.org/abs/2308.01472v1 -"Connecting Large Language Models with Evolutionary Algorithms Yields - Powerful Prompt Optimizers",http://arxiv.org/abs/2309.08532v1 -Just Tell Me: Prompt Engineering in Business Process Management,http://arxiv.org/abs/2304.07183v1 -"How understanding large language models can inform their use in physics - education",http://arxiv.org/abs/2309.12074v1 -"How does prompt engineering affect ChatGPT performance on unsupervised - entity resolution?",http://arxiv.org/abs/2310.06174v1 -Legal Prompting: Teaching a Language Model to Think Like a Lawyer,http://arxiv.org/abs/2212.01326v2 -Improving ChatGPT Prompt for Code Generation,http://arxiv.org/abs/2305.08360v1 -"Exploring EFL students' prompt engineering in human-AI story writing: an - Activity Theory perspective",http://arxiv.org/abs/2306.01798v1 -Detecting Natural Language Biases with Prompt-based Learning,http://arxiv.org/abs/2309.05227v1 -GPTutor: an open-source AI pair programming tool alternative to Copilot,http://arxiv.org/abs/2310.13896v2 -"The Infinite Index: Information Retrieval on Generative Text-To-Image - Models",http://dx.doi.org/10.1145/3576840.3578327 -"BIM-GPT: a Prompt-Based Virtual Assistant Framework for BIM Information - Retrieval",http://arxiv.org/abs/2304.09333v1 -Small Language Models Improve Giants by Rewriting Their Outputs,http://arxiv.org/abs/2305.13514v1 -What's the Magic Word? 
A Control Theory of LLM Prompting,http://arxiv.org/abs/2310.04444v2 -"Benchmarking and Explaining Large Language Model-based Code Generation: - A Causality-Centric Approach",http://arxiv.org/abs/2310.06680v1 -Repository-Level Prompt Generation for Large Language Models of Code,http://arxiv.org/abs/2206.12839v3 -AutoHint: Automatic Prompt Optimization with Hint Generation,http://arxiv.org/abs/2307.07415v2 -"Prompt-Free Diffusion: Taking ""Text"" out of Text-to-Image Diffusion - Models",http://arxiv.org/abs/2305.16223v2 -"ChatGPT Prompt Patterns for Improving Code Quality, Refactoring, - Requirements Elicitation, and Software Design",http://arxiv.org/abs/2303.07839v1 -"Grimm in Wonderland: Prompt Engineering with Midjourney to Illustrate - Fairytales",http://arxiv.org/abs/2302.08961v2 -"ChatGPT4PCG Competition: Character-like Level Generation for Science - Birds",http://arxiv.org/abs/2303.15662v2 -Large Language and Text-to-3D Models for Engineering Design Optimization,http://arxiv.org/abs/2307.01230v1 -"I'm Afraid I Can't Do That: Predicting Prompt Refusal in Black-Box - Generative Language Models",http://arxiv.org/abs/2306.03423v2 -"Exploring Small Language Models with Prompt-Learning Paradigm for - Efficient Domain-Specific Text Classification",http://arxiv.org/abs/2309.14779v1 -"Promptor: A Conversational and Autonomous Prompt Generation Agent for - Intelligent Text Entry Techniques",http://arxiv.org/abs/2310.08101v2 -ChatGPT for PLC/DCS Control Logic Generation,http://arxiv.org/abs/2305.15809v1 -Design Guidelines for Prompt Engineering Text-to-Image Generative Models,http://arxiv.org/abs/2109.06977v3 -Toxicity Detection with Generative Prompt-based Inference,http://arxiv.org/abs/2205.12390v1 -RAPGen: An Approach for Fixing Code Inefficiencies in Zero-Shot,http://arxiv.org/abs/2306.17077v1 -"Optimizing Machine Translation through Prompt Engineering: An - Investigation into ChatGPT's Customizability",http://arxiv.org/abs/2308.01391v1 -"Open-Ended Instructable Embodied Agents with Memory-Augmented Large - Language Models",http://arxiv.org/abs/2310.15127v1 -"Connecting the early afterglow to the prompt GRB and the central engine - in the striped jet model",http://dx.doi.org/10.1093/mnras/stad1865 -Long lived central engines in Gamma Ray Bursts,http://dx.doi.org/10.1063/1.3027898 -Arguments to Key Points Mapping with Prompt-based Learning,http://arxiv.org/abs/2211.14995v1 -"Data-Driven Approach for Formality-Sensitive Machine Translation: - Language-Specific Handling and Synthetic Data Generation",http://arxiv.org/abs/2306.14514v2 -"Exploring the Intersection of Large Language Models and Agent-Based - Modeling via Prompt Engineering",http://arxiv.org/abs/2308.07411v1 -Zero-Shot and Few-Shot Video Question Answering with Multi-Modal Prompts,http://arxiv.org/abs/2309.15915v1 -Learning to Prompt for Vision-Language Models,http://dx.doi.org/10.1007/s11263-022-01653-1 -"Hyperaccreting Black Hole as Gamma-Ray Burst Central Engine. II. - Temporal evolution of central engine parameters during Prompt and Afterglow - Phases",http://dx.doi.org/10.3847/1538-4357/aa9074 -"Prompt Optical Emission from Gamma-ray Bursts with Non-single Timescale - Variability of Central Engine Activities",http://dx.doi.org/10.1088/1674-4527/14/4/004 -Log Parsing with Prompt-based Few-shot Learning,http://arxiv.org/abs/2302.07435v1 -Modeling Gamma-Ray Burst X-Ray Flares within the Internal Shock Model,http://dx.doi.org/10.1088/0004-637X/707/2/1623 -"Don't Complete It! 
Preventing Unhelpful Code Completion for Productive - and Sustainable Neural Code Completion Systems",http://arxiv.org/abs/2209.05948v2 -"Automatic Prompt Augmentation and Selection with Chain-of-Thought from - Labeled Data",http://arxiv.org/abs/2302.12822v1 -"Dr ChatGPT, tell me what I want to hear: How prompt knowledge impacts - health answer correctness",http://arxiv.org/abs/2302.13793v1 -AceCoder: Utilizing Existing Code to Enhance Code Generation,http://arxiv.org/abs/2303.17780v3 -"ChatGPT-HealthPrompt. Harnessing the Power of XAI in Prompt-Based - Healthcare Decision Support using ChatGPT",http://arxiv.org/abs/2308.09731v1 -"Thought Propagation: An Analogical Approach to Complex Reasoning with - Large Language Models",http://arxiv.org/abs/2310.03965v2 -"Testing High-latitude Curvature Effect of Gamma-Ray Bursts with {\it - Fermi} Data: Evidence of Bulk Acceleration in Prompt Emission",http://dx.doi.org/10.3847/1538-4365/abded1 -"A Chain of AI-based Solutions for Resolving FQNs and Fixing Syntax - Errors in Partial Code",http://arxiv.org/abs/2306.11981v1 -"Prompting Code Interpreter to Write Better Unit Tests on Quixbugs - Functions",http://arxiv.org/abs/2310.00483v1 -"A Link between Prompt Optical and Prompt Gamma-Ray Emission in Gamma-Ray - Bursts",http://dx.doi.org/10.1038/nature03515 -"Gamma Ray Burst engine activity within the quark nova scenario: Prompt - emission, X-ray Plateau, and sharp drop-off",http://dx.doi.org/10.1111/j.1365-2966.2008.13465.x -"Simulating H.P. Lovecraft horror literature with the ChatGPT large - language model",http://arxiv.org/abs/2305.03429v1 -A Universal Central Engine Hypothesis for Short and Long GRBs,http://dx.doi.org/10.1088/0004-637X/690/1/L61 -Diagnosing GRB Prompt Emission Site with Spectral Cut-Off Energy,http://dx.doi.org/10.1111/j.1745-3933.2007.00411.x -UPRISE: Universal Prompt Retrieval for Improving Zero-Shot Evaluation,http://arxiv.org/abs/2303.08518v3 -"LLMSecEval: A Dataset of Natural Language Prompts for Security - Evaluations",http://arxiv.org/abs/2303.09384v1 -Code Detection for Hardware Acceleration Using Large Language Models,http://arxiv.org/abs/2307.10348v1 -ECO: Ensembling Context Optimization for Vision-Language Models,http://arxiv.org/abs/2307.14063v1 -"LLM Self Defense: By Self Examination, LLMs Know They Are Being Tricked",http://arxiv.org/abs/2308.07308v3 -"Data-to-text Generation for Severely Under-Resourced Languages with - GPT-3.5: A Bit of Help Needed from Google Translate",http://arxiv.org/abs/2308.09957v1 -"LLMs Killed the Script Kiddie: How Agents Supported by Large Language - Models Change the Landscape of Network Threat Testing",http://arxiv.org/abs/2310.06936v1 -Multimodal Large Language Model for Visual Navigation,http://arxiv.org/abs/2310.08669v1 -End-to-End Software Construction using ChatGPT: An Experience Report,http://arxiv.org/abs/2310.14843v1 -"ConES: Concept Embedding Search for Parameter Efficient Tuning Large - Vision Language Models",http://arxiv.org/abs/2305.18993v1 -"An implementation of the ""Guess who?"" game using CLIP",http://dx.doi.org/10.1007/978-3-030-91608-4_41 -What GPT Knows About Who is Who,http://arxiv.org/abs/2205.07407v1 -Looking for a Handsome Carpenter! Debiasing GPT-3 Job Advertisements,http://arxiv.org/abs/2205.11374v1 -Legal Prompt Engineering for Multilingual Legal Judgement Prediction,http://arxiv.org/abs/2212.02199v1 -"What does CLIP know about a red circle? 
Visual prompt engineering for - VLMs",http://arxiv.org/abs/2304.06712v2 -Log Parsing: How Far Can ChatGPT Go?,http://arxiv.org/abs/2306.01590v2 -"Unsupervised Human Activity Recognition through Two-stage Prompting with - ChatGPT",http://arxiv.org/abs/2306.02140v1 -"Copilot for Xcode: Exploring AI-Assisted Programming by Prompting - Cloud-based Large Language Models",http://arxiv.org/abs/2307.14349v1 -"Linking microblogging sentiments to stock price movement: An application - of GPT-4",http://arxiv.org/abs/2308.16771v1 -Conceptual Design Generation Using Large Language Models,http://arxiv.org/abs/2306.01779v1 -Building Financial Accuracy into Spreadsheets,http://arxiv.org/abs/0805.4219v1 -Towards using Few-Shot Prompt Learning for Automating Model Completion,http://arxiv.org/abs/2212.03404v1 -"Differentiable Prompt Makes Pre-trained Language Models Better Few-shot - Learners",http://arxiv.org/abs/2108.13161v7 -CLIP-Adapter: Better Vision-Language Models with Feature Adapters,http://arxiv.org/abs/2110.04544v1 -On Measuring Social Biases in Prompt-Based Multi-Task Learning,http://arxiv.org/abs/2205.11605v1 -"PromptonomyViT: Multi-Task Prompt Learning Improves Video Transformers - using Synthetic Scene Data",http://arxiv.org/abs/2212.04821v2 -A Prompt Log Analysis of Text-to-Image Generation Systems,http://dx.doi.org/10.1145/3543507.3587430 -"Evidence for self-organized criticality phenomena in prompt phase of - short gamma-ray bursts",http://dx.doi.org/10.3847/1538-4365/acc398 -"Sensitivity and Robustness of Large Language Models to Prompt Template - in Japanese Text Classification Tasks",http://arxiv.org/abs/2305.08714v2 -Exploring Continual Learning for Code Generation Models,http://arxiv.org/abs/2307.02435v1 -Enhancing CLIP with GPT-4: Harnessing Visual Descriptions as Prompts,http://arxiv.org/abs/2307.11661v2 -Observations of X-ray Flares from Gamma Ray Bursts,http://dx.doi.org/10.1063/1.2943557 -Flares In Long And Short Gamma Ray Bursts,http://dx.doi.org/10.1088/0004-637X/712/2/1172 -"Prompt Emission of Gamma-Ray Bursts in the High-density Environment of - Active Galactic Nuclei Accretion Disks",http://dx.doi.org/10.3847/2041-8213/ac98ad -"A study on Prompt Design, Advantages and Limitations of ChatGPT for Deep - Learning Program Repair",http://arxiv.org/abs/2304.08191v1 -ChatGPT for Robotics: Design Principles and Model Abilities,http://arxiv.org/abs/2306.17582v2 -"Multi-party Goal Tracking with LLMs: Comparing Pre-training, - Fine-tuning, and Prompt Engineering",http://arxiv.org/abs/2308.15231v1 -"From Misuse to Mastery: Enhancing Code Generation with Knowledge-Driven - AI Chaining",http://arxiv.org/abs/2309.15606v1 -"The First Survey of X-ray Flares from Gamma Ray Bursts Observed by - Swift: Spectral Properties and Energetics",http://dx.doi.org/10.1086/523296 -X-ray flare candidates in short gamma-ray bursts,http://dx.doi.org/10.1111/j.1365-2966.2011.19397.x -"LLM4CBI: Taming LLMs to Generate Effective Test Programs for Compiler - Bug Isolation",http://arxiv.org/abs/2307.00593v1 -"A complete sample of bright Swift Gamma-Ray Bursts: X-ray afterglow - luminosity and its correlation with the prompt emission",http://dx.doi.org/10.1111/j.1365-2966.2012.21489.x -"Domain Knowledge Matters: Improving Prompts with Fix Templates for - Repairing Python Type Errors",http://arxiv.org/abs/2306.01394v1 -"Quiescent times in gamma-ray bursts: II. 
Dormant periods in the central - engine?",http://dx.doi.org/10.1046/j.1365-8711.2001.04413.x -Automated Essay Scoring based on Two-Stage Learning,http://arxiv.org/abs/1901.07744v2 -"Prompting Is All You Need: Automated Android Bug Replay with Large - Language Models",http://arxiv.org/abs/2306.01987v2 -"Developing a Scalable Benchmark for Assessing Large Language Models in - Knowledge Graph Engineering",http://arxiv.org/abs/2308.16622v1 -"Constraints on millisecond magnetars as the engines of prompt emission - in gamma-ray bursts",http://dx.doi.org/10.1093/mnras/stx2095 -"LogPrompt: Prompt Engineering Towards Zero-Shot and Interpretable Log - Analysis",http://arxiv.org/abs/2308.07610v1 -"Geotechnical Parrot Tales (GPT): Harnessing Large Language Models in - geotechnical engineering",http://arxiv.org/abs/2304.02138v3 -"Polarization of the prompt gamma-ray emission from the gamma-ray burst - of 6 December 2002",http://dx.doi.org/10.1038/nature01612 -"Search for polarization from the prompt gamma-ray emission of GRB - 041219a with SPI on INTEGRAL",http://dx.doi.org/10.1086/510676 -X-ray Flares in Early GRB Afterglows,http://dx.doi.org/10.1098/rsta.2006.1970 -"Underlying global features of the x-ray light curves of {\it swift} - gamma-ray bursts",http://dx.doi.org/10.1088/2041-8205/719/2/L172 -"Accounting for the XRT early steep decay in models of the prompt GRB - emission",http://dx.doi.org/10.1051/0004-6361/201219339 -From the earliest pulses to the latest flares in long GRBs,http://dx.doi.org/10.1051/0004-6361/201732270 -A Simple and Effective Approach to the Story Cloze Test,http://arxiv.org/abs/1803.05547v1 -"An Empirical Study on Few-shot Knowledge Probing for Pretrained Language - Models",http://arxiv.org/abs/2109.02772v2 -"StyleMC: Multi-Channel Based Fast Text-Guided Image Generation and - Manipulation",http://arxiv.org/abs/2112.08493v1 -"Gamma-ray Burst Prompt Emission Spectrum and $E_p$ Evolution Patterns in - the ICMART Model",http://dx.doi.org/10.3847/1538-4357/ac46a8 -"Language Models in the Loop: Incorporating Prompting into Weak - Supervision",http://arxiv.org/abs/2205.02318v1 -"Text-Guided Synthesis of Artistic Images with Retrieval-Augmented - Diffusion Models",http://arxiv.org/abs/2207.13038v1 -"Will It Blend? 
Mixing Training Paradigms & Prompting for Argument - Quality Prediction",http://arxiv.org/abs/2209.08966v2 -"CoRRPUS: Code-based Structured Prompting for Neurosymbolic Story - Understanding",http://dx.doi.org/10.18653/v1/2023.findings-acl.832 -Prompting Large Language Models With the Socratic Method,http://arxiv.org/abs/2303.08769v2 -Exploring the Benefits of Visual Prompting in Differential Privacy,http://arxiv.org/abs/2303.12247v2 -"An Approach to Solving the Abstraction and Reasoning Corpus (ARC) - Challenge",http://arxiv.org/abs/2306.03553v1 -Trapping LLM Hallucinations Using Tagged Context Prompts,http://arxiv.org/abs/2306.06085v1 -Addressing Compiler Errors: Stack Overflow or Large Language Models?,http://arxiv.org/abs/2307.10793v1 -Backdoor Attacks for In-Context Learning with Language Models,http://arxiv.org/abs/2307.14692v1 -"Spellburst: A Node-based Interface for Exploratory Creative Coding with - Natural Language Prompts",http://dx.doi.org/10.1145/3586183.3606719 -"Incorporating Pre-trained Model Prompting in Multimodal Stock Volume - Movement Prediction",http://arxiv.org/abs/2309.05608v1 -LLM4VV: Developing LLM-Driven Testsuite for Compiler Validation,http://arxiv.org/abs/2310.04963v1 -"Contrastive Prompt Learning-based Code Search based on Interaction - Matrix",http://arxiv.org/abs/2310.06342v1 -"Forgetful Large Language Models: Lessons Learned from Using LLMs in - Robot Programming",http://arxiv.org/abs/2310.06646v1 -"CoLadder: Supporting Programmers with Hierarchical Code Generation in - Multi-Level Abstraction",http://arxiv.org/abs/2310.08699v1 -"The Utility of Large Language Models and Generative AI for Education - Research",http://arxiv.org/abs/2305.18125v1 -Deciphering the properties of the central engine in GRB collapsars,http://dx.doi.org/10.1093/mnras/staa1695 -On-Axis Orphan Afterglows,http://dx.doi.org/10.1016/S1384-1076(02)00202-6 -Gamma-Ray Burst Prompt Emission,http://dx.doi.org/10.1142/S021827181430002X -Jet or Shock Breakout? 
The Low-Luminosity GRB 060218,http://dx.doi.org/10.1093/mnras/stw1058 -"Flares in gamma-ray burst X-ray afterglows as prompt emission from - slightly misaligned structured jets",http://dx.doi.org/10.1093/mnras/stac938 -"P2P: Tuning Pre-trained Image Models for Point Cloud Analysis with - Point-to-Pixel Prompting",http://arxiv.org/abs/2208.02812v2 -Robot Behavior-Tree-Based Task Generation with Large Language Models,http://arxiv.org/abs/2302.12927v1 -EvoPrompting: Language Models for Code-Level Neural Architecture Search,http://arxiv.org/abs/2302.14838v2 -Towards Interpretable Mental Health Analysis with Large Language Models,http://arxiv.org/abs/2304.03347v4 -Inducing anxiety in large language models increases exploration and bias,http://arxiv.org/abs/2304.11111v1 -"ChatGPT Evaluation on Sentence Level Relations: A Focus on Temporal, - Causal, and Discourse Relations",http://arxiv.org/abs/2304.14827v2 -Prompting for Automatic Log Template Extraction,http://arxiv.org/abs/2307.09950v2 -"Batch Calibration: Rethinking Calibration for In-Context Learning and - Prompt Engineering",http://arxiv.org/abs/2309.17249v1 -"Investigating the Limitation of CLIP Models: The Worst-Performing - Categories",http://arxiv.org/abs/2310.03324v1 -"Physical processes shaping GRB X-ray afterglow lightcurves: theoretical - implications from the Swift XRT observations",http://dx.doi.org/10.1086/500723 -"Early multi-wavelength emission from Gamma-ray Bursts: from Gamma-ray to - X-ray",http://dx.doi.org/10.1088/1367-2630/8/7/121 -"Fast optical variability of Naked-Eye Burst - manifestation of periodic - activity of internal engine",http://dx.doi.org/10.1088/2041-8205/719/1/L10 -Attributes of flares in Gamma Ray Bursts: sample I,http://dx.doi.org/10.1393/ncb/i2007-10276-y -"Towards an understanding of GRB prompt emission mechanism: I. The origin - of spectral lags",http://dx.doi.org/10.3847/0004-637X/825/2/97 -"PERFECT: Prompt-free and Efficient Few-shot Learning with Language - Models",http://arxiv.org/abs/2204.01172v2 -"No Token Left Behind: Explainability-Aided Image Classification and - Generation",http://arxiv.org/abs/2204.04908v2 -Is ChatGPT A Good Translator? 
Yes With GPT-4 As The Engine,http://arxiv.org/abs/2301.08745v3 -"Extracting Accurate Materials Data from Research Papers with - Conversational Language Models and Prompt Engineering",http://arxiv.org/abs/2303.05352v2 -On Codex Prompt Engineering for OCL Generation: An Empirical Study,http://arxiv.org/abs/2303.16244v1 -TagGPT: Large Language Models are Zero-shot Multimodal Taggers,http://arxiv.org/abs/2304.03022v1 -"Better patching using LLM prompting, via Self-Consistency",http://arxiv.org/abs/2306.00108v2 -"AI Chain on Large Language Model for Unsupervised Control Flow Graph - Generation for Statically-Typed Partial Code",http://arxiv.org/abs/2306.00757v1 -Impact of Large Language Models on Generating Software Specifications,http://arxiv.org/abs/2306.03324v2 -A Lightweight Framework for High-Quality Code Generation,http://arxiv.org/abs/2307.08220v1 -Transforming Sentiment Analysis in the Financial Domain with ChatGPT,http://arxiv.org/abs/2308.07935v1 -Intermittent hydrodynamic jets in collapsars do not produce GRBs,http://dx.doi.org/10.1093/mnras/staa1216 -"Adaptive Intellect Unleashed: The Feasibility of Knowledge Transfer in - Large Language Models",http://arxiv.org/abs/2308.04788v1 -The prompt emission & peculiar break of GRB 060124,http://dx.doi.org/10.1393/ncb/i2007-10278-9 -"Prior Emission Model for X-ray Plateau Phase of Gamma-Ray Burst - Afterglows",http://dx.doi.org/10.1088/0004-637X/690/2/L118 -Delayed jet breakouts from binary neutron star mergers,http://dx.doi.org/10.3847/2041-8213/aae51b -"Artificial Intelligence for Health Message Generation: Theory, Method, - and an Empirical Study Using Prompt Engineering",http://arxiv.org/abs/2212.07507v1 -"API Entity and Relation Joint Extraction from Text via Dynamic - Prompt-tuned Language Model",http://arxiv.org/abs/2301.03987v1 -"Prompt Engineering for Transformer-based Chemical Similarity Search - Identifies Structurally Distinct Functional Analogues",http://arxiv.org/abs/2305.16330v1 -Submodular Minimax Optimization: Finding Effective Sets,http://arxiv.org/abs/2305.16903v1 -Is GPT a Computational Model of Emotion? Detailed Analysis,http://arxiv.org/abs/2307.13779v1 -Activation Addition: Steering Language Models Without Optimization,http://arxiv.org/abs/2308.10248v2 -"Natlog: Embedding Logic Programming into the Python Deep-Learning - Ecosystem",http://dx.doi.org/10.4204/EPTCS.385.15 -"CoT-BERT: Enhancing Unsupervised Sentence Representation through - Chain-of-Thought",http://arxiv.org/abs/2309.11143v1 -Interactive Task Planning with Language Models,http://arxiv.org/abs/2310.10645v1 -Prompt Engineering Through the Lens of Optimal Control,http://arxiv.org/abs/2310.14201v1 -Ending the prompt phase in photospheric models of gamma-ray bursts,http://arxiv.org/abs/2310.15660v1 -"Performance of ChatGPT on the US Fundamentals of Engineering Exam: - Comprehensive Assessment of Proficiency and Potential Implications for - Professional Environmental Engineering Practice",http://arxiv.org/abs/2304.12198v1 -"GRB 050713A: High Energy Observations of the GRB Prompt and Afterglow - Emission",http://dx.doi.org/10.1086/509098 -"Energy input and response from prompt and early optical afterglow - emission in gamma-ray bursts",http://dx.doi.org/10.1038/nature04913 -"Capturing the electromagnetic counterparts of binary neutron star - mergers through low latency gravitational wave triggers",http://dx.doi.org/10.1093/mnras/stw576 -Peculiar prompt emission and afterglow in H.E.S.S. 
detected GRB 190829A,http://dx.doi.org/10.3847/1538-4357/ab9606 -Generative Type Inference for Python,http://arxiv.org/abs/2307.09163v1 -Repair Is Nearly Generation: Multilingual Program Repair with LLMs,http://arxiv.org/abs/2208.11640v3 -"What Changes Can Large-scale Language Models Bring? Intensive Study on - HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers",http://arxiv.org/abs/2109.04650v2 -"Identifying and Extracting Rare Disease Phenotypes with Large Language - Models",http://arxiv.org/abs/2306.12656v1 -"Benchmarking Causal Study to Interpret Large Language Models for Source - Code",http://arxiv.org/abs/2308.12415v1 -"Characterizing the time variability in magnetized neutrino--cooled - accretion disks: signatures of the gamma-ray burst central engine",http://dx.doi.org/10.1088/2041-8205/727/2/L41 -A Survey of RDF Stores & SPARQL Engines for Querying Knowledge Graphs,http://arxiv.org/abs/2102.13027v4 -Constraining Properties of GRB Central Engines with X-ray flares,http://dx.doi.org/10.1093/mnras/stab2186 -Pop Quiz! Can a Large Language Model Help With Reverse Engineering?,http://arxiv.org/abs/2202.01142v1 -Repairing Bugs in Python Assignments Using Large Language Models,http://arxiv.org/abs/2209.14876v1 -Boosting GUI Prototyping with Diffusion Models,http://dx.doi.org/10.1109/RE57278.2023.00035 -"Domain Knowledge Distillation from Large Language Model: An Empirical - Study in the Autonomous Driving Domain",http://arxiv.org/abs/2307.11769v1 -A ML-LLM pairing for better code comment classification,http://arxiv.org/abs/2310.10275v1 -"Late-Time X-ray Flares during GRB Afterglows: Extended Internal Engine - Activity",http://dx.doi.org/10.1063/1.2207925 -"Evidence of high latitude emission in the prompt phase of GRBs: How far - from the central engine are the GRBs produced?",http://arxiv.org/abs/2212.07094v2 -"Enhancing Automated Program Repair through Fine-tuning and Prompt - Engineering",http://arxiv.org/abs/2304.07840v2 -Is ChatGPT the Ultimate Programming Assistant -- How far is it?,http://arxiv.org/abs/2304.11938v2 -"Magnetar central engines in gamma-ray busts follow the universal - relation of accreting magnetic stars",http://dx.doi.org/10.3847/2041-8213/acccec -"Towards Understanding the Capability of Large Language Models on Code - Clone Detection: A Survey",http://arxiv.org/abs/2308.01191v3 -Large Language Models in Fault Localisation,http://arxiv.org/abs/2308.15276v3 -Configuration Validation with Large Language Models,http://arxiv.org/abs/2310.09690v1 -"Prompt Agnostic Essay Scorer: A Domain Generalization Approach to - Cross-prompt Automated Essay Scoring",http://arxiv.org/abs/2008.01441v1 -"Photosphere emission from a hybrid relativistic outflow with arbitrary - dimensionless entropy and magnetization in GRBs",http://dx.doi.org/10.1088/0004-637X/801/2/103 -Do Prompts Solve NLP Tasks Using Natural Language?,http://arxiv.org/abs/2203.00902v1 -"On the observed duration distribution of gamma-ray bursts from - collapsars",http://dx.doi.org/10.1093/mnras/stt1705 -"""Late prompt"" emission in Gamma Ray Bursts?",http://dx.doi.org/10.1086/515570 -"Pulse Width Evolution of Late Time X-rays Flares in GRBs: Evidence For - Internal Shocks",http://dx.doi.org/10.1086/520041 -Is GeV Emission from Gamma-Ray Bursts of External Shock Origin?,http://dx.doi.org/10.1111/j.1365-2966.2011.18648.x -Shock Dissipation in Magnetically Dominated Impulsive Flows,http://dx.doi.org/10.1111/j.1365-2966.2012.20609.x -"Gamma Ray Burst Prompt Emission Variability in Synchrotron and - 
Synchrotron Self-Compton Lightcurves",http://dx.doi.org/10.1111/j.1365-2966.2012.21531.x -How to switch on and off a Gamma-ray burst through a magnetar,http://dx.doi.org/10.1088/0004-637X/775/1/67 -"GRB 110731A: Early afterglow in stellar wind powered by a magnetized - outflow",http://dx.doi.org/10.1088/0004-637X/804/2/105 -"Jet Structure in the Afterglow Phase for Gamma-ray Bursts with a - Precessing Jet",http://dx.doi.org/10.1093/mnras/stz1426 -"Task-Oriented API Usage Examples Prompting Powered By Programming Task - Knowledge Graph",http://arxiv.org/abs/2006.07058v1 -"Two distinct phases in the first 13 seconds of GRB110731A prompt - emission",http://dx.doi.org/10.1007/s10509-012-1249-5 -Can we quickly flag Ultra-long Gamma-Ray Bursts?,http://dx.doi.org/10.1093/mnras/stz1036 -"On explaining prompt emission from GRB central engines with photospheric - emission model",http://arxiv.org/abs/2110.14792v1 -"OrdinalCLIP: Learning Rank Prompts for Language-Guided Ordinal - Regression",http://arxiv.org/abs/2206.02338v2 -"Prompt-tuned Code Language Model as a Neural Knowledge Base for Type - Inference in Statically-Typed Partial Code",http://arxiv.org/abs/2208.05361v2 -Multi-messenger model for the prompt emission from GRB 221009A,http://dx.doi.org/10.3847/2041-8213/acb6d7 -"Trash to Treasure: Using text-to-image models to inform the design of - physical artefacts",http://arxiv.org/abs/2302.00561v1 -"Chat2VIS: Generating Data Visualisations via Natural Language using - ChatGPT, Codex and GPT-3 Large Language Models",http://arxiv.org/abs/2302.02094v2 -"Prompting Large Language Models with Answer Heuristics for - Knowledge-based Visual Question Answering",http://arxiv.org/abs/2303.01903v2 -Automating Method Naming with Context-Aware Prompt-Tuning,http://arxiv.org/abs/2303.05771v1 -Zero-shot Nuclei Detection via Visual-Language Pre-trained Models,http://arxiv.org/abs/2306.17659v1 -"Comparative Analysis of GPT-4 and Human Graders in Evaluating Praise - Given to Students in Synthetic Dialogues",http://arxiv.org/abs/2307.02018v1 -In-IDE Generation-based Information Support with a Large Language Model,http://arxiv.org/abs/2307.08177v2 -"Inductive-bias Learning: Generating Code Models with Large Language - Model",http://arxiv.org/abs/2308.09890v1 -"Incorprating Prompt tuning for Commit classification with prior - Knowledge",http://arxiv.org/abs/2308.10576v1 -"AskIt: Unified Programming Interface for Programming with Large Language - Models",http://arxiv.org/abs/2308.15645v1 -Leveraging Large Language Models for Exploiting ASR Uncertainty,http://arxiv.org/abs/2309.04842v2 -AutoCLIP: Auto-tuning Zero-Shot Classifiers for Vision-Language Models,http://arxiv.org/abs/2309.16414v2 -"Pretrain, Prompt, and Transfer: Evolving Digital Twins for Time-to-Event - Analysis in Cyber-physical Systems",http://arxiv.org/abs/2310.00032v2 -"Vision-Language Models are Zero-Shot Reward Models for Reinforcement - Learning",http://arxiv.org/abs/2310.12921v1 -"The peculiar GRB 110731A: Lorentz factor, jet composition, central - engine, and progenitor",http://arxiv.org/abs/1706.00898v2 -Improving Few-Shot Prompts with Relevant Static Analysis Products,http://arxiv.org/abs/2304.06815v2 -"PACE-LM: Prompting and Augmentation for Calibrated Confidence Estimation - with GPT-4 in Cloud Incident Root Cause Analysis",http://arxiv.org/abs/2309.05833v3 -"FreshLLMs: Refreshing Large Language Models with Search Engine - Augmentation",http://arxiv.org/abs/2310.03214v1 -"Prompting through Prototype: A Prototype-based Prompt Learning on - 
Pretrained Vision-Language Models",http://arxiv.org/abs/2210.10841v1 -"High Energy Neutrino Flashes from Far-Ultraviolet and X-ray Flares in - Gamma-Ray Bursts",http://dx.doi.org/10.1103/PhysRevLett.97.051101 -"A three stage model for the inner engine of Gamma Ray Burst: Prompt - emission and early afterglow",http://dx.doi.org/10.1086/519545 -Flares in Gamma Ray Bursts (II),http://arxiv.org/abs/0809.2151v1 -"Gamma-Ray Burst Prompt Emission Light Curves and Power Density Spectra - in the ICMART Model",http://dx.doi.org/10.1088/0004-637X/782/2/92 -"There is a short gamma-ray burst prompt phase at the beginning of each - long one",http://dx.doi.org/10.1093/mnras/stu2664 -Fast response electromagnetic follow-ups from low latency GW triggers,http://dx.doi.org/10.1088/1742-6596/716/1/012009 -"Near-extremal black holes as initial conditions of long GRB-supernovae - and probes of their gravitational wave emission",http://dx.doi.org/10.1088/0004-637X/810/1/7 -Evidence of Bulk Acceleration of the GRB X-ray Flare Emission Region,http://dx.doi.org/10.3847/2041-8205/824/1/L16 -EM counterparts of structured jets from 3D GRMHD simulations,http://dx.doi.org/10.1093/mnrasl/slz012 -"GPT Understands, Too",http://arxiv.org/abs/2103.10385v1 -"Generating Disentangled Arguments with Prompts: A Simple Event - Extraction Framework that Works",http://arxiv.org/abs/2110.04525v2 -CLIP-CLOP: CLIP-Guided Collage and Photomontage,http://arxiv.org/abs/2205.03146v3 -"Automatically Generating CS Learning Materials with Large Language - Models",http://dx.doi.org/10.1145/3545947.3569630 -"Fake it till you make it: Learning transferable representations from - synthetic ImageNet clones",http://arxiv.org/abs/2212.08420v2 -Explanation Regeneration via Information Bottleneck,http://arxiv.org/abs/2212.09603v2 -"Using Large Language Models to Generate Engaging Captions for Data - Visualizations",http://arxiv.org/abs/2212.14047v1 -"A Case Study in Engineering a Conversational Programming Assistant's - Persona",http://arxiv.org/abs/2301.10016v1 -Distilling Internet-Scale Vision-Language Models into Embodied Agents,http://arxiv.org/abs/2301.12507v2 -Fixing Hardware Security Bugs with Large Language Models,http://arxiv.org/abs/2302.01215v1 -"Evaluation of ChatGPT Family of Models for Biomedical Reasoning and - Classification",http://arxiv.org/abs/2304.02496v1 -"VOICE: Visual Oracle for Interaction, Conversation, and Explanation",http://arxiv.org/abs/2304.04083v1 -Automated Reading Passage Generation with OpenAI's Large Language Model,http://arxiv.org/abs/2304.04616v1 -ZeroPrompt: Streaming Acoustic Encoders are Zero-Shot Masked LMs,http://dx.doi.org/10.21437/Interspeech.2023-1497 -"GPT4Tools: Teaching Large Language Model to Use Tools via - Self-instruction",http://arxiv.org/abs/2305.18752v1 -"Applying Standards to Advance Upstream & Downstream Ethics in Large - Language Models",http://arxiv.org/abs/2306.03503v2 -Solving and Generating NPR Sunday Puzzles with Large Language Models,http://arxiv.org/abs/2306.12255v1 -Insert-expansions for Tool-enabled Conversational Agents,http://arxiv.org/abs/2307.01644v1 -"Do LLMs Possess a Personality? 
Making the MBTI Test an Amazing - Evaluation for Large Language Models",http://arxiv.org/abs/2307.16180v1 -InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent,http://arxiv.org/abs/2308.01552v1 -Accelerated materials language processing enabled by GPT,http://arxiv.org/abs/2308.09354v1 -"ZeroLeak: Using LLMs for Scalable and Cost Effective Side-Channel - Patching",http://arxiv.org/abs/2308.13062v1 -Situated Natural Language Explanations,http://arxiv.org/abs/2308.14115v1 -"Fermi Constraints on the Ejecta Speed and Prompt Emission Region of the - Distant GRB 220101A",http://arxiv.org/abs/2309.01308v2 -DevGPT: Studying Developer-ChatGPT Conversations,http://arxiv.org/abs/2309.03914v1 -Co-audit: tools to help humans double-check AI-generated content,http://arxiv.org/abs/2310.01297v1 -"Beyond Factuality: A Comprehensive Evaluation of Large Language Models - as Knowledge Generators",http://arxiv.org/abs/2310.07289v1 -Nature of the ultrarelativistic prompt emission phase of GRB 190114C,http://dx.doi.org/10.1103/PhysRevD.104.063043 -Some Recent Peculiarities of the Early Afterglow,http://dx.doi.org/10.1063/1.1810827 -The interpretation of the Swift GRB X-ray afterglows,http://dx.doi.org/10.1393/ncb/i2007-10247-4 -"Synchrotron Self-Absorption Process in GRBs and the Isotropic Energy - - Peak Energy Fundamental Relation",http://arxiv.org/abs/0708.2800v1 -Analysis of the Prompt Optical Emission of the Naked-Eye GRB 080319B,http://arxiv.org/abs/0906.4144v1 -"Are Donation Badges Appealing? A Case Study of Developer Responses to - Eclipse Bug Reports",http://arxiv.org/abs/1803.04129v2 -Lighting (In)consistency of Paint by Text,http://arxiv.org/abs/2207.13744v2 -Towards Zero-Shot and Few-Shot Table Question Answering using GPT-3,http://arxiv.org/abs/2210.17284v1 -"Visualization in the Era of Artificial Intelligence: Experiments for - Creating Structural Visualizations by Prompting Large Language Models",http://dx.doi.org/10.48550/arXiv.2305.03380 -Refining the Responses of LLMs by Themselves,http://arxiv.org/abs/2305.04039v1 -"The radiative efficiency of relativistic jet and wind: A case study of - GRB 070110",http://dx.doi.org/10.1093/mnras/stw1869 -"Recommendation as Language Processing (RLP): A Unified Pretrain, - Personalized Prompt & Predict Paradigm (P5)",http://arxiv.org/abs/2203.13366v7 -"Exploring the Responses of Large Language Models to Beginner - Programmers' Help Requests",http://dx.doi.org/10.1145/3568813.3600139 -"APICom: Automatic API Completion via Prompt Learning and Adversarial - Training-based Data Augmentation",http://arxiv.org/abs/2309.07026v1 -"ConstitutionMaker: Interactively Critiquing Large Language Models by - Converting Feedback into Principles",http://arxiv.org/abs/2310.15428v1 -Episodic Jets as the Central Engine of Gamma-Ray Bursts,http://dx.doi.org/10.1088/0004-637X/757/1/56 -Nucleosynthesis of heavy elements in gamma ray bursts,http://arxiv.org/abs/1504.00145v1 -"Better Modeling the Programming World with Code Concept Graphs-augmented - Multi-modal Learning",http://dx.doi.org/10.1145/3510455.3512771 -"Internet-based Social Engineering Attacks, Defenses and Psychology: A - Survey",http://arxiv.org/abs/2203.08302v2 -"ProtFIM: Fill-in-Middle Protein Sequence Design via Protein Language - Models",http://arxiv.org/abs/2303.16452v1 -"ChatGPT for Vulnerability Detection, Classification, and Repair: How Far - Are We?",http://arxiv.org/abs/2310.09810v1 -Zero-shot Approach to Overcome Perturbation Sensitivity of Prompts,http://arxiv.org/abs/2305.15689v2 -"Diagnosing 
The Ejecta Properties of Engine-Driven Supernovae from - Observables in Their Initial Phase",http://dx.doi.org/10.1093/mnras/stad1075 -"Large Language Models Help Humans Verify Truthfulness -- Except When - They Are Convincingly Wrong",http://arxiv.org/abs/2310.12558v1 -Gamma Ray Bursts Flares detected and observed by the Swift Satellite,http://dx.doi.org/10.1016/j.asr.2007.04.058 -The multiwavelength counterparts of fast radio bursts,http://dx.doi.org/10.3847/1538-4357/ab982b -"Afterglows from precursors in Gamma Ray Bursts. Application to the - optical afterglow of GRB 091024",http://dx.doi.org/10.1093/mnras/stu1832 -"Internal Energy Dissipation of Gamma-Ray Bursts Observed with Swift: - Precursors, Prompt Gamma-rays, Extended emission and Late X-ray Flares",http://dx.doi.org/10.1088/0004-637X/789/2/145 -"External Inverse-Compton Emission from Low-Luminosity Gamma-Ray Bursts: - Application to GRB 190829A",http://dx.doi.org/10.3847/1538-4357/ac0cfc -"Is GitHub's Copilot as Bad as Humans at Introducing Vulnerabilities in - Code?",http://arxiv.org/abs/2204.04741v4 -"Piloting Copilot and Codex: Hot Temperature, Cold Prompts, or Black - Magic?",http://arxiv.org/abs/2210.14699v2 -"Conversing with Copilot: Exploring Prompt Engineering for Solving CS1 - Problems Using Natural Language",http://arxiv.org/abs/2210.15157v1 -"Fill in the Blank: Context-aware Automated Text Input Generation for - Mobile GUI Testing",http://arxiv.org/abs/2212.04732v1 -"SE Factual Knowledge in Frozen Giant Code Model: A Study on FQN and its - Retrieval",http://arxiv.org/abs/2212.08221v1 -InferFix: End-to-End Program Repair with LLMs,http://arxiv.org/abs/2303.07263v1 -Automatic Code Summarization via ChatGPT: How Far Are We?,http://arxiv.org/abs/2305.12865v1 -Cheap-fake Detection with LLM using Prompt Engineering,http://arxiv.org/abs/2306.02776v1 -SelfEvolve: A Code Evolution Framework via Large Language Models,http://arxiv.org/abs/2306.02907v1 -"Improving Knowledge Extraction from LLMs for Task Learning through Agent - Analysis",http://arxiv.org/abs/2306.06770v3 -Text-to-SQL Empowered by Large Language Models: A Benchmark Evaluation,http://arxiv.org/abs/2308.15363v3 -A unifying view of Gamma Ray Burst Afterglows,http://dx.doi.org/10.1111/j.1365-2966.2008.14214.x -The Proto-Magnetar Model for Gamma-Ray Bursts,http://dx.doi.org/10.1111/j.1365-2966.2011.18280.x -"Polarization of gamma-ray burst afterglows in the synchrotron - self-Compton process from a highly relativistic jet",http://dx.doi.org/10.1088/1674-1137/41/4/045101 -A long-duration gamma-ray burst with a peculiar origin,http://dx.doi.org/10.1038/s41586-022-05403-8 -A Prompt-based Few-shot Learning Approach to Software Conflict Detection,http://arxiv.org/abs/2211.02709v1 -CodeFuse-13B: A Pretrained Multi-lingual Code Large Language Model,http://arxiv.org/abs/2310.06266v1 -"Zero-shot Cross-lingual Transfer of Prompt-based Tuning with a Unified - Multilingual Prompt",http://arxiv.org/abs/2202.11451v2 -"STPrompt: Semantic-guided and Task-driven prompts for Effective Few-shot - Classification",http://arxiv.org/abs/2210.16489v1 -Prompting Large Language Model for Machine Translation: A Case Study,http://arxiv.org/abs/2301.07069v2 -Prompt- and Trait Relation-aware Cross-prompt Essay Trait Scoring,http://dx.doi.org/10.18653/v1/2023.findings-acl.98 -Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution,http://arxiv.org/abs/2309.16797v1 -"Magnetar emergence in a peculiar gamma-ray burst from a compact star - merger",http://arxiv.org/abs/2307.05689v1 
-"Towards Alleviating the Object Bias in Prompt Tuning-based Factual - Knowledge Extraction",http://arxiv.org/abs/2306.03378v2 -"Zero-Shot Continuous Prompt Transfer: Generalizing Task Semantics Across - Language Models",http://arxiv.org/abs/2310.01691v1 -"Swift observations of GRB 070110: an extraordinary X-ray afterglow - powered by the central engine",http://dx.doi.org/10.1086/519450 -"Probing the central engine of long gamma-ray bursts and hypernovae with - gravitational waves and neutrinos",http://dx.doi.org/10.1103/PhysRevD.80.123008 -"From Engine to Afterglow: Collapsars Naturally Produce Top-Heavy Jets - and Early-Time Plateaus in Gamma Ray Burst Afterglows",http://arxiv.org/abs/1407.8250v3 -"Three-dimensional simulations of long duration gamma-ray burst jets: - time scales from variable engines",http://dx.doi.org/10.3847/0004-637X/826/2/180 -A GRB and Broad-lined Type Ic Supernova from a Single Central Engine,http://dx.doi.org/10.3847/1538-4357/aabf84 -"ChatGPT vs. Google: A Comparative Study of Search Performance and User - Experience",http://arxiv.org/abs/2307.01135v1 -MetaPrompting: Learning to Learn Better Prompts,http://arxiv.org/abs/2209.11486v4 -"Residual Prompt Tuning: Improving Prompt Tuning with Residual - Reparameterization",http://arxiv.org/abs/2305.03937v1 -"TreePrompt: Learning to Compose Tree Prompts for Explainable Visual - Grounding",http://arxiv.org/abs/2305.11497v1 -"Discrete Prompt Optimization via Constrained Generation for Zero-shot - Re-ranker",http://arxiv.org/abs/2305.13729v1 -"Code Prompting: a Neural Symbolic Method for Complex Reasoning in Large - Language Models",http://arxiv.org/abs/2305.18507v2 -"Prompt Middleware: Mapping Prompts for Large Language Models to UI - Affordances",http://arxiv.org/abs/2307.01142v1 -"Enhance Multi-domain Sentiment Analysis of Review Texts through - Prompting Strategies",http://arxiv.org/abs/2309.02045v1 -"The Giant X-ray Flare of GRB 050502B: Evidence for Late-Time Internal - Engine Activity",http://dx.doi.org/10.1086/500655 -"The First Survey of X-ray Flares from Gamma Ray Bursts Observed by - Swift: Temporal Properties and Morphology",http://dx.doi.org/10.1086/521591 -"Tale of GRB 171010A/SN 2017htp and GRB 171205A/SN 2017iuk: Magnetar - origin?",http://dx.doi.org/10.1016/j.newast.2022.101889 -Universal Fuzzing via Large Language Models,http://arxiv.org/abs/2308.04748v1 -"A Critical Review of Large Language Model on Software Engineering: An - Example from ChatGPT and Automated Program Repair",http://arxiv.org/abs/2310.08879v1 -Magnetic Fields in Gamma-Ray Bursts: A Short Review,http://dx.doi.org/10.1063/1.2077181 -Flares in GRB afterglows from delayed magnetic dissipation,http://dx.doi.org/10.1051/0004-6361:20065578 -Analysis of X-ray flares in GRBs,http://dx.doi.org/10.1063/1.2774833 -Nonvolatile memory with molecule-engineered tunneling barriers,http://dx.doi.org/10.1063/1.2911741 -"Revealing physical activity of GRB central engine with - macronova/kilonova data",http://dx.doi.org/10.3847/2041-8213/835/2/L22 -Fixing Bug Reporting for Mobile and GUI-Based Applications,http://arxiv.org/abs/1801.05937v1 -On Challenges of Cloud Monitoring,http://arxiv.org/abs/1806.05914v1 -User Engagement Prediction for Clarification in Search,http://arxiv.org/abs/2102.04163v1 -Galaxy-Classification Activity for All Ages,http://arxiv.org/abs/2210.01822v1 -Scale invariance in X-ray flares of gamma-ray bursts,http://dx.doi.org/10.1103/PhysRevResearch.5.013019 -"A Review of ChatGPT Applications in Education, Marketing, Software - 
Engineering, and Healthcare: Benefits, Drawbacks, and Research Directions",http://arxiv.org/abs/2305.00237v1 -Automatic Evaluation of Attribution by Large Language Models,http://arxiv.org/abs/2305.06311v2 -Clickbait Classification and Spoiling Using Natural Language Processing,http://arxiv.org/abs/2306.14907v1 -Diffusion Models for Computational Design at the Example of Floor Plans,http://arxiv.org/abs/2307.02511v1 -Federated Large Language Model: A Position Paper,http://arxiv.org/abs/2307.08925v1 -A Preliminary Evaluation of LLM-Based Fault Localization,http://arxiv.org/abs/2308.05487v2 -"Reverse-Engineering Decoding Strategies Given Blackbox Access to a - Language Generation System",http://arxiv.org/abs/2309.04858v1 -"Leveraging Contextual Information for Effective Entity Salience - Detection",http://arxiv.org/abs/2309.07990v1 -Large Language Models for Failure Mode Classification: An Investigation,http://arxiv.org/abs/2309.08181v1 -Safurai 001: New Qualitative Approach for Code LLM Evaluation,http://arxiv.org/abs/2309.11385v1 -ADAPT: Vision-Language Navigation with Modality-Aligned Action Prompts,http://arxiv.org/abs/2205.15509v1 -"PANDA: Prompt Transfer Meets Knowledge Distillation for Efficient Model - Adaptation",http://arxiv.org/abs/2208.10160v1 -Bayesian Prompt Learning for Image-Language Model Generalization,http://arxiv.org/abs/2210.02390v3 -Late Prompt Tuning: A Late Prompt Could Be Better Than Many Prompts,http://arxiv.org/abs/2210.11292v2 -Diversity-Aware Meta Visual Prompting,http://arxiv.org/abs/2303.08138v1 -On the Role of Attention in Prompt-tuning,http://arxiv.org/abs/2306.03435v1 -"Thick fireballs and the steep decay in the early X-ray afterglow of - gamma-ray bursts",http://dx.doi.org/10.1086/500502 -"Dead Zone Formation and Nonsteady Hyperaccretion in Collapsar Disks : A - Possible Origin of Short-Term Variability in the Prompt Emission of Gamma-Ray - Bursts",http://dx.doi.org/10.1086/518088 -"A three stage model for the inner engine of GRBs: Prompt emission and - early afterglow",http://dx.doi.org/10.1142/S0218271808012954 -"Early emission of rising optical afterglows: The case of GRB 060904B and - GRB 070420",http://dx.doi.org/10.1051/0004-6361:20078677 -"Spectral curvature behavior during X-ray flares in GRB afterglow - emission",http://dx.doi.org/10.1063/1.3621758 -"The ultra-long Gamma-Ray Burst 111209A: the collapse of a blue - supergiant?",http://dx.doi.org/10.1088/0004-637X/766/1/30 -Fall back accretion and energy injections in gamma-ray bursts,http://dx.doi.org/10.1093/mnras/stu2336 -"Evidence for jet launching close to the black hole in GRB 101219B - a - Fermi GRB dominated by thermal emission",http://dx.doi.org/10.1088/2041-8205/800/2/L34 -"Short gamma-ray bursts from binary neutron star mergers: the - time-reversal scenario",http://arxiv.org/abs/1505.01420v1 -"Gravitational wave observations may constrain gamma-ray burst models: - the case of GW 150914 - GBM",http://dx.doi.org/10.3847/2041-8205/827/2/L34 -"Evaluating the bulk Lorentz factors of outflow material: lessons learned - from the extremely-energetic outburst GRB 160625B",http://dx.doi.org/10.3847/1538-4357/aa56c6 -"Contract-based Hierarchical Resilience Management for Cyber-Physical - Systems",http://dx.doi.org/10.1109/MC.2018.2876071 -"Scattered Short Gamma-Ray Bursts as Electromagnetic Counterparts to - Gravitational Waves and Implications of GW170817 and GRB 170817A",http://dx.doi.org/10.3847/1538-4357/aae30a -MARS15 Simulation Of Radiation Environment At The ESS 
Linac,http://arxiv.org/abs/1705.02022v1 -"Fallback accretion on to a newborn magnetar: long GRBs with giant X-ray - flares",http://dx.doi.org/10.1093/mnras/sty1363 -"X-ray Plateaus in Gamma Ray Bursts' light-curves from jets viewed - slightly off-axis",http://dx.doi.org/10.1093/mnras/staa070 -Linking extended and plateau emissions of short gamma-ray bursts,http://dx.doi.org/10.1093/mnras/staa305 -ActionCLIP: A New Paradigm for Video Action Recognition,http://arxiv.org/abs/2109.08472v1 -Zero-Shot Program Representation Learning,http://arxiv.org/abs/2204.08360v1 -"Generative Action Description Prompts for Skeleton-based Action - Recognition",http://arxiv.org/abs/2208.05318v2 -Unsupervised Hashing with Semantic Concept Mining,http://arxiv.org/abs/2209.11475v1 -Promptagator: Few-shot Dense Retrieval From 8 Examples,http://arxiv.org/abs/2209.11755v1 -Measuring and Narrowing the Compositionality Gap in Language Models,http://arxiv.org/abs/2210.03350v3 -"Robust Preference Learning for Storytelling via Contrastive - Reinforcement Learning",http://arxiv.org/abs/2210.07792v2 -"Beyond Prompting: Making Pre-trained Language Models Better Zero-shot - Learners by Clustering Representations",http://arxiv.org/abs/2210.16637v2 -"CodeLMSec Benchmark: Systematically Evaluating and Finding Security - Vulnerabilities in Black-Box Code Language Models",http://arxiv.org/abs/2302.04012v2 -"A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on - Reasoning, Hallucination, and Interactivity",http://arxiv.org/abs/2302.04023v2 -"Ten Quick Tips for Harnessing the Power of ChatGPT/GPT-4 in - Computational Biology",http://arxiv.org/abs/2303.16429v1 -Low-code LLM: Visual Programming over LLMs,http://arxiv.org/abs/2304.08103v2 -Fully Autonomous Programming with Large Language Models,http://dx.doi.org/10.1145/3583131.3590481 -GPTutor: a ChatGPT-powered programming tool for code explanation,http://arxiv.org/abs/2305.01863v2 -ChatUniTest: a ChatGPT-based automated unit test generation tool,http://arxiv.org/abs/2305.04764v1 -"GPT-3.5, GPT-4, or BARD? Evaluating LLMs Reasoning Ability in Zero-Shot - Setting and Performance Boosting Through Prompts",http://arxiv.org/abs/2305.12477v2 -"Have LLMs Advanced Enough? 
A Challenging Problem Solving Benchmark For - Large Language Models",http://arxiv.org/abs/2305.15074v3 -Do you still need a manual smart contract audit?,http://arxiv.org/abs/2306.12338v2 -"Exploring the Robustness of Large Language Models for Solving - Programming Problems",http://arxiv.org/abs/2306.14583v1 -"The Potential and Pitfalls of using a Large Language Model such as - ChatGPT or GPT-4 as a Clinical Assistant",http://arxiv.org/abs/2307.08152v1 -"Jailbreaker: Automated Jailbreak Across Multiple Large Language Model - Chatbots",http://arxiv.org/abs/2307.08715v1 -"S3: Social-network Simulation System with Large Language Model-Empowered - Agents",http://arxiv.org/abs/2307.14984v2 -"Evaluating ChatGPT text-mining of clinical records for obesity - monitoring",http://arxiv.org/abs/2308.01666v1 -"ParaFuzz: An Interpretability-Driven Technique for Detecting Poisoned - Samples in NLP",http://arxiv.org/abs/2308.02122v1 -"Synergistic Integration of Large Language Models and Cognitive - Architectures for Robust AI: An Exploratory Analysis",http://arxiv.org/abs/2308.09830v3 -When Do Program-of-Thoughts Work for Reasoning?,http://arxiv.org/abs/2308.15452v3 -Using LLMs to Facilitate Formal Verification of RTL,http://arxiv.org/abs/2309.09437v2 -How well does LLM generate security tests?,http://arxiv.org/abs/2310.00710v2 -"Observations of the intense and ultra-long burst GRB041219a with the - Germanium Spectrometer on INTEGRAL",http://dx.doi.org/10.1051/0004-6361:20065203 -The early X-ray afterglows of Gamma Ray Bursts,http://arxiv.org/abs/astro-ph/0702620v1 -Testing an unifying view of Gamma Ray Burst afterglows,http://arxiv.org/abs/0908.0338v1 -A magnetar powering the ordinary monster GRB 130427A?,http://dx.doi.org/10.1093/mnrasl/slu003 -"The History of GRB Outflows: Ejection Lorentz Factor and Radiation - Location of X-Ray Flares",http://dx.doi.org/10.3847/0004-637X/831/1/111 -"What can we learn from ""internal plateaus""? 
The peculiar afterglow of - GRB 070110",http://dx.doi.org/10.1051/0004-6361/201730523 -"Onset of particle acceleration during the prompt phase in gamma-ray - bursts as revealed by synchrotron emission in GRB160821A",http://dx.doi.org/10.3847/2041-8213/ac73fe -"BigBIO: A Framework for Data-Centric Biomedical Natural Language - Processing",http://arxiv.org/abs/2206.15076v1 -ReCode: Robustness Evaluation of Code Generation Models,http://arxiv.org/abs/2212.10264v1 -Few-shot Multimodal Multitask Multilingual Learning,http://arxiv.org/abs/2303.12489v1 -"Universal and Transferable Adversarial Attacks on Aligned Language - Models",http://arxiv.org/abs/2307.15043v1 -"Neuro Symbolic Reasoning for Planning: Counterexample Guided Inductive - Synthesis using Large Language Models and Satisfiability Solving",http://arxiv.org/abs/2309.16436v1 -"Mini-DALLE3: Interactive Text to Image by Prompting Large Language - Models",http://arxiv.org/abs/2310.07653v2 -Instance-aware Prompt Learning for Language Understanding and Generation,http://arxiv.org/abs/2201.07126v1 -Position-based Prompting for Health Outcome Generation,http://arxiv.org/abs/2204.03489v1 -IDPG: An Instance-Dependent Prompt Generation Method,http://arxiv.org/abs/2204.04497v1 -"Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt - Tuning and Discovery",http://arxiv.org/abs/2302.03668v2 -"Can discrete information extraction prompts generalize across language - models?",http://arxiv.org/abs/2302.09865v2 -Prompt Learning for Action Recognition,http://arxiv.org/abs/2305.12437v1 -"Effective Structured Prompting by Meta-Learning and Representative - Verbalizer",http://arxiv.org/abs/2306.00618v1 -"Invalid Logic, Equivalent Gains: The Bizarreness of Reasoning in - Language Model Prompting",http://arxiv.org/abs/2307.10573v2 -"High-Fidelity Lake Extraction via Two-Stage Prompt Enhancement: - Establishing a Novel Baseline and Benchmark",http://arxiv.org/abs/2308.08443v1 -Structured Prompt Tuning,http://arxiv.org/abs/2205.12309v1 -"X-ray flares, neutrino cooled disks, and the dynamics of late accretion - in GRB engines",http://dx.doi.org/10.1111/j.1745-3933.2008.00490.x -Climate Engineering Responses to Climate Emergencies,http://arxiv.org/abs/0907.5140v2 -Prolonged activity of the central engine of Gamma Ray Bursts,http://arxiv.org/abs/0908.0732v1 -GRB 120422A: A Low-luminosity Gamma-ray Burst Driven by Central Engine,http://dx.doi.org/10.1088/0004-637X/756/2/190 -Three-Peak GRBs and Their Implications for Central Engines,http://dx.doi.org/10.1016/j.newast.2015.05.003 -"A further study of $t_{\rm burst}$ of GRBs: rest frame properties, - external plateau contributions and multiple parameter analysis",http://dx.doi.org/10.3847/1538-4357/aa7e30 -"Aalto's End-to-End DNN systems for the INTERSPEECH 2020 Computational - Paralinguistics Challenge",http://arxiv.org/abs/2008.02689v1 -Identifying Black Hole Central Engines in Gamma-Ray Bursts,http://dx.doi.org/10.3847/2041-8213/abd53f -"Characterizing and Mitigating Anti-patterns of Alerts in Industrial - Cloud Systems",http://arxiv.org/abs/2204.09670v1 -"Right to be Forgotten in the Era of Large Language Models: Implications, - Challenges, and Solutions",http://arxiv.org/abs/2307.03941v3 -"""With Great Power Comes Great Responsibility!"": Student and Instructor - Perspectives on the influence of LLMs on Undergraduate Engineering Education",http://arxiv.org/abs/2309.10694v2 -"Balancing Autonomy and Alignment: A Multi-Dimensional Taxonomy for - Autonomous LLM-powered Multi-Agent 
Architectures",http://arxiv.org/abs/2310.03659v1 -Prompt-aligned Gradient for Prompt Tuning,http://arxiv.org/abs/2205.14865v2 -Learning Domain Invariant Prompt for Vision-Language Models,http://arxiv.org/abs/2212.04196v2 -Prompt-Tuning Decision Transformer with Preference Ranking,http://arxiv.org/abs/2305.09648v1 -"InfoPrompt: Information-Theoretic Soft Prompt Tuning for Natural - Language Understanding",http://arxiv.org/abs/2306.04933v1 -"VPUFormer: Visual Prompt Unified Transformer for Interactive Image - Segmentation",http://arxiv.org/abs/2306.06656v1 -The early X-ray emission from GRBs,http://dx.doi.org/10.1086/505457 -"Multi-Wavelength Observations of GRB 111228A and Implications for the - Fireball and its environment",http://dx.doi.org/10.3847/0004-637X/817/2/152 -"Large Language Models are Few-Shot Summarizers: Multi-Intent Comment - Generation via In-Context Learning",http://arxiv.org/abs/2304.11384v3 -PromptTTS 2: Describing and Generating Voices with Text Prompt,http://arxiv.org/abs/2309.02285v2 -Do Prompt-Based Models Really Understand the Meaning of their Prompts?,http://arxiv.org/abs/2109.01247v2 -Co-training Improves Prompt-based Learning for Large Language Models,http://arxiv.org/abs/2202.00828v1 -Adaptive Prompt Learning-based Few-Shot Sentiment Analysis,http://arxiv.org/abs/2205.07220v1 -Instance-wise Prompt Tuning for Pretrained Language Models,http://arxiv.org/abs/2206.01958v1 -Prompting Decision Transformer for Few-Shot Policy Generalization,http://arxiv.org/abs/2206.13499v1 -Prompt Tuning for Generative Multimodal Pretrained Models,http://arxiv.org/abs/2208.02532v1 -Reducing Retraining by Recycling Parameter-Efficient Prompts,http://arxiv.org/abs/2208.05577v1 -Prompt Vision Transformer for Domain Generalization,http://arxiv.org/abs/2208.08914v1 -"Can Large Language Models Truly Understand Prompts? 
A Case Study with - Negated Prompts",http://arxiv.org/abs/2209.12711v1 -"Efficiently Enhancing Zero-Shot Performance of Instruction Following - Model via Retrieval of Soft Prompt",http://arxiv.org/abs/2210.03029v4 -Unified Vision and Language Prompt Learning,http://arxiv.org/abs/2210.07225v1 -"Multilingual Relation Classification via Efficient and Effective - Prompting",http://arxiv.org/abs/2210.13838v2 -Zero-Label Prompt Selection,http://arxiv.org/abs/2211.04668v1 -Demystifying Prompts in Language Models via Perplexity Estimation,http://arxiv.org/abs/2212.04037v1 -SPT: Semi-Parametric Prompt Tuning for Multitask Prompted Learning,http://arxiv.org/abs/2212.10929v1 -Evaluating the Robustness of Discrete Prompts,http://arxiv.org/abs/2302.05619v1 -"Choice Over Control: How Users Write with Large Language Models using - Diegetic and Non-Diegetic Prompting",http://dx.doi.org/10.1145/3544548.3580969 -"Global Prompt Cell: A Portable Control Module for Effective Prompt - Tuning",http://arxiv.org/abs/2304.05642v2 -"Automatic Prompt Optimization with ""Gradient Descent"" and Beam Search",http://arxiv.org/abs/2305.03495v2 -Exploring Lottery Prompts for Pre-trained Language Models,http://arxiv.org/abs/2305.19500v1 -"PromptCARE: Prompt Copyright Protection by Watermark Injection and - Verification",http://arxiv.org/abs/2308.02816v1 -"Dynamic Strategy Chain: Dynamic Zero-Shot CoT for Long Mental Health - Support Generation",http://arxiv.org/abs/2308.10444v1 -When Prompt-based Incremental Learning Does Not Meet Strong Pretraining,http://arxiv.org/abs/2308.10445v1 -A Chinese Prompt Attack Dataset for LLMs with Evil Content,http://arxiv.org/abs/2309.11830v1 -"How Reliable Are AI-Generated-Text Detectors? An Assessment Framework - Using Evasive Soft Prompts",http://arxiv.org/abs/2310.05095v1 -PPT: Pre-trained Prompt Tuning for Few-shot Learning,http://arxiv.org/abs/2109.04332v3 -On Transferability of Prompt Tuning for Natural Language Processing,http://dx.doi.org/10.18653/v1/2022.naacl-main.290 -RLPrompt: Optimizing Discrete Text Prompts with Reinforcement Learning,http://arxiv.org/abs/2205.12548v3 -Exploring Sparse Visual Prompt for Domain Adaptive Dense Prediction,http://arxiv.org/abs/2303.09792v2 -Exploring Effective Factors for Improving Visual In-Context Learning,http://arxiv.org/abs/2304.04748v1 -Multi-Prompt with Depth Partitioned Cross-Modal Learning,http://arxiv.org/abs/2305.06221v3 -Contextual Prompt Learning for Vision-Language Understanding,http://arxiv.org/abs/2307.00910v1 -"Dialogue for Prompting: a Policy-Gradient-Based Discrete Prompt - Optimization for Few-shot Learning",http://arxiv.org/abs/2308.07272v1 -Certifying LLM Safety against Adversarial Prompting,http://arxiv.org/abs/2309.02705v1 -Visual Attention-Prompted Prediction and Learning,http://arxiv.org/abs/2310.08420v1 -Quantifying Privacy Risks of Prompts in Visual Prompt Learning,http://arxiv.org/abs/2310.11970v1 -Multitask Vision-Language Prompt Tuning,http://arxiv.org/abs/2211.11720v3 -"Measuring the prompt atmospheric neutrino flux with down-going muons in - neutrino telescopes",http://dx.doi.org/10.1103/PhysRevD.67.017301 -"Automatic Multi-Label Prompting: Simple and Interpretable Few-Shot - Classification",http://arxiv.org/abs/2204.06305v2 -Using Natural Sentences for Understanding Biases in Language Models,http://arxiv.org/abs/2205.06303v1 -Black Box Adversarial Prompting for Foundation Models,http://arxiv.org/abs/2302.04237v2 -"Prompt and non-prompt $J/ψ$ and $ψ(2\mathrm{S})$ suppression at - high transverse momentum in 5.02 
TeV Pb+Pb collisions with the ATLAS - experiment",http://dx.doi.org/10.1140/epjc/s10052-018-6219-9 -Towards Unified Prompt Tuning for Few-shot Text Classification,http://arxiv.org/abs/2205.05313v1 -"Vector-Quantized Input-Contextualized Soft Prompts for Natural Language - Understanding",http://arxiv.org/abs/2205.11024v2 -Improving Task Generalization via Unified Schema Prompt,http://arxiv.org/abs/2208.03229v1 -"LASP: Text-to-Text Optimization for Language-Aware Soft Prompting of - Vision & Language Models",http://arxiv.org/abs/2210.01115v2 -"Model ensemble instead of prompt fusion: a sample-specific knowledge - transfer method for few-shot prompt tuning",http://arxiv.org/abs/2210.12587v3 -"Gradient-Regulated Meta-Prompt Learning for Generalizable - Vision-Language Models",http://arxiv.org/abs/2303.06571v2 -Progressive Visual Prompt Learning with Contrastive Feature Re-formation,http://arxiv.org/abs/2304.08386v1 -"Flocks of Stochastic Parrots: Differentially Private Prompt Learning for - Large Language Models",http://arxiv.org/abs/2305.15594v1 -"""Do Anything Now"": Characterizing and Evaluating In-The-Wild Jailbreak - Prompts on Large Language Models",http://arxiv.org/abs/2308.03825v1 -"Query-Dependent Prompt Evaluation and Optimization with Offline Inverse - RL",http://arxiv.org/abs/2309.06553v3 -Automatic Prompt Rewriting for Personalized Text Generation,http://arxiv.org/abs/2310.00152v1 -"A contemporaneous infrared flash from a long gamma-ray burst: an echo - from the central engine",http://dx.doi.org/10.1038/nature03520 -Stellar Explosions by Magnetic Towers,http://dx.doi.org/10.1086/505621 -High Energy Radiation from Gamma Ray Bursts,http://dx.doi.org/10.1063/1.1291372 -The Fireball Shock Model of Gamma Ray Bursts,http://dx.doi.org/10.1063/1.1361591 -Origin of Gamma Ray Bursters,http://dx.doi.org/10.1143/PTPS.136.300 -The updated E_peak - E_gamma correlation in GRBs,http://dx.doi.org/10.1393/ncc/i2005-10046-0 -Gamma-Ray Burst Early Afterglows,http://dx.doi.org/10.1063/1.2141841 -MeV-GeV emission from neutron-loaded short gamma-ray burst jets,http://dx.doi.org/10.1086/507261 -"A two component jet model for the X-ray afterglow flat segment in short - GRB 051221A",http://dx.doi.org/10.1086/512971 -The shallow phase of X-ray afterglows,http://dx.doi.org/10.1063/1.2943505 -"Hyperaccretion after the Blandford-Znajek Process: a New Model for GRBs - with X-Ray Flares Observed in Early Afterglows",http://dx.doi.org/10.1088/1009-9271/8/4/04 -High energy gamma-ray emission from Gamma-Ray Bursts -- before GLAST,http://dx.doi.org/10.1007/s11467-008-0033-z -"Expected performance of a hard X-ray polarimeter (POLAR) by Monte Carlo - Simulation",http://dx.doi.org/10.1016/j.nima.2009.04.033 -What do we know about gamma-ray bursts?,http://arxiv.org/abs/1009.4648v2 -"Possible Origin of Rapid Variability of Gamma-Ray Bursts due to - Convective Energy Transfer in Hyperaccretion Disks",http://dx.doi.org/10.1111/j.1365-2966.2011.19733.x -Gamma-Ray Burst without Baryonic and Magnetic Load?,http://dx.doi.org/10.1143/PTP.126.555 -"The physical origin of optical flares following GRB 110205A and the - nature of the outflow",http://dx.doi.org/10.1088/1674-4527/11/11/007 -"Magnetic Structures in Gamma-Ray Burst Jets Probed by Gamma-Ray - Polarization",http://dx.doi.org/10.1088/2041-8205/758/1/L1 -"Astrophysical ZeV acceleration in the relativistic jet from an accreting - supermassive blackhole",http://dx.doi.org/10.1016/j.astropartphys.2014.02.004 -"Neutrino-cooled Accretion Model with Magnetic Coupling for X-ray 
Flares - in GRBs",http://dx.doi.org/10.1088/0004-637X/773/2/142 -Jet Luminosity from Neutrino-Dominated Accretion Flows in GRBs,http://arxiv.org/abs/1308.3236v1 -3D manipulation with scanning near field optical nanotweezers,http://dx.doi.org/10.1038/nnano.2014.24 -"Tuning a Multiple Classifier System for Side Effect Discovery using - Genetic Algorithms",http://arxiv.org/abs/1409.1053v1 -Molten-Salt Depleted-Uranium Reactor,http://arxiv.org/abs/1503.03183v1 -X-ray flares in GRBs: general considerations and photospheric origin,http://dx.doi.org/10.1093/mnrasl/slw003 -"Water-Induced Bimetallic Alloy Surface Segregation: A First Principle - Study",http://arxiv.org/abs/1601.02346v1 -Rates and singlet/triplet ratios from TADF transients,http://arxiv.org/abs/1603.08998v2 -Physical limits to magnetogenetics,http://dx.doi.org/10.7554/eLife.17210 -The Dark Side of Ethical Robots,http://arxiv.org/abs/1606.02583v1 -"Numerical and analytical solutions of Neutrino-Dominated Accretion Flows - with a Non-Zero Torque Boundary Condition and its applications in Gamma-ray - Bursts",http://dx.doi.org/10.3847/1538-4357/833/2/129 -"High-energy emission as signature of magnetic field amplification in - Neutron Star Mergers",http://arxiv.org/abs/1701.01184v1 -Gamma-ray burst models in light of the GRB 170817A - GW170817 connection,http://arxiv.org/abs/1802.07328v1 -"Surface modified mesoporous g-C3N4@FeNi3 as prompt and proficient - magnetic adsorbent for crude oil recovery",http://dx.doi.org/10.1016/j.apsusc.2018.12.166 -The Perfect State Transfer Graph Limbo,http://arxiv.org/abs/1808.00696v2 -Variabilities of Gamma-ray Bursts from Black Hole Hyper-accretion Disks,http://dx.doi.org/10.1093/mnras/stw1985 -"Data Driven Exploratory Attacks on Black Box Classifiers in Adversarial - Domains",http://dx.doi.org/10.1016/j.neucom.2018.02.007 -Migrating large codebases to C++ Modules,http://dx.doi.org/10.1088/1742-6596/1525/1/012051 -Mn(II)-doped 2D perovskite for light emitting devices,http://arxiv.org/abs/1906.05099v1 -"Deep Sequential Feature Learning in Clinical Image Classification of - Infectious Keratitis",http://arxiv.org/abs/2006.02666v1 -Hydrodynamics of core-collapse supernovae and their progenitors,http://dx.doi.org/10.1007/s41115-020-0008-5 -"Agent Programming for Industrial Applications: Some Advantages and - Drawbacks",http://arxiv.org/abs/2006.05613v1 -X-ray plateaus in $γ$-ray bursts explained by structured jets,http://arxiv.org/abs/2006.13966v1 -POLAR: A Space-borne X-Ray Polarimeter for Transient Sources,http://dx.doi.org/10.5194/astra-7-43-2011 -"The change of GRB polarization angles in the magnetic-dominated jet - model",http://dx.doi.org/10.1093/mnras/stu2051 -Perspective: Quantum Thermodynamics,http://dx.doi.org/10.1088/1367-2630/18/1/011002 -"Observational evidence for mass ejection accompanying short gamma ray - bursts",http://dx.doi.org/10.1093/mnrasl/slx131 -Photospheric Emission From Variable Engine Gamma Ray Burst Simulations,http://dx.doi.org/10.3847/1538-4357/aaeed1 -"The Divide-and-Conquer Framework: A Suitable Setting for the DDM of the - Future",http://arxiv.org/abs/1901.00229v1 -Spectral Puzzle of the Off-Axis Gamma-Ray Burst in GW170817,http://dx.doi.org/10.1093/mnras/stz1650 -"Equation-of-State, Critical Constants, and Thermodynamic Properties of - Lithium at High Energy Density",http://dx.doi.org/10.1063/1.5143308 -"Interpreting the X-ray afterglows of gamma-ray bursts with radiative - losses and millisecond magnetars",http://dx.doi.org/10.1093/mnras/staa3090 -"Wavelet Denoising and 
Attention-based RNN-ARIMA Model to Predict Forex - Price",http://arxiv.org/abs/2008.06841v1 -"Testing Blandford-Znajek mechanism in black hole hyperaccretion flows - for long-duration gamma-ray bursts",http://dx.doi.org/10.3847/1538-4357/abd6bd -"An Inquisitive Code Editor for Addressing Novice Programmers' - Misconceptions of Program Behavior",http://arxiv.org/abs/2102.06098v1 -"Deep Learning-Based Detection of the Acute Respiratory Distress - Syndrome: What Are the Models Learning?",http://arxiv.org/abs/2109.12323v1 -Cut the CARP: Fishing for zero-shot story evaluation,http://arxiv.org/abs/2110.03111v3 -"Governance of Ethical and Trustworthy AI Systems: Research Gaps in the - ECCOLA Method",http://dx.doi.org/10.1109/REW53955.2021.00042 -Solving Probability and Statistics Problems by Program Synthesis,http://arxiv.org/abs/2111.08267v1 -"Continuation-Passing Style, Defunctionalization, Accumulations, and - Associativity",http://dx.doi.org/10.22152/programming-journal.org/2022/6/7 -"Executive Function: A Contrastive Value Policy for Resampling and - Relabeling Perceptions via Hindsight Summarization?",http://arxiv.org/abs/2204.12639v1 -"helyOS: A customized off-the-shelf solution for autonomous driving - applications in delimited areas",http://dx.doi.org/10.1109/SII55687.2023.10039276 -The Creativity of Text-to-Image Generation,http://dx.doi.org/10.1145/3569219.3569352 -The Structure of Gamma Ray Burst Jets,http://arxiv.org/abs/2206.11088v2 -Training Transformers Together,http://arxiv.org/abs/2207.03481v1 -A Hazard Analysis Framework for Code Synthesis Large Language Models,http://arxiv.org/abs/2207.14157v1 -Erasure qubits: Overcoming the $T_1$ limit in superconducting circuits,http://arxiv.org/abs/2208.05461v1 -Griffith-based analysis of crack initiation location in a Brazilian test,http://arxiv.org/abs/2209.06456v1 -"ObSynth: An Interactive Synthesis System for Generating Object Models - from Natural Language Specifications",http://arxiv.org/abs/2210.11468v1 -Formalizing Chemical Physics using the Lean Theorem Prover,http://arxiv.org/abs/2210.12150v4 -Explaining the Explainers in Graph Neural Networks: a Comparative Study,http://arxiv.org/abs/2210.15304v2 -Using Developer Discussions to Guide Fixing Bugs in Software,http://arxiv.org/abs/2211.06335v1 -Coder Reviewer Reranking for Code Generation,http://arxiv.org/abs/2211.16490v1 -AI-driven Mobile Apps: an Explorative Study,http://arxiv.org/abs/2212.01635v1 -Pseudo Redshifts of Gamma-Ray Bursts Derived from the L-T-E Correlation,http://dx.doi.org/10.3847/1538-4357/acaefd -"Natural Language to Code Generation in Interactive Data Science - Notebooks",http://arxiv.org/abs/2212.09248v1 -Conversational Automated Program Repair,http://arxiv.org/abs/2301.13246v1 -RealFusion: 360° Reconstruction of Any Object from a Single Image,http://arxiv.org/abs/2302.10663v2 -"Implication of GRB 221009A: Can TeV Emission Come from the GRB Prompt - Phase?",http://dx.doi.org/10.1007/s11433-023-2128-9 -"Large Language Models and Simple, Stupid Bugs",http://arxiv.org/abs/2303.11455v1 -DroidBot-GPT: GPT-powered UI Automation for Android,http://arxiv.org/abs/2304.07061v1 -CodeKGC: Code Language Model for Generative Knowledge Graph Construction,http://arxiv.org/abs/2304.09048v1 -"SkillGPT: a RESTful API service for skill extraction and standardization - using a Large Language Model",http://arxiv.org/abs/2304.11060v2 -"Generative AI Perceptions: A Survey to Measure the Perceptions of - Faculty, Staff, and Students on Generative AI Tools in 
Academia",http://arxiv.org/abs/2304.14415v1 -"Exploring the Effectiveness of Large Language Models in Generating Unit - Tests",http://arxiv.org/abs/2305.00418v1 -"FrugalGPT: How to Use Large Language Models While Reducing Cost and - Improving Performance",http://arxiv.org/abs/2305.05176v1 -Constructing Dreams using Generative AI,http://arxiv.org/abs/2305.12013v1 -"Interactive Data Synthesis for Systematic Vision Adaptation via - LLMs-AIGCs Collaboration",http://arxiv.org/abs/2305.12799v1 -Making Language Models Better Tool Learners with Execution Feedback,http://arxiv.org/abs/2305.13068v1 -Enabling Large Language Models to Generate Text with Citations,http://arxiv.org/abs/2305.14627v1 -"Conformal Prediction with Large Language Models for Multi-Choice - Question Answering",http://arxiv.org/abs/2305.18404v3 -Test-Time Training on Nearest Neighbors for Large Language Models,http://arxiv.org/abs/2305.18466v2 -"Coeditor: Leveraging Contextual Changes for Multi-round Code - Auto-editing",http://arxiv.org/abs/2305.18584v1 -"CONA: A novel CONtext-Aware instruction paradigm for communication using - large language model",http://arxiv.org/abs/2305.18620v1 -"Contextualizing Problems to Student Interests at Scale in Intelligent - Tutoring System Using Large Language Models",http://arxiv.org/abs/2306.00190v1 -"ChatGPT as a mapping assistant: A novel method to enrich maps with - generative AI and content derived from street-level photographs",http://dx.doi.org/10.25436/E2ZW27 -Scalable 3D Captioning with Pretrained Models,http://arxiv.org/abs/2306.07279v2 -CMMLU: Measuring massive multitask language understanding in Chinese,http://arxiv.org/abs/2306.09212v1 -FALL-E: A Foley Sound Synthesis Model and Strategies,http://arxiv.org/abs/2306.09807v2 -Approaching Unanticipated Consequences,http://arxiv.org/abs/2306.09959v1 -The Cultivated Practices of Text-to-Image Generation,http://arxiv.org/abs/2306.11393v1 -Mass-Producing Failures of Multimodal Systems with Language Models,http://arxiv.org/abs/2306.12105v1 -"Full Automation of Goal-driven LLM Dialog Threads with And-Or Recursors - and Refiner Oracles",http://arxiv.org/abs/2306.14077v1 -Pumping with Symmetry,http://arxiv.org/abs/2306.16401v1 -Can Large Language Models Write Good Property-Based Tests?,http://arxiv.org/abs/2307.04346v1 -MGit: A Model Versioning and Management System,http://arxiv.org/abs/2307.07507v1 -Chit-Chat or Deep Talk: Prompt Engineering for Process Mining,http://arxiv.org/abs/2307.09909v1 -Large Language Models can accomplish Business Process Management Tasks,http://arxiv.org/abs/2307.09923v1 -"SentimentGPT: Exploiting GPT for Advanced Sentiment Analysis and its - Departure from Current Machine Learning",http://arxiv.org/abs/2307.10234v2 -Alpha-GPT: Human-AI Interactive Alpha Mining for Quantitative Investment,http://arxiv.org/abs/2308.00016v1 -Fixing Rust Compilation Errors using LLMs,http://arxiv.org/abs/2308.05177v1 -"RTLLM: An Open-Source Benchmark for Design RTL Generation with Large - Language Model",http://arxiv.org/abs/2308.05345v2 -"Multiplicity counting using organic scintillators to distinguish neutron - sources: An advanced teaching laboratory",http://arxiv.org/abs/2308.06282v1 -Data Race Detection Using Large Language Models,http://arxiv.org/abs/2308.07505v2 -"DataRaceBench V1.4.1 and DataRaceBench-ML V0.1: Benchmark Suites for - Data Race Detection",http://arxiv.org/abs/2308.08473v1 -FocalDreamer: Text-driven 3D Editing via Focal-fusion Assembly,http://arxiv.org/abs/2308.10608v2 -"LLM2KB: Constructing Knowledge Bases using 
instruction tuned context - aware Large Language Models",http://arxiv.org/abs/2308.13207v1 -"FurChat: An Embodied Conversational Agent using LLMs, Combining Open and - Closed-Domain Dialogue with Facial Expressions",http://arxiv.org/abs/2308.15214v2 -Large Language Models as Data Preprocessors,http://arxiv.org/abs/2308.16361v1 -"ChatGPT and Excel -- trust, but verify",http://arxiv.org/abs/2309.00120v1 -LogGPT: Exploring ChatGPT for Log-Based Anomaly Detection,http://arxiv.org/abs/2309.01189v1 -"Towards Foundational AI Models for Additive Manufacturing: Language - Models for G-Code Debugging, Manipulation, and Comprehension",http://arxiv.org/abs/2309.02465v1 -FIAT: Fusing learning paradigms with Instruction-Accelerated Tuning,http://arxiv.org/abs/2309.04663v2 -Toward Reproducing Network Research Results Using Large Language Models,http://arxiv.org/abs/2309.04716v1 -"Characterizing Cyber Attacks against Space Systems with Missing Data: - Framework and Case Study",http://arxiv.org/abs/2309.04878v1 -"Kani: A Lightweight and Highly Hackable Framework for Building Language - Model Applications",http://arxiv.org/abs/2309.05542v1 -Interpretable learning of effective dynamics for multiscale systems,http://arxiv.org/abs/2309.05812v1 -"The first step is the hardest: Pitfalls of Representing and Tokenizing - Temporal Data for Large Language Models",http://arxiv.org/abs/2309.06236v1 -Commands as AI Conversations,http://dx.doi.org/10.1109/MS.2023.3307170 -Two Timin': Repairing Smart Contracts With A Two-Layered Approach,http://arxiv.org/abs/2309.07841v1 -"LLMR: Real-time Prompting of Interactive Worlds using Large Language - Models",http://arxiv.org/abs/2309.12276v1 -A Chat About Boring Problems: Studying GPT-based text normalization,http://arxiv.org/abs/2309.13426v1 -Watch Your Language: Large Language Models and Content Moderation,http://arxiv.org/abs/2309.14517v1 -"ANNCRIPS: Artificial Neural Networks for Cancer Research In Prediction & - Survival",http://arxiv.org/abs/2309.15803v1 -DynaCon: Dynamic Robot Planner with Contextual Awareness via LLMs,http://arxiv.org/abs/2309.16031v1 -"Cyber Sentinel: Exploring Conversational Agents in Streamlining Security - Tasks with GPT-4",http://arxiv.org/abs/2309.16422v1 -"A Sign Language Recognition System with Pepper, Lightweight-Transformer, - and LLM",http://arxiv.org/abs/2309.16898v1 -"Voice2Action: Language Models as Agent for Efficient Real-Time - Interaction in Virtual Reality",http://arxiv.org/abs/2310.00092v1 -"Chain of Natural Language Inference for Reducing Large Language Model - Ungrounded Hallucinations",http://arxiv.org/abs/2310.03951v2 -"CIFAR-10-Warehouse: Broad and More Realistic Testbeds in Model - Generalization Analysis",http://arxiv.org/abs/2310.04414v2 -Auto-survey Challenge,http://arxiv.org/abs/2310.04480v2 -"DiffNAS: Bootstrapping Diffusion Models by Prompting for Better - Architectures",http://arxiv.org/abs/2310.04750v2 -"Words into Action: Learning Diverse Humanoid Robot Behaviors using - Language Guided Iterative Motion Refinement",http://arxiv.org/abs/2310.06226v1 -Large Language Models for Propaganda Detection,http://arxiv.org/abs/2310.06422v1 -Jailbreaking Black Box Large Language Models in Twenty Queries,http://arxiv.org/abs/2310.08419v2 -Welfare Diplomacy: Benchmarking Language Model Cooperation,http://arxiv.org/abs/2310.08901v1 -Large Search Model: Redefining Search Stack in the Era of LLMs,http://arxiv.org/abs/2310.14587v1 -TaskDiff: A Similarity Metric for Task-Oriented Conversations,http://arxiv.org/abs/2310.15298v1 -"Late 
internal shock model for bright X-ray flares in Gamma-ray Burst - afterglows and GRB 011121",http://dx.doi.org/10.1111/j.1745-3933.2005.00102.x -"The variable X-ray light curve of GRB 050713A: the case of refreshed - shocks",http://dx.doi.org/10.1051/0004-6361:20065038 -"Relational Approach to Knowledge Engineering for POMDP-based Assistance - Systems as a Translation of a Psychological Model",http://dx.doi.org/10.1016/j.ijar.2013.03.006 -Scientific Literature Text Mining and the Case for Open Access,http://dx.doi.org/10.21428/14888 -"STREAK: An Efficient Engine for Processing Top-k SPARQL Queries with - Spatial Filters",http://arxiv.org/abs/1710.07411v1 -"A Large-Scale Empirical Comparison of Static and Dynamic Test Case - Prioritization Techniques",http://dx.doi.org/10.1145/2950290.2950344 -"Measuring and engineering the atomic mass density wave of a Gaussian - mass-polariton pulse in optical fibers",http://dx.doi.org/10.1117/12.2288288 -"CMI: An Online Multi-objective Genetic Autoscaler for Scientific and - Engineering Workflows in Cloud Infrastructures with Unreliable Virtual - Machines",http://arxiv.org/abs/1811.00989v1 -"Polarization with a 3-dimensional Mixed Magnetic Field and Its - Application to GRB 170817A",http://dx.doi.org/10.3847/1538-4357/aaf41d -"Propagation of Relativistic, Hydrodynamic, Intermittent Jets in a - Rotating, Collapsing GRB Progenitor Star",http://dx.doi.org/10.3847/1538-4357/833/1/116 -"Bimodal Long-Lasting Components in Short Gamma-Ray Bursts: Promising - Electromagnetic Counterparts to Neutron Star Binary Mergers",http://dx.doi.org/10.3847/1538-4357/aa8775 -"A security approach based on honeypots: Protecting Online Social network - from malicious profiles",http://arxiv.org/abs/1804.09988v1 -Hyperparameter Optimization for Effort Estimation,http://arxiv.org/abs/1805.00336v4 -"Event-shape engineering for the D-meson elliptic flow in mid-central - Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV",http://dx.doi.org/10.1007/JHEP02(2019)150 -Towards Quantifying Neurovascular Resilience,http://dx.doi.org/10.1007/978-3-030-33327-0_18 -"Collapsar R-Process Yields Can Reproduce [Eu/Fe] Abundance Scatter in - Metal-Poor Stars",http://dx.doi.org/10.3847/1538-4357/ac00b2 -"ERNIE-Code: Beyond English-Centric Cross-lingual Pretraining for - Programming Languages",http://arxiv.org/abs/2212.06742v2 -"Heterogeneous Anomaly Detection for Software Systems via Semi-supervised - Cross-modal Attention",http://arxiv.org/abs/2302.06914v1 -"Meta Learning to Bridge Vision and Language Models for Multimodal - Few-Shot Learning",http://arxiv.org/abs/2302.14794v1 -"Evaluating the Code Quality of AI-Assisted Code Generation Tools: An - Empirical Study on GitHub Copilot, Amazon CodeWhisperer, and ChatGPT",http://arxiv.org/abs/2304.10778v2 -"NER-to-MRC: Named-Entity Recognition Completely Solving as Machine - Reading Comprehension",http://arxiv.org/abs/2305.03970v1 -"Dynamic Graph Representation Learning for Depression Screening with - Transformer",http://arxiv.org/abs/2305.06447v1 -"Knowledge Refinement via Interaction Between Search Engines and Large - Language Models",http://arxiv.org/abs/2305.07402v2 -"A New Era in Software Security: Towards Self-Healing Software via Large - Language Models and Formal Verification",http://arxiv.org/abs/2305.14752v1 -Leveraging object detection for the identification of lung cancer,http://dx.doi.org/10.17148/IARJSET.2020.7526 -"Exploring the MIT Mathematics and EECS Curriculum Using Large Language - Models",http://arxiv.org/abs/2306.08997v2 
-"Exploring and Characterizing Large Language Models For Embedded System - Development and Debugging",http://arxiv.org/abs/2307.03817v1 -"Software Testing with Large Language Model: Survey, Landscape, and - Vision",http://arxiv.org/abs/2307.07221v1 -GPT-3 Models are Few-Shot Financial Reasoners,http://arxiv.org/abs/2307.13617v2 -"The Hitchhiker's Guide to Program Analysis: A Journey with Large - Language Models",http://arxiv.org/abs/2308.00245v2 -"LLM is Like a Box of Chocolates: the Non-determinism of ChatGPT in Code - Generation",http://arxiv.org/abs/2308.02828v1 -"Maat: Performance Metric Anomaly Anticipation for Cloud Services with - Conditional Diffusion",http://arxiv.org/abs/2308.07676v1 -"What Do Code Models Memorize? An Empirical Study on Large Language - Models of Code",http://arxiv.org/abs/2308.09932v1 -"Practical Anomaly Detection over Multivariate Monitoring Metrics for - Online Services",http://arxiv.org/abs/2308.09937v1 -"Unveiling the potential of large language models in generating semantic - and cross-language clones",http://arxiv.org/abs/2309.06424v1 -"BioinspiredLLM: Conversational Large Language Model for the Mechanics of - Biological and Bio-inspired Materials",http://arxiv.org/abs/2309.08788v1 -"Beyond Traditional Teaching: The Potential of Large Language Models and - Chatbots in Graduate Engineering Education",http://dx.doi.org/10.32388/MD04B0 -"Make LLM a Testing Expert: Bringing Human-like Interaction to Mobile GUI - Testing via Functionality-aware Decisions",http://arxiv.org/abs/2310.15780v1 -Prompt Photons in Photoproduction at HERA,http://dx.doi.org/10.1142/9789812778345_0055 -"Directed Diversity: Leveraging Language Embedding Distances for - Collective Creativity in Crowd Ideation",http://dx.doi.org/10.1145/3411764.3445782 -Learning How to Ask: Querying LMs with Mixtures of Soft Prompts,http://arxiv.org/abs/2104.06599v1 -OpenPrompt: An Open-source Framework for Prompt-learning,http://arxiv.org/abs/2111.01998v1 -"Decorate the Examples: A Simple Method of Prompt Design for Biomedical - Relation Extraction",http://arxiv.org/abs/2204.10360v1 -Prompt Injection: Parameterization of Fixed Inputs,http://arxiv.org/abs/2206.11349v2 -STT: Soft Template Tuning for Few-Shot Adaptation,http://arxiv.org/abs/2207.08408v1 -PLOT: Prompt Learning with Optimal Transport for Vision-Language Models,http://arxiv.org/abs/2210.01253v2 -Causal Intervention-based Prompt Debiasing for Event Argument Extraction,http://arxiv.org/abs/2210.01561v1 -XPrompt: Exploring the Extreme of Prompt Tuning,http://arxiv.org/abs/2210.04457v1 -Massive prompt cusps: A new signature of warm dark matter,http://dx.doi.org/10.1093/mnrasl/slad043 -"Scalable Prompt Generation for Semi-supervised Learning with Language - Models",http://arxiv.org/abs/2302.09236v1 -Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning,http://arxiv.org/abs/2303.02861v1 -"A Survey of Graph Prompting Methods: Techniques, Applications, and - Challenges",http://arxiv.org/abs/2303.07275v2 -Visual-Language Prompt Tuning with Knowledge-guided Context Optimization,http://arxiv.org/abs/2303.13283v1 -Boosted Prompt Ensembles for Large Language Models,http://arxiv.org/abs/2304.05970v1 -Towards Robust Prompts on Vision-Language Models,http://arxiv.org/abs/2304.08479v1 -"Prompts Should not be Seen as Secrets: Systematically Measuring Prompt - Extraction Attack Success",http://arxiv.org/abs/2307.06865v1 -LLM-Rec: Personalized Recommendation via Prompting Large Language Models,http://arxiv.org/abs/2307.15780v2 -"Unnatural language 
processing: How do language models handle - machine-generated prompts?",http://arxiv.org/abs/2310.15829v1 -"Model Tuning or Prompt Tuning? A Study of Large Language Models for - Clinical Concept and Relation Extraction",http://arxiv.org/abs/2310.06239v1 -Learning to Transfer Prompts for Text Generation,http://arxiv.org/abs/2205.01543v2 -"ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures - of Soft Prompts",http://arxiv.org/abs/2205.11961v2 -Decomposed Prompting: A Modular Approach for Solving Complex Tasks,http://arxiv.org/abs/2210.02406v2 -Unleashing the Power of Visual Prompting At the Pixel Level,http://arxiv.org/abs/2212.10556v2 -Dynamic Prompting: A Unified Framework for Prompt Tuning,http://arxiv.org/abs/2303.02909v2 -SelfPromer: Self-Prompt Dehazing Transformers with Depth-Consistency,http://arxiv.org/abs/2303.07033v2 -Prompt-Learning for Cross-Lingual Relation Extraction,http://arxiv.org/abs/2304.10354v1 -"Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning - by Large Language Models",http://arxiv.org/abs/2305.04091v3 -"Improving Probability-based Prompt Selection Through Unified Evaluation - and Analysis",http://arxiv.org/abs/2305.14877v1 -PromptNER: Prompt Locating and Typing for Named Entity Recognition,http://arxiv.org/abs/2305.17104v1 -"Self-regulating Prompts: Foundational Model Adaptation without - Forgetting",http://arxiv.org/abs/2307.06948v2 -"Mega-TTS 2: Zero-Shot Text-to-Speech with Arbitrary Length Speech - Prompts",http://arxiv.org/abs/2307.07218v2 -"Inclusive, prompt and non-prompt $\rm{J}/ψ$ identification in - proton-proton collisions at the Large Hadron Collider using machine learning",http://arxiv.org/abs/2308.00329v1 -ROSGPT_Vision: Commanding Robots Using Only Language Models' Prompts,http://arxiv.org/abs/2308.11236v2 -Evidence for a Canonical GRB Afterglow Light Curve in the Swift/XRT Data,http://dx.doi.org/10.1086/500724 -The nature of the late achromatic bump in GRB 120326A,http://dx.doi.org/10.1051/0004-6361/201424338 -"A lesson from GW170817: most neutron star mergers result in tightly - collimated successful GRB jets",http://dx.doi.org/10.1093/mnras/sty3093 -"Study of J/$ψ$ azimuthal anisotropy at forward rapidity in Pb-Pb - collisions at $\sqrt{{\textit s}_{\rm NN}}$ = 5.02 TeV",http://dx.doi.org/10.1007/JHEP02(2019)012 -"High latitude emission from structured jet of Gamma-Ray Bursts observed - off-axis",http://dx.doi.org/10.1051/0004-6361/202038265 -"Measuring the beaming angle of GRB 030329 by fitting the rebrightenings - in its multiband afterglow",http://dx.doi.org/10.1088/1674-4527/10/11/004 -"Constraining GRB Emission Physics with Extensive Early-Time, Multiband - Follow-up",http://dx.doi.org/10.1088/0004-637X/743/2/154 -"Jet Launching from Merging Magnetized Binary Neutron Stars with - Realistic Equations of State",http://dx.doi.org/10.1103/PhysRevD.104.124049 -"Code Generation Tools (Almost) for Free? 
A Study of Few-Shot, - Pre-Trained Language Models on Code",http://arxiv.org/abs/2206.01335v2 -FOLIO: Natural Language Reasoning with First-Order Logic,http://arxiv.org/abs/2209.00840v1 -GPT Takes the Bar Exam,http://arxiv.org/abs/2212.14402v1 -"GRB minimum variability timescale with Insight-HXMT and Swift: - implications for progenitor models, dissipation physics and GRB - classifications",http://dx.doi.org/10.1051/0004-6361/202245657 -Learning Performance-Improving Code Edits,http://arxiv.org/abs/2302.07867v3 -"Photometric and Spectroscopic Observations of GRB 190106A: Emission from - Reverse and Forward Shocks with Late-time Energy Injection",http://dx.doi.org/10.3847/1538-4357/acbd96 -Revisiting the Plastic Surgery Hypothesis via Large Language Models,http://arxiv.org/abs/2303.10494v1 -Capabilities of GPT-4 on Medical Challenge Problems,http://arxiv.org/abs/2303.13375v2 -"Keep the Conversation Going: Fixing 162 out of 337 bugs for $0.42 each - using ChatGPT",http://arxiv.org/abs/2304.00385v1 -"Evaluation of GPT-3.5 and GPT-4 for supporting real-world information - needs in healthcare delivery",http://arxiv.org/abs/2304.13714v3 -"Go Beyond The Obvious: Probing the gap of INFORMAL reasoning ability - between Humanity and LLMs by Detective Reasoning Puzzle Benchmark",http://arxiv.org/abs/2307.05113v2 -Label Supervised LLaMA Finetuning,http://arxiv.org/abs/2310.01208v1 -Human-in-the-loop Machine Translation with Large Language Model,http://arxiv.org/abs/2310.08908v1 -X-ray flare in XRF 050406: evidence for prolonged engine activity,http://dx.doi.org/10.1051/0004-6361:20054172 -Recent Results in Prompt Photon Production,http://arxiv.org/abs/hep-ex/0511035v1 -"Prompt Programming for Large Language Models: Beyond the Few-Shot - Paradigm",http://arxiv.org/abs/2102.07350v1 -How Many Data Points is a Prompt Worth?,http://arxiv.org/abs/2103.08493v2 -Prompt-Learning for Fine-Grained Entity Typing,http://arxiv.org/abs/2108.10604v1 -Reframing Instructional Prompts to GPTk's Language,http://arxiv.org/abs/2109.07830v3 -"Context-Tuning: Learning Contextualized Prompts for Natural Language - Generation",http://arxiv.org/abs/2201.08670v2 -Enhancing Cross-lingual Prompting with Dual Prompt Augmentation,http://arxiv.org/abs/2202.07255v2 -Evaluating Prompts Across Multiple Choice Tasks In a Zero-Shot Setting,http://arxiv.org/abs/2203.15754v1 -"Maieutic Prompting: Logically Consistent Reasoning with Recursive - Explanations",http://arxiv.org/abs/2205.11822v2 -Learning a Better Initialization for Soft Prompts via Meta-Learning,http://arxiv.org/abs/2205.12471v1 -"A Unified Generative Framework based on Prompt Learning for Various - Information Extraction Tasks",http://arxiv.org/abs/2209.11570v1 -Improving the Sample Efficiency of Prompt Tuning with Domain Adaptation,http://arxiv.org/abs/2210.02952v2 -"Knowledge Prompting in Pre-trained Language Model for Natural Language - Understanding",http://arxiv.org/abs/2210.08536v1 -GPS: Genetic Prompt Search for Efficient Few-shot Learning,http://arxiv.org/abs/2210.17041v1 -Time-aware Prompting for Text Generation,http://arxiv.org/abs/2211.02162v1 -Prompt and non-prompt charm baryons with ALICE,http://arxiv.org/abs/2211.11015v1 -TEMPERA: Test-Time Prompting via Reinforcement Learning,http://arxiv.org/abs/2211.11890v1 -Towards Robust NLG Bias Evaluation with Syntactically-diverse Prompts,http://arxiv.org/abs/2212.01700v1 -Exploiting Prompt Caption for Video Grounding,http://arxiv.org/abs/2301.05997v2 -Batch Prompting: Efficient Inference with Large Language Model 
APIs,http://arxiv.org/abs/2301.08721v2 -"Compositional Prompt Tuning with Motion Cues for Open-vocabulary Video - Relation Detection",http://arxiv.org/abs/2302.00268v1 -"Exploring the Representation Manifolds of Stable Diffusion Through the - Lens of Intrinsic Dimension",http://arxiv.org/abs/2302.09301v1 -Learning to Compress Prompts with Gist Tokens,http://arxiv.org/abs/2304.08467v2 -Divide and Prompt: Chain of Thought Prompting for Text-to-SQL,http://arxiv.org/abs/2304.11556v1 -Exploring the Curious Case of Code Prompts,http://arxiv.org/abs/2304.13250v1 -Black-box Prompt Tuning with Subspace Learning,http://arxiv.org/abs/2305.03518v1 -Flatness-Aware Prompt Selection Improves Accuracy and Sample Efficiency,http://arxiv.org/abs/2305.10713v2 -"Robust Prompt Optimization for Large Language Models Against - Distribution Shifts",http://arxiv.org/abs/2305.13954v2 -Do We Really Need a Large Number of Visual Prompts?,http://arxiv.org/abs/2305.17223v1 -NOTABLE: Transferable Backdoor Attacks Against Prompt-based NLP Models,http://arxiv.org/abs/2305.17826v1 -Prompt Algebra for Task Composition,http://arxiv.org/abs/2306.00310v1 -"Knowledge-Prompted Estimator: A Novel Approach to Explainable Machine - Translation Assessment",http://arxiv.org/abs/2306.07486v1 -Attribute Controlled Dialogue Prompting,http://arxiv.org/abs/2307.05228v1 -Discrete Prompt Compression with Reinforcement Learning,http://arxiv.org/abs/2308.08758v1 -Prompting Audios Using Acoustic Properties For Emotion Representation,http://arxiv.org/abs/2310.02298v2 -Towards Better Chain-of-Thought Prompting Strategies: A Survey,http://arxiv.org/abs/2310.04959v1 -"Text-driven Prompt Generation for Vision-Language Models in Federated - Learning",http://arxiv.org/abs/2310.06123v1 -Tree Prompting: Efficient Task Adaptation without Fine-Tuning,http://arxiv.org/abs/2310.14034v1 -"Instabilities in the Gamma Ray Burst central engine. What makes the jet - variable?",http://dx.doi.org/10.1017/S1743921310016388 -"Hot Electromagnetic Outflows. III. Displaced Fireball in a Strong - Magnetic Field",http://dx.doi.org/10.1088/0004-637X/791/1/46 -"LSQ14bdq: A Type Ic super-luminous supernova with a double-peaked light - curve",http://dx.doi.org/10.1088/2041-8205/807/1/L18 -"Sensor Fault Detection, Isolation and Identification Using Multiple - Model-based Hybrid Kalman Filter for Gas Turbine Engines",http://arxiv.org/abs/1505.02063v2 -"A Comprehensive Study of Gamma-Ray Burst Optical Emission: I. 
Flares and - Early Shallow Decay Component",http://dx.doi.org/10.1088/0004-637X/758/1/27 -Short gamma-ray burst central engines,http://dx.doi.org/10.1142/S021827181842004X -"Magnetar as Central Engine of Gamma-Ray Bursts: Central Engine-Jet - Connection, Wind-Jet Energy Partition, and Origin of Some Ultra-Long Bursts",http://dx.doi.org/10.3847/1538-4357/ab17dc -LOFAR early-time search for coherent radio emission from GRB 180706A,http://dx.doi.org/10.1093/mnras/stz2866 -"Automatic Generation of Test Cases based on Bug Reports: a Feasibility - Study with Large Language Models",http://arxiv.org/abs/2310.06320v1 -LPT: Long-tailed Prompt Tuning for Image Classification,http://arxiv.org/abs/2210.01033v2 -"Searching for Prompt and Long-Lived Dark Photons in Electro-Produced - $e^+e^-$ Pairs with the Heavy Photon Search Experiment at JLab",http://dx.doi.org/10.1103/PhysRevD.108.012015 -Promptness and Bounded Fairness in Concurrent and Parameterized Systems,http://arxiv.org/abs/1911.03122v2 -Discrete and Soft Prompting for Multilingual Models,http://arxiv.org/abs/2109.03630v1 -The Art of Prompting: Event Detection based on Type Specific Prompts,http://arxiv.org/abs/2204.07241v1 -"Prompt and non-prompt $J/ψ$ elliptic flow in Pb+Pb collisions at - $\sqrt{s_{_\text{NN}}} = 5.02$ TeV with the ATLAS detector",http://dx.doi.org/10.1140/epjc/s10052-018-6243-9 -PTR: Prompt Tuning with Rules for Text Classification,http://arxiv.org/abs/2105.11259v3 -"The capability of the Australian Square Kilometre Array Pathfinder to - detect prompt radio bursts from neutron star mergers",http://dx.doi.org/10.1017/pasa.2020.42 -SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer,http://arxiv.org/abs/2110.07904v2 -"Prompt Waywardness: The Curious Case of Discretized Interpretation of - Continuous Prompts",http://arxiv.org/abs/2112.08348v2 -"PromptSource: An Integrated Development Environment and Repository for - Natural Language Prompts",http://arxiv.org/abs/2202.01279v3 -Prompt-Guided Injection of Conformation to Pre-trained Protein Model,http://arxiv.org/abs/2202.02944v1 -"Tailor: A Prompt-Based Approach to Attribute-Based Controlled Text - Generation",http://arxiv.org/abs/2204.13362v1 -"Least-to-Most Prompting Enables Complex Reasoning in Large Language - Models",http://arxiv.org/abs/2205.10625v3 -"Supporting Vision-Language Model Inference with Causality-pruning - Knowledge Prompt",http://arxiv.org/abs/2205.11100v1 -Prompt-to-Prompt Image Editing with Cross Attention Control,http://arxiv.org/abs/2208.01626v1 -Dual Modality Prompt Tuning for Vision-Language Pre-Trained Model,http://dx.doi.org/10.1109/TMM.2023.3291588 -Bidirectional Language Models Are Also Few-shot Learners,http://arxiv.org/abs/2209.14500v2 -Universal Prompt Tuning for Graph Neural Networks,http://arxiv.org/abs/2209.15240v4 -Multitask Pre-training of Modular Prompt for Chinese Few-Shot Learning,http://arxiv.org/abs/2210.07565v3 -"DiffusionDB: A Large-scale Prompt Gallery Dataset for Text-to-Image - Generative Models",http://arxiv.org/abs/2210.14896v4 -"Controlling Personality Style in Dialogue with Zero-Shot Prompt-Based - Learning",http://arxiv.org/abs/2302.03848v1 -"PromptAid: Prompt Exploration, Perturbation, Testing and Iteration using - Visual Analytics for Large Language Models",http://arxiv.org/abs/2304.01964v2 -"Compress, Then Prompt: Improving Accuracy-Efficiency Trade-off of LLM - Inference with Transferable Prompt",http://arxiv.org/abs/2305.11186v2 -"PromptBench: Towards Evaluating the Robustness of Large Language Models - on 
Adversarial Prompts",http://arxiv.org/abs/2306.04528v4 -POP: Prompt Of Prompts for Continual Learning,http://arxiv.org/abs/2306.08200v1 -"Embroid: Unsupervised Prediction Smoothing Can Improve Few-Shot - Classification",http://arxiv.org/abs/2307.11031v1 -"Evoke: Evoking Critical Thinking Abilities in LLMs via Reviewer-Author - Prompt Editing",http://arxiv.org/abs/2310.13855v1 -"Visual-Attribute Prompt Learning for Progressive Mild Cognitive - Impairment Prediction",http://arxiv.org/abs/2310.14158v1 -"Cross-lingual Prompting: Improving Zero-shot Chain-of-Thought Reasoning - across Languages",http://arxiv.org/abs/2310.14799v1 -Ask Me Anything: A simple strategy for prompting language models,http://arxiv.org/abs/2210.02441v3 -The Swift X-ray flaring afterglow of GRB 050607,http://dx.doi.org/10.1086/504518 -"Measurement of chi_c1 and chi_c2 production with sqrt(s) = 7 TeV pp - collisions at ATLAS",http://dx.doi.org/10.1007/JHEP07(2014)154 -"Measurement of the differential cross-sections of prompt and non-prompt - production of $J/ψ$ and $ψ(2\mathrm{S})$ in $pp$ collisions at - $\sqrt{s} = 7$ and $8$ TeV with the ATLAS detector",http://dx.doi.org/10.1140/epjc/s10052-016-4050-8 -The Power of Scale for Parameter-Efficient Prompt Tuning,http://arxiv.org/abs/2104.08691v2 -Exploring Prompt-based Few-shot Learning for Grounded Dialog Generation,http://arxiv.org/abs/2109.06513v2 -"P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally - Across Scales and Tasks",http://arxiv.org/abs/2110.07602v3 -AdaPrompt: Adaptive Model Training for Prompt-based NLP,http://arxiv.org/abs/2202.04824v2 -Exploring Visual Prompts for Adapting Large-Scale Models,http://arxiv.org/abs/2203.17274v2 -Inverse is Better! Fast and Accurate Prompt for Few-shot Slot Tagging,http://arxiv.org/abs/2204.00885v1 -Prompt Distribution Learning,http://arxiv.org/abs/2205.03340v1 -"Prompt-and-Rerank: A Method for Zero-Shot and Few-Shot Arbitrary Textual - Style Transfer with Small Language Models",http://arxiv.org/abs/2205.11503v1 -Continuous QA Learning with Structured Prompts,http://arxiv.org/abs/2208.14602v2 -"Prompt Compression and Contrastive Conditioning for Controllability and - Toxicity Reduction in Language Models",http://arxiv.org/abs/2210.03162v1 -"Clip-Tuning: Towards Derivative-free Prompt Learning with a Mixture of - Rewards",http://arxiv.org/abs/2210.12050v1 -Generative Prompt Tuning for Relation Classification,http://arxiv.org/abs/2210.12435v1 -"ConsPrompt: Easily Exploiting Contrastive Samples for Few-shot Prompt - Learning",http://arxiv.org/abs/2211.04118v1 -FPT: Improving Prompt Tuning Efficiency via Progressive Training,http://arxiv.org/abs/2211.06840v1 -Extensible Prompts for Language Models,http://arxiv.org/abs/2212.00616v1 -"Leveraging Large Language Models to Power Chatbots for Collecting User - Self-Reported Data",http://arxiv.org/abs/2301.05843v2 -Debiased Fine-Tuning for Vision-language Models by Prompt Regularization,http://arxiv.org/abs/2301.12429v2 -"Global Constraints with Prompting for Zero-Shot Event Argument - Classification",http://arxiv.org/abs/2302.04459v1 -"À-la-carte Prompt Tuning (APT): Combining Distinct Data Via Composable - Prompting",http://arxiv.org/abs/2302.07994v1 -"StyLIP: Multi-Scale Style-Conditioned Prompt Learning for CLIP-based - Domain Generalization",http://arxiv.org/abs/2302.09251v2 -Self-Supervised Convolutional Visual Prompts,http://arxiv.org/abs/2303.00198v1 -"From Visual Prompt Learning to Zero-Shot Transfer: Mapping Is All You - 
Need",http://arxiv.org/abs/2303.05266v1 -Learning Expressive Prompting With Residuals for Vision Transformers,http://arxiv.org/abs/2303.15591v1 -Does Prompt-Tuning Language Model Ensure Privacy?,http://arxiv.org/abs/2304.03472v2 -Revisiting Automated Prompting: Are We Actually Doing Better?,http://arxiv.org/abs/2304.03609v2 -"MVP-SEG: Multi-View Prompt Learning for Open-Vocabulary Semantic - Segmentation",http://arxiv.org/abs/2304.06957v1 -Instruction-ViT: Multi-Modal Prompts for Instruction Learning in ViT,http://arxiv.org/abs/2305.00201v1 -PromptUNet: Toward Interactive Medical Image Segmentation,http://arxiv.org/abs/2305.10300v1 -The CLIP Model is Secretly an Image-to-Prompt Converter,http://arxiv.org/abs/2305.12716v1 -"LLM-grounded Diffusion: Enhancing Prompt Understanding of Text-to-Image - Diffusion Models with Large Language Models",http://arxiv.org/abs/2305.13655v2 -Exploring Chain-of-Thought Style Prompting for Text-to-SQL,http://arxiv.org/abs/2305.14215v1 -Hierarchical Prompting Assists Large Language Model on Web Navigation,http://arxiv.org/abs/2305.14257v2 -"Syntax-aware Hybrid prompt model for Few-shot multi-modal sentiment - analysis",http://arxiv.org/abs/2306.01312v2 -Retrieval-Enhanced Visual Prompt Learning for Few-shot Classification,http://arxiv.org/abs/2306.02243v1 -"Gotta: Generative Few-shot Question Answering by Prompt-based Cloze Data - Augmentation",http://arxiv.org/abs/2306.04101v1 -Background Prompting for Improved Object Depth,http://arxiv.org/abs/2306.05428v1 -"Using Foundation Models to Detect Policy Violations with Minimal - Supervision",http://arxiv.org/abs/2306.06234v1 -"MuDPT: Multi-modal Deep-symphysis Prompt Tuning for Large Pre-trained - Vision-Language Models",http://arxiv.org/abs/2306.11400v1 -"ProRes: Exploring Degradation-aware Visual Prompt for Universal Image - Restoration",http://arxiv.org/abs/2306.13653v1 -"PM-DETR: Domain Adaptive Prompt Memory for Object Detection with - Transformers",http://arxiv.org/abs/2307.00313v1 -"Analyzing Chain-of-Thought Prompting in Large Language Models via - Gradient-based Feature Attributions",http://arxiv.org/abs/2307.13339v1 -"Forward production of prompt neutrinos in the atmosphere and at - high-energy colliders",http://arxiv.org/abs/2308.02808v1 -Prompt Guided Copy Mechanism for Conversational Question Answering,http://arxiv.org/abs/2308.03422v1 -"Emotion-Conditioned Text Generation through Automatic Prompt - Optimization",http://arxiv.org/abs/2308.04857v1 -"Prompting a Large Language Model to Generate Diverse Motivational - Messages: A Comparison with Human-Written Messages",http://arxiv.org/abs/2308.13479v1 -Language Prompt for Autonomous Driving,http://arxiv.org/abs/2309.04379v1 -Are Soft Prompts Good Zero-shot Learners for Speech Recognition?,http://arxiv.org/abs/2309.09413v1 -"UniPCM: Universal Pre-trained Conversation Model with Task-aware - Automatic Prompt",http://arxiv.org/abs/2309.11065v1 -"EvalLM: Interactive Evaluation of Large Language Model Prompts on - User-Defined Criteria",http://arxiv.org/abs/2309.13633v1 -Tuning Multi-mode Token-level Prompt Alignment across Modalities,http://arxiv.org/abs/2309.13847v1 -"SurrogatePrompt: Bypassing the Safety Filter of Text-To-Image Models via - Substitution",http://arxiv.org/abs/2309.14122v1 -Decomposed Prompt Tuning via Low-Rank Reparameterization,http://arxiv.org/abs/2310.10094v1 -"Progressive3D: Progressively Local Editing for Text-to-3D Content - Creation with Complex Semantic Prompts",http://arxiv.org/abs/2310.11784v1 -Prompt Injection Attacks and 
Defenses in LLM-Integrated Applications,http://arxiv.org/abs/2310.12815v1 -"The Non-Relativistic Evolution of GRBs 980703 and 970508: - Beaming-Independent Calorimetry",http://dx.doi.org/10.1086/422809 -"Late-Time Radio Re-Brightening of Gamma-Ray Burst Afterglows: Evidence - for Double-Sided Jets",http://dx.doi.org/10.1086/425498 -Prompt optical observations of GRB050319 with the Swift UVOT,http://dx.doi.org/10.1086/499293 -"The optical flare and afterglow light curve of GRB 050904 at redshift - z=6.29",http://dx.doi.org/10.1086/500259 -X-ray flares: late internal and late external shocks,http://arxiv.org/abs/astro-ph/0512555v1 -"Gamma-Ray Burst Spectral Correlations: Photospheric and Injection - Effects",http://dx.doi.org/10.1086/508410 -Spectral and timing properties of a dissipative GRB photosphere,http://dx.doi.org/10.1051/0004-6361:20066739 -"Temporal Evolution Of Thermal Emission From Relativistically Expanding - Plasma",http://dx.doi.org/10.1086/588136 -"Prospects for detection of very high-energy emission from GRB in the - context of the external shock model",http://dx.doi.org/10.1051/0004-6361:200810218 -"Phase transitions and He-synthesis driven winds in neutrino cooled - accretion disks: prospects for late flares in short gamma-ray bursts",http://dx.doi.org/10.1088/0004-637X/699/2/L93 -"XRF 100316D/SN 2010bh: clue to the diverse origin of nearby - supernova-associated GRBs",http://dx.doi.org/10.1088/0004-637X/726/1/32 -"Stepwise Filter Correlation Method and Evidence of Superposed - Variability Components in GRB Prompt Emission Lightcurves",http://dx.doi.org/10.1088/0004-637X/748/2/134 -The Price of Privacy in Untrusted Recommendation Engines,http://arxiv.org/abs/1207.3269v2 -"The Discovery of a New Instability in a Hyperaccretion Flow and its - Implication for Gamma-ray Bursts",http://dx.doi.org/10.1088/2041-8205/777/1/L15 -"Radio observations of GRB 100418a: Test of an energy injection model - explaining long-lasting GRB afterglows",http://dx.doi.org/10.1088/0004-637X/779/2/105 -"Nuclear Equation of State from Observations of Short Gamma-Ray Burst - Remnants",http://dx.doi.org/10.1103/PhysRevD.89.047302 -"Gamma-ray polarization of synchrotron-self-Compton process from a highly - relativistic jet",http://dx.doi.org/10.1088/0004-637X/795/1/36 -"Optical light curve of GRB 121011A: a textbook for the onset of GRB - afterglow in a mixture of ISM and wind-type medium",http://dx.doi.org/10.1088/1674-4527/16/1/012 -"Comprehensive study of the X-ray flares from gamma-ray bursts observed - by Swift",http://dx.doi.org/10.3847/0067-0049/224/2/20 -Observation of Topological Nodal Fermion Semimetal Phase in ZrSiS,http://dx.doi.org/10.1103/PhysRevB.93.201104 -Black hole accretion in gamma ray bursts,http://arxiv.org/abs/1701.07753v1 -More declarative tabling in Prolog using multi-prompt delimited control,http://arxiv.org/abs/1708.07081v2 -"Norm violation in online communities -- A study of Stack Overflow - comments",http://dx.doi.org/10.1007/978-3-030-72376-7_2 -"To power the X-ray plateaus of gamma-ray bursts through larger amplitude - electromagnetic waves",http://dx.doi.org/10.3847/1538-4357/abaf4d -"Tailoring magnetic energies to form dipole skyrmions and skyrmion - lattices",http://dx.doi.org/10.1103/PhysRevB.95.024415 -"Musical NeuroPicks: a consumer-grade BCI for on-demand music streaming - services",http://arxiv.org/abs/1709.01116v1 -Techniques and Challenges in Speech Synthesis,http://arxiv.org/abs/1709.07552v1 -"Coordination Technology for Active Support Networks: Context, - 
Needfinding, and Design",http://dx.doi.org/10.1007/s00146-017-0778-4 -"The impact of hexagonal boron nitride encapsulation on the structural - and vibrational properties of few layer black phosphorus",http://dx.doi.org/10.1088/1361-6528/ab0332 -"Mixing enhancement induced by viscoelastic micromotors in microfluidic - platform",http://dx.doi.org/10.1016/j.cej.2019.123572 -"Transverse-momentum and event-shape dependence of D-meson flow harmonics - in Pb-Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV",http://dx.doi.org/10.1016/j.physletb.2020.136054 -"Countering Gattaca: Efficient and Secure Testing of Fully-Sequenced - Human Genomes (Full Version)",http://arxiv.org/abs/1110.2478v3 -"Spreadsheets in Financial Departments: An Automated Analysis of 65,000 - Spreadsheets using the Luminous Technology",http://arxiv.org/abs/1111.6866v1 -"Diversity Of Short Gamma-Ray Burst Afterglows From Compact Binary - Mergers Hosting Pulsars",http://dx.doi.org/10.1088/2041-8205/790/1/L3 -"A Comparison between Radio Loud and Quiet Gamma-Ray Bursts, and Evidence - for a Potential Correlation between Intrinsic Duration and Redshift in the - Radio Loud Population",http://arxiv.org/abs/1809.04190v2 -"A Role-Based Approach for Orchestrating Emergent Configurations in the - Internet of Things",http://arxiv.org/abs/1809.09870v1 -Towards Multi-container Deployment on IoT Gateways,http://arxiv.org/abs/1810.07753v1 -"""Double-tracking"" Characteristic of the Spectral Evolution of GRB - 131231A: Synchrotron Origin?",http://dx.doi.org/10.3847/1538-4357/ab40b9 -"A lifestyle-based model of household neighbourhood location and - individual travel mode choice behaviours",http://arxiv.org/abs/1902.01986v2 -Diversity of kilonova light curves,http://dx.doi.org/10.3847/1538-4357/ab61f6 -Higgs-mediated optical amplification in a non-equilibrium superconductor,http://dx.doi.org/10.1103/PhysRevX.11.011055 -"Constraining short gamma-ray burst jet properties with gravitational - waves and gamma rays",http://dx.doi.org/10.3847/1538-4357/ab7eaf -"Analysis of Ownership and Travel Behavior of Women Who Drive Electric - Vehicles: The case of Maryland",http://arxiv.org/abs/1911.03943v1 -Multi-messenger signals from short gamma ray bursts,http://arxiv.org/abs/1911.05364v2 -Plunge into the Underworld: A Survey on Emergence of Darknet,http://dx.doi.org/10.1109/CSCI49370.2019.00032 -Self-organized Criticality in Multi-pulse Gamma-Ray Bursts,http://arxiv.org/abs/2009.06180v1 -Performance-based screening of porous materials for carbon capture,http://arxiv.org/abs/2009.12289v2 -COSEA: Convolutional Code Search with Layer-wise Attention,http://arxiv.org/abs/2010.09520v1 -"Temporal Properties of Precursors, Main peaks and Extended Emissions of - Short GRBs in the Third Swift/BAT GRB Catalog",http://dx.doi.org/10.3847/1538-4365/abd3fd -Do Abstractions Have Politics? 
Toward a More Critical Algorithm Analysis,http://dx.doi.org/10.1109/RESPECT51740.2021.9620635 -"Symbolic Knowledge Distillation: from General Language Models to - Commonsense Models",http://arxiv.org/abs/2110.07178v2 -"Statistical analyses on the energies of X-ray plateaus and flares in - gamma-ray bursts",http://dx.doi.org/10.3847/1538-4357/ac35e7 -"Can Pre-trained Language Models be Used to Resolve Textual and Semantic - Merge Conflicts?",http://arxiv.org/abs/2111.11904v1 -COVID-19 Electrocardiograms Classification using CNN Models,http://arxiv.org/abs/2112.08931v1 -"Reproducibility Challenges and Their Impacts on Technical Q&A Websites: - The Practitioners' Perspectives",http://arxiv.org/abs/2112.10056v1 -"Designing Social Distancing Policies for the COVID-19 Pandemic: A - probabilistic model predictive control approach",http://dx.doi.org/10.3934/mbe.2022409 -What Makes a Good Commit Message?,http://dx.doi.org/10.1145/3510003.3510205 -Red Teaming Language Models with Language Models,http://arxiv.org/abs/2202.03286v1 -"The Perception of Filipinos on the Advent of Cryptocurrency and - Non-Fungible Token (NFT) Games",http://dx.doi.org/10.25147/ijcsr.2017.001.1.89 -"A Grounded Theory of Coordination in Remote-First and Hybrid Software - Teams",http://arxiv.org/abs/2202.10445v2 -Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language,http://arxiv.org/abs/2204.00598v2 -Gamma-Ray Bursts,http://dx.doi.org/10.1007/978-981-16-4544-0_126-1 -"Repeating fast radio burst 20201124A originates from a magnetar/Be star - binary",http://dx.doi.org/10.1038/s41467-022-31923-y -"Comparison of Brick and Project Haystack to Support Smart Building - Applications",http://dx.doi.org/10.1016/j.aei.2020.101233 -On LGRB progenitors: an approach from thermally-produced neutrinos,http://arxiv.org/abs/2205.06967v1 -Referring Image Matting,http://arxiv.org/abs/2206.05149v3 -"Optimal Dichotomy of Temporal Scales and Boundedness and Stability of - Time-Varying Multidimensional Nonlinear Systems",http://arxiv.org/abs/2206.07224v1 -"Heterogeneous Anomaly Detection for Software Systems via Attentive - Multi-modal Learning",http://arxiv.org/abs/2207.02918v2 -"Determine the Core Structure and Nuclear Equation of State of Rotating - Core-Collapse Supernovae with Gravitational Waves by Convolutional Neural - Networks",http://arxiv.org/abs/2209.10089v1 -Large Language Models are Pretty Good Zero-Shot Video Game Bug Detectors,http://arxiv.org/abs/2210.02506v1 -"PENTATRON: PErsonalized coNText-Aware Transformer for Retrieval-based - cOnversational uNderstanding",http://arxiv.org/abs/2210.12308v1 -Scalable Modular Synthetic Data Generation for Advancing Aerial Autonomy,http://dx.doi.org/10.1016/j.robot.2023.104464 -DISCO: Distilling Counterfactuals with Large Language Models,http://arxiv.org/abs/2212.10534v3 -"The Algonauts Project 2023 Challenge: How the Human Brain Makes Sense of - Natural Scenes",http://arxiv.org/abs/2301.03198v4 -User-Customizable Transpilation of Scripting Languages,http://arxiv.org/abs/2301.11220v2 -"Towards Equitable Representation in Text-to-Image Synthesis Models with - the Cross-Cultural Understanding Benchmark (CCUB) Dataset",http://arxiv.org/abs/2301.12073v2 -Mnemosyne: Learning to Train Transformers with Transformers,http://arxiv.org/abs/2302.01128v3 -CHiLS: Zero-Shot Image Classification with Hierarchical Label Sets,http://arxiv.org/abs/2302.02551v3 -GRB 190114C: Fireball Energy Budget and Radiative Efficiency Revisited,http://arxiv.org/abs/2302.06116v1 -Controlled and Conditional Text 
to Image Generation with Diffusion Prior,http://arxiv.org/abs/2302.11710v2 -"Electric Vehicle Sales Forecasting Model Considering Green Premium: A - Chinese Market-based Perspective",http://arxiv.org/abs/2302.13893v1 -"GLM-Dialog: Noise-tolerant Pre-training for Knowledge-grounded Dialogue - Generation",http://arxiv.org/abs/2302.14401v1 -"Homochiral antiferromagnetic merons, antimerons and bimerons realized in - synthetic antiferromagnets",http://arxiv.org/abs/2303.14853v1 -"Humans in Humans Out: On GPT Converging Toward Common Sense in both - Success and Failure",http://arxiv.org/abs/2303.17276v1 -"Pair Programming with Large Language Models for Sampling and Estimation - of Copulas",http://arxiv.org/abs/2303.18116v1 -"Unlocking the Potential of ChatGPT: A Comprehensive Exploration of its - Applications, Advantages, Limitations, and Future Directions in Natural - Language Processing",http://arxiv.org/abs/2304.02017v5 -"Explainable Automated Debugging via Large Language Model-driven - Scientific Debugging",http://arxiv.org/abs/2304.02195v1 -GRB 211211A: a Neutron Star$-$White Dwarf Merger?,http://dx.doi.org/10.3847/2041-8213/acca83 -"Prompt: Probability-Conserved Cross Section Biasing Monte Carlo Particle - Transport System",http://arxiv.org/abs/2304.06226v1 -"Framing the News:From Human Perception to Large Language Model - Inferences",http://dx.doi.org/10.1145/3591106.3592278 -"Automated Code generation for Information Technology Tasks in YAML - through Large Language Models",http://arxiv.org/abs/2305.02783v4 -Zelda: Video Analytics using Vision-Language Models,http://arxiv.org/abs/2305.03785v1 -"Large Language Models Can Be Used To Effectively Scale Spear Phishing - Campaigns",http://arxiv.org/abs/2305.06972v2 -"Text2Cohort: Democratizing the NCI Imaging Data Commons with Natural - Language Cohort Discovery",http://arxiv.org/abs/2305.07637v2 -"Exploring In-Context Learning Capabilities of Foundation Models for - Generating Knowledge Graphs from Text",http://arxiv.org/abs/2305.08804v1 -"Knowledge Graph Completion Models are Few-shot Learners: An Empirical - Study of Relation Labeling in E-commerce with LLMs",http://arxiv.org/abs/2305.09858v1 -"Federated Foundation Models: Privacy-Preserving and Collaborative - Learning for Large Models",http://arxiv.org/abs/2305.11414v1 -VisorGPT: Learning Visual Prior via Generative Pre-Training,http://arxiv.org/abs/2305.13777v4 -ChipGPT: How far are we from natural language hardware design,http://arxiv.org/abs/2305.14019v3 -Query Rewriting for Retrieval-Augmented Large Language Models,http://arxiv.org/abs/2305.14283v3 -VIP5: Towards Multimodal Foundation Models for Recommendation,http://arxiv.org/abs/2305.14302v2 -ALGO: Synthesizing Algorithmic Programs with Generated Oracle Verifiers,http://arxiv.org/abs/2305.14591v1 -"Game of Tones: Faculty detection of GPT-4 generated content in - university assessments",http://arxiv.org/abs/2305.18081v1 -"What can Large Language Models do in chemistry? 
A comprehensive - benchmark on eight tasks",http://arxiv.org/abs/2305.18365v2 -Event-Centric Query Expansion in Web Search,http://arxiv.org/abs/2305.19019v1 -"Responsible Task Automation: Empowering Large Language Models as - Responsible Task Automators",http://arxiv.org/abs/2306.01242v1 -EvLog: Identifying Anomalous Logs over Software Evolution,http://arxiv.org/abs/2306.01509v2 -"Determination of Outflow Properties for the Quasi-thermal - Radiation-Dominated Gamma-Ray Bursts",http://arxiv.org/abs/2306.02248v2 -Properties of Gamma-Ray Bursts Associated with Supernovae and Kilonovae,http://dx.doi.org/10.1093/mnras/stad1648 -Can large language models democratize access to dual-use biotechnology?,http://arxiv.org/abs/2306.03809v1 -Detecting Phishing Sites Using ChatGPT,http://arxiv.org/abs/2306.05816v1 -"A Survey on Segment Anything Model (SAM): Vision Foundation Model Meets - Prompt Engineering",http://arxiv.org/abs/2306.06211v3 -The economic trade-offs of large language models: A case study,http://arxiv.org/abs/2306.07402v1 -Adding guardrails to advanced chatbots,http://arxiv.org/abs/2306.07500v1 -TART: A plug-and-play Transformer module for task-agnostic reasoning,http://arxiv.org/abs/2306.07536v1 -"RM-PRT: Realistic Robotic Manipulation Simulator and Benchmark with - Progressive Reasoning Tasks",http://arxiv.org/abs/2306.11335v2 -"Exploring the Effectiveness of Dataset Synthesis: An application of - Apple Detection in Orchards",http://arxiv.org/abs/2306.11763v1 -"MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language - Models",http://arxiv.org/abs/2306.13394v2 -"InterCode: Standardizing and Benchmarking Interactive Coding with - Execution Feedback",http://arxiv.org/abs/2306.14898v2 -"MyCrunchGPT: A chatGPT assisted framework for scientific machine - learning",http://arxiv.org/abs/2306.15551v2 -"From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and - Privacy",http://arxiv.org/abs/2307.00691v1 -"Exploring the Effectiveness of LLMs in Automated Logging Generation: An - Empirical Study",http://arxiv.org/abs/2307.05950v1 -"Multi-Method Self-Training: Improving Code Generation With Text, And - Vice Versa",http://arxiv.org/abs/2307.10633v1 -Private-Library-Oriented Code Generation with Large Language Models,http://arxiv.org/abs/2307.15370v1 -"Bitcoin Gold, Litecoin Silver:An Introduction to Cryptocurrency's - Valuation and Trading Strategy",http://arxiv.org/abs/2308.00013v1 -"FinPT: Financial Risk Prediction with Profile Tuning on Pretrained - Foundation Models",http://arxiv.org/abs/2308.00065v1 -MetaGPT: Meta Programming for Multi-Agent Collaborative Framework,http://arxiv.org/abs/2308.00352v4 -Flows: Building Blocks of Reasoning and Collaborating AI,http://arxiv.org/abs/2308.01285v1 -"The All-Seeing Project: Towards Panoptic Visual Recognition and - Understanding of the Open World",http://arxiv.org/abs/2308.01907v1 -"An Empirical Study on Using Large Language Models to Analyze Software - Supply Chain Security Failures",http://arxiv.org/abs/2308.04898v1 -"Your DRM Can Watch You Too: Exploring the Privacy Implications of - Browsers (mis)Implementations of Widevine EME",http://dx.doi.org/10.56553/popets-2023-0112 -CodeCoT and Beyond: Learning to Program and Test like a Developer,http://arxiv.org/abs/2308.08784v1 -"Towards Automatically Addressing Self-Admitted Technical Debt: How Far - Are We?",http://arxiv.org/abs/2308.08943v1 -"Dcc --help: Generating Context-Aware Compiler Error Explanations with - Large Language Models",http://arxiv.org/abs/2308.11873v2 
-Diagnosing Infeasible Optimization Problems Using Large Language Models,http://arxiv.org/abs/2308.12923v1 -"The effect of branchless collisions and population control on - correlations in Monte Carlo power iteration",http://arxiv.org/abs/2309.03767v1 -Comparing Llama-2 and GPT-3 LLMs for HPC kernels generation,http://arxiv.org/abs/2309.07103v1 -"GPTFUZZER: Red Teaming Large Language Models with Auto-Generated - Jailbreak Prompts",http://arxiv.org/abs/2309.10253v2 -Is GPT4 a Good Trader?,http://arxiv.org/abs/2309.10982v1 -"AI-Copilot for Business Optimisation: A Framework and A Case Study in - Production Scheduling",http://arxiv.org/abs/2309.13218v3 -"An AI Chatbot for Explaining Deep Reinforcement Learning Decisions of - Service-oriented Systems",http://arxiv.org/abs/2309.14391v1 -"Suspicion-Agent: Playing Imperfect Information Games with Theory of Mind - Aware GPT-4",http://arxiv.org/abs/2309.17277v2 -"L2CEval: Evaluating Language-to-Code Generation Capabilities of Large - Language Models",http://arxiv.org/abs/2309.17446v2 -"L2MAC: Large Language Model Automatic Computer for Unbounded Code - Generation",http://arxiv.org/abs/2310.02003v1 -Reverse Chain: A Generic-Rule for LLMs to Master Multi-API Planning,http://arxiv.org/abs/2310.04474v2 -"JVNV: A Corpus of Japanese Emotional Speech with Verbal Content and - Nonverbal Expressions",http://arxiv.org/abs/2310.06072v1 -"LgTS: Dynamic Task Sampling using LLM-generated sub-goals for - Reinforcement Learning Agents",http://arxiv.org/abs/2310.09454v1 -Large Language Model-Aware In-Context Learning for Code Generation,http://arxiv.org/abs/2310.09748v1 -"Large Language Model-Empowered Agents for Simulating Macroeconomic - Activities",http://arxiv.org/abs/2310.10436v1 -"ClarifyGPT: Empowering LLM-based Code Generation with Intention - Clarification",http://arxiv.org/abs/2310.10996v1 -Large Language Model for Multi-objective Evolutionary Optimization,http://arxiv.org/abs/2310.12541v1 -Eureka: Human-Level Reward Design via Coding Large Language Models,http://arxiv.org/abs/2310.12931v1 -"Enhancing Zero-Shot Crypto Sentiment with Fine-tuned Language Model and - Prompt Engineering",http://arxiv.org/abs/2310.13226v1 -How Can We Know What Language Models Know?,http://arxiv.org/abs/1911.12543v2 -Black-box Prompt Learning for Pre-trained Language Models,http://arxiv.org/abs/2201.08531v3 -Personalized Prompt for Sequential Recommendation,http://arxiv.org/abs/2205.09666v2 -"FedPrompt: Communication-Efficient and Privacy Preserving Prompt Tuning - in Federated Learning",http://arxiv.org/abs/2208.12268v3 -Prompt Tuning with Soft Context Sharing for Vision-Language Models,http://arxiv.org/abs/2208.13474v1 -"Natural Language Inference Prompts for Zero-shot Emotion Classification - in Text across Corpora",http://arxiv.org/abs/2209.06701v2 -Complexity-Based Prompting for Multi-Step Reasoning,http://arxiv.org/abs/2210.00720v2 -MaPLe: Multi-modal Prompt Learning,http://arxiv.org/abs/2210.03117v3 -Texts as Images in Prompt Tuning for Multi-Label Image Recognition,http://arxiv.org/abs/2211.12739v2 -"Decorate the Newcomers: Visual Domain Prompt for Continual Test Time - Adaptation",http://arxiv.org/abs/2212.04145v2 -"Prompt Tuning of Deep Neural Networks for Speaker-adaptive Visual Speech - Recognition",http://arxiv.org/abs/2302.08102v1 -Rethinking Visual Prompt Learning as Masked Visual Token Modeling,http://arxiv.org/abs/2303.04998v1 -Iterative Prompt Learning for Unsupervised Backlit Image Enhancement,http://arxiv.org/abs/2303.17569v2 -TransHP: Image Classification 
with Hierarchical Prompting,http://arxiv.org/abs/2304.06385v4 -"PTP: Boosting Stability and Performance of Prompt Tuning with - Perturbation-Based Regularizer",http://arxiv.org/abs/2305.02423v1 -Leveraging Training Data in Few-Shot Prompting for Numerical Reasoning,http://arxiv.org/abs/2305.18170v2 -Fine-Grained Visual Prompting,http://arxiv.org/abs/2306.04356v1 -Improving Visual Prompt Tuning for Self-supervised Vision Transformers,http://arxiv.org/abs/2306.05067v1 -"Is Prompt-Based Finetuning Always Better than Vanilla Finetuning? - Insights from Cross-Lingual Language Understanding",http://arxiv.org/abs/2307.07880v1 -"Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language - Models",http://arxiv.org/abs/2307.08303v3 -"SLPT: Selective Labeling Meets Prompt Tuning on Label-Limited Lesion - Segmentation",http://arxiv.org/abs/2308.04911v1 -"Prompt and non-prompt J$/ψ$ production at midrapidity in Pb$-$Pb - collisions at $\sqrt{s_{\mathrm{NN}}}$ = 5.02 TeV",http://arxiv.org/abs/2308.16125v1 -"Content Prompting: Modeling Content Provider Dynamics to Improve User - Welfare in Recommender Ecosystems",http://arxiv.org/abs/2309.00940v1 -"Context-Aware Prompt Tuning for Vision-Language Model with - Dual-Alignment",http://arxiv.org/abs/2309.04158v1 -"PromptTTS++: Controlling Speaker Identity in Prompt-Based Text-to-Speech - Using Natural Language Descriptions",http://arxiv.org/abs/2309.08140v1 -Prompt Backdoors in Visual Prompt Learning,http://arxiv.org/abs/2310.07632v1 -"Towards Training-free Open-world Segmentation via Image Prompting - Foundation Models",http://arxiv.org/abs/2310.10912v1 -"Quantifying Language Models' Sensitivity to Spurious Features in Prompt - Design or: How I learned to start worrying about prompt formatting",http://arxiv.org/abs/2310.11324v1 -Radiation Processes in GRBs. Prompt Emission,http://dx.doi.org/10.1063/1.1810794 -Prompt Atmospheric Neutrinos: Phenomenology and Implications,http://arxiv.org/abs/hep-ph/0105271v1 -Polarization of Prompt J/psi and Upsilon(nS),http://arxiv.org/abs/hep-ph/0208238v1 -Prompt-photon production in DIS,http://arxiv.org/abs/0909.4026v1 -Prompt Scheduling for Selfish Agents,http://arxiv.org/abs/1804.03244v1 -Physics with prompt photons at SPD,http://dx.doi.org/10.1088/1742-6596/1435/1/012035 -"Low-Luminosity GRB 060218: A Collapsar Jet from a Neutron Star, Leaving - a Magnetar as a Remnant?",http://dx.doi.org/10.1086/512481 -"Rhythms of Memory and Bits on Edge: Symbol Recognition as a Physical - Phenomenon",http://arxiv.org/abs/1106.1639v1 -"Temporal Deconvolution study of Long and Short Gamma-Ray Burst Light - curves",http://dx.doi.org/10.1088/0004-637X/744/2/141 -"Physical origin of multi-wavelength emission of GRB 100418A and - implications for its progenitor",http://dx.doi.org/10.1088/1674-4527/12/4/005 -"On the Subclasses in Swift Long Gamma-Ray Bursts: A Clue to Different - Central Engines",http://dx.doi.org/10.1093/pasj/psu008 -"Black hole hyperaccretion inflow-outflow model. I. long and ultra-long - gamma-ray bursts",http://dx.doi.org/10.3847/1538-4357/aa9e4f -GRB 180620A: Evidence for late-time energy injection,http://dx.doi.org/10.3847/1538-4357/ab5859 -"The Reproducibility of Programming-Related Issues in Stack Overflow - Questions",http://arxiv.org/abs/2111.12204v2 -"Thermal and nonthermal emission from a peculiar long-duration GRB - 211211A",http://dx.doi.org/10.3847/1538-4357/aca969 -"Late-time Hubble Space Telescope Observations of AT 2018cow. I. 
Further - Constraints on the Fading Prompt Emission and Thermal Properties 50--60 Days - Post-discovery",http://dx.doi.org/10.3847/1538-4357/ace965 -"Nuances are the Key: Unlocking ChatGPT to Find Failure-Inducing Tests - with Differential Prompting",http://arxiv.org/abs/2304.11686v6 -"ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF - Synthesis",http://dx.doi.org/10.1021/jacs.3c05819 -"GPTCloneBench: A comprehensive benchmark of semantic clones and - cross-language clones using GPT-3 model and SemanticCloneBench",http://arxiv.org/abs/2308.13963v2 -CodePlan: Repository-level Coding using LLMs and Planning,http://arxiv.org/abs/2309.12499v1 -Prompt neutrino fluxes from atmospheric charm,http://dx.doi.org/10.1103/PhysRevD.78.043005 -Detecting Off-topic Responses to Visual Prompts,http://arxiv.org/abs/1707.05233v1 -"Possible non-prompt photons in $pp$ collisions and their effects in $AA$ - analyses",http://arxiv.org/abs/1812.08987v1 -GRB prompt phase spectra under backscattering dominated model,http://arxiv.org/abs/2111.14163v1 -"Zero-shot Domain Adaptation for Neural Machine Translation with - Retrieved Phrase-level Prompts",http://arxiv.org/abs/2209.11409v1 -This joke is [MASK]: Recognizing Humor and Offense with Prompting,http://arxiv.org/abs/2210.13985v1 -Large Language Models Perform Diagnostic Reasoning,http://arxiv.org/abs/2307.08922v1 -"Promptly: Using Prompt Problems to Teach Learners How to Effectively - Utilize AI Code Generators",http://arxiv.org/abs/2307.16364v1 -"Deeply virtual Compton scattering and prompt photon production at ZEUS - and H1 experiments",http://arxiv.org/abs/hep-ex/0510072v1 -The prompt photon photoproduction at THERA,http://arxiv.org/abs/hep-ph/0103344v1 -Spectral and Timing Analysis of the Prompt Emission of Gamma Ray Bursts,http://arxiv.org/abs/1409.5626v1 -"Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods - in Natural Language Processing",http://arxiv.org/abs/2107.13586v1 -"NSP-BERT: A Prompt-based Few-Shot Learner Through an Original - Pre-training Task--Next Sentence Prediction",http://arxiv.org/abs/2109.03564v2 -"A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based - Learning for Vision-Language Models",http://arxiv.org/abs/2110.08484v2 -Input-Tuning: Adapting Unfamiliar Inputs to Frozen Pretrained Models,http://arxiv.org/abs/2203.03131v1 -"GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large - Language Models",http://arxiv.org/abs/2203.07281v2 -PSP: Pre-trained Soft Prompts for Few-Shot Abstractive Summarization,http://arxiv.org/abs/2204.04413v2 -"On the polarization of the non-prompt contribution to inclusive J/$ψ$ - production in pp collisions",http://dx.doi.org/10.1007/JHEP10(2022)010 -Prompt-Matched Semantic Segmentation,http://arxiv.org/abs/2208.10159v3 -"Automatic Label Sequence Generation for Prompting Sequence-to-sequence - Models",http://arxiv.org/abs/2209.09401v1 -Federated Adaptive Prompt Tuning for Multi-domain Collaborative Learning,http://arxiv.org/abs/2211.07864v2 -BadPrompt: Backdoor Attacks on Continuous Prompts,http://arxiv.org/abs/2211.14719v1 -Guiding Large Language Models via Directional Stimulus Prompting,http://arxiv.org/abs/2302.11520v4 -R-Tuning: Regularized Prompt Tuning in Open-Set Scenarios,http://arxiv.org/abs/2303.05122v1 -"Learning Combinatorial Prompts for Universal Controllable Image - Captioning",http://arxiv.org/abs/2303.06338v3 -"Self-supervised Meta-Prompt Learning with Meta-Gradient Regularization - for Few-shot 
Generalization",http://arxiv.org/abs/2303.12314v4 -Task-Oriented Multi-Modal Mutual Leaning for Vision-Language Models,http://arxiv.org/abs/2303.17169v1 -"EPVT: Environment-aware Prompt Vision Transformer for Domain - Generalization in Skin Lesion Recognition",http://arxiv.org/abs/2304.01508v3 -Vita-CLIP: Video and text adaptive CLIP via Multimodal Prompting,http://arxiv.org/abs/2304.03307v1 -RPLKG: Robust Prompt Learning with Knowledge Graph,http://arxiv.org/abs/2304.10805v1 -"TEPrompt: Task Enlightenment Prompt Learning for Implicit Discourse - Relation Recognition",http://arxiv.org/abs/2305.10866v1 -Prompting with Pseudo-Code Instructions,http://arxiv.org/abs/2305.11790v3 -Reward Collapse in Aligning Large Language Models,http://arxiv.org/abs/2305.17608v1 -Universality and Limitations of Prompt Tuning,http://arxiv.org/abs/2305.18787v1 -Learning Domain-Aware Detection Head with Prompt Tuning,http://arxiv.org/abs/2306.05718v3 -"Prompting classes: Exploring the Power of Prompt Class Learning in - Weakly Supervised Semantic Segmentation",http://arxiv.org/abs/2307.00097v2 -"Skills-in-Context Prompting: Unlocking Compositionality in Large - Language Models",http://arxiv.org/abs/2308.00304v2 -Prompt2Gaussia: Uncertain Prompt-learning for Script Event Prediction,http://arxiv.org/abs/2308.02103v1 -"Improving Generalization of Image Captioning with Unsupervised Prompt - Learning",http://arxiv.org/abs/2308.02862v1 -"FedLogic: Interpretable Federated Multi-Domain Chain-of-Thought Prompt - Selection for Large Language Models",http://arxiv.org/abs/2308.15324v1 -"Efficient Model Personalization in Federated Learning via - Client-Specific Prompt Generation",http://arxiv.org/abs/2308.15367v1 -Introducing Language Guidance in Prompt-based Continual Learning,http://arxiv.org/abs/2308.15827v1 -"Prompting4Debugging: Red-Teaming Text-to-Image Diffusion Models by - Finding Problematic Prompts",http://arxiv.org/abs/2309.06135v1 -Search for the Prompt Atmospheric Neutrino Flux in IceCube,http://arxiv.org/abs/2309.07560v1 -DCPT: Darkness Clue-Prompted Tracking in Nighttime UAVs,http://arxiv.org/abs/2309.10491v3 -(Dynamic) Prompting might be all you need to repair Compressed LLMs,http://arxiv.org/abs/2310.00867v2 -"Prompting and Adapter Tuning for Self-supervised Encoder-Decoder Speech - Model",http://arxiv.org/abs/2310.02971v2 -Sentence-level Prompts Benefit Composed Image Retrieval,http://arxiv.org/abs/2310.05473v1 -"Survival of the Most Influential Prompts: Efficient Black-Box Prompt - Search via Clustering and Pruning",http://arxiv.org/abs/2310.12774v1 -"HetGPT: Harnessing the Power of Prompt Tuning in Pre-Trained - Heterogeneous Graph Neural Networks",http://arxiv.org/abs/2310.15318v1 -"A Comptonization Model for the Prompt Optical and Infrared Emission of - GRB 041219A",http://dx.doi.org/10.1086/506608 -The Dark Side of ROTSE-III Prompt GRB Observations,http://dx.doi.org/10.1086/521668 -"Suppression of non-prompt J/psi, prompt J/psi, and Y(1S) in PbPb - collisions at sqrt(sNN) = 2.76 TeV",http://dx.doi.org/10.1007/JHEP05(2012)063 -Application of Jitter Radiation: Gamma-ray Burst Prompt Polarization,http://dx.doi.org/10.1088/0004-637X/776/1/17 -"Yields and polarizations of prompt $\jpsi$ and $\psits$ production in - hadronic collisions",http://arxiv.org/abs/1411.3300v2 -"Answer-Type Modification without Tears: Prompt-Passing Style Translation - for Typed Delimited-Control Operators",http://dx.doi.org/10.4204/EPTCS.212.3 -"Detectability of Electromagnetic counterparts from Neutron Star mergers: - prompt 
emission vs afterglow",http://dx.doi.org/10.1093/mnras/stab3120 -Noisy Channel Language Model Prompting for Few-Shot Text Classification,http://arxiv.org/abs/2108.04106v3 -Response Generation with Context-Aware Prompt Learning,http://arxiv.org/abs/2111.02643v5 -Black-Box Tuning for Language-Model-as-a-Service,http://arxiv.org/abs/2201.03514v4 -"EPPAC: Entity Pre-typing Relation Classification with Prompt - AnswerCentralizing",http://arxiv.org/abs/2203.00193v2 -"ProQA: Structural Prompt-based Pre-training for Unified Question - Answering",http://arxiv.org/abs/2205.04040v2 -Are Prompt-based Models Clueless?,http://arxiv.org/abs/2205.09295v2 -Neural Prompt Search,http://arxiv.org/abs/2206.04673v2 -"Low Resource Pipeline for Spoken Language Understanding via Weak - Supervision",http://arxiv.org/abs/2206.10559v1 -Beauty production in heavy-ion collisions with ALICE at the LHC,http://arxiv.org/abs/2207.10259v2 -"PromptFL: Let Federated Participants Cooperatively Learn Prompts Instead - of Models -- Federated Learning in Age of Foundation Model",http://arxiv.org/abs/2208.11625v1 -A Few-shot Approach to Resume Information Extraction via Prompts,http://dx.doi.org/10.1007/978-3-031-35320-8_32 -Prompt-guided Scene Generation for 3D Zero-Shot Learning,http://arxiv.org/abs/2209.14690v1 -Multi-Prompt Alignment for Multi-Source Unsupervised Domain Adaptation,http://arxiv.org/abs/2209.15210v3 -"Medical Image Understanding with Pretrained Vision Language Models: A - Comprehensive Study",http://arxiv.org/abs/2209.15517v2 -"Analogy Generation by Prompting Large Language Models: A Case Study of - InstructGPT",http://arxiv.org/abs/2210.04186v2 -"Knowledge Prompts: Injecting World Knowledge into Language Models - through Soft Prompts",http://arxiv.org/abs/2210.04726v1 -Prompting GPT-3 To Be Reliable,http://arxiv.org/abs/2210.09150v2 -"Discriminative Language Model as Semantic Consistency Scorer for - Prompt-based Few-Shot Text Classification",http://arxiv.org/abs/2210.12763v1 -"Don't Prompt, Search! 
Mining-based Zero-Shot Learning with Language - Models",http://arxiv.org/abs/2210.14803v1 -Bi-Directional Iterative Prompt-Tuning for Event Argument Extraction,http://arxiv.org/abs/2210.15843v1 -Latent Prompt Tuning for Text Summarization,http://arxiv.org/abs/2211.01837v2 -"Prompting Large Pre-trained Vision-Language Models For Compositional - Concept Learning",http://arxiv.org/abs/2211.05077v1 -"Empowering Sentence Encoders with Prompting and Label Retrieval for - Zero-shot Text Classification",http://arxiv.org/abs/2212.10391v2 -"The Exploration of Knowledge-Preserving Prompts for Document - Summarisation",http://arxiv.org/abs/2301.11719v4 -Prompt-Based Editing for Text Style Transfer,http://arxiv.org/abs/2301.11997v1 -Progressive Prompts: Continual Learning for Language Models,http://arxiv.org/abs/2301.12314v1 -"Synthetic Prompting: Generating Chain-of-Thought Demonstrations for - Large Language Models",http://arxiv.org/abs/2302.00618v1 -"Conversation Regression Testing: A Design Technique for Prototyping - Generalizable Prompt Strategies for Pre-trained Language Models",http://arxiv.org/abs/2302.03154v1 -Prompting for Multimodal Hateful Meme Classification,http://arxiv.org/abs/2302.04156v1 -"SwitchPrompt: Learning Domain-Specific Gated Soft Prompts for - Classification in Low-Resource Domains",http://arxiv.org/abs/2302.06868v1 -"Bounding the Capabilities of Large Language Models in Open Text - Generation with Prompt Constraints",http://arxiv.org/abs/2302.09185v1 -Explicit Visual Prompting for Low-Level Structure Segmentations,http://arxiv.org/abs/2303.10883v2 -The Prompt Artists,http://arxiv.org/abs/2303.12253v1 -"$P^{3}O$: Transferring Visual Representations for Reinforcement Learning - via Prompting",http://arxiv.org/abs/2303.12371v2 -"Error Analysis Prompting Enables Human-Like Translation Evaluation in - Large Language Models: A Case Study on ChatGPT",http://arxiv.org/abs/2303.13809v2 -Prompt Tuning based Adapter for Vision-Language Model Adaption,http://arxiv.org/abs/2303.15234v1 -"Prompt Pre-Training with Twenty-Thousand Classes for Open-Vocabulary - Visual Recognition",http://arxiv.org/abs/2304.04704v2 -"D2CSE: Difference-aware Deep continuous prompts for Contrastive Sentence - Embeddings",http://arxiv.org/abs/2304.08991v1 -Reliable Gradient-free and Likelihood-free Prompt Tuning,http://arxiv.org/abs/2305.00593v1 -In-Context Learning Unlocked for Diffusion Models,http://arxiv.org/abs/2305.01115v2 -"How to Prompt LLMs for Text-to-SQL: A Study in Zero-shot, Single-domain, - and Cross-domain Settings",http://arxiv.org/abs/2305.11853v2 -"Automated Few-shot Classification with Instruction-Finetuned Language - Models",http://arxiv.org/abs/2305.12576v2 -"Enhancing Cross-lingual Natural Language Inference by Soft Prompting - with Multilingual Verbalizer",http://arxiv.org/abs/2305.12761v1 -"Prompting is not a substitute for probability measurements in large - language models",http://arxiv.org/abs/2305.13264v2 -Evaluating Factual Consistency of Summaries with Large Language Models,http://arxiv.org/abs/2305.14069v2 -"Navigating Prompt Complexity for Zero-Shot Classification: A Study of - Large Language Models in Computational Social Science",http://arxiv.org/abs/2305.14310v2 -Few-shot Unified Question Answering: Tuning Models or Prompts?,http://arxiv.org/abs/2305.14569v1 -"Deliberate then Generate: Enhanced Prompting Framework for Text - Generation",http://arxiv.org/abs/2305.19835v1 -ProTeCt: Prompt Tuning for Hierarchical Consistency,http://arxiv.org/abs/2306.02240v1 -"TKDP: Threefold 
Knowledge-enriched Deep Prompt Tuning for Few-shot Named - Entity Recognition",http://arxiv.org/abs/2306.03974v2 -ATT3D: Amortized Text-to-3D Object Synthesis,http://arxiv.org/abs/2306.07349v1 -"Generative Zero-Shot Prompt Learning for Cross-Domain Slot Filling with - Inverse Prompting",http://arxiv.org/abs/2307.02830v1 -E^2VPT: An Effective and Efficient Approach for Visual Prompt Tuning,http://arxiv.org/abs/2307.13770v1 -"Towards Personalized Prompt-Model Retrieval for Generative - Recommendation",http://arxiv.org/abs/2308.02205v1 -"Large Language Model Prompt Chaining for Long Legal Document - Classification",http://arxiv.org/abs/2308.04138v1 -Metacognitive Prompting Improves Understanding in Large Language Models,http://arxiv.org/abs/2308.05342v3 -"Self-Prompting Large Vision Models for Few-Shot Medical Image - Segmentation",http://arxiv.org/abs/2308.07624v1 -Read-only Prompt Optimization for Vision-Language Few-shot Learning,http://arxiv.org/abs/2308.14960v1 -Distribution-Aware Prompt Tuning for Vision-Language Models,http://arxiv.org/abs/2309.03406v1 -Large Language Models as Optimizers,http://arxiv.org/abs/2309.03409v1 -"Prompt-based Context- and Domain-aware Pretraining for Vision and - Language Navigation",http://arxiv.org/abs/2309.03661v1 -PromptASR for contextualized ASR with controllable style,http://arxiv.org/abs/2309.07414v2 -"PerPLM: Personalized Fine-tuning of Pretrained Language Models via - Writer-specific Intermediate Learning and Prompts",http://arxiv.org/abs/2309.07727v1 -Topic-DPR: Topic-based Prompts for Dense Passage Retrieval,http://arxiv.org/abs/2310.06626v1 -Prompt Tuning for Multi-View Graph Contrastive Learning,http://arxiv.org/abs/2310.10362v1 -PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models,http://arxiv.org/abs/2310.12439v1 -"Can prompt cusps of WIMP dark matter be detected as individual gamma-ray - sources?",http://arxiv.org/abs/2310.15214v1 -"Towards a Better Understanding of the GRB Phenomenon: a New Model for - GRB Prompt Emission and its effects on the New Non-Thermal - L$_\mathrm{i}^\mathrm{NT}$-E$_\mathrm{peak,i}^\mathrm{rest,NT}$ relation",http://dx.doi.org/10.1088/0004-637X/807/2/148 -"Evidence of mini-jet emission in a large emission zone from a - magnetically-dominated gamma-ray burst jet",http://arxiv.org/abs/2310.07205v1 -All in One: Multi-task Prompting for Graph Neural Networks,http://dx.doi.org/10.1145/3580305.3599256 -"The Broad-lined Ic Supernova ZTF18aaqjovh (SN 2018bvw): An - Optically-discovered Engine-driven Supernova Candidate with Luminous Radio - Emission",http://dx.doi.org/10.3847/1538-4357/ab7f3b -"THESEUS and Gamma-Ray Bursts: a valuable contribution to the - understanding of prompt emission",http://arxiv.org/abs/1802.01683v1 -"Gated Convolutional Bidirectional Attention-based Model for Off-topic - Spoken Response Detection",http://dx.doi.org/10.18653/v1/2020.acl-main.56 -"Measurements of $ψ(2S)$ and $X(3872) \to J/ψπ^+π^-$ production - in $pp$ collisions at $\sqrt{s} = 8$ TeV with the ATLAS detector",http://dx.doi.org/10.1007/JHEP01(2017)117 -"Hauser-Feshbach fission fragment de-excitation with calculated - macroscopic-microscopic mass yields",http://dx.doi.org/10.1103/PhysRevC.97.034608 -Latest D0 results on exotic hadrons produced in $p\bar{p}$ collisions,http://arxiv.org/abs/2012.02072v1 -"KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization - for Relation Extraction",http://dx.doi.org/10.1145/3485447.3511998 -"Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt - Verbalizer 
for Text Classification",http://arxiv.org/abs/2108.02035v2 -"Eliciting Knowledge from Pretrained Language Models for Prototypical - Prompt Verbalizer",http://arxiv.org/abs/2201.05411v1 -Prompt-Learning for Short Text Classification,http://arxiv.org/abs/2202.11345v2 -"Prompt for Extraction? PAIE: Prompting Argument Interaction for Event - Argument Extraction",http://arxiv.org/abs/2202.12109v2 -"Bridge-Prompt: Towards Ordinal Action Understanding in Instructional - Videos",http://arxiv.org/abs/2203.14104v1 -"SpeechPrompt: An Exploration of Prompt Tuning on Generative Spoken - Language Model for Speech Processing Tasks",http://arxiv.org/abs/2203.16773v3 -Clinical Prompt Learning with Frozen Language Models,http://arxiv.org/abs/2205.05535v1 -"PromptDA: Label-guided Data Augmentation for Prompt-based Few-shot - Learners",http://arxiv.org/abs/2205.09229v3 -DynaMaR: Dynamic Prompt with Mask Token Representation,http://arxiv.org/abs/2206.02982v1 -Pro-tuning: Unified Prompt Tuning for Vision Tasks,http://arxiv.org/abs/2207.14381v3 -Fine-grained Retrieval Prompt Tuning,http://arxiv.org/abs/2207.14465v3 -"Small Character Models Match Large Word Models for Autocomplete Under - Memory Constraints",http://arxiv.org/abs/2210.03251v2 -Automatic Chain of Thought Prompting in Large Language Models,http://arxiv.org/abs/2210.03493v1 -Visual Prompting for Adversarial Robustness,http://arxiv.org/abs/2210.06284v4 -"One Model to Edit Them All: Free-Form Text-Driven Image Manipulation - with Semantic Modulations",http://arxiv.org/abs/2210.07883v2 -CPL: Counterfactual Prompt Learning for Vision and Language Models,http://arxiv.org/abs/2210.10362v3 -"Contextual Dynamic Prompting for Response Generation in Task-oriented - Dialog Systems",http://arxiv.org/abs/2301.13268v2 -Visual Prompt Based Personalized Federated Learning,http://arxiv.org/abs/2303.08678v1 -Fairness-guided Few-shot Prompting for Large Language Models,http://arxiv.org/abs/2303.13217v3 -ReVersion: Diffusion-Based Relation Inversion from Images,http://arxiv.org/abs/2303.13495v1 -Learning Federated Visual Prompt in Null Space for MRI Reconstruction,http://arxiv.org/abs/2303.16181v2 -Instance-aware Dynamic Prompt Tuning for Pre-trained Point Cloud Models,http://arxiv.org/abs/2304.07221v2 -"FVP: Fourier Visual Prompting for Source-Free Unsupervised Domain - Adaptation of Medical Image Segmentation",http://arxiv.org/abs/2304.13672v1 -"DRPT: Disentangled and Recurrent Prompt Tuning for Compositional - Zero-Shot Learning",http://arxiv.org/abs/2305.01239v1 -Don't Stop Pretraining? 
Make Prompt-based Fine-tuning Powerful Learner,http://arxiv.org/abs/2305.01711v4 -Controllable Speaking Styles Using a Large Language Model,http://arxiv.org/abs/2305.10321v2 -Few-Shot Dialogue Summarization via Skeleton-Assisted Prompt Transfer,http://arxiv.org/abs/2305.12077v1 -"A Benchmark on Extremely Weakly Supervised Text Classification: - Reconcile Seed Matching and Prompting Approaches",http://arxiv.org/abs/2305.12749v1 -"Measurements of the azimuthal anisotropy of prompt and nonprompt - charmonia in PbPb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV",http://dx.doi.org/10.1007/JHEP10(2023)115 -Continual Task Allocation in Meta-Policy Network via Sparse Prompting,http://arxiv.org/abs/2305.18444v2 -Soft-prompt Tuning for Large Language Models to Evaluate Bias,http://arxiv.org/abs/2306.04735v1 -"Adversarial Robustness of Prompt-based Few-Shot Learning for Natural - Language Understanding",http://arxiv.org/abs/2306.11066v2 -Approximated Prompt Tuning for Vision-Language Pre-trained Models,http://arxiv.org/abs/2306.15706v2 -"Large Language Model as Attributed Training Data Generator: A Tale of - Diversity and Bias",http://arxiv.org/abs/2306.15895v2 -"Understanding Prompt Tuning for V-L Models Through the Lens of Neural - Collapse",http://arxiv.org/abs/2306.15955v3 -"Retrieval-augmented GPT-3.5-based Text-to-SQL Framework with - Sample-aware Prompting and Dynamic Revision Chain",http://arxiv.org/abs/2307.05074v2 -DPL: Decoupled Prompt Learning for Vision-Language Models,http://arxiv.org/abs/2308.10061v1 -Knowledge-Aware Prompt Tuning for Generalizable Vision-Language Models,http://arxiv.org/abs/2308.11186v1 -PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine,http://arxiv.org/abs/2308.12033v1 -DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning,http://arxiv.org/abs/2309.05173v2 -How (Not) to Use Sociodemographic Information for Subjective NLP Tasks,http://arxiv.org/abs/2309.07034v1 -"Improving Language Model-Based Zero-Shot Text-to-Speech Synthesis with - Multi-Scale Acoustic Prompts",http://arxiv.org/abs/2309.11977v2 -"Dynamic Prompt Learning: Addressing Cross-Attention Leakage for - Text-Based Image Editing",http://arxiv.org/abs/2309.15664v1 -"AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language - Models",http://arxiv.org/abs/2310.04451v1 -"Prompting Large Language Models with Chain-of-Thought for Few-Shot - Knowledge Base Question Generation",http://arxiv.org/abs/2310.08395v3 -Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models,http://arxiv.org/abs/2310.13828v1 -Multiwavelength study of the very long GRB 020410,http://dx.doi.org/10.1051/0004-6361:20040516 -"The Afterglow, Energetics and Host Galaxy of the Short-Hard Gamma-Ray - Burst 051221a",http://dx.doi.org/10.1086/506429 -"Merger of binary neutron stars to a black hole: Disk mass, short - gamma-ray bursts, and quasinormal mode ringing",http://dx.doi.org/10.1103/PhysRevD.73.064027 -The prompt to late-time multiwavelength analysis of GRB 060210,http://dx.doi.org/10.1051/0004-6361:20077055 -X-ray Hardness Variations as an Internal/External Shock Diagnostic,http://dx.doi.org/10.1086/521072 -HETE-2 Observations of the X-Ray Flash XRF 040916,http://dx.doi.org/10.1093/pasj/59.3.695 -"The exceptionally extended flaring activity in the X-ray afterglow of - GRB 050730 observed with Swift and XMM-Newton",http://dx.doi.org/10.1051/0004-6361:20066227 -"A Comprehensive Analysis of the Swift/XRT Data: II. 
Diverse Physical - Origins of the Shallow Decay Segment",http://dx.doi.org/10.1086/521870 -"Observational Signatures of High-Energy Emission during the Shallow - Decay Phase of GRB X-Ray Afterglows",http://dx.doi.org/10.1086/522829 -Magnetar-energized supernova explosions and GRB-jets,http://dx.doi.org/10.1111/j.1365-2966.2007.12485.x -"Accurate evolutions of unequal-mass neutron-star binaries: properties of - the torus and short GRB engines",http://dx.doi.org/10.1088/0264-9381/27/11/114105 -"The Internal-Collision-Induced Magnetic Reconnection and Turbulence - (ICMART) Model of Gamma-Ray Bursts",http://dx.doi.org/10.1088/0004-637X/726/2/90 -Electromagnetic power of merging and collapsing compact objects,http://dx.doi.org/10.1103/PhysRevD.83.124035 -"The Spectroscopic Classification and Explosion Properties of SN2009nz - Associated with GRB091127 at z=0.490",http://dx.doi.org/10.1088/0004-637X/743/2/204 -Prompt thermal emission in gamma-ray bursts,http://dx.doi.org/10.1051/0004-6361/201220023 -GRB 091024A and the nature of ultra-long gamma-ray bursts,http://dx.doi.org/10.1088/0004-637X/778/1/54 -GRB 130427A: a Nearby Ordinary Monster,http://dx.doi.org/10.1126/science.1242279 -Super-solar metallicity at the position of the ultra-long GRB130925A,http://dx.doi.org/10.1051/0004-6361/201526060 -"Electromagnetic emission from long-lived binary neutron star merger - remnants I: formulation of the problem",http://dx.doi.org/10.3847/0004-637X/819/1/14 -"GRB/GW association: Long-short GRB candidates, time-lag, measuring - gravitational wave velocity and testing Einstein's equivalence principle",http://dx.doi.org/10.3847/0004-637X/827/1/75 -"Measurement of the fast neutron background at the China Jinping - Underground Laboratory",http://dx.doi.org/10.1016/j.nima.2018.01.098 -"Calibrating chemical multisensory devices for real world applications: - An in-depth comparison of quantitative Machine Learning approaches",http://dx.doi.org/10.1016/j.snb.2017.07.155 -"First Electromagnetic Pulse Associated with a Gravitational-Wave Event: - Profile, Duration, and Delay",http://dx.doi.org/10.3847/1538-4357/aab3d7 -"Search for a Dark Photon in Electro-Produced $e^{+}e^{-}$ Pairs with the - Heavy Photon Search Experiment at JLab",http://dx.doi.org/10.1103/PhysRevD.98.091101 -"Early Optical Observations of GRB 150910A: Bright Jet Optical Afterglow - and X-ray Dipole Radiation from a Magnetar Central Engine",http://dx.doi.org/10.3847/1538-4357/ab8d2a -"Consistency with synchrotron emission in the bright GRB 160625B observed - by Fermi",http://dx.doi.org/10.1051/0004-6361/201732245 -"A Quark-Nova in the wake of a core-collapse Supernova: a unifying model - for long duration Gamma-Ray Bursts and Fast Radio Bursts",http://dx.doi.org/10.1088/1674-4527/20/2/27 -"Detection of GRB 090618 with RT-2 Experiment Onboard the Coronas-Photon - Satellite",http://dx.doi.org/10.1088/0004-637X/728/1/42 -"Do Fermi-LAT observations imply very large Lorentz factors in GRB - outflows ?",http://dx.doi.org/10.1111/j.1365-2966.2011.20332.x -"External Shock in a Multi-Bursting Gamma-ray Burst: Energy Injection - Phase induced by the Later Launched Ejecta",http://dx.doi.org/10.3847/1538-4357/aa9f15 -"Multiple Components in the Broadband $γ$-ray Emission of the Short - GRB 160709A",http://dx.doi.org/10.3847/1538-4357/ab0e72 -"Magnetar as Central Engine of Gamma-Ray Bursts: Quasi-Universal Jet, - Event Rate and X-ray Luminosity Function of Dipole Radiations",http://dx.doi.org/10.3847/1538-4357/ab8302 -"Multi-wavelength radiation models for 
low-luminosity GRBs, and the - implications for UHECRs",http://dx.doi.org/10.1093/mnras/stac433 -"""Super-Kilonovae"" from Massive Collapsars as Signatures of Black-Hole - Birth in the Pair-instability Mass Gap",http://dx.doi.org/10.3847/1538-4357/ac8d04 -"GRB 191016A: The onset of the forward shock and evidence of late energy - injection",http://dx.doi.org/10.1093/mnras/stac389 -"United Monoids: Finding Simplicial Sets and Labelled Algebraic Graphs in - Trees",http://dx.doi.org/10.22152/programming-journal.org/2022/6/12 -"Pre-training of Equivariant Graph Matching Networks with Conformation - Flexibility for Drug Binding",http://dx.doi.org/10.1002/advs.202203796 -"Black hole to photosphere: 3D GRMHD simulations of collapsars reveal - wobbling and hybrid composition jets",http://dx.doi.org/10.3847/2041-8213/ac7530 -CIRCLE: Continual Repair across Programming Languages,http://arxiv.org/abs/2205.10956v4 -"Flow network controlled shape transformation of a thin membrane through - differential fluid storage and surface expansion",http://dx.doi.org/10.1103/PhysRevE.107.024419 -ChatGPT and Software Testing Education: Promises & Perils,http://dx.doi.org/10.1109/ICSTW58534.2023.00078 -"An Empirical Evaluation of Using Large Language Models for Automated - Unit Test Generation",http://arxiv.org/abs/2302.06527v3 -Observation of Josephson Harmonics in Tunnel Junctions,http://arxiv.org/abs/2302.09192v2 -"A variational quantum algorithm-based numerical method for solving - potential and Stokes flows",http://arxiv.org/abs/2303.01805v1 -"Greener yet Powerful: Taming Large Code Generation Models with - Quantization",http://arxiv.org/abs/2303.05378v1 -Susceptibility to Influence of Large Language Models,http://arxiv.org/abs/2303.06074v1 -"Chatting with GPT-3 for Zero-Shot Human-Like Mobile Automated GUI - Testing",http://arxiv.org/abs/2305.09434v1 -Guiding Language Models of Code with Global Context using Monitors,http://arxiv.org/abs/2306.10763v1 -Demonstrations of the Potential of AI-based Political Issue Polling,http://arxiv.org/abs/2307.04781v2 -"Understanding the Effectiveness of Large Language Models in Code - Translation",http://arxiv.org/abs/2308.03109v1 -"Effective Test Generation Using Pre-trained Large Language Models and - Mutation Testing",http://arxiv.org/abs/2308.16557v1 -"AI Foundation Models for Weather and Climate: Applications, Design, and - Implementation",http://arxiv.org/abs/2309.10808v2 -"MetaTool Benchmark for Large Language Models: Deciding Whether to Use - Tools and Which to Use",http://arxiv.org/abs/2310.03128v3 -"CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code - Completion",http://arxiv.org/abs/2310.11248v1 -"Less is More? 
An Empirical Study on Configuration Issues in Python PyPI - Ecosystem",http://arxiv.org/abs/2310.12598v1 -White-box Compiler Fuzzing Empowered by Large Language Models,http://arxiv.org/abs/2310.15991v1 -"Characterization of Gamma-Ray Burst prompt emission spectra down to soft - X-rays",http://dx.doi.org/10.1051/0004-6361/201732172 -"Measurement of beauty-strange meson production in Pb$-$Pb collisions at - $\sqrt{s_{\rm NN}} = 5.02$ TeV via non-prompt $\mathrm{D_s}^{+}$ mesons",http://dx.doi.org/10.1016/j.physletb.2022.137561 -"Text and Patterns: For Effective Chain of Thought, It Takes Two to Tango",http://arxiv.org/abs/2209.07686v2 -"SUR-adapter: Enhancing Text-to-Image Pre-trained Diffusion Models with - Large Language Models",http://arxiv.org/abs/2305.05189v3 -"Fermi-GBM Discovery of GRB 221009A: An Extraordinarily Bright GRB from - Onset to Afterglow",http://dx.doi.org/10.3847/2041-8213/ace5b4 -Did Swift measure GRB prompt emission radii?,http://dx.doi.org/10.1111/j.1745-3933.2006.00161.x -Measurement of Prompt Photon Cross Sections in Photoproduction at HERA,http://dx.doi.org/10.1140/epjc/s2004-02085-x -"Prompt photon hadroproduction at high energies in the k_T-factorization - approach",http://dx.doi.org/10.1088/0954-3899/34/2/005 -"Modeling Spectral Variability of Prompt GRB Emission within the Jitter - Radiation Paradigm",http://dx.doi.org/10.1088/0004-637X/702/1/L91 -"Prompt and non-prompt J/psi production in pp collisions at sqrt(s) = 7 - TeV",http://dx.doi.org/10.1140/epjc/s10052-011-1575-8 -Quarkonia Measurements by the CMS Experiment in pp and PbPb Collisions,http://dx.doi.org/10.1088/0954-3899/38/12/124033 -"Prompt photon production and photon-hadron correlations at RHIC and the - LHC from the Color Glass Condensate",http://dx.doi.org/10.1103/PhysRevD.86.034016 -"The calculation of prompt fission neutron spectrum for 233U(n,f) - reaction by the semi-empirical method",http://dx.doi.org/10.1088/1674-1137/38/5/054001 -"A hint to the origin of the extended emission in LAT GRBs: the relation - between LAT luminosity and prompt energetics",http://arxiv.org/abs/1308.5442v1 -"Polarization of prompt $J/ψ$ and $Υ$(1S) production in the - color evaporation model",http://dx.doi.org/10.1103/PhysRevD.96.054014 -Intrinsic charm contribution to the prompt atmospheric neutrino flux,http://dx.doi.org/10.1103/PhysRevD.98.014012 -"A simulation study to distinguish prompt photon from $π^0$ and beam - halo in a granular calorimeter using deep networks",http://dx.doi.org/10.1088/1748-0221/14/01/P01011 -"Prompt GeV emission in the synchrotron self-Compton model for Gamma-Ray - Bursts",http://arxiv.org/abs/0811.1235v1 -Distributed PROMPT-LTL Synthesis,http://dx.doi.org/10.4204/EPTCS.226.16 -"Efficiency of prompt quarantine measures on a - susceptible-infected-removed model in networks",http://dx.doi.org/10.1103/PhysRevE.96.022311 -Calibrate Before Use: Improving Few-Shot Performance of Language Models,http://arxiv.org/abs/2102.09690v2 -Ab Initio Particle-based Object Manipulation,http://dx.doi.org/10.15607/RSS.2021.XVII.071 -The Power of Prompt Tuning for Low-Resource Semantic Parsing,http://arxiv.org/abs/2110.08525v2 -True Few-Shot Learning with Prompts -- A Real-World Perspective,http://arxiv.org/abs/2111.13440v1 -Learning To Retrieve Prompts for In-Context Learning,http://arxiv.org/abs/2112.08633v2 -Chain-of-Thought Prompting Elicits Reasoning in Large Language Models,http://arxiv.org/abs/2201.11903v6 -"HealthPrompt: A Zero-shot Learning Paradigm for Clinical Natural - Language 
Processing",http://arxiv.org/abs/2203.05061v1 -Iteratively Prompt Pre-trained Language Models for Chain of Thought,http://arxiv.org/abs/2203.08383v3 -"RelationPrompt: Leveraging Prompts to Generate Synthetic Data for - Zero-Shot Relation Triplet Extraction",http://arxiv.org/abs/2203.09101v1 -"A Study on Prompt-based Few-Shot Learning Methods for Belief State - Tracking in Task-oriented Dialog Systems",http://arxiv.org/abs/2204.08167v1 -GRB Prompt Emission: Observed Correlations and Their Interpretations,http://arxiv.org/abs/2204.09729v2 -Making Pretrained Language Models Good Long-tailed Learners,http://arxiv.org/abs/2205.05461v2 -Prompt Tuning for Discriminative Pre-trained Language Models,http://arxiv.org/abs/2205.11166v1 -"Learning to Generate Prompts for Dialogue Generation through - Reinforcement Learning",http://arxiv.org/abs/2206.03931v3 -CP3: Unifying Point Cloud Completion by Pretrain-Prompt-Predict Paradigm,http://arxiv.org/abs/2207.05359v2 -"Parameter-Efficient Prompt Tuning Makes Generalized and Calibrated - Neural Text Retrievers",http://arxiv.org/abs/2207.07087v1 -"ELECTRA is a Zero-Shot Learner, Too",http://arxiv.org/abs/2207.08141v2 -Visual Prompting via Image Inpainting,http://arxiv.org/abs/2209.00647v1 -"How to Prompt? Opportunities and Challenges of Zero- and Few-Shot - Learning for Human-AI Interaction in Creative Applications of Generative - Models",http://arxiv.org/abs/2209.01390v1 -"PromptAttack: Prompt-based Attack for Language Models via Gradient - Search",http://arxiv.org/abs/2209.01882v1 -"Prompt Combines Paraphrase: Teaching Pre-trained Models to Understand - Rare Biomedical Words",http://arxiv.org/abs/2209.06453v1 -Prompting for a conversation: How to control a dialog model?,http://arxiv.org/abs/2209.11068v1 -"Exploring The Design of Prompts For Applying GPT-3 based Chatbots: A - Mental Wellbeing Case Study on Mechanical Turk",http://arxiv.org/abs/2209.11344v1 -What Makes Pre-trained Language Models Better Zero-shot Learners?,http://arxiv.org/abs/2209.15206v3 -"DisCup: Discriminator Cooperative Unlikelihood Prompt-tuning for - Controllable Text Generation",http://arxiv.org/abs/2210.09551v1 -Continued Pretraining for Better Zero- and Few-Shot Promptability,http://arxiv.org/abs/2210.10258v2 -SPE: Symmetrical Prompt Enhancement for Fact Probing,http://arxiv.org/abs/2211.07078v1 -Learning Label Modular Prompts for Text Classification in the Wild,http://arxiv.org/abs/2211.17142v2 -"Towards Understanding Chain-of-Thought Prompting: An Empirical Study of - What Matters",http://arxiv.org/abs/2212.10001v2 -ZegOT: Zero-shot Segmentation Through Optimal Transport of Text Prompts,http://arxiv.org/abs/2301.12171v2 -"Explanation Selection Using Unlabeled Data for Chain-of-Thought - Prompting",http://arxiv.org/abs/2302.04813v3 -"Dictionary-based Phrase-level Prompting of Large Language Models for - Machine Translation",http://arxiv.org/abs/2302.07856v1 -"Learning to Initialize: Can Meta Learning Improve Cross-task - Generalization in Prompt Tuning?",http://arxiv.org/abs/2302.08143v2 -SpeechPrompt v2: Prompt Tuning for Speech Classification Tasks,http://arxiv.org/abs/2303.00733v1 -Multimodal Prompting with Missing Modalities for Visual Recognition,http://arxiv.org/abs/2303.03369v2 -LION: Implicit Vision Prompt Tuning,http://arxiv.org/abs/2303.09992v1 -Visual Prompt Multi-Modal Tracking,http://arxiv.org/abs/2303.10826v2 -GPT is becoming a Turing machine: Here are some ways to program it,http://arxiv.org/abs/2303.14310v1 -How to Design Translation Prompts for ChatGPT: An 
Empirical Study,http://arxiv.org/abs/2304.02182v2 -"ChatGPT Empowered Long-Step Robot Control in Various Environments: A - Case Application",http://dx.doi.org/10.1109/ACCESS.2023.3310935 -Prompt Learning for News Recommendation,http://dx.doi.org/10.1145/3539618.3591752 -"In-context Learning as Maintaining Coherency: A Study of On-the-fly - Machine Translation Using Large Language Models",http://arxiv.org/abs/2305.03573v1 -Privacy-Preserving Prompt Tuning for Large Language Model Services,http://arxiv.org/abs/2305.06212v1 -"CPL-NoViD: Context-Aware Prompt-based Learning for Norm Violation - Detection in Online Communities",http://arxiv.org/abs/2305.09846v2 -TELeR: A General Taxonomy of LLM Prompts for Benchmarking Complex Tasks,http://arxiv.org/abs/2305.11430v1 -"Decomposed Prompting for Machine Translation Between Related Languages - using Large Language Models",http://arxiv.org/abs/2305.13085v2 -"PEARL: Prompting Large Language Models to Plan and Execute Actions Over - Long Documents",http://arxiv.org/abs/2305.14564v1 -Prompt Evolution for Generative AI: A Classifier-Guided Approach,http://dx.doi.org/10.1109/CAI54212.2023.00105 -"Large Language Models Can be Lazy Learners: Analyze Shortcuts in - In-Context Learning",http://dx.doi.org/10.18653/v1/2023.findings-acl.284 -Deeply Coupled Cross-Modal Prompt Learning,http://arxiv.org/abs/2305.17903v2 -Alfred: A System for Prompted Weak Supervision,http://arxiv.org/abs/2305.18623v1 -Learning Disentangled Prompts for Compositional Image Synthesis,http://arxiv.org/abs/2306.00763v1 -"TransTIC: Transferring Transformer-based Image Compression from Human - Perception to Machine Perception",http://arxiv.org/abs/2306.05085v2 -"Reliability Check: An Analysis of GPT-3's Response to Sensitive Topics - and Prompt Wording",http://arxiv.org/abs/2306.06199v1 -Boosting Language Models Reasoning with Chain-of-Knowledge Prompting,http://arxiv.org/abs/2306.06427v2 -"MetricPrompt: Prompting Model as a Relevance Metric for Few-shot Text - Classification",http://arxiv.org/abs/2306.08892v1 -"2nd Place Winning Solution for the CVPR2023 Visual Anomaly and Novelty - Detection Challenge: Multimodal Prompting for Data-centric Anomaly Detection",http://arxiv.org/abs/2306.09067v2 -"Evolutionary Verbalizer Search for Prompt-based Few Shot Text - Classification",http://arxiv.org/abs/2306.10514v1 -"Harnessing the Power of Adversarial Prompting and Large Language Models - for Robust Hypothesis Generation in Astronomy",http://arxiv.org/abs/2306.11648v1 -Counting Guidance for High Fidelity Text-to-Image Synthesis,http://arxiv.org/abs/2306.17567v1 -"Legal Syllogism Prompting: Teaching Large Language Models for Legal - Judgment Prediction",http://arxiv.org/abs/2307.08321v1 -"PromptCrafter: Crafting Text-to-Image Prompt through Mixed-Initiative - Dialogue with LLM",http://arxiv.org/abs/2307.08985v1 -"DAPrompt: Deterministic Assumption Prompt Learning for Event Causality - Identification",http://arxiv.org/abs/2307.09813v1 -Divide & Bind Your Attention for Improved Generative Semantic Nursing,http://arxiv.org/abs/2307.10864v2 -Why Is Prompt Tuning for Vision-Language Models Robust to Noisy Labels?,http://arxiv.org/abs/2307.11978v1 -"Measurement of Non-prompt $\rm D^0$-meson Elliptic Flow in Pb-Pb - Collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV",http://arxiv.org/abs/2307.14084v1 -"Tutorials on Stance Detection using Pre-trained Language Models: - Fine-tuning BERT and Prompting Large Language Models",http://arxiv.org/abs/2307.15331v1 -PromptSum: Parameter-Efficient Controllable 
Abstractive Summarization,http://arxiv.org/abs/2308.03117v1 -Mitigating Word Bias in Zero-shot Prompt-based Classifiers,http://arxiv.org/abs/2309.04992v1 -DePT: Decoupled Prompt Tuning,http://arxiv.org/abs/2309.07439v1 -"Adaptive Prompt Learning with Distilled Connective Knowledge for - Implicit Discourse Relation Recognition",http://arxiv.org/abs/2309.07561v1 -"EchoPrompt: Instructing the Model to Rephrase Queries for Improved - In-context Learning",http://arxiv.org/abs/2309.10687v2 -"On the Relationship between Skill Neurons and Robustness in Prompt - Tuning",http://arxiv.org/abs/2309.12263v1 -"HRoT: Hybrid prompt strategy and Retrieval of Thought for Table-Text - Hybrid Question Answering",http://arxiv.org/abs/2309.12669v1 -"Generative Speech Recognition Error Correction with Large Language - Models and Task-Activating Prompting",http://arxiv.org/abs/2309.15649v2 -Stress Testing Chain-of-Thought Prompting for Large Language Models,http://arxiv.org/abs/2309.16621v1 -"Measurement of the production cross-section of $J/ψ$ and $ψ(2$S$)$ - mesons in $pp$ collisions at $\sqrt{s} = 13$ TeV with the ATLAS detector",http://arxiv.org/abs/2309.17177v1 -Prompting-based Efficient Temporal Domain Generalization,http://arxiv.org/abs/2310.02473v1 -"MedPrompt: Cross-Modal Prompting for Multi-Task Medical Image - Translation",http://arxiv.org/abs/2310.02663v1 -"DiffPrompter: Differentiable Implicit Visual Prompts for - Semantic-Segmentation in Adverse Conditions",http://arxiv.org/abs/2310.04181v1 -Domain-Controlled Prompt Learning,http://arxiv.org/abs/2310.07730v1 -Prompting Scientific Names for Zero-Shot Species Recognition,http://arxiv.org/abs/2310.09929v1 -"Prompting for Discovery: Flexible Sense-Making for AI Art-Making with - Dreamsheets",http://arxiv.org/abs/2310.09985v1 -A Search for Prompts: Generating Structured Answers from Contracts,http://arxiv.org/abs/2310.10141v1 -Eliciting Human Preferences with Language Models,http://arxiv.org/abs/2310.11589v1 -"Attack Prompt Generation for Red Teaming and Defending Large Language - Models",http://arxiv.org/abs/2310.12505v1 -"PromptInfuser: How Tightly Coupling AI and UI Design Impacts Designers' - Workflows",http://arxiv.org/abs/2310.15435v1 -"SCATTERING INVOLVING PROMPT AND EQUILIBRATED COMPONENTS, INFORMATION - THEORY, AND CHAOTIC QUANTUM DOTS",http://arxiv.org/abs/cond-mat/9502078v1 -"Measurement of the quark to photon fragmentation function through the - inclusive production of prompt photons in hadronic Z^0 decays",http://dx.doi.org/10.1007/s100520050122 -Prompt photon processes in photoproduction at HERA,http://dx.doi.org/10.1016/S0920-5632(00)00148-1 -Prompt Photon Production at HERA and LEP,http://dx.doi.org/10.1142/9789812702227_0113 -Deeply Virtual Compton Scattering and Prompt Photon production at HERA,http://arxiv.org/abs/hep-ex/0510031v1 -Prompt Photons and Particle Momentum Distributions at HERA,http://arxiv.org/abs/hep-ex/0701033v1 -"Constraints on the Proton's Gluon Distribution from Prompt Photon - Production",http://dx.doi.org/10.1016/0550-3213(95)00424-Q -Effects of Anomalous Couplings of Quarks on Prompt Photon Production,http://dx.doi.org/10.1103/PhysRevD.55.2724 -The prompt lepton cookbook,http://dx.doi.org/10.1016/S0927-6505(01)00105-0 -Prompt J/psi production at the LHC,http://dx.doi.org/10.1088/1126-6708/2002/09/014 -Prompt photon photoproduction at HERA in the k_T-factorization approach,http://dx.doi.org/10.1103/PhysRevD.72.054002 -Prompt J/psi plus photon associated electroproduction at DESY 
HERA,http://dx.doi.org/10.1140/epjc/s10052-006-0044-2 -Probing Low-x QCD With Very High Energy Prompt Muons,http://arxiv.org/abs/hep-ph/0701003v1 -"A New Type of Shape Instability of Hot Nuclei and Prompt Nuclear - Fragmentation",http://dx.doi.org/10.1103/PhysRevLett.82.5008 -Prompt photons with associated jets in photoproduction at HERA,http://arxiv.org/abs/0705.2700v1 -Identification of photon-tagged jets in the ALICE experiment,http://dx.doi.org/10.1016/j.nima.2007.10.050 -Probing QCD (media) with prompt photons,http://arxiv.org/abs/0911.4609v1 -Prompt atmospheric neutrino flux,http://arxiv.org/abs/1611.05120v1 -A practical guide to event generation for prompt photon production,http://dx.doi.org/10.1088/1361-6471/aa5f29 -"Energy Dependent Prompt Neutron Multiplicity Parameterization for - Actinide Photofission",http://arxiv.org/abs/1801.01107v1 -"Production and polarization of prompt $J/ψ$ in the improved color - evaporation model using the $k_T$-factorization approach",http://dx.doi.org/10.1103/PhysRevD.98.114029 -Prompt neutrino flux in the atmosphere revisited,http://arxiv.org/abs/1602.08441v1 -"Determination of the strong coupling constant from ATLAS measurements of - the inclusive isolated prompt photon cross section at 7 TeV",http://arxiv.org/abs/1703.03959v1 -"$η_c'$ Hadroproduction at Next-to-Leading Order and its Relevance to - $ψ'$ Production",http://dx.doi.org/10.1016/j.physletb.2018.10.009 -Using natural language prompts for machine translation,http://arxiv.org/abs/2202.11822v1 -"Prompt-based System for Personality and Interpersonal Reactivity - Prediction",http://dx.doi.org/10.1016/j.simpa.2022.100296 -A very preliminary analysis of DALL-E 2,http://arxiv.org/abs/2204.13807v2 -Asking the Right Questions in Low Resource Template Extraction,http://arxiv.org/abs/2205.12643v1 -Adversarial Attacks on Image Generation With Made-Up Words,http://arxiv.org/abs/2208.04135v1 -On the prompt contribution to the atmospheric neutrino flux,http://dx.doi.org/10.1103/PhysRevD.107.023014 -"CHAI-DT: A Framework for Prompting Conversational Generative AI Agents - to Actively Participate in Co-Creation",http://arxiv.org/abs/2305.03852v1 -Prompted LLMs as Chatbot Modules for Long Open-domain Conversation,http://dx.doi.org/10.18653/v1/2023.findings-acl.277 -Chain-Of-Thought Prompting Under Streaming Batch: A Case Study,http://arxiv.org/abs/2306.00550v1 -Analysing Gender Bias in Text-to-Image Models using Object Detection,http://arxiv.org/abs/2307.08025v1 -"Let the Models Respond: Interpreting Language Model Detoxification - Through the Lens of Prompt Dependence",http://arxiv.org/abs/2309.00751v1 -Audio Editing with Non-Rigid Text Prompts,http://arxiv.org/abs/2310.12858v1 -GEMBA-MQM: Detecting Translation Quality Error Spans with GPT-4,http://arxiv.org/abs/2310.13988v1 -"Multiwavelength analysis of the intriguing GRB 061126: the reverse shock - scenario and magnetization",http://dx.doi.org/10.1086/592062 -"Highly Luminous Supernovae associated with Gamma-Ray Bursts I.: GRB - 111209A/SN 2011kl in the Context of Stripped-Envelope and Superluminous - Supernovae",http://dx.doi.org/10.1051/0004-6361/201629162 -"The central engine of GRB 130831A and the energy breakdown of a - relativistic explosion",http://dx.doi.org/10.1093/mnras/stv2280 -"Deciphering Core Collapse Supernovae: Is Convection the Key? I. 
Prompt - Convection",http://arxiv.org/abs/astro-ph/9609006v1 -"REM Optical Slitless Spectrograph (ROSS): an instrument for prompt low - resolution spectroscopy of Gamma Ray Bursts",http://arxiv.org/abs/astro-ph/0109126v1 -GeV Emission from Prompt and Afterglow Phases of Gamma-Ray Bursts,http://dx.doi.org/10.1086/592486 -"A Correlation of Spectral Lag Evolution with Prompt Optical Emission in - GRBs?",http://dx.doi.org/10.1063/1.3027958 -"Realistic analytic model for the prompt and high latitude emission in - GRBs",http://dx.doi.org/10.1111/j.1365-2966.2009.15355.x -"Lag-luminosity relation in gamma-ray burst X-ray flares: a direct link - to the prompt emission",http://dx.doi.org/10.1111/j.1365-2966.2010.16824.x -Observing the prompt emission of gamma-ray bursts,http://dx.doi.org/10.1016/j.crhy.2011.01.012 -Gamma-Ray bursts: Energetics and Prompt Correlations,http://arxiv.org/abs/1308.1097v1 -"Observation and measurements of the production of prompt and non-prompt - $J/ψ$ mesons in association with a $Z$ boson in $pp$ collisions at - $\sqrt{s}= 8$ TeV with the ATLAS detector",http://dx.doi.org/10.1140/epjc/s10052-015-3406-9 -"A Monte Carlo Study of the Relationship between the Time Structures of - Prompt Gammas and in vivo Radiation Dose in Proton Therapy",http://arxiv.org/abs/1503.02803v2 -"Luminosity--time and luminosity--luminosity correlations for GRB prompt - and afterglow plateau emissions",http://dx.doi.org/10.1093/mnras/stv1229 -"Method for measuring prompt gamma-rays generated by D-T neutrons - bombarding a depleted uranium spherical shell",http://dx.doi.org/10.1088/1674-1137/40/1/014001 -A Revised Analysis of Gamma Ray Bursts' prompt efficiencies,http://dx.doi.org/10.1093/mnras/stw1331 -"Measurement of prompt and nonprompt charmonium suppression in PbPb - collisions at 5.02 TeV",http://dx.doi.org/10.1140/epjc/s10052-018-5950-6 -"AutoPrompt: Eliciting Knowledge from Language Models with Automatically - Generated Prompts",http://arxiv.org/abs/2010.15980v2 -"PADA: Example-based Prompt Learning for on-the-fly Adaptation to Unseen - Domains",http://arxiv.org/abs/2102.12206v4 -"Measurement of beauty and charm production in pp collisions at - $\sqrt{s}=5.02$ TeV via non-prompt and prompt D mesons",http://dx.doi.org/10.1007/JHEP05(2021)220 -"Polarization Predictions in the GRB Prompt Phase with the Internal Shock - Model",http://dx.doi.org/10.3847/1538-4357/abe3fb -"Controllable Generation from Pre-trained Language Models via Inverse - Prompting",http://dx.doi.org/10.1145/3447548.3467418 -HTLM: Hyper-Text Pre-Training and Prompting of Language Models,http://arxiv.org/abs/2107.06955v1 -"LiST: Lite Prompted Self-training Makes Parameter-Efficient Few-shot - Learners",http://arxiv.org/abs/2110.06274v2 -"P-Adapters: Robustly Extracting Factual Information from Language Models - with Diverse Prompts",http://arxiv.org/abs/2110.07280v2 -Exploring Universal Intrinsic Task Subspace via Prompt Tuning,http://arxiv.org/abs/2110.07867v3 -"Jurassic is (almost) All You Need: Few-Shot Meaning-to-Text Generation - for Open-Domain Dialogue",http://arxiv.org/abs/2110.08094v2 -"Reconsidering Tweets: Intervening During Tweet Creation Decreases - Offensive Content",http://arxiv.org/abs/2112.00773v1 -HyperPrompt: Prompt-based Task-Conditioning of Transformers,http://arxiv.org/abs/2203.00759v2 -Conditional Prompt Learning for Vision-Language Models,http://arxiv.org/abs/2203.05557v2 -"Multi-Modal Few-Shot Object Detection with Meta-Learning-Based - Cross-Modal Prompting",http://arxiv.org/abs/2204.07841v3 
-"Towards Unified Conversational Recommender Systems via - Knowledge-Enhanced Prompt Learning",http://dx.doi.org/10.1145/3534678.3539382 -Few-Shot Stance Detection via Target-Aware Prompt Distillation,http://dx.doi.org/10.1145/3477495.3531979 -VIMA: General Robot Manipulation with Multimodal Prompts,http://arxiv.org/abs/2210.03094v2 -"Prompt Generation Networks for Input-based Adaptation of Frozen Vision - Transformers",http://arxiv.org/abs/2210.06466v2 -"Continuous Prompt Tuning Based Textual Entailment Model for E-commerce - Entity Typing",http://arxiv.org/abs/2211.02483v1 -"Easily Accessible Text-to-Image Generation Amplifies Demographic - Stereotypes at Large Scale",http://dx.doi.org/10.1145/3593013.3594095 -"CCPrompt: Counterfactual Contrastive Prompt-Tuning for Many-Class - Classification",http://arxiv.org/abs/2211.05987v1 -Prompt Tuning for Parameter-efficient Medical Image Segmentation,http://arxiv.org/abs/2211.09233v1 -"Context Variance Evaluation of Pretrained Language Models for - Prompt-based Biomedical Knowledge Probing",http://arxiv.org/abs/2211.10265v3 -"J/$ψ$ production at midrapidity in p$-$Pb collisions at $\sqrt{s_{\rm - NN}} = 8.16$ TeV",http://dx.doi.org/10.1007/JHEP07(2023)137 -"Controlled Text Generation using T5 based Encoder-Decoder Soft Prompt - Tuning and Analysis of the Utility of Generated Text in AI",http://arxiv.org/abs/2212.02924v1 -"MolCPT: Molecule Continuous Prompt Tuning to Generalize Molecular - Representation Learning",http://arxiv.org/abs/2212.10614v2 -"Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image - Diffusion Models",http://arxiv.org/abs/2301.13826v2 -LabelPrompt: Effective Prompt-based Learning for Relation Classification,http://arxiv.org/abs/2302.08068v2 -How Does In-Context Learning Help Prompt Tuning?,http://arxiv.org/abs/2302.11521v1 -Active Prompting with Chain-of-Thought for Large Language Models,http://arxiv.org/abs/2302.12246v3 -SGL-PT: A Strong Graph Learner with Graph Prompt Tuning,http://arxiv.org/abs/2302.12449v2 -Editing Implicit Assumptions in Text-to-Image Diffusion Models,http://arxiv.org/abs/2303.08084v2 -"SPeC: A Soft Prompt-Based Calibration on Performance Variability of - Large Language Model in Clinical Notes Summarization",http://arxiv.org/abs/2303.13035v3 -Towards Making the Most of ChatGPT for Machine Translation,http://arxiv.org/abs/2303.13780v4 -Semantic Prompt for Few-Shot Image Recognition,http://arxiv.org/abs/2303.14123v1 -Segment Everything Everywhere All at Once,http://arxiv.org/abs/2304.06718v4 -Chain of Thought Prompt Tuning in Vision Language Models,http://arxiv.org/abs/2304.07919v2 -SAM Meets Robotic Surgery: An Empirical Study in Robustness Perspective,http://arxiv.org/abs/2304.14674v1 -"Prompt-ICM: A Unified Framework towards Image Coding for Machines with - Task-driven Prompts",http://arxiv.org/abs/2305.02578v1 -"Low-Resource Multi-Granularity Academic Function Recognition Based on - Multiple Prompt Knowledge",http://arxiv.org/abs/2305.03287v1 -Domain Incremental Lifelong Learning in an Open World,http://arxiv.org/abs/2305.06555v1 -Efficient Prompting via Dynamic In-Context Learning,http://arxiv.org/abs/2305.11170v1 -Universal Self-Adaptive Prompting,http://arxiv.org/abs/2305.14926v2 -Explicit Visual Prompting for Universal Foreground Segmentations,http://arxiv.org/abs/2305.18476v1 -"Enhancing CLIP with CLIP: Exploring Pseudolabeling for Limited-Label - Prompt Tuning",http://arxiv.org/abs/2306.01669v1 -"Multiscale Progressive Text Prompt Network for Medical Image - 
Segmentation",http://arxiv.org/abs/2307.00174v1 -"All-in-SAM: from Weak Annotation to Pixel-wise Nuclei Segmentation with - Prompt-based Finetuning",http://arxiv.org/abs/2307.00290v2 -Image Captions are Natural Prompts for Text-to-Image Models,http://arxiv.org/abs/2307.08526v1 -Prompt Tuning on Graph-augmented Low-resource Text Classification,http://arxiv.org/abs/2307.10230v1 -"XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in - Large Language Models",http://arxiv.org/abs/2308.01263v2 -"From Prompt Injections to SQL Injection Attacks: How Protected is Your - LLM-Integrated Web Application?",http://arxiv.org/abs/2308.01990v3 -"You Only Prompt Once: On the Capabilities of Prompt Learning on Large - Language Models to Tackle Toxic Content",http://arxiv.org/abs/2308.05596v1 -AD-CLIP: Adapting Domains in Prompt Space Using CLIP,http://arxiv.org/abs/2308.05659v1 -"ICPC: Instance-Conditioned Prompting with Contrastive Learning for - Semantic Segmentation",http://arxiv.org/abs/2308.07078v1 -Better Zero-Shot Reasoning with Role-Play Prompting,http://arxiv.org/abs/2308.07702v1 -SurgicalSAM: Efficient Class Promptable Surgical Instrument Segmentation,http://arxiv.org/abs/2308.08746v1 -"MindMap: Knowledge Graph Prompting Sparks Graph of Thoughts in Large - Language Models",http://arxiv.org/abs/2308.09729v4 -Contrastive Graph Prompt-tuning for Cross-domain Recommendation,http://arxiv.org/abs/2308.10685v1 -Prompt-Based Length Controlled Generation with Reinforcement Learning,http://arxiv.org/abs/2308.12030v2 -Large Language Models Vote: Prompting for Rare Disease Identification,http://arxiv.org/abs/2308.12890v2 -PE-MED: Prompt Enhancement for Interactive Medical Image Segmentation,http://arxiv.org/abs/2308.13746v1 -"Adversarial Fine-Tuning of Language Models: An Iterative Optimisation - Approach for the Generation and Detection of Problematic Content",http://arxiv.org/abs/2308.13768v1 -"TransPrompt v2: A Transferable Prompting Framework for Cross-task Text - Classification",http://arxiv.org/abs/2308.15010v1 -BatchPrompt: Accomplish more with less,http://arxiv.org/abs/2309.00384v2 -"Prompting or Fine-tuning? 
A Comparative Study of Large Language Models - for Taxonomy Construction",http://arxiv.org/abs/2309.01715v1 -"Statistical analysis of long GRBs' prompt emission and X-ray flares: - multivariate clustering and correlations",http://arxiv.org/abs/2309.07224v1 -Delving into Multimodal Prompting for Fine-grained Visual Classification,http://arxiv.org/abs/2309.08912v1 -"Understanding Catastrophic Forgetting in Language Models via Implicit - Inference",http://arxiv.org/abs/2309.10105v1 -"ChatGPT Performance on Standardized Testing Exam -- A Proposed Strategy - for Learners",http://arxiv.org/abs/2309.14519v1 -"Prompt-and-Align: Prompt-Based Social Alignment for Few-Shot Fake News - Detection",http://dx.doi.org/10.1145/3583780.3615015 -"Sweeping Heterogeneity with Smart MoPs: Mixture of Prompts for LLM Task - Adaptation",http://arxiv.org/abs/2310.02842v2 -Revisiting Large Language Models as Zero-shot Relation Extractors,http://arxiv.org/abs/2310.05028v3 -"LongLLMLingua: Accelerating and Enhancing LLMs in Long Context Scenarios - via Prompt Compression",http://arxiv.org/abs/2310.06839v1 -"Diversity of Thought Improves Reasoning Abilities of Large Language - Models",http://arxiv.org/abs/2310.07088v1 -"Hierarchical Decomposition of Prompt-Based Continual Learning: - Rethinking Obscured Sub-optimality",http://arxiv.org/abs/2310.07234v1 -"Prompt Packer: Deceiving LLMs through Compositional Instruction with - Hidden Attacks",http://arxiv.org/abs/2310.10077v1 -Six Years of Gamma Ray Burst Observations with BeppoSAX,http://arxiv.org/abs/astro-ph/0407633v1 -"Prompt Photon Production and Observation of Deeply Virtual Compton - Scattering",http://arxiv.org/abs/hep-ex/0003030v1 -Prompt Photons and Deeply Virtual Compton Scattering at Hera,http://arxiv.org/abs/hep-ex/0205026v1 -Prompt Photon Production at HERA,http://arxiv.org/abs/hep-ex/0308066v1 -Prompt Photon Production at the Tevatron,http://arxiv.org/abs/hep-ex/0511051v1 -"Photon Structure and the Production of Jets, Hadrons, and Prompt Photons",http://arxiv.org/abs/hep-ph/9907366v2 -Results in next-to-leading-log prompt-photon hadroproduction,http://dx.doi.org/10.1088/0954-3899/26/5/316 -Current Issues in Prompt Photon Production,http://arxiv.org/abs/hep-ph/0006352v1 -Photoproduction of prompt photons at NLO,http://arxiv.org/abs/hep-ph/0105135v1 -Prompt photon production in nuclear collisions,http://arxiv.org/abs/hep-ph/0406291v1 -Charmonium production in two-photon collisions at next-to-leading order,http://dx.doi.org/10.1063/1.2122166 -Prompt photon production with $k_T-$factorization,http://arxiv.org/abs/hep-ph/0611384v1 -"Thermal photon-IMF anticorrelation: a signal of prompt - multifragmentation?",http://arxiv.org/abs/nucl-ex/0507028v1 -"Prompt photon production in photoproduction, DIS and hadronic collisions",http://dx.doi.org/10.1016/j.nuclphysbps.2008.09.149 -Lag-luminosity relation in gamma-ray burst X-ray flares,http://dx.doi.org/10.1063/1.3509315 -Prompt photon production at HERA,http://arxiv.org/abs/1006.5322v1 -NLO Calculation of Prompt Photon Production in DIS at HERA,http://dx.doi.org/10.1140/epjc/s10052-011-1616-3 -Prompt emission of GRB 121217A from gamma-rays to the NIR,http://dx.doi.org/10.1051/0004-6361/201322600 -"Simultaneous optical/gamma-ray observations of GRB 121217's prompt - emission",http://arxiv.org/abs/1312.5099v1 -Prompt production of D mesons with ALICE at the LHC,http://arxiv.org/abs/1402.1370v1 -Forward charm-production models and prompt neutrinos at IceCube,http://dx.doi.org/10.1007/JHEP11(2018)150 -"The electromagnetic 
model of short GRBs, the nature of prompt tails, - supernova-less long GRBs and highly efficient episodic accretion",http://dx.doi.org/10.1088/0004-637X/768/1/63 -ATLAS results on charmonium production,http://arxiv.org/abs/2107.14706v1 -"Distilling Hypernymy Relations from Language Models: On the - Effectiveness of Zero-Shot Taxonomy Induction",http://arxiv.org/abs/2202.04876v1 -Prompting Is Programming: A Query Language for Large Language Models,http://dx.doi.org/10.1145/3591300 -Intriguing Properties of Text-guided Diffusion Models,http://arxiv.org/abs/2306.00974v4 -Language Models as Black-Box Optimizers for Vision-Language Models,http://arxiv.org/abs/2309.05950v2 -A Formal Method for Mapping Software Engineering Practices to Essence,http://arxiv.org/abs/1812.01791v1 -"The prompt energy release of gamma-ray bursts using a cosmological - k-correction",http://dx.doi.org/10.1086/321093 -"The REM Telescope: Detecting the Near Infra-Red Counterparts of - Gamma-Ray Bursts and the Prompt Behaviour of Their Optical Continuum",http://arxiv.org/abs/astro-ph/0203034v1 -The r-process in supernova explosions from the collapse of O-Ne-Mg cores,http://dx.doi.org/10.1086/376617 -Continuous optical monitoring during the prompt emission of GRB 060111B,http://dx.doi.org/10.1051/0004-6361:20065158 -"GRB 060218/SN 2006aj: Prompt Emission from Inverse-Compton Scattering of - Shock Breakout Thermal Photons",http://arxiv.org/abs/astro-ph/0604510v1 -"Production of Prompt Charmonia in $e^+e^-$ Annihilation at - $\sqrt{s}\approx 10.6$ GeV",http://dx.doi.org/10.1103/PhysRevLett.88.052001 -"Measurement of prompt photons with associated jets in photoproduction at - HERA",http://dx.doi.org/10.1140/epjc/s10052-006-0134-1 -Limits on Anomalous Couplings of Quarks From Prompt Photon Data,http://arxiv.org/abs/hep-ph/9609455v1 -Nuclear Shadowing Effects on Prompt Photons at RHIC and LHC,http://dx.doi.org/10.1103/PhysRevC.57.3292 -"Nuclear Broadening Effects on Hard Prompt Photons at Relativistic - Energies",http://dx.doi.org/10.1103/PhysRevC.64.054909 -The prompt TeV-PeV atmospheric neutrino window,http://dx.doi.org/10.1103/PhysRevD.66.113002 -The rise of the afterglow in GRB 050820a,http://dx.doi.org/10.1051/0004-6361:20066160 -"Search for Prompt Production of $χ_{c}$ and X(3872) in e^+e^- - Annihilations",http://dx.doi.org/10.1103/PhysRevD.76.071102 -Measuring gluon shadowing with prompt photons at RHIC and LHC,http://dx.doi.org/10.1016/j.physletb.2007.12.025 -Strong spectral evolution during the prompt emission of GRB 070616,http://dx.doi.org/10.1063/1.2943421 -A general scheme for modeling gamma-ray burst prompt emission,http://dx.doi.org/10.1111/j.1365-2966.2007.12621.x -"High-pT photon processes and the photon structure - results from HERA - jet and prompt photon (photo)production",http://dx.doi.org/10.1016/j.nuclphysbps.2008.07.004 -"Monitoring the Bragg peak location of 73 MeV/u carbon ions by means of - prompt $γ$-ray measurements",http://dx.doi.org/10.1063/1.2975841 -"Prompt photon photoproduction at HERA within the framework of the quark - Reggeization hypothesis",http://dx.doi.org/10.1103/PhysRevD.78.114031 -Production of the X(3872) at the Tevatron and the LHC,http://dx.doi.org/10.1103/PhysRevD.81.114018 -"Measurement of the differential cross-sections of inclusive, prompt and - non-prompt J/psi production in proton-proton collisions at sqrt(s) = 7 TeV",http://dx.doi.org/10.1016/j.nuclphysb.2011.05.015 -Prompt J/psi production in the Regge limit of QCD: From Tevatron to 
LHC,http://dx.doi.org/10.1103/PhysRevD.85.074013 -"The photospheric radiation model for the prompt emission of Gamma-ray - Bursts: Interpreting four observed correlations",http://dx.doi.org/10.1088/2041-8205/755/1/L6 -"A comprehensive statistical analysis of Swift X-ray light-curves: the - prompt-afterglow connection in Gamma-Ray Bursts",http://arxiv.org/abs/1207.0537v1 -Single transverse spin asymmetry of prompt photon production,http://dx.doi.org/10.1016/j.physletb.2012.10.002 -A New Correlation Between GRB X-Ray Flares And The Prompt Emission,http://dx.doi.org/10.1088/2041-8205/767/2/L28 -"Delayed Onset and Fast Rise of Prompt Optical-UV Emission from Gamma-Ray - Bursts in Molecular Clouds",http://dx.doi.org/10.1088/1674-4527/13/1/007 -"Atmospheric leptons, the search for a prompt component",http://dx.doi.org/10.1051/epjconf/20125209004 -"Extent of sensitivity of single photon production to parton distribution - functions",http://dx.doi.org/10.1007/s12043-014-0765-y -Prompt Upsilon(nS) production at the LHC in the Regge limit of QCD,http://dx.doi.org/10.1103/PhysRevD.88.014003 -"Measurement of the production cross-section of $ψ(2S)\to - J/ψ(\toμ^+μ^-)π^+π^-$ in $pp$ collisions at $\sqrt{s}=7$ TeV at - ATLAS",http://dx.doi.org/10.1007/JHEP09(2014)079 -Understanding and Overcoming Biases in Customer Reviews,http://arxiv.org/abs/1604.00417v1 -"Prompt neutrino fluxes in the atmosphere with PROSA parton distribution - functions",http://dx.doi.org/10.1007/JHEP05(2017)004 -"Environmental Bisimulations for Delimited-Control Operators with Dynamic - Prompt Generation",http://dx.doi.org/10.23638/LMCS-13(3:27)2017 -Prompt Delay,http://arxiv.org/abs/1602.05045v2 -"Predictions for the isolated prompt photon production at the LHC at $ - \sqrt s= $13 TeV",http://dx.doi.org/10.1155/2017/3802381 -"Identifying Inconsistencies in Fission Product Yield Evaluations with - Prompt Neutron Emission",http://arxiv.org/abs/1709.01183v1 -Defective fission correlation data from the 2E-2v method,http://arxiv.org/abs/1709.07443v1 -Gamma Ray Burst Prompt correlations,http://dx.doi.org/10.1155/2018/4969503 -Gamma-ray burst prompt correlations: selection and instrumental effects,http://dx.doi.org/10.1088/1538-3873/aaa8d7 -"From $D_{s}^{\pm}$ production asymmetry at the LHC to prompt - $ν_τ$ at IceCube",http://dx.doi.org/10.1016/j.physletb.2019.05.026 -Novel pre-burst stage of gamma-ray bursts from machine learning,http://dx.doi.org/10.1016/j.jheap.2021.09.002 -"Extension of the Hauser-Feshbach Fission Fragment Decay Model to - Multi-Chance Fission",http://dx.doi.org/10.1103/PhysRevC.103.014615 -Fission Fragment Decay Simulations with the CGMF Code,http://dx.doi.org/10.1016/j.cpc.2021.108087 -FLEX: Unifying Evaluation for Few-Shot NLP,http://arxiv.org/abs/2107.07170v2 -"Open Aspect Target Sentiment Classification with Natural Language - Prompts",http://arxiv.org/abs/2109.03685v1 -Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning,http://arxiv.org/abs/2109.04144v1 -"PoKE: A Prompt-based Knowledge Eliciting Approach for Event Argument - Extraction",http://arxiv.org/abs/2109.05190v3 -"Dialogue State Tracking with a Language Model using Schema-Driven - Prompting",http://arxiv.org/abs/2109.07506v1 -"Energy Dependence of Prompt Fission Neutron Multiplicity in the - $^{239}$Pu($n,f$) Reaction",http://arxiv.org/abs/2109.14330v1 -"MSP: Multi-Stage Prompting for Making Pre-trained Language Models Better - Translators",http://arxiv.org/abs/2110.06609v2 -Few-Shot Self-Rationalization with Natural Language 
Prompts,http://arxiv.org/abs/2111.08284v2 -PSG: Prompt-based Sequence Generation for Acronym Extraction,http://arxiv.org/abs/2111.14301v2 -"GraphPrompt: Biomedical Entity Normalization Using Graph-based Prompt - Templates",http://arxiv.org/abs/2112.03002v1 -PromptBERT: Improving BERT Sentence Embeddings with Prompts,http://arxiv.org/abs/2201.04337v2 -"ZeroPrompt: Scaling Prompt-Based Pretraining to 1,000 Tasks Improves - Zero-Shot Generalization",http://arxiv.org/abs/2201.06910v2 -Domain Adaptation via Prompt Learning,http://arxiv.org/abs/2202.06687v1 -Continual Prompt Tuning for Dialog State Tracking,http://arxiv.org/abs/2203.06654v1 -Prototypical Verbalizer for Prompt-based Few-shot Tuning,http://arxiv.org/abs/2203.09770v1 -Contrastive Demonstration Tuning for Pre-trained Language Models,http://arxiv.org/abs/2204.04392v4 -Exploring the Universal Vulnerability of Prompt-based Learning Paradigm,http://arxiv.org/abs/2204.05239v1 -"Incremental Prompting: Episodic Memory Prompt for Lifelong Event - Detection",http://arxiv.org/abs/2204.07275v2 -"CUP: Curriculum Learning based Prompt Tuning for Implicit Event Argument - Extraction",http://arxiv.org/abs/2205.00498v2 -Contrastive Learning for Prompt-Based Few-Shot Language Learners,http://arxiv.org/abs/2205.01308v1 -"P^3 Ranker: Mitigating the Gaps between Pre-training and Ranking - Fine-tuning with Prompt-based Learning and Pre-finetuning",http://dx.doi.org/10.1145/3477495.3531786 -"The Unreliability of Explanations in Few-shot Prompting for Textual - Reasoning",http://arxiv.org/abs/2205.03401v2 -Selective Fairness in Recommendation via Prompts,http://arxiv.org/abs/2205.04682v2 -BBTv2: Towards a Gradient-Free Future with Large Language Models,http://arxiv.org/abs/2205.11200v2 -Few-shot Reranking for Multi-hop QA via Language Model Prompting,http://arxiv.org/abs/2205.12650v3 -Memory-Based Label-Text Tuning for Few-Shot Class-Incremental Learning,http://arxiv.org/abs/2207.01036v1 -"Contextual Information and Commonsense Based Prompt for Emotion - Recognition in Conversation",http://arxiv.org/abs/2207.13254v1 -Prompting for Multi-Modal Tracking,http://arxiv.org/abs/2207.14571v2 -"BEIKE NLP at SemEval-2022 Task 4: Prompt-Based Paragraph Classification - for Patronizing and Condescending Language Detection",http://arxiv.org/abs/2208.01312v1 -"GRASP: Guiding model with RelAtional Semantics using Prompt for Dialogue - Relation Extraction",http://arxiv.org/abs/2208.12494v4 -"Let Me Check the Examples: Enhancing Demonstration Learning via Explicit - Imitation",http://arxiv.org/abs/2209.00455v1 -"Cold-Start Data Selection for Few-shot Language Model Fine-tuning: A - Prompt-Based Uncertainty Propagation Approach",http://arxiv.org/abs/2209.06995v2 -"Psychologically-informed chain-of-thought prompts for metaphor - understanding in large language models",http://arxiv.org/abs/2209.08141v2 -Efficient Few-Shot Learning Without Prompts,http://arxiv.org/abs/2209.11055v1 -Best Prompts for Text-to-Image Models and How to Find Them,http://dx.doi.org/10.1145/3539618.3592000 -News Summarization and Evaluation in the Era of GPT-3,http://arxiv.org/abs/2209.12356v2 -Prompt-driven efficient Open-set Semi-supervised Learning,http://arxiv.org/abs/2209.14205v1 -Compositional Semantic Parsing with Large Language Models,http://arxiv.org/abs/2209.15003v2 -Visual Prompt Tuning for Generative Transfer Learning,http://arxiv.org/abs/2210.00990v1 -"A Unified Framework for Multi-intent Spoken Language Understanding with - prompting",http://arxiv.org/abs/2210.03337v1 -Learning to 
Decompose Visual Features with Latent Textual Prompts,http://arxiv.org/abs/2210.04287v1 -3DALL-E: Integrating Text-to-Image AI in 3D Design Workflows,http://arxiv.org/abs/2210.11603v2 -"Performance-Efficiency Trade-Offs in Adapting Language Models to Text - Classification Tasks",http://arxiv.org/abs/2210.12022v1 -"Prompt-Tuning Can Be Much Better Than Fine-Tuning on Cross-lingual - Understanding With Multilingual Language Models",http://arxiv.org/abs/2210.12360v2 -Prompt-based Text Entailment for Low-Resource Named Entity Recognition,http://arxiv.org/abs/2211.03039v1 -"Contrastive Learning with Prompt-derived Virtual Semantic Prototypes for - Unsupervised Sentence Embedding",http://arxiv.org/abs/2211.03348v2 -Prompting Language Models for Linguistic Structure,http://arxiv.org/abs/2211.07830v2 -MEAL: Stable and Active Learning for Few-Shot Prompting,http://arxiv.org/abs/2211.08358v2 -A Creative Industry Image Generation Dataset Based on Captions,http://arxiv.org/abs/2211.09035v1 -"TSGP: Two-Stage Generative Prompting for Unsupervised Commonsense - Question Answering",http://arxiv.org/abs/2211.13515v1 -"Tools for estimating fake/non-prompt lepton backgrounds with the ATLAS - detector at the LHC",http://arxiv.org/abs/2211.16178v1 -"Generalizing Multiple Object Tracking to Unseen Domains by Introducing - Natural Language Representation",http://arxiv.org/abs/2212.01568v1 -Self-Prompting Large Language Models for Zero-Shot Open-Domain QA,http://arxiv.org/abs/2212.08635v2 -PromptBoosting: Black-Box Text Classification with Ten Forward Passes,http://arxiv.org/abs/2212.09257v2 -"UniHD at TSAR-2022 Shared Task: Is Compute All We Need for Lexical - Simplification?",http://arxiv.org/abs/2301.01764v2 -"Transferring Pre-trained Multimodal Representations with Cross-modal - Similarity Matching",http://arxiv.org/abs/2301.02903v1 -"Are Language Models Worse than Humans at Following Prompts? It's - Complicated",http://arxiv.org/abs/2301.07085v1 -MTTN: Multi-Pair Text to Text Narratives for Prompt Generation,http://arxiv.org/abs/2301.10172v2 -Is Writing Prompts Really Making Art?,http://arxiv.org/abs/2301.13049v2 -PLACES: Prompting Language Models for Social Conversation Synthesis,http://arxiv.org/abs/2302.03269v3 -"RePrompt: Automatic Prompt Editing to Refine AI-Generative Art Towards - Precise Expressions",http://dx.doi.org/10.1145/3544548.3581402 -"In What Languages are Generative Language Models the Most Formal? - Analyzing Formality Distribution across Languages",http://arxiv.org/abs/2302.12299v1 -Reward Design with Language Models,http://arxiv.org/abs/2303.00001v1 -Video-P2P: Video Editing with Cross-attention Control,http://arxiv.org/abs/2303.04761v1 -Context-faithful Prompting for Large Language Models,http://arxiv.org/abs/2303.11315v2 -Is ChatGPT A Good Keyphrase Generator? 
A Preliminary Study,http://arxiv.org/abs/2303.13001v1 -"Prompt-Guided Transformers for End-to-End Open-Vocabulary Object - Detection",http://arxiv.org/abs/2303.14386v1 -Probabilistic Prompt Learning for Dense Prediction,http://arxiv.org/abs/2304.00779v1 -"One-shot and Partially-Supervised Cell Image Segmentation Using Small - Visual Prompt",http://arxiv.org/abs/2304.07991v1 -TextMesh: Generation of Realistic 3D Meshes From Text Prompts,http://arxiv.org/abs/2304.12439v1 -"POUF: Prompt-oriented unsupervised fine-tuning for large pre-trained - models",http://arxiv.org/abs/2305.00350v1 -Query Expansion by Prompting Large Language Models,http://arxiv.org/abs/2305.03653v1 -"Prompt Learning to Mitigate Catastrophic Forgetting in Cross-lingual - Transfer for Open-domain Dialogue Generation",http://arxiv.org/abs/2305.07393v2 -"Multi-modal Visual Understanding with Prompts for Semantic Information - Disentanglement of Image",http://arxiv.org/abs/2305.09333v1 -Mobile User Interface Element Detection Via Adaptively Prompt Tuning,http://arxiv.org/abs/2305.09699v1 -"Few-shot Joint Multimodal Aspect-Sentiment Analysis Based on Generative - Multimodal Prompt",http://arxiv.org/abs/2305.10169v2 -"Zero-shot Visual Relation Detection via Composite Visual Cues from Large - Language Models",http://arxiv.org/abs/2305.12476v2 -"""According to ..."" Prompting Language Models Improves Quoting from - Pre-Training Data",http://arxiv.org/abs/2305.13252v1 -"If at First You Don't Succeed, Try, Try Again: Faithful Diffusion-based - Text-to-Image Generation by Selection",http://arxiv.org/abs/2305.13308v1 -Spatial-temporal Prompt Learning for Federated Weather Forecasting,http://arxiv.org/abs/2305.14244v1 -"Large Language Models are Frame-level Directors for Zero-shot - Text-to-Video Generation",http://arxiv.org/abs/2305.14330v2 -"Boosting Cross-lingual Transferability in Multilingual Models via - In-Context Learning",http://arxiv.org/abs/2305.15233v1 -Augmenting Large Language Model Translators via Translation Memories,http://arxiv.org/abs/2305.17367v1 -"Marked Personas: Using Natural Language Prompts to Measure Stereotypes - in Language Models",http://arxiv.org/abs/2305.18189v1 -Trompt: Towards a Better Deep Neural Network for Tabular Data,http://arxiv.org/abs/2305.18446v2 -PaintSeg: Training-free Segmentation via Painting,http://arxiv.org/abs/2305.19406v3 -Consistency-guided Prompt Learning for Vision-Language Models,http://arxiv.org/abs/2306.01195v1 -Word-Level Explanations for Analyzing Bias in Text-to-Image Models,http://arxiv.org/abs/2306.05500v1 -"COVER: A Heuristic Greedy Adversarial Attack on Prompt-based Learning in - Language Models",http://arxiv.org/abs/2306.05659v3 -"Temporally-Extended Prompts Optimization for SAM in Interactive Medical - Image Segmentation",http://arxiv.org/abs/2306.08958v1 -"Deep Language Networks: Joint Prompt Training of Stacked LLMs using - Variational Inference",http://arxiv.org/abs/2306.12509v1 -PromptIR: Prompting for All-in-One Blind Image Restoration,http://arxiv.org/abs/2306.13090v1 -DreamEditor: Text-Driven 3D Scene Editing with Neural Fields,http://arxiv.org/abs/2306.13455v3 -Large Language Models as Sous Chefs: Revising Recipes with GPT-3,http://arxiv.org/abs/2306.13986v1 -"You Can Generate It Again: Data-to-text Generation with Verification and - Correction Prompting",http://arxiv.org/abs/2306.15933v1 -Prompt Ensemble Self-training for Open-Vocabulary Domain Adaptation,http://arxiv.org/abs/2306.16658v1 -"Abstractions, Scenarios, and Prompt Definitions for Process Mining with - 
LLMs: A Case Study",http://arxiv.org/abs/2307.02194v2 -PREADD: Prefix-Adaptive Decoding for Controlled Text Generation,http://arxiv.org/abs/2307.03214v1 -VampNet: Music Generation via Masked Acoustic Token Modeling,http://arxiv.org/abs/2307.04686v2 -"Fed-CPrompt: Contrastive Prompt for Rehearsal-Free Federated Continual - Learning",http://arxiv.org/abs/2307.04869v2 -"SAM-U: Multi-box prompts triggered uncertainty estimation for reliable - SAM in medical image",http://arxiv.org/abs/2307.04973v1 -"Bootstrapping Vision-Language Learning with Decoupled Language - Pre-training",http://arxiv.org/abs/2307.07063v2 -"ActionPrompt: Action-Guided 3D Human Pose Estimation With Text and Pose - Prompting",http://arxiv.org/abs/2307.09026v1 -Visual Instruction Inversion: Image Editing via Visual Prompting,http://arxiv.org/abs/2307.14331v1 -Prompt Guided Transformer for Multi-Task Dense Prediction,http://arxiv.org/abs/2307.15362v1 -"PromptPaint: Steering Text-to-Image Generation Through Paint Medium-like - Interactions",http://dx.doi.org/10.1145/3586183.3606777 -"Diagnostic Reasoning Prompts Reveal the Potential for Large Language - Model Interpretability in Medicine",http://arxiv.org/abs/2308.06834v1 -"FinEval: A Chinese Financial Domain Knowledge Evaluation Benchmark for - Large Language Models",http://arxiv.org/abs/2308.09975v1 -False Negative/Positive Control for SAM on Noisy Medical Images,http://arxiv.org/abs/2308.10382v1 -PartSeg: Few-shot Part Segmentation via Part-aware Prompt Learning,http://arxiv.org/abs/2308.12757v1 -"Interpretable Image Quality Assessment via CLIP with Multiple - Antonym-Prompt Pairs",http://arxiv.org/abs/2308.13094v1 -"WorldSmith: Iterative and Expressive Prompting for World Building with a - Generative AI",http://arxiv.org/abs/2308.13355v1 -"Prompt me a Dataset: An investigation of text-image prompting for - historical image dataset creation using foundation models",http://arxiv.org/abs/2309.01674v1 -Prompt-based Ingredient-Oriented All-in-One Image Restoration,http://arxiv.org/abs/2309.03063v2 -"From Sparse to Dense: GPT-4 Summarization with Chain of Density - Prompting",http://arxiv.org/abs/2309.04269v1 -MaskDiffusion: Boosting Text-to-Image Consistency with Conditional Mask,http://arxiv.org/abs/2309.04399v1 -"EPA: Easy Prompt Augmentation on Large Language Models via Multiple - Sources and Multiple Targets",http://arxiv.org/abs/2309.04725v1 -Audio-free Prompt Tuning for Language-Audio Models,http://arxiv.org/abs/2309.08357v1 -"Language-Oriented Communication with Semantic Coding and Knowledge - Distillation for Text-to-Image Generation",http://arxiv.org/abs/2309.11127v1 -"Transformer-based Image Compression with Variable Image Quality - Objectives",http://arxiv.org/abs/2309.12717v1 -"Self-Explanation Prompting Improves Dialogue Understanding in Large - Language Models",http://arxiv.org/abs/2309.12940v1 -VoiceLDM: Text-to-Speech with Environmental Context,http://arxiv.org/abs/2309.13664v1 -"Intuitive or Dependent? 
Investigating LLMs' Robustness to Conflicting - Prompts",http://arxiv.org/abs/2309.17415v2 -"Prompt-Enhanced Self-supervised Representation Learning for Remote - Sensing Image Understanding",http://arxiv.org/abs/2310.00022v1 -"Towards LLM-based Fact Verification on News Claims with a Hierarchical - Step-by-Step Prompting Method",http://arxiv.org/abs/2310.00305v1 -"UPAR: A Kantian-Inspired Prompting Framework for Enhancing Large - Language Model Capabilities",http://arxiv.org/abs/2310.01441v1 -Large Language Models as Analogical Reasoners,http://arxiv.org/abs/2310.01714v2 -Dual Prompt Tuning for Domain-Aware Federated Learning,http://arxiv.org/abs/2310.03103v1 -Molecule Design by Latent Prompt Transformer,http://arxiv.org/abs/2310.03253v1 -Fine-tune Language Models to Approximate Unbiased In-context Learning,http://arxiv.org/abs/2310.03331v1 -"Inclusive Data Representation in Federated Learning: A Novel Approach - Integrating Textual and Visual Prompt",http://dx.doi.org/10.1145/3594739.3612914 -PromptSpeaker: Speaker Generation Based on Text Descriptions,http://arxiv.org/abs/2310.05001v1 -"Self-Convinced Prompting: Few-Shot Question Answering with Repeated - Introspection",http://arxiv.org/abs/2310.05035v2 -"LLMLingua: Compressing Prompts for Accelerated Inference of Large - Language Models",http://arxiv.org/abs/2310.05736v1 -"Take a Step Back: Evoking Reasoning via Abstraction in Large Language - Models",http://arxiv.org/abs/2310.06117v1 -The Importance of Prompt Tuning for Automated Neuron Explanations,http://arxiv.org/abs/2310.06200v2 -"Meta-CoT: Generalizable Chain-of-Thought Prompting in Mixed-task - Scenarios with Large Language Models",http://arxiv.org/abs/2310.06692v2 -Large Language Models can Learn Rules,http://arxiv.org/abs/2310.07064v1 -"Found in the Middle: Permutation Self-Consistency Improves Listwise - Ranking in Large Language Models",http://arxiv.org/abs/2310.07712v1 -"Co$^2$PT: Mitigating Bias in Pre-trained Language Models through - Counterfactual Contrastive Prompt Tuning",http://arxiv.org/abs/2310.12490v1 -"Towards Anytime Fine-tuning: Continually Pre-trained Language Models - with Hypernetwork Prompt",http://arxiv.org/abs/2310.13024v1 -"Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring - Fine-Grained Relevance Labels",http://arxiv.org/abs/2310.14122v1 -"Large Language Models can Share Images, Too!",http://arxiv.org/abs/2310.14804v1 -Prompt-driven Target Speech Diarization,http://arxiv.org/abs/2310.14823v1 -"PROMPT: Panchromatic Robotic Optical Monitoring and Polarimetry - Telescopes",http://dx.doi.org/10.1393/ncc/i2005-10149-6 -GRB 070311: a direct link between the prompt emission and the afterglow,http://dx.doi.org/10.1051/0004-6361:20078254 -"Evidence for New Relations between Gamma Ray Burst Prompt and X-ray - Afterglow Emission from 9 Years of Swift",http://dx.doi.org/10.1088/0067-0049/209/2/20 -"Inclusive, prompt and non-prompt ${\rm J}/ψ$ production at - midrapidity in p$-$Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV",http://dx.doi.org/10.1007/JHEP06(2022)011 -"Photospheric Prompt Emission From Long Gamma Ray Burst Simulations -- I. 
- Optical Emission",http://dx.doi.org/10.3847/1538-4357/ac2428 -Few-Shot Bot: Prompt-Based Learning for Dialogue Systems,http://arxiv.org/abs/2110.08118v1 -New approaches to particle induced prompt gamma imaging,http://arxiv.org/abs/2205.00200v2 -Crosslingual Generalization through Multitask Finetuning,http://arxiv.org/abs/2211.01786v2 -ADEPT: A DEbiasing PrompT Framework,http://arxiv.org/abs/2211.05414v2 -"First measurement of prompt and non-prompt ${\rm D^{*+}}$ vector meson - spin alignment in pp collisions at $\sqrt{s} = 13$ TeV",http://dx.doi.org/10.1016/j.physletb.2023.137920 -Segment Anything Model for Medical Image Analysis: an Experimental Study,http://dx.doi.org/10.1016/j.media.2023.102918 -"The Rise of AI Language Pathologists: Exploring Two-level Prompt - Learning for Few-shot Weakly-supervised Whole Slide Image Classification",http://arxiv.org/abs/2305.17891v1 -"Layout and Task Aware Instruction Prompt for Zero-shot Document Image - Question Answering",http://arxiv.org/abs/2306.00526v4 -"ArtWhisperer: A Dataset for Characterizing Human-AI Interactions in - Artistic Creations",http://arxiv.org/abs/2306.08141v2 -"Resprompt: Residual Connection Prompting Advances Multi-Step Reasoning - in Large Language Models",http://arxiv.org/abs/2310.04743v1 -"A decreasing column density during the prompt emission from GRB000528 - observed with BeppoSAX",http://dx.doi.org/10.1086/423335 -Prompt Optical Detection of GRB 050401 with ROTSE-IIIa,http://dx.doi.org/10.1086/497370 -The role of afterglow break-times as GRB jet angle indicators,http://dx.doi.org/10.1111/j.1365-2966.2007.11679.x -"Compton dragged supercritical piles: The GRB prompt and afterglow - scenario",http://arxiv.org/abs/0708.4364v1 -A view of prompt atmospheric neutrinos with IceCube,http://dx.doi.org/10.1016/j.nuclphysbps.2013.04.105 -"A common stochastic process rules gamma-ray burst prompt emission and - X-ray flares",http://dx.doi.org/10.1088/0004-637X/801/1/57 -"A Unified Model for GRB Prompt Emission from Optical to $γ$-Rays: - Exploring GRBs as Standard Candles",http://dx.doi.org/10.3847/2041-8205/831/1/L8 -Gamma Ray Burst afterglow and prompt-afterglow relations: an overview,http://dx.doi.org/10.1016/j.newar.2017.04.001 -"Constraining Low-luminosity Gamma-Ray Bursts as Ultra-high-energy Cosmic - Ray Sources Using GRB 060218 as a Proxy",http://dx.doi.org/10.3847/1538-4357/abb60c -The prompt atmospheric neutrino flux in the light of LHCb,http://dx.doi.org/10.1007/JHEP02(2016)130 -"Observation of prompt J/$ψ$ meson elliptic flow in high-multiplicity - pPb collisions at $\sqrt{s_\mathrm{NN}} =$ 8.16 TeV",http://dx.doi.org/10.1016/j.physletb.2019.02.018 -A First Search for Prompt Radio Emission from a Gravitational-Wave Event,http://dx.doi.org/10.3847/2041-8213/ab2248 -"Validation of a recommender system for prompting omitted foods in online - dietary assessment surveys",http://dx.doi.org/10.1145/3329189.3329191 -"Inferring prompt black-hole formation in neutron star mergers from - gravitational-wave data",http://dx.doi.org/10.1103/PhysRevD.101.044006 -"Origin for the Prompt Spectral Evolution Characteristics and High Energy - Emission during the X-Ray Flare in GRB 180720B",http://dx.doi.org/10.3847/1538-4357/ab3c6e -"Accretion-induced prompt black hole formation in asymmetric neutron star - mergers, dynamical ejecta and kilonova signals",http://dx.doi.org/10.1093/mnras/staa1860 -"Electron Spectrum for the Prompt Emission of Gamma-ray Bursts in the - Synchrotron Radiation Scenario",http://dx.doi.org/10.3847/1538-4357/abe76c 
-Factual Probing Is [MASK]: Learning vs. Learning to Recall,http://arxiv.org/abs/2104.05240v2 -GRB Polarization: A Unique Probe of GRB Physics,http://arxiv.org/abs/2109.03286v2 -"Prompt-based Zero-shot Relation Extraction with Semantic Knowledge - Augmentation",http://arxiv.org/abs/2112.04539v2 -Align and Prompt: Video-and-Language Pre-training with Entity Prompts,http://arxiv.org/abs/2112.09583v2 -Image Segmentation Using Text and Image Prompts,http://arxiv.org/abs/2112.10003v2 -Learning to Compose Diversified Prompts for Image Emotion Classification,http://arxiv.org/abs/2201.10963v2 -Personalized Prompt Learning for Explainable Recommendation,http://arxiv.org/abs/2202.07371v2 -"Constrains on the physics of the prompt emission from a distant and - energetic gamma-ray burst GRB 220101A",http://dx.doi.org/10.3847/1538-4357/aca091 -Multi-Stage Prompting for Knowledgeable Dialogue Generation,http://arxiv.org/abs/2203.08745v1 -"A Prompting-based Approach for Adversarial Example Generation and - Robustness Enhancement",http://arxiv.org/abs/2203.10714v1 -Prompt Consistency for Zero-Shot Task Generalization,http://arxiv.org/abs/2205.00049v2 -Prompt-based Learning for Unpaired Image Captioning,http://arxiv.org/abs/2205.13125v2 -"S-Prompts Learning with Pre-trained Transformers: An Occam's Razor for - Domain Incremental Learning",http://arxiv.org/abs/2207.12819v2 -"Neutrino search from γ-ray bursts during the prompt and X-ray - afterglow phases using 10 years of IceCube public data",http://dx.doi.org/10.1051/0004-6361/202244815 -Prompt cusps and the dark matter annihilation signal,http://dx.doi.org/10.1088/1475-7516/2023/10/008 -Visual Prompt Tuning for Test-time Domain Adaptation,http://arxiv.org/abs/2210.04831v2 -"FS-DETR: Few-Shot DEtection TRansformer with prompting and without - re-training",http://arxiv.org/abs/2210.04845v2 -"Schema-aware Reference as Prompt Improves Data-Efficient Knowledge Graph - Construction",http://dx.doi.org/10.1145/3539618.3591763 -"Fine-grained Visual-Text Prompt-Driven Self-Training for Open-Vocabulary - Object Detection",http://dx.doi.org/10.1109/TNNLS.2023.3293484 -"Few-shot Multimodal Sentiment Analysis based on Multimodal Probabilistic - Fusion Prompts",http://dx.doi.org/10.1145/3581783.3612181 -Contextual Transformer for Offline Meta Reinforcement Learning,http://arxiv.org/abs/2211.08016v1 -PromptCap: Prompt-Guided Task-Aware Image Captioning,http://arxiv.org/abs/2211.09699v4 -"ProSFDA: Prompt Learning based Source-free Domain Adaptation for Medical - Image Segmentation",http://arxiv.org/abs/2211.11514v1 -PromptTTS: Controllable Text-to-Speech with Text Descriptions,http://arxiv.org/abs/2211.12171v1 -VoP: Text-Video Co-operative Prompt Tuning for Cross-Modal Retrieval,http://arxiv.org/abs/2211.12764v3 -PØDA: Prompt-driven Zero-shot Domain Adaptation,http://arxiv.org/abs/2212.03241v3 -"PromptCAL: Contrastive Affinity Learning via Auxiliary Prompts for - Generalized Novel Category Discovery",http://arxiv.org/abs/2212.05590v2 -"Forward production of prompt neutrinos from charm in the atmosphere and - at high energy colliders",http://arxiv.org/abs/2212.07865v2 -"Boosting Low-Data Instance Segmentation by Unsupervised Pre-training - with Saliency Prompt",http://arxiv.org/abs/2302.01171v1 -"BLIAM: Literature-based Data Synthesis for Synergistic Drug Combination - Prediction",http://arxiv.org/abs/2302.06860v2 -"GraphPrompt: Unifying Pre-Training and Downstream Tasks for Graph Neural - Networks",http://arxiv.org/abs/2302.08043v3 -"A Pilot Evaluation of ChatGPT and 
DALL-E 2 on Decision Making and - Spatial Reasoning",http://arxiv.org/abs/2302.09068v1 -Soft Prompt Guided Joint Learning for Cross-Domain Sentiment Analysis,http://arxiv.org/abs/2303.00815v1 -"MEDIMP: 3D Medical Images with clinical Prompts from limited tabular - data for renal transplantation",http://arxiv.org/abs/2303.12445v2 -"$k$NN Prompting: Beyond-Context Learning with Calibration-Free Nearest - Neighbor Inference",http://arxiv.org/abs/2303.13824v1 -"Debiasing Scores and Prompts of 2D Diffusion for View-consistent - Text-to-3D Generation",http://arxiv.org/abs/2303.15413v3 -"Efficiently Aligned Cross-Lingual Transfer Learning for Conversational - Tasks using Prompt-Tuning",http://arxiv.org/abs/2304.01295v2 -"UniSeg: A Prompt-driven Universal Segmentation Model as well as A Strong - Representation Learner",http://arxiv.org/abs/2304.03493v1 -"Prompt-to-afterglow transition of optical emission in a long gamma-ray - burst consistent with a fireball",http://dx.doi.org/10.1038/s41550-023-01930-0 -"FashionSAP: Symbols and Attributes Prompt for Fine-grained Fashion - Vision-Language Pre-training",http://arxiv.org/abs/2304.05051v1 -Social Biases through the Text-to-Image Generation Lens,http://arxiv.org/abs/2304.06034v1 -PBNR: Prompt-based News Recommender System,http://arxiv.org/abs/2304.07862v1 -"Multi-view Vision-Prompt Fusion Network: Can 2D Pre-trained Model Boost - 3D Point Cloud Data-scarce Learning?",http://arxiv.org/abs/2304.10224v2 -"Prompt as Triggers for Backdoor Attack: Examining the Vulnerability in - Language Models",http://arxiv.org/abs/2305.01219v5 -"Prompt Tuning Inversion for Text-Driven Image Editing Using Diffusion - Models",http://arxiv.org/abs/2305.04441v1 -AMD: Autoregressive Motion Diffusion,http://arxiv.org/abs/2305.09381v6 -Can Language Models Solve Graph Problems in Natural Language?,http://arxiv.org/abs/2305.10037v1 -"OPT-R: Exploring the Role of Explanations in Finetuning and Prompting - for Reasoning Skills of Large Language Models",http://dx.doi.org/10.18653/v1/2023.nlrse-1.10 -"Evaluating Prompt-based Question Answering for Object Prediction in the - Open Research Knowledge Graph",http://arxiv.org/abs/2305.12900v2 -"Prompting Language-Informed Distribution for Compositional Zero-Shot - Learning",http://arxiv.org/abs/2305.14428v2 -"In-Context Impersonation Reveals Large Language Models' Strengths and - Biases",http://arxiv.org/abs/2305.14930v1 -On the Robustness of Segment Anything,http://arxiv.org/abs/2305.16220v1 -"INTapt: Information-Theoretic Adversarial Prompt Tuning for Enhanced - Non-Native Speech Recognition",http://arxiv.org/abs/2305.16371v1 -"LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive - Prompt-Based Few-Shot Fine-Tuning",http://dx.doi.org/10.18653/v1/2023.acl-short.59 -"Towards Unified Text-based Person Retrieval: A Large-scale - Multi-Attribute and Language Search Benchmark",http://arxiv.org/abs/2306.02898v4 -"Democratizing LLMs for Low-Resource Languages by Leveraging their - English Dominant Abilities with Linguistically-Diverse Prompts",http://arxiv.org/abs/2306.11372v1 -"Unified Conversational Models with System-Initiated Transitions between - Chit-Chat and Task-Oriented Dialogues",http://arxiv.org/abs/2307.01664v1 -"Multimodal Prompt Learning for Product Title Generation with Extremely - Limited Labels",http://arxiv.org/abs/2307.01969v1 -"Prompting Diffusion Representations for Cross-Domain Semantic - Segmentation",http://arxiv.org/abs/2307.02138v1 -Is ChatGPT a Good Personality Recognizer? 
A Preliminary Study,http://arxiv.org/abs/2307.03952v2 -TIAM -- A Metric for Evaluating Alignment in Text-to-Image Generation,http://arxiv.org/abs/2307.05134v1 -"MoP-CLIP: A Mixture of Prompt-Tuned CLIP Models for Domain Incremental - Learning",http://arxiv.org/abs/2307.05707v1 -"Unified Medical Image-Text-Label Contrastive Learning With Continuous - Prompt",http://arxiv.org/abs/2307.05920v1 -"Zero-shot Domain-sensitive Speech Recognition with Prompt-conditioning - Fine-tuning",http://arxiv.org/abs/2307.10274v2 -"UP-DP: Unsupervised Prompt Learning for Data Pre-Selection with - Vision-Language Models",http://arxiv.org/abs/2307.11227v1 -"IteraTTA: An interface for exploring both text prompts and audio priors - in generating music with text-to-audio models",http://arxiv.org/abs/2307.13005v1 -Visual Prompt Flexible-Modal Face Anti-Spoofing,http://arxiv.org/abs/2307.13958v1 -"Adapt and Decompose: Efficient Generalization of Text-to-SQL via Domain - Adapted Least-To-Most Prompting",http://arxiv.org/abs/2308.02582v3 -"AdaptiveSAM: Towards Efficient Tuning of SAM for Surgical Scene - Segmentation",http://arxiv.org/abs/2308.03726v1 -"DIG In: Evaluating Disparities in Image Generations with Indicators for - Geographic Diversity",http://arxiv.org/abs/2308.06198v2 -"Evaluating the Instruction-Following Robustness of Large Language Models - to Prompt Injection",http://arxiv.org/abs/2308.10819v2 -Random Word Data Augmentation with CLIP for Zero-Shot Anomaly Detection,http://arxiv.org/abs/2308.11119v2 -"GOPro: Generate and Optimize Prompts in CLIP using Self-Supervised - Learning",http://arxiv.org/abs/2308.11605v1 -Knowledge Graph Prompting for Multi-Document Question Answering,http://arxiv.org/abs/2308.11730v1 -CLIPN for Zero-Shot OOD Detection: Teaching CLIP to Say No,http://arxiv.org/abs/2308.12213v2 -I3DOD: Towards Incremental 3D Object Detection via Prompting,http://arxiv.org/abs/2308.12512v1 -Re-Reading Improves Reasoning in Language Models,http://arxiv.org/abs/2309.06275v1 -"Controlled Generation with Prompt Insertion for Natural Language - Explanations in Grammatical Error Correction",http://arxiv.org/abs/2309.11439v1 -Prompt-based test-time real image dehazing: a novel pipeline,http://arxiv.org/abs/2309.17389v2 -"Fine-tuned vs. Prompt-tuned Supervised Representations: Which Better - Account for Brain Language Representations?",http://arxiv.org/abs/2310.01854v1 -"Instance Needs More Care: Rewriting Prompts for Instances Yields Better - Zero-Shot Performance",http://arxiv.org/abs/2310.02107v2 -"DSPy: Compiling Declarative Language Model Calls into Self-Improving - Pipelines",http://arxiv.org/abs/2310.03714v1 -"Mastering Robot Manipulation with Multimodal Prompts through Pretraining - and Multi-task Fine-tuning",http://arxiv.org/abs/2310.09676v1 -"Using Global Land Cover Product as Prompt for Cropland Mapping via - Visual Foundation Model",http://arxiv.org/abs/2310.10219v1 -"LLM Blueprint: Enabling Text-to-Image Generation with Complex and - Detailed Prompts",http://arxiv.org/abs/2310.10640v1 -"IDEAL: Influence-Driven Selective Annotations Empower In-Context - Learners in Large Language Models",http://arxiv.org/abs/2310.10873v1 -"To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still - Easy To Generate Unsafe Images ... 
For Now",http://arxiv.org/abs/2310.11868v1 -"Federated Learning of Large Language Models with Parameter-Efficient - Prompt Tuning and Adaptive Optimization",http://arxiv.org/abs/2310.15080v1 -"AutoDAN: Automatic and Interpretable Adversarial Attacks on Large - Language Models",http://arxiv.org/abs/2310.15140v1 -"""openness of search engine"": A critical flaw in search systems; a case - study on google, yahoo and bing",http://arxiv.org/abs/1203.0434v1 -GRB 060313: A New Paradigm for Short-Hard Bursts?,http://dx.doi.org/10.1086/508054 -"The Remarkable Afterglow of GRB 061007: Implications for Optical Flashes - and GRB Fireballs",http://dx.doi.org/10.1086/512605 -The Optical/NIR afterglow of GRB 111209A: Complex yet not Unprecedented,http://dx.doi.org/10.1051/0004-6361/201731292 -"Transition from Fireball to Poynting-flux-dominated Outflow in - Three-Episode GRB 160625B",http://arxiv.org/abs/1612.03089v2 -A blast from the infant Universe: the very high-z GRB 210905A,http://dx.doi.org/10.1051/0004-6361/202243225 -StarCoder: may the source be with you!,http://arxiv.org/abs/2305.06161v1 -"Panchromatic observations of the textbook GRB 110205A: constraining - physical mechanisms of prompt emission and afterglow",http://dx.doi.org/10.1088/0004-637X/751/2/90 -Lessons Learned from Educating AI Engineers,http://arxiv.org/abs/2103.10703v1 -r-Process in Prompt Supernova Explosions Revisited,http://dx.doi.org/10.1086/323524 -Exploring Broadband GRB Behavior During gamma-ray Emission,http://dx.doi.org/10.1086/510896 -Swift captures the spectrally evolving prompt emission of GRB 070616,http://dx.doi.org/10.1111/j.1365-2966.2007.12763.x -"The Correlation of Spectral Lag Evolution with Prompt Optical Emission - in GRB 080319B",http://dx.doi.org/10.1063/1.3155918 -"The Effect of Different Type Ia Supernova Progenitors on Galactic - Chemical Evolution",http://dx.doi.org/10.1051/0004-6361/200911869 -Partition of the total excitation energy between complementary fragments,http://arxiv.org/abs/1103.1574v1 -"A simultaneous search for prompt radio emission associated with the - short GRB 170112A using the all-sky imaging capability of the OVRO-LWA",http://dx.doi.org/10.3847/1538-4357/aad2d7 -"A faint optical flash in dust-obscured GRB 080603A - implications for - GRB prompt emission mechanisms",http://dx.doi.org/10.1111/j.1365-2966.2011.19394.x -GRB 110205A: Anatomy of a long gamma-ray burst,http://dx.doi.org/10.1088/0004-637X/748/1/59 -"The luminosity evolution over the EQuiTemporal Surfaces in the prompt - emission of Gamma-Ray Bursts",http://dx.doi.org/10.1142/S0218271811019943 -"Revisit prompt $J/ψ$ production in associated with Higgs Boson via - gluon fusion at the LHC",http://dx.doi.org/10.1103/PhysRevD.104.054006 -"Towards Visual-Prompt Temporal Answering Grounding in Medical - Instructional Video",http://arxiv.org/abs/2203.06667v6 -"Understanding and Mitigating Overfitting in Prompt Tuning for - Vision-Language Models",http://arxiv.org/abs/2211.02219v3 -"Understanding and Improving Visual Prompting: A Label-Mapping - Perspective",http://arxiv.org/abs/2211.11635v5 -Human Evaluation of Text-to-Image Models on a Multi-Task Benchmark,http://arxiv.org/abs/2211.12112v1 -"CODA-Prompt: COntinual Decomposed Attention-based Prompting for - Rehearsal-Free Continual Learning",http://arxiv.org/abs/2211.13218v2 -Fine-Grained Regional Prompt Tuning for Visual Abductive Reasoning,http://arxiv.org/abs/2303.10428v2 -"Prompt-MIL: Boosting Multi-Instance Learning Schemes via Task-specific - Prompt 
Tuning",http://arxiv.org/abs/2303.12214v2 -Similarity-Aware Multimodal Prompt Learning for Fake News Detection,http://dx.doi.org/10.1016/j.ins.2023.119446 -"Adapting Pre-trained Language Models to Vision-Language Tasks via - Dynamic Visual Prompting",http://arxiv.org/abs/2306.00409v2 -Trained Transformers Learn Linear Models In-Context,http://arxiv.org/abs/2306.09927v3 -"SAM Meets Robotic Surgery: An Empirical Study on Generalization, - Robustness and Adaptation",http://arxiv.org/abs/2308.07156v1 -"How Novices Use LLM-Based Code Generators to Solve CS1 Coding Tasks in a - Self-Paced Learning Environment",http://arxiv.org/abs/2309.14049v1 -"ClickPrompt: CTR Models are Strong Prompt Generators for Adapting - Language Models to CTR Prediction",http://arxiv.org/abs/2310.09234v2 -Are Game Engines Software Frameworks? A Three-perspective Study,http://dx.doi.org/10.1016/j.jss.2020.110846 -Skeena: Efficient and Consistent Cross-Engine Transactions,http://dx.doi.org/10.1145/3514221.3526171 -Dissipative heat engine is thermodynamically inconsistent,http://dx.doi.org/10.1098/rspa.2009.0581 -Towards a Systems Engineering Essence,http://arxiv.org/abs/1502.00121v2 -Keyword Search Engine Enriched by Expert System Features,http://arxiv.org/abs/2009.08958v1 -The Risk-Taking Software Engineer: A Framed Portrait,http://arxiv.org/abs/2301.08923v1 -Non-Ideal Measurement Heat Engines,http://arxiv.org/abs/2308.02381v1 -Top-down Paradigm in Engineering Software Integration,http://arxiv.org/abs/0908.0833v1 -Teaching cloud computing: a software engineering perspective,http://arxiv.org/abs/1209.0948v1 -"Academic Search Engines: Constraints, Bugs, and Recommendation",http://arxiv.org/abs/2211.00361v1 -On the Origin of Planetary Nebula K648 in Globular Cluster M15,http://dx.doi.org/10.1086/304273 -"Prompt Iron Enrichment, Two r-Process Components, and Abundances in Very - Metal-Poor Stars",http://dx.doi.org/10.1086/312455 -"Time dependent photoionization opacities in dense Gamma-Ray Burst - environments",http://dx.doi.org/10.1046/j.1365-8711.2001.04735.x -On the mechanism of prompt emission of gamma-ray bursts,http://arxiv.org/abs/astro-ph/0311111v1 -Thermal emission in the prompt phase of gamma-ray bursts,http://arxiv.org/abs/astro-ph/0504539v1 -"Absorption of Nuclear Gamma Radiation by Heavy Electrons on Metallic - Hydride Surfaces",http://arxiv.org/abs/cond-mat/0509269v1 -"A Study of Hadronic Backgrounds to Isolated Hard Photon Production with - L3",http://arxiv.org/abs/hep-ex/9505012v1 -Prompt Photons in Photoproduction at HERA,http://arxiv.org/abs/hep-ex/0008011v1 -Prompt Photons and DVCS at HERA,http://arxiv.org/abs/hep-ex/0108023v1 -Observation of isolated high-E_T photons in deep inelastic scattering,http://dx.doi.org/10.1016/j.physletb.2004.05.033 -Measurement of prompt photon in sqrt(s)=200GeV pp collisions,http://dx.doi.org/10.1142/9789812701909_0057 -Inclusive Prompt Photon Production in Deep Inelastic Scattering at H1,http://dx.doi.org/10.1142/9789812706706_0098 -"Prompt Photon Production in $γγ$ Collisions and the Gluon - Content of the Photon",http://dx.doi.org/10.1016/0370-2693(96)00877-5 -Isolated Prompt Photon Production at HERA,http://arxiv.org/abs/hep-ph/9606457v1 -Inclusive Prompt Photon Production in Polarized pp Collisions at HERA-N,http://dx.doi.org/10.1016/0370-2693(96)01062-3 -"Conatraints on $ΔG$ from Prompt Photon plus Jet Production at - HERA-$\vec{N}$",http://dx.doi.org/10.1016/S0370-2693(97)00615-1 -Isolated Prompt Photon Cross Sections,http://arxiv.org/abs/hep-ph/9610497v1 -Isolated 
Prompt Photon Plus Jet Photoproduction at HERA,http://arxiv.org/abs/hep-ph/9706355v1 -"Prompt Photon Plus Jet Photoproduction at HERA at Next-to-Leading Order - in QCD",http://dx.doi.org/10.1103/PhysRevD.57.235 -Isolated Prompt Photon Production,http://arxiv.org/abs/hep-ph/9708408v1 -"Rapidity Correlations and $ΔG $ from Prompt Photon plus Jet - Production in Polarized $pp$ Collisions",http://dx.doi.org/10.1103/PhysRevD.58.074002 -Sudakov Resummation Effects in Prompt-Photon Hadroproduction,http://dx.doi.org/10.1088/1126-6708/1999/03/025 -Constraints on the Gluon Density from Lepton Pair Production,http://arxiv.org/abs/hep-ph/0001127v1 -Prompt J/psi Polarization at the Tevatron,http://dx.doi.org/10.1142/S0217751X01007273 -Prompt Photon Production in Polarized Hadron Collisions,http://arxiv.org/abs/hep-ph/0006199v1 -Isolated prompt photon photoproduction at NLO,http://dx.doi.org/10.1007/s100520100732 -Theory of hard photoproduction,http://dx.doi.org/10.1103/RevModPhys.74.1221 -Prompt J/psi production from Tevatron to LHC,http://dx.doi.org/10.1142/9789812704429_0029 -"QCD radiative corrections to prompt diphoton production in association - with a jet at hadron colliders",http://dx.doi.org/10.1088/1126-6708/2003/04/059 -"Next-to-leading order QCD corrections to A_TT for prompt photon - production",http://dx.doi.org/10.1103/PhysRevD.67.114006 -"Uncertainty of polarized gluon distribution from prompt photon - production",http://dx.doi.org/10.1016/j.physletb.2004.06.102 -z-Scaling and Prompt J/psi Production at High-pT in bar pp at Tevatron,http://arxiv.org/abs/hep-ph/0405230v1 -Heavy-quarkonium production at next-to-leading order,http://dx.doi.org/10.1142/S0217751X06032046 -A new critical study of photon production in hadronic collisions,http://dx.doi.org/10.1103/PhysRevD.73.094007 -Prompt Contributions to the Dilepton Yield in Heavy Ion Collisions,http://dx.doi.org/10.1007/s100500050407 -Soft-collinear effects in prompt photon production,http://dx.doi.org/10.1103/PhysRevD.76.014010 -Prompt muons in extended air showers,http://arxiv.org/abs/0706.2145v2 -"Prompt photons in heavy ion collisions at the LHC: A ''multi-purpose'' - observable",http://arxiv.org/abs/0707.2320v1 -Correlating prompt GRB photons with neutrinos,http://arxiv.org/abs/0711.2277v1 -X-ray Polarization of Gamma-Ray Bursts,http://dx.doi.org/10.1017/CBO9780511750809.031 -The origin of the prompt GRB spectrum,http://arxiv.org/abs/0912.3743v1 -What is the radiative process of the prompt phase of Gamma Ray Bursts?,http://dx.doi.org/10.1063/1.3475299 -Commissioning of the ATLAS Muon Trigger Selection,http://arxiv.org/abs/1009.6202v1 -GRBs in the SWIFT and Fermi era: a new view of the prompt emission,http://dx.doi.org/10.1063/1.3621741 -Quarkonium production at ATLAS,http://dx.doi.org/10.1051/epjconf/20122812025 -Soft-collinear effects for prompt photon production via fragmentation,http://arxiv.org/abs/1204.2503v1 -Production of Prompt Photons: Holographic Duality and Thermalization,http://dx.doi.org/10.1103/PhysRevD.86.081901 -"D mesons suppression in PbPb collisions at sqrt{s_{NN}} = 2.76 TeV - measured by ALICE",http://arxiv.org/abs/1210.2163v1 -"Production of Prompt Photons and Dileptons in Rapid Holographic - Thermalization",http://arxiv.org/abs/1212.3354v2 -Prompt Photon A_N with the PHENIX MPC-EX Detector,http://dx.doi.org/10.1134/S1063779614010584 -X-ray behaviour of GRBs detected by INTEGRAL/JEM-X,http://arxiv.org/abs/1302.0560v1 -Quarkonium Results in PbPb Collisions at CMS,http://dx.doi.org/10.1088/1742-6596/458/1/012011 -X 
- Ray Flares and Their Connection With Prompt Emission in GRBs,http://arxiv.org/abs/1308.1996v1 -Polarization of GRB Prompt Emission,http://arxiv.org/abs/1308.5733v1 -PGNAA neutron source moderation setup optimization,http://arxiv.org/abs/1309.1308v1 -The prompt-early afterglow connection in GRBs,http://arxiv.org/abs/1309.6592v1 -"Centrality and rapidity dependence of inclusive pion and prompt photon - production in p+Pb collisions at the LHC with EPS09s nPDFs",http://dx.doi.org/10.1088/1742-6596/589/1/012010 -"Polarizations of $χ_{c1}$ and $χ_{c2}$ in prompt production at the - LHC",http://dx.doi.org/10.1103/PhysRevLett.112.182003 -Medium-induced optical effects for prompt photons,http://arxiv.org/abs/1408.1410v1 -"Averaged number of prompt neutrons calculus for photo-fission of - actinides",http://arxiv.org/abs/1411.1156v2 -"Present theoretical uncertainties on charm hadroproduction in QCD and - prompt neutrino fluxes",http://dx.doi.org/10.1051/epjconf/201611608002 -"Motivating Healthy Water Intake through Prompting, Historical - Information, and Implicit Feedback",http://arxiv.org/abs/1603.01367v1 -On similarity of jet quenching and charmonia suppression,http://dx.doi.org/10.1016/j.physletb.2017.01.041 -Prompt atmospheric neutrino flux from the various QCD models,http://dx.doi.org/10.1051/epjconf/201714107002 -"Photon radiation from heavy-ion collisions in the $\sqrt{s_{NN}}=19-200$ - GeV regime",http://dx.doi.org/10.1016/j.nuclphysa.2018.08.005 -Probing Neural Language Models for Human Tacit Assumptions,http://arxiv.org/abs/2004.04877v2 -Prompt photon production with POWHEG,http://arxiv.org/abs/1709.02648v1 -Prompt photon hadroproduction in the k_T-factorization approach,http://dx.doi.org/10.1063/1.3122201 -"Measurement of the Isolated Prompt Photon Production Cross Section in pp - Collisions at sqrt(s) = 7 TeV",http://dx.doi.org/10.1103/PhysRevLett.106.082001 -"Measurements of isolated prompt photons in pp collisions at 7 TeV in - ATLAS",http://arxiv.org/abs/1107.2200v1 -Do Prescribed Prompts Prime Sensemaking During Group Problem Solving?,http://arxiv.org/abs/1111.5400v1 -Prompt photon production and photon-hadron jet correlations with POWHEG,http://arxiv.org/abs/1610.02275v1 -Increasing efficiency of quantum memory based on atomic frequency combs,http://arxiv.org/abs/1912.05867v2 -"A model for the fast evaluation of prompt losses of energetic ions in - stellarators",http://dx.doi.org/10.1088/1741-4326/ac2994 -A general-mass scheme for prompt charm production at hadron colliders,http://arxiv.org/abs/2108.03741v1 -ClipMatrix: Text-controlled Creation of 3D Textured Meshes,http://arxiv.org/abs/2109.12922v1 -Modular and Parameter-Efficient Multimodal Fusion with Prompting,http://arxiv.org/abs/2203.08055v1 -Odor Descriptor Understanding through Prompting,http://arxiv.org/abs/2205.03719v1 -"Few-Shot Natural Language Inference Generation with PDD: Prompt and - Dynamic Demonstration",http://arxiv.org/abs/2205.10593v1 -Discovering the Hidden Vocabulary of DALLE-2,http://arxiv.org/abs/2206.00169v1 -Building a Personalized Dialogue System with Prompt-Tuning,http://arxiv.org/abs/2206.05399v1 -Transformers are Adaptable Task Planners,http://arxiv.org/abs/2207.02442v1 -"Task-specific Pre-training and Prompt Decomposition for Knowledge Graph - Population with Language Models",http://arxiv.org/abs/2208.12539v2 -"Chain of Explanation: New Prompting Method to Generate Higher Quality - Natural Language Explanation for Implicit Hate Speech",http://dx.doi.org/10.1145/3543873.3587320 -EvEntS ReaLM: Event 
Reasoning of Entity States via Language Models,http://arxiv.org/abs/2211.05392v1 -Understanding How Model Size Affects Few-shot Instruction Prompting,http://arxiv.org/abs/2212.01907v1 -"Neutron capture-induced silicon nuclear recoils for dark matter and - CE$ν$NS",http://dx.doi.org/10.1103/PhysRevD.107.076026 -"'That Darned Sandstorm': A Study of Procedural Generation through - Archaeological Storytelling",http://arxiv.org/abs/2304.08293v1 -ZeroShotDataAug: Generating and Augmenting Training Data with ChatGPT,http://arxiv.org/abs/2304.14334v1 -"Segment Anything is A Good Pseudo-label Generator for Weakly Supervised - Semantic Segmentation",http://arxiv.org/abs/2305.01275v1 -Studies on the hadronization of charm and beauty quarks,http://arxiv.org/abs/2305.10086v1 -Bits of Grass: Does GPT already know how to write like Whitman?,http://arxiv.org/abs/2305.11064v1 -"MultiTool-CoT: GPT-3 Can Use Multiple External Tools with Chain of - Thought Prompting",http://arxiv.org/abs/2305.16896v1 -Tab-CoT: Zero-shot Tabular Chain of Thought,http://arxiv.org/abs/2305.17812v1 -Long Text Generation Challenge,http://arxiv.org/abs/2306.02334v1 -Large language models and (non-)linguistic recursion,http://arxiv.org/abs/2306.07195v1 -Federated Generative Learning with Foundation Models,http://arxiv.org/abs/2306.16064v1 -Fairness of ChatGPT and the Role Of Explainable-Guided Prompts,http://arxiv.org/abs/2307.11761v1 -"Leveraging Few-Shot Data Augmentation and Waterfall Prompting for - Response Generation",http://arxiv.org/abs/2308.01080v1 -"In-Context Alignment: Chat with Vanilla Language Models Before - Fine-Tuning",http://arxiv.org/abs/2308.04275v1 -"Distilling Adversarial Prompts from Safety Benchmarks: Report for the - Adversarial Nibbler Challenge",http://arxiv.org/abs/2309.11575v1 -Guidelines for Systematic Mapping Studies in Security Engineering,http://arxiv.org/abs/1801.06810v1 -What Users See - Structures in Search Engine Results Pages,http://arxiv.org/abs/1511.05802v1 -Software Engineering at Google,http://arxiv.org/abs/1702.01715v3 -"Social Engineering in Cybersecurity: A Domain Ontology and Knowledge - Graph Application Examples",http://dx.doi.org/10.1186/s42400-021-00094-6 -"The Framework For The Discipline Of Software Engineering in Connection - to Information Technology Discipline",http://arxiv.org/abs/2206.09303v2 -"Artificial Intelligence Impact On The Labour Force -- Searching For The - Analytical Skills Of The Future Software Engineers",http://arxiv.org/abs/2302.13229v1 -Fast X-ray Transients and Gamma-ray Bursts: Constraints on Beaming,http://dx.doi.org/10.1086/306617 -"Wind Interaction Models for Gamma-Ray Burst Afterglows: The Case for Two - Types of Progenitors",http://dx.doi.org/10.1086/308914 -PIC Simulations of Prompt GRB Emissions,http://dx.doi.org/10.1063/1.2207879 -"Using Swift observations of prompt and afterglow emission to classify - GRBs",http://dx.doi.org/10.1098/rsta.2006.1984 -"Prompt neutrinos from atmospheric c-cbar and b-bbar production and the - gluon at very small x",http://dx.doi.org/10.1016/S0370-2693(03)00656-7 -"Shower Power: Isolating the Prompt Atmospheric Neutrino Flux Using - Electron Neutrinos",http://dx.doi.org/10.1088/1475-7516/2004/11/009 -Gamma-Ray Burst high energy emission from Internal Shocks,http://dx.doi.org/10.1051/0004-6361:20078518 -Prompt optical emission from residual collisions in GRB outflows,http://dx.doi.org/10.1086/529042 -"Short-Hard Gamma-Ray Bursts in Young Host Galaxies: the Effect of Prompt - Twins",http://arxiv.org/abs/0712.3309v1 -Is the 
Rapid Decay Phase from High Latitude Emission?,http://dx.doi.org/10.1063/1.3155933 -"Probes of Diffusive Shock Acceleration using Gamma-Ray Burst Prompt - Emission",http://dx.doi.org/10.1063/1.3155905 -"Variable polarization measured in the prompt emission of GRB 041219A - using IBIS on board INTEGRAL",http://dx.doi.org/10.1088/0004-637X/695/2/L208 -"Prompt optical emission and synchrotron self-absorption constraints on - emission site of GRBs",http://dx.doi.org/10.1111/j.1365-2966.2009.15212.x -"The spectral-temporal properties of the prompt pulses and rapid decay - phase of GRBs",http://dx.doi.org/10.1111/j.1365-2966.2009.16187.x -"Using Gamma-Ray Burst Prompt Emission to Probe Relativistic Shock - Acceleration",http://dx.doi.org/10.1016/j.asr.2010.02.016 -Inclusive prompt photon production in nuclear collisions at RHIC and LHC,http://dx.doi.org/10.1007/JHEP04(2011)055 -On Conditions Relating to Nonsolvability,http://arxiv.org/abs/1109.4913v1 -"An upscattering spectral formation model for the prompt emission of - Gamma-Ray Bursts",http://dx.doi.org/10.1088/0004-637X/752/2/116 -GRB980923. A burst with a short duration high energy component,http://dx.doi.org/10.1088/0004-637X/755/2/140 -"Thermal emission in the early X-ray afterglows of GRBs: following the - prompt phase to the late times",http://dx.doi.org/10.1088/0004-637X/771/1/15 -"A lingering non-thermal component in the GRB prompt emission: predicting - GeV emission from the MeV spectrum",http://dx.doi.org/10.1088/0004-637X/775/1/31 -Hard probes and the event generator EPOS,http://dx.doi.org/10.1088/1742-6596/589/1/012008 -"Short vs. Long Gamma-Ray Bursts: A Comprehensive Study of Energetics and - Prompt Gamma-Ray Correlations",http://dx.doi.org/10.1093/mnras/stv714 -"Thermal Emissions Spanning the Prompt and the Afterglow Phase of the - Ultra-long GRB 130925A",http://arxiv.org/abs/1505.03296v1 -"The prompt $J/ψ$ production in Association with a $c\bar{c}$ Pair - within the Framework of Non-relativistic QCD via photon-photon collision at - the International Linear Collider",http://dx.doi.org/10.1103/PhysRevD.92.074021 -"Update on the GRB universal scaling - E$_{\rm{X,iso}}$-E$_{\rm{γ,iso}}$-E$_{\rm{pk}}$ with ten years of - $Swift$ data",http://dx.doi.org/10.1093/mnras/stv2393 -"Prompt Neutrino Emission of Gamma-Ray Bursts in the Dissipative - Photospheric Scenario Revisited: Possible Contributions from Cocoons",http://dx.doi.org/10.3847/1538-4357/aa76e5 -"Evidence of an Internal Dissipation Origin for the High-energy Prompt - Emission of GRB 170214A",http://dx.doi.org/10.3847/1538-4357/aa7a58 -"Measurement of prompt D$^0$ meson azimuthal anisotropy in PbPb - collisions at $\sqrt{s_\mathrm{NN}} = $5.02 TeV",http://dx.doi.org/10.1103/PhysRevLett.120.202301 -"Prompt and non-prompt J/$ψ$ production and nuclear modification at - mid-rapidity in p-Pb collisions at ${\bf \sqrt{{\it s}_{\text{NN}}}= 5.02}$ - TeV",http://dx.doi.org/10.1140/epjc/s10052-018-5881-2 -"Role of THESEUS in Understanding the Radiation Mechanism of GRB Prompt - Emission",http://arxiv.org/abs/1802.01690v1 -"Inclusive prompt photon production in electron-nucleus scattering at - small x",http://dx.doi.org/10.1007/JHEP05(2018)013 -"Precision Pollution - The effects of enrichment yields and timing on - galactic chemical evolution",http://dx.doi.org/10.1093/mnras/sty2080 -"Search for heavy neutral leptons in decays of $W$ bosons produced in 13 - TeV $pp$ collisions using prompt and displaced signatures with the ATLAS - detector",http://dx.doi.org/10.1007/JHEP10(2019)265 
-"LOFAR detectability of prompt low-frequency radio emission during - gamma-ray burst X-ray flares",http://dx.doi.org/10.1093/mnras/staa1168 -"Inclusive, prompt and non-prompt J/$ψ$ production at mid-rapidity in - Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV",http://dx.doi.org/10.1007/JHEP07(2015)051 -"Prompt gamma ray diagnostics and enhanced hadron-therapy using - neutron-free nuclear reactions",http://arxiv.org/abs/1608.06778v1 -"Photospheric Emission in the Joint GBM and Konus Prompt Spectra of GRB - 120323A",http://dx.doi.org/10.3847/1538-4357/aa81c2 -"Indication for Double Parton Scatterings in W + Prompt J/Psi Production - at the LHC",http://dx.doi.org/10.1016/j.physletb.2018.04.020 -Synchrotron radiation in $γ$-ray burst prompt emission,http://dx.doi.org/10.1038/s41550-020-1041-3 -Invariant transversals in finite groups,http://arxiv.org/abs/2005.01380v1 -"From prompt to direct J/$ψ$ production: new insights on the - $χ_{c1}$ and $χ_{c2}$ polarizations and feed-down contributions from a - global-fit analysis of mid-rapidity LHC data",http://dx.doi.org/10.1140/epjc/s10052-020-8201-6 -"The long and the short of the high energy emission in GRB090926A: an - external shock",http://dx.doi.org/10.1088/0004-637X/755/2/127 -"Measurement of charmonium production in PbPb collisions at sqrt(sNN) = - 2.76 TeV with CMS",http://dx.doi.org/10.1016/j.nuclphysa.2012.12.067 -Galactic Constraints on Supernova Progenitor Models,http://dx.doi.org/10.1051/0004-6361/201220944 -"A Deep Search for Prompt Radio Emission from the Short GRB 150424A With - The Murchison Widefield Array",http://dx.doi.org/10.1088/2041-8205/814/2/L25 -"Suppression and azimuthal anisotropy of prompt and nonprompt J/psi - production in PbPb collisions at sqrt(s[NN]) = 2.76 TeV",http://dx.doi.org/10.1140/epjc/s10052-017-4781-1 -"Measurement of prompt $ψ$(2S) production cross sections in - proton-lead and proton-proton collisions at $\sqrt{s_{_\mathrm{NN}}} =$ 5.02 - TeV",http://dx.doi.org/10.1016/j.physletb.2019.01.058 -"Exponentially Decaying Extended Emissions Following Short Gamma-Ray - Bursts with Possible Luminosity -- E-folding Time Correlation",http://dx.doi.org/10.3847/1538-4357/ab1bd6 -"High-energy atmospheric muon flux calculations in comparison with recent - measurements",http://dx.doi.org/10.1088/1742-6596/1181/1/012054 -"Study of J/$ψ$ meson production inside jets in pp collisions at - $\sqrt{s} =$ 8 TeV",http://dx.doi.org/10.1016/j.physletb.2020.135409 -Spin-one dark matter and gamma ray signals from the galactic center,http://arxiv.org/abs/1911.01604v3 -"Discriminating Uranium Isotopes Based on Fission Signatures Induced by - Delayed Neutrons",http://dx.doi.org/10.1103/PhysRevApplied.14.014033 -"Proton-synchrotron as the radiation mechanism of the prompt emission of - GRBs?",http://dx.doi.org/10.1051/0004-6361/201937244 -GRB Prompt Emission Spectra: The Synchrotron Revenge,http://arxiv.org/abs/2003.10447v1 -"Studies of charm and beauty hadron long-range correlations in pp and pPb - collisions at LHC energies",http://dx.doi.org/10.1016/j.physletb.2020.136036 -Relativistic Langevin dynamics: charm versus beauty,http://dx.doi.org/10.1140/epjc/s10052-020-08708-y -A marginally fast-cooling proton-synchrotron model for prompt GRBs,http://dx.doi.org/10.1093/mnras/stab1285 -Reordering Examples Helps during Priming-based Few-Shot Learning,http://arxiv.org/abs/2106.01751v1 -CPT: Colorful Prompt Tuning for Pre-trained Vision-Language Models,http://arxiv.org/abs/2109.11797v3 -Domain Prompt Learning for Efficiently Adapting 
CLIP to Unseen Domains,http://arxiv.org/abs/2111.12853v4 -Learning to Prompt for Continual Learning,http://arxiv.org/abs/2112.08654v2 -A Dual Prompt Learning Framework for Few-Shot Dialogue State Tracking,http://arxiv.org/abs/2201.05780v3 -"Measurement of beauty production via non-prompt ${\rm D}^{0}$ mesons in - Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV",http://dx.doi.org/10.1007/JHEP12(2022)126 -"Visual-Language Navigation Pretraining via Prompt-based Environmental - Self-exploration",http://arxiv.org/abs/2203.04006v1 -Learning to Compose Soft Prompts for Compositional Zero-Shot Learning,http://arxiv.org/abs/2204.03574v3 -"Relation Extraction as Open-book Examination: Retrieval-enhanced Prompt - Tuning",http://dx.doi.org/10.1145/3477495.3531746 -"LogicSolver: Towards Interpretable Math Word Problem Solving with - Logical Prompt-enhanced Learning",http://arxiv.org/abs/2205.08232v3 -"Decoupling Knowledge from Memorization: Retrieval-augmented Prompt - Learning",http://arxiv.org/abs/2205.14704v5 -"CLIP-Actor: Text-Driven Recommendation and Stylization for Animating - Human Meshes",http://arxiv.org/abs/2206.04382v2 -"Rethinking Reinforcement Learning for Recommendation: A Prompt - Perspective",http://arxiv.org/abs/2206.07353v1 -Text-Driven Stylization of Video Objects,http://arxiv.org/abs/2206.12396v2 -GPTs at Factify 2022: Prompt Aided Fact-Verification,http://arxiv.org/abs/2206.14913v1 -"Time-resolved polarizations of gamma-ray burst prompt emission with - observed energy spectra",http://arxiv.org/abs/2208.04681v2 -"Prompting as Probing: Using Language Models for Knowledge Base - Construction",http://arxiv.org/abs/2208.11057v3 -"What does a platypus look like? Generating customized prompts for - zero-shot image classification",http://arxiv.org/abs/2209.03320v2 -ThinkSum: Probabilistic reasoning over sets using large language models,http://arxiv.org/abs/2210.01293v2 -"Explaining Patterns in Data with Language Models via Interpretable - Autoprompting",http://arxiv.org/abs/2210.01848v2 -Rethinking the Event Coding Pipeline with Prompt Entailment,http://arxiv.org/abs/2210.05257v2 -"DE-FAKE: Detection and Attribution of Fake Images Generated by - Text-to-Image Generation Models",http://arxiv.org/abs/2210.06998v2 -"Being Comes from Not-being: Open-vocabulary Text-to-Motion Generation - with Wordless Training",http://arxiv.org/abs/2210.15929v3 -"Exploiting prompt learning with pre-trained language models for - Alzheimer's Disease detection",http://arxiv.org/abs/2210.16539v2 -PINTO: Faithful Language Reasoning Using Prompt-Generated Rationales,http://arxiv.org/abs/2211.01562v3 -"DreamArtist: Towards Controllable One-Shot Text-to-Image Generation via - Positive-Negative Prompt-Tuning",http://arxiv.org/abs/2211.11337v3 -Multi-label Few-shot ICD Coding as Autoregressive Generation with Prompt,http://arxiv.org/abs/2211.13813v2 -Zero-Shot Rumor Detection with Propagation Structure via Prompt Learning,http://arxiv.org/abs/2212.01117v5 -CoP: Factual Inconsistency Detection by Controlling the Preference,http://arxiv.org/abs/2212.01611v2 -Successive Prompting for Decomposing Complex Questions,http://arxiv.org/abs/2212.04092v1 -"MURMUR: Modular Multi-Step Reasoning for Semi-Structured Data-to-Text - Generation",http://arxiv.org/abs/2212.08607v1 -"From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language - Models",http://arxiv.org/abs/2212.10846v3 -"Pre-merger alert to detect the very-high-energy prompt emission from - binary neutron-star mergers: Einstein Telescope and Cherenkov 
Telescope Array - synergy",http://dx.doi.org/10.1051/0004-6361/202345850 -Multitask Instruction-based Prompting for Fallacy Recognition,http://arxiv.org/abs/2301.09992v1 -What Makes Good Examples for Visual In-Context Learning?,http://arxiv.org/abs/2301.13670v2 -"Clinical Decision Transformer: Intended Treatment Recommendation through - Goal Prompting",http://arxiv.org/abs/2302.00612v1 -Noise2Music: Text-conditioned Music Generation with Diffusion Models,http://arxiv.org/abs/2302.03917v2 -"Few-Shot Table-to-Text Generation with Prompt Planning and Knowledge - Memorization",http://arxiv.org/abs/2302.04415v3 -"Bottom energy loss and non-prompt $J/ψ$ production in relativistic - heavy ion collisions",http://dx.doi.org/10.1103/PhysRevC.107.054917 -"Towards Unifying Medical Vision-and-Language Pre-training via Soft - Prompts",http://arxiv.org/abs/2302.08958v1 -Language Model Crossover: Variation through Few-Shot Prompting,http://arxiv.org/abs/2302.12170v2 -"Visual Exemplar Driven Task-Prompting for Unified Perception in - Autonomous Driving",http://arxiv.org/abs/2303.01788v1 -"TranSG: Transformer-Based Skeleton Graph Prototype Contrastive Learning - with Structure-Trajectory Prompted Reconstruction for Person - Re-Identification",http://arxiv.org/abs/2303.06819v3 -"Clinical Concept and Relation Extraction Using Prompt-based Machine - Reading Comprehension",http://dx.doi.org/10.1093/jamia/ocad107 -"CORA: Adapting CLIP for Open-Vocabulary Detection with Region Prompting - and Anchor Pre-Matching",http://arxiv.org/abs/2303.13076v1 -"Large Language Models are Diverse Role-Players for Summarization - Evaluation",http://arxiv.org/abs/2303.15078v3 -StyleDiffusion: Prompt-Embedding Inversion for Text-Based Editing,http://arxiv.org/abs/2303.15649v2 -"A Prompt-based Multimodal Tabular Transformer Encoder For Medical - Intervention Duration Estimation",http://arxiv.org/abs/2303.17408v2 -"Automated Prompting for Non-overlapping Cross-domain Sequential - Recommendation",http://arxiv.org/abs/2304.04218v1 -"Re-imagine the Negative Prompt Algorithm: Transform 2D Diffusion into - 3D, alleviate Janus problem and Beyond",http://arxiv.org/abs/2304.04968v3 -Approximating Online Human Evaluation of Social Chatbots with Prompting,http://arxiv.org/abs/2304.05253v2 -"APPLeNet: Visual Attention Parameterized Prompt Learning for Few-Shot - Remote Sensing Image Generalization using CLIP",http://arxiv.org/abs/2304.05995v1 -Progressive-Hint Prompting Improves Reasoning in Large Language Models,http://arxiv.org/abs/2304.09797v5 -Safety Assessment of Chinese Large Language Models,http://arxiv.org/abs/2304.10436v1 -Fundamental Limitations of Alignment in Large Language Models,http://arxiv.org/abs/2304.11082v4 -"Boosting Theory-of-Mind Performance in Large Language Models via - Prompting",http://arxiv.org/abs/2304.11490v3 -"CitePrompt: Using Prompts to Identify Citation Intent in Scientific - Papers",http://arxiv.org/abs/2304.12730v2 -Generating Procedural Materials from Text or Image Prompts,http://dx.doi.org/10.1145/3588432.3591520 -PVP: Pre-trained Visual Parameter-Efficient Tuning,http://arxiv.org/abs/2304.13639v1 -"Towards Robust Text-Prompted Semantic Criterion for In-the-Wild Video - Quality Assessment",http://arxiv.org/abs/2304.14672v1 -"A Unified Generative Retriever for Knowledge-Intensive Language Tasks - via Prompt Learning",http://dx.doi.org/10.1145/3539618.3591631 -SAM on Medical Images: A Comprehensive Study on Three Prompt Modes,http://arxiv.org/abs/2305.00035v1 -Psychologically-Inspired Causal 
Prompts,http://arxiv.org/abs/2305.01764v1 -Causality-aware Concept Extraction based on Knowledge-guided Prompting,http://arxiv.org/abs/2305.01876v5 -"Towards Prompt-robust Face Privacy Protection via Adversarial Decoupling - Augmentation Framework",http://arxiv.org/abs/2305.03980v1 -Tree of Thoughts: Deliberate Problem Solving with Large Language Models,http://arxiv.org/abs/2305.10601v1 -UP5: Unbiased Foundation Model for Fairness-aware Recommendation,http://arxiv.org/abs/2305.12090v1 -"Bi-VLGM : Bi-Level Class-Severity-Aware Vision-Language Graph Matching - for Text Guided Medical Image Segmentation",http://arxiv.org/abs/2305.12231v1 -"GPT4Table: Can Large Language Models Understand Structured Table Data? A - Benchmark and Empirical Study",http://arxiv.org/abs/2305.13062v2 -"GenSpectrum Chat: Data Exploration in Public Health Using Large Language - Models",http://arxiv.org/abs/2305.13821v1 -Better Zero-Shot Reasoning with Self-Adaptive Prompting,http://arxiv.org/abs/2305.14106v1 -"Self-Polish: Enhance Reasoning in Large Language Models via Problem - Refinement",http://arxiv.org/abs/2305.14497v1 -"BLIP-Diffusion: Pre-trained Subject Representation for Controllable - Text-to-Image Generation and Editing",http://arxiv.org/abs/2305.14720v2 -"Are Chatbots Ready for Privacy-Sensitive Applications? An Investigation - into Input Regurgitation and Prompt-Induced Sanitization",http://arxiv.org/abs/2305.15008v1 -"Transferring Visual Attributes from Natural Language to Verified Image - Generation",http://arxiv.org/abs/2305.15026v2 -"InstructEdit: Improving Automatic Masks for Diffusion-based Image - Editing With User Instructions",http://arxiv.org/abs/2305.18047v1 -"DeSAM: Decoupling Segment Anything Model for Generalizable Medical Image - Segmentation",http://arxiv.org/abs/2306.00499v1 -LoCoOp: Few-Shot Out-of-Distribution Detection via Prompt Learning,http://arxiv.org/abs/2306.01293v2 -"SpeechGen: Unlocking the Generative Power of Speech Language Models with - Prompts",http://arxiv.org/abs/2306.02207v3 -TrojLLM: A Black-box Trojan Prompt Attack on Large Language Models,http://arxiv.org/abs/2306.06815v2 -How to Efficiently Adapt Large Segmentation Model(SAM) to Medical Images,http://arxiv.org/abs/2306.13731v1 -Iterative Zero-Shot LLM Prompting for Knowledge Graph Construction,http://arxiv.org/abs/2307.01128v1 -"Unveiling the Potential of Knowledge-Prompted ChatGPT for Enhancing Drug - Trafficking Detection on Social Media",http://arxiv.org/abs/2307.03699v1 -Answering Ambiguous Questions via Iterative Prompting,http://arxiv.org/abs/2307.03897v1 -"OntoChatGPT Information System: Ontology-Driven Structured Prompts for - ChatGPT Meta-Learning",http://dx.doi.org/10.47839/ijc.22.2.3086 -"SAM-Path: A Segment Anything Model for Semantic Segmentation in Digital - Pathology",http://arxiv.org/abs/2307.09570v1 -"Leveraging Large Language Models (LLMs) for Process Mining (Technical - Report)",http://arxiv.org/abs/2307.12701v1 -"InFusion: Inject and Attention Fusion for Multi Concept Zero-Shot - Text-based Video Editing",http://arxiv.org/abs/2308.00135v3 -"Mondrian: Prompt Abstraction Attack Against Large Language Models for - Cheaper API Pricing",http://arxiv.org/abs/2308.03558v1 -"EPCFormer: Expression Prompt Collaboration Transformer for Universal - Referring Video Object Segmentation",http://arxiv.org/abs/2308.04162v1 -"Diverse Data Augmentation with Diffusions for Effective Test-time Prompt - Tuning",http://arxiv.org/abs/2308.06038v2 -Knowledge Prompt-tuning for Sequential 
Recommendation,http://arxiv.org/abs/2308.08459v1 -"FashionLOGO: Prompting Multimodal Large Language Models for Fashion Logo - Embeddings",http://arxiv.org/abs/2308.09012v1 -"Towards Large-scale 3D Representation Learning with Multi-dataset Point - Prompt Training",http://arxiv.org/abs/2308.09718v1 -"Improving Adversarial Robustness of Masked Autoencoders via Test-time - Frequency-domain Prompting",http://arxiv.org/abs/2308.10315v2 -PromptMRG: Diagnosis-Driven Prompts for Medical Report Generation,http://arxiv.org/abs/2308.12604v1 -"MultiCapCLIP: Auto-Encoding Prompts for Zero-Shot Multilingual Visual - Captioning",http://dx.doi.org/10.18653/v1/2023.acl-long.664 -LMSanitator: Defending Prompt-Tuning Against Task-Agnostic Backdoors,http://dx.doi.org/10.14722/ndss.2024.23238 -"TextrolSpeech: A Text Style Control Speech Corpus With Codec Language - Text-to-Speech Models",http://arxiv.org/abs/2308.14430v1 -Auto-Prompting SAM for Mobile Friendly 3D Medical Image Segmentation,http://arxiv.org/abs/2308.14936v1 -Contextual Biasing of Named-Entities with Large Language Models,http://arxiv.org/abs/2309.00723v2 -"StyleAdapter: A Single-Pass LoRA-Free Model for Stylized Image - Generation",http://arxiv.org/abs/2309.01770v1 -"Automatic Data Transformation Using Large Language Model: An - Experimental Study on Building Energy Data",http://arxiv.org/abs/2309.01957v2 -Create Your World: Lifelong Text-to-Image Diffusion,http://arxiv.org/abs/2309.04430v1 -ITI-GEN: Inclusive Text-to-Image Generation,http://arxiv.org/abs/2309.05569v1 -"Phase-space simulations of prompt cusps: simulating the formation of the - first haloes without artificial fragmentation",http://arxiv.org/abs/2309.05707v1 -Unsupervised Contrast-Consistent Ranking with Language Models,http://arxiv.org/abs/2309.06991v1 -"Investigating the Applicability of Self-Assessment Tests for Personality - Measurement of Large Language Models",http://arxiv.org/abs/2309.08163v1 -PromptST: Prompt-Enhanced Spatio-Temporal Multi-Attribute Prediction,http://arxiv.org/abs/2309.09500v1 -"Bias of AI-Generated Content: An Examination of News Produced by Large - Language Models",http://arxiv.org/abs/2309.09825v2 -Deep Prompt Tuning for Graph Transformers,http://arxiv.org/abs/2309.10131v1 -"Self-Recovery Prompting: Promptable General Purpose Service Robot System - with Foundation Models and Self-Recovery",http://arxiv.org/abs/2309.14425v2 -"CLIP-Hand3D: Exploiting 3D Hand Pose Estimation via Context-Aware - Prompting",http://arxiv.org/abs/2309.16140v1 -"Investigating the Efficacy of Large Language Models in Reflective - Assessment Methods through Chain of Thoughts Prompting",http://arxiv.org/abs/2310.00272v1 -"Adaptive-Solver Framework for Dynamic Strategy Selection in Large - Language Model Reasoning",http://arxiv.org/abs/2310.01446v1 -"How Prevalent is Gender Bias in ChatGPT? 
-- Exploring German and English - ChatGPT Responses",http://arxiv.org/abs/2310.03031v1 -Point-PEFT: Parameter-Efficient Fine-Tuning for 3D Pre-trained Models,http://arxiv.org/abs/2310.03059v1 -Efficient Federated Prompt Tuning for Black-box Large Pre-trained Models,http://arxiv.org/abs/2310.03123v1 -"Multimodal Prompt Transformer with Hybrid Contrastive Learning for - Emotion Recognition in Conversation",http://arxiv.org/abs/2310.04456v1 -"Profit: Benchmarking Personalization and Robustness Trade-off in - Federated Prompt Tuning",http://arxiv.org/abs/2310.04627v1 -HowToCaption: Prompting LLMs to Transform Video Annotations at Scale,http://arxiv.org/abs/2310.04900v1 -P5: Plug-and-Play Persona Prompting for Personalized Response Selection,http://arxiv.org/abs/2310.06390v1 -UniPose: Detecting Any Keypoints,http://arxiv.org/abs/2310.08530v1 -Unifying Image Processing as Visual Prompting Question Answering,http://arxiv.org/abs/2310.10513v1 -"An Image is Worth Multiple Words: Learning Object Level Concepts using - Multi-Concept Prompt Learning",http://arxiv.org/abs/2310.12274v1 -"GPT-4 Doesn't Know It's Wrong: An Analysis of Iterative Prompting for - Reasoning Problems",http://arxiv.org/abs/2310.12397v1 -"Automatic Hallucination Assessment for Aligned Large Language Models via - Transferable Adversarial Attacks",http://arxiv.org/abs/2310.12516v1 -Motif-Based Prompt Learning for Universal Cross-Domain Recommendation,http://arxiv.org/abs/2310.13303v1 -An LLM can Fool Itself: A Prompt-Based Adversarial Attack,http://arxiv.org/abs/2310.13345v1 -"Self-prompted Chain-of-Thought on Large Language Models for Open-domain - Multi-hop Reasoning",http://arxiv.org/abs/2310.13552v2 -On Bilingual Lexicon Induction with Large Language Models,http://arxiv.org/abs/2310.13995v1 -"Language Model Unalignment: Parametric Red-Teaming to Expose Hidden - Harms and Biases",http://arxiv.org/abs/2310.14303v1 -"Prompt And Delayed Radio Bangs At Kilohertz By SN 1987A: A Test For - Graviton-Photon Conversion",http://arxiv.org/abs/astro-ph/9604047v1 -Constraints on Off-Axis X-Ray Emission from Beamed GRBs,http://dx.doi.org/10.1086/307738 -X-ray afterglow detection of the short gamma-ray burst 991014,http://dx.doi.org/10.1086/317804 -"Prompt High Energy $γ$-Ray Emission From the Synchrotron - Self-Compton Process in the Reverse Shocks of $γ$-Ray Bursts",http://dx.doi.org/10.1086/318064 -The prompt gamma-ray emission of novae,http://dx.doi.org/10.1016/S1387-6473(02)00201-4 -Was GRB 990123 a unique optical flash?,http://dx.doi.org/10.1046/j.1365-8711.2002.05286.x -BATSE Observations of Gamma-Ray Burst Tails,http://dx.doi.org/10.1086/338695 -How is the Reionization Epoch Defined?,http://dx.doi.org/10.1046/j.1365-8711.2002.05302.x -X-ray spectroscopy of gamma-ray bursts: the path to the progenitor,http://dx.doi.org/10.1063/1.1579389 -"X-ray and radio bright type Ic SN 2002ap -- a hypernova without an - associated GRB",http://dx.doi.org/10.1007/10828549_40 -Emission Processes of High Energy Gamma Rays from Gamma-Ray Bursts,http://arxiv.org/abs/astro-ph/0211644v1 -Multiwavelength observations of the GRB000615 field,http://arxiv.org/abs/astro-ph/0302022v1 -"A Characteristic Dense Environment or Wind Signature in Prompt GRB - Afterglows",http://dx.doi.org/10.1086/381733 -"Prompt Ultraviolet-to-Soft-X-Ray Emission of Gamma-Ray Bursts: - Application to GRB 031203?",http://dx.doi.org/10.1086/422098 -"Early GRB afterglow from a reverse shock as a tracer of the prompt - gamma-ray light curve",http://dx.doi.org/10.1393/ncc/i2005-10076-6 -The 
Type Ia Supernova Rate,http://dx.doi.org/10.1086/452632 -"A Simple Test of the External Shock Model for the Prompt Emission in - Gamma-Ray Bursts",http://dx.doi.org/10.1016/j.newast.2007.06.003 -GRB Fireball Physics: Prompt and Early Emission,http://dx.doi.org/10.1088/1367-2630/8/9/199 -Implications of the Large Polarization Measured in Gamma Ray Bursts,http://arxiv.org/abs/astro-ph/0701294v2 -Some Theoretical Implications of Short-Hard Gamma-Ray Burst observations,http://dx.doi.org/10.1016/j.asr.2006.12.038 -"The Shapiro Conjecture: Prompt or Delayed Collapse in the head-on - collision of neutron stars?",http://dx.doi.org/10.1103/PhysRevD.63.121501 -"Upper Limit on the Prompt Muon Flux Derived from the LVD Underground - Experiment",http://dx.doi.org/10.1103/PhysRevD.60.112001 -"Measurement of Isolated Prompt Photon Production in Photon-Photon - Collisions at sqrt(s)_ee=183-209 GeV",http://dx.doi.org/10.1140/epjc/s2003-01359-1 -"Measurement of the Cross Section for Prompt Diphoton Production in - p-pbar Collisions at sqrt(s) = 1.96 TeV",http://dx.doi.org/10.1103/PhysRevLett.95.022003 -Anomalous prompt photon production in hadronic collisions at low-$x_T$,http://dx.doi.org/10.1103/PhysRevD.48.3121 -Prompt Upsilon and Psi Production at LEP,http://dx.doi.org/10.1016/0370-2693(95)01484-5 -"Isolated Prompt Photon Production in Hadronic Final States of $e^+e^-$ - Annihilation",http://dx.doi.org/10.1103/PhysRevD.54.5470 -Prompt Charmonium Production in Z Decays,http://dx.doi.org/10.1016/S0370-2693(97)00040-3 -"Effect of quark-jet energy loss on direct photons in ultrarelativistic - heavy-ion collisions",http://arxiv.org/abs/hep-ph/9807260v2 -Asymmetry of prompt photon production in p-p collisions at RHIC,http://arxiv.org/abs/hep-ph/9905511v1 -Low mass lepton pair production in hadron collisions,http://dx.doi.org/10.1016/S0920-5632(00)00151-1 -Unintegrated parton distributions and prompt photon hadroproduction,http://dx.doi.org/10.1007/s100520000326 -"$ΔG(x,μ^2)$ from jet and prompt photon production at RHIC",http://arxiv.org/abs/hep-ph/0005320v1 -Nuclear Effects in Prompt Photon Production at the Large Hadron Collider,http://dx.doi.org/10.1016/S0375-9474(01)01309-4 -Diffractive Higgs bosons and prompt photons at hadron colliders,http://dx.doi.org/10.1103/PhysRevD.67.011301 -Production of Prompt Photons,http://dx.doi.org/10.1063/1.2402665 -Direct Photons in Ion Collisions at FAIR Energies,http://arxiv.org/abs/hep-ph/0701130v1 -Prompt Photon and Inclusive $π^0$ Production at RHIC and LHC,http://dx.doi.org/10.1016/S0375-9474(02)01491-4 -"Total Prompt Energy Release in the Neutron-Induced Fission of 235-U, - 238-U, and 239-Pu",http://dx.doi.org/10.1016/j.nuclphysa.2006.03.013 -"Simulation of prompt emission from GRBs with a photospheric component - and its detectability by GLAST",http://dx.doi.org/10.1063/1.2737404 -Quark-nova explosion inside a collapsar: application to Gamma Ray Bursts,http://dx.doi.org/10.1155/2009/463521 -From RHIC to LHC,http://dx.doi.org/10.1016/j.nuclphysbps.2007.11.104 -"Deep inelastic scattering and prompt photon production within the - framework of quark Reggeization hypothesis",http://dx.doi.org/10.1103/PhysRevD.78.034033 -Nuclear shadowing and prompt photons at relativistic hadron colliders,http://dx.doi.org/10.1103/PhysRevC.78.037901 -Observational Limits on Inverse Compton Processes in GRBs,http://dx.doi.org/10.1111/j.1365-2966.2008.14198.x -X-Ray Afterglows,http://dx.doi.org/10.1063/1.3027899 -"Temporal variability of GRB early X-ray afterglows and GRB080319B prompt - 
emission",http://dx.doi.org/10.1063/1.3027924 -"The 3He(alpha,gamma)7Be S-factor at solar energies: the prompt gamma - experiment at LUNA",http://dx.doi.org/10.1016/j.nuclphysa.2008.09.014 -Clues from the prompt emission of GRB 080319B,http://dx.doi.org/10.1088/0004-637X/692/2/L92 -GRB physics with Fermi,http://dx.doi.org/10.1063/1.3155906 -"Single photons from relativistic collision of lead nuclei at CERN SPS: A - reanalysis",http://dx.doi.org/10.1103/PhysRevC.79.034906 -A Reanalysis of Single Photon Data at CERN SPS,http://dx.doi.org/10.1016/j.nuclphysa.2009.10.052 -Interpreting the high energy emission of Fermi GRBs,http://arxiv.org/abs/0912.1887v2 -Prompt optical observations of Fermi-LAT bursts and GRB 090902B,http://arxiv.org/abs/0912.3026v1 -More loosely bound hadron molecules at CDF?,http://dx.doi.org/10.1016/j.physletb.2010.01.037 -"Deep inelastic prompt photon production at HERA in the kt-factorization - approach",http://dx.doi.org/10.1103/PhysRevD.81.094034 -"Studies of isolated photon production in simulated proton-proton - collisions with ALICE-EMCal",http://dx.doi.org/10.1088/1742-6596/270/1/012033 -"Gamma-ray Burst Prompt Emission: Jitter Radiation in Stochastic Magnetic - Field Revisited",http://dx.doi.org/10.1088/0004-637X/731/1/26 -"Prompt electrons driving ion acceleration and formation of a two - temperatures plasma in nanosecond laser-ablation domain",http://dx.doi.org/10.1209/0295-5075/100/45003 -"Prompt photon and associated heavy quark production at hadron colliders - with kt-factorization",http://dx.doi.org/10.1007/JHEP05(2012)104 -"Polarization for Prompt J/psi, psi(2s) production at the Tevatron and - LHC",http://dx.doi.org/10.1103/PhysRevLett.110.042002 -"Isolated prompt photon pair production at hadron colliders with - kt-factorization",http://dx.doi.org/10.1007/JHEP02(2013)009 -"Medium modifications of photon-tagged jet fragmentation function in - high-energy heavy-ion collisions",http://dx.doi.org/10.1103/PhysRevC.89.064909 -Single and double diffractive prompt photon production at the LHC,http://arxiv.org/abs/1310.6387v1 -"Measurement of the production cross section of prompt J/psi mesons in - association with a W boson in pp collisions at sqrt{s}=7 TeV with the ATLAS - detector",http://dx.doi.org/10.1007/JHEP04(2014)172 -"Prompt-photon plus jet associated photoproduction at HERA in the parton - Reggeization approach",http://dx.doi.org/10.1103/PhysRevD.89.114016 -Diphoton production in high-energy p+A collisions,http://dx.doi.org/10.1103/PhysRevD.90.014031 -"Perturbative charm production and the prompt atmospheric neutrino flux - in light of RHIC and LHC",http://dx.doi.org/10.1007/JHEP06(2015)110 -"Monte Carlo simulation of prompt gamma-ray spectra from depleted uranium - under D-T neutron irradiation and electron recoil spectra in a liquid - scintillator detector",http://dx.doi.org/10.1088/1674-1137/40/3/036201 -Searches for Prompt $R$-Parity-Violating Supersymmetry at the LHC,http://arxiv.org/abs/1512.05956v1 -"Complete Study of Hadroproduction of a $Υ$ Meson Associated with - a Prompt $J/ψ$",http://dx.doi.org/10.1103/PhysRevLett.117.062001 -A MAD Model for Gamma-Ray Burst Variability,http://dx.doi.org/10.1093/mnras/stw1366 -"The effect of neutron skin on inclusive prompt photon production in - Pb~+~Pb collisions at the LHC",http://dx.doi.org/10.1088/1361-6471/aa5689 -Constraints on atmospheric charmed-meson production from IceCube,http://dx.doi.org/10.1051/epjconf/201613005015 -DPS in CGC: HBT correlations in double inclusive photon 
production,http://dx.doi.org/10.1103/PhysRevD.95.114028
-"Investigation of Frame Alignments for GMM-based Digit-prompted Speaker Verification",http://arxiv.org/abs/1710.10436v4
-Deep CNN based feature extractor for text-prompted speaker recognition,http://arxiv.org/abs/1803.05307v1
-"Probing nuclear modifications of parton distribution functions through the isolated prompt photon production at the LHC",http://dx.doi.org/10.1103/PhysRevC.99.055206
-"Production and polarization of prompt $\varUpsilon$($n$S) in the improved color evaporation model using the $k_T$-factorization approach",http://dx.doi.org/10.1103/PhysRevD.99.034007
-"Prompt Neutron Multiplicity Distributions Inferred from $γ$-ray and Fission Fragment Energy Measurements",http://dx.doi.org/10.1103/PhysRevC.100.054610
-Time-Resolved and Energy-Resolved Polarizations of GRB Prompt Emission,http://dx.doi.org/10.3847/1538-4357/ab7b5d
-"Prompt photon production in high-energy $pA$ collisions at forward rapidity",http://dx.doi.org/10.1103/PhysRevC.102.054901
-"Event-by-event evaluation of the prompt fission neutron spectrum from 239Pu(n, f)",http://dx.doi.org/10.1103/PhysRevC.85.024608
-Charmonium production measured in PbPb and pp collisions by CMS,http://dx.doi.org/10.1088/0954-3899/38/12/124105
-J/psi and psi(2S) production in pp collisions at sqrt(s) = 7 TeV,http://dx.doi.org/10.1007/JHEP02(2012)011
-"Creation of prompt and thin-sheet splashing by varying surface roughness or increasing air pressure",http://dx.doi.org/10.1103/PhysRevLett.109.054501
-"Measurement of high p_T isolated prompt photons in lead-lead collisions at sqrt(s_NN)=2.76 TeV with the ATLAS detector at the LHC",http://dx.doi.org/10.1016/j.nuclphysa.2012.12.090
-"An extended study of the prompt photon photoproduction at HERA with $k_T$-factorization",http://dx.doi.org/10.1103/PhysRevD.88.074001
-"Measurement of the prompt J/psi and psi(2S) polarizations in pp collisions at sqrt(s) = 7 TeV",http://dx.doi.org/10.1016/j.physletb.2013.10.055
-"Observation of Energy and Baseline Dependent Reactor Antineutrino Disappearance in the RENO Experiment",http://dx.doi.org/10.1103/PhysRevLett.116.211801
-Distributed Synthesis for Parameterized Temporal Logics,http://arxiv.org/abs/1705.08112v2
-Short GRBs Viewed from Far Off Axis,http://dx.doi.org/10.3847/2041-8213/aaec0d
-"Impact of the inelastic proton -- nucleus cross section on the prompt neutrino flux",http://dx.doi.org/10.1140/epjc/s10052-019-6669-8
-Prompt atmospheric neutrinos in the quark-gluon string model,http://dx.doi.org/10.1140/epjc/s10052-019-7547-0
-"Prompt $η_c$ meson production at the LHC in the NRQCD with $k_T$-factorization",http://dx.doi.org/10.1140/epjc/s10052-019-7134-4
-"Study of energy deposition patterns in hadron calorimeter for prompt and displaced jets using convolutional neural network",http://dx.doi.org/10.1007/JHEP11(2019)156
-"On the relevance of prompt neutrinos for the interpretation of the IceCube signals",http://dx.doi.org/10.1088/1475-7516/2019/08/004
-"Prompt X-ray emission from Fast Radio Bursts -- Upper limits with AstroSat",http://dx.doi.org/10.3847/1538-4357/ab5363
-Personalized Policy Learning using Longitudinal Mobile Health Data,http://arxiv.org/abs/2001.03258v1
-"Nudge for Deliberativeness: How Interface Features Influence Online Discourse",http://dx.doi.org/10.1145/3313831.3376646
-"IIE-NLP-NUT at SemEval-2020 Task 4: Guiding PLM with Prompt Template Reconstruction Strategy for ComVE",http://arxiv.org/abs/2007.00924v1
-"Prompt photon production in proton collisions as a probe of parton scattering in high energy limit",http://dx.doi.org/10.1103/PhysRevD.103.034013
-Generating Dialogue Responses from a Semantic Latent Space,http://arxiv.org/abs/2010.01658v1
-"Get It Scored Using AutoSAS -- An Automated System for Scoring Short Answers",http://arxiv.org/abs/2012.11243v1
-"Perturbative Charm Production and the Prompt Atmospheric Neutrino Flux in light of RHIC and LHC",http://arxiv.org/abs/2012.15190v1
-"Charmonium production in pp, p+Pb and Pb+Pb collisions with CMS experiment",http://dx.doi.org/10.1088/1742-6596/1258/1/012001
-Cognitively Aided Zero-Shot Automatic Essay Grading,http://arxiv.org/abs/2102.11258v1
-"Impact of intrinsic charm amount in the nucleon and saturation effects on the prompt atmospheric $ν_μ$ flux for IceCube",http://dx.doi.org/10.1140/epjc/s10052-022-10214-2
-"Predicting spectral parameters in the backscattering dominated model for the prompt phase of GRBs",http://dx.doi.org/10.3847/2041-8213/ac1d56
-"Prevalence of Extra Power-Law Spectral Components in Short Gamma-Ray Bursts",http://dx.doi.org/10.3847/1538-4357/ac26ba
-Planning with Learned Entity Prompts for Abstractive Summarization,http://arxiv.org/abs/2104.07606v2
-"Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity",http://arxiv.org/abs/2104.08786v2
-Prompt Acceleration of a Short-Lifetime Low-Energy Muon Beam,http://arxiv.org/abs/2104.14293v1
-Low-efficiency long gamma-ray bursts: A case study with AT2020blt,http://dx.doi.org/10.1093/mnras/stac601
-Prompting Contrastive Explanations for Commonsense Reasoning Tasks,http://arxiv.org/abs/2106.06823v1
-"Investigating Unprompted and Prompted Diagrams Generated by Physics MajorsDuring Problem Solving",http://dx.doi.org/10.1103/PhysRevPhysEducRes.18.010104
-"Neutrinos from charm: forward production at the LHC and in the atmosphere",http://arxiv.org/abs/2107.01178v2
-"Searches for Neutrinos from Precursors and Afterglows of Gamma-ray Bursts using the IceCube Neutrino Observatory",http://arxiv.org/abs/2107.08870v2
-"Prompt hadroproduction of C-even quarkonia in the light-front $k_T$ -factorization approach",http://arxiv.org/abs/2107.11661v1
-Forming Molecular States with Hadronic Rescattering,http://dx.doi.org/10.1140/epja/s10050-021-00650-1
-"CINS: Comprehensive Instruction for Few-shot Learning in Task-oriented Dialog Systems",http://arxiv.org/abs/2109.04645v4
-"The Low-Energy Spectral Index of Gamma-Ray Burst Prompt Emission from Internal Shocks",http://dx.doi.org/10.3390/galaxies9030068
-"SentiPrompt: Sentiment Knowledge Enhanced Prompt-Tuning for Aspect-Based Sentiment Analysis",http://arxiv.org/abs/2109.08306v1
-Generated Knowledge Prompting for Commonsense Reasoning,http://arxiv.org/abs/2110.08387v3
-Telling Creative Stories Using Generative Visual Aids,http://arxiv.org/abs/2110.14810v1
-Prompting Visual-Language Models for Efficient Video Understanding,http://arxiv.org/abs/2112.04478v2
-"Probing the incompressibility of nuclear matter at ultra-high density through the prompt collapse of asymmetric neutron star binaries",http://dx.doi.org/10.1103/PhysRevLett.129.032701
-"Understanding User Perspectives on Prompts for Brief Reflection on Troubling Emotions",http://arxiv.org/abs/2112.10833v1
-"PROMPT: Learning Dynamic Resource Allocation Policies for Network Applications",http://dx.doi.org/10.1016/j.future.2023.03.016
-"Novelty Controlled Paraphrase Generation with Retrieval Augmented Conditional Prompt Tuning",http://arxiv.org/abs/2202.00535v2
-"P4E: Few-Shot Event Detection as Prompt-Guided Identification and Localization",http://arxiv.org/abs/2202.07615v3
-PromDA: Prompt-based Data Augmentation for Low-Resource NLU Tasks,http://arxiv.org/abs/2202.12499v2
-"PRBoost: Prompt-Based Rule Discovery and Boosting for Interactive Weakly-Supervised Learning",http://arxiv.org/abs/2203.09735v1
-"Prompt-based Generative Approach towards Multi-Hierarchical Medical Dialogue State Tracking",http://arxiv.org/abs/2203.09946v1
-Self-Consistency Improves Chain of Thought Reasoning in Language Models,http://arxiv.org/abs/2203.11171v4
-"Continuous Detection, Rapidly React: Unseen Rumors Detection based on Continual Prompt-Tuning",http://arxiv.org/abs/2203.11720v2
-"Can Prompt Probe Pretrained Language Models? Understanding the Invisible Risks from a Causal View",http://arxiv.org/abs/2203.12258v1
-"CLIP-Mesh: Generating textured meshes from text using pretrained image-text models",http://dx.doi.org/10.1145/3550469.3555392
-Uniform Complexity for Text Generation,http://arxiv.org/abs/2204.05185v3
-"Identifying and Measuring Token-Level Sentiment Bias in Pre-trained Language Models with Prompts",http://arxiv.org/abs/2204.07289v1
-"Meet Your Favorite Character: Open-domain Chatbot Mimicking Fictional Characters with only a Few Utterances",http://arxiv.org/abs/2204.10825v1
-HPT: Hierarchy-aware Prompt Tuning for Hierarchical Text Classification,http://arxiv.org/abs/2204.13413v2
-"Measurement of the prompt $D^0$ nuclear modification factor in $p$Pb collisions at $\sqrt{s_\mathrm{NN}} = 8.16$ TeV",http://dx.doi.org/10.1103/PhysRevLett.131.102301
-"Prompting to Distill: Boosting Data-Free Knowledge Distillation via Reinforced Prompt",http://arxiv.org/abs/2205.07523v1
-Challenges in Measuring Bias via Open-Ended Language Generation,http://arxiv.org/abs/2205.11601v1
-"ORCA: Interpreting Prompted Language Models via Locating Supporting Data Evidence in the Ocean of Pretraining Data",http://arxiv.org/abs/2205.12600v1
-kNN-Prompt: Nearest Neighbor Zero-Shot Inference,http://arxiv.org/abs/2205.13792v2
-"First measurement of $\rm Ω_c^0$ production in pp collisions at $\sqrt{s}=13$ TeV",http://dx.doi.org/10.1016/j.physletb.2022.137625
-"Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models",http://arxiv.org/abs/2205.15223v3
-"BenchCLAMP: A Benchmark for Evaluating Language Models on Semantic Parsing",http://arxiv.org/abs/2206.10668v1
-Probing via Prompting,http://arxiv.org/abs/2207.01736v1
-Scene-Aware Prompt for Multi-modal Dialogue Understanding and Generation,http://arxiv.org/abs/2207.01823v1
-"Enhancing Collaborative Filtering Recommender with Prompt-Based Sentiment Analysis",http://arxiv.org/abs/2207.12883v1
-"Debiased Large Language Models Still Associate Muslims with Uniquely Violent Acts",http://arxiv.org/abs/2208.04417v2
-"HELP ME THINK: A Simple Prompting Strategy for Non-experts to Create Customized Content with Models",http://arxiv.org/abs/2208.08232v2
-"Learning Dynamic Contextualised Word Embeddings via Template-based Temporal Adaptation",http://arxiv.org/abs/2208.10734v3
-Unified Knowledge Prompt Pre-training for Customer Service Dialogues,http://arxiv.org/abs/2208.14652v1
-"Improving Language Model Prompting in Support of Semi-autonomous Task Learning",http://arxiv.org/abs/2209.07636v2
-"Extracting Biomedical Factual Knowledge Using Pretrained Language Model and Electronic Health Record Context",http://arxiv.org/abs/2209.07859v2
-"ProgPrompt: Generating Situated Robot Task Plans using Large Language Models",http://arxiv.org/abs/2209.11302v1
-"Prompt Conditioned VAE: Enhancing Generative Replay for Lifelong Learning in Task-Oriented Dialogue",http://arxiv.org/abs/2210.07783v2
-"Bi-Link: Bridging Inductive Link Predictions from Text via Contrastive Learning of Transformers and Prompts",http://arxiv.org/abs/2210.14463v1
-"LVP-M3: Language-aware Visual Prompt for Multilingual Multimodal Machine Translation",http://arxiv.org/abs/2210.15461v2
-Query Refinement Prompts for Closed-Book Long-Form Question Answering,http://arxiv.org/abs/2210.17525v1
-Prompt-Based Metric Learning for Few-Shot NER,http://arxiv.org/abs/2211.04337v1
-"Describing emotions with acoustic property prompts for speech emotion recognition",http://arxiv.org/abs/2211.07737v1
-Prompting PaLM for Translation: Assessing Strategies and Performance,http://arxiv.org/abs/2211.09102v3
-Ignore Previous Prompt: Attack Techniques For Language Models,http://arxiv.org/abs/2211.09527v1
-Knowledge Prompting for Few-shot Action Recognition,http://arxiv.org/abs/2211.12030v1
-"Quarkonia production and elliptic flow in small systems measured with ALICE",http://arxiv.org/abs/2211.13504v1
-Complementary Explanations for Effective In-Context Learning,http://arxiv.org/abs/2211.13892v2
-PUnifiedNER: A Prompting-based Unified NER System for Diverse Datasets,http://arxiv.org/abs/2211.14838v2
-"Measurements of azimuthal anisotropy of nonprompt D$^0$ mesons in PbPb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV",http://arxiv.org/abs/2212.01636v1
-Task Bias in Vision-Language Models,http://arxiv.org/abs/2212.04412v1
-Diffusion Guided Domain Adaptation of Image Generators,http://arxiv.org/abs/2212.04473v2
-Doubly Right Object Recognition: A Why Prompt for Visual Rationales,http://arxiv.org/abs/2212.06202v2
-"Structured Prompting: Scaling In-Context Learning to 1,000 Examples",http://arxiv.org/abs/2212.06713v1
-Pre-trained Language Models Can be Fully Zero-Shot Learners,http://arxiv.org/abs/2212.06950v2
-Controlling Styles in Neural Machine Translation with Activation Prompt,http://arxiv.org/abs/2212.08909v2
-"Consideration of memory of spin and parity in the fissioning compound nucleus by applying the Hauser-Feshbach fission fragment decay model to photonuclear reactions",http://dx.doi.org/10.1103/PhysRevC.107.044608
-Cross-Lingual Retrieval Augmented Prompt for Low-Resource Languages,http://arxiv.org/abs/2212.09651v4
-Data Curation Alone Can Stabilize In-context Learning,http://arxiv.org/abs/2212.10378v2
-"The political ideology of conversational AI: Converging evidence on ChatGPT's pro-environmental, left-libertarian orientation",http://arxiv.org/abs/2301.01768v1
-"What is in a Text-to-Image Prompt: The Potential of Stable Diffusion in Visual Arts Education",http://arxiv.org/abs/2301.01902v1
-"Interactive-Chain-Prompting: Ambiguity Resolution for Crosslingual Conditional Generation with Interaction",http://arxiv.org/abs/2301.10309v1
-Causal Reasoning of Entities and Events in Procedural Texts,http://arxiv.org/abs/2301.10896v3
-"Understanding the Effectiveness of Very Large Language Models on Dialog Evaluation",http://arxiv.org/abs/2301.12004v1
-"Stabilized In-Context Learning with Pre-trained Language Models for Few Shot Dialogue State Tracking",http://arxiv.org/abs/2302.05932v1
-Can GPT-3 Perform Statutory Reasoning?,http://arxiv.org/abs/2302.06100v2
-"Measurement of the non-prompt D-meson fraction as a function of multiplicity in proton$-$proton collisions at $\sqrt{s} = 13$ TeV",http://arxiv.org/abs/2302.07783v1
-"Large Language Models Are State-of-the-Art Evaluators of Translation Quality",http://arxiv.org/abs/2302.14520v2
-"CoTEVer: Chain of Thought Prompting Annotation Toolkit for Explanation Verification",http://arxiv.org/abs/2303.03628v1
-"Identification of Systematic Errors of Image Classifiers on Rare Subgroups",http://arxiv.org/abs/2303.05072v2
-"Prompt-Based Learning for Thread Structure Prediction in Cybersecurity Forums",http://arxiv.org/abs/2303.05400v1
-"Large Language Models Know Your Contextual Search Intent: A Prompting Framework for Conversational Search",http://arxiv.org/abs/2303.06573v2
-"Steering Prototypes with Prompt-tuning for Rehearsal-free Continual Learning",http://arxiv.org/abs/2303.09447v2
-"COVID-19 event extraction from Twitter via extractive question answering with continuous prompts",http://arxiv.org/abs/2303.10659v2
-Neural Implicit Vision-Language Feature Fields,http://arxiv.org/abs/2303.10962v1
-MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action,http://arxiv.org/abs/2303.11381v1
-Multi-modal Prompting for Low-Shot Temporal Action Localization,http://arxiv.org/abs/2303.11732v1
-A Word is Worth a Thousand Pictures: Prompts as AI Design Material,http://arxiv.org/abs/2303.12647v1
-Remind of the Past: Incremental Learning with Analogical Prompts,http://arxiv.org/abs/2303.13898v1
-"Errors are Useful Prompts: Instruction Guided Task Programming with Verifier-Assisted Iterative Prompting",http://arxiv.org/abs/2303.14100v1
-"Analyzing the Performance of GPT-3.5 and GPT-4 in Grammatical Error Correction",http://arxiv.org/abs/2303.14342v2
-"Troika: Multi-Path Cross-Modal Traction for Compositional Zero-Shot Learning",http://arxiv.org/abs/2303.15230v1
-"Linguistically Informed ChatGPT Prompts to Enhance Japanese-Chinese Machine Translation: A Case Study on Attributive Clauses",http://arxiv.org/abs/2303.15587v1
-Zero-shot Clinical Entity Recognition using ChatGPT,http://arxiv.org/abs/2303.16416v2
-"Soft Gamma-Ray Spectral and Time evolution of the GRB 221009A: prompt and afterglow emission with INTEGRAL/IBIS-PICsIT",http://dx.doi.org/10.1051/0004-6361/202346373
-"Assessing Cross-Cultural Alignment between ChatGPT and Human Societies: An Empirical Study",http://arxiv.org/abs/2303.17466v2
-"viz2viz: Prompt-driven stylized visualization generation using a diffusion model",http://arxiv.org/abs/2304.01919v1
-"Sentence-Level Relation Extraction via Contrastive Learning with Descriptive Relation Prompts",http://arxiv.org/abs/2304.04935v1
-"SAM.MD: Zero-shot medical image segmentation capabilities of the Segment Anything Model",http://arxiv.org/abs/2304.05396v1
-Efficient Multimodal Fusion via Interactive Prompting,http://arxiv.org/abs/2304.06306v2
-MixPro: Simple yet Effective Data Augmentation for Prompt-based Learning,http://arxiv.org/abs/2304.09402v1
-"Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models",http://arxiv.org/abs/2304.11657v2
-Learnable Ophthalmology SAM,http://arxiv.org/abs/2304.13425v1
-"An automatically discovered chain-of-thought prompt generalizes to novel models and datasets",http://arxiv.org/abs/2305.02897v2
-"Augmenting Low-Resource Text Classification with Graph-Grounded Pre-training and Prompting",http://dx.doi.org/10.1145/3539618.3591641
-"Diffusion Explainer: Visual Explanation for Text-to-image Stable Diffusion",http://arxiv.org/abs/2305.03509v2
-"Prompt What You Need: Enhancing Segmentation in Rainy Scenes with Anchor-based Prompting",http://arxiv.org/abs/2305.03902v2
-"DiscoPrompt: Path Prediction Prompt Tuning for Implicit Discourse Relation Recognition",http://arxiv.org/abs/2305.03973v1
-Controllable Mixed-Initiative Dialogue Generation through Prompting,http://arxiv.org/abs/2305.04147v1
-PromptRank: Unsupervised Keyphrase Extraction Using Prompt,http://arxiv.org/abs/2305.04490v2
-GPT Agents in Game Theory Experiments,http://arxiv.org/abs/2305.05516v1
-"Not All Languages Are Created Equal in LLMs: Improving Multilingual Capability by Cross-Lingual-Thought Prompting",http://arxiv.org/abs/2305.07004v2
-Is ChatGPT a Good Causal Reasoner? A Comprehensive Evaluation,http://arxiv.org/abs/2305.07375v4
-"Pre-trained Language Model with Prompts for Temporal Knowledge Graph Completion",http://arxiv.org/abs/2305.07912v1
-"Make Prompt-based Black-Box Tuning Colorful: Boosting Model Generalization from Three Orthogonal Perspectives",http://arxiv.org/abs/2305.08088v1
-"Reprompting: Automated Chain-of-Thought Prompt Inference Through Gibbs Sampling",http://arxiv.org/abs/2305.09993v1
-Segment Any Anomaly without Training via Hybrid Prompt Regularization,http://arxiv.org/abs/2305.10724v1
-"Transformer-based Variable-rate Image Compression with Region-of-interest Control",http://arxiv.org/abs/2305.10807v3
-TextDiffuser: Diffusion Models as Text Painters,http://arxiv.org/abs/2305.10855v4
-"Collaborative Generative AI: Integrating GPT-k for Efficient Editing in Text-to-Image Generation",http://arxiv.org/abs/2305.11317v1
-"Writing your own book: A method for going from closed to open book QA to improve robustness and performance of smaller LLMs",http://arxiv.org/abs/2305.11334v1
-"SelfzCoT: a Self-Prompt Zero-shot CoT from Semantic-level to Code-level for a Better Utilization of LLMs",http://arxiv.org/abs/2305.11461v3
-"Controlling the Extraction of Memorized Data from Large Language Models via Prompt-Tuning",http://arxiv.org/abs/2305.11759v1
-"PromptNER: A Prompting Method for Few-shot Named Entity Recognition via k Nearest Neighbor Search",http://arxiv.org/abs/2305.12217v1
-MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction,http://arxiv.org/abs/2305.12627v1
-"SPARSEFIT: Few-shot Prompting with Sparse Fine-tuning for Jointly Generating Predictions and Natural Language Explanations",http://arxiv.org/abs/2305.13235v2
-"Compositional Text-to-Image Synthesis with Attention Map Control of Diffusion Models",http://arxiv.org/abs/2305.13921v1
-"PURR: Efficiently Editing Language Model Hallucinations by Denoising Language Model Corruptions",http://arxiv.org/abs/2305.14908v1
-Frugal Prompting for Dialog Models,http://arxiv.org/abs/2305.14919v1
-"ToMChallenges: A Principle-Guided Dataset and Diverse Evaluation Tasks for Exploring Theory of Mind",http://arxiv.org/abs/2305.15068v2
-"RAMP: Retrieval and Attribute-Marking Enhanced Prompting for Attribute-Controlled Translation",http://dx.doi.org/10.18653/v1/2023.acl-short.126
-Conditional Score Guidance for Text-Driven Image-to-Image Translation,http://arxiv.org/abs/2305.18007v1
-"Exploring Effectiveness of GPT-3 in Grammatical Error Correction: A Study on Performance and Controllability in Prompt-Based Methods",http://arxiv.org/abs/2305.18156v1
-"Prompt-Based Tuning of Transformer Models for Multi-Center Medical Image Segmentation of Head and Neck Cancer",http://arxiv.org/abs/2305.18948v2
-"The Magic of IF: Investigating Causal Reasoning Abilities in Large Language Models of Code",http://arxiv.org/abs/2305.19213v1
-"Grammar Prompting for Domain-Specific Language Generation with Large Language Models",http://arxiv.org/abs/2305.19234v2
-"LMCap: Few-shot Multilingual Image Captioning by Retrieval Augmented Language Model Prompting",http://arxiv.org/abs/2305.19821v1
-Towards In-context Scene Understanding,http://arxiv.org/abs/2306.01667v1
-"InstructZero: Efficient Instruction Optimization for Black-Box Large Language Models",http://arxiv.org/abs/2306.03082v2
-"Knowledge-Augmented Language Model Prompting for Zero-Shot Knowledge Graph Question Answering",http://arxiv.org/abs/2306.04136v1
-PromptAttack: Probing Dialogue State Trackers with Adversarial Prompts,http://arxiv.org/abs/2306.04535v1
-"The ADAIO System at the BEA-2023 Shared Task on Generating AI Teacher Responses in Educational Dialogues",http://arxiv.org/abs/2306.05360v1
-Grounded Text-to-Image Synthesis with Attention Refocusing,http://arxiv.org/abs/2306.05427v1
-"Assisting Language Learners: Automated Trans-Lingual Definition Generation via Contrastive Prompt Learning",http://arxiv.org/abs/2306.06058v1
-"Mimicking the Thinking Process for Emotion Recognition in Conversation with Prompts and Paraphrasing",http://arxiv.org/abs/2306.06601v1
-"Prompt-based Extraction of Social Determinants of Health Using Few-shot Learning",http://arxiv.org/abs/2306.07170v1
-"MolCAP: Molecular Chemical reActivity pretraining and prompted-finetuning enhanced molecular representation learning",http://arxiv.org/abs/2306.09187v1
-"Text Promptable Surgical Instrument Segmentation with Vision-Language Models",http://arxiv.org/abs/2306.09244v1
-"Investigating Prompting Techniques for Zero- and Few-Shot Visual Question Answering",http://arxiv.org/abs/2306.09996v1
-"Can GPT-4 Support Analysis of Textual Data in Tasks Requiring Highly Specialized Domain Expertise?",http://dx.doi.org/10.1145/3587102.3588792
-"Chain-of-Thought Prompt Distillation for Multimodal Named Entity Recognition and Multimodal Relation Extraction",http://arxiv.org/abs/2306.14122v3
-Learning to Isolate Muons in Data,http://arxiv.org/abs/2306.15737v1
-"Prompting Large Language Models for Zero-Shot Domain Adaptation in Speech Recognition",http://arxiv.org/abs/2306.16007v1
-"RSPrompter: Learning to Prompt for Remote Sensing Instance Segmentation based on Visual Foundation Model",http://arxiv.org/abs/2306.16269v1
-"Topological Data Analysis Guided Segment Anything Model Prompt Optimization for Zero-Shot Segmentation in Biological Imaging",http://arxiv.org/abs/2306.17400v1
-Multimodal Prompt Retrieval for Generative Visual Question Answering,http://arxiv.org/abs/2306.17675v1
-"Dipping PLMs Sauce: Bridging Structure and Text for Effective Knowledge Graph Completion via Conditional Soft Prompting",http://arxiv.org/abs/2307.01709v1
-"Ethicist: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confidence Estimation",http://arxiv.org/abs/2307.04401v1
-What do LLMs need to Synthesize Correct Router Configurations?,http://arxiv.org/abs/2307.04945v1
-"RoPDA: Robust Prompt-based Data Augmentation for Low-Resource Named Entity Recognition",http://arxiv.org/abs/2307.07417v2
-GenAssist: Making Image Generation Accessible,http://arxiv.org/abs/2307.07589v1
-Generative Prompt Model for Weakly Supervised Object Localization,http://arxiv.org/abs/2307.09756v1
-"Can Instruction Fine-Tuned Language Models Identify Social Bias through Prompting?",http://arxiv.org/abs/2307.10472v1
-LAMP: Leveraging Language Prompts for Multi-person Pose Estimation,http://arxiv.org/abs/2307.11934v2
-"Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal Language Models",http://arxiv.org/abs/2307.14539v2
-"Distractor generation for multiple-choice questions with predictive prompting and large language models",http://arxiv.org/abs/2307.16338v1
-Towards General Visual-Linguistic Face Forgery Detection,http://arxiv.org/abs/2307.16545v1
-"DiffColor: Toward High Fidelity Text-Guided Image Colorization with Diffusion Models",http://arxiv.org/abs/2308.01655v1
-"Seeing in Flowing: Adapting CLIP for Action Recognition with Motion Prompts Learning",http://arxiv.org/abs/2308.04828v1
-"VAST: Vivify Your Talking Avatar via Zero-Shot Expressive Facial Style Transfer",http://arxiv.org/abs/2308.04830v2
-"The Devil is in the Errors: Leveraging Large Language Models for Fine-grained Machine Translation Evaluation",http://arxiv.org/abs/2308.07286v1
-"Voucher Abuse Detection with Prompt-based Fine-tuning on Graph Neural Networks",http://dx.doi.org/10.1145/3583780.3615505
-"Knowledge-injected Prompt Learning for Chinese Biomedical Entity Normalization",http://arxiv.org/abs/2308.12025v1
-"Prompt2Model: Generating Deployable Models from Natural Language Instructions",http://arxiv.org/abs/2308.12261v1
-Text Style Transfer Evaluation Using Large Language Models,http://arxiv.org/abs/2308.13577v2
-RGB-T Tracking via Multi-Modal Mutual Prompt Learning,http://arxiv.org/abs/2308.16386v1
-"Simple LLM Prompting is State-of-the-Art for Robust and Multilingual Dialogue Evaluation",http://arxiv.org/abs/2308.16797v2
-"Hide and Seek (HaS): A Lightweight Framework for Prompt Privacy Protection",http://arxiv.org/abs/2309.03057v1
-"Manifold-based Verbalizer Space Re-embedding for Tuning-free Prompt-based Classification",http://arxiv.org/abs/2309.04174v1
-"FuzzLLM: A Novel and Universal Fuzzing Framework for Proactively Discovering Jailbreak Vulnerabilities in Large Language Models",http://arxiv.org/abs/2309.05274v1
-"Memory Injections: Correcting Multi-Hop Reasoning Failures during Inference in Transformer-Based Language Models",http://arxiv.org/abs/2309.05605v2
-Dynamic Visual Prompt Tuning for Parameter Efficient Transfer Learning,http://arxiv.org/abs/2309.06123v1
-ChatGPT Hallucinates when Attributing Answers,http://arxiv.org/abs/2309.09401v1
-Prompt a Robot to Walk with Large Language Models,http://arxiv.org/abs/2309.09969v1
-"Exploring the Relationship between LLM Hallucinations and Prompt Linguistic Nuances: Readability, Formality, and Concreteness",http://arxiv.org/abs/2309.11064v1
-LPML: LLM-Prompting Markup Language for Mathematical Reasoning,http://arxiv.org/abs/2309.13078v2
-"Dataset Diffusion: Diffusion-based Synthetic Dataset Generation for Pixel-Level Semantic Segmentation",http://arxiv.org/abs/2309.14303v3
-Defending Against Alignment-Breaking Attacks via Robustly Aligned LLM,http://arxiv.org/abs/2309.14348v1
-"PP-MeT: a Real-world Personalized Prompt based Meeting Transcription System",http://arxiv.org/abs/2309.16247v1
-"Alphazero-like Tree-Search can Guide Large Language Model Decoding and Training",http://arxiv.org/abs/2309.17179v1
-"Modelling uncertainties and prompt b-jet identification in $t\bar{t}b\bar{b}$ production with dilepton signatures at the LHC",http://arxiv.org/abs/2309.17353v1
-Prompt-tuning latent diffusion models for inverse problems,http://arxiv.org/abs/2310.01110v1
-Graph Neural Architecture Search with GPT-4,http://arxiv.org/abs/2310.01436v1
-"FedBPT: Efficient Federated Black-box Prompt Tuning for Large Language Models",http://arxiv.org/abs/2310.01467v1
-Language Models as Knowledge Bases for Visual Word Sense Disambiguation,http://arxiv.org/abs/2310.01960v1
-"Ask Again, Then Fail: Large Language Models' Vacillations in Judgement",http://arxiv.org/abs/2310.02174v1
-"Multi-Prompt Fine-Tuning of Foundation Models for Enhanced Medical Image Segmentation",http://arxiv.org/abs/2310.02381v1
-"UniverSLU: Universal Spoken Language Understanding for Diverse Classification and Sequence Generation Tasks with a Single Network",http://arxiv.org/abs/2310.02973v1
-"IPDreamer: Appearance-Controllable 3D Object Generation with Image Prompts",http://arxiv.org/abs/2310.05375v1
-"Improved prompting and process for writing user personas with LLMs, using qualitative interviews: Capturing behaviour and personality traits of users",http://arxiv.org/abs/2310.06391v1
-Mitigating stereotypical biases in text to image generative systems,http://arxiv.org/abs/2310.06904v1
-"PHALM: Building a Knowledge Graph from Scratch by Prompting Humans and a Language Model",http://arxiv.org/abs/2310.07170v1
-SAM-OCTA: Prompting Segment-Anything for OCTA Image Segmentation,http://arxiv.org/abs/2310.07183v1
-"CP-KGC: Constrained-Prompt Knowledge Graph Completion with Large Language Models",http://arxiv.org/abs/2310.08279v1
-"Idea2Img: Iterative Self-Refinement with GPT-4V(ision) for Automatic Image Design and Generation",http://arxiv.org/abs/2310.08541v1
-"PromptRE: Weakly-Supervised Document-Level Relation Extraction via Prompting-Based Data Programming",http://arxiv.org/abs/2310.09265v1
-"A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models",http://arxiv.org/abs/2310.09497v1
-"ASSERT: Automated Safety Scenario Red Teaming for Evaluating the Robustness of Large Language Models",http://arxiv.org/abs/2310.09624v1
-Assessing the Reliability of Large Language Model Knowledge,http://arxiv.org/abs/2310.09820v1
-Black-box Targeted Adversarial Attack on Segment Anything (SAM),http://arxiv.org/abs/2310.10010v1
-"On the Effectiveness of Creating Conversational Agent Personalities Through Prompting",http://arxiv.org/abs/2310.11182v1
-"Revealing the Unwritten: Visual Investigation of Beam Search Trees to Address Language Model Prompting Challenges",http://arxiv.org/abs/2310.11252v1
-Probing LLMs for hate speech detection: strengths and vulnerabilities,http://arxiv.org/abs/2310.12860v1
-"Experimental Narratives: A Comparison of Human Crowdsourced Storytelling and AI Storytelling",http://arxiv.org/abs/2310.12902v1
-POSQA: Probe the World Models of LLMs with Size Comparisons,http://arxiv.org/abs/2310.13394v1
-"Mind the instructions: a holistic evaluation of consistency and interactions in prompt-based learning",http://arxiv.org/abs/2310.13486v1
-"Prompt-based Grouping Transformer for Nucleus Detection and Classification",http://arxiv.org/abs/2310.14176v1
-Large Language Models are biased to overestimate profoundness,http://arxiv.org/abs/2310.14422v1
-"CorefPrompt: Prompt-based Event Coreference Resolution by Measuring Event Type and Argument Compatibilities",http://arxiv.org/abs/2310.14512v2
-"Plan, Verify and Switch: Integrated Reasoning with Diverse X-of-Thoughts",http://arxiv.org/abs/2310.14628v1
-Dissecting In-Context Learning of Translations in GPTs,http://arxiv.org/abs/2310.15987v1
-Towards Digital Engineering -- The Advent of Digital Systems Engineering,http://dx.doi.org/10.1504/IJSSE.2020.10031364
-"A Principal-Agent Model of Systems Engineering Processes with Application to Satellite Design",http://arxiv.org/abs/1903.06979v1
-Variability and Evolution in Systems of Systems,http://dx.doi.org/10.4204/EPTCS.133.2
-Abstract models for heat engines,http://dx.doi.org/10.1007/s11467-020-1029-6
-Fractional Quantum Heat Engine,http://dx.doi.org/10.1038/s41598-021-97304-5
-A Survey of Reverse Engineering and Program Comprehension,http://arxiv.org/abs/cs/0503068v1
-AB Space Engine,http://arxiv.org/abs/0803.0089v1
-Adaptive Quantum Heat Engines,http://arxiv.org/abs/2104.00115v1
-High Energy Emission from the Prompt Gamma-Ray Burst,http://dx.doi.org/10.1086/346221
-"Prompt and afterglow early X-ray phases in the comoving frame. Evidence for Universal properties?",http://arxiv.org/abs/astro-ph/0506453v4
-"Discovery of an Afterglow Extension of the Prompt Phase of Two Gamma Ray Bursts Observed by Swift",http://dx.doi.org/10.1086/499432
-"Discovery of a tight correlation among the prompt emission properties of long Gamma Ray Bursts",http://dx.doi.org/10.1111/j.1365-2966.2006.10445.x
-"A comprehensive analysis of Swift/XRT data: I. Apparent spectral evolution of GRB X-ray tails",http://dx.doi.org/10.1086/519548
-"The Amati relation in the ""fireshell"" model",http://dx.doi.org/10.1051/0004-6361:200810338
-"A multiwavelength study of Swift GRB 060111B constraining the origin of its prompt optical emission",http://dx.doi.org/10.1051/0004-6361/200911981
-"Fermi Observations of GRB 090902B: A Distinct Spectral Component in the Prompt and Delayed Emission",http://dx.doi.org/10.1088/0004-637X/706/1/L138
-High energy emission components in the short GRB 090510,http://dx.doi.org/10.1088/0004-637X/720/2/1008
-A Reconnection Switch to Trigger Gamma-Ray Burst Jet Dissipation,http://dx.doi.org/10.1111/j.1365-2966.2011.19721.x
-"Toward a standard Gamma Ray Burst: tight correlations between the prompt and the afterglow plateau phase emission",http://dx.doi.org/10.1111/j.1365-2966.2011.19433.x
-"The ultra-long GRB 111209A - II. Prompt to afterglow and afterglow properties",http://dx.doi.org/10.1088/0004-637X/779/1/66
-"Connecting Prompt and Afterglow GRB emission I. Investigating the impact of optical selection effects in the Epi - Eiso plane",http://arxiv.org/abs/1503.02760v1
-"Investigating the impact of optical selection effects on observed rest frame prompt GRB properties",http://dx.doi.org/10.3847/0004-637X/831/1/28
-"AstroSat-CZTI detection of variable prompt emission polarization in GRB 171010A",http://dx.doi.org/10.3847/1538-4357/ab0826
-"Are fast radio bursts the most likely electromagnetic counterpart of neutron star mergers resulting in prompt collapse?",http://dx.doi.org/10.1103/PhysRevD.100.043001
-"Detection of Low-energy Breaks in Gamma-Ray Burst Prompt Emission Spectra",http://dx.doi.org/10.3847/1538-4357/aa831e
-"Correlations Between Fission Fragment and Neutron Anisotropies in Neutron-Induced Fission",http://dx.doi.org/10.1103/PhysRevC.102.024621
-"Time evolution of the spectral break in the high-energy extra component of GRB 090926A",http://dx.doi.org/10.1051/0004-6361/201630353
-"Machine learning technique to improve anti-neutrino detection efficiency for the ISMRAN experiment",http://dx.doi.org/10.1088/1748-0221/15/04/P04021
-On the Feynman-alpha Method for Reflected Fissile Assemblies,http://arxiv.org/abs/2010.11768v4
-"A backscattering dominated prompt emission model for the prompt phase of Gamma ray bursts",http://dx.doi.org/10.3847/1538-4357/abd242
-"Image reconstruction method for dual-isotope positron emission tomography",http://dx.doi.org/10.1088/1748-0221/16/01/P01035
-"Prompt and non-prompt J/$ψ$ production cross sections at midrapidity in proton-proton collisions at $\sqrt{s}$ = 5.02 and 13 TeV",http://dx.doi.org/10.1007/JHEP03(2022)190
-Multitask Prompted Training Enables Zero-Shot Task Generalization,http://arxiv.org/abs/2110.08207v3
-"Match-Prompt: Improving Multi-task Generalization Ability for Neural Text Matching via Prompt Learning",http://dx.doi.org/10.1145/3511808.3557388
-"Automated Identification of Eviction Status from Electronic Health Record Notes",http://dx.doi.org/10.1093/jamia/ocad081
-"The effect of stellar encounters on the dark matter annihilation signal from prompt cusps",http://dx.doi.org/10.1093/mnras/stad1268
-"Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection",http://arxiv.org/abs/2307.16888v2
-"Exploring Transfer Learning in Medical Image Segmentation using Vision-Language Models",http://arxiv.org/abs/2308.07706v2
-Federated Class-Incremental Learning with Prompting,http://arxiv.org/abs/2310.08948v1
-ULTRA-DP: Unifying Graph Pre-training with Multi-task Graph Dual Prompt,http://arxiv.org/abs/2310.14845v1
-An Introduction to Software Engineering and Fault Tolerance,http://arxiv.org/abs/1011.1551v1
-Benchmarking as Empirical Standard in Software Engineering Research,http://dx.doi.org/10.1145/3463274.3463361
-"Teaching Model-based Requirements Engineering to Industry Professionals: An Experience Report",http://arxiv.org/abs/2103.04433v1
-Towards Quantum Software Requirements Engineering,http://arxiv.org/abs/2309.13358v1
-An Observational Investigation of Reverse Engineers' Processes,http://arxiv.org/abs/1912.00317v1
-"Investigating student understanding of heat engine: a case study of Stirling engine",http://arxiv.org/abs/2008.06405v1
-Information Arbitrage in Bipartite Heat Engines,http://arxiv.org/abs/2308.06325v1
-"Harnessing energy extracted from heat engines to charge quantum batteries",http://arxiv.org/abs/2309.15634v1
-Can GPT-4 Replicate Empirical Software Engineering Research?,http://arxiv.org/abs/2310.01727v1
-Data Engineering for the Analysis of Semiconductor Manufacturing Data,http://arxiv.org/abs/cs/0212040v1
-"A Theoretical and Empirical Evaluation of Software Component Search Engines, Semantic Search Engines and Google Search Engine in the Context of COTS-Based Development",http://arxiv.org/abs/1204.2079v1
-"Some investigations in design of low cost variable compression ratio two stroke petrol engine",http://arxiv.org/abs/1206.4251v1
-The Complexification of Engineering,http://dx.doi.org/10.1002/cplx.20395
-The retrieval effectiveness of search engines on navigational queries,http://arxiv.org/abs/1511.05812v1
-Push vs. Pull-Based Loop Fusion in Query Engines,http://arxiv.org/abs/1610.09166v1
-"The Essence Theory of Software Engineering - Large-Scale Classroom Experiences from 450+ Software Engineering BSc Students",http://arxiv.org/abs/1809.08827v1
-"Experience in engineering of scientific software: The case of an optimization software for oil pipelines",http://arxiv.org/abs/2003.00463v1
-Semantic Networks for Engineering Design: A Survey,http://dx.doi.org/10.1017/pds.2021.523
-"To get good student ratings should you only teach programming courses? Investigation and implications of student evaluations of teaching in a software engineering context",http://dx.doi.org/10.1109/ICSE-SEET52601.2021.00035
-Software-Intensive Product Engineering in Start-Ups: A Taxonomy,http://dx.doi.org/10.1109/MS.2018.2801548
-Quantum mechanical Carnot engine,http://dx.doi.org/10.1088/0305-4470/33/24/302
-Quantum heat engine with continuum working medium,http://dx.doi.org/10.1088/1751-8113/40/30/004
-Intelligent Semantic Web Search Engines: A Brief Survey,http://arxiv.org/abs/1102.0831v1
-Design and Implementation of a Simple Web Search Engine,http://arxiv.org/abs/1112.2807v2
-Credibility in Web Search Engines,http://arxiv.org/abs/1208.1011v1
-On Cloud-Based Engineering of Dependable Systems,http://arxiv.org/abs/1404.7509v1
-On Low Overlap Among Search Results of Academic Search Engines,http://arxiv.org/abs/1701.02617v1
-Holographic heat engine with momentum relaxation,http://dx.doi.org/10.1007/s11433-017-9185-9
-"Software Engineering Timeline: major areas of interest and multidisciplinary trends",http://arxiv.org/abs/2002.10163v1
-Teaching Software Engineering through Robotics,http://arxiv.org/abs/1406.4458v1
-Agile Software Engineering and Systems Engineering at SKA Scale,http://arxiv.org/abs/1712.00061v2
-RPSE: Reification as Paradigm of Software Engineering,http://arxiv.org/abs/1810.01904v1
-The Fluidyne engine,http://dx.doi.org/10.1119/1.5078518
-Turning Software Engineers into AI Engineers,http://arxiv.org/abs/2011.01590v2
-Explainable AI for Software Engineering,http://arxiv.org/abs/2012.01614v1
-Numerical computing in engineering mathematics,http://dx.doi.org/10.1109/ASET53988.2022.9734960
-One-particle engine with a porous piston,http://dx.doi.org/10.1038/s41598-022-18057-3
-"Exploring a multi_stage feedback teaching mode for graduate students of software engineering discipline based on project_driven competition",http://arxiv.org/abs/2212.09394v1
-"Industrial Engineering with Large Language Models: A case study of ChatGPT's performance on Oil & Gas problems",http://arxiv.org/abs/2304.14354v1
-"Advances in Engine Efficiency: Nanomaterials, Surface Engineering, and Quantum-based Propulsion",http://arxiv.org/abs/2307.02490v1
-Knowledge Engineering using Large Language Models,http://arxiv.org/abs/2310.00637v1
-The Central Engines of Low-Luminosity AGNs,http://arxiv.org/abs/astro-ph/0303059v1
-Free Energy and Internal Combustion Engine Cycles,http://arxiv.org/abs/1205.0177v1
-Efficiency bounds for nonequilibrium heat engines,http://dx.doi.org/10.1016/j.aop.2013.01.017
-EmptyHeaded: A Relational Engine for Graph Processing,http://arxiv.org/abs/1503.02368v7
-"Correlation-powered Information Engines and the Thermodynamics of Self-Correction",http://dx.doi.org/10.1103/PhysRevE.95.012152
-"Proceedings of the 5th International Workshop on Software Engineering Methods in Spreadsheets (SEMS'18)",http://arxiv.org/abs/1808.09174v1
-Chersiphron & Son Engineers,http://arxiv.org/abs/1110.5849v1
-"Towards a Theory of Systems Engineering Processes: A Principal-Agent Model of a One-Shot, Shallow Process",http://dx.doi.org/10.1109/JSYST.2020.2964668
-"Discussions of gas power cycle performance analysis method in the course of Engineering Thermodynamics",http://arxiv.org/abs/1909.03446v2
-Social Engineering Resistant 2FA,http://arxiv.org/abs/2001.06075v1
-"Social engineering: Concepts, Techniques and Security Countermeasures",http://arxiv.org/abs/2107.14082v1
-"Digital Engineering Transformation with Trustworthy AI towards Industry 4.0: Emerging Paradigm Shifts",http://arxiv.org/abs/2301.00951v1
-"Utilizing Online and Open-Source Machine Learning Toolkits to Leverage the Future of Sustainable Engineering",http://arxiv.org/abs/2304.11175v1
-"Trustworthy and Synergistic Artificial Intelligence for Software Engineering: Vision and Roadmaps",http://arxiv.org/abs/2309.04142v2
-InAs three quantum dots as working substance for quantum heat engine,http://arxiv.org/abs/2309.10958v1
-"Quantum Lubrication: Suppression of Friction in a First Principle Four Stroke Heat Engine",http://dx.doi.org/10.1103/PhysRevE.73.025107
-The smallest possible heat engines,http://arxiv.org/abs/1010.6029v1
-"Deconcentration of Attention: Addressing the Complexity of Software Engineering",http://arxiv.org/abs/1201.5735v1
-Simple Search Engine Model: Adaptive Properties,http://arxiv.org/abs/1212.3906v1
-"High Quality Requirement Engineering and Applying Priority Based Tools for QoS Standardization in Web Service Architecture",http://arxiv.org/abs/1303.5950v1
-"Hidden symmetries and nonlinear constitutive relations for tight-coupling heat engines",http://dx.doi.org/10.1088/1367-2630/17/4/045013
-Human Factors in Software Reliability Engineering,http://arxiv.org/abs/1503.03584v1
-Reverse Engineering of RFID devices,http://arxiv.org/abs/1507.02196v1
-"The G-ACM Tool: using the Drools Rule Engine for Access Control Management",http://arxiv.org/abs/1611.08547v1
-An Empirical Analysis of Feature Engineering for Predictive Modeling,http://dx.doi.org/10.1109/SECON.2016.7506650
-Quantum Analog of Carnot Engine,http://arxiv.org/abs/1710.10435v1
-Essencery - A Tool for Essentializing Software Engineering Practices,http://arxiv.org/abs/1808.02723v1
-"A holistic look at requirements engineering practices in the gaming industry",http://arxiv.org/abs/1811.03482v1
-Holographic Heat Engines as Quantum Heat Engines,http://dx.doi.org/10.1088/1361-6382/ab5ba9
-Teaching Inclusive Engineering Design at a Small Liberal Arts College,http://arxiv.org/abs/2105.08201v1
-A functionality taxonomy for document search engines,http://arxiv.org/abs/2105.12989v1
-"Software Engineers' Information Seeking Behavior in Change Impact Analysis - An Interview Study",http://arxiv.org/abs/1703.01897v1
-"Applying Agile Requirements Engineering Approach for Re-engineering & Changes in existing Brownfield Adaptive Systems",http://arxiv.org/abs/1410.6902v1
-"On Gender, Ethnicity, and Culture in Empirical Software Engineering Research",http://dx.doi.org/10.1145/3195836.3195837
-"Experimental characterization of autonomous heat engine based on minimal dynamical-system model",http://dx.doi.org/10.1103/PhysRevResearch.2.033146
-Declarative Demand-Driven Reverse Engineering,http://arxiv.org/abs/2101.04718v1
-"On a Factorial Knowledge Architecture for Data Science-powered Software Engineering",http://arxiv.org/abs/2103.01387v1
-Quantitative Assessment of Solution Innovation in Engineering Education,http://arxiv.org/abs/2104.04065v1
-Quantum Heat Engines with Carnot Efficiency at Maximum Power,http://dx.doi.org/10.1103/PhysRevResearch.4.013157
-"On the validity of pre-trained transformers for natural language processing in the software engineering domain",http://arxiv.org/abs/2109.04738v2
-"Software Engineering Meets Systems Engineering: Conceptual Modeling Applied to Engineering Operations",http://dx.doi.org/10.22937/IJCSNS.2021.21.10.47
-Biased or Not?: The Story of Two Search Engines,http://arxiv.org/abs/2112.12802v1
-Comparative Study of Cloud and Non-Cloud Gaming Platform: Apercu,http://arxiv.org/abs/2201.08210v1
-"Thermodynamics of one and two-qubit nonequilibrium heat engines running between squeezed thermal reservoirs",http://dx.doi.org/10.1016/j.physa.2023.128832
-"Structure Preserving Transformations for Practical Model-based Systems Engineering",http://arxiv.org/abs/2209.07935v1
-"Optimization analysis of an endoreversible quantum heat engine with efficient power function",http://arxiv.org/abs/2301.03927v1
-"What Practitioners Really Think About Continuous Software Engineering: A Taxonomy of Challenges",http://arxiv.org/abs/2303.17271v1
-"Analysis of Software Engineering Practices in General Software and Machine Learning Startups",http://arxiv.org/abs/2304.01523v1
-A Plasma Instability Theory of Gamma-Ray Burst Emission,http://dx.doi.org/10.1063/1.1361580
-LOTIS Upper Limits and the Prompt OT from GRB 990123,http://dx.doi.org/10.1063/1.1361544
-Evolution of O Abundance Relative to Fe,http://dx.doi.org/10.1086/319084
-"The prompt emission of GRB990712 with BeppoSAX: evidence of a transient X-ray emission feature",http://dx.doi.org/10.1086/319480
-Towards an Understanding of GRB Prompt Emission,http://dx.doi.org/10.1063/1.1579351
-Early GRB Afterglows Revisited,http://arxiv.org/abs/astro-ph/0206423v1
-Stellar Tidal Processes Near Massive Black Holes,http://arxiv.org/abs/astro-ph/0303571v1
-Orbital inspiral into a massive black hole in a galactic center,http://dx.doi.org/10.1086/376672
-"Effect of Differential Rotation on the Maximum Mass of Neutron Stars: Realistic Nuclear Equations of State",http://dx.doi.org/10.1086/421897
-A Study of Prompt Emission Mechanisms in Gamma-Ray Bursts,http://dx.doi.org/10.1086/422867
-"The prompt X-ray emission of GRB011211: possible evidence of a transient absorption feature",http://dx.doi.org/10.1086/425066
-"Parameters of the prompt gamma-ray burst emission estimated with the opening angle of jets",http://arxiv.org/abs/astro-ph/0504070v3
-"Axisymmetric collapse simulations of rotating massive stellar cores in full general relativity: Numerical study for prompt black hole formation",http://dx.doi.org/10.1103/PhysRevD.71.084013
-"Evidence of polarisation in the prompt gamma-ray emission from GRB 930131 and GRB 960924",http://dx.doi.org/10.1051/0004-6361:20052693
-Is FLARE for Solar flare?,http://arxiv.org/abs/astro-ph/0512181v1
-"Synchrotron emission in small scale magnetic field as possible explanation for prompt emission spectra of gamma-ray bursts",http://dx.doi.org/10.1086/508681
-"GRB 060714: No Clear Dividing Line Between Prompt Emission and X-ray Flares",http://dx.doi.org/10.1086/519019
-High energy neutrino early afterglows from gamma-ray bursts revisited,http://dx.doi.org/10.1103/PhysRevD.76.123001
-PROMPT Observations of the Early-Time Optical Afterglow of GRB 060607A,http://dx.doi.org/10.1088/0004-637X/693/2/1417
-Sudakov resummation in QCD,http://arxiv.org/abs/0709.2826v2
-Prompt dipole radiation in fusion reactions,http://dx.doi.org/10.1016/j.physletb.2008.05.007
-Prompt GRB emission from gradual energy dissipation,http://dx.doi.org/10.1051/0004-6361:20079085
-"'Jet breaks' and 'missing breaks' in the X-Ray afterglow of Gamma Ray Bursts",http://dx.doi.org/10.1086/587869
-Direct Photons in Nuclear Collisions at FAIR Energies,http://dx.doi.org/10.1134/S1063778809030168
-"Associated production of prompt photons and heavy quarks in off-shell gluon-gluon fusion",http://dx.doi.org/10.1140/epjc/s10052-008-0654-y
-"Implications of Two Type Ia Supernova Populations for Cosmological Measurements",http://dx.doi.org/10.1086/592019
-"Feasibility study of the eta-prime -> pi+pi-pi0 decay using WASA-at-COSY apparatus",http://arxiv.org/abs/0807.0576v1
-"Early optical observations of GRBs by the TAROT telescopes: period 2001-2008",http://dx.doi.org/10.1088/0004-6256/137/5/4100
-The Supercritical Pile GRB Model: The Prompt to Afterglow Evolution,http://dx.doi.org/10.1088/0004-637X/694/1/L54
-"Search for muon neutrinos from Gamma-Ray Bursts with the IceCube neutrino telescope",http://dx.doi.org/10.1088/0004-637X/710/1/346
-"Prompt X-ray and Optical Excess Emission due to Hadronic Cascades in Gamma-Ray Bursts",http://dx.doi.org/10.1088/2041-8205/725/2/L121
-A detailed spectral study of GRB 041219A and its host galaxy,http://dx.doi.org/10.1111/j.1365-2966.2011.18290.x
-"Implications of Understanding Short Gamma-Ray Bursts Detected by {\it Swift}",http://dx.doi.org/10.1088/0004-673X/738/1/19
-Quarkonium production in pp collisions at 7 TeV with the CMS experiment,http://arxiv.org/abs/1108.4845v2
-"Measurement of prompt J/psi and beauty hadron production cross sections at mid-rapidity in pp collisions at sqrt(s) = 7 TeV",http://dx.doi.org/10.1007/JHEP11(2012)065
-"Explaining the type Ia supernova PTF 11kx with a violent-prompt merger scenario",http://dx.doi.org/10.1093/mnras/stt271
-"Hadronic Models for LAT Prompt Emission Observed in Fermi Gamma-Ray Bursts",http://dx.doi.org/10.1093/mnras/sts581
-"Multiwavelength observations of GRB 110731A: GeV emission from onset to afterglow",http://dx.doi.org/10.1088/0004-637X/763/2/71
-"GRB 121027A: long-lasting, energetic X-ray flares and clues to radiation mechanism and progenitor star",http://arxiv.org/abs/1302.4876v1
-"Open-charm production as a function of charged particle multiplicity in pp collisions at \sqrt{s}=7 TeV with ALICE",http://dx.doi.org/10.1088/1742-6596/509/1/012081
-Comptonization signatures in the prompt emission of Gamma Ray Bursts,http://dx.doi.org/10.1088/0004-637X/779/2/175
-GRB 140206A: the most distant polarized Gamma-Ray Burst,http://dx.doi.org/10.1093/mnras/stu1634
-"Shedding light on the prompt high efficiency paradox - self consistent modeling of GRB afterglows",http://arxiv.org/abs/1503.03368v1
-Constraints on the Bulk Lorentz Factors of GRB X-Ray Flares,http://dx.doi.org/10.1088/0004-637X/807/1/92
-"Di-photon ""Ridge"" in p+p and p+A collisions at RHIC and the LHC",http://dx.doi.org/10.1103/PhysRevD.92.074045
-Monte Carlo Simulations of the Photospheric Process,http://dx.doi.org/10.1093/mnras/stv2709
-Estimating Long GRB Jet Opening Angles and Rest-Frame Energetics,http://dx.doi.org/10.3847/0004-637X/818/1/18
-Charmonium production in Pb-Pb collisions with ALICE at the LHC,http://dx.doi.org/10.1016/j.nuclphysa.2016.01.033
-Polarization of prompt and afterglow emission of Gamma-Ray Bursts,http://arxiv.org/abs/1605.03588v2
-"Prompt atmospheric neutrino fluxes: perturbative QCD models and nuclear effects",http://dx.doi.org/10.1007/JHEP11(2016)167
-Late Time Emission of Prompt Fission Gamma Rays,http://dx.doi.org/10.1103/PhysRevC.94.064613
-IC at IC: IceCube can constrain the intrinsic charm of the proton,http://dx.doi.org/10.1103/PhysRevD.96.123002
-"Strategies for Finding Prompt Radio Counterparts to Gravitational Wave Transients with the Murchison Widefield Array",http://dx.doi.org/10.1017/pasa.2016.43
-"Relative modification of prompt psi(2S) and J/psi yields from pp to PbPb collisions at sqrt(s[NN]) = 5.02 TeV",http://dx.doi.org/10.1103/PhysRevLett.118.162301
-"Mapping the dominant regions of the phase space associated with $c \bar c$ production relevant for the Prompt Atmospheric Neutrino Flux",http://dx.doi.org/10.1103/PhysRevD.96.094026
-Off-axis short GRBs from structured jets as counterparts to GW events,http://dx.doi.org/10.1093/mnrasl/slx175
-Neutron-star radius constraints from GW170817 and future detections,http://dx.doi.org/10.3847/2041-8213/aa9994
-"Hadronic origin of prompt high-energy emission of gamma-ray bursts revisited: in the case of a limited maximum proton energy",http://dx.doi.org/10.3847/1538-4357/aab667
-"Fiducial cross-section measurements of the production of a prompt photon in association with a top-quark pair at $\sqrt{s}=13$ TeV with the ATLAS detector at the LHC",http://arxiv.org/abs/1811.08780v1
-"Ultra-deep tidal disruption events: prompt self-intersections and observables",http://dx.doi.org/10.1093/mnras/stz1923
-"MAGICal GRB 190114C: Implications of cutoff in the spectrum at sub-$GeV$ energies",http://dx.doi.org/10.3847/1538-4357/abb5fc
-"The color dipole picture for prompt photon production in $pp$ and $pPb$ collisions at the CERN-LHC",http://dx.doi.org/10.1140/epjc/s10052-020-8405-9
-Physics of Gamma-Ray Bursts Prompt Emission,http://dx.doi.org/10.1155/2015/907321
-"GRMHD simulations of prompt-collapse neutron star mergers: the absence of jets",http://dx.doi.org/10.1103/PhysRevD.96.084063
-"A Toy Model for the Electromagnetic Output of Neutron-star Merger Prompt Collapse to a Black Hole: Magnetized Neutron-star Collisions",http://dx.doi.org/10.3847/1538-4357/ab7923
-"Discovery of a Universal Correlation For Long and Short GRBs and Its Application on the Study of Luminosity Function and Formation Rate",http://dx.doi.org/10.3847/1538-4357/ab8f9d
-"Searching for Dark Matter Signals in Timing Spectra at Neutrino Experiments",http://arxiv.org/abs/2006.09386v1
-"Expected high energy emission from GRB 080319B and origins of the GeV emission of GRBs 080514B, 080916C and 081024B",http://dx.doi.org/10.1111/j.1365-2966.2009.14779.x
-"Quasi-blackbody component and radiative efficiency of the prompt emission of gamma-ray bursts",http://dx.doi.org/10.1088/0004-637X/702/2/1211
-Highlights from Fermi GRB observations,http://arxiv.org/abs/1003.2452v1
-"Measuring the pulse of GRB 090618: A Simultaneous Spectral and Timing Analysis of the Prompt Emission",http://dx.doi.org/10.1088/0004-637X/745/1/76
-Detection of Gamma-Ray Polarization in Prompt Emission of GRB 100826A,http://dx.doi.org/10.1088/2041-8205/743/2/L30
-"Prompt heavy quarkonium production in association with a massive (anti)bottom quark at the LHC",http://dx.doi.org/10.1103/PhysRevD.85.074026
-"Measurement of prompt J/psi pair production in pp collisions at sqrt(s) = 7 TeV",http://dx.doi.org/10.1007/JHEP09(2014)094
-"Characterization and modeling of crosstalk and afterpulsing in Hamamatsu silicon photomultipliers",http://dx.doi.org/10.1088/1748-0221/10/10/P10031
-Limits on optical polarization during the prompt phase of GRB 140430A,http://dx.doi.org/10.1088/0004-637X/813/1/1
-Prompt double $J/ψ$ production in proton-proton collisions at the LHC,http://dx.doi.org/10.1103/PhysRevD.93.114011
-"Approximating Optimal Bounds in Prompt-LTL Realizability in Doubly-exponential Time",http://dx.doi.org/10.4204/EPTCS.226.21
-"Off-axis emission of short gamma-ray bursts and the detectability of electromagnetic counterparts of gravitational wave detected binary mergers",http://dx.doi.org/10.1093/mnras/stx1683
-"High-Energy Non-Thermal and Thermal Emission from GRB141207A detected by Fermi",http://dx.doi.org/10.3847/1538-4357/833/2/139
-"Measurement of the prompt $J/ψ$ pair production cross-section in $pp$ collisions at $\sqrt{s} = 8$ TeV with the ATLAS detector",http://dx.doi.org/10.1140/epjc/s10052-017-4644-9
-"Imprints of the ejecta-companion interaction in Type Ia supernovae: main sequence, subgiant, and red giant companions",http://dx.doi.org/10.1093/mnras/stw2737
-"Energy dependence of forward-rapidity J/$ψ$ and $ψ(2S)$ production in pp collisions at the LHC",http://dx.doi.org/10.1140/epjc/s10052-017-4940-4
-"Measurement of prompt and nonprompt J/psi production in pp and pPb collisions at sqrt(s[NN]) = 5.02 TeV",http://dx.doi.org/10.1140/epjc/s10052-017-4828-3
-"Semi-analytic derivation of the threshold mass for prompt collapse in binary neutron star mergers",http://dx.doi.org/10.1093/mnras/stx1983
-"Study of isolated prompt photon production in $ p $-Pb collisions for the ALICE kinematics",http://dx.doi.org/10.1103/PhysRevD.95.054002
-"Photon production from Pb+Pb collisions at {$\sqrt{s_{\rm {NN}}}$} = 5.02 TeV at LHC and at {$\sqrt{s_{\rm {NN}}}$} = 39 TeV at FCC",http://dx.doi.org/10.1103/PhysRevC.98.024911
-"The Limited Contribution of Low- and High-Luminosity Gamma-Ray Bursts to Ultra-High Energy Cosmic Rays",http://dx.doi.org/10.3847/1538-4357/ab153c
-"A Catalog of Redshift Estimates for 1366 BATSE Long-Duration Gamma-Ray Bursts: Evidence for Strong Selection Effects on the Phenomenological Prompt Gamma-Ray Correlations",http://arxiv.org/abs/1903.06989v1
-"High efficiency photospheric emission entailed by formation of a collimation shock in gamma-ray bursts",http://dx.doi.org/10.1093/mnras/stz1828
-Prompt $J/ψ$ production in associated with top quark pair at the LHC,http://dx.doi.org/10.1103/PhysRevD.100.074019
-"Compact Method for Proton Range Verification Based on Coaxial Prompt Gamma-Ray Monitoring: a Theoretical Study",http://dx.doi.org/10.1109/TRPMS.2019.2930362
-"Detecting Alzheimer's Disease by estimating attention and elicitation path through the alignment of spoken picture descriptions with the picture prompt",http://arxiv.org/abs/1910.00515v1
-"Analysis of $^{83m}$Kr Prompt Scintillation Signals in the PIXeY Detector",http://dx.doi.org/10.1088/1748-0221/15/01/P01023
-"High rate of gravitational waves mergers from flyby perturbations of wide black-hole triples in the field",http://dx.doi.org/10.1093/mnras/staa2720
-"RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models",http://arxiv.org/abs/2009.11462v2
-Making Pre-trained Language Models Better Few-shot Learners,http://arxiv.org/abs/2012.15723v2
-"Adapting Language Models for Zero-shot Learning by Meta-tuning on Dataset and Prompt Collections",http://arxiv.org/abs/2104.04670v5
-"Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning",http://arxiv.org/abs/2106.09226v2
-"Comparison of the measured atmospheric muon rate with Monte Carlo simulations and sensitivity study for detection of prompt atmospheric muons with KM3NeT",http://dx.doi.org/10.22323/1.395.1112
-DEGREE: A Data-Efficient Generation-Based Event Extraction Model,http://arxiv.org/abs/2108.12724v3
-Evolution Patterns of the Peak Energy in the GRB Prompt Emission,http://dx.doi.org/10.1051/0004-6361/202141647
-"LFPT5: A Unified Framework for Lifelong Few-shot Language Learning Based on Prompt Tuning of T5",http://arxiv.org/abs/2110.07298v3
-"Comparison of the measured atmospheric muon rate with Monte Carlo simulations and sensitivity study for detection of prompt atmospheric muons with KM3NeT",http://dx.doi.org/10.1088/1748-0221/16/09/C09035
-An Explanation of In-context Learning as Implicit Bayesian Inference,http://arxiv.org/abs/2111.02080v6
-"Instrumental Tip-of-the-iceberg Effects on the Prompt Emission of Swift/BAT Gamma-ray Bursts",http://dx.doi.org/10.3847/1538-4357/ac4d94
-"Unified Multimodal Pre-training and Prompt-based Tuning for Vision-Language Understanding and Generation",http://arxiv.org/abs/2112.05587v2
-"Few-shot Instruction Prompts for Pretrained Language Models to Detect Social Biases",http://arxiv.org/abs/2112.07868v2
-"Investigating the mass-ratio dependence of the prompt-collapse threshold with numerical-relativity simulations",http://dx.doi.org/10.1103/PhysRevD.106.044026
-"Investigation of the Prompt SNe Ia progenitor nature through the analysis of the chemical composition of globular clusters and circumgalactic clouds",http://dx.doi.org/10.1093/mnras/stac141
-"What Makes the Story Forward? Inferring Commonsense Explanations as Prompts for Future Event Generation",http://arxiv.org/abs/2201.07099v2
-"Protum: A New Method For Prompt Tuning Based on ""[MASK]""",http://arxiv.org/abs/2201.12109v1
-Training language models to follow instructions with human feedback,http://arxiv.org/abs/2203.02155v1
-"Internet-augmented language models through few-shot prompting for open-domain question answering",http://arxiv.org/abs/2203.05115v2
-"Parameter-Efficient Tuning by Manipulating Hidden States of Pretrained Language Models For Classification Tasks",http://arxiv.org/abs/2204.04596v2
-"DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning",http://arxiv.org/abs/2204.04799v2
-In-BoXBART: Get Instructions into Biomedical Multi-Task Learning,http://arxiv.org/abs/2204.07600v1
-"Hadronic supercriticality in spherically expanding sources: application to GRB prompt emission",http://dx.doi.org/10.1093/mnras/stad880
-"Simultaneous Multiple-Prompt Guided Generation Using Differentiable Optimal Transport",http://arxiv.org/abs/2204.08472v1
-Fairness and promptness in Muller formulas,http://arxiv.org/abs/2204.13215v3
-Declaration-based Prompt Tuning for Visual Question Answering,http://arxiv.org/abs/2205.02456v1
-"PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models",http://arxiv.org/abs/2205.11169v2
-Overcoming Catastrophic Forgetting in Zero-Shot Cross-Lingual Generation,http://arxiv.org/abs/2205.12647v2
-Neuro-Symbolic Procedural Planning with Commonsense Prompting,http://arxiv.org/abs/2206.02928v6
-"Probing the progenitor of high-$z$ short-duration GRB 201221D and its possible bulk acceleration in prompt emission",http://dx.doi.org/10.1088/1674-4527/ac712d
-"CLAMP: Prompt-based Contrastive Learning for Connecting Language and Animal Pose",http://dx.doi.org/10.1109/CVPR52729.2023.02229
-"Few-Shot Fine-Grained Entity Typing with Automatic Label Interpretation and Instance Generation",http://arxiv.org/abs/2206.13746v1
-"On the Robustness of Dialogue History Representation in Conversational Question Answering: A Comprehensive Study and a New Prompt-based Method",http://arxiv.org/abs/2206.14796v2
-FL-Tuning: Layer Tuning for Feed-Forward Network in Transformer,http://arxiv.org/abs/2206.15312v1
-"AI Illustrator: Translating Raw Descriptions into Images by Prompt-based Cross-Modal Generation",http://dx.doi.org/10.1145/3503161.3547790
-"IDIAPers @ Causal News Corpus 2022: Efficient Causal Relation Identification Through a Prompt-based Few-shot Approach",http://arxiv.org/abs/2209.03895v2
-Prompt-based Conservation Learning for Multi-hop Question Answering,http://arxiv.org/abs/2209.06923v1
-"Generative Visual Prompt: Unifying Distributional Control of Pre-Trained Generative Models",http://arxiv.org/abs/2209.06970v2
-"Generate rather than Retrieve: Large Language Models are Strong Context Generators",http://arxiv.org/abs/2209.10063v3
-"Domain-Unified Prompt Representations for Source-Free Domain Generalization",http://arxiv.org/abs/2209.14926v1
-"Phenaki: Variable Length Video Generation From Open Domain Textual Description",http://arxiv.org/abs/2210.02399v1
-Context Generation Improves Open Domain Question Answering,http://arxiv.org/abs/2210.06349v2
-"PromptCast: A New Prompt-based Learning Paradigm for Time Series Forecasting",http://arxiv.org/abs/2210.08964v4
-"Learning to Perform Complex Tasks through Compositional Fine-Tuning of Language Models",http://arxiv.org/abs/2210.12607v1
-"Prompter: Utilizing Large Language Model Prompting for a Data Efficient Embodied Instruction Following",http://arxiv.org/abs/2211.03267v1
-"Measuring Reliability of Large Language Models through Semantic Consistency",http://arxiv.org/abs/2211.05853v2
-"CapEnrich: Enriching Caption Semantics for Web Images via Cross-modal Pre-trained Knowledge",http://arxiv.org/abs/2211.09371v3
-"Null-text Inversion for Editing Real Images using Guided Diffusion Models",http://arxiv.org/abs/2211.09794v1
-"Decomposed Soft Prompt Guided Fusion Enhancing for Compositional Zero-Shot Learning",http://arxiv.org/abs/2211.10681v1
-PointCLIP V2: Prompting CLIP and GPT for Powerful 3D Open-world Learning,http://arxiv.org/abs/2211.11682v2
-"Identifying the physical origin of gamma-ray bursts with supervised machine learning",http://arxiv.org/abs/2211.16451v2
-Data-Efficient Finetuning Using Cross-Task Nearest Neighbors,http://arxiv.org/abs/2212.00196v2
-"NIR-Prompt: A Multi-task Generalized Neural Information Retrieval Training Framework",http://arxiv.org/abs/2212.00229v2
-"Cloud-Device Collaborative Adaptation to Continual Changing Environments in the Real-world",http://arxiv.org/abs/2212.00972v1
-"Images Speak in Images: A Generalist Painter for In-Context Visual Learning",http://arxiv.org/abs/2212.02499v2
-KNIFE: Distilling Reasoning Knowledge From Free-Text Rationales,http://arxiv.org/abs/2212.09721v2
-Large Language Models Are Reasoning Teachers,http://arxiv.org/abs/2212.10071v2
-"See, Think, Confirm: Interactive Prompting Between Vision and Language Models for Knowledge-based Visual Reasoning",http://arxiv.org/abs/2301.05226v1
-"Make-An-Audio: Text-To-Audio Generation with Prompt-Enhanced Diffusion Models",http://arxiv.org/abs/2301.12661v1
-ERNIE-Music: Text-to-Waveform Music Generation with Diffusion Models,http://arxiv.org/abs/2302.04456v2
-Boosting Adversarial Transferability using Dynamic Cues,http://arxiv.org/abs/2302.12252v2
-Adapting Prompt for Few-shot Table-to-Text Generation,http://arxiv.org/abs/2302.12468v2
-"Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners",http://arxiv.org/abs/2303.02151v1
-Exploring the Feasibility of ChatGPT for Event Extraction,http://arxiv.org/abs/2303.03836v2
-Structure Pretraining and Prompt Tuning for Knowledge Graph Transfer,http://arxiv.org/abs/2303.03922v1
-Text-Visual Prompting for Efficient 2D Temporal Video Grounding,http://arxiv.org/abs/2303.04995v3
-Observations of GRB 230307A by TESS,http://arxiv.org/abs/2303.07319v2
-Model-tuning Via Prompts Makes NLP Models Adversarially Robust,http://arxiv.org/abs/2303.07320v1
-"ART: Automatic multi-step reasoning and tool-use for large language models",http://arxiv.org/abs/2303.09014v1
-"Localizing Object-level Shape Variations with Text-to-Image Diffusion Models",http://arxiv.org/abs/2303.11306v2
-"Visually-Prompted Language Model for Fine-Grained Scene Graph Generation in an Open World",http://arxiv.org/abs/2303.13233v2
-Knowledge-augmented Frame Semantic Parsing with Hybrid Prompt-tuning,http://arxiv.org/abs/2303.14375v1
-BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning,http://arxiv.org/abs/2303.14773v2
-"Prompt-Guided Zero-Shot Anomaly Action Recognition using Pretrained Deep Skeleton Features",http://arxiv.org/abs/2303.15167v1
-"Soft-prompt tuning to predict lung cancer using primary care free-text Dutch medical notes",http://arxiv.org/abs/2303.15846v1
-"AnnoLLM: Making Large Language Models to Be Better Crowdsourced Annotators",http://arxiv.org/abs/2303.16854v1
-Language Models can Solve Computer Tasks,http://arxiv.org/abs/2303.17491v2 -Unsupervised Brain Tumor Segmentation with Image-based Prompts,http://arxiv.org/abs/2304.01472v1 -Black Box Few-Shot Adaptation for Vision-Language models,http://arxiv.org/abs/2304.01752v3 -DITTO-NeRF: Diffusion-based Iterative Text To Omni-directional 3D Model,http://arxiv.org/abs/2304.02827v1 -Zero-shot Generative Model Adaptation via Image-specific Prompt Learning,http://arxiv.org/abs/2304.03119v1 -"Zero-Shot Next-Item Recommendation using Large Pretrained Language - Models",http://arxiv.org/abs/2304.03153v1 -Timestamps as Prompts for Geography-Aware Location Recommendation,http://arxiv.org/abs/2304.04151v1 -"Segment Anything Model (SAM) for Digital Pathology: Assess Zero-shot - Segmentation on Whole Slide Imaging",http://arxiv.org/abs/2304.04155v1 -"Interaction-Aware Prompting for Zero-Shot Spatio-Temporal Action - Detection",http://arxiv.org/abs/2304.04688v4 -"Are Large Language Models Ready for Healthcare? A Comparative Study on - Clinical Language Understanding",http://arxiv.org/abs/2304.05368v3 -"LINGO : Visually Debiasing Natural Language Instructions to Support Task - Diversity",http://arxiv.org/abs/2304.06184v1 -AutoSplice: A Text-prompt Manipulated Image Dataset for Media Forensics,http://arxiv.org/abs/2304.06870v1 -Delta Denoising Score,http://arxiv.org/abs/2304.07090v1 -"In-Context Operator Learning with Data Prompts for Differential Equation - Problems",http://dx.doi.org/10.1073/pnas.2310142120 -Is ChatGPT a Good Recommender? A Preliminary Study,http://arxiv.org/abs/2304.10149v2 -"""HOT"" ChatGPT: The promise of ChatGPT in detecting and discriminating - hateful, offensive, and toxic comments on social media",http://arxiv.org/abs/2304.10619v1 -"Prompting GPT-3.5 for Text-to-SQL with De-semanticization and Skeleton - Retrieval",http://arxiv.org/abs/2304.13301v2 -"Is a prompt and a few samples all you need? Using GPT-4 for data - augmentation in low-resource classification tasks",http://arxiv.org/abs/2304.13861v1 -Large Language Models are Strong Zero-Shot Retriever,http://arxiv.org/abs/2304.14233v2 -SceneGenie: Scene Graph Guided Diffusion Models for Image Synthesis,http://arxiv.org/abs/2304.14573v1 -Multimodal Procedural Planning via Dual Text-Image Prompting,http://arxiv.org/abs/2305.01795v1 -AutoML-GPT: Automatic Machine Learning with GPT,http://arxiv.org/abs/2305.02499v1 -Using ChatGPT for Entity Matching,http://arxiv.org/abs/2305.03423v2 -"""We need to do more ... 
I need to do more"": Augmenting Digital Media - Consumption via Critical Reflection to Increase Compassion and Promote - Prosocial Attitudes and Behaviors",http://dx.doi.org/10.1145/3544548.3581355 -"CodeIE: Large Code Generation Models are Better Few-Shot Information - Extractors",http://arxiv.org/abs/2305.05711v2 -"Multilingual LLMs are Better Cross-lingual In-context Learners with - Alignment",http://arxiv.org/abs/2305.05940v3 -iEdit: Localised Text-guided Image Editing with Weak Supervision,http://arxiv.org/abs/2305.05947v1 -Mode Approximation Makes Good Multimodal Prompts,http://arxiv.org/abs/2305.08381v2 -Large Language Models are Zero-Shot Rankers for Recommender Systems,http://arxiv.org/abs/2305.08845v1 -"MsPrompt: Multi-step Prompt Learning for Debiasing Few-shot Event - Detection",http://arxiv.org/abs/2305.09335v1 -"UniControl: A Unified Diffusion Model for Controllable Visual Generation - In the Wild",http://arxiv.org/abs/2305.11147v2 -RECIPE: How to Integrate ChatGPT into EFL Writing Education,http://dx.doi.org/10.1145/3573051.3596200 -"Cue-CoT: Chain-of-thought Prompting for Responding to In-depth Dialogue - Questions with LLMs",http://arxiv.org/abs/2305.11792v2 -"SneakyPrompt: Evaluating Robustness of Text-to-image Generative Models' - Safety Filters",http://arxiv.org/abs/2305.12082v2 -Watermarking Diffusion Model,http://arxiv.org/abs/2305.12502v1 -"Enhancing Few-shot Text-to-SQL Capabilities of Large Language Models: A - Study on Prompt Design Strategies",http://arxiv.org/abs/2305.12586v1 -PRODIGY: Enabling In-context Learning Over Graphs,http://arxiv.org/abs/2305.12600v1 -"Deduction under Perturbed Evidence: Probing Student Simulation - Capabilities of Large Language Models",http://arxiv.org/abs/2305.14507v1 -Benchmarking Arabic AI with Large Language Models,http://arxiv.org/abs/2305.14982v1 -PromptNER: Prompting For Named Entity Recognition,http://arxiv.org/abs/2305.15444v2 -"SPRING: GPT-4 Out-performs RL Algorithms by Studying Papers and - Reasoning",http://arxiv.org/abs/2305.15486v2 -UFO: Unified Fact Obtaining for Commonsense Question Answering,http://arxiv.org/abs/2305.16048v1 -"ProSpect: Expanded Conditioning for the Personalization of - Attribute-aware Image Generation",http://arxiv.org/abs/2305.16225v2 -Hierarchical Verbalizer for Few-Shot Hierarchical Text Classification,http://arxiv.org/abs/2305.16885v1 -Evaluating GPT-3 Generated Explanations for Hateful Content Moderation,http://arxiv.org/abs/2305.17680v4 -"Revisit Weakly-Supervised Audio-Visual Video Parsing from the Language - Perspective",http://arxiv.org/abs/2306.00595v4 -Cooperative Hardware-Prompt Learning for Snapshot Compressive Imaging,http://arxiv.org/abs/2306.01176v1 -"Prompt Tuning Large Language Models on Personalized Aspect Extraction - for Recommendations",http://arxiv.org/abs/2306.01475v1 -"Bias Against 93 Stigmatized Groups in Masked Language Models and - Downstream Sentiment Classification Tasks",http://dx.doi.org/10.1145/3593013.3594109 -"HePCo: Data-Free Heterogeneous Prompt Consolidation for Continual - Federated Learning",http://arxiv.org/abs/2306.09970v1 -"Blended-NeRF: Zero-Shot Object Generation and Blending in Existing - Neural Radiance Fields",http://arxiv.org/abs/2306.12760v2 -"Learning-to-Rank Meets Language: Boosting Language-Driven Ordering - Alignment for Ordinal Classification",http://arxiv.org/abs/2306.13856v3 -LLM-assisted Generation of Hardware Assertions,http://arxiv.org/abs/2306.14027v1 -"Learning Prompt-Enhanced Context Features for Weakly-Supervised Video - Anomaly 
Detection",http://arxiv.org/abs/2306.14451v1 -"Prompt Tuning Pushes Farther, Contrastive Learning Pulls Closer: A - Two-Stage Approach to Mitigate Social Biases",http://arxiv.org/abs/2307.01595v1 -Text-Guided Synthesis of Eulerian Cinemagraphs,http://arxiv.org/abs/2307.03190v3 -"PiTL: Cross-modal Retrieval with Weakly-supervised Vision-language - Pre-training via Prompting",http://dx.doi.org/10.1145/3539618.3592038 -"CValues: Measuring the Values of Chinese Large Language Models from - Safety to Responsibility",http://arxiv.org/abs/2307.09705v1 -SC VALL-E: Style-Controllable Zero-Shot Text to Speech Synthesizer,http://arxiv.org/abs/2307.10550v1 -"Study of charm hadronization with prompt $Λ^+_\mathrm{c}$ baryons - in proton-proton and lead-lead collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 - TeV",http://arxiv.org/abs/2307.11186v1 -"Controllable Generation of Dialogue Acts for Dialogue Systems via - Few-Shot Response Generation and Ranking",http://arxiv.org/abs/2307.14440v1 -"Educational data augmentation in physics education research using - ChatGPT",http://arxiv.org/abs/2307.14475v2 -Prompt Gamma-Ray Burst Emission from Internal Shocks -- New Insights,http://arxiv.org/abs/2308.00403v2 -"Prompted Contrast with Masked Motion Modeling: Towards Versatile 3D - Action Representation Learning",http://arxiv.org/abs/2308.03975v1 -"Study of flavor dependence of the baryon-to-meson ratio in proton-proton - collisions at $\sqrt{s} = 13$ TeV",http://arxiv.org/abs/2308.04873v1 -"LayoutLLM-T2I: Eliciting Layout Guidance from LLM for Text-to-Image - Generation",http://arxiv.org/abs/2308.05095v2 -"LittleMu: Deploying an Online Virtual Teaching Assistant via - Heterogeneous Sources Integration and Chain of Teach Prompts",http://arxiv.org/abs/2308.05935v1 -Zero-shot Text-driven Physically Interpretable Face Editing,http://arxiv.org/abs/2308.05976v1 -Semantic Consistency for Assuring Reliability of Large Language Models,http://arxiv.org/abs/2308.09138v1 -"Online Class Incremental Learning on Stochastic Blurry Task Boundary via - Mask and Visual Prompt Tuning",http://arxiv.org/abs/2308.09303v1 -Turning a CLIP Model into a Scene Text Spotter,http://arxiv.org/abs/2308.10408v1 -"Zero- and Few-Shot Prompting with LLMs: A Comparative Study with - Fine-tuned Models for Bangla Sentiment Analysis",http://arxiv.org/abs/2308.10783v1 -"Masked Momentum Contrastive Learning for Zero-shot Semantic - Understanding",http://arxiv.org/abs/2308.11448v1 -StoryBench: A Multifaceted Benchmark for Continuous Story Visualization,http://arxiv.org/abs/2308.11606v2 -SPPNet: A Single-Point Prompt Network for Nuclei Image Segmentation,http://arxiv.org/abs/2308.12231v1 -"Towards Realistic Zero-Shot Classification via Self Structural Semantic - Alignment",http://arxiv.org/abs/2308.12960v2 -ExpCLIP: Bridging Text and Facial Expressions via Semantic Alignment,http://arxiv.org/abs/2308.14448v2 -"Mixup-Augmented Meta-Learning for Sample-Efficient Fine-Tuning of - Protein Simulators",http://arxiv.org/abs/2308.15116v3 -"AnoVL: Adapting Vision-Language Models for Unified Zero-shot Anomaly - Localization",http://arxiv.org/abs/2308.15939v1 -SAM-Med2D,http://arxiv.org/abs/2308.16184v1 -"Business Process Text Sketch Automation Generation Using Large Language - Model",http://arxiv.org/abs/2309.01071v1 -Zero-shot information extraction from radiological reports using ChatGPT,http://arxiv.org/abs/2309.01398v2 -"Generative AI-aided Joint Training-free Secure Semantic Communications - via Multi-modal Prompts",http://arxiv.org/abs/2309.02616v1 
-"Image-Object-Specific Prompt Learning for Few-Shot Class-Incremental - Learning",http://arxiv.org/abs/2309.02833v1 -"Prompt-based Node Feature Extractor for Few-shot Learning on - Text-Attributed Graphs",http://arxiv.org/abs/2309.02848v1 -Searching for sbottom LSP at the LHC,http://arxiv.org/abs/2309.06456v1 -"Gradient constrained sharpness-aware prompt learning for vision-language - models",http://arxiv.org/abs/2309.07866v2 -"MMICL: Empowering Vision-language Model with Multi-Modal In-Context - Learning",http://arxiv.org/abs/2309.07915v2 -"Prompting Segmentation with Sound is Generalizable Audio-Visual Source - Localizer",http://arxiv.org/abs/2309.07929v2 -"PromptVC: Flexible Stylistic Voice Conversion in Latent Space Driven by - Natural Language Prompts",http://arxiv.org/abs/2309.09262v1 -"MVP: Meta Visual Prompt Tuning for Few-Shot Remote Sensing Image Scene - Classification",http://arxiv.org/abs/2309.09276v1 -"Connection of four-dimensional Langevin model and Hauser-Feshbach theory - to describe statistical decay of fission fragments",http://arxiv.org/abs/2309.12653v1 -Large Language Models Are Also Good Prototypical Commonsense Reasoners,http://arxiv.org/abs/2309.13165v1 -"Scalable Multi-Robot Collaboration with Large Language Models: - Centralized or Decentralized Systems?",http://arxiv.org/abs/2309.15943v1 -"TextField3D: Towards Enhancing Open-Vocabulary 3D Generation with Noisy - Text Fields",http://arxiv.org/abs/2309.17175v1 -LLM-grounded Video Diffusion Models,http://arxiv.org/abs/2309.17444v2 -"Ctrl-Room: Controllable Text-to-3D Room Meshes Generation with Layout - Constraints",http://arxiv.org/abs/2310.03602v2 -Prompt-augmented Temporal Point Process for Streaming Event Sequence,http://arxiv.org/abs/2310.04993v2 -"GROVE: A Retrieval-augmented Complex Story Generation Framework with A - Forest of Evidence",http://arxiv.org/abs/2310.05388v2 -FireAct: Toward Language Agent Fine-tuning,http://arxiv.org/abs/2310.05915v1 -"BYOC: Personalized Few-Shot Classification with Co-Authored Class - Descriptions",http://arxiv.org/abs/2310.06111v1 -Multilingual Jailbreak Challenges in Large Language Models,http://arxiv.org/abs/2310.06474v1 -FGPrompt: Fine-grained Goal Prompting for Image-goal Navigation,http://arxiv.org/abs/2310.07473v1 -AutoVP: An Automated Visual Prompting Framework and Benchmark,http://arxiv.org/abs/2310.08381v1 -Language Models as Zero-Shot Trajectory Generators,http://arxiv.org/abs/2310.11604v1 -"Concept-Guided Chain-of-Thought Prompting for Pairwise Comparison - Scaling of Texts with Large Language Models",http://arxiv.org/abs/2310.12049v1 -"Cache me if you Can: an Online Cost-aware Teacher-Student framework to - Reduce the Calls to Large Language Models",http://arxiv.org/abs/2310.13395v1 -Exploring the Boundaries of GPT-4 in Radiology,http://arxiv.org/abs/2310.14573v1 -Compton Echoes from Gamma-ray Bursts,http://dx.doi.org/10.1086/309463 -"ECLAIRs: A microsatellite for the prompt optical and X-ray emission of - Gamma-Ray Bursts",http://arxiv.org/abs/astro-ph/0109178v1 -Prompt Emission and Early Afterglows of Gamma-Ray Bursts,http://arxiv.org/abs/astro-ph/0201321v1 -Electromagnetic (versus fireball) model of GRBs,http://dx.doi.org/10.1063/1.1810906 -Swift observations of the X-ray bright GRB 050315,http://dx.doi.org/10.1086/499069 -"Interference Phenomena in Electronic Transport Through Chaotic Cavities: - An Information-Theoretic Approach",http://dx.doi.org/10.1088/0959-7174/9/2/304 -Recent Photoproduction Results From ZEUS,http://dx.doi.org/10.1063/1.53651 -A New 
Upper Limit for the Tau-Neutrino Magnetic Moment,http://dx.doi.org/10.1016/S0370-2693(01)00746-8 -Inclusive Jets and alpha_s at HERA,http://dx.doi.org/10.1016/S0920-5632(03)02316-8 -High Energy Photoproduction,http://dx.doi.org/10.1088/0034-4885/68/12/R03 -Particle Production and Fragmentation at HERA,http://dx.doi.org/10.1142/9789812773524_0016 -New Insights into the Production of Heavy Quarkonium,http://arxiv.org/abs/hep-ph/9509210v1 -Is factorization for isolated photon cross sections broken?,http://dx.doi.org/10.1103/PhysRevD.55.1124 -"Prompt photon, Drell-Yan and Bethe-Heitler processes in hard - photoproduction",http://arxiv.org/abs/hep-ph/9609273v1 -Nucleon Spin Structure and Large $p_T$ Processes at $pp$ Colliders,http://arxiv.org/abs/hep-ph/9811244v1 -Neutrino Masses and Mixings: a Theoretical Perspective,http://dx.doi.org/10.1016/S0370-1573(99)00067-8 -A note on the production of photons at RHIC,http://dx.doi.org/10.1016/S0920-5632(00)00547-8 -Status of Hard Interactions (Jets and Heavy Flavor),http://dx.doi.org/10.1142/9789812778345_0004 -Scattering Through QCD Sphalerons,http://dx.doi.org/10.1016/S0375-9474(02)01534-8 -"Photoproduction of isolated photons, single hadrons and jets at NLO",http://dx.doi.org/10.1142/9789812702722_0011 -"Electroweak radiative corrections to hadronic precision observables at - TeV energies",http://arxiv.org/abs/hep-ph/0401093v1 -Code for prompt numerical computation of the leading order GPD evolution,http://arxiv.org/abs/hep-ph/0604248v1 -"Derivation of the Supersymmetric Harish--Chandra Integral for - UOSp(k_1/2k_2)",http://arxiv.org/abs/math-ph/0212060v1 -Prompt dimuons and D meson production in heavy-ion collisions at the SPS,http://arxiv.org/abs/nucl-ex/0108015v1 -GRB 050410 and GRB 050412: are they really dark GRBs?,http://dx.doi.org/10.1051/0004-6361:20066594 -"Addendum to: A simultaneous Frobenius splitting for closures of - conjugacy classes of nilpotent matrices, by V. B. Mehta and Wilberd van der - Kallen",http://arxiv.org/abs/0803.2960v2 -"Correlations of Prompt and Afterglow Emission in Swift Long and Short - Gamma Ray Bursts",http://dx.doi.org/10.1086/592766 -"Prompt GeV Emission from Residual Collisions in GRB Outflows: Evidence - from Fermi Observations of GRB 080916c",http://dx.doi.org/10.1088/0004-637X/709/1/525 -GRB Probes of the High-z Universe with EXIST,http://dx.doi.org/10.1063/1.3155875 -Photon + Jet production at sqrt{s}=1.96 TeV,http://arxiv.org/abs/0905.2201v1 -"Agile Detection of Delayed Gamma-Ray Emission from the Short Gamma-Ray - Burst GRB 090510",http://dx.doi.org/10.1088/2041-8205/708/2/L84 -X-Ray Flares In GRB090812 - Case Study,http://arxiv.org/abs/0908.2849v2 -"Towards the Properties of Long Gamma-Ray Burst Progenitors with Swift - Data",http://dx.doi.org/10.1111/j.1365-2966.2009.15760.x -"Nearby Supernova Rates from the Lick Observatory Supernova Search. IV. 
A - Recovery Method for the Delay Time Distribution",http://dx.doi.org/10.1111/j.1365-2966.2010.16808.x -The Second Swift BAT Gamma-Ray Burst Catalog,http://dx.doi.org/10.1088/0067-0049/195/1/2 -The origin of the early time optical emission of Swift GRB 080310,http://dx.doi.org/10.1111/j.1365-2966.2012.20499.x -BPA Bisimilarity is EXPTIME-hard,http://arxiv.org/abs/1205.7041v2 -Remarks On the Topology of the Fano surface,http://arxiv.org/abs/1211.2621v2 -"A Proposal for the Muon Piston Calorimeter Extension (MPC-EX) to the - PHENIX Experiment at RHIC",http://arxiv.org/abs/1301.1096v1 -Direct Photon Results from CDF,http://dx.doi.org/10.1051/epjconf/20136014005 -"Recent developments of analysis for hydrodynamic flow of nematic liquid - crystals",http://dx.doi.org/10.1098/rsta.2013.0361 -The many classical faces of quantum structures,http://dx.doi.org/10.3390/e19040144 -"Measurement of charm and beauty production at central rapidity versus - charged-particle multiplicity in proton-proton collisions at - $\mathbf{\sqrt{\textit s}}=7$ TeV",http://dx.doi.org/10.1007/JHEP09(2015)148 -"CGRO/BATSE Data Support the New Paradigm for GRB Prompt Emission and the - New L$_{i}^{nTh}$-E$_{peak,i}^{nTh,rest}$ relation",http://dx.doi.org/10.3847/0004-637X/819/1/79 -The sharpness of gamma-ray burst prompt emission spectra,http://dx.doi.org/10.1051/0004-6361/201527015 -Attitudes towards Refugees in Light of the Paris Attacks,http://arxiv.org/abs/1512.04310v2 -Dealing with Big Data,http://arxiv.org/abs/1605.06354v1 -GRB Observational Properties,http://dx.doi.org/10.1007/s11214-016-0305-9 -Quantum Uncertainty Based on Metric Adjusted Skew Information,http://arxiv.org/abs/1708.00978v2 -Correlated Prompt Fission Data in Transport Simulations,http://dx.doi.org/10.1140/epja/i2018-12455-0 -"GRB170817A associated with GW170817: multifrequency observations and - modeling of prompt gamma-ray emission",http://dx.doi.org/10.3847/2041-8213/aaa2f6 -New results on heavy-flavour in heavy-ion collisions with LHCb,http://arxiv.org/abs/1710.07784v1 -"Prompt gamma-ray emission of GRB 170817A associated to GW 170817: A - consistent picture",http://dx.doi.org/10.1093/mnras/sty1246 -"Towards 2+1 photon tomography: Energy-based selection of two 511 keV - photons and a prompt photon with the J-PET scanner",http://arxiv.org/abs/1803.00996v1 -Adjoint Monte Carlo calculation of charged plasma particle flux to wall,http://arxiv.org/abs/1504.00214v2 -"The NLTK FrameNet API: Designing for Discoverability with a Rich - Linguistic Resource",http://arxiv.org/abs/1703.07438v2 -"Energy dependence of the prompt $γ$-ray emission from the - $(d,p)$-induced fission of $^{234}\mathrm{U}^{*}$ and $^{240}\mathrm{Pu}^{*}$",http://dx.doi.org/10.1103/PhysRevC.96.014601 -Prompt emission polarimetry of Gamma Ray Bursts with ASTROSAT CZT-Imager,http://dx.doi.org/10.3847/1538-4357/ab40b7 -The DIDI dataset: Digital Ink Diagram data,http://arxiv.org/abs/2002.09303v2 -Galactic restrictions on iron production by various Types of supernovae,http://dx.doi.org/10.1111/j.1365-2966.2011.20161.x -Photon and di-photon production at ATLAS,http://arxiv.org/abs/1111.2223v1 -"The prompt-afterglow connection in Gamma-Ray Bursts: a comprehensive - statistical analysis of Swift X-ray light-curves",http://dx.doi.org/10.1093/mnras/sts066 -"Do We Need Higher-Order Probabilities and, If So, What Do They Mean?",http://arxiv.org/abs/1304.2716v1 -Properties of GRB Lightcurves from Magnetic Reconnection,http://dx.doi.org/10.1093/mnras/stw895 -Exotic charmonium spectroscopy with 
CMS,http://arxiv.org/abs/1509.09276v1 -"GRB120729A: external shock origin for both the prompt gamma-ray emission - and afterglow",http://dx.doi.org/10.3847/1538-4357/aaba6e -Heavily Separable Functors,http://arxiv.org/abs/1812.07272v1 -"Some observations about determinants which are connected with Catalan - numbers and related topics",http://arxiv.org/abs/1902.10468v2 -"Prompt optical emission as a signature of synchrotron radiation in - gamma-ray bursts",http://dx.doi.org/10.1051/0004-6361/201935766 -"Spectro-polarimetric analysis of prompt emission of GRB 160325A: jet - with evolving environment of internal shocks",http://dx.doi.org/10.1093/mnras/staa570 -"Prospects for the detection of the prompt very-high-energy emission from - $\rmγ$-ray bursts with the High Altitude Detection of Astronomical - Radiation experiment",http://dx.doi.org/10.3847/1538-4357/ac2df7 -"Murchison Widefield Array rapid-response observations of the short GRB - 180805A",http://dx.doi.org/10.1017/pasa.2021.15 -"The P-P(P) cross-section of isolated single-photon production in the - $k_t$-factorization and different angular ordering unintegrated parton - distributions frameworks",http://dx.doi.org/10.1103/PhysRevD.103.074020 -"Photospheric Prompt Emission From Long Gamma Ray Burst Simulations -- - II. Spectropolarimetry",http://dx.doi.org/10.3847/1538-4357/ac4093 -"$t\bar{t}b\bar{b}$ at the LHC: On the size of off-shell effects and - prompt $b$-jet identification",http://dx.doi.org/10.1103/PhysRevD.107.014028 -"Evidence of Photosphere Emission Origin for Gamma-Ray Burst Prompt - Emission",http://dx.doi.org/10.3847/1538-4365/ac98b1 -"Multi-messenger observations of binary neutron star mergers in the O4 - run",http://dx.doi.org/10.3847/1538-4357/ac8d00 -"Heroes, Villains, and Victims, and GPT-3: Automated Extraction of - Character Roles Without Training Data",http://arxiv.org/abs/2205.07557v2 -Large Language Models are Zero-Shot Reasoners,http://arxiv.org/abs/2205.11916v4 -"Interpreting time-integrated polarization data of gamma-ray burst prompt - emission",http://dx.doi.org/10.1051/0004-6361/202243805 -Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them,http://arxiv.org/abs/2210.09261v1 -"Two-stage LLM Fine-tuning with Less Specialization and More - Generalization",http://arxiv.org/abs/2211.00635v2 -"Matching Exemplar as Next Sentence Prediction (MeNSP): Zero-shot Prompt - Learning for Automatic Scoring in Science Education",http://arxiv.org/abs/2301.08771v4 -"InstructTTS: Modelling Expressive TTS in Discrete Latent Space with - Natural Language Style Prompt",http://arxiv.org/abs/2301.13662v2 -"Prompting Multilingual Large Language Models to Generate Code-Mixed - Texts: The Case of South East Asian Languages",http://arxiv.org/abs/2303.13592v4 -Improving the Diproche CNL through autoformalization via GPT-3,http://arxiv.org/abs/2303.17513v1 -"Structured prompt interrogation and recursive extraction of semantics - (SPIRES): A method for populating knowledge bases using zero-shot learning",http://arxiv.org/abs/2304.02711v1 -Chain-of-Symbol Prompting Elicits Planning in Large Langauge Models,http://arxiv.org/abs/2305.10276v6 -"Boosting Text-to-Image Diffusion Models with Fine-Grained Semantic - Rewards",http://arxiv.org/abs/2305.19599v2 -"Synapse: Trajectory-as-Exemplar Prompting with Memory for Computer - Control",http://arxiv.org/abs/2306.07863v2 -"Evaluation of OpenAI Codex for HPC Parallel Programming Models Kernel - Generation",http://dx.doi.org/10.1145/3605731.3605886 -"Leveraging GPT-4 for Food 
Effect Summarization to Enhance - Product-Specific Guidance Development via Iterative Prompting",http://arxiv.org/abs/2306.16275v1 -"Large Language Models are Effective Text Rankers with Pairwise Ranking - Prompting",http://arxiv.org/abs/2306.17563v1 -"GRB 221009A: revealing a hidden afterglow during the prompt emission - phase with Fermi-GBM observations",http://dx.doi.org/10.3847/2041-8213/acfcab -"Mental-LLM: Leveraging Large Language Models for Mental Health - Prediction via Online Text Data",http://arxiv.org/abs/2307.14385v3 -YORC: Yoruba Reading Comprehension dataset,http://arxiv.org/abs/2308.09768v2 -"Self-Deception: Reverse Penetrating the Semantic Firewall of Large - Language Models",http://arxiv.org/abs/2308.11521v2 -MLLM-DataEngine: An Iterative Refinement Approach for MLLM,http://arxiv.org/abs/2308.13566v2 -"Energy-dependent polarization of Gamma-Ray Bursts' prompt emission with - the POLAR and POLAR-2 instruments",http://dx.doi.org/10.22323/1.444.0619 -AGILE gamma-ray detection of the exceptional GRB 221009A,http://dx.doi.org/10.3847/2041-8213/acfaff -"Prompt Tuned Embedding Classification for Multi-Label Industry Sector - Allocation",http://arxiv.org/abs/2309.12075v2 -"Ring-A-Bell! How Reliable are Concept Removal Methods for Diffusion - Models?",http://arxiv.org/abs/2310.10012v1 -On the Computational Power of Molecular Heat Engines,http://dx.doi.org/10.1007/s10955-005-8015-9 -Ideal Engine Durations For Gamma-Ray-Burst-Jet Launch,http://dx.doi.org/10.1093/mnras/stx987 -Machine Learning for Software Engineering: A Systematic Mapping,http://dx.doi.org/10.1109/ACCESS.2021.3119746 -FML-based Prediction Agent and Its Application to Game of Go,http://dx.doi.org/10.1109/IFSA-SCIS.2017.8023311 -The Evolution of Empirical Methods in Software Engineering,http://arxiv.org/abs/1912.11512v4 -"Exposing Bugs in JavaScript Engines through Test Transplantation and - Differential Testing",http://arxiv.org/abs/2012.03759v1 -Developing a Meta-suggestion Engine for Search Queries,http://dx.doi.org/10.1109/ACCESS.2022.3186096 -"Improving transferability between different engineering stages in the - development of automated material flow modules",http://dx.doi.org/10.1109/TASE.2016.2576022 -"Code Generation for Machine Learning using Model-Driven Engineering and - SysML",http://arxiv.org/abs/2307.05584v1 -Phenomena in Gravitational Clustering,http://arxiv.org/abs/astro-ph/0509251v1 -Energetics of a simple microscopic heat engine,http://dx.doi.org/10.1103/PhysRevE.72.056109 -"Performance Oriented Query Processing In GEO Based Location Search - Engines",http://arxiv.org/abs/1005.0961v1 -"Mixed-Variable Requirements Roadmaps and their Role in the Requirements - Engineering of Adaptive Systems",http://arxiv.org/abs/1102.4178v1 -Model based system engineering approach of a lightweight embedded TCP/IP,http://arxiv.org/abs/1104.5387v1 -Refinement-Based Specification: Requirements and Architecture,http://arxiv.org/abs/1404.7260v1 -Orchestration of Global Software Engineering Projects,http://dx.doi.org/10.1109/ICGSE.2009.52 -"An Open Distributed Architecture for Flexible Hybrid Assembly Systems: A - Model Driven Engineering Approach",http://arxiv.org/abs/1411.1307v1 -"Identifying Talented Software Engineering Students through Data-driven - Skill Assessment",http://arxiv.org/abs/1411.6197v1 -"Do Personality Profiles Differ in the Pakistani Software Industry and - Academia - A Case Study",http://arxiv.org/abs/1507.06888v1 -A Business Maturity Model of Software Product Line 
Engineering,http://dx.doi.org/10.1007/s10796-010-9230-8 -Autonomous Rotor Heat Engine,http://dx.doi.org/10.1103/PhysRevE.95.062131 -The Dangerous Dogmas of Software Engineering,http://arxiv.org/abs/1802.06321v1 -Evaluating search engines and defining a consensus implementation,http://arxiv.org/abs/1808.00958v1 -"Understanding What Software Engineers Are Working on -- The Work-Item - Prediction Challenge",http://arxiv.org/abs/2004.06174v1 -SimJEB: Simulated Jet Engine Bracket Dataset,http://dx.doi.org/10.1111/cgf.14353 -Voice based self help System: User Experience Vs Accuracy,http://dx.doi.org/10.1007/978-90-481-3658-2_18 -"Stationary engines in and beyond the linear response regime at the - Carnot efficiency",http://dx.doi.org/10.1103/PhysRevE.95.052128 -Designing a highly efficient graphene quantum spin heat engine,http://dx.doi.org/10.1038/s41598-019-42279-7 -"Combining features of the Unreal and Unity Game Engines to hone - development skills",http://arxiv.org/abs/1511.03640v1 -"Problems with the use of Web search engines to find results in foreign - languages",http://arxiv.org/abs/1511.05798v1 -"The Retrieval Effectiveness of Web Search Engines: Considering Results - Descriptions",http://arxiv.org/abs/1511.05800v1 -Model-Driven Engineering of Self-Adaptive Software with EUREMA,http://dx.doi.org/10.1145/2555612 -"Standards of Validity and the Validity of Standards in Behavioral - Software Engineering Research: The Perspective of Psychological Test Theory",http://dx.doi.org/10.1145/3239235.3267437 -"Computer simulations in science and engineering - Concepts - Practices - - Perspectives",http://arxiv.org/abs/1904.01053v1 -"Unleashing the Potentials of Immersive Augmented Reality for Software - Engineering",http://arxiv.org/abs/2001.01223v1 -"Privacy Engineering Meets Software Engineering. 
On the Challenges of - Engineering Privacy ByDesign",http://arxiv.org/abs/2007.08613v1 -The efficiency of simple Quantum Engine Stirling and Ericsson cycle,http://arxiv.org/abs/2010.01581v1 -"Renovating Requirements Engineering: First Thoughts to Shape - Requirements Engineering as a Profession",http://dx.doi.org/10.1109/REW.2019.00008 -"Tuning the performance of a micrometer-sized Stirling engine through - reservoir engineering",http://dx.doi.org/10.1038/s41467-021-25230-1 -"An Empirical Evaluation of Cost-based Federated SPARQL Query Processing - Engines",http://arxiv.org/abs/2104.00984v1 -"Mean-value exergy modeling of internal combustion engines: - characterization of feasible operating regions",http://dx.doi.org/10.1115/1.4053945 -A microscopic theory of Curzon-Ahlborn heat engine,http://dx.doi.org/10.1103/PhysRevE.106.024105 -Development of an Undergraduate Quantum Engineering Degree,http://dx.doi.org/10.1109/TQE.2022.3157338 -"Improving Software Engineering Research through Experimentation - Workbenches",http://dx.doi.org/10.1007/978-3-030-30985-5_6 -"Formal Quantum Software Engineering: Introducing the Formal Methods of - Software Engineering to Quantum Computing",http://arxiv.org/abs/2111.08426v1 -Developmental Status and Perspectives for Tissue Engineering in Urology,http://arxiv.org/abs/2111.09414v1 -"Computational Rational Engineering and Development: Synergies and - Opportunities",http://dx.doi.org/10.1007/978-3-030-82193-7_50 -Social Science Theories in Software Engineering Research,http://arxiv.org/abs/2202.07519v1 -"Colossal power extraction from active cyclic Brownian information - engines",http://dx.doi.org/10.1021/acs.jpclett.2c01736 -"Towards Understanding Barriers and Mitigation Strategies of Software - Engineers with Non-traditional Educational and Occupational Backgrounds",http://arxiv.org/abs/2204.04318v1 -Game Engine Comparative Anatomy,http://arxiv.org/abs/2207.06473v1 -Bath engineering enhanced quantum critical engines,http://dx.doi.org/10.3390/e24101458 -"Quantum dynamic and geometric phases in harmonic oscillator, spin 1/2 - and two-level thermal engines systems",http://arxiv.org/abs/2212.02970v1 -"Effects of the self-propulsion parity on the efficiency of a - fuel-consuming active heat engine",http://dx.doi.org/10.1103/PhysRevE.108.024602 -Geometric characterization for cyclic heat engines far from equilibrium,http://arxiv.org/abs/2305.06219v1 -A Knowledge Engineering Primer,http://arxiv.org/abs/2305.17196v1 -Circular Systems Engineering,http://arxiv.org/abs/2306.17808v1 -"Combustion Process in a Spark Ignition Engine: Dynamics and Noise Level - Estimation",http://dx.doi.org/10.1063/1.1739011 -On Engineering and Emergence,http://arxiv.org/abs/nlin/0601002v1 -Sorption heat engines: simple inanimate negative entropy generators,http://dx.doi.org/10.1016/j.physa.2005.12.003 -Spreadsheet Engineering: A Research Framework,http://arxiv.org/abs/0711.0538v1 -A Process Algebra Software Engineering Environment,http://arxiv.org/abs/0806.2730v1 -Support of Study on Engineering Technology from Physics and Mathematics,http://arxiv.org/abs/0807.1950v1 -"The Levels of Conceptual Interoperability Model: Applying Systems - Engineering Principles to M&S",http://arxiv.org/abs/0908.0191v1 -Software Engineering Education by Example,http://arxiv.org/abs/0911.3306v1 -Theory of Regulatory Compliance for Requirements Engineering,http://arxiv.org/abs/1002.3711v1 -The Study and Approach of Software Re-Engineering,http://arxiv.org/abs/1112.4016v1 -"Gain Scheduling Control of Gas Turbine 
Engines: Absolute Stability by - Finding a Common Lyapunov Matrix",http://arxiv.org/abs/1206.5344v1 -Efficiency of the general quantum-mechanical Carnot engine,http://arxiv.org/abs/1208.2222v2 -Quantum-mechanical engine models and their efficiencies,http://arxiv.org/abs/1302.0469v1 -"Requirements engineering current practice and capability in small and - medium software development enterprises in New Zealand",http://arxiv.org/abs/1407.6102v1 -Toward Reverse Engineering of VBA Based Excel Spreadsheet Applications,http://arxiv.org/abs/1503.03401v1 -Attainability of Carnot Efficiency with Autonomous Engines,http://dx.doi.org/10.1103/PhysRevE.92.050101 -"Towards the Holodeck: Fully Immersive Virtual Reality Visualisation of - Scientific and Engineering Data",http://dx.doi.org/10.1145/2683405.2683424 -"Classical and Quantum Szilard Engine under Generalized Uncertainty - Principle Effect",http://arxiv.org/abs/1607.02690v1 -"Thermodynamics, entropy and waterwheels",http://arxiv.org/abs/1609.05090v2 -Carnot's theorem and Szilárd engine,http://arxiv.org/abs/1611.08014v1 -"Irreversible thermodynamic analysis and application for molecular heat - engines",http://dx.doi.org/10.1016/j.chemphys.2017.07.009 -"Entangled quantum Otto heat engines based on two-spin systems with the - Dzyaloshinski-Moriya interaction",http://dx.doi.org/10.1007/s11128-017-1665-0 -Efficient Quantum Measurement Engine,http://dx.doi.org/10.1103/PhysRevLett.120.260601 -Lossless Brownian information engine,http://dx.doi.org/10.1103/PhysRevLett.120.020601 -Holographic heat engine in Horndeski model with the $k$-essence sector,http://dx.doi.org/10.1007/s11433-018-9315-8 -"Descartes: A PITest Engine to Detect Pseudo-Tested Methods - Tool - Demonstration",http://dx.doi.org/10.1145/3238147.3240474 -"Thermodynamics education for energy transformation: a Stirling Engine - experiment",http://dx.doi.org/10.1088/1361-6552/ac142c -Geometric Heat Engines Featuring Power that Grows with Efficiency,http://dx.doi.org/10.1103/PhysRevLett.116.160601 -Reflections on Cyberethics Education for Millennial Software Engineers,http://arxiv.org/abs/1703.00619v1 -"What If People Learn Requirements Over Time? 
A Rough Introduction to - Requirements Economics",http://arxiv.org/abs/1711.09092v1 -"Software Engineering und Software Engineering Forschung im Zeitalter der - Digitalisierung",http://arxiv.org/abs/2002.10835v1 -Evaluation of Query Generators for Entity Search Engines,http://arxiv.org/abs/1003.4418v1 -"Teaching Introductory Electrical Engineering Course to CS Students in a - Russian University",http://arxiv.org/abs/1107.3785v1 -"Requirements Engineering Methods: A Classification Framework and - Research Challenges",http://arxiv.org/abs/1203.1717v1 -"Hybrid Data Mining Technique for Knowledge Discovery from Engineering - Materials' Data sets",http://arxiv.org/abs/1209.4169v1 -Realization of Ontology Web Search Engine,http://arxiv.org/abs/1702.06934v1 -"Virtual Sensor Modelling using Neural Networks with Coefficient-based - Adaptive Weights and Biases Search Algorithm for Diesel Engines",http://arxiv.org/abs/1712.08319v1 -On Testing Quantum Programs,http://dx.doi.org/10.1109/ICSE-NIER.2019.00023 -"On Integrating Design Thinking for a Human-centered Requirements - Engineering",http://arxiv.org/abs/1908.07223v2 -Endoreversible Otto engines at maximal power,http://dx.doi.org/10.1515/jnet-2020-0039 -"Towards a Systems Engineering based Automotive Product Engineering - Process",http://arxiv.org/abs/2007.11897v1 -"Research, Develop, Deploy: Building a Full Spectrum Software Engineering - and Research Department",http://arxiv.org/abs/2010.04660v1 -How Research Software Engineers Can Support Scientific Software,http://arxiv.org/abs/2010.07381v1 -Engineering Education in the Age of Autonomous Machines,http://arxiv.org/abs/2102.07900v1 -SELM: Software Engineering of Machine Learning Models,http://arxiv.org/abs/2103.11249v1 -"Team-oriented Consistency Checking of Heterogeneous Engineering - Artifacts",http://arxiv.org/abs/2103.14860v1 -"A Numerical Method to Find the Optimal Thermodynamic Cycle in - Microscopic Heat Engine",http://dx.doi.org/10.1007/s10955-021-02813-2 -Quantum-Enhanced Heat Engine Based on Superabsorption,http://dx.doi.org/10.1103/PhysRevLett.128.180602 -An Exploration of the Mentorship Needs of Research Software Engineers,http://arxiv.org/abs/2110.02251v1 -"Towards More Accountable Search Engines: Online Evaluation of - Representation Bias",http://arxiv.org/abs/2110.08835v1 -How can potential engineers seek and discover their genuine vocations?,http://arxiv.org/abs/2203.01574v1 -"Mechanism Design Theory in Control Engineering: A Tutorial and Overview - of Applications in Communication, Power Grid, Transportation, and Security - Systems",http://arxiv.org/abs/2212.00756v1 -How Cyber Criminal Use Social Engineering To Target Organizations,http://arxiv.org/abs/2212.12309v1 -"Battle of the Blocs: Quantity and Quality of Software Engineering - Research by Origin",http://dx.doi.org/10.1109/APSEC57359.2022.00095 -Thermodynamic performance bounds for radiative heat engines,http://arxiv.org/abs/2304.03942v2 -"Hardware Honeypot: Setting Sequential Reverse Engineering on a Wrong - Track",http://arxiv.org/abs/2305.03707v1 -"Artificial intelligence-aided protein engineering: from topological data - analysis to deep protein language models",http://arxiv.org/abs/2307.14587v1 -Content-based Recommendation Engine for Video Streaming Platform,http://arxiv.org/abs/2308.08406v1 -"Summary of the 4th International Workshop on Requirements Engineering - and Testing (RET 2017)",http://dx.doi.org/10.1145/3149485.3149522 -The physical observer in a Szilard engine with 
uncertainty,http://arxiv.org/abs/2309.10580v1 -"Spectral Properties of the Prompt X-ray emission and Afterglow from the - Gamma-Ray Burst of 28 February 1997",http://dx.doi.org/10.1086/311132 -Recent Results on Gamma-Ray Bursts with the Bepposax Satellite,http://arxiv.org/abs/astro-ph/9802157v1 -"GRB990123, The Optical Flash and The Fireball Model",http://dx.doi.org/10.1086/312039 -GRB990510: on the possibility of a beamed X-ray afterglow,http://dx.doi.org/10.1086/309159 -"Detectability of Gravitational Radiation from Prompt and Delayed Star - Collapse to a Black Hole",http://arxiv.org/abs/astro-ph/0003321v3 -Dust Echos from Gamma Ray Bursts,http://dx.doi.org/10.1086/312670 -"The cepheid-like relationship between variability and luminosity - explained within the ``cannonball model'' of Gamma-Ray bursts",http://dx.doi.org/10.1051/0004-6361:20000546 -Possible detection of hard X-ray afterglows of short gamma-ray bursts,http://dx.doi.org/10.1051/0004-6361:20011485 -Hard X-ray afterglows of short GRBs,http://dx.doi.org/10.1063/1.1579331 -Constraints on the Bulk Lorentz Factor of GRB 990123,http://dx.doi.org/10.1063/1.1579332 -The nature of the prompt X-ray and radio emission from SN2002ap,http://dx.doi.org/10.1051/0004-6361:20021576 -"Some recent developments in gamma-ray burst afterglow and prompt - emission models",http://arxiv.org/abs/astro-ph/0212015v1 -"External Shock Model for the Prompt Phase of Gamma Ray Bursts: - Implications for GRB Source Models",http://arxiv.org/abs/astro-ph/0301340v1 -Hard GRB spectra: thermal vs non-thermal emission,http://arxiv.org/abs/astro-ph/0301390v1 -Magnetically powered prompt radiation and flow acceleration in GRB,http://arxiv.org/abs/astro-ph/0302468v1 -Early Optical Afterglows from Wind-Type Gamma-Ray Bursts,http://dx.doi.org/10.1086/378283 -"Ultrahigh Energy Cosmic Rays and Prompt TeV Gamma Rays from Gamma Ray - Bursts",http://dx.doi.org/10.1007/BF02705371 -"Pair loading in Gamma-Ray Burst Fireball And Prompt Emission From - Pair-Rich Reverse Shock",http://dx.doi.org/10.1086/379231 -X-ray and radio prompt emission from a hypernova SN 2002ap,http://dx.doi.org/10.1016/j.nuclphysbps.2004.04.054 -"Linear polarization on Gamma-Ray Bursts: from the prompt to the late - afterglow",http://dx.doi.org/10.1063/1.1810842 -Photometric Identification of Young Stripped-Core Supernovae,http://dx.doi.org/10.1086/421875 -MASTER: The Mobile Astronomical System of Telescope-Robots,http://dx.doi.org/10.1002/asna.200410284 -"Gamma-Ray Burst Prompt Emission: Implications from Shock Acceleration - Theory",http://dx.doi.org/10.1016/j.asr.2005.02.004 -"Monitoring of the prompt radio emission from the unusual supernova - 2004dj in NGC2403",http://dx.doi.org/10.1086/430049 -"Early optical-IR emission from GRB 041219a: neutron-rich internal shocks - and a mildly magnetized external reverse shock",http://dx.doi.org/10.1086/432616 -"Evidence for a long duration component in the prompt emission of short - Gamma-Ray Bursts detected with BeppoSAX",http://dx.doi.org/10.1086/430759 -"The GRB of 1999 January 23: prompt emission and broad-band afterglow - modeling",http://dx.doi.org/10.1393/ncc/i2005-10091-7 -Prompt Mergers of Neutron Stars with Black Holes,http://dx.doi.org/10.1086/431583 -Measuring diffuse neutrino fluxes with IceCube,http://dx.doi.org/10.1088/1475-7516/2005/05/010 -HETE-2 Localization and Observations of the Gamma-Ray Burst GRB 020813,http://dx.doi.org/10.1393/ncc/i2005-10056-x -A key to the spectral variability of prompt GRBs,http://dx.doi.org/10.1063/1.2207882 -Prompt 
emission spectra from the photosphere of a GRB,http://dx.doi.org/10.1051/0004-6361:20065000 -Stochastic wake field particle acceleration in Gamma-Ray Bursts,http://dx.doi.org/10.1063/1.2207864 -GRB 050315: A step toward the uniqueness of the overall GRB structure,http://dx.doi.org/10.1393/ncb/i2007-10268-y -"Polarization in the prompt emission of gamma-ray bursts and their - afterglows",http://dx.doi.org/10.1088/1367-2630/8/8/131 -A Unified Model of the Prompt Optical Emission of Gamma-Ray Bursts,http://dx.doi.org/10.1086/516839 -On the Detectability of Prompt Coherent GRB Radio Emission,http://dx.doi.org/10.1086/513424 -GRB060218: A Relativistic Supernova Shock Breakout,http://dx.doi.org/10.1086/520715 -"Behavior of X-Ray Dust Scattering and Implications for X-Ray Afterglows - of Gamma-Ray Bursts",http://dx.doi.org/10.1086/513139 -The time ending the shallow decay of the X-ray light curves of long GRBs,http://arxiv.org/abs/astro-ph/0703700v1 -Observation of Isolated High-ET Photons in Photoproduction at HERA,http://dx.doi.org/10.1016/S0370-2693(97)01164-7 -Measurement of inclusive prompt photon photoproduction at HERA,http://dx.doi.org/10.1016/S0370-2693(99)01450-1 -"Study of the effective transverse momentum of partons in the proton - using prompt photons in photoproduction at HERA",http://dx.doi.org/10.1016/S0370-2693(01)00615-3 -"Measurement of Prompt Charm Meson Production Cross Sections in p anti-p - Collisions at s**(1/2) = 1.96 TeV",http://dx.doi.org/10.1103/PhysRevLett.91.241804 -"Spectra of prompt electrons from decays of B+ and B0 mesons and ratio of - inclusive semielectronic branching fractions",http://dx.doi.org/10.1016/j.physletb.2005.03.060 -"Measurement of $σ_{χ_{c2}} {\cal B}(χ_{c2} \to J/ψ - γ)/ σ_{χ_{c1}} {\cal B}(χ_{c1} \to J/ψγ)$ in - $p\bar{p}$ Collisions at $\sqrt{s} =$ 1.96 TeV",http://dx.doi.org/10.1103/PhysRevLett.98.232001 -Fragmentation production of J/psi and psi' at the Tevatron,http://dx.doi.org/10.1016/0370-2693(94)90182-1 -Color-Octet Fragmentation and the psi' Surplus at the Tevatron,http://dx.doi.org/10.1103/PhysRevLett.74.3327 -"Inclusive Prompt Photon Production in Hadronic Final States of $e^+e^-$ - Annihilation",http://dx.doi.org/10.1103/PhysRevD.53.1124 -"Analytic Calculation of Prompt Photon plus Associated Heavy Flavor at - Next-to-Leading Order in QCD",http://dx.doi.org/10.1103/PhysRevD.54.2279 -"Polarized and Unpolarized Double Prompt Photon Production in Next to - Leading Order QCD",http://dx.doi.org/10.1016/0550-3213(96)00139-3 -"Production of a Prompt Photon in Association with Charm at - Next-to-Leading Order in QCD",http://dx.doi.org/10.1103/PhysRevD.54.1896 -Massive Lepton Pairs as a Prompt Photon Surrogate,http://dx.doi.org/10.1103/PhysRevD.58.074012 -"Spin Dependence of Associated Production of a Prompt Photon and a Charm - Quark at Next-to-Leading Order in QCD",http://dx.doi.org/10.1103/PhysRevD.58.114024 -Sudakov Resummation for Prompt-Photon Production in Hadron Collisions,http://dx.doi.org/10.1088/1126-6708/1998/07/024 -Expected Muon Energy Spectra and Zenithal Distributions Deep Underwater,http://arxiv.org/abs/hep-ph/9905399v1 -Charmonium suppression by gluon bremsstrahlung in p-A and A-B collisions,http://dx.doi.org/10.1007/s100500050387 -Isolated-photon production in polarized pp collisions,http://dx.doi.org/10.1016/S0550-3213(99)00575-1 -"Spin Dependence of Massive Lepton Pair Production in Proton-Proton - Collisions",http://dx.doi.org/10.1103/PhysRevD.62.014014 -Polarization of Prompt J/psi at the 
Tevatron,http://dx.doi.org/10.1103/PhysRevD.62.094005 -Prompt muon contribution to the flux underwater,http://dx.doi.org/10.1103/PhysRevD.63.096004 -Prompt photons at RHIC,http://dx.doi.org/10.1103/PhysRevC.63.041901 -"Atmospheric muon fluxes underwater as a tool to probe the small-x gluon - distribution",http://arxiv.org/abs/hep-ph/0106051v1 -Cross section of isolated prompt photons in hadron-hadron collisions,http://dx.doi.org/10.1088/1126-6708/2002/05/028 -Leptogenesis in a prompt decay scenario,http://arxiv.org/abs/hep-ph/0210274v2 -"Fluxes of atmospheric muons underwater depending on the small-x gluon - density",http://dx.doi.org/10.1088/0954-3899/29/2/314 -"J/psi plus prompt-photon associated production in two-photon collisions - at next-to-leading order",http://dx.doi.org/10.1103/PhysRevD.71.014016 -"Renormalization group approach to Sudakov resummation in prompt photon - production",http://dx.doi.org/10.1016/j.nuclphysb.2005.07.036 -"Probing Very High Energy Prompt Muon and Neutrino fluxes and the cosmic - ray knee via Underground Muons",http://dx.doi.org/10.1088/1475-7516/2006/07/011 -"Hard pion and prompt photon at RHIC, from single to double inclusive - production",http://dx.doi.org/10.1088/1126-6708/2006/09/015 -Hard Photoproduction at HERA,http://arxiv.org/abs/hep-ph/0702052v1 -"Prompt muon-induced fission: a sensitive probe for nuclear energy - dissipation and fission dynamics",http://arxiv.org/abs/nucl-th/0403087v1 -Prompt Beta Spectroscopy as a Diagnostic for Mix in Ignited NIF Capsules,http://dx.doi.org/10.1142/S0217732306020317 -"Enhancement of prompt photons in ultrarelativistic proton-proton - collisions from nonlinear gluon evolution at small-$x$",http://dx.doi.org/10.1103/PhysRevC.75.068202 -"GRB 050315: A step toward the uniqueness of the overall GRB structure - and the true nature of long GRBs",http://arxiv.org/abs/0705.2453v1 -"Prompt GeV-TeV Emission of Gamma-Ray Bursts Due to High-Energy Protons, - Muons and Electron-Positron Pairs",http://dx.doi.org/10.1086/522939 -Diffusive radiation in Langmuir turbulence produced by jet shocks,http://dx.doi.org/10.1111/j.1365-2966.2007.12059.x -Quenching of photon and pion spectra at intermediate RHIC energy,http://dx.doi.org/10.1088/1126-6708/2007/07/032 -"Prompt photon hadroproduction at high energies in off-shell gluon-gluon - fusion",http://dx.doi.org/10.1103/PhysRevD.77.074024 -Observations of Gamma-ray Bursts with VERITAS and Whipple,http://arxiv.org/abs/0709.3830v1 -The rapid decline of the prompt emission in Gamma-Ray Bursts,http://dx.doi.org/10.1086/587952 -"Chaotic scattering with direct processes: A generalization of Poisson's - kernel for non-unitary scattering matrices",http://dx.doi.org/10.1088/1751-8113/41/1/015103 -The 3He + 4He --> 7Be Astrophysical S-factor,http://dx.doi.org/10.1103/PhysRevC.76.055801 -"A Search for Prompt Very High Energy Emission from Satellite-detected - Gamma-ray Bursts using Milagro",http://arxiv.org/abs/0710.2350v1 -A universal GRB photon energy-peak luminosity relation,http://arxiv.org/abs/0710.3727v1 -Azimuthal Asymmetry of Prompt Photons in Nuclear Collisions,http://dx.doi.org/10.1016/j.nuclphysa.2008.03.013 -"Spectral split in prompt supernova neutrino burst: Analytic three-flavor - treatment",http://dx.doi.org/10.1103/PhysRevD.77.113007 -Theoretical interpretation of GRB060124: preliminary results,http://dx.doi.org/10.1142/9789812834300_0307 -"Rates, Progenitors and Cosmic Mix of Type Ia Supernovae",http://dx.doi.org/10.1111/j.1365-2966.2008.13445.x -Probing gluon shadowing with 
forward photons at RHIC,http://arxiv.org/abs/0806.0769v2 -"Relationship between pulse width and energy in GRB 060124: from X-ray to - gamma-ray bands",http://dx.doi.org/10.1016/j.newast.2008.01.005 -"Prompt high-energy neutrinos from gamma-ray bursts in photospheric and - synchrotron self-Compton scenarios",http://dx.doi.org/10.1103/PhysRevD.78.101302 -Flares in Gamma Ray Bursts,http://dx.doi.org/10.1063/1.3027911 -"Investigating the high energy QCD approaches for prompt photon - production at the LHC",http://dx.doi.org/10.1140/epjc/s10052-008-0842-9 -"Synchrotron Emissions in GRB Prompt Phase Using a Semi Leptonic and - Hadronic Model",http://arxiv.org/abs/0810.2555v1 -"Gamma-ray Burst 080319B: Evidence for Relativistic Turbulence, Not - Internal Shocks",http://dx.doi.org/10.1111/j.1365-2966.2009.14539.x -Gamma Ray Bursts Cook Book II: Simulation,http://dx.doi.org/10.1111/j.1365-2966.2009.14928.x -"Investigating the Possibility of Screening High-z GRBs based on BAT - Prompt Emission Properties",http://dx.doi.org/10.1063/1.3155945 -"Where are Swift Gamma-ray bursts beyond the ""synchrotron deathline""?",http://dx.doi.org/10.1111/j.1365-2966.2009.14747.x -"Dominant Spin-Flip Effects for the Hadronic Produced $J/ψ$ - Polarization at TEVATRON",http://dx.doi.org/10.1103/PhysRevD.80.034010 -MASTER prompt and follow-up GRB observations,http://dx.doi.org/10.1155/2010/763629 -"Study of non-collinear parton dynamics in the prompt photon - photoproduction at HERA",http://dx.doi.org/10.1103/PhysRevD.81.094027 -"Event-by-event study of prompt neutrons from 239Pu(n,f)",http://dx.doi.org/10.1103/PhysRevC.80.044611 -RHESSI Tests of Quasi-Thermal Gamma-Ray Burst Spectral Models,http://dx.doi.org/10.1088/0004-637X/714/1/881 -Prompt Ia Supernovae Are Significantly Delayed,http://dx.doi.org/10.1088/0004-637X/707/1/74 -Prompt Photons in Photoproduction at HERA,http://dx.doi.org/10.1140/epjc/s10052-010-1240-7 -Polarization of prompt J/psi in proton-proton collisions at RHIC,http://dx.doi.org/10.1103/PhysRevD.81.014020 -"SSS in young stellar populations and the ""prompt"" component of Type Ia - supernovae",http://dx.doi.org/10.1002/asna.200911331 -Hard photon production and matrix-element parton-shower merging,http://dx.doi.org/10.1103/PhysRevD.81.034026 -"Gamma Ray Bursts in the Fermi era: the spectral energy distribution of - the prompt emission",http://dx.doi.org/10.1088/2041-8205/714/2/L299 -Testing two-component jet models of GRBs with orphan afterglows,http://dx.doi.org/10.1063/1.3509303 -"Prompt Tidal Disruption of Stars as an Electromagnetic Signature of - Supermassive Black Hole Coalescence",http://dx.doi.org/10.1111/j.1365-2966.2010.17880.x -Phonon-mediated sticking of electrons at dielectric surfaces,http://dx.doi.org/10.1103/PhysRevB.82.125408 -Secondary atmospheric tau neutrino production,http://dx.doi.org/10.1103/PhysRevD.82.057302 -Gamma-ray bursts: connecting the prompt emission with the afterglow,http://arxiv.org/abs/1008.0478v1 -"Modeling of the Interaction of GRB Prompt Emission with the Circumburst - Medium",http://dx.doi.org/10.1134/S1063773710100014 -Gamma Ray Bursts Spectral--Energy correlations: recent results,http://dx.doi.org/10.1017/S1743921310016376 -Inclusive jet production at Tevatron in the Regge limit of QCD,http://arxiv.org/abs/1011.3131v1 -"Current and Future Constraints on Dark Matter from Prompt and - Inverse-Compton Photon Emission in the Isotropic Diffuse Gamma-ray Background",http://dx.doi.org/10.1103/PhysRevD.85.043509 -Origin of the bright prompt optical emission in the 
naked eye burst,http://dx.doi.org/10.1063/1.3509278 -"Testing for kt-factorization with inclusive prompt photon production at - LHC",http://dx.doi.org/10.1016/j.physletb.2011.03.057 -"Spectral evolution of long Gamma Ray Burst prompt emission: - electrostatic acceleration and adiabatic expansion",http://dx.doi.org/10.1088/2041-8205/727/1/L1 -"X-ray and Optical Plateau Following the Main Bursts in Gamma-Ray Bursts - and SNe II-P: A hint to the similar late injection behavior?",http://arxiv.org/abs/1104.0080v2 -"Measurement of the Cross Section for Prompt Isolated Diphoton Production - in p\bar p Collisions at \sqrt{s} = 1.96 TeV",http://dx.doi.org/10.1103/PhysRevLett.107.102003 -"Measurement of the Cross Section for Prompt Isolated Diphoton Production - in p\bar p Collisions at \sqrt{s} = 1.96 TeV",http://dx.doi.org/10.1103/PhysRevD.84.052006 -"Measurement of the inclusive isolated prompt photon cross-section in pp - collisions at sqrt(s)= 7 TeV using 35 pb-1 of ATLAS data",http://dx.doi.org/10.1016/j.physletb.2011.11.010 -"Measurement of the Differential Cross Section for Isolated Prompt Photon - Production in pp Collisions at 7 TeV",http://dx.doi.org/10.1103/PhysRevD.84.052011 -Prompt J/Psi production at LHC: new evidence for the kt-factorization,http://dx.doi.org/10.1103/PhysRevD.85.014034 -"Follow the BAT: Monitoring Swift BAT FoV for Prompt Optical Emission - from Gamma-ray Bursts",http://dx.doi.org/10.1063/1.3621814 -"Interaction of a highly magnetized impulsive relativistic flow with an - external medium",http://dx.doi.org/10.1111/j.1365-2966.2012.20473.x -"Evidence of Deterministic Components in the Apparent Randomness of GRBs: - Clues of a Chaotic Dynamic",http://arxiv.org/abs/1109.5891v1 -"Measurement of the Cross Section for Prompt Isolated Diphoton Production - in p\bar p Collisions at \sqrt s = 1.96 TeV",http://arxiv.org/abs/1109.6827v1 -Broadband study of GRB 091127: a sub-energetic burst at higher redshift?,http://dx.doi.org/10.1088/0004-637X/761/1/50 -"Observational signatures of sub-photospheric radiation mediated shocks - in the prompt phase of GRBs",http://dx.doi.org/10.1088/0004-637X/756/2/174 -"Implications of gauge-mediated supersymmetry breaking with vector-like - quarks and a ~125 GeV Higgs boson",http://dx.doi.org/10.1103/PhysRevD.86.035017 -"Measurement of the relative prompt production rate of chi(c2) and - chi(c1) in pp collisions at sqrt(s) = 7 TeV",http://dx.doi.org/10.1140/epjc/s10052-012-2251-3 -"Search for anomalous production of prompt like-sign lepton pairs at - sqrt(s) = 7 TeV with the ATLAS detector",http://dx.doi.org/10.1007/JHEP12(2012)007 -Gamma-ray fluxes in Oklo natural reactors,http://dx.doi.org/10.1103/PhysRevC.86.054602 -Open heavy flavor and J/psi at RHIC and LHC within a transport model,http://dx.doi.org/10.1088/1742-6596/446/1/012044 -"Measurement of the cross section for prompt isolated diphoton production - using the full CDF Run II data sample",http://dx.doi.org/10.1103/PhysRevLett.110.101801 -Long and short high energy components presented in GRBs,http://arxiv.org/abs/1212.4418v1 -"Prompt Emission from Tidal Disruptions of White Dwarfs by Intermediate - Mass Black Holes",http://dx.doi.org/10.1051/epjconf/20123902007 -Subphotospheric Neutrinos from Gamma-Ray Bursts: The Role of Neutrons,http://dx.doi.org/10.1103/PhysRevLett.111.131102 -"Search for WH production with a light Higgs boson decaying to prompt - electron-jets in proton-proton collisions at sqrt(s)=7 TeV with the ATLAS - detector",http://dx.doi.org/10.1088/1367-2630/15/4/043009 
-"Centrality dependence of inclusive prompt photon production in d+Au, - Au+Au, p+Pb, and Pb+Pb collisions",http://dx.doi.org/10.1007/JHEP05(2013)030 -Radiative Mechanisms in GRB prompt emission,http://dx.doi.org/10.1051/eas/1361016 -"Neutral-pion reactions induced by chiral anomaly in strong magnetic - fields",http://arxiv.org/abs/1305.7224v2 -Isolating Prompt Photons with Narrow Cones,http://dx.doi.org/10.1007/JHEP09(2013)007 -"Statistical analysis of the prompt and afterglow emission of the three - groups of gamma-ray bursts",http://arxiv.org/abs/1309.1015v1 -A Complete Sample of Long Bright Swift GRBs,http://arxiv.org/abs/1309.2298v1 -"Forward Neutral Pion Cross Section and Spin Asymmetry Measurements at - STAR",http://arxiv.org/abs/1309.3216v1 -"Measurement of the inclusive isolated prompt photon cross section in pp - collisions at sqrt(s) = 7 TeV with the ATLAS detector using 4.6 fb-1",http://dx.doi.org/10.1103/PhysRevD.89.052004 -The LHCb prompt charm triggers,http://arxiv.org/abs/1311.7585v1 -Gamma-Ray Bursts: Temporal Scales and the Bulk Lorentz Factor,http://dx.doi.org/10.1088/0004-637X/805/2/86 -A size-duration trend for gamma-ray burst progenitors,http://dx.doi.org/10.1088/2041-8205/794/1/L8 -"QCD analysis and effective temperature of direct photons in lead-lead - collisions at the LHC",http://arxiv.org/abs/1409.3363v1 -"Heavy-flavour production and multiplicity dependence in pp and p--Pb - collisions with ALICE",http://arxiv.org/abs/1409.4675v2 -Electromagnetic prompt response in an elastic wave cavity,http://dx.doi.org/10.1209/0295-5075/110/54003 -Calculation of conventional and prompt lepton fluxes at very high energy,http://arxiv.org/abs/1503.00544v2 -Neutrinos from Gamma Ray Bursts in the IceCube and ARA Era,http://arxiv.org/abs/1503.07146v1 -"Confronting GRB prompt emission with a model for subphotospheric - dissipation",http://dx.doi.org/10.1093/mnrasl/slv114 -Properties of Low Luminosity Afterglow Gamma-ray Bursts,http://arxiv.org/abs/1506.05521v1 -"Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb - collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV",http://dx.doi.org/10.1007/JHEP11(2015)205 -"Inclusive prompt $χ_{c,b}(1^{++})$ production at the LHC",http://dx.doi.org/10.1140/epjc/s10052-015-3840-8 -Soft X-ray Observation of the Prompt Emission of GRB100418A,http://dx.doi.org/10.1093/pasj/psv075 -Relativistic corrections to prompt $J/ψ$ photo- and hadroproduction,http://dx.doi.org/10.1103/PhysRevD.90.014045 -"Relativistic corrections to $J/ψ$ polarization in photo- and - hadroproduction",http://dx.doi.org/10.1103/PhysRevD.92.014009 -"Inclusive production of the $X(4140)$ state in $p \overline p $ - collisions at D0",http://arxiv.org/abs/1508.07846v1 -"Prompt charmonia production and polarization at LHC in the NRQCD with - kt-factorization. 
Part II: $χ_c$ mesons",http://dx.doi.org/10.1103/PhysRevD.93.094012 -"A Search for Correlations between Gamma-Ray Burst Variability and - Afterglow Onset",http://dx.doi.org/10.1093/mnras/stv2129 -"Systematic study of complete fusion suppression in reactions involving - weakly bound nuclei at energies above the Coulomb barrier",http://dx.doi.org/10.1103/PhysRevC.93.014615 -"Optical-infrared flares and radio afterglows by Jovian planets - inspiraling into their host stars",http://dx.doi.org/10.1093/mnras/stw3207 -A MapReduce Approach to NoSQL RDF Databases,http://arxiv.org/abs/1601.01770v1 -Photospheric Emission in Gamma-Ray Bursts,http://dx.doi.org/10.1142/S021827181730018X -"A Blind Search for Prompt Gamma-ray Counterparts of Fast Radio Bursts - with Fermi-LAT Data",http://dx.doi.org/10.1093/mnras/stw1206 -"Constraining Magnetization of Gamma-Ray Bursts Outflows using Prompt - Emission Fluence",http://dx.doi.org/10.3847/1538-4357/aa974e -"Measurement of the inclusive isolated prompt photon cross section in - $pp$ collisions at $\sqrt{s} = 8$ TeV with the ATLAS detector",http://dx.doi.org/10.1007/JHEP08(2016)005 -"Prompt-photon plus jet photoproduction with ZEUS at DESY HERA in the - parton Reggeization approach",http://dx.doi.org/10.1016/j.nuclphysbps.2015.09.313 -"Reconsideration of the inclusive prompt photon production at LHC with - kt-factorization",http://dx.doi.org/10.1103/PhysRevD.94.034020 -"Complete Nonrelativistic-QCD Prediction for Prompt Double $J/ψ$ - Hadroproduction",http://dx.doi.org/10.1103/PhysRevLett.115.022002 -Measurement of the photon and jet production with the ATLAS detector,http://arxiv.org/abs/1609.03825v1 -"Heavy-flavour production in pp collisions and correlations in pp and - p-Pb collisions measured with ALICE at the LHC",http://dx.doi.org/10.1088/1742-6596/779/1/012024 -"Separating Double-Beta Decay Events from Solar Neutrino Interactions in - a Kiloton-Scale Liquid Scintillator Detector By Fast Timing",http://dx.doi.org/10.1016/j.nima.2016.12.033 -"Neutron activation and prompt gamma intensity in Ar/CO$_{2}$-filled - neutron detectors at the European Spallation Source",http://arxiv.org/abs/1701.08117v2 -"Inverting the mass hierarchy of jet quenching effects with prompt - $b$-jet substructure",http://dx.doi.org/10.1016/j.physletb.2019.04.052 -"$D \bar D$ asymmetry at low and high energies and possible consequences - for prompt atmospheric neutrinos",http://dx.doi.org/10.5506/APhysPolB.49.1383 -"Particle physics origin of the 5 MeV bump in the reactor antineutrino - spectrum?",http://dx.doi.org/10.1103/PhysRevD.99.055045 -"Associated non-prompt $J/ψ+ μ$ and $J/ψ+ J/ψ$ production at - LHC as a test for TMD gluon density",http://dx.doi.org/10.1140/epjc/s10052-018-6297-8 -"High resolution measurement of tagged two-neutron energy and angle - correlations in Cf-252(sf)",http://dx.doi.org/10.1103/PhysRevC.100.014605 -"Next-to-leading power threshold effects for resummed prompt photon - production",http://dx.doi.org/10.1103/PhysRevD.100.056009 -Properties of $Z_c^{\pm}(3900)$ produced in $p \bar p$ collision,http://dx.doi.org/10.1103/PhysRevD.100.012005 -"Equation of state constraints from the threshold binary mass for prompt - collapse of neutron star mergers",http://dx.doi.org/10.1103/PhysRevLett.125.141103 -"Speech Paralinguistic Approach for Detecting Dementia Using Gated - Convolutional Neural Network",http://dx.doi.org/10.1587/transinf.2020EDP7196 -"Prompt acceleration of the $μ^+$ beam in a donut wakefield driven by a - shaped Laguerre-Gaussian laser 
pulse",http://arxiv.org/abs/2105.02003v1 -True Few-Shot Learning with Language Models,http://arxiv.org/abs/2105.11447v1 -Generative Adversarial Imitation Learning for Empathy-based AI,http://arxiv.org/abs/2105.13328v1 -"Multilepton and Lepton Jet Probes of Sub-Weak-Scale Right-Handed - Neutrinos",http://dx.doi.org/10.1103/PhysRevD.91.093010 -"Extremely Soft X-ray Flash as the indicator of off-axis orphan GRB - afterglow",http://arxiv.org/abs/1504.07288v1 -"The anatomy of a long gamma-ray burst: a simple classification scheme - for the emission mechanism(s)",http://dx.doi.org/10.3847/0004-637X/820/1/68 -"The Feasibility of Dynamically Granted Permissions: Aligning Mobile - Privacy with User Preferences",http://arxiv.org/abs/1703.02090v1 -Prompt Gamma Ray Burst emission from gradual magnetic dissipation,http://dx.doi.org/10.1093/mnras/stx717 -"Off-axis prompt X-ray transients from the cocoon of short gamma-ray - bursts",http://dx.doi.org/10.3847/2041-8213/aa8f3d -Double Gravitational Wave Mergers,http://dx.doi.org/10.1093/mnras/sty2249 -"Modeling the high-energy emission in GRB 110721A and implications on the - early multiwavelength and polarimetric observations",http://dx.doi.org/10.3847/1538-4357/aa8d65 -On the synchrotron spectrum of GRB prompt emission,http://dx.doi.org/10.3847/1538-4357/aaa0ca -Prompt emission from the counter jet of a short gamma-ray burst,http://dx.doi.org/10.1093/ptep/pty012 -"Prompt photon yield and $v_2$ coefficient from gluon fusion induced by - magnetic field in heavy-ion collision",http://dx.doi.org/10.1051/epjconf/201817208004 -"On the Relationship Between Scintillation Anisotropy and Crystal - Structure in Pure Crystalline Organic Scintillator Materials",http://dx.doi.org/10.1109/TNS.2018.2833030 -Learning to Rank for Plausible Plausibility,http://arxiv.org/abs/1906.02079v1 -Cosmology in the presence of multiple light moduli,http://dx.doi.org/10.1088/1475-7516/2019/11/035 -"Determining the average prompt-fission-neutron multiplicity for - $^{239}$Pu($n$,$f$) via a $^{240}$Pu($α$,$α^{\prime}f$) surrogate - reaction",http://dx.doi.org/10.1103/PhysRevC.100.064609 -"Deciphering the $X(3872)$ via its polarization in prompt production at - the CERN LHC",http://dx.doi.org/10.1103/PhysRevLett.123.032001 -"A Prompt Report on the Performance of Intel Optane DC Persistent Memory - Module",http://dx.doi.org/10.1587/transinf.2019EDL8141 -Reputation Agent: Prompting Fair Reviews in Gig Markets,http://dx.doi.org/10.1145/3366423.3380199 -"Prompt $J/ψ$ photoproduction within the non-relativistic QCD - framework at the CEPC",http://dx.doi.org/10.1140/epjc/s10052-020-8276-0 -"On the quark component in prompt photon and electroweak gauge boson - production at high energies",http://dx.doi.org/10.1088/0954-3899/36/12/125008 -"$J / ψ$ electromagnetic production associated with light hadrons at - $B$ factories",http://arxiv.org/abs/1003.5566v1 -Polarization of prompt J/psi in pp -> J/psi+X at sqrt{s}=200 GeV,http://dx.doi.org/10.1103/PhysRevD.83.037501 -"Measurement of the inclusive isolated prompt photon cross section in pp - collisions at sqrt(s) = 7 TeV with the ATLAS detector",http://dx.doi.org/10.1103/PhysRevD.83.052005 -"Single jet and prompt-photon inclusive production with multi-Regge - kinematics: From Tevatron to LHC",http://dx.doi.org/10.1103/PhysRevD.84.074017 -"The Role of Stochastic Acceleration in the Prompt Emission of Gamma-Ray - Bursts: Application to Hadronic Injection",http://dx.doi.org/10.1088/0004-637X/746/2/164 -"Constraints on dark matter annihilation from 
M87: Signatures of prompt - and inverse-Compton gamma rays",http://dx.doi.org/10.1140/epjc/s10052-011-1815-y -"$Υ(1S)$ prompt production at the Tevatron and LHC in - nonrelativistic QCD",http://dx.doi.org/10.1103/PhysRevD.85.114003 -"A universal scaling for short and long gamma-ray bursts: - E_{X,iso}-E_{gamma,iso}-E_{pk}",http://dx.doi.org/10.1111/j.1365-2966.2012.21487.x -How robust is a thermal photon interpretation of the ALICE low-p_T data?,http://dx.doi.org/10.1007/JHEP10(2013)119 -"An inverse Compton origin for the 55 GeV photon in the late afterglow of - GRB 130907A",http://dx.doi.org/10.1088/0004-637X/788/2/156 -Simulations of prompt many-body ionization in a frozen Rydberg gas,http://dx.doi.org/10.1103/PhysRevA.90.022712 -"Measurement of prompt psi(2S) to J/psi yield ratios in PbPb and pp - collisions at sqrt(s[NN]) = 2.76 TeV",http://dx.doi.org/10.1103/PhysRevLett.113.262301 -Prompt photon production in double-Pomeron-exchange events at the LHC,http://dx.doi.org/10.1016/j.physletb.2016.04.012 -"Heavy flavour production in proton-lead and lead-lead collisions with - LHCb",http://dx.doi.org/10.1016/j.nuclphysa.2017.05.039 -"Survivor-complier effects in the presence of selection on treatment, - with application to a study of prompt ICU admission",http://arxiv.org/abs/1704.05706v2 -"Prompt neutrinos from atmospheric charm in the general-mass - variable-flavor-number scheme",http://dx.doi.org/10.1007/JHEP12(2017)021 -"Clustering of gamma-ray burst types in the Fermi-GBM catalogue: - indications of photosphere and synchrotron emissions during the prompt phase",http://dx.doi.org/10.1093/mnras/stx3106 -Hierarchical Neural Story Generation,http://arxiv.org/abs/1805.04833v1 -"The Prompt Emission of Gamma-Ray Bursts from the Wind of Newborn - Millisecond Magnetars: A Case Study of GRB 160804A",http://dx.doi.org/10.3847/1538-4357/aae52f -"Social learning of prescribing behavior can promote population optimum - of antibiotic use",http://arxiv.org/abs/1810.08284v1 -Geometrical scaling of prompt photons in heavy ion collisions,http://dx.doi.org/10.1051/epjconf/201920602002 -"How Good is Your Data? 
Investigating the Quality of Data Generated - During Security Incident Response Investigations",http://arxiv.org/abs/1901.03723v1 -"A General-relativistic Determination of the Threshold Mass to Prompt - Collapse in Binary Neutron Star Mergers",http://dx.doi.org/10.3847/2041-8213/ab0210 -"Engaging Audiences in Virtual Museums by Interactively Prompting Guiding - Questions",http://arxiv.org/abs/1902.03527v1 -"Triple prompt $J/ψ$ hadroproduction as a hard probe of - multiple-parton scatterings",http://dx.doi.org/10.1103/PhysRevLett.122.192002 -"Measurement of prompt photon production in $\sqrt{s_\mathrm{NN}} = 8.16$ - TeV $p$+Pb collisions with ATLAS",http://dx.doi.org/10.1016/j.physletb.2019.07.031 -"Evidence of two spectral breaks in the prompt emission of gamma ray - bursts",http://dx.doi.org/10.1051/0004-6361/201834987 -"Dielectron production in proton-proton collisions at $\sqrt{s} = 7$ TeV - with ALICE",http://dx.doi.org/10.3390/proceedings2019010024 -"Prompt, pre-equilibrium, and thermal photons in relativistic nuclear - collisions",http://dx.doi.org/10.1088/1361-6471/ab8d8c -"Measurement of $J/ψ$ production in association with a $W^\pm$ boson - with $pp$ data at 8 TeV",http://dx.doi.org/10.1007/JHEP01(2020)095 -"Improved constraints on parton distributions using LHCb, ALICE and HERA - heavy-flavour measurements and implications for the predictions for prompt - atmospheric-neutrino fluxes",http://dx.doi.org/10.1007/JHEP04(2020)118 -Synchrotron spectra of GRB prompt emission and pulsar wind nebulae,http://dx.doi.org/10.1088/1742-6596/1332/1/012019 -"Self-Supervised Contextual Language Representation of Radiology Reports - to Improve the Identification of Communication Urgency",http://arxiv.org/abs/1912.02703v1 -"A Multi Purpose and Large Scale Speech Corpus in Persian and English for - Speaker and Speech Recognition: the DeepMine Database",http://arxiv.org/abs/1912.03627v1 -"Synchrotron Gamma-Ray Emission Model of the Giant Outburst of Quasar 3C - 279 in 2015 June: Fast Reconnection or Stochastic Acceleration with - Electromagnetic Cascade?",http://dx.doi.org/10.3847/1538-4357/ab6a93 -"Distal edge determination precision for a multi-slat promptgamma camera: - a comprehensive simulation and optimization of the detection system",http://arxiv.org/abs/2009.14154v2 -The Resonance Hopping Effect in the Neptune-Planet Nine System,http://dx.doi.org/10.1088/1538-3873/abbd8a -Towards a Conversational Measure of Trust,http://arxiv.org/abs/2010.04885v1 -"Rossi-alpha Uncertainty Quantification by Analytic, Bootstrap, and - Sample Methods to Inform Fitting Best Practices",http://arxiv.org/abs/2010.07085v1 -Fast Rossi-alpha Measurements of Plutonium using Organic Scintillators,http://dx.doi.org/10.1051/epjconf/202124709025 -"A study on the isolated photon production in nuclear collisions at the - CERN-LHC energies",http://arxiv.org/abs/2011.08154v2 -WARP: Word-level Adversarial ReProgramming,http://arxiv.org/abs/2101.00121v2 -Prefix-Tuning: Optimizing Continuous Prompts for Generation,http://arxiv.org/abs/2101.00190v1 -"The Electric Field Dependence of Single Electron Emission in the PIXeY - Two-Phase Xenon Detector",http://dx.doi.org/10.1088/1748-0221/16/12/P12015 -Persistent Anti-Muslim Bias in Large Language Models,http://arxiv.org/abs/2101.05783v2 -"BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language - Generation",http://dx.doi.org/10.1145/3442188.3445924 -"Energy Dependent Calculations of Fission Product, Prompt, and Delayed - Neutron Yields for Neutron Induced Fission on 
$^{235}$U, $^{238}$U, and - $^{239}$Pu",http://arxiv.org/abs/2102.01015v1 -Dissecting the Energy Budget of a Gamma-Ray Burst Fireball,http://dx.doi.org/10.3847/2041-8213/abe6ab -"""Do this! Do that!, And nothing will happen"" Do specifications lead to - securely stored passwords?",http://arxiv.org/abs/2102.09790v1 -"Influence of non-statistical properties in nuclear structure on emission - of prompt fission neutrons",http://dx.doi.org/10.1103/PhysRevC.104.014611 -GPT3Mix: Leveraging Large-scale Language Models for Text Augmentation,http://arxiv.org/abs/2104.08826v2 -Langevin dynamics of heavy quarks in a soft-hard factorized approach,http://dx.doi.org/10.1140/epjc/s10052-021-09339-7 -Multimodal Few-Shot Learning with Frozen Language Models,http://arxiv.org/abs/2106.13884v2 -What's in a Measurement? Using GPT-3 on SemEval 2021 Task 8 -- MeasEval,http://arxiv.org/abs/2106.14720v1 -"Measurement of prompt open-charm production cross sections in - proton-proton collisions at $\sqrt{s} = $ 13 TeV",http://dx.doi.org/10.1007/JHEP11(2021)225 -Intersectional Bias in Causal Language Models,http://arxiv.org/abs/2107.07691v1 -"Intrinsic charm in the nucleon and forward production of charm: a new - constrain from IceCube Neutrino Observatory",http://arxiv.org/abs/2107.13852v1 -StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators,http://arxiv.org/abs/2108.00946v2 -"Probing charged lepton flavor violation with axion-like particles at - Belle II",http://dx.doi.org/10.1007/JHEP11(2021)218 -"Human-Robot Interaction via a Joint-Initiative Supervised Autonomy - (JISA) Framework",http://arxiv.org/abs/2109.04837v1 -AffectGAN: Affect-Based Generative Art Driven by Semantics,http://arxiv.org/abs/2109.14845v1 -"Improving Gender Fairness of Pre-Trained Language Models without - Catastrophic Forgetting",http://arxiv.org/abs/2110.05367v3 -Prompt-tuning in ASR systems for efficient domain-adaptation,http://arxiv.org/abs/2110.06502v2 -A Functional Abstraction of Typed Invocation Contexts,http://dx.doi.org/10.46298/lmcs-18(3:34)2022 -"mLUKE: The Power of Entity Representations in Multilingual Pretrained - Language Models",http://arxiv.org/abs/2110.08151v3 -Control Prefixes for Parameter-Efficient Text Generation,http://arxiv.org/abs/2110.08329v2 -"Multi-stage Clarification in Conversational AI: The case of - Question-Answering Dialogue Systems",http://arxiv.org/abs/2110.15235v1 -"MEmoBERT: Pre-training Model with Prompt-based Learning for Multimodal - Emotion Recognition",http://arxiv.org/abs/2111.00865v1 -Constraining new physics with SModelS version 2,http://dx.doi.org/10.1007/JHEP08(2022)068 -Text2Mesh: Text-Driven Neural Stylization for Meshes,http://arxiv.org/abs/2112.03221v1 -Early Warnings of Binary Neutron Star Coalescence using the SPIIR Search,http://dx.doi.org/10.3847/2041-8213/ac5687 -Zero-Shot Recommendation as Language Modeling,http://arxiv.org/abs/2112.04184v1 -Discourse-Aware Soft Prompting for Text Generation,http://arxiv.org/abs/2112.05717v2 -"Prompt Tuning GPT-2 language model for parameter-efficient domain - adaptation of ASR systems",http://arxiv.org/abs/2112.08718v3 -$\textit{Fermi}$-LAT realtime follow-ups of high-energy neutrino alerts,http://dx.doi.org/10.22323/1.395.0956 -"Toward Educator-focused Automated Scoring Systems for Reading and - Writing",http://arxiv.org/abs/2112.11973v1 -Ontology-enhanced Prompt-tuning for Few-shot Learning,http://dx.doi.org/10.1145/3485447.3511921 -GRB Prompt Emission with Anisotropic Electron Distribution,http://arxiv.org/abs/2201.13028v2 -The white 
dwarf binary merger model of GRB 170817A,http://dx.doi.org/10.1142/S0218271822300130 -"Generating Training Data with Language Models: Towards Zero-Shot - Language Understanding",http://arxiv.org/abs/2202.04538v2 -Text and Image Guided 3D Avatar Generation and Manipulation,http://arxiv.org/abs/2202.06079v1 -Heavy Neutrinos at the FCC-hh in the $U(1)_{B-L}$ Model,http://dx.doi.org/10.1103/PhysRevD.105.095043 -Quantifying Memorization Across Neural Language Models,http://arxiv.org/abs/2202.07646v3 -SGPT: GPT Sentence Embeddings for Semantic Search,http://arxiv.org/abs/2202.08904v5 -Pre-trained Token-replaced Detection Model as Few-shot Learner,http://arxiv.org/abs/2203.03235v2 -"InstructionNER: A Multi-Task Instruction-Based Generative Framework for - Few-shot NER",http://arxiv.org/abs/2203.03903v1 -"Improved Universal Sentence Embeddings with Prompt-based Contrastive - Learning and Energy-based Learning",http://arxiv.org/abs/2203.06875v2 -"Prompt electron and tau neutrinos and antineutrinos in the forward - region at the LHC",http://arxiv.org/abs/2203.07212v2 -Visual Prompt Tuning,http://arxiv.org/abs/2203.12119v2 -"Language Models that Seek for Knowledge: Modular Search & Generation for - Dialogue and Prompt Completion",http://arxiv.org/abs/2203.13224v2 -"Data Augmentation for Intent Classification with Off-the-shelf Large - Language Models",http://arxiv.org/abs/2204.01959v1 -Text2LIVE: Text-Driven Layered Image and Video Editing,http://arxiv.org/abs/2204.02491v2 -"Characterization of the GRB prompt fundamental plane using Fermi-GBM - data",http://dx.doi.org/10.1016/j.jheap.2022.06.003 -"Entities, Dates, and Languages: Zero-Shot on Historical Texts with T0",http://arxiv.org/abs/2204.05211v1 -"A Unified Multi-task Learning Framework for Multi-goal Conversational - Recommender Systems",http://arxiv.org/abs/2204.06923v1 -"VQGAN-CLIP: Open Domain Image Generation and Editing with Natural - Language Guidance",http://arxiv.org/abs/2204.08583v2 -"Super-Prompting: Utilizing Model-Independent Contextual Data to Reduce - Data Annotation Required in Visual Commonsense Tasks",http://arxiv.org/abs/2204.11922v1 -Mixed-effects transformers for hierarchical adaptation,http://arxiv.org/abs/2205.01749v2 -"Go Back in Time: Generating Flashbacks in Stories with Event Temporal - Prompts",http://arxiv.org/abs/2205.01898v1 -"KECP: Knowledge Enhanced Contrastive Prompting for Few-shot Extractive - Question Answering",http://arxiv.org/abs/2205.03071v1 -Vector Representations of Idioms in Conversational Systems,http://arxiv.org/abs/2205.03666v1 -Beyond Bounding Box: Multimodal Knowledge Learning for Object Detection,http://arxiv.org/abs/2205.04072v1 -ALLSH: Active Learning Guided by Local Sensitivity and Hardness,http://arxiv.org/abs/2205.04980v2 -Dynamic Prefix-Tuning for Generative Template-based Event Extraction,http://dx.doi.org/10.18653/v1/2022.acl-long.358 -"Weakly Supervised Text Classification using Supervision Signals from a - Language Model",http://arxiv.org/abs/2205.06604v1 -Rapid localization of gravitational wave hosts with FIGARO,http://dx.doi.org/10.1093/mnrasl/slac101 -"Measurement of antiproton production from antihyperon decays in pHe - collisions at $\sqrt{s_{NN}}=110$ GeV",http://dx.doi.org/10.1140/epjc/s10052-023-11673-x -StyLitGAN: Prompting StyleGAN to Produce New Illumination Conditions,http://arxiv.org/abs/2205.10351v2 -Plot Writing From Pre-Trained Language Models,http://arxiv.org/abs/2206.03021v1 -Ask Me What You Need: Product Retrieval using Knowledge from 
GPT-3,http://arxiv.org/abs/2207.02516v1 -"Meta-Learning the Difference: Preparing Large Language Models for - Efficient Adaptation",http://arxiv.org/abs/2207.03509v1 -Zero-Shot Video Captioning with Evolving Pseudo-Tokens,http://arxiv.org/abs/2207.11100v2 -Exploring CLIP for Assessing the Look and Feel of Images,http://arxiv.org/abs/2207.12396v2 -"Prompt cusp formation from the gravitational collapse of peaks in the - initial cosmological density field",http://dx.doi.org/10.1093/mnrasl/slac107 -"Toward Supporting Perceptual Complementarity in Human-AI Collaboration - via Reflection on Unobservables",http://arxiv.org/abs/2207.13834v2 -"Charm production: constraints to transport models and charm diffusion - coefficient with ALICE",http://arxiv.org/abs/2207.14154v1 -"Graphene Quantum Dot with Divacancy and Topological Defects: A Novel - Material for Promoting Prompt and Delayed Fluorescence of Tunable Wavelengths",http://dx.doi.org/10.1021/acs.jpcc.2c03833 -"Towards Open-vocabulary Scene Graph Generation with Prompt-based - Finetuning",http://arxiv.org/abs/2208.08165v3 -Injecting Image Details into CLIP's Feature Space,http://arxiv.org/abs/2208.14649v4 -CommunityLM: Probing Partisan Worldviews from Language Models,http://arxiv.org/abs/2209.07065v1 -"Enabling Conversational Interaction with Mobile UI using Large Language - Models",http://arxiv.org/abs/2209.08655v2 -Learning to Learn with Generative Models of Neural Network Checkpoints,http://arxiv.org/abs/2209.12892v1 -Creative Painting with Latent Diffusion Models,http://arxiv.org/abs/2209.14697v2 -"Healthcare serial killer or coincidence? Statistical issues in - investigation of suspected medical misconduct",http://arxiv.org/abs/2210.00962v1 -Distilling Task-specific Logical Rules from Large Pre-trained Models,http://arxiv.org/abs/2210.02768v1 -Recent ALICE results on quarkonium production in nuclear collisions,http://dx.doi.org/10.1088/1742-6596/2586/1/012007 -Language Models Are Poor Learners of Directional Inference,http://arxiv.org/abs/2210.04695v2 -Can Language Models Be Specific? 
How?,http://arxiv.org/abs/2210.05159v2 -CLIP also Understands Text: Prompting CLIP for Phrase Understanding,http://arxiv.org/abs/2210.05836v1 -"Zero-Shot Prompting for Implicit Intent Prediction and Recommendation - with Commonsense Reasoning",http://arxiv.org/abs/2210.05901v2 -"Towards visually prompted keyword localisation for zero-resource spoken - languages",http://arxiv.org/abs/2210.06229v1 -Re3: Generating Longer Stories With Recursive Reprompting and Revision,http://arxiv.org/abs/2210.06774v3 -"Prompt-based Connective Prediction Method for Fine-grained Implicit - Discourse Relation Recognition",http://arxiv.org/abs/2210.07032v2 -Bootstrapping Multilingual Semantic Parsers using Large Language Models,http://arxiv.org/abs/2210.07313v2 -DiffEdit: Diffusion-based semantic image editing with mask guidance,http://arxiv.org/abs/2210.11427v1 -Better Few-Shot Relation Extraction with Label Prompt Dropout,http://arxiv.org/abs/2210.13733v1 -"Weakly Supervised Data Augmentation Through Prompting for Dialogue - Understanding",http://arxiv.org/abs/2210.14169v3 -"Quantum Mechanics: Statistical Balance Prompts Caution in Assessing - Conceptual Implications",http://dx.doi.org/10.3390/e24111537 -"Preventing Verbatim Memorization in Language Models Gives a False Sense - of Privacy",http://arxiv.org/abs/2210.17546v3 -"PromptEHR: Conditional Electronic Healthcare Records Generation with - Prompt Learning",http://arxiv.org/abs/2211.01761v1 -"Probing neural language models for understanding of words of estimative - probability",http://arxiv.org/abs/2211.03358v2 -MACSum: Controllable Summarization with Mixed Attributes,http://arxiv.org/abs/2211.05041v2 -"Safe Latent Diffusion: Mitigating Inappropriate Degeneration in - Diffusion Models",http://arxiv.org/abs/2211.05105v4 -Prompt Learning for Domain Adaptation in Task-Oriented Dialogue,http://arxiv.org/abs/2211.05596v1 -Steps towards prompt-based creation of virtual worlds,http://arxiv.org/abs/2211.05875v1 -"Towards a Mathematics Formalisation Assistant using Large Language - Models",http://arxiv.org/abs/2211.07524v1 -"Adaptive PromptNet For Auxiliary Glioma Diagnosis without - Contrast-Enhanced MRI",http://arxiv.org/abs/2211.07966v1 -Teaching Algorithmic Reasoning via In-context Learning,http://arxiv.org/abs/2211.09066v1 -"Event characterization of dark bosons via exotic Higgs decays with final - states of displaced dimuons in high luminosity era of the LHC",http://arxiv.org/abs/2211.12573v1 -"Schrödinger's Bat: Diffusion Models Sometimes Generate Polysemous - Words in Superposition",http://arxiv.org/abs/2211.13095v1 -Peekaboo: Text to Image Diffusion Models are Zero-Shot Segmentors,http://arxiv.org/abs/2211.13224v2 -"Tapping the Potential of Coherence and Syntactic Features in Neural - Models for Automatic Essay Scoring",http://arxiv.org/abs/2211.13373v1 -"Action-GPT: Leveraging Large-scale Language Models for Improved and - Generalized Action Generation",http://arxiv.org/abs/2211.15603v3 -"Contrastive Novelty-Augmented Learning: Anticipating Outliers with Large - Language Models",http://arxiv.org/abs/2211.15718v2 -Prompted Opinion Summarization with GPT-3.5,http://arxiv.org/abs/2211.15914v2 -"Improving Few-Shot Performance of Language Models via Nearest Neighbor - Calibration",http://arxiv.org/abs/2212.02216v1 -"DiSTRICT: Dialogue State Tracking with Retriever Driven In-Context - Tuning",http://arxiv.org/abs/2212.02851v2 -iQuery: Instruments as Queries for Audio-Visual Sound Separation,http://arxiv.org/abs/2212.03814v2 -Discovering Latent Knowledge in 
Language Models Without Supervision,http://arxiv.org/abs/2212.03827v1 -PIVOT: Prompting for Video Continual Learning,http://arxiv.org/abs/2212.04842v2 -"Despite ""super-human"" performance, current LLMs are unsuited for - decisions about ethics and safety",http://arxiv.org/abs/2212.06295v1 -"Technical Report -- Competition Solution for Prompt Tuning using - Pretrained Language Model",http://arxiv.org/abs/2212.06369v3 -Understanding Zero-Shot Adversarial Robustness for Large-Scale Models,http://arxiv.org/abs/2212.07016v2 -Decoder Tuning: Efficient Language Understanding as Decoding,http://arxiv.org/abs/2212.08408v2 -"Nuclear modification of hard scattering processes in small systems at - PHENIX",http://arxiv.org/abs/2212.09425v1 -Reasoning with Language Model Prompting: A Survey,http://arxiv.org/abs/2212.09597v8 -"Unnatural Instructions: Tuning Language Models with (Almost) No Human - Labor",http://arxiv.org/abs/2212.09689v1 -"Understanding Stereotypes in Language Models: Towards Robust Measurement - and Zero-Shot Debiasing",http://arxiv.org/abs/2212.10678v1 -"What do LLMs Know about Financial Markets? A Case Study on Reddit Market - Sentiment Analysis",http://arxiv.org/abs/2212.11311v1 -Image To Tree with Recursive Prompting,http://arxiv.org/abs/2301.00447v1 -Neural Codec Language Models are Zero-Shot Text to Speech Synthesizers,http://arxiv.org/abs/2301.02111v1 -Prompting Neural Machine Translation with Translation Memories,http://arxiv.org/abs/2301.05380v2 -"Using Large Text-to-Image Models with Structured Prompts for Skin - Disease Identification: A Case Study",http://arxiv.org/abs/2301.07178v1 -"Visual Writing Prompts: Character-Grounded Story Generation with Curated - Image Sequences",http://arxiv.org/abs/2301.08571v1 -"Prompt Federated Learning for Weather Forecasting: Toward Foundation - Models on Meteorological Data",http://arxiv.org/abs/2301.09152v2 -"Parameter-Efficient Low-Resource Dialogue State Tracking by Prompt - Tuning",http://arxiv.org/abs/2301.10915v2 -"A Discerning Several Thousand Judgments: GPT-3 Rates the Article + - Adjective + Numeral + Noun Construction",http://arxiv.org/abs/2301.12564v2 -Large Language Models Can Be Easily Distracted by Irrelevant Context,http://arxiv.org/abs/2302.00093v3 -"Commonsense-Aware Prompting for Controllable Empathetic Dialogue - Generation",http://arxiv.org/abs/2302.01441v1 -"Speak, Read and Prompt: High-Fidelity Text-to-Speech with Minimal - Supervision",http://arxiv.org/abs/2302.03540v1 -"CatAlyst: Domain-Extensible Intervention for Preventing Task - Procrastination Using Large Generative Models",http://dx.doi.org/10.1145/3544548.3581133 -"GPT4MIA: Utilizing Generative Pre-trained Transformer (GPT-3) as A - Plug-and-Play Transductive Model for Medical Image Analysis",http://arxiv.org/abs/2302.08722v3 -Affect-Conditioned Image Generation,http://arxiv.org/abs/2302.09742v1 -"Can ChatGPT Understand Too? 
A Comparative Study on ChatGPT and - Fine-tuned BERT",http://arxiv.org/abs/2302.10198v2 -Zero-Shot Information Extraction via Chatting with ChatGPT,http://arxiv.org/abs/2302.10205v1 -Region-Aware Diffusion for Zero-shot Text-driven Image Editing,http://arxiv.org/abs/2302.11797v1 -Aligning Text-to-Image Models using Human Feedback,http://arxiv.org/abs/2302.12192v1 -"Testing AI performance on less frequent aspects of language reveals - insensitivity to underlying meaning",http://arxiv.org/abs/2302.12313v2 -Prompt-based Learning for Text Readability Assessment,http://arxiv.org/abs/2302.13139v1 -Swift/UVOT: 18 Years of Long GRB Discoveries and Advances,http://dx.doi.org/10.3390/universe9030113 -Competence-Based Analysis of Language Models,http://arxiv.org/abs/2303.00333v2 -Mixture of Soft Prompts for Controllable Data Generation,http://arxiv.org/abs/2303.01580v2 -"The study on the structure of exotic states $χ_{c 1}(3872)$ via - beauty-hadron decays in $pp$ collisions at $\sqrt{s}=8\,\mathrm{TeV}$",http://dx.doi.org/10.1103/PhysRevD.107.114022 -"Speak Foreign Languages with Your Own Voice: Cross-Lingual Neural Codec - Language Modeling",http://arxiv.org/abs/2303.03926v1 -"Sample Efficient Multimodal Semantic Augmentation for Incremental - Summarization",http://arxiv.org/abs/2303.04361v1 -"Exploiting the Textual Potential from Vision-Language Pre-training for - Text-based Person Search",http://arxiv.org/abs/2303.04497v1 -MathPrompter: Mathematical Reasoning using Large Language Models,http://arxiv.org/abs/2303.05398v1 -"DeltaEdit: Exploring Text-free Training for Text-Driven Image - Manipulation",http://arxiv.org/abs/2303.06285v1 -ODIN: On-demand Data Formulation to Mitigate Dataset Lock-in,http://arxiv.org/abs/2303.06832v2 -Architext: Language-Driven Generative Architecture Design,http://arxiv.org/abs/2303.07519v3 -A Theory of Emergent In-Context Learning as Implicit Structure Induction,http://arxiv.org/abs/2303.07971v1 -"SegPrompt: Using Segmentation Map as a Better Prompt to Finetune Deep - Models for Kidney Stone Classification",http://arxiv.org/abs/2303.08303v1 -"Large Language Model Is Not a Good Few-shot Information Extractor, but a - Good Reranker for Hard Samples!",http://arxiv.org/abs/2303.08559v2 -"Designing Participatory AI: Creative Professionals' Worries and - Expectations about Generative AI",http://arxiv.org/abs/2303.08931v1 -P+: Extended Textual Conditioning in Text-to-Image Generation,http://arxiv.org/abs/2303.09522v3 -"Generate labeled training data using Prompt Programming and GPT-3. An - example of Big Five Personality Classification",http://arxiv.org/abs/2303.12279v1 -"Enabling Calibration In The Zero-Shot Inference of Large Vision-Language - Models",http://arxiv.org/abs/2303.12748v4 -GesGPT: Speech Gesture Synthesis With Text Parsing from GPT,http://arxiv.org/abs/2303.13013v1 -"Exploring Visual Prompts for Whole Slide Image Classification with - Multiple Instance Learning",http://arxiv.org/abs/2303.13122v1 -Modular Retrieval for Generalization and Interpretation,http://arxiv.org/abs/2303.13419v1 -"Evaluating GPT-3.5 and GPT-4 Models on Brazilian University Admission - Exams",http://arxiv.org/abs/2303.17003v1 -Yes but.. 
Can ChatGPT Identify Entities in Historical Documents?,http://arxiv.org/abs/2303.17322v1 -Automatic Geo-alignment of Artwork in Children's Story Books,http://arxiv.org/abs/2304.01204v1 -"Multilingual Machine Translation with Large Language Models: Empirical - Results and Analysis",http://arxiv.org/abs/2304.04675v2 -Zero-shot Temporal Relation Extraction with ChatGPT,http://arxiv.org/abs/2304.05454v1 -"HiPrompt: Few-Shot Biomedical Knowledge Fusion via Hierarchy-Oriented - Prompting",http://dx.doi.org/10.1145/3539618.3591997 -"Associated production of $J/ψ$ plus $Z/W$ in the improved color - evaporation model using the parton Reggeization approach",http://arxiv.org/abs/2304.07481v1 -CLIP-Lung: Textual Knowledge-Guided Lung Nodule Malignancy Prediction,http://dx.doi.org/10.1007/978-3-031-43990-2_38 -"UPGPT: Universal Diffusion Model for Person Image Generation, Editing - and Pose Transfer",http://arxiv.org/abs/2304.08870v2 -"Learning CLIP Guided Visual-Text Fusion Transformer for Video-based - Pedestrian Attribute Recognition",http://arxiv.org/abs/2304.10091v1 -"Which Factors Predict the Chat Experience of a Natural Language - Generation Dialogue Service?",http://dx.doi.org/10.1145/3544549.3583940 -"Testing the Reliability of ChatGPT for Text Annotation and - Classification: A Cautionary Remark",http://arxiv.org/abs/2304.11085v1 -"Federated Prompting and Chain-of-Thought Reasoning for Improving LLMs - Answering",http://arxiv.org/abs/2304.13911v2 -"Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image - Generation",http://arxiv.org/abs/2305.01569v1 -Few-shot Event Detection: An Empirical Study and a Unified View,http://arxiv.org/abs/2305.01901v2 -A Cross-Linguistic Analysis of Intertemporal Preferences in GPT-3.5,http://arxiv.org/abs/2305.02531v4 -Verify-and-Edit: A Knowledge-Enhanced Chain-of-Thought Framework,http://arxiv.org/abs/2305.03268v1 -"Towards Segment Anything Model (SAM) for Medical Image Segmentation: A - Survey",http://arxiv.org/abs/2305.03678v3 -"TMD parton showers for associated $γ$+jet production in - electron-proton collisions at high energies",http://dx.doi.org/10.1103/PhysRevD.108.014022 -"Diffusion-NAT: Self-Prompting Discrete Diffusion for Non-Autoregressive - Text Generation",http://arxiv.org/abs/2305.04044v1 -"LMPT: Prompt Tuning with Class-Specific Embedding Loss for Long-tailed - Multi-Label Visual Recognition",http://arxiv.org/abs/2305.04536v1 -Text-guided High-definition Consistency Texture Model,http://arxiv.org/abs/2305.05901v1 -"Generating medically-accurate summaries of patient-provider dialogue: A - multi-stage approach using large language models",http://arxiv.org/abs/2305.05982v1 -"Musketeer (All for One, and One for All): A Generalist Vision-Language - Model with Task Explanation Prompts",http://arxiv.org/abs/2305.07019v1 -"Knowledge distillation with Segment Anything (SAM) model for Planetary - Geological Mapping",http://arxiv.org/abs/2305.07586v2 -Soft Prompt Decoding for Multilingual Dense Retrieval,http://dx.doi.org/10.1145/3539618.3591769 -SGP-TOD: Building Task Bots Effortlessly via Schema-Guided LLM Prompting,http://arxiv.org/abs/2305.09067v1 -"Make-An-Animation: Large-Scale Text-conditional 3D Human Motion - Generation",http://arxiv.org/abs/2305.09662v1 -"When Gradient Descent Meets Derivative-Free Optimization: A Match Made - in Black-Box Scenario",http://arxiv.org/abs/2305.10013v1 -"Searching for Needles in a Haystack: On the Role of Incidental - Bilingualism in PaLM's Translation Capability",http://arxiv.org/abs/2305.10266v1 
-Discriminative Diffusion Models as Few-shot Vision and Language Learners,http://arxiv.org/abs/2305.10722v1 -Large Language Models can be Guided to Evade AI-Generated Text Detection,http://arxiv.org/abs/2305.10847v4 -LDM3D: Latent Diffusion Model for 3D,http://arxiv.org/abs/2305.10853v2 -"Generalized Planning in PDDL Domains with Pretrained Large Language - Models",http://arxiv.org/abs/2305.11014v1 -Reasoning Implicit Sentiment with Chain-of-Thought Prompting,http://arxiv.org/abs/2305.11255v4 -AutoTrial: Prompting Language Models for Clinical Trial Design,http://arxiv.org/abs/2305.11366v2 -"Graphologue: Exploring Large Language Model Responses with Interactive - Diagrams",http://dx.doi.org/10.1145/3586183.3606737 -Enhancing Few-shot NER with Prompt Ordering based Data Augmentation,http://arxiv.org/abs/2305.11791v1 -"Logic-LM: Empowering Large Language Models with Symbolic Solvers for - Faithful Logical Reasoning",http://arxiv.org/abs/2305.12295v2 -"PiVe: Prompting with Iterative Verification Improving Graph-based - Generative Capability of LLMs",http://arxiv.org/abs/2305.12392v1 -"Model-Generated Pretraining Signals Improves Zero-Shot Generalization of - Text-to-Text Transformers",http://arxiv.org/abs/2305.12567v1 -"UVOSAM: A Mask-free Paradigm for Unsupervised Video Object Segmentation - via Segment Anything Model",http://arxiv.org/abs/2305.12659v1 -Learning Interpretable Style Embeddings via Prompting LLMs,http://arxiv.org/abs/2305.12696v2 -"Enhancing Small Medical Learners with Privacy-preserving Contextual - Prompting",http://arxiv.org/abs/2305.12723v1 -"This Prompt is Measuring : Evaluating Bias Evaluation in Language - Models",http://arxiv.org/abs/2305.12757v1 -Training Diffusion Models with Reinforcement Learning,http://arxiv.org/abs/2305.13301v3 -"MaskCL: Semantic Mask-Driven Contrastive Learning for Unsupervised - Person Re-Identification with Clothes Change",http://arxiv.org/abs/2305.13600v1 -"Prompting and Evaluating Large Language Models for Proactive Dialogues: - Clarification, Target-guided, and Non-collaboration",http://arxiv.org/abs/2305.13626v2 -"Prompt-Based Monte-Carlo Tree Search for Goal-Oriented Dialogue Policy - Planning",http://arxiv.org/abs/2305.13660v2 -"LLM-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain - Conversations with Large Language Models",http://arxiv.org/abs/2305.13711v1 -"Self-Critique Prompting with Large Language Models for Inductive - Instructions",http://arxiv.org/abs/2305.13733v1 -"Enhancing Black-Box Few-Shot Text Classification with Prompt-Based Data - Augmentation",http://arxiv.org/abs/2305.13785v2 -Multi-Granularity Prompts for Topic Shift Detection in Dialogue,http://arxiv.org/abs/2305.14006v1 -Multilingual Large Language Models Are Not (Yet) Code-Switchers,http://arxiv.org/abs/2305.14235v2 -"Improving Factuality and Reasoning in Language Models through Multiagent - Debate",http://arxiv.org/abs/2305.14325v1 -Empowering LLM-based Machine Translation with Cultural Awareness,http://arxiv.org/abs/2305.14328v1 -"Is a Prestigious Job the same as a Prestigious Country? 
A Case Study on - Multilingual Sentence Embeddings and European Countries",http://arxiv.org/abs/2305.14482v1 -EXnet: Efficient In-context Learning for Data-less Text classification,http://arxiv.org/abs/2305.14622v1 -"ExpertPrompting: Instructing Large Language Models to be Distinguished - Experts",http://arxiv.org/abs/2305.14688v1 -ClusterLLM: Large Language Models as a Guide for Text Clustering,http://arxiv.org/abs/2305.14871v1 -Aligning Language Models to User Opinions,http://arxiv.org/abs/2305.14929v1 -"PESCO: Prompt-enhanced Self Contrastive Learning for Zero-shot Text - Classification",http://arxiv.org/abs/2305.14963v1 -"Tricking LLMs into Disobedience: Understanding, Analyzing, and - Preventing Jailbreaks",http://arxiv.org/abs/2305.14965v1 -"Text-Augmented Open Knowledge Graph Completion via Pre-Trained Language - Models",http://arxiv.org/abs/2305.15597v1 -Custom-Edit: Text-Guided Image Editing with Customized Diffusion Models,http://arxiv.org/abs/2305.15779v1 -"Self-contradictory Hallucinations of Large Language Models: Evaluation, - Detection and Mitigation",http://arxiv.org/abs/2305.15852v2 -"DiffCLIP: Leveraging Stable Diffusion for Language Grounded 3D - Classification",http://arxiv.org/abs/2305.15957v2 -"Negative-prompt Inversion: Fast Image Inversion for Editing with - Text-guided Diffusion Models",http://arxiv.org/abs/2305.16807v1 -"A Mechanism for Sample-Efficient In-Context Learning for Sparse - Retrieval Tasks",http://arxiv.org/abs/2305.17040v1 -Zero- and Few-Shot Event Detection via Prompt-Based Meta Learning,http://arxiv.org/abs/2305.17373v1 -"FoPro-KD: Fourier Prompted Effective Knowledge Distillation for - Long-Tailed Medical Image Recognition",http://arxiv.org/abs/2305.17421v1 -Prompt-Guided Retrieval Augmentation for Non-Knowledge-Intensive Tasks,http://arxiv.org/abs/2305.17653v1 -"Syntax and Semantics Meet in the ""Middle"": Probing the Syntax-Semantics - Interface of LMs Through Agentivity",http://arxiv.org/abs/2305.18185v2 -"LLMs Can Understand Encrypted Prompt: Towards Privacy-Computing Friendly - Transformers",http://arxiv.org/abs/2305.18396v2 -"Short Answer Grading Using One-shot Prompting and Text Similarity - Scoring Model",http://arxiv.org/abs/2305.18638v1 -Strategic Reasoning with Language Models,http://arxiv.org/abs/2305.19165v1 -"ScoNe: Benchmarking Negation Reasoning in Language Models With - Fine-Tuning and In-Context Learning",http://arxiv.org/abs/2305.19426v1 -"PromptStyle: Controllable Style Transfer for Text-to-Speech with Natural - Language Descriptions",http://arxiv.org/abs/2305.19522v2 -Red Teaming Language Model Detectors with Language Models,http://arxiv.org/abs/2305.19713v2 -An Invariant Learning Characterization of Controlled Text Generation,http://arxiv.org/abs/2306.00198v1 -"FDNeRF: Semantics-Driven Face Reconstruction, Prompt Editing and - Relighting with Diffusion Models",http://arxiv.org/abs/2306.00783v1 -"StableRep: Synthetic Images from Text-to-Image Models Make Strong Visual - Representation Learners",http://arxiv.org/abs/2306.00984v1 -Systematic Evaluation of GPT-3 for Zero-Shot Personality Estimation,http://arxiv.org/abs/2306.01183v1 -"Analyzing Syntactic Generalization Capacity of Pre-trained Language - Models on Japanese Honorific Conversion",http://arxiv.org/abs/2306.03055v1 -"Sequential Monte Carlo Steering of Large Language Models using - Probabilistic Programs",http://arxiv.org/abs/2306.03081v1 -"Composition and Deformance: Measuring Imageability with a Text-to-Image - Model",http://arxiv.org/abs/2306.03168v1 -"Triggering 
Multi-Hop Reasoning for Question Answering in Language Models - using Soft Prompts and Random Walks",http://arxiv.org/abs/2306.04009v1 -"An Empirical Analysis of Parameter-Efficient Methods for Debiasing - Pre-Trained Language Models",http://arxiv.org/abs/2306.04067v1 -World Models for Math Story Problems,http://arxiv.org/abs/2306.04347v1 -STEPS: A Benchmark for Order Reasoning in Sequential Tasks,http://arxiv.org/abs/2306.04441v1 -"Enhancing In-Context Learning with Answer Feedback for Multi-Span - Question Answering",http://arxiv.org/abs/2306.04508v1 -Matting Anything,http://arxiv.org/abs/2306.05399v1 -"How Does Fine-Tuning Impact Out-of-Distribution Detection for - Vision-Language Models?",http://arxiv.org/abs/2306.06048v1 -Virtual Node Tuning for Few-shot Node Classification,http://arxiv.org/abs/2306.06063v1 -Human-in-the-Loop through Chain-of-Thought,http://arxiv.org/abs/2306.07932v2 -Safeguarding Crowdsourcing Surveys from ChatGPT with Prompt Injection,http://arxiv.org/abs/2306.08833v1 -"Seen to Unseen: Exploring Compositional Generalization of - Multi-Attribute Controllable Dialogue Generation",http://arxiv.org/abs/2306.10317v1 -"Advancements of $γ$-ray spectroscopy of isotopically identified - fission fragments with AGATA and VAMOS++",http://dx.doi.org/10.1140/epja/s10050-023-01053-0 -"GUMSum: Multi-Genre Data and Evaluation for English Abstractive - Summarization",http://arxiv.org/abs/2306.11256v1 -"Symbolic Chain-of-Thought Distillation: Small Models Can Also ""Think"" - Step-by-Step",http://arxiv.org/abs/2306.14050v1 -Language models are weak learners,http://arxiv.org/abs/2306.14101v1 -"Sentence-level Event Detection without Triggers via Prompt Learning and - Machine Reading Comprehension",http://arxiv.org/abs/2306.14176v1 -"Rotation of Polarization Angle in Gamma-Ray Burst Prompt - Phase$-$\uppercase\expandafter{\romannumeral2}. 
The Influence of The - Parameters",http://arxiv.org/abs/2306.16618v1 -Rotation of polarization angle in gamma-ray burst prompt phase,http://dx.doi.org/10.3847/1538-4357/acba0c -"SummQA at MEDIQA-Chat 2023:In-Context Learning with GPT-4 for Medical - Summarization",http://arxiv.org/abs/2306.17384v1 -Stay on topic with Classifier-Free Guidance,http://arxiv.org/abs/2306.17806v1 -Improving Multitask Retrieval by Promoting Task Specialization,http://arxiv.org/abs/2307.00342v1 -ProPILE: Probing Privacy Leakage in Large Language Models,http://arxiv.org/abs/2307.01881v1 -"Ariadne's Thread:Using Text Prompts to Improve Segmentation of Infected - Areas from Chest X-ray images",http://arxiv.org/abs/2307.03942v1 -"Augmenters at SemEval-2023 Task 1: Enhancing CLIP in Handling - Compositionality and Ambiguity for Zero-Shot Visual WSD through Prompt - Augmentation and Text-To-Image Diffusion",http://arxiv.org/abs/2307.05564v1 -"$\mathrm{SAM^{Med}}$: A medical image annotation framework based on - large vision model",http://arxiv.org/abs/2307.05617v2 -"Personalization for BERT-based Discriminative Speech Recognition - Rescoring",http://arxiv.org/abs/2307.06832v1 -Improving Zero-Shot Generalization for CLIP with Synthesized Prompts,http://arxiv.org/abs/2307.07397v1 -Leveraging Large Language Models to Generate Answer Set Programs,http://arxiv.org/abs/2307.07699v1 -"Uncertainty-aware State Space Transformer for Egocentric 3D Hand - Trajectory Forecasting",http://arxiv.org/abs/2307.08243v2 -"Mitigating Bias in Conversations: A Hate Speech Classifier and Debiaser - with Prompts",http://arxiv.org/abs/2307.10213v1 -"Abusing Images and Sounds for Indirect Instruction Injection in - Multi-Modal LLMs",http://arxiv.org/abs/2307.10490v4 -Precision measurements of jet and photon production at ATLAS,http://arxiv.org/abs/2307.13096v1 -Fashion Matrix: Editing Photos by Just Talking,http://arxiv.org/abs/2307.13240v1 -Low-Parameter Federated Learning with Large Language Models,http://arxiv.org/abs/2307.13896v1 -"Constraining the jet composition of GRB 221009A with the prompt TeV - emission limit",http://arxiv.org/abs/2307.14113v1 -Exploiting the Potential of Seq2Seq Models as Robust Few-Shot Learners,http://arxiv.org/abs/2307.14856v1 -"PromptStyler: Prompt-driven Style Generation for Source-free Domain - Generalization",http://arxiv.org/abs/2307.15199v2 -Camoscio: an Italian Instruction-tuned LLaMA,http://arxiv.org/abs/2307.16456v1 -"Transferable Decoding with Visual Entities for Zero-Shot Image - Captioning",http://arxiv.org/abs/2307.16525v1 -Scaling Sentence Embeddings with Large Language Models,http://arxiv.org/abs/2307.16645v1 -"Data Augmentation for Neural Machine Translation using Generative - Language Model",http://arxiv.org/abs/2307.16833v1 -The Bias Amplification Paradox in Text-to-Image Generation,http://arxiv.org/abs/2308.00755v1 -Reasoning in Large Language Models Through Symbolic Math Word Problems,http://arxiv.org/abs/2308.01906v1 -Is GPT-4 a reliable rater? 
[... deleted "Title",URL rows of data/prompt_engineering_arxiv.csv elided; the file's removal is recorded in the diffstat above ...]
LHC,http://dx.doi.org/10.1016/S0370-2693(00)00119-2 -Higher-Order QCD Corrections in Prompt Photon Production,http://dx.doi.org/10.1103/PhysRevLett.84.4296 -Current Issues In Quarkonium Production,http://arxiv.org/abs/hep-ph/0006203v1 -Polarized J/psi from chi_{cJ} and psi' Decays at the Tevatron,http://dx.doi.org/10.1103/PhysRevD.62.114027 -Tests of QCD: Summary of DIS 2000,http://arxiv.org/abs/hep-ph/0008154v1 -Constraints on the Proton's Gluon Density from Lepton-Pair Production,http://arxiv.org/abs/hep-ph/0009257v1 -Threshold Resummation and Rapidity Dependence,http://dx.doi.org/10.1088/1126-6708/2001/02/016 -"Next to leading order predictions for pi_0 gamma and pi_0 pi_0 - production at the LHC",http://arxiv.org/abs/hep-ph/0105149v1 -Soft Photon Spectrum in Orthopositronium and Vector Quarkonium Decays,http://dx.doi.org/10.1016/S0370-2693(01)01406-X -"A NLO calculation of the large p_T photon + photon -> photon + jet cross - section",http://dx.doi.org/10.1007/s100520200907 -Accessing Transversity in Double-Spin Asymmetries at the BNL-RHIC,http://dx.doi.org/10.1103/PhysRevD.65.114024 -Heavy Quarkonium Production with Polarized Hadrons and Photons,http://arxiv.org/abs/hep-ph/0307384v1 -LSND anomaly from CPT violation in four-neutrino models,http://dx.doi.org/10.1016/j.physletb.2003.10.004 -Double-transverse spin asymmetries at NLO,http://dx.doi.org/10.1063/1.1664317 -Dijet rates with symmetric E_t cuts,http://dx.doi.org/10.1088/1126-6708/2004/01/027 -Systematics of Exotic Cascade Decays,http://dx.doi.org/10.1103/PhysRevD.69.114017 -"Angular distributions in $J/ψ(ρ,ω)$ states near threshold",http://dx.doi.org/10.1103/PhysRevD.70.094023 -Neutrinos: Key to New Physics,http://dx.doi.org/10.1016/j.nuclphysbps.2004.10.081 -Bottomonium Production in the Regge Limit of QCD,http://dx.doi.org/10.1103/PhysRevD.74.014024 -"Photon-tagged correlations in heavy-ion collisions: kinematic - requirements and a case study",http://dx.doi.org/10.1088/0954-3899/34/8/S151 -The other topological twisting of N=4 Yang-Mills,http://dx.doi.org/10.1016/0550-3213(95)00389-A -Closed Form Effective Conformal Anomaly Actions in D$\geq$4,http://dx.doi.org/10.1016/S0370-2693(00)00315-4 -A remark on perturbations of sine and cosine sums,http://arxiv.org/abs/math/9912149v1 -"Reflected planar Brownian motions, intertwining relations and crossing - probabilities",http://dx.doi.org/10.1016/j.anihpb.2003.11.005 -Nearly integrable SO(3)-structures on 5-dimensional Lie groups,http://arxiv.org/abs/math/0607392v1 -Energy localization in two chaotically coupled systems,http://dx.doi.org/10.1103/PhysRevE.71.036214 -"Origins of Intermediate Velocity Particle Production in Heavy Ion - Reactions",http://dx.doi.org/10.1103/PhysRevC.65.061604 -High $p_T$ correlations of $γ$ and charged hadrons at RHIC,http://dx.doi.org/10.1556/APH.25.2006.2-4.26 -Isotope thermometery in nuclear multifragmentation,http://dx.doi.org/10.1103/PhysRevC.59.832 -Prompt muon-induced fission: a probe for nuclear energy dissipation,http://arxiv.org/abs/nucl-th/9905006v1 -"Transverse Momentum Distribution of J/Psi's in Pb-Pb Collisions and - Gluon Rescattering",http://dx.doi.org/10.1140/epja/i2001-10271-3 -Where is the charm quark energy loss at RHIC?,http://dx.doi.org/10.1016/S0370-2693(03)00327-7 -Testing the Sprint Curve Model using the 150m Bailey-Johnson Showdown,http://arxiv.org/abs/physics/9706023v1 -"How Good Can We Get? Using mathematical models to predict the future of - athletics",http://arxiv.org/abs/physics/9803034v1 -"""Information, please... 
?""",http://arxiv.org/abs/physics/0408033v1 -"Splashing of liquids: interplay of surface roughness with surrounding - gas",http://dx.doi.org/10.1103/PhysRevE.76.066311 -Scientific reticence and sea level rise,http://dx.doi.org/10.1088/1748-9326/2/2/024002 -Pre- and post-selected ensembles and time-symmetry in quantum mechanics,http://dx.doi.org/10.1007/s10773-008-9876-x -What made GRBs 060505 and 060614?,http://arxiv.org/abs/0704.1421v1 -Measuring Bremsstrahlung Photons in 200 GeV p+p Collisions,http://dx.doi.org/10.1142/S0218301307007659 -"Direct photon spectra in Pb-Pb at sqrt(s) = 5.5 TeV: hydrodynamics+pQCD - predictions",http://arxiv.org/abs/0707.2357v2 -"Spectral evolution of GRB 060904A observed with Swift and Suzaku -- - Possibility of Inefficient Electron Acceleration",http://dx.doi.org/10.1093/pasj/60.sp1.S351 -Simulation of an All-Silicon Tracker,http://arxiv.org/abs/0709.0758v1 -Charmonium Polarization in High Energy Collisions,http://arxiv.org/abs/0709.2259v1 -"High-energy Atmospheric Muon Flux Expected at India-Based Neutrino - Observatory",http://dx.doi.org/10.1142/S0217751X08041268 -Calorimeter Assisted Tracking Algorithm for SiD,http://arxiv.org/abs/0711.0134v1 -"Measurement of the KS->gg branching ratio using a pure KS beam with the - KLOE detector",http://dx.doi.org/10.1088/1126-6708/2008/05/051 -"Orbital Stability of Planets in Binary Systems: A New Look at Old - Results",http://dx.doi.org/10.1017/S1743921308017043 -A New Scenario on X-ray Shallow Decay of Gamma-ray Bursts,http://arxiv.org/abs/0801.0855v1 -Feasibility of Portfolio Optimization under Coherent Risk Measures,http://arxiv.org/abs/0803.2283v3 -"Swift uncovers that SAX J0840.7+2248 is not an X-ray Binary, but - BeppoSAX X-ray Rich GRB 980429",http://dx.doi.org/10.1063/1.2943430 -On the Decay of Unparticles,http://dx.doi.org/10.1016/j.physletb.2008.11.069 -Conjugation spaces and edges of compatible torus actions,http://arxiv.org/abs/0807.3289v1 -The Dynamical Dipole Mode in Fusion Reactions with Exotic Nuclear Beams,http://dx.doi.org/10.1103/PhysRevC.79.021603 -A radiation-like era before inflation,http://dx.doi.org/10.1088/1475-7516/2008/10/027 -Supersymmetry Searches at the LHC,http://arxiv.org/abs/0808.2934v1 -Recent Developments in Heavy-Quarkonium Phenomenology,http://arxiv.org/abs/0809.0122v1 -"Direct photon production at HERA, the Tevatron and the LHC",http://arxiv.org/abs/0809.0846v2 -Stage analysis of delayed-choice and quantum eraser experiments,http://arxiv.org/abs/0810.3826v1 -"What did we learn from the extremely bright gamma ray bursts 990123 and - 080319B?",http://arxiv.org/abs/0812.3340v2 -Muons in IceCube,http://dx.doi.org/10.1016/j.nuclphysbps.2009.09.050 -Production of psi(2S) Mesons in ppbar Collisions at 1.96 TeV,http://dx.doi.org/10.1103/PhysRevD.80.031103 -"Is the X(3872) Production Cross Section at Tevatron Compatible with a - Hadron Molecule Interpretation?",http://dx.doi.org/10.1103/PhysRevLett.103.162001 -Dark gamma-ray bursts: possible role of multiphoton processes,http://arxiv.org/abs/0907.4613v1 -"Diffractive quarkonium production in association with a photon at the - LHC",http://dx.doi.org/10.1016/j.physletb.2009.12.025 -Advances on GRB as cosmological tools,http://dx.doi.org/10.1063/1.3141613 -Understanding the proton's spin structure,http://dx.doi.org/10.1088/0954-3899/37/2/023101 -Full tomography from compatible measurements,http://dx.doi.org/10.1103/PhysRevLett.103.250402 -The proton gyromagnetic g-factor: an electromagnetic model,http://arxiv.org/abs/0912.4962v1 -Fermi and 
Swift observations of the bright short GRB 090510,http://arxiv.org/abs/1002.2863v1 -X(3872) as a 1D2 charmonium state,http://dx.doi.org/10.1103/PhysRevD.82.097502 -Multiplicity distributions and long range rapidity correlations,http://dx.doi.org/10.1016/j.nuclphysa.2010.09.002 -Precision measurements with jets and particles at HERA,http://dx.doi.org/10.1016/j.nuclphysbps.2010.10.004 -"Non-Separable, Quasiconcave Utilities are Easy -- in a Perfect Price - Discrimination Market Model",http://arxiv.org/abs/1010.4281v1 -Count response model for the CMB spots,http://arxiv.org/abs/1010.5972v1 -Gluon correlations in the glasma,http://dx.doi.org/10.1143/PTPS.187.134 -Fission involves a new state of nuclear matter,http://arxiv.org/abs/1101.1819v1 -Fermi matrix element with isospin breaking,http://dx.doi.org/10.1016/j.physletb.2011.01.005 -Search for New Physics with Rare Heavy Flavour Decays at LHCb,http://arxiv.org/abs/1101.4838v1 -"Discoveries enabled by Multi-wavelength Afterglow Observations of - Gamma-Ray Bursts",http://arxiv.org/abs/1102.0472v1 -"Direct photons at low transverse momentum -- a QGP signal in pp - collisions at LHC",http://dx.doi.org/10.1103/PhysRevLett.106.242301 -"Effect of nano-scale surface roughness on transverse energy spread from - GaAs photocathodes",http://dx.doi.org/10.1063/1.3559895 -Sibyll with charm,http://arxiv.org/abs/1102.5705v1 -"Constraints on Lorentz Invariance Violation using INTEGRAL/IBIS - observations of GRB041219A",http://dx.doi.org/10.1103/PhysRevD.83.121301 -"Quenching of hadron and photon spectra in heavy-ion collisions from RHIC - to LHC",http://dx.doi.org/10.1088/0954-3899/38/12/124017 -"Measurement of the Production Cross Section of Pairs of Isolated Photons - with CMS",http://arxiv.org/abs/1109.3310v1 -"The curvature tensor of (\ka,μ,ν)-contact metric manifolds",http://dx.doi.org/10.1007/s00605-015-0762-3 -"Metastable Staus: Reconstructing Non-Prompt Tracks at the ILC with the - SiD Detector",http://arxiv.org/abs/1112.4825v1 -Interfacial Phenomena and Natural Local Time,http://arxiv.org/abs/1204.0271v1 -Observation and Quantum Objectivity,http://dx.doi.org/10.1086/671106 -Some Affine Invariants Revisited,http://arxiv.org/abs/1208.0783v1 -On low order mimetic finite difference methods,http://arxiv.org/abs/1208.4213v1 -Gamma Ray Bursts,http://dx.doi.org/10.1126/science.1216793 -Measurement of CP Violation in $D^0/\bar{D}^0$,http://dx.doi.org/10.1142/S021773231230039X -Rapid Mass Segregation in Massive Star Clusters,http://arxiv.org/abs/1210.8200v1 -Modified Higgs couplings and unitarity violation,http://dx.doi.org/10.1103/PhysRevD.87.011702 -Gamma Ray Bursts in the Swift-Fermi Era,http://dx.doi.org/10.1007/s11467-013-0282-3 -Photocurrent Response of Topological Insulator Surface States,http://dx.doi.org/10.1103/PhysRevB.88.075144 -"Associated photon and heavy quark production at high energy within - k_T-factorization",http://arxiv.org/abs/1301.6515v1 -"Conical Fireballs, Cannonballs, And Jet-Breaks In The Afterglows Of - Gamma Ray Bursts",http://dx.doi.org/10.1051/0004-6361/201321876 -CP violation in charm decays,http://arxiv.org/abs/1308.1372v1 -Are GRBs the same at high and low redshift?,http://arxiv.org/abs/1308.5651v1 -"Thermodynamic assessment of probability distribution divergencies and - Bayesian model comparison",http://arxiv.org/abs/1308.6753v2 -"An Effective Ratner Equidistribution Result for ASL(2,R)",http://dx.doi.org/10.1215/00127094-2885873 -"Measurement of D-meson production in p-Pb collisions with the ALICE - 
detector",http://dx.doi.org/10.1088/1742-6596/509/1/012101 -Results of Soft-Diffraction at LHCb,http://arxiv.org/abs/1310.2192v1 -"Symmetry-energy dependence of the dynamical dipole mode in the - Boltzmann-Uehling-Uhlenbeck model",http://dx.doi.org/10.1103/PhysRevC.88.047602 -eta' multiplicity and Witten-Veneziano relation at T>0,http://dx.doi.org/10.5506/APhysPolBSupp.6.935 -An adaptive finite element method for the infinity Laplacian,http://dx.doi.org/10.1007/978-3-319-10705-9_28 -"Coexisting Itinerant and Localized Electrons in Iron-Based - Superconductors",http://arxiv.org/abs/1311.4094v2 -Local analytic regularity in the linearized Calderón problem,http://dx.doi.org/10.2140/apde.2016.9.515 -Double-Diffusive Convection,http://arxiv.org/abs/1401.0928v1 -Partial and Quasi Dynamical Symmetries in Nuclei,http://arxiv.org/abs/1401.4881v2 -Double ratio of charmonia in p+Pb collisions at sqrt(s_NN)=5.02 TeV,http://dx.doi.org/10.1088/1742-6596/535/1/012011 -Experimental Results on p(d)+A Collisions at RHIC and the LHC,http://dx.doi.org/10.1016/j.nuclphysa.2014.09.084 -The cut-and-paste process,http://dx.doi.org/10.1214/14-AOP922 -Quarkonia production in proton-lead collisions at LHCb,http://arxiv.org/abs/1409.3967v2 -"Experimental systematic uncertainties (and object reconstruction) on top - physics, their correlations, comparison ATLAS vs CMS (vs Tevatron) and common - agreements",http://arxiv.org/abs/1412.0875v1 -Charm production in SIBYLL,http://arxiv.org/abs/1502.06353v1 -A more accurate measurement of the $^{28}$Si lattice parameter,http://dx.doi.org/10.1063/1.4917488 -Advanced scanning probe lithography,http://dx.doi.org/10.1038/nnano.2014.157 -Long-Lived Particle Searches in R-Parity Violating MSSM,http://arxiv.org/abs/1505.03479v3 -Investigation of Geant4 Simulation of Electron Backscattering,http://dx.doi.org/10.1109/TNS.2015.2442292 -"Heavy-flavor correlations and multiplicity dependence in pp and p--Pb - collisions with ALICE",http://dx.doi.org/10.1016/j.nuclphysa.2016.02.057 -LHCb results from proton ion collisions,http://dx.doi.org/10.1051/epjconf/201612006004 -Production of exotic hadrons at hadron colliders,http://dx.doi.org/10.1063/1.4949444 -"Two-dimensional currents at semiconductor surfaces as resonances within - the classical current equation",http://arxiv.org/abs/1512.06850v1 -Hadronic final states at HERA,http://arxiv.org/abs/1512.08910v1 -Quantum metamaterials in the microwave and optical ranges,http://dx.doi.org/10.1140/epjqt/s40507-016-0040-x -Asymptotic Analysis of Random Lattices in High Dimensions,http://arxiv.org/abs/1603.00133v1 -Quantum geometrodynamics with intrinsic time development,http://dx.doi.org/10.1142/S0218271816450085 -Atmospheric Neutrino Status,http://arxiv.org/abs/1605.00612v1 -Quantum Gravity and a Time Operator in Relativistic Quantum Mechanics,http://arxiv.org/abs/1605.01659v1 -"Secondary radiation measurements for particle therapy applications: - prompt photons produced by $^{4}$He, $^{12}$C and $^{16}$O ion beams in a - PMMA target",http://dx.doi.org/10.1088/1361-6560/62/4/1438 -"Application of the Signature Method to Pattern Recognition in the CEQUEL - Clinical Trial",http://arxiv.org/abs/1606.02074v1 -"Sentence Similarity Measures for Fine-Grained Estimation of Topical - Relevance in Learner Essays",http://dx.doi.org/10.18653/v1/W16-0533 -LWB and FS-LWB implementation for Sky platform using Contiki,http://arxiv.org/abs/1607.06622v2 -Nonclassicality of local bipartite correlations,http://dx.doi.org/10.1103/PhysRevA.95.032120 -Charmonium physics 
with heavy ions: experimental results,http://arxiv.org/abs/1611.02557v1 -Gamma-rays from $^{nat}$Sn and $^{nat}$C induced by fast neutrons,http://arxiv.org/abs/1611.02893v1 -High Energy Polarimetry of Prompt GRB Emission,http://dx.doi.org/10.1016/j.newar.2016.11.001 -The hodograph method for relativistic Coulomb systems,http://arxiv.org/abs/1701.08281v1 -Observing FRB 121102 with VERITAS; Searching for Associated TeV Emission,http://arxiv.org/abs/1708.04717v1 -"Robot-Initiated Specification Repair through Grounded Language - Interaction",http://arxiv.org/abs/1710.01417v1 -"Characterizations of equilibrium controls in time inconsistent - mean-field stochastic linear quadratic problems. I",http://arxiv.org/abs/1802.01080v1 -Measuring third party tracker power across web and mobile,http://arxiv.org/abs/1802.02507v1 -"Flares from coalescing black holes in the centimeter-wavelength - transient sky",http://arxiv.org/abs/1806.08446v1 -"Evidence for a 5 MeV Spectral Deviation in the Goesgen Reactor Neutrino - Oscillation Experiment",http://arxiv.org/abs/1807.01810v1 -Modeling OWL with Rules: The ROWL Protege Plugin,http://arxiv.org/abs/1808.10104v1 -Statistical reform and the replication crisis,http://dx.doi.org/10.1007/s13164-018-0421-4 -On Symmetry and Duality,http://arxiv.org/abs/1905.05966v2 -The Special Galileon as Goldstone of Diffeomorphisms,http://arxiv.org/abs/2004.09559v2 -Cyclic Sieving for Cyclic Codes,http://arxiv.org/abs/2004.11998v1 -Beauty Production with ALICE at the LHC,http://arxiv.org/abs/2004.13185v1 -"The Bowditch boundary of $(G,\mathcal{H})$ when $G$ is hyperbolic",http://arxiv.org/abs/1504.03630v4 -"On Functions Whose Mean Value Abscissas Are Midpoints, with Connections - to Harmonic Functions",http://arxiv.org/abs/1608.02558v2 -"Comments on ""Study of $J/ψ$ production in jets""",http://dx.doi.org/10.1142/S021773231771002X -Quantum Secrecy in Thermal States,http://arxiv.org/abs/1711.06592v3 -"RodSteward: A Design-to-Assembly System for Fabrication using 3D-Printed - Joints and Precision-Cut Rods",http://arxiv.org/abs/1906.05710v1 -"Enhanced Input Modeling for Construction Simulation using Bayesian Deep - Neural Networks",http://dx.doi.org/10.1109/WSC40007.2019.9004934 -Teacher Noticing and Shifting of Student Epistemic Framing,http://arxiv.org/abs/2002.12879v1 -Enabling Edge Cloud Intelligence for Activity Learning in Smart Home,http://arxiv.org/abs/2005.06885v1 -Swimming statistics of cargo-loaded single bacteria,http://arxiv.org/abs/2005.12070v1 -Unsettling physics in the quantum-corrected Schwarzschild black hole,http://dx.doi.org/10.3390/sym12081264 -Fast Generators of Direct Photons,http://arxiv.org/abs/0811.2634v1 -Meson spectrum and tetraquarks through an AdS/QCD inspired potential,http://arxiv.org/abs/0811.3553v1 -"Electron and Photon Performance and Electron p_T Spectrum Measurement - with ATLAS in pp Collisions at sqrt(s) = 7 TeV",http://arxiv.org/abs/1012.0554v1 -Timing analysis of two-electron photoemission,http://dx.doi.org/10.1088/0953-4075/44/10/101003 -Pair production of J/psi as a probe of double parton scattering at LHCb,http://dx.doi.org/10.1103/PhysRevLett.107.082002 -Supersymmetric Fluid Dynamics,http://dx.doi.org/10.1103/PhysRevD.85.125009 -Algorithms for strongly stable ideals,http://arxiv.org/abs/1110.4080v2 -Algebra+Homotopy=Operad,http://arxiv.org/abs/1202.3245v1 -"Construction of an Ordinary Dirichlet Series with Convergence beyond the - Bohr Strip",http://arxiv.org/abs/1202.5703v1 -Lamellar Lα Mesophases Doped with Inorganic 
Nanoparticles,http://dx.doi.org/10.1002/cphc.201301187 -Image Quality of SOLIS/VSM in Helium vs. Nitrogen,http://arxiv.org/abs/1405.7967v1 -Co-existing structures in 105Ru,http://dx.doi.org/10.1103/PhysRevC.89.064312 -"Recognizing how some states are built, from mixed states to singlets",http://arxiv.org/abs/1406.3549v2 -"W, Z and photon production at the LHC",http://arxiv.org/abs/1410.6372v2 -Forecasting the Integration of Immigrants,http://arxiv.org/abs/1509.05447v1 -"Ice Surface Entropy Induction by Humidity or How Humidity Prompts - Freezing",http://arxiv.org/abs/1509.06728v1 -Hydrodynamic synchronization of flagellar oscillators,http://dx.doi.org/10.1140/epjst/e2016-60056-4 -"The Performance and Development of the Inner Detector Trigger Algorithms - at ATLAS for LHC Run 2",http://arxiv.org/abs/1511.01136v1 -Polygonal instabilities on interfacial vorticities,http://dx.doi.org/10.1140/epje/i2015-15113-5 -"Next-to-leading order QCD corrections to $χ_{cJ} W^+ b$ associated - production from top-quark decay",http://dx.doi.org/10.1103/PhysRevD.94.094045 -Measurement of D-meson production in pp collisions with ALICE at the LHC,http://arxiv.org/abs/1705.05147v1 -Knowledge Base Completion: Baselines Strike Back,http://arxiv.org/abs/1705.10744v1 -On maximizers of convolution operators in $L_p$ spaces,http://dx.doi.org/10.1070/SM9099 -"Spin Waves in Quantum Gases --- The Quality Factor of the Identical Spin - Rotation Effect",http://dx.doi.org/10.1088/1402-4896/aad4d6 -Decision problems for Clark-congruential languages,http://arxiv.org/abs/1805.04402v2 -"A Manually Annotated Chinese Corpus for Non-task-oriented Dialogue - Systems",http://arxiv.org/abs/1805.05542v1 -Who discovered positron annihilation?,http://arxiv.org/abs/1809.04815v3 -PROSA PDFs and astrophysical applications,http://dx.doi.org/10.5506/APhysPolBSupp.12.885 -LAMP: Prompt Layer 7 Attack Mitigation with Programmable Data Planes,http://dx.doi.org/10.1109/NCA.2018.8548136 -A crystal ball for kilonovae,http://arxiv.org/abs/1812.07307v1 -String Production in the Abelian Higgs Vacuum,http://dx.doi.org/10.1103/PhysRevD.99.103509 -"Fiducial cross-section measurements of top-quark pair production in - association with a photon at $\sqrt{s}$ = 13 TeV with the ATLAS detector",http://arxiv.org/abs/1901.03929v1 -"Unified Coupled-Channels and Hauser-Feshbach Model Calculation for - Nuclear Data Evaluation",http://arxiv.org/abs/1901.05641v1 -"Gamma-ray burst localisation strategies for the SPHiNX hard X-ray - polarimeter",http://dx.doi.org/10.1117/1.JATIS.5.1.018002 -"SPD - the Spin Physics Project with Polarized Proton and Deuteron Beams - at the NICA Collider",http://dx.doi.org/10.7566/JPSCP.26.021018 -"Precision measurements of the AC field dependence of the superconducting - transition in strontium titanate",http://dx.doi.org/10.1007/s10948-019-05282-7 -Wetzel's sector covers unit arcs,http://dx.doi.org/10.13140/RG.2.2.24935.80807 -"Brownian motion and beyond: first-passage, power spectrum, - non-Gaussianity, and anomalous diffusion",http://dx.doi.org/10.1088/1742-5468/ab4988 -"On Laughter and Speech-Laugh, Based on Observations of Child-Robot - Interaction",http://arxiv.org/abs/1908.11593v1 -The tension between openness and prudence in AI research,http://arxiv.org/abs/1910.01170v2 -Arguing Ecosystem Values with Paraconsistent Logics,http://arxiv.org/abs/1911.06367v1 -Parsimonious Mixtures of Matrix Variate Bilinear Factor Analyzers,http://arxiv.org/abs/1911.09012v1 -Connecting Inverse Design with Experimentally Relevant 
Models,http://arxiv.org/abs/2003.05896v1 -"The determination of stellar temperatures from Baron B. Harkányi to - the Gaia mission",http://dx.doi.org/10.1177/0021828620918961 -On a Kantorovich-Rubinstein inequality,http://arxiv.org/abs/2010.12946v1 -"SMRT Chatbots: Improving Non-Task-Oriented Dialog with Simulated - Multiple Reference Training",http://dx.doi.org/10.18653/v1/2020.findings-emnlp.403 -"Bounds on the scale of noncommutativity from mono photon production in - ATLAS Runs -1 and -2 experiments at LHC energies",http://dx.doi.org/10.1142/S0219887821501267 -GN-z11-flash in the context of Gamma-Ray Burst Afterglows,http://arxiv.org/abs/2012.09634v1 -Consequences of the packing problem,http://dx.doi.org/10.1007/s10801-021-01039-5 -Search for Axion(-like) Particles in Heavy-Ion Collisions,http://dx.doi.org/10.1007/JHEP07(2022)082 -"Compton driven beam formation and magnetisation via plasma - microinstabilities",http://dx.doi.org/10.1017/S0022377821000660 -"Conjecture of TxGraffiti: Independence, domination, and matchings",http://arxiv.org/abs/2104.01092v1 -Assorted Musings on Dimension-critical Graphs,http://arxiv.org/abs/2106.05333v2 -Fan Valuations and spherical intrinsic volumes,http://arxiv.org/abs/2106.06407v1 -"Hidden variables in angular correlations of the particles emitted in - fission",http://arxiv.org/abs/2106.10371v1 -"Recent results of $\text{D}^{0}$ mesons azimuthal anisotropy using the - CMS detector",http://arxiv.org/abs/2110.09878v1 -Lightweight Decoding Strategies for Increasing Specificity,http://arxiv.org/abs/2110.11850v1 -On a new environment-friendly gas mixture for Resistive Plate Chambers,http://dx.doi.org/10.1088/1748-0221/17/05/P05005 -Leashing the Inner Demons: Self-Detoxification for Language Models,http://arxiv.org/abs/2203.03072v1 -"Direct Photons in Hydrodynamic Modeling of Relativistic Nuclear - Collisions",http://dx.doi.org/10.1142/S0217751X2230006X -Seq-2-Seq based Refinement of ASR Output for Spoken Name Capture,http://arxiv.org/abs/2203.15833v1 -Evaluating the Text-to-SQL Capabilities of Large Language Models,http://arxiv.org/abs/2204.00498v1 -"UniGDD: A Unified Generative Framework for Goal-Oriented - Document-Grounded Dialogue",http://arxiv.org/abs/2204.07770v1 -Physics implications of recent Dresden-II reactor data,http://dx.doi.org/10.1103/PhysRevD.106.093010 -Towards non-linear quadrature formulae,http://arxiv.org/abs/2209.02302v1 -Quarkonia production in (ultra-)peripheral PbPb collisions at LHCb,http://arxiv.org/abs/2209.09990v1 -"Precision measurements of jet and photon production at the ATLAS - experiment",http://arxiv.org/abs/2210.02598v2 -MiQA: A Benchmark for Inference on Metaphorical Questions,http://arxiv.org/abs/2210.07993v1 -"A Study of Teacher Educators Skill and ICT Integration in Online - Teaching during the Pandemic Situation in India",http://arxiv.org/abs/2210.11267v1 -"Novel photon timing techniques applied to the LHCb RICH upgrade - programme",http://dx.doi.org/10.1088/1742-6596/2374/1/012074 -Charm and beauty production and hadronization with the ALICE experiment,http://arxiv.org/abs/2211.03720v1 -Overcoming positivity violations for density matrices in surface hopping,http://dx.doi.org/10.1063/5.0135456 -Complete $(2+1)$-dimensional Ricci flow spacetimes,http://arxiv.org/abs/2211.11866v1 -Risks to Zero Trust in a Federated Mission Partner Environment,http://arxiv.org/abs/2211.17073v1 -"Accuracy and Fidelity Comparison of Luna and DALL-E 2 Diffusion-Based - Image Generation Systems",http://arxiv.org/abs/2301.01914v2 -"The 
COVID-19 vaccination, preventive behaviors and pro-social - motivation: panel data analysis from Japan",http://arxiv.org/abs/2301.03124v1 -"The Keyword Explorer Suite: A Toolkit for Understanding Online - Populations",http://arxiv.org/abs/2301.05198v2 -Kippenhahn's construction revisited,http://arxiv.org/abs/2301.05802v2 -"Mixture of Diffusers for scene composition and high resolution image - generation",http://arxiv.org/abs/2302.02412v1 -"Is ChatGPT better than Human Annotators? Potential and Limitations of - ChatGPT in Explaining Implicit Hate Speech",http://dx.doi.org/10.1145/3543873.3587368 -"LayoutDiffuse: Adapting Foundational Diffusion Models for - Layout-to-Image Generation",http://arxiv.org/abs/2302.08908v1 -In-Depth Look at Word Filling Societal Bias Measures,http://arxiv.org/abs/2302.12640v1 -"Investigating the Translation Performance of a Large Multilingual - Language Model: the Case of BLOOM",http://arxiv.org/abs/2303.01911v2 -Text2Face: A Multi-Modal 3D Face Model,http://arxiv.org/abs/2303.02688v2 -"Mapping the Design Space of Interactions in Human-AI Text Co-creation - Tasks",http://arxiv.org/abs/2303.06430v2 -"Disconnected from Reality: Do the core concepts of the metaverse exclude - disabled individuals?",http://arxiv.org/abs/2303.08222v2 -Universality and Control of Fat Tails,http://dx.doi.org/10.1109/LCSYS.2023.3279248 -Compositional 3D Scene Generation using Locally Conditioned Diffusion,http://arxiv.org/abs/2303.12218v2 -"CoCoMo: Computational Consciousness Modeling for Generative and Ethical - AI",http://arxiv.org/abs/2304.02438v2 -Human-like Summarization Evaluation with ChatGPT,http://arxiv.org/abs/2304.02554v1 -ChatGPT as a Therapist Assistant: A Suitability Study,http://arxiv.org/abs/2304.09873v1 -Better Question-Answering Models on a Budget,http://arxiv.org/abs/2304.12370v1 -Training-Free Location-Aware Text-to-Image Synthesis,http://arxiv.org/abs/2304.13427v1 -Multidimensional Evaluation for Text Style Transfer Using ChatGPT,http://arxiv.org/abs/2304.13462v1 -Edit Everything: A Text-Guided Generative System for Images Editing,http://arxiv.org/abs/2304.14006v1 -Getting More out of Large Language Models for Proofs,http://arxiv.org/abs/2305.04369v2 -ComputeGPT: A computational chat model for numerical problems,http://arxiv.org/abs/2305.06223v1 -"Reply to: Deep reinforced learning heuristic tested on spin-glass ground - states: The larger picture",http://arxiv.org/abs/2305.07562v1 -"Enhance Reasoning Ability of Visual-Language Models via Large Language - Models",http://arxiv.org/abs/2305.13267v1 -"""Is the Pope Catholic?"" Applying Chain-of-Thought Reasoning to - Understanding Conversational Implicatures",http://arxiv.org/abs/2305.13826v1 -Domain Private Transformers,http://arxiv.org/abs/2305.14208v1 -Instruction Tuning with Lexicons for Zero-Shot Style Classification,http://arxiv.org/abs/2305.14592v1 -"OverPrompt: Enhancing ChatGPT Capabilities through an Efficient - In-Context Learning Approach",http://arxiv.org/abs/2305.14973v1 -"Predicting Alzheimers Disease Diagnosis Risk over Time with Survival - Machine Learning on the ADNI Cohort",http://arxiv.org/abs/2306.10326v1 -Prompt to GPT-3: Step-by-Step Thinking Instructions for Humor Generation,http://arxiv.org/abs/2306.13195v1 -"Some identities involving $q$-Stirling numbers of the second kind in - type B",http://arxiv.org/abs/2307.00570v1 -"ChatGPT Creates a Review Article: State of the Art in the Most-Cited - Articles on ChatGPT in Health Science, Computer Science, Communication, and - Culture, According to 
Altmetric in Dimensions.ai",http://arxiv.org/abs/2307.02488v1 -"Dimensionless Numbers Reveal Distinct Regimes in the Structure and - Dynamics of Pedestrian Crowds",http://arxiv.org/abs/2307.12786v1 -"AI and Education: An Investigation into the Use of ChatGPT for Systems - Thinking",http://arxiv.org/abs/2307.14206v1 -Electromagnetic radiation at extreme angular velocity,http://arxiv.org/abs/2308.10349v1 -Blockchain-Powered Supply Chain Management for Kidney Organ Preservation,http://arxiv.org/abs/2308.11169v2 -Exploring Large Language Models for Ontology Alignment,http://arxiv.org/abs/2309.07172v1 -Hysteresis resulting from Lennard-Jones interactions,http://arxiv.org/abs/2309.09356v1 -Evaluation of GPT-3 for Anti-Cancer Drug Sensitivity Prediction,http://arxiv.org/abs/2309.10016v1 -require: Package dependencies for reproducible research,http://arxiv.org/abs/2309.11058v1 -Knowledge Sanitization of Large Language Models,http://arxiv.org/abs/2309.11852v1 -"Navigating the Conjectural Labyrinth of the Black Hole Information - Paradox",http://arxiv.org/abs/2310.03607v1 -"Tabular Representation, Noisy Operators, and Impacts on Table Structure - Understanding Tasks in LLMs",http://arxiv.org/abs/2310.10358v1 -Why Can Large Language Models Generate Correct Chain-of-Thoughts?,http://arxiv.org/abs/2310.13571v1 -An adjustable Brownian heat engine,http://arxiv.org/abs/cond-mat/0208474v1 -Measuring Cognitive Activities in Software Engineering,http://arxiv.org/abs/cs/0702001v1 -A simple quantum heat engine,http://arxiv.org/abs/quant-ph/0211072v2 -Multiple Presents: How Search Engines Re-write the Past,http://arxiv.org/abs/0911.3643v1 -"Approaches to Curriculum and Teaching Materials to Bring Out Better - Skilled Software Engineers-An Indian Perspective",http://arxiv.org/abs/1001.3932v1 -Pointer States via Engineered Dissipation,http://dx.doi.org/10.1103/PhysRevA.84.022336 -Quantum cooling by unitary transformations,http://arxiv.org/abs/1112.1557v2 -"Specification and Verification of Uplink Framework for Application of - Software Engineering using RM-ODP",http://arxiv.org/abs/1204.6729v1 -"Efficiency and Its Bounds for Thermal Engines at Maximum Power using a - Newton's Law of Cooling",http://dx.doi.org/10.1103/PhysRevE.85.011146 -Similarity Measuring Approuch for Engineering Materials Selection,http://arxiv.org/abs/1301.0176v1 -A quantum dynamical framework for Brownian heat engines,http://arxiv.org/abs/1303.1233v1 -Intelligent Agent Based Semantic Web in Cloud Computing Environment,http://arxiv.org/abs/1305.0939v1 -On the Peculiarities of Design: An Engineering Perspective,http://arxiv.org/abs/1305.4148v1 -A Synonym Based Approach of Data Mining in Search Engine Optimization,http://dx.doi.org/10.14445/22312803/IJCTT-V12P140 -"Bridging the Research-Practice Gap in Requirements Engineering through - Effective Teaching and Peer Learning",http://arxiv.org/abs/1407.4186v1 -A manager's view on large scale XP projects,http://arxiv.org/abs/1409.7060v1 -"Integrating goals after prioritization and evaluation-A Goal-oriented - requirements engineering method",http://dx.doi.org/10.5121/ijsea.2014.5604 -A Narrow Short-Duration GRB Jet from a Wide Central Engine,http://dx.doi.org/10.1088/0004-637X/813/1/64 -"Understanding the Affect of Developers: Theoretical Background and - Guidelines for Psychoempirical Software Engineering",http://dx.doi.org/10.1145/2804381.2804386 -"An Architecture Process Maturity Model of Software Product Line - Engineering",http://dx.doi.org/10.1007/s11334-011-0159-y -Exotic properties and 
optimal control of quantum heat engine,http://dx.doi.org/10.1209/0295-5075/113/40009 -Universality and scaling of optimal heat engines,http://arxiv.org/abs/1510.06183v2 -Optimal tuning of a confined Brownian information engine,http://dx.doi.org/10.1103/PhysRevE.93.032146 -"Nmag micromagnetic simulation tool - software engineering lessons - learned",http://dx.doi.org/10.1145/2897676.2897677 -A mechanical autonomous stochastic heat engine,http://dx.doi.org/10.1103/PhysRevLett.117.010602 -Adaptive heat engine,http://dx.doi.org/10.1103/PhysRevLett.117.030601 -"Large-scale Analysis of Chess Games with Chess Engines: A Preliminary - Report",http://arxiv.org/abs/1607.04186v1 -Resource Selection for Federated Search on the Web,http://arxiv.org/abs/1609.04556v1 -"Protocol for a Systematic Mapping Study on Collaborative Model-Driven - Software Engineering",http://arxiv.org/abs/1611.02619v1 -"Solving Cold-Start Problem in Large-scale Recommendation Engines: A Deep - Learning Approach",http://arxiv.org/abs/1611.05480v1 -"Engines with ideal efficiency and nonzero power for sublinear transport - laws",http://dx.doi.org/10.1140/epjb/e2016-70297-9 -The spatial dynamics of ecosystem engineers,http://dx.doi.org/10.1016/j.mbs.2017.08.002 -"Efficiency at maximum power of a quantum Carnot engine with temperature - tunable baths",http://arxiv.org/abs/1710.06565v1 -"Adapting general-purpose speech recognition engine output for - domain-specific natural language question answering",http://arxiv.org/abs/1710.06923v1 -Ethical and Social Aspects of Self-Driving Cars,http://arxiv.org/abs/1802.04103v1 -Extracting maximum power from active colloidal heat engines,http://dx.doi.org/10.1209/0295-5075/121/60005 -"Development of formal models, algorithms, procedures, engineering and - functioning of the software system ""Instrumental complex for ontological - engineering purpose""",http://arxiv.org/abs/1803.10684v2 -"Defining the Structure of Environmental Competence of Future Mining - Engineers: ICT Approach",http://arxiv.org/abs/1807.00805v2 -Nonuniversality of heat engine efficiency at maximum power,http://dx.doi.org/10.1103/PhysRevE.98.052137 -A spin heat engine coupled to a harmonic-oscillator flywheel,http://dx.doi.org/10.1103/PhysRevLett.123.080602 -"Future Automation Engineering using Structural Graph Convolutional - Neural Networks",http://dx.doi.org/10.1145/3240765.3243477 -"Knowledge Management in Software Engineering: A Systematic Review of - Studied Concepts, Findings and Research Methods Used",http://dx.doi.org/10.1016/j.infsof.2008.03.006 -"Redesigning Telecommunication Engineering Courses with CDIO geared for - Polytechnic Education",http://dx.doi.org/10.24908/PCEEA.VI0.13855 -Quantum mechanical bound for efficiency of quantum Otto heat engine,http://dx.doi.org/10.1103/PhysRevE.100.012148 -"Optimal efficiency and power and their trade-off in three-terminal - quantum thermoelectric engines with two output electric currents",http://dx.doi.org/10.1103/PhysRevB.100.115438 -"Toward a Methodological Knowledge for Service-Oriented Development Based - on OPEN Meta Model",http://arxiv.org/abs/2004.10135v2 -Simulation-based Safety Assessment of High-level Reliability Models,http://dx.doi.org/10.4204/EPTCS.316.9 -"""Sampling""' as a Baseline Optimizer for Search-based Software - Engineering",http://arxiv.org/abs/1608.07617v3 -On the Emotion of Users in App Reviews,http://arxiv.org/abs/1703.02256v1 -"Squeezed thermal reservoirs as a resource for a nano-mechanical engine - beyond the Carnot 
limit",http://dx.doi.org/10.1103/PhysRevX.7.031044 -Holographic Heat engine within the framework of massive gravity,http://dx.doi.org/10.1007/JHEP05(2018)122 -MBL-mobile: Quantum engine based on many-body localization,http://dx.doi.org/10.1103/PhysRevB.99.024203 -The collapse of ecosystem engineer populations,http://dx.doi.org/10.3390/math6010009 -The Poschl-Teller Like Description of Quantum Mechanical Carnot Engine,http://dx.doi.org/10.1016/j.cjph.2021.01.004 -A Feshbach engine in the Thomas-Fermi regime,http://dx.doi.org/10.1103/PhysRevResearch.2.033335 -Quantum jump approach to microscopic heat engines,http://dx.doi.org/10.1103/PhysRevResearch.2.033449 -"A Reinforcement Learning Approach for Transient Control of Liquid Rocket - Engines",http://dx.doi.org/10.1109/TAES.2021.3074134 -Distributing Content Simplifies ISP Traffic Engineering,http://arxiv.org/abs/1209.5715v1 -"Sharing of Semantically Enhanced Information for the Adaptive Execution - of Business Processes",http://arxiv.org/abs/1304.2326v1 -"Engine efficiency at maximum power, entropy production and equilibrium - thermodynamics",http://arxiv.org/abs/1405.2273v3 -"Personality Profiles of Software Engineers and Their Software Quality - Preferences",http://dx.doi.org/10.4018/ijissc.2014070106 -"A comprehensive safety engineering approach for software-intensive - systems based on STPA",http://dx.doi.org/10.1016/j.proeng.2015.11.498 -Towards Reverse Engineering Reversible Logic,http://arxiv.org/abs/1704.08397v3 -Taub-Bolt Heat Engines,http://dx.doi.org/10.1088/1361-6382/aaa010 -Mesh Model (MeMo): A Systematic Approach to Agile System Engineering,http://arxiv.org/abs/1705.09170v1 -Invariant Synthesis for Incomplete Verification Engines,http://arxiv.org/abs/1712.05581v2 -Quantum heat engine operating between thermal and spin reservoirs,http://dx.doi.org/10.1103/PhysRevA.97.052104 -"Integrating Software Engineering Key Practices into an OOP Massive - In-Classroom Course: an Experience Report",http://arxiv.org/abs/1804.01700v1 -Quantum Rotor Engines,http://dx.doi.org/10.1007/978-3-319-99046-0_9 -Efficiency of harmonic quantum Otto engines at maximal power,http://dx.doi.org/10.3390/e20110875 -"Fast and Exact Nearest Neighbor Search in Hamming Space on Full-Text - Search Engines",http://arxiv.org/abs/1902.08498v2 -"Towards a Strategy for Supporting the Engineering of Contemporary - Software Systems",http://arxiv.org/abs/1904.11741v1 -Searching the Visual Style and Structure of D3 Visualizations,http://dx.doi.org/10.1109/TVCG.2019.2934431 -Topological and Finite Size Effects in a Kitaev Chain Heat Engine,http://arxiv.org/abs/1908.02643v1 -Approximation Algorithms for Process Systems Engineering,http://arxiv.org/abs/1909.12328v1 -"Software Engineering Practice in the Development of Deep Learning - Applications",http://arxiv.org/abs/1910.03156v1 -Attaining Carnot Efficiency with Quantum and Nanoscale Heat Engines,http://dx.doi.org/10.1038/s41534-021-00366-6 -A Flipped Classroom Approach to Teaching Empirical Software Engineering,http://dx.doi.org/10.1109/TE.2019.2960264 -Double quantum-dot engine fueled by entanglement between electron spins,http://dx.doi.org/10.1103/PhysRevB.101.081408 -Teaching Software Engineering for AI-Enabled Systems,http://arxiv.org/abs/2001.06691v1 -Quantum engine based on general measurements,http://dx.doi.org/10.1088/1751-8121/abca74 -Microscopic thermal machines using run-and-tumble particles,http://dx.doi.org/10.1007/s12043-021-02225-7 -Analogy-Making as a Core Primitive in the Software Engineering 
Toolbox,http://arxiv.org/abs/2009.06592v1 -Substrate engineering in the growth of perovskite crystals,http://arxiv.org/abs/2010.04422v1 -"Underpinning Theories of Software Engineering: Dynamism in Physical - Sources of the Shannon Weaver Communication Model",http://arxiv.org/abs/2010.08538v1 -"Can Reinforcement Learning for Continuous Control Generalize Across - Physics Engines?",http://arxiv.org/abs/2010.14444v1 -"Numerical Algorithm Development for Optimizing the Engine Stroke of - Linear Generators",http://arxiv.org/abs/2011.03266v1 -Advancing Behavior Engineering: Toward Integrated Events Modeling,http://arxiv.org/abs/2101.01325v1 -"Synergistic action in colloidal heat engines coupled by non-conservative - flows",http://arxiv.org/abs/2101.07015v2 -"Misplaced trust? The relationship between trust, ability to identify - commercially influenced results, and search engine preference",http://arxiv.org/abs/2101.09159v2 -Active engines: Thermodynamics moves forward,http://dx.doi.org/10.1209/0295-5075/134/10003 -How Developers Engineer Test Cases: An Observational Study,http://arxiv.org/abs/2103.01783v4 -"Hierarchical Onsager symmetries in adiabatically driven linear - irreversible heat engines",http://dx.doi.org/10.1103/PhysRevE.103.L050101 -Engineering Sketch Generation for Computer-Aided Design,http://arxiv.org/abs/2104.09621v1 -Engineering Knowledge Graph from Patent Database,http://arxiv.org/abs/2106.06739v1 -"To Infinity and Beyond! Accessibility is the Future for Kids' Search - Engines",http://arxiv.org/abs/2106.07813v1 -Theory and Practice of Algorithm Engineering,http://arxiv.org/abs/2107.10675v1 -Fluctuations in heat engines,http://dx.doi.org/10.1088/1751-8121/ac3aac -"Optimisation of an FPGA Credit Default Swap engine by embracing dataflow - techniques",http://arxiv.org/abs/2108.03982v1 -Is Szilard Engine Really Broken?,http://arxiv.org/abs/2111.12300v1 -"From Anecdote to Evidence: The Relationship Between Personality and Need - for Cognition of Developers",http://arxiv.org/abs/2112.06610v1 -"Building Bridges: Establishing a Dialogue Between Software Engineering - Research and Computational Science",http://arxiv.org/abs/2201.04007v1 -"Maximizing information from chemical engineering data sets: Applications - to machine learning",http://dx.doi.org/10.1016/j.ces.2022.117469 -Spin Quantum Heat Engine Quantified by Quantum Steering,http://dx.doi.org/10.1103/PhysRevLett.128.090602 -"The efficiency of Quantum Mechanical Carnot Engine using the Woods Saxon - model",http://arxiv.org/abs/2203.02564v3 -Enabling Automated Machine Learning for Model-Driven AI Engineering,http://arxiv.org/abs/2203.02927v1 -"Irreversible efficiency and Carnot theorem for heat engines operating - with multiple heat baths in linear response regime",http://arxiv.org/abs/2204.00807v2 -The General Index of Software Engineering Papers,http://dx.doi.org/10.1145/3524842.3528494 -The Evolving Landscape of Software Performance Engineering,http://dx.doi.org/10.1145/3530019.3534977 -"Human-AI Guidelines in Practice: Leaky Abstractions as an Enabler in - Collaborative Software Teams",http://arxiv.org/abs/2207.01749v1 -Value-based Engineering with IEEE 7000TM,http://arxiv.org/abs/2207.07599v1 -"Study of bounds on non-equilibrium fluctuations for asymmetrically - driven quantum Otto engine",http://dx.doi.org/10.1103/PhysRevE.108.014118 -Work harvesting by q-deformed statistical mutations in an Otto engine,http://arxiv.org/abs/2208.08565v2 -Reflections on Software Failure Analysis,http://dx.doi.org/10.1145/3540250.3560879 
-"Classical to Quantum Software Migration Journey Begins: A Conceptual - Readiness Model",http://arxiv.org/abs/2209.05105v1 -Engineering a heat engine purely driven by quantum coherence,http://dx.doi.org/10.1103/PhysRevA.107.012221 -Cyclegan Network for Sheet Metal Welding Drawing Translation,http://arxiv.org/abs/2209.14106v1 -Rethinking the Reverse-engineering of Trojan Triggers,http://arxiv.org/abs/2210.15127v1 -Systematic Literature Review of Gender and Software Engineering in Asia,http://arxiv.org/abs/2211.09554v1 -Correlation-boosted quantum engine: A proof-of-principle demonstration,http://arxiv.org/abs/2211.11449v2 -"Investigation of the Effects of Biodiesel Produced from Crambe - Abyssinica Plant on Combustion, Engine Performance and Exhaust Emissions",http://arxiv.org/abs/2212.12159v1 -"Efficient Attack Detection in IoT Devices using Feature Engineering-Less - Machine Learning",http://dx.doi.org/10.5121/ijcsit.2022.14605 -Registered Reports in Software Engineering,http://dx.doi.org/10.1007/s10664-022-10277-5 -Algorithmic neutrality,http://arxiv.org/abs/2303.05103v2 -"NICHE: A Curated Dataset of Engineered Machine Learning Projects in - Python",http://arxiv.org/abs/2303.06286v1 -"Continuous Three-level Quantum Heat Engine with High Performance Under - Medium Temperature Difference",http://dx.doi.org/10.1063/5.0139998 -"Stop Words for Processing Software Engineering Documents: Do they - Matter?",http://arxiv.org/abs/2303.10439v2 -Resource engines,http://arxiv.org/abs/2304.09559v1 -"The two laws of engines in general, information and the meaning of - entropy",http://arxiv.org/abs/2304.10609v1 -Boosting Big Brother: Attacking Search Engines with Encodings,http://dx.doi.org/10.1145/3607199.3607220 -Emotions in Requirements Engineering: A Systematic Mapping Study,http://arxiv.org/abs/2305.16091v1 -"Multi-Modal Emotion Recognition for Enhanced Requirements Engineering: A - Novel Approach",http://arxiv.org/abs/2306.01492v1 -"Motivational models for validating agile requirements in Software - Engineering subjects",http://arxiv.org/abs/2306.06834v1 -Collective effects enhanced multi-qubit information engines,http://arxiv.org/abs/2306.12072v1 -Cloud Native Software Engineering,http://arxiv.org/abs/2307.01045v1 -"Adversarial Latent Autoencoder with Self-Attention for Structural Image - Synthesis",http://arxiv.org/abs/2307.10166v1 -AI in Software Engineering: A Survey on Project Management Applications,http://arxiv.org/abs/2307.15224v1 -A photonic engine fueled by quantum-correlated atoms,http://arxiv.org/abs/2307.16726v1 -"An interaction-driven quantum many-body engine enabled by atom-atom - correlations",http://arxiv.org/abs/2308.05266v1 -"Software Engineering Knowledge Areas in Startup Companies: A Mapping - Study",http://dx.doi.org/10.1007/978-3-319-18612-2_3 -Can stochastic resetting render a heat engine more efficient?,http://arxiv.org/abs/2308.15212v1 -"Coherence-enhanced thermodynamic performance in periodically driven - thermoelectric heat engines",http://arxiv.org/abs/2310.10465v1 -"Qualitative analysis of the relationship between design smells and - software engineering challenges",http://dx.doi.org/10.1145/3571697.3571704 -Persistent Counterparts to GRBS,http://arxiv.org/abs/astro-ph/9706141v2 -"A Search for Gamma-Ray Burst Optical Emission with the Automated Patrol - Telescope",http://dx.doi.org/10.1063/1.55461 -R-Process in Collapsing O/Ne/Mg Cores,http://dx.doi.org/10.1086/311133 -"Expected characteristics of the subclass of Supernova Gamma-ray Bursts - 
(S-GRBs)",http://dx.doi.org/10.1086/311655 -Dust sublimation by GRBs and its implications,http://dx.doi.org/10.1086/309053 -Prompt Optical Emission from Gamma-ray Bursts,http://arxiv.org/abs/astro-ph/9909219v1 -"Beaming, Baryon-Loading, and the Synchrotron Self-Compton Component in - Gamma-Ray Burst Blast Waves Energized by External Shocks",http://dx.doi.org/10.1086/309061 -"Prompt and afterglow emission from the X-ray rich GRB981226 observed - with BeppoSAX",http://dx.doi.org/10.1086/309369 -X-ray spectral features from GRBs: Predictions of progenitor models,http://dx.doi.org/10.1086/318346 -The Redshift of the Optical Transient Associated with GRB 010222,http://dx.doi.org/10.1086/321709 -On the absorption feature in the prompt X-ray spectrum of GRB 990705,http://dx.doi.org/10.1086/321565 -GRB 990704: the most X-ray rich BeppoSAX gamma-ray burst,http://dx.doi.org/10.1051/0004-6361:20011192 -Gamma Ray Bursts and Cosmic Ray Origin,http://arxiv.org/abs/astro-ph/0202254v1 -"The Prompt Inventory from Very Massive Stars and Elemental Abundances in - Ly Alpha Systems",http://dx.doi.org/10.1086/340641 -"ECLAIRs: A microsatellite to observe the prompt optical and X-ray - emission of Gamma-Ray Bursts",http://dx.doi.org/10.1063/1.1579407 -VLT and HST Observations of the Host Galaxy of GRB990705,http://dx.doi.org/10.1086/346072 -Gamma Ray Bursts: open problems,http://arxiv.org/abs/astro-ph/0301256v1 -Thermonuclear Stability of Material Accreting onto a Neutron Star,http://dx.doi.org/10.1086/379211 -INTEGRAL and XMM-Newton observations of the weak GRB 030227,http://dx.doi.org/10.1086/376853 -Optical afterglow of the not so dark GRB 021211,http://dx.doi.org/10.1051/0004-6361:20031153 -"Polarization of prompt GRB emission: evidence for - electromagnetically-dominated outflow",http://dx.doi.org/10.1086/378497 -"Neutral beam model for the anomalous gamma-ray emission component in GRB - 941017",http://dx.doi.org/10.1051/0004-6361:20040108 -"GeV and higher energy photon interactions in gamma-ray burst fireballs - and surroundings",http://dx.doi.org/10.1086/423166 -Discovering Tau and Muon Solar Neutrino Flares above backgrounds,http://dx.doi.org/10.1142/9789812701824_0043 -"MGGPOD: a Monte Carlo Suite for Modeling Instrumental Line and Continuum - Backgrounds in Gamma-Ray Astronomy",http://dx.doi.org/10.1086/425577 -INTEGRAL and XMM-Newton Observations of GRB040106,http://dx.doi.org/10.1051/0004-6361:20041267 -May Gravity detect Tsunami ?,http://dx.doi.org/10.1088/1009-9271/6/S1/55 -Cosmology with Gamma Ray Bursts,http://dx.doi.org/10.1393/ncc/i2005-10119-0 -Search for muon enhancement at sea level from transient solar activity,http://dx.doi.org/10.1103/PhysRevD.71.103011 -Gamma-Ray Burst Jet Profiles And Their Signatures,http://dx.doi.org/10.1063/1.2207869 -Tail Emission of Prompt Gamma-Ray Burst Jets,http://dx.doi.org/10.1111/j.1365-2966.2006.10290.x -"The observable effects of a photospheric component on GRB's and XRF's - prompt emission spectrum",http://dx.doi.org/10.1086/501424 -"Swift observations of the prompt X-ray emission and afterglow from - GRB050126 and GRB050219A",http://dx.doi.org/10.1051/0004-6361:20054457 -"The GRB early optical flashes from internal shocks: application to - GRB990123, GRB041219a and GRB060111b",http://dx.doi.org/10.1111/j.1365-2966.2006.11156.x -Are GRB 980425 and GRB 031203 real outliers or twins of GRB 060218?,http://dx.doi.org/10.1111/j.1365-2966.2006.10972.x -"GRB 050717: A Long, Short-Lag, High Peak Energy Burst Observed by Swift - and 
Konus",http://dx.doi.org/10.1063/1.2207875 -Are Short GRBs Really Hard?,http://dx.doi.org/10.1063/1.2210314 -Anti-Neutrino Imprint in Solar Neutrino Flare,http://dx.doi.org/10.1088/0031-8949/2006/T127/008 -"Prompt and Afterglow Emission Properties of Gamma-Ray Bursts with - Spectroscopically Identified Supernovae",http://dx.doi.org/10.1086/508324 -CZT in Space Based Hard-X-ray Astronomy,http://arxiv.org/abs/astro-ph/0610049v1 -"A unified picture for the gamma-ray and prompt optical emissions of GRB - 990123",http://dx.doi.org/10.1111/j.1365-2966.2007.11398.x -"Predicted and observed evolution in the mean properties of Type Ia - supernovae with redshift",http://dx.doi.org/10.1086/522030 -"Neutron-rich gamma-ray burst flows: dynamics and particle creation in - neutron - proton collisions",http://dx.doi.org/10.1051/0004-6361:20077560 -Prompt J/psi production at e^+e^- colliders,http://dx.doi.org/10.1103/PhysRevD.56.321 -Detecting Solar Neutrino Flares and Flavors,http://dx.doi.org/10.1088/1126-6708/2004/06/045 -Heavy Flavor Production at STAR,http://dx.doi.org/10.1088/0954-3899/32/12/S03 -"Study of the $pd\to ^3\textrm{He} K^+K^-$ and $pd\to ^3\textrm{He} φ$ - reactions close to threshold",http://dx.doi.org/10.1103/PhysRevC.75.015204 -"Properties of a relativistic equation of state for collapse-driven - supernovae",http://dx.doi.org/10.1016/j.nuclphysa.2003.10.007 -Prompt Emission of High Energy Photons from Gamma Ray Bursts,http://dx.doi.org/10.1111/j.1365-2966.2007.12051.x -Observations of the Prompt Gamma-Ray Emission of GRB 070125,http://dx.doi.org/10.1086/592136 -"A Possible New Distance Indicator -Correlation between the duration and - the X-ray luminosity of the shallow decay phase of Gamma Ray Bursts-",http://arxiv.org/abs/0711.0903v1 -"Pre-Merger Localization of Gravitational-Wave Standard Sirens With LISA: - Triggered Search for an Electromagnetic Counterpart",http://dx.doi.org/10.1086/590230 -"Special Relativistic Simulations of Magnetically-dominated Jets in - Collapsing Massive Stars",http://dx.doi.org/10.1088/0004-637X/691/2/1360 -"Comparative Direct Analysis of Type Ia Supernova Spectra. IV. - Postmaximum",http://dx.doi.org/10.1086/527572 -Precursors in Swift Gamma Ray Bursts with redshift,http://dx.doi.org/10.1086/592350 -Prompt TeV neutrinos from dissipative photospheres of gamma-ray bursts,http://dx.doi.org/10.1088/0004-637X/691/2/L67 -Prompt High-Energy Emission from Proton-Dominated Gamma-Ray Bursts,http://dx.doi.org/10.1088/0004-637X/699/2/953 -H.E.S.S. 
Observations of the Prompt and Afterglow Phases of GRB 060602B,http://dx.doi.org/10.1088/0004-637X/690/2/1068 -"Gamma-Ray Burst at the extreme: ""the naked-eye burst"" GRB 080319B",http://dx.doi.org/10.1088/0004-637X/691/1/495 -Possible Effects of Pair Echoes on Gamma-Ray Burst Afterglow Emission,http://dx.doi.org/10.1111/j.1365-2966.2009.14704.x -"The Large-Scale Environments of Type Ia Supernovae: Evidence for a - Metallicity Bias in the Rate or Luminosity of Prompt Ia Events",http://dx.doi.org/10.1088/0004-637X/704/1/687 -"Interpretation and implication of the non-detection of GeV spectrum - excess by Fermi gamma-ray Space Telescope in most GRBs",http://dx.doi.org/10.1111/j.1365-2966.2009.15018.x -"The Role and Detectability of the Charm Contribution to Ultra High - Energy Neutrino Fluxes",http://dx.doi.org/10.1088/1475-7516/2009/09/015 -"Quantum Scattering and Transport in Classically Chaotic Cavities: An - Overview of Past and New Results",http://dx.doi.org/10.1142/9789814299725_0024 -"XMM-Newton and Swift observations prove GRB 090709A to be a distant, - standard, long GRB",http://dx.doi.org/10.1111/j.1365-2966.2009.16012.x -"Searching for prompt signatures of nearby core-collapse supernovae by a - joint analysis of neutrino and gravitational-wave data",http://dx.doi.org/10.1088/0264-9381/27/8/084019 -"Empirical Delay Time Distributions of Type Ia Supernovae From The - Extended GOODS/HST Supernova Survey",http://dx.doi.org/10.1088/0004-637X/713/1/32 -"MIT bag model inspired partonic transverse momentum distribution for - prompt photon production in pp collisions",http://dx.doi.org/10.1103/PhysRevD.78.054023 -WIDGET: System Performance and GRB Prompt Optical Observations,http://dx.doi.org/10.1093/pasj/63.1.137 -"Delay Times and Rates for Type Ia Supernovae and Thermonuclear - Explosions from Double-detonation Sub-Chandrasekhar Mass Models",http://dx.doi.org/10.1111/j.1365-2966.2011.19276.x -"Quantum Mechanical Aspects of Cell Microtubules: Science Fiction or - Realistic Possibility?",http://dx.doi.org/10.1088/1742-6596/306/1/012008 -The Chinese-French SVOM mission for GRBs studies,http://dx.doi.org/10.1016/j.crhy.2011.01.009 -INTEGRAL observations of the gamma-ray binary 1FGL J1018.6-5856,http://dx.doi.org/10.1088/2041-8205/738/2/L31 -Numerical Simulations of Driven Supersonic Relativistic MHD Turbulence,http://dx.doi.org/10.1063/1.3621748 -"Search for anomalous production of prompt like-sign muon pairs and - constraints on physics beyond the Standard Model with the ATLAS detector",http://dx.doi.org/10.1103/PhysRevD.85.032004 -The origin of the late rebrightening in GRB 080503,http://dx.doi.org/10.1051/0004-6361/201118722 -Pulse spectral evolution of GRBs: implication as standard candle,http://arxiv.org/abs/1207.1774v1 -"Limits on prompt, dispersed radio pulses from gamma-ray bursts",http://dx.doi.org/10.1088/0004-637X/757/1/38 -Charmonia production in ALICE,http://dx.doi.org/10.1016/j.nuclphysa.2012.12.092 -"Causal inference in paired two-arm experimental studies under - non-compliance with application to prognosis of myocardial infarction",http://arxiv.org/abs/1210.6678v1 -Nonlinear alfvénic fast particle transport and losses,http://dx.doi.org/10.1088/1742-6596/401/1/012022 -"Prompt photon in association with a heavy-quark jet in Pb-Pb collisions - at the LHC",http://dx.doi.org/10.1007/JHEP02(2013)072 -"Characterizing the atomic mass surface beyond the proton drip line via - a-decay measurements of the s1/2 ground state of 165Re and the h11/2 isomer - in 
161Ta",http://dx.doi.org/10.1103/PhysRevC.86.064315 -A Model for the Escape of Solar-Flare Accelerated Particles,http://dx.doi.org/10.1088/0004-637X/771/2/82 -On the origin of >10 GeV photons in gamma-ray burst afterglows,http://dx.doi.org/10.1088/2041-8205/771/2/L33 -"Major electron events and coronal magnetic configurations of the related - solar active regions",http://dx.doi.org/10.1088/2041-8205/720/1/L36 -"Open Charm Production in $p + p$ and Pb + Pb collisions at the CERN - Large Hadron Collider",http://dx.doi.org/10.1088/0954-3899/41/11/115101 -Demystifying the PeV Cascades in IceCube: Less (Energy) is More (Events),http://dx.doi.org/10.1103/PhysRevD.88.043009 -"Time resolved spectral analysis of the prompt emission of long gamma ray - bursts with GeV Emission",http://dx.doi.org/10.1088/1674-4527/14/1/003 -"NuSTAR Observations of GRB130427A establish a single component - synchrotron afterglow origin for the late optical to multi-GeV emission",http://dx.doi.org/10.1088/2041-8205/779/1/L1 -"Measurements of $\mathrm{J/ψ\rightarrow e^{+}e^{-}}$ with ALICE at - the LHC",http://dx.doi.org/10.1088/1742-6596/509/1/012110 -Are gamma-ray bursts the sources of ultra-high energy cosmic rays?,http://dx.doi.org/10.1016/j.astropartphys.2014.07.007 -Neutrinos from charm production in the atmosphere,http://arxiv.org/abs/1402.0880v1 -"UHE neutrino and cosmic ray emission from GRBs: revising the models and - clarifying the cosmic ray-neutrino connection",http://arxiv.org/abs/1402.1497v1 -Limits on GRB Prompt Radio Emission Using the LWA1,http://dx.doi.org/10.1088/0004-637X/785/1/27 -"Simulations of gamma-ray burst afterglows with a relativistic kinetic - code",http://dx.doi.org/10.1051/0004-6361/201322520 -Hadronic supercriticality as a trigger for GRB emission,http://dx.doi.org/10.1093/mnras/stu1362 -GROND coverage of the main peak of Gamma-Ray Burst 130925A,http://dx.doi.org/10.1051/0004-6361/201424250 -The afterglow of a relativistic shock breakout and low-luminosity GRBs,http://dx.doi.org/10.1093/mnras/stv011 -The $E_{\rm p}$ - $E_{\rm iso}$ relation and the internal shock model,http://dx.doi.org/10.1051/0004-6361/201424490 -"The Fate of Long-Lived Superparticles with Hadronic Decays after LHC Run - 1",http://dx.doi.org/10.1007/JHEP06(2015)042 -An External Shock Origin of GRB $\textit{141028A}$,http://dx.doi.org/10.3847/0004-637X/822/2/63 -"Centrality, rapidity and transverse momentum dependence of isolated - prompt photon production in lead-lead collisions at $\sqrt{s_{\mathrm{NN}}} = - 2.76$ TeV measured with the ATLAS detector",http://dx.doi.org/10.1103/PhysRevC.93.034914 -"Prompt charmonia production and polarization at LHC in the NRQCD with - kt-factorization. 
Part I: psi(2S) meson",http://dx.doi.org/10.1140/epjc/s10052-015-3689-x -"Statistical Analysis of the Parameters of Gamma-Ray Bursts with Known - Redshifts and Peaked Optical Light Curves",http://dx.doi.org/10.1134/S1990341315040033 -"Photodisintegrated gamma rays and neutrinos from heavy nuclei in the - gamma-ray burst jet of GRB 130427A",http://dx.doi.org/10.1093/mnrasl/slw023 -Long-Lived Staus and Displaced Leptons at the LHC,http://dx.doi.org/10.1007/JHEP04(2016)056 -Photon-jet ridge at RHIC and the LHC,http://dx.doi.org/10.1103/PhysRevD.93.094030 -How else can we detect Fast Radio Bursts?,http://dx.doi.org/10.3847/2041-8205/824/2/L18 -Fermi LAT Stacking Analysis of Swift Localized Gamma-ray Bursts,http://dx.doi.org/10.3847/0004-637X/822/2/68 -Particle Acceleration in Solar Flares and Associated CME Shocks,http://dx.doi.org/10.3847/0004-637X/830/1/28 -"Searching for high-energy gamma-ray counterparts to Gravitational Wave - sources with Fermi-LAT: a needle in a haystack",http://dx.doi.org/10.3847/2041-8213/aa7262 -"On the Lack of a Radio Afterglow from Some Gamma-ray Bursts - Insight - into Their Progenitors?",http://dx.doi.org/10.1093/mnras/stx313 -First LHCb results from pA and Pb-Pb collisions,http://arxiv.org/abs/1609.06477v1 -"Prompt charmonia production and polarization at LHC in the NRQCD with - $k_T$-factorization. Part III: $J/ψ$ meson",http://dx.doi.org/10.1103/PhysRevD.96.034019 -"An evolving GeV spectrum from prompt to afterglow: the case of GRB - 160509A",http://dx.doi.org/10.3847/2041-8213/aa7ca5 -"A Millennium-Long Evolution of One-Year-Recurrence-Period Nova -- Search - for Any Indication of the Forthcoming He Flash",http://dx.doi.org/10.3847/1538-4357/aa7c5e -Glitches: the exact quantum signatures of pulsars metamorphosis,http://dx.doi.org/10.4236/jmp.2018.94038 -A Neutron Star Binary Merger Model for GW170817/GRB170817a/SSS17a,http://dx.doi.org/10.3847/2041-8213/aa91b3 -Jet-driven and jet-less fireballs from compact binary mergers,http://dx.doi.org/10.1093/mnrasl/slx189 -Electroweak-Charged Bound States as LHC Probes of Hidden Forces,http://dx.doi.org/10.1103/PhysRevD.97.015010 -"Properties of Short Gamma-Ray Burst Pulses from A BATSE TTE GRB Pulse - Catalog",http://dx.doi.org/10.3847/1538-4357/aaac2b -The Evolution of the Type Ia Supernova Luminosity Function,http://dx.doi.org/10.3847/2041-8213/aaa015 -Reconciling Volumetric and Individual Galaxy Type Ia Supernova Rates,http://dx.doi.org/10.1093/mnras/sty1837 -Marginally fast cooling synchrotron models for prompt GRBs,http://dx.doi.org/10.1093/mnras/sty340 -"Toward an understanding of GRB prompt emission mechanism: II. 
Patterns - of peak energy evolution and their connection to spectral lags",http://dx.doi.org/10.3847/1538-4357/aaeb30 -"A Global Photoionization Response to Prompt Emission and Outliers: - Different Origin of Long Gamma-ray Bursts?",http://dx.doi.org/10.3847/1538-4357/aaad00 -GRB follow-up and science with THESEUS/IRT,http://arxiv.org/abs/1802.01688v1 -Revisiting the Statistics of X-ray Flares in Gamma-ray Bursts,http://arxiv.org/abs/1802.01693v1 -On detectability of labeled Petri nets and finite automata,http://arxiv.org/abs/1802.07551v4 -The photospheric origin of the Yonetoku relation in gamma-ray bursts,http://dx.doi.org/10.1038/s41467-019-09281-z -"Inverting the mass hierarchy of jet quenching effects with prompt - $b$-jet substructure",http://dx.doi.org/10.1088/1742-6596/1070/1/012020 -"Fermi GBM Observations of GRB 150101B: A Second Nearby Event with a - Short Hard Spike and a Soft Tail",http://dx.doi.org/10.3847/2041-8213/aad813 -"Elucidating the multiplicity dependence of J/$ψ$ production in - proton-proton collisions with PYTHIA8",http://dx.doi.org/10.1140/epjc/s10052-018-6531-4 -Estimates of the $X(3872)$ Cross Section at a Hadron Collider,http://dx.doi.org/10.1103/PhysRevD.100.094024 -"Probing cold nuclear matter effects with the productions of - isolated-$γ$ and $γ$+jet in p$+$Pb collisions at $\sqrt{s_{NN}}=$ - 8.16 TeV",http://dx.doi.org/10.1088/1674-1137/43/4/044104 -The on-axis view of GRB 170817A,http://dx.doi.org/10.1051/0004-6361/201935831 -"KOOLS--IFU: Kyoto Okayama Optical Low-dispersion Spectrograph with - Optical-Fiber Integral Field Unit",http://dx.doi.org/10.1093/pasj/psz087 -"Measurement of the production cross section of prompt $Ξ^0_{\rm c}$ - baryons at midrapidity in pp collisions at $\sqrt{s}$ = 5.02 TeV",http://dx.doi.org/10.1007/JHEP10(2021)159 -"The statistics of BAT-to-XRT flux ratio in GRB: Evidence for a - characteristic value and its implications",http://dx.doi.org/10.1088/0004-637X/802/2/83 -"Prompt directional detection of galactic supernova by combining large - liquid scintillator neutrino detectors",http://dx.doi.org/10.1088/1475-7516/2015/08/032 -"INTEGRAL upper limits on gamma-ray emission associated with the - gravitational wave event GW150914",http://dx.doi.org/10.3847/2041-8205/820/2/L36 -Prompt planetesimal formation beyond the snow line,http://dx.doi.org/10.3847/2041-8205/828/1/L2 -Recent investigations of QCD at HERA,http://arxiv.org/abs/1707.03248v2 -"Comparison of Multiple Features and Modeling Methods for Text-dependent - Speaker Verification",http://arxiv.org/abs/1707.04373v2 -"Measurement of quarkonium production in proton--lead and proton--proton - collisions at $5.02$ $\mathrm{TeV}$ with the ATLAS detector",http://dx.doi.org/10.1140/epjc/s10052-018-5624-4 -Detection strategies for the first supernovae with JWST,http://dx.doi.org/10.1093/mnras/sty1576 -"D meson production asymmetry, unfavoured fragmentation and consequences - for prompt atmospheric neutrino production",http://dx.doi.org/10.1103/PhysRevD.97.074001 -Light Hidden Mesons through the Z Portal,http://dx.doi.org/10.1007/JHEP11(2019)031 -On the Cosmological Evolution of Long Gamma-ray Burst Properties,http://dx.doi.org/10.1093/mnras/stz2155 -"Identification of nutrient deficiency in bean plants by prompt - chlorophyll fluorescence measurements and Artificial Neural Networks",http://arxiv.org/abs/1906.03312v1 -"Double prompt $J/ψ$ hadroproduction in the parton Reggeization - approach with high-energy resummation",http://dx.doi.org/10.1103/PhysRevLett.123.162002 -"LHCb 
measurements of the exotic tetraquark candidate $χ_{c1}(3872)$ - in high-multiplicity $pp$ and $p$Pb collisions",http://arxiv.org/abs/2002.01551v1 -Schema-Guided Natural Language Generation,http://arxiv.org/abs/2005.05480v2 -An Effective End-to-End Modeling Approach for Mispronunciation Detection,http://arxiv.org/abs/2005.08440v1 -"An Unambiguous Separation of Gamma-Ray Bursts into Two Classes from - Prompt Emission Alone",http://dx.doi.org/10.3847/2041-8213/ab964d -The structure of hydrodynamic $ γ$-ray burst jets,http://dx.doi.org/10.1093/mnras/staa3501 -A 10-3 drift velocity monitoring chamber,http://arxiv.org/abs/2006.05154v1 -AdvMind: Inferring Adversary Intent of Black-Box Attacks,http://arxiv.org/abs/2006.09539v1 -"Early optical emission in support of synchrotron radiation in - $γ$-ray bursts",http://arxiv.org/abs/2006.13965v1 -"FERMI constraints on the high energy, ~1 GeV, emission of long GRBs",http://dx.doi.org/10.1051/0004-6361/201014344 -"A complete NLO calculation of the $J/ψ$ and $ψ'$ production at - hadron colliders",http://dx.doi.org/10.1103/PhysRevD.84.114001 -Constraints on Cold Magnetized Shocks in Gamma-Ray Bursts,http://dx.doi.org/10.1111/j.1365-2966.2011.19197.x -"Photometric Observations of Three High Mass X-Ray Binaries and a Search - for Variations Induced by Orbital Motion",http://dx.doi.org/10.1088/1674-4527/11/8/007 -"System size dependence of nuclear modification and azimuthal anisotropy - of jet quenching",http://dx.doi.org/10.1088/0954-3899/39/1/015001 -"Effect of an electric field on superfluid helium scintillation produced - by alpha-particle sources",http://dx.doi.org/10.1103/PhysRevA.85.042718 -Quarkonia and heavy-flavour production in CMS,http://dx.doi.org/10.1016/j.nuclphysa.2012.12.017 -Universal scaling law in long gamma-ray bursts,http://dx.doi.org/10.1093/pasj/65.3.L3 -Sensitivity of measured fission yields on prompt-neutron corrections,http://arxiv.org/abs/1304.2278v1 -On the origin of GeV emission in gamma-ray bursts,http://dx.doi.org/10.1088/0004-637X/788/1/36 -"New Fermi-LAT event reconstruction reveals more high-energy gamma rays - from Gamma-ray bursts",http://dx.doi.org/10.1088/0004-637X/774/1/76 -"Measurement of prompt D-meson production in p-Pb collisions at - $\sqrt{s_{\rm NN}}$ = 5.02 TeV",http://dx.doi.org/10.1103/PhysRevLett.113.232301 -Selectron NLSP in Gauge Mediation,http://dx.doi.org/10.1007/JHEP09(2014)133 -"The role of hadronic cascades in GRB models of efficient neutrino - production",http://dx.doi.org/10.1093/mnras/stu1079 -"Clustering of LAT light curves: a clue to the origin of high-energy - emission in Gamma-Ray Bursts",http://dx.doi.org/10.1093/mnras/stu1451 -The Physics of Gamma-Ray Bursts and Relativistic Jets,http://dx.doi.org/10.1016/j.physrep.2014.09.008 -"A prompt radio transient associated with a gamma-ray superflare from the - young M dwarf binary DG CVn",http://dx.doi.org/10.1093/mnrasl/slu165 -"Theoretical description of prompt fission neutron multiplicity and - spectra",http://arxiv.org/abs/1410.4386v1 -Dust in the wind: the role of recent mass loss in long gamma-ray bursts,http://dx.doi.org/10.1088/0004-637X/805/2/159 -An anisotropic minijets model for the GRB prompt emission,http://dx.doi.org/10.1093/mnrasl/slv140 -"Fragmentation contributions to hadroproduction of prompt $J/ψ$, - $χ_{cJ}$, and $ψ(2S)$ states",http://dx.doi.org/10.1103/PhysRevD.93.034041 -A Study of the Gamma-Ray Burst Fundamental Plane,http://arxiv.org/abs/1610.09082v2 -"Broadening of the thermal component of the prompt GRB emission due to - 
rapid temperature evolution",http://dx.doi.org/10.1016/j.newast.2017.02.004 -"Revisiting the role of the $(n,γf)$ process in the low-energy - fission of $^{235}$U and $^{239}$Pu",http://dx.doi.org/10.1103/PhysRevC.97.064601 -"Prompt inclusive production of $J/ψ$, $ψ'$ and $χ_{c}$ mesons - at the LHC in forward directions within the NRQCD $k_t$-factorization - approach - search for the onset of gluon saturation",http://dx.doi.org/10.1103/PhysRevD.97.034035 -"Mergers of black hole-neutron star binaries and rates of associated - electromagnetic counterparts",http://dx.doi.org/10.1093/mnras/stz1147 -"PHENIX study of the initial state with forward hadron measurements in - 200 GeV p(d)+A and $^{3}$He$+$Au collisions",http://dx.doi.org/10.1016/j.nuclphysa.2018.09.070 -"Studies of beauty suppression via nonprompt D$^0$ mesons in PbPb - collisions at $\sqrt{s_\mathrm{NN}} =$ 5.02 TeV",http://dx.doi.org/10.1103/PhysRevLett.123.022001 -"Direct-photon and heavy-flavour production in proton-proton collisions - at $\sqrt{s} = 7$ TeV with ALICE",http://dx.doi.org/10.22323/1.345.0015 -When Did the Remnant of GW170817 Collapse to a Black Hole?,http://dx.doi.org/10.3847/1538-4357/ab16da -Inverse Compton Scattering Spectra of Gamma-Ray Burst Prompt Emission,http://dx.doi.org/10.3847/1538-4357/ab1b10 -"Kilonovae: nUV/Optical/IR Counterparts of Neutron Star Binary Mergers - with TSO",http://arxiv.org/abs/1903.05736v1 -"Two-layer Electrospun System Enabling Wound Exudate Management and - Visual Infection Response",http://dx.doi.org/10.3390/s19050991 -"Searches for counterparts of gravitational waves at very high energies - with H.E.S.S",http://arxiv.org/abs/1908.06705v1 -"Explaining GRB prompt emission with sub-photospheric dissipation and - Comptonization",http://dx.doi.org/10.1093/mnras/stz3182 -"Azimuthal correlations of prompt D mesons with charged particles in pp - and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV",http://dx.doi.org/10.1140/epjc/s10052-020-8118-0 -Reducing Sentiment Bias in Language Models via Counterfactual Evaluation,http://arxiv.org/abs/1911.03064v3 -Bright gamma-ray flares observed in GRB131108A,http://dx.doi.org/10.3847/2041-8213/ab564f -Knowledge-Enriched Visual Storytelling,http://arxiv.org/abs/1912.01496v1 -"Ultra-fast prompt gamma detection in single proton counting regime for - range monitoring in particle therapy",http://dx.doi.org/10.1088/1361-6560/ab7a6c -"Global Prompt Proton Sensor Network: Monitoring Solar Energetic Protons - based on GPS Satellite Constellation",http://dx.doi.org/10.1029/2019JA027679 -"Topological carnival: electrically-powered motions of toron crystallites - in chiral liquid crystals",http://dx.doi.org/10.1073/pnas.1922198117 -"Studies of Strange and Non-Strange Beauty Productions in PbPb Collisions - with the CMS Detector",http://dx.doi.org/10.1016/j.nuclphysa.2020.121805 -"The Fraction of Gamma-ray Bursts with an Observed Photospheric Emission - Episode",http://dx.doi.org/10.3847/1538-4357/ab80c7 -"Study of prompt emission of short gamma ray bursts using multi-color - blackbody: a clue to the viewing angle",http://dx.doi.org/10.3847/1538-4365/ac082f -"Inclusive prompt photon-jet correlations as a probe of gluon saturation - in electron-nucleus scattering at small $x$",http://dx.doi.org/10.1007/JHEP01(2021)052 -10.4m GTC observations of the nearby VHE-detected GRB 190829A/SN 2019oyw,http://dx.doi.org/10.1051/0004-6361/202039349 -"RECOApy: Data recording, pre-processing and phonetic transcription for - end-to-end speech-based 
applications",http://arxiv.org/abs/2009.05493v2 -"Measurement of prompt $\mathrm{D}^0$ and $\overline{\mathrm{D}}^0$ meson - azimuthal asymmetry and search for strong electric fields in PbPb collisions - at $\sqrt{s_\mathrm{NN}} =$ 5.02 TeV",http://dx.doi.org/10.1016/j.physletb.2021.136253 -Are there radio-loud and radio-quiet Gamma-Ray Bursts?,http://dx.doi.org/10.1093/mnras/stab425 -"Pencil Beamforming Increases Human Exposure to ElectroMagnetic Fields: - True or False?",http://arxiv.org/abs/2010.16288v2 -A brief history of the origin of domesticated date palms,http://arxiv.org/abs/2012.00281v1 -What Makes Good In-Context Examples for GPT-$3$?,http://arxiv.org/abs/2101.06804v1 -Realizing Omega-regular Hyperproperties,http://dx.doi.org/10.1007/978-3-030-53291-8_4 -"Evidence for X(3872) in PbPb collisions and studies of its prompt - production at $\sqrt{s_\mathrm{NN}} =$ 5.02 TeV",http://dx.doi.org/10.1103/PhysRevLett.128.032001 -"Repairing Human Trust by Promptly Correcting Robot Mistakes with An - Attention Transfer Model",http://arxiv.org/abs/2103.08025v2 -"Statistical search for angular non-stationarities of long gamma-ray - burst jets using Swift data",http://dx.doi.org/10.1093/mnras/stab3476 -StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery,http://arxiv.org/abs/2103.17249v1 -Prompt production of the hidden charm pentaquarks in the LHC,http://dx.doi.org/10.1140/epjc/s10052-021-09613-8 -Efficient (Soft) Q-Learning for Text Generation with Limited Good Data,http://arxiv.org/abs/2106.07704v4 -"Measurement of prompt D$^{0}$, $Λ_{c}^{+}$, and - $Σ_{c}^{0,++}$(2455) production in pp collisions at $\sqrt{s} = 13$ TeV",http://dx.doi.org/10.1103/PhysRevLett.128.012001 -CPM-2: Large-scale Cost-effective Pre-trained Language Models,http://arxiv.org/abs/2106.10715v3 -"Fragmentation of jets containing a prompt J$/ψ$ meson in PbPb and pp - collisions at $\sqrt{s_\mathrm{NN}} =$ 5.02 TeV",http://dx.doi.org/10.1016/j.physletb.2021.136842 -"End-to-End Tests of the Sensitivity of IceCube to the Neutrino Burst - from a Core-Collapse Supernova",http://arxiv.org/abs/2107.08098v1 -GRB 200716C: Evidence for a Short Burst Being Lensed,http://dx.doi.org/10.3847/2041-8213/ac1ff9 -The H.E.S.S. Gravitational Wave Rapid Follow-up Program during O2 and O3,http://dx.doi.org/10.22323/1.395.0936 -"Asleep at the Keyboard? 
Assessing the Security of GitHub Copilot's Code - Contributions",http://arxiv.org/abs/2108.09293v3 -"LightNER: A Lightweight Tuning Paradigm for Low-resource NER via - Pluggable Prompting",http://arxiv.org/abs/2109.00720v5 -"Communicating Inferred Goals with Passive Augmented Reality and Active - Haptic Feedback",http://arxiv.org/abs/2109.01747v1 -"Learning-Based Strategy Design for Robot-Assisted Reminiscence Therapy - Based on a Developed Model for People with Dementia",http://arxiv.org/abs/2109.02194v1 -Template-free Prompt Tuning for Few-shot NER,http://arxiv.org/abs/2109.13532v3 -Perspective-taking to Reduce Affective Polarization on Social Media,http://arxiv.org/abs/2110.05596v1 -"Overall Spectral Properties of Prompt Emissions with Diverse Segments in - Swift/BAT Short Gamma-ray Bursts",http://dx.doi.org/10.1051/0004-6361/202140747 -"Investigating charm production and fragmentation via azimuthal - correlations of prompt D mesons with charged particles in pp collisions at - $\sqrt{s} = 13$ TeV",http://dx.doi.org/10.1140/epjc/s10052-022-10267-3 -Illiterate DALL-E Learns to Compose,http://arxiv.org/abs/2110.11405v3 -"PROMPT: Parallel Iterative Algorithm for $\ell_{p}$ norm linear - regression via Majorization Minimization with an application to - semi-supervised graph learning",http://arxiv.org/abs/2110.12190v1 -"Evolution of equal mass binary bare quark stars in full general - relativity: could a supramassive merger remnant experience prompt collapse?",http://dx.doi.org/10.1103/PhysRevD.106.103030 -Transformers for prompt-level EMA non-response prediction,http://arxiv.org/abs/2111.01193v1 -"An efficient method for fitting radiation-mediated shocks to gamma-ray - burst data: The Kompaneets RMS approximation",http://dx.doi.org/10.3847/1538-4357/ac332a -"Theme Transformer: Symbolic Music Generation with Theme-Conditioned - Transformer",http://arxiv.org/abs/2111.04093v2 -"Revisiting the Rates and Demographics of Tidal Disruption Events: - Effects of the Disk Formation Efficiency",http://dx.doi.org/10.3847/2041-8213/ac5823 -DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting,http://arxiv.org/abs/2112.01518v2 -CLIP-NeRF: Text-and-Image Driven Manipulation of Neural Radiance Fields,http://arxiv.org/abs/2112.05139v3 -"VL-Adapter: Parameter-Efficient Transfer Learning for - Vision-and-Language Tasks",http://arxiv.org/abs/2112.06825v2 -Analyzing the Limits of Self-Supervision in Handling Bias in Language,http://arxiv.org/abs/2112.08637v3 -"An energy independent scaling of transverse momentum spectra of direct - (prompt) photons from two-body processes in high-energy proton-proton - collisions",http://dx.doi.org/10.1002/andp.202100567 -"Electrically powered locomotion of dual-nature colloid-hedgehog and - colloid-umbilic topological and elastic dipoles in liquid crystals",http://arxiv.org/abs/2112.14312v1 -Memory-assisted prompt editing to improve GPT-3 after deployment,http://arxiv.org/abs/2201.06009v7 -"A Tight Three-parameter Correlation and Related Classification on - Gamma-Ray Bursts",http://dx.doi.org/10.3847/1538-4357/ac4753 -"Space and time correlations for diffusion models with prompt and delayed - birth-and-death events",http://dx.doi.org/10.1103/PhysRevE.105.064105 -"Heavy Neutral Lepton Searches at the Electron-Ion Collider: A Snowmass - Whitepaper",http://arxiv.org/abs/2203.06705v3 -MotionCLIP: Exposing Human Motion Generation to CLIP Space,http://arxiv.org/abs/2203.08063v1 -Reconstruction of Large Radius Tracks with the Exa.TrkX 
pipeline,http://dx.doi.org/10.1088/1742-6596/2438/1/012117 -Spontaneous fission of 246Fm,http://arxiv.org/abs/2203.11802v1 -"CodeGen: An Open Large Language Model for Code with Multi-Turn Program - Synthesis",http://arxiv.org/abs/2203.13474v5 -Can language models learn from explanations in context?,http://arxiv.org/abs/2204.02329v4 -Gamma-ray Polarimetry of Transient Sources with POLAR,http://dx.doi.org/10.1007/978-981-16-4544-0_142-1 -"Zero-shot Entity and Tweet Characterization with Designed Conditional - Prompts and Contexts",http://arxiv.org/abs/2204.08405v1 -"GRB 210121A: Observation of photospheric emissions from different - regimes and the evolution of the outflow",http://dx.doi.org/10.3847/1538-4357/ac6b33 -Seeding Diversity into AI Art,http://arxiv.org/abs/2205.00804v1 -Gamma-Ray Bursts at TeV Energies: Theoretical Considerations,http://arxiv.org/abs/2205.06312v2 -"SeqZero: Few-shot Compositional Semantic Parsing with Sequential Prompts - and Zero-shot Models",http://arxiv.org/abs/2205.07381v1 -GRB 220426A: A Thermal Radiation-Dominated Gamma-Ray Burst,http://dx.doi.org/10.3847/1538-4357/aca017 -"Evaluating the Impact of Model Scale for Compositional Generalization in - Semantic Parsing",http://arxiv.org/abs/2205.12253v2 -Making Large Language Models Better Reasoners with Step-Aware Verifier,http://arxiv.org/abs/2206.02336v3 -Learning to Ask Like a Physician,http://arxiv.org/abs/2206.02696v1 -A Unified Sequence Interface for Vision Tasks,http://arxiv.org/abs/2206.07669v2 -"BertNet: Harvesting Knowledge Graphs with Arbitrary Relations from - Pretrained Language Models",http://arxiv.org/abs/2206.14268v3 -"Metal Mixing in the R-Process Enhanced Ultra-Faint Dwarf Galaxy - Reticulum II",http://arxiv.org/abs/2207.03499v2 -PromptEM: Prompt-tuning for Low-resource Generalized Entity Matching,http://arxiv.org/abs/2207.04802v2 -GRB 190829A -- A Showcase of Binary Late Evolution,http://dx.doi.org/10.3847/1538-4357/ac7da3 -Can large language models reason about medical questions?,http://arxiv.org/abs/2207.08143v3 -Zero-Shot Temporal Action Detection via Vision-Language Prompting,http://arxiv.org/abs/2207.08184v1 -"Constraints on Nonrelativistic-QCD Long-Distance Matrix Elements from - $J/ψ$ Plus $W$/$Z$ Production at the LHC",http://dx.doi.org/10.1103/PhysRevLett.130.041901 -Leveraging GAN Priors for Few-Shot Part Segmentation,http://arxiv.org/abs/2207.13428v1 -Expanding Language-Image Pretrained Models for General Video Recognition,http://arxiv.org/abs/2208.02816v1 -"Unsupervisedly Prompting AlphaFold2 for Few-Shot Learning of Accurate - Folding Landscape and Protein Structure Prediction",http://arxiv.org/abs/2208.09652v2 -DPTDR: Deep Prompt Tuning for Dense Passage Retrieval,http://arxiv.org/abs/2208.11503v1 -"AutoQGS: Auto-Prompt for Low-Resource Knowledge-based Question - Generation from SPARQL",http://arxiv.org/abs/2208.12461v1 -"""Prompt-Gamma Neutron Activation Analysis (PGNAA)"" Metal Spectral - Classification using Deep Learning Method",http://arxiv.org/abs/2208.13909v1 -"Detection of GeV emission from an ultra-long gamma-ray burst with the - Fermi Large Area Telescope",http://dx.doi.org/10.3847/2041-8213/aca147 -LANIT: Language-Driven Image-to-Image Translation for Unlabeled Data,http://arxiv.org/abs/2208.14889v4 -Code as Policies: Language Model Programs for Embodied Control,http://arxiv.org/abs/2209.07753v4 -WeLM: A Well-Read Pre-trained Language Model for Chinese,http://arxiv.org/abs/2209.10372v5 -Linearly Mapping from Image to Text Space,http://arxiv.org/abs/2209.15162v3 
-Learning by Distilling Context,http://arxiv.org/abs/2209.15189v1 -"Knowledge Injected Prompt Based Fine-tuning for Multi-label Few-shot ICD - Coding",http://arxiv.org/abs/2210.03304v2 -ReAct: Synergizing Reasoning and Acting in Language Models,http://arxiv.org/abs/2210.03629v3 -"Few-Shot Anaphora Resolution in Scientific Protocols via Mixtures of - In-Context Experts",http://arxiv.org/abs/2210.03690v2 -Large Language Models can Implement Policy Iteration,http://arxiv.org/abs/2210.03821v2 -Instruction Tuning for Few-Shot Aspect-Based Sentiment Analysis,http://arxiv.org/abs/2210.06629v2 -Feature-Proxy Transformer for Few-Shot Segmentation,http://arxiv.org/abs/2210.06908v1 -"Scaling Back-Translation with Domain Text Generation for Sign Language - Gloss Translation",http://arxiv.org/abs/2210.07054v2 -"MAPL: Parameter-Efficient Adaptation of Unimodal Pre-Trained Models for - Vision-Language Few-Shot Prompting",http://arxiv.org/abs/2210.07179v2 -"The Possible Cause of the 40 SpaceX Starlink Satellite Losses in - February 2022: Prompt Penetrating Electric Fields and the Dayside Equatorial - and Midlatitude Ionospheric Convective Uplift",http://arxiv.org/abs/2210.07902v2 -Improving Radiology Summarization with Radiograph and Anatomy Prompts,http://arxiv.org/abs/2210.08303v1 -"Towards Realistic Low-resource Relation Extraction: A Benchmark with - Empirical Baseline Study",http://arxiv.org/abs/2210.10678v3 -"XRICL: Cross-lingual Retrieval-Augmented In-Context Learning for - Cross-lingual Text-to-SQL Semantic Parsing",http://arxiv.org/abs/2210.13693v1 -Differentially Private Language Models for Secure Data Sharing,http://arxiv.org/abs/2210.13918v2 -"How well can Text-to-Image Generative Models understand Ethical Natural - Language Interventions?",http://arxiv.org/abs/2210.15230v1 -MagicMix: Semantic Mixing with Diffusion Models,http://arxiv.org/abs/2210.16056v1 -Integrated Parameter-Efficient Tuning for General-Purpose Audio Models,http://arxiv.org/abs/2211.02227v2 -"Rickrolling the Artist: Injecting Backdoors into Text Encoders for - Text-to-Image Synthesis",http://arxiv.org/abs/2211.02408v3 -Finding Skill Neurons in Pre-trained Transformer-based Language Models,http://arxiv.org/abs/2211.07349v1 -QueryForm: A Simple Zero-shot Form Entity Query Framework,http://arxiv.org/abs/2211.07730v2 -QAmeleon: Multilingual QA with Only 5 Examples,http://arxiv.org/abs/2211.08264v2 -"Program of Thoughts Prompting: Disentangling Computation from Reasoning - for Numerical Reasoning Tasks",http://arxiv.org/abs/2211.12588v4 -InDiReCT: Language-Guided Zero-Shot Deep Metric Learning for Images,http://arxiv.org/abs/2211.12760v1 -Multi-Modal Few-Shot Temporal Action Detection,http://arxiv.org/abs/2211.14905v2 -Context-Aware Robust Fine-Tuning,http://arxiv.org/abs/2211.16175v1 -On the Power of Foundation Models,http://arxiv.org/abs/2211.16327v4 -Shape-Guided Diffusion with Inside-Outside Attention,http://arxiv.org/abs/2212.00210v2 -"Finetune like you pretrain: Improved finetuning of zero-shot vision - models",http://arxiv.org/abs/2212.00638v1 -ClipFace: Text-guided Editing of Textured 3D Morphable Models,http://dx.doi.org/10.1145/3588432.3591566 -"Estimating Time-Varying Direct and Indirect Causal Excursion Effects - with Longitudinal Binary Outcomes",http://arxiv.org/abs/2212.01472v1 -"Imagen Editor and EditBench: Advancing and Evaluating Text-Guided Image - Inpainting",http://arxiv.org/abs/2212.06909v2 -Large Language Models are Better Reasoners with Self-Verification,http://arxiv.org/abs/2212.09561v5 -DePlot: One-shot 
visual language reasoning by plot-to-table translation,http://arxiv.org/abs/2212.10505v2 -"Interleaving Retrieval with Chain-of-Thought Reasoning for - Knowledge-Intensive Multi-Step Questions",http://arxiv.org/abs/2212.10509v2 -"Prompt-Augmented Linear Probing: Scaling beyond the Limit of Few-shot - In-Context Learners",http://arxiv.org/abs/2212.10873v3 -Multi-Lingual DALL-E Storytime,http://arxiv.org/abs/2212.11985v1 -"Jets in a Gamma-Ray Burst During its Prompt Emission: Evolution of - Lorentz Factor",http://dx.doi.org/10.3847/1538-4357/aca96a -"Dream3D: Zero-Shot Text-to-3D Synthesis Using 3D Shape Prior and - Text-to-Image Diffusion Models",http://arxiv.org/abs/2212.14704v2 -Time-averaging Polarimetric and Spectral Properties of GRBs,http://arxiv.org/abs/2301.00576v1 -GPT as Knowledge Worker: A Zero-Shot Evaluation of (AI)CPA Capabilities,http://arxiv.org/abs/2301.04408v1 -"Transformers as Algorithms: Generalization and Stability in In-context - Learning",http://arxiv.org/abs/2301.07067v2 -"On Robustness of Prompt-based Semantic Parsing with Large Pre-trained - Language Model: An Empirical Study on Codex",http://arxiv.org/abs/2301.12868v3 -GALIP: Generative Adversarial CLIPs for Text-to-Image Synthesis,http://arxiv.org/abs/2301.12959v1 -Zero-shot Image-to-Image Translation,http://arxiv.org/abs/2302.03027v1 -"NapSS: Paragraph-level Medical Text Simplification via Narrative - Prompting and Sentence-matching Summarization",http://arxiv.org/abs/2302.05574v1 -Denoising and Prompt-Tuning for Multi-Behavior Recommendation,http://arxiv.org/abs/2302.05862v1 -Distinguishability Calibration to In-Context Learning,http://arxiv.org/abs/2302.06198v3 -Pretraining Language Models with Human Preferences,http://arxiv.org/abs/2302.08582v2 -"Multi-frequency test of dark matter annihilation into long-lived - particles in Sirius",http://arxiv.org/abs/2302.09951v2 -"The power of the rings: the GRB 221009A soft X-ray emission from its - dust-scattering halo",http://dx.doi.org/10.3847/2041-8213/acc1dc -"Directed Diffusion: Direct Control of Object Placement through Attention - Guidance",http://arxiv.org/abs/2302.13153v3 -"A collagen-based theranostic wound dressing with visual, long-lasting - infection detection capability",http://arxiv.org/abs/2302.13283v1 -Systematic Rectification of Language Models via Dead-end Analysis,http://arxiv.org/abs/2302.14003v1 -Zero-Shot Cross-Lingual Summarization via Large Language Models,http://arxiv.org/abs/2302.14229v4 -Can ChatGPT Assess Human Personalities? A General Evaluation Framework,http://arxiv.org/abs/2303.01248v3 -"Prompt Detection of Fast Optical Bursts with the Vera C. 
Rubin - Observatory",http://dx.doi.org/10.3847/1538-4357/accb93 -StyO: Stylize Your Face in Only One-Shot,http://arxiv.org/abs/2303.03231v2 -"Evidence for four-top quark production in proton-proton collisions at - $\sqrt{s}$ = 13 TeV",http://dx.doi.org/10.1016/j.physletb.2023.138076 -Multimodal Parameter-Efficient Few-Shot Class Incremental Learning,http://arxiv.org/abs/2303.04751v1 -"ChatGPT Asks, BLIP-2 Answers: Automatic Questioning Towards Enriched - Visual Descriptions",http://arxiv.org/abs/2303.06594v1 -PromptFusion: Decoupling Stability and Plasticity for Continual Learning,http://arxiv.org/abs/2303.07223v1 -FateZero: Fusing Attentions for Zero-shot Text-based Video Editing,http://arxiv.org/abs/2303.09535v3 -"A Unified Continual Learning Framework with General Parameter-Efficient - Tuning",http://arxiv.org/abs/2303.10070v2 -Visual Information Matters for ASR Error Correction,http://arxiv.org/abs/2303.10160v2 -"CLIP goes 3D: Leveraging Prompt Tuning for Language Grounded 3D - Recognition",http://arxiv.org/abs/2303.11313v3 -3D-CLFusion: Fast Text-to-3D Rendering with Contrastive Latent Diffusion,http://arxiv.org/abs/2303.11938v1 -Talking Abortion (Mis)information with ChatGPT on TikTok,http://arxiv.org/abs/2303.13524v1 -Many bioinformatics programming tasks can be automated with ChatGPT,http://arxiv.org/abs/2303.13528v1 -"CompoNeRF: Text-guided Multi-object Compositional NeRF with Editable 3D - Scene Layout",http://arxiv.org/abs/2303.13843v2 -"Chat-REC: Towards Interactive and Explainable LLMs-Augmented Recommender - System",http://arxiv.org/abs/2303.14524v2 -A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion,http://arxiv.org/abs/2303.16378v2 -"Joint 2D-3D Multi-Task Learning on Cityscapes-3D: 3D Detection, - Segmentation, and Depth Estimation",http://arxiv.org/abs/2304.00971v3 -"PromptORE -- A Novel Approach Towards Fully Unsupervised Relation - Extraction",http://dx.doi.org/10.1145/3511808.3557422 -Document-Level Machine Translation with Large Language Models,http://arxiv.org/abs/2304.02210v2 -When do you need Chain-of-Thought Prompting for ChatGPT?,http://arxiv.org/abs/2304.03262v2 -"ChatPipe: Orchestrating Data Preparation Program by Optimizing - Human-ChatGPT Interactions",http://arxiv.org/abs/2304.03540v1 -"Towards Real-time Text-driven Image Manipulation with Unconditional - Diffusion Models",http://arxiv.org/abs/2304.04344v1 -Controllable Textual Inversion for Personalized Text-to-Image Generation,http://arxiv.org/abs/2304.05265v3 -"Audience Expansion for Multi-show Release Based on an Edge-prompted - Heterogeneous Graph Network",http://arxiv.org/abs/2304.05474v1 -Expressive Text-to-Image Generation with Rich Text,http://arxiv.org/abs/2304.06720v2 -"Optimizing the Resolution of Hydrodynamic Simulations for MCRaT - Radiative Transfer Calculations",http://arxiv.org/abs/2304.07287v2 -"Language Models Enable Simple Systems for Generating Structured Views of - Heterogeneous Data Lakes",http://arxiv.org/abs/2304.09433v2 -"Graph-ToolFormer: To Empower LLMs with Graph Reasoning Ability via - Prompt Augmented by ChatGPT",http://arxiv.org/abs/2304.11116v3 -"Evidence for two distinct populations of kilonova-associated Gamma Ray - Bursts",http://dx.doi.org/10.3847/2041-8213/acd4c4 -TextDeformer: Geometry Manipulation using Text Guidance,http://arxiv.org/abs/2304.13348v1 -"CONSCENDI: A Contrastive and Scenario-Guided Distillation Approach to - Guardrail Models for Virtual Assistants",http://arxiv.org/abs/2304.14364v1 -"TALLRec: An Effective and Efficient Tuning 
Framework to Align Large - Language Model with Recommendation",http://dx.doi.org/10.1145/3604915.3608857 -"Distilling Step-by-Step! Outperforming Larger Language Models with Less - Training Data and Smaller Model Sizes",http://arxiv.org/abs/2305.02301v2 -Lift Yourself Up: Retrieval-augmented Text Generation with Self Memory,http://arxiv.org/abs/2305.02437v2 -Task-Optimized Adapters for an End-to-End Task-Oriented Dialogue System,http://arxiv.org/abs/2305.02468v3 -Personalize Segment Anything Model with One Shot,http://arxiv.org/abs/2305.03048v2 -"Language Models Don't Always Say What They Think: Unfaithful - Explanations in Chain-of-Thought Prompting",http://arxiv.org/abs/2305.04388v1 -"Do Large Language Models Show Decision Heuristics Similar to Humans? A - Case Study Using GPT-3.5",http://arxiv.org/abs/2305.04400v1 -"ReGeneration Learning of Diffusion Models with Rich Prompts for - Zero-Shot Image Translation",http://arxiv.org/abs/2305.04651v1 -"The Case Records of ChatGPT: Language Models and Complex Clinical - Questions",http://arxiv.org/abs/2305.05609v1 -"Beyond Prompts: Exploring the Design Space of Mixed-Initiative - Co-Creativity Systems",http://arxiv.org/abs/2305.07465v1 -"Distinguish Before Answer: Generating Contrastive Explanation as - Knowledge for Commonsense Question Answering",http://arxiv.org/abs/2305.08135v2 -Text Classification via Large Language Models,http://arxiv.org/abs/2305.08377v3 -SatLM: Satisfiability-Aided Language Models Using Declarative Prompting,http://arxiv.org/abs/2305.09656v3 -"Counterfactuals for Design: A Model-Agnostic Method For Design - Recommendations",http://arxiv.org/abs/2305.11308v1 -Post Hoc Explanations of Language Models Can Improve Language Models,http://arxiv.org/abs/2305.11426v2 -Zero-Shot Text Classification via Self-Supervised Tuning,http://arxiv.org/abs/2305.11442v2 -"PlugMed: Improving Specificity in Patient-Centered Medical Dialogue - Generation using In-Context Learning",http://arxiv.org/abs/2305.11508v2 -"Harnessing Text-to-Image Attention Prior for Reference-based Multi-view - Image Synthesis",http://arxiv.org/abs/2305.11577v2 -Text2NeRF: Text-Driven 3D Scene Generation with Neural Radiance Fields,http://arxiv.org/abs/2305.11588v1 -"Prompting ChatGPT in MNER: Enhanced Multimodal Named Entity Recognition - with Auxiliary Refined Knowledge",http://arxiv.org/abs/2305.12212v2 -"BreastSAM: A Study of Segment Anything Model for Breast Tumor Detection - in Ultrasound Images",http://arxiv.org/abs/2305.12447v1 -TheoremQA: A Theorem-driven Question Answering dataset,http://arxiv.org/abs/2305.12524v2 -RecurrentGPT: Interactive Generation of (Arbitrarily) Long Text,http://arxiv.org/abs/2305.13304v1 -"PIEClass: Weakly-Supervised Text Classification with Prompting and - Noise-Robust Iterative Ensemble Training",http://arxiv.org/abs/2305.13723v2 -"RetICL: Sequential Retrieval of In-Context Examples with Reinforcement - Learning",http://arxiv.org/abs/2305.14502v1 -Getting MoRE out of Mixture of Language Model Reasoning Experts,http://arxiv.org/abs/2305.14628v2 -SciFix: Outperforming GPT3 on Scientific Factual Error Correction,http://arxiv.org/abs/2305.14707v2 -"Self-ICL: Zero-Shot In-Context Learning with Self-Generated - Demonstrations",http://arxiv.org/abs/2305.15035v2 -"Inference-Time Policy Adapters (IPA): Tailoring Extreme-Scale LMs - without Fine-tuning",http://arxiv.org/abs/2305.15065v1 -"Annotation Imputation to Individualize Predictions: Initial Studies on - Distribution Dynamics and Model Predictions",http://arxiv.org/abs/2305.15070v3 
-"Harnessing the Power of Large Language Models for Natural Language to - First-Order Logic Translation",http://arxiv.org/abs/2305.15541v1 -Voyager: An Open-Ended Embodied Agent with Large Language Models,http://arxiv.org/abs/2305.16291v2 -"Discovering Novel Actions in an Open World with Object-Grounded Visual - Commonsense Reasoning",http://arxiv.org/abs/2305.16602v1 -HUB: Guiding Learned Optimizers with Continuous Prompt Tuning,http://arxiv.org/abs/2305.16823v2 -"Breaking Language Barriers with a LEAP: Learning Strategies for Polyglot - LLMs",http://arxiv.org/abs/2305.17740v1 -TD-GEM: Text-Driven Garment Editing Mapper,http://arxiv.org/abs/2305.18120v2 -Improved Probabilistic Image-Text Representations,http://arxiv.org/abs/2305.18171v2 -"LaFTer: Label-Free Tuning of Zero-shot Classifier using Language and - Unlabeled Image Collections",http://arxiv.org/abs/2305.18287v2 -"ReWOO: Decoupling Reasoning from Observations for Efficient Augmented - Language Models",http://arxiv.org/abs/2305.18323v1 -"Diffusion Model is an Effective Planner and Data Synthesizer for - Multi-Task Reinforcement Learning",http://arxiv.org/abs/2305.18459v2 -"EmotionGesture: Audio-Driven Diverse Emotional Co-Speech 3D Gesture - Generation",http://arxiv.org/abs/2305.18891v1 -"StyleAvatar3D: Leveraging Image-Text Diffusion Models for High-Fidelity - 3D Avatar Generation",http://arxiv.org/abs/2305.19012v2 -"ReviewerGPT? An Exploratory Study on Using Large Language Models for - Paper Reviewing",http://arxiv.org/abs/2306.00622v1 -"LLMatic: Neural Architecture Search via Large Language Models and - Quality Diversity Optimization",http://arxiv.org/abs/2306.01102v6 -"Befriending ChatGPT and other superchatbots: An AI-integrated take-home - assessment preserving integrity",http://arxiv.org/abs/2306.02096v1 -Detector Guidance for Multi-Object Text-to-Image Generation,http://arxiv.org/abs/2306.02236v1 -"Evaluating and Improving Tool-Augmented Computation-Intensive Math - Reasoning",http://arxiv.org/abs/2306.02408v1 -"Disaster Anomaly Detector via Deeper FCDDs for Explainable Initial - Responses",http://arxiv.org/abs/2306.02517v2 -"A Scalable and Adaptive System to Infer the Industry Sectors of - Companies: Prompt + Model Tuning of Generative Language Models",http://arxiv.org/abs/2306.03313v1 -MISGENDERED: Limits of Large Language Models in Understanding Pronouns,http://arxiv.org/abs/2306.03950v2 -GPT Self-Supervision for a Better Data Annotator,http://arxiv.org/abs/2306.04349v2 -"GeoDiffusion: Text-Prompted Geometric Control for Object Detection Data - Generation",http://arxiv.org/abs/2306.04607v5 -covLLM: Large Language Models for COVID-19 Biomedical Literature,http://arxiv.org/abs/2306.04926v1 -"DLAMA: A Framework for Curating Culturally Diverse Facts for Probing the - Knowledge of Pretrained Language Models",http://arxiv.org/abs/2306.05076v1 -"Interactive Fashion Content Generation Using LLMs and Latent Diffusion - Models",http://arxiv.org/abs/2306.05182v1 -"AutoTAMP: Autoregressive Task and Motion Planning with LLMs as - Translators and Checkers",http://arxiv.org/abs/2306.06531v2 -Sticker820K: Empowering Interactive Retrieval with Stickers,http://arxiv.org/abs/2306.06870v1 -"Large Language Models as Tax Attorneys: A Case Study in Legal - Capabilities Emergence",http://arxiv.org/abs/2306.07075v1 -Controlling Text-to-Image Diffusion by Orthogonal Finetuning,http://arxiv.org/abs/2306.07280v1 -"Linguistic Binding in Diffusion Models: Enhancing Attribute - Correspondence through Attention Map 
Alignment",http://arxiv.org/abs/2306.08877v2 -Language-Guided Music Recommendation for Video via Prompt Analogies,http://arxiv.org/abs/2306.09327v1 -"Human Preference Score v2: A Solid Benchmark for Evaluating Human - Preferences of Text-to-Image Synthesis",http://arxiv.org/abs/2306.09341v2 -AvatarBooth: High-Quality and Customizable 3D Human Avatar Generation,http://arxiv.org/abs/2306.09864v1 -"Developing Effective Educational Chatbots with ChatGPT prompts: Insights - from Preliminary Tests in a Case Study on Social Media Literacy (with - appendix)",http://arxiv.org/abs/2306.10645v2 -"A Preliminary Study of ChatGPT on News Recommendation: Personalization, - Provider Fairness, Fake News",http://arxiv.org/abs/2306.10702v1 -"A-STAR: Test-time Attention Segregation and Retention for Text-to-image - Synthesis",http://arxiv.org/abs/2306.14544v1 -Cross Architecture Distillation for Face Recognition,http://arxiv.org/abs/2306.14662v1 -MotionGPT: Human Motion as a Foreign Language,http://arxiv.org/abs/2306.14795v2 -Generating Parametric BRDFs from Natural Language Descriptions,http://arxiv.org/abs/2306.15679v2 -"Towards Measuring the Representation of Subjective Global Opinions in - Language Models",http://arxiv.org/abs/2306.16388v1 -"The Segment Anything Model (SAM) for Remote Sensing Applications: From - Zero to One Shot",http://arxiv.org/abs/2306.16623v1 -Towards Personalized Cold-Start Recommendation with Prompts,http://arxiv.org/abs/2306.17256v2 -InstructEval: Systematic Evaluation of Instruction Selection Methods,http://arxiv.org/abs/2307.00259v2 -GenRec: Large Language Model for Generative Recommendation,http://arxiv.org/abs/2307.00457v2 -JourneyDB: A Benchmark for Generative Image Understanding,http://arxiv.org/abs/2307.00716v1 -"RefSAM: Efficiently Adapting Segmenting Anything Model for Referring - Video Object Segmentation",http://arxiv.org/abs/2307.00997v2 -"Open-Domain Hierarchical Event Schema Induction by Incremental Prompting - and Verification",http://arxiv.org/abs/2307.01972v1 -"ODD: A Benchmark Dataset for the NLP-based Opioid Related Aberrant - Behavior Detection",http://arxiv.org/abs/2307.02591v2 -"Large Language Models as Batteries-Included Zero-Shot ESCO Skills - Matchers",http://arxiv.org/abs/2307.03539v1 -Large Language Models as General Pattern Machines,http://arxiv.org/abs/2307.04721v1 -"Divide, Evaluate, and Refine: Evaluating and Improving Text-to-Image - Alignment with Iterative VQA Feedback",http://arxiv.org/abs/2307.04749v1 -"CREPE: Learnable Prompting With CLIP Improves Visual Relationship - Prediction",http://arxiv.org/abs/2307.04838v2 -"Unleashing Cognitive Synergy in Large Language Models: A Task-Solving - Agent through Multi-Persona Self-Collaboration",http://arxiv.org/abs/2307.05300v2 -"Exploring Large Language Model for Graph Data Understanding in Online - Job Recommendations",http://arxiv.org/abs/2307.05722v1 -"Prompt Generate Train (PGT): Few-shot Domain Adaption of Retrieval - Augmented Generation Models for Open Book Question-Answering",http://arxiv.org/abs/2307.05915v2 -Animate-A-Story: Storytelling with Retrieval-Augmented Video Generation,http://arxiv.org/abs/2307.06940v1 -"LLM Comparative Assessment: Zero-shot NLG Evaluation through Pairwise - Comparisons using Large Language Models",http://arxiv.org/abs/2307.07889v2 -"Is Imitation All You Need? Generalized Decision-Making with Dual-Phase - Training",http://arxiv.org/abs/2307.07909v3 -CLIP-Guided StyleGAN Inversion for Text-Driven Real Image Editing,http://arxiv.org/abs/2307.08397v2 -"Let's ViCE! 
Mimicking Human Cognitive Behavior in Image Generation - Evaluation",http://arxiv.org/abs/2307.09416v2 -"SciBench: Evaluating College-Level Scientific Problem-Solving Abilities - of Large Language Models",http://arxiv.org/abs/2307.10635v1 -Revise thermal winds of remnant neutron stars in gamma-ray bursts,http://dx.doi.org/10.1088/1674-4527/ace51e -"GIST: Generating Image-Specific Text for Fine-grained Object - Classification",http://arxiv.org/abs/2307.11315v2 -TF-ICON: Diffusion-Based Training-Free Cross-Domain Image Composition,http://arxiv.org/abs/2307.12493v4 -PUMA: Secure Inference of LLaMA-7B in Five Minutes,http://arxiv.org/abs/2307.12533v3 -"AViT: Adapting Vision Transformers for Small Skin Lesion Segmentation - Datasets",http://arxiv.org/abs/2307.13897v1 -"Minimally-Supervised Speech Synthesis with Conditional Diffusion Model - and Language Model: A Comparative Study of Semantic Coding",http://arxiv.org/abs/2307.15484v2 -"A Knowledge-enhanced Two-stage Generative Framework for Medical Dialogue - Information Extraction",http://arxiv.org/abs/2307.16200v1 -Generative Query Reformulation for Effective Adhoc Search,http://arxiv.org/abs/2308.00415v1 -"Retroformer: Retrospective Large Language Agents with Policy Gradient - Optimization",http://arxiv.org/abs/2308.02151v1 -"Exploring Part-Informed Visual-Language Learning for Person - Re-Identification",http://arxiv.org/abs/2308.02738v1 -"Sketch and Text Guided Diffusion Model for Colored Point Cloud - Generation",http://arxiv.org/abs/2308.02874v1 -"""Kurosawa"": A Script Writer's Assistant",http://arxiv.org/abs/2308.03122v1 -"SciGraphQA: A Large-Scale Synthetic Multi-Turn Question-Answering - Dataset for Scientific Graphs",http://arxiv.org/abs/2308.03349v1 -"Fine-tuning Multimodal LLMs to Follow Zero-shot Demonstrative - Instructions",http://arxiv.org/abs/2308.04152v3 -ChatGPT for Arabic Grammatical Error Correction,http://arxiv.org/abs/2308.04492v1 -"Gamma-Ray Bursts Observed by the Transiting Exoplanet Survey Satellite: - Prompt Optical Counterparts and Afterglows of Swift-XRT Localized GRBs",http://arxiv.org/abs/2308.05148v1 -"Jurassic World Remake: Bringing Ancient Fossils Back to Life via - Zero-Shot Long Image-to-Image Translation",http://dx.doi.org/10.1145/3581783.3612708 -"Steering Language Generation: Harnessing Contrastive Expert Guidance and - Negative Prompting for Coherent and Diverse Synthetic Data Generation",http://arxiv.org/abs/2308.07645v2 -Prompt Switch: Efficient CLIP Adaptation for Text-Video Retrieval,http://arxiv.org/abs/2308.07648v1 -"Solving Challenging Math Word Problems Using GPT-4 Code Interpreter with - Code-based Self-Verification",http://arxiv.org/abs/2308.07921v1 -Time Travel in LLMs: Tracing Data Contamination in Large Language Models,http://arxiv.org/abs/2308.08493v2 -TeCH: Text-guided Reconstruction of Lifelike Clothed Humans,http://arxiv.org/abs/2308.08545v2 -"MaScQA: A Question Answering Dataset for Investigating Materials Science - Knowledge of Large Language Models",http://arxiv.org/abs/2308.09115v1 -"Red-Teaming Large Language Models using Chain of Utterances for - Safety-Alignment",http://arxiv.org/abs/2308.09662v3 -Graph of Thoughts: Solving Elaborate Problems with Large Language Models,http://arxiv.org/abs/2308.09687v2 -"UniAP: Towards Universal Animal Perception in Vision via Few-shot - Learning",http://arxiv.org/abs/2308.09953v1 -"Color Prompting for Data-Free Continual Unsupervised Domain Adaptive - Person Re-Identification",http://arxiv.org/abs/2308.10716v1 -Backdooring Textual Inversion for 
Concept Censorship,http://arxiv.org/abs/2308.10718v2 -"Exploring the Effectiveness of GPT Models in Test-Taking: A Case Study - of the Driver's License Knowledge Test",http://arxiv.org/abs/2308.11827v1 -LLM Powered Sim-to-real Transfer for Traffic Signal Control,http://arxiv.org/abs/2308.14284v2 -"AnomalyGPT: Detecting Industrial Anomalies using Large Vision-Language - Models",http://arxiv.org/abs/2308.15366v3 -Context Aware Query Rewriting for Text Rankers using LLM,http://arxiv.org/abs/2308.16753v1 -"VideoGen: A Reference-Guided Latent Diffusion Approach for High - Definition Text-to-Video Generation",http://arxiv.org/abs/2309.00398v2 -Extracting Mathematical Concepts with Large Language Models,http://arxiv.org/abs/2309.00642v1 -Demystifying RCE Vulnerabilities in LLM-Integrated Apps,http://arxiv.org/abs/2309.02926v2 -Prompt-based Effective Input Reformulation for Legal Case Retrieval,http://arxiv.org/abs/2309.02962v1 -"Caveat (IoT) Emptor: Towards Transparency of IoT Device Presence (Full - Version)",http://dx.doi.org/10.1145/3576915.3623089 -"The Impact of AI in Physics Education: A Comprehensive Review from GCSE - to University Levels",http://arxiv.org/abs/2309.05163v1 -"Balanced and Explainable Social Media Analysis for Public Health with - Large Language Models",http://arxiv.org/abs/2309.05951v1 -Unified Human-Scene Interaction via Prompted Chain-of-Contacts,http://arxiv.org/abs/2309.07918v2 -"Decoder-only Architecture for Speech Recognition with CTC Prompts and - Text Data Augmentation",http://arxiv.org/abs/2309.08876v1 -"Fabricator: An Open Source Toolkit for Generating Labeled Training Data - with Teacher LLMs",http://arxiv.org/abs/2309.09582v1 -RECAP: Retrieval-Augmented Audio Captioning,http://arxiv.org/abs/2309.09836v1 -"Prompt, Plan, Perform: LLM-based Humanoid Control via Quantized - Imitation Learning",http://arxiv.org/abs/2309.11359v1 -Code Soliloquies for Accurate Calculations in Large Language Models,http://arxiv.org/abs/2309.12161v1 -"ChatPRCS: A Personalized Support System for English Reading - Comprehension based on ChatGPT",http://arxiv.org/abs/2309.12808v2 -"Bridging Sensor Gaps via Single-Direction Tuning for Hyperspectral Image - Classification",http://arxiv.org/abs/2309.12865v1 -MOSAIC: Multi-Object Segmented Arbitrary Stylization Using CLIP,http://arxiv.org/abs/2309.13716v1 -"Fill the K-Space and Refine the Image: Prompting for Dynamic and - Multi-Contrast MRI Reconstruction",http://arxiv.org/abs/2309.13839v1 -"Free-Bloom: Zero-Shot Text-to-Video Generator with LLM Director and LDM - Animator",http://arxiv.org/abs/2309.14494v1 -"Text-image guided Diffusion Model for generating Deepfake celebrity - interactions",http://arxiv.org/abs/2309.14751v1 -"VideoAdviser: Video Knowledge Distillation for Multimodal Transfer - Learning",http://dx.doi.org/10.1109/ACCESS.2023.3280187. 
-"GPT-Fathom: Benchmarking Large Language Models to Decipher the - Evolutionary Path towards GPT-4 and Beyond",http://arxiv.org/abs/2309.16583v3 -An evaluation of GPT models for phenotype concept recognition,http://arxiv.org/abs/2309.17169v1 -Motif: Intrinsic Motivation from Artificial Intelligence Feedback,http://arxiv.org/abs/2310.00166v1 -Parameter-Efficient Tuning Helps Language Model Alignment,http://arxiv.org/abs/2310.00819v1 -Enable Language Models to Implicitly Learn Self-Improvement From Data,http://arxiv.org/abs/2310.00898v2 -"All Languages Matter: On the Multilingual Safety of Large Language - Models",http://arxiv.org/abs/2310.00905v1 -Resolving Knowledge Conflicts in Large Language Models,http://arxiv.org/abs/2310.00935v1 -"Knowledge Crosswords: Geometric Reasoning over Structured Knowledge with - Large Language Models",http://arxiv.org/abs/2310.01290v1 -Prompt GRB Polarization from Non-Axisymmetric Jets,http://arxiv.org/abs/2310.01357v1 -Direct Inversion: Boosting Diffusion-based Editing with 3 Lines of Code,http://arxiv.org/abs/2310.01506v2 -Zero-Shot Refinement of Buildings' Segmentation Models using SAM,http://arxiv.org/abs/2310.01845v1 -Predicting the Splash of a Drop Impacting a Thin Liquid Film,http://dx.doi.org/10.1021/acs.langmuir.3c02185 -"Retrieval-augmented Generation to Improve Math Question-Answering: - Trade-offs Between Groundedness and Human Preference",http://arxiv.org/abs/2310.03184v1 -"Tuning In to Neural Encoding: Linking Human Brain and Artificial - Supervised Representations of Language",http://arxiv.org/abs/2310.04460v1 -"Parameterizing Context: Unleashing the Power of Parameter-Efficient - Fine-Tuning and In-Context Tuning for Continual Table Semantic Parsing",http://arxiv.org/abs/2310.04801v1 -"TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series - Forecasting",http://arxiv.org/abs/2310.04948v2 -"Compresso: Structured Pruning with Collaborative Prompting Learns - Compact Large Language Models",http://arxiv.org/abs/2310.05015v2 -"Resolving the Imbalance Issue in Hierarchical Disciplinary Topic - Inference via LLM-based Data Augmentation",http://arxiv.org/abs/2310.05318v2 -Are Large Language Models Post Hoc Explainers?,http://arxiv.org/abs/2310.05797v2 -"Reinforcement Learning in the Era of LLMs: What is Essential? What is - needed? 
An RL Perspective on RLHF, Prompting, and Beyond",http://arxiv.org/abs/2310.06147v1 -Expanding the Vocabulary of BERT for Knowledge Base Construction,http://arxiv.org/abs/2310.08291v1 -"GaussianDreamer: Fast Generation from Text to 3D Gaussian Splatting with - Point Cloud Priors",http://arxiv.org/abs/2310.08529v1 -"Tree-Planner: Efficient Close-loop Task Planning with Large Language - Models",http://arxiv.org/abs/2310.08582v1 -"DeltaSpace: A Semantic-aligned Feature Space for Flexible Text-guided - Image Editing",http://arxiv.org/abs/2310.08785v1 -"GRB 080710: a narrow, structured jet showing a late, achromatic peak in - the optical and infrared afterglow?",http://arxiv.org/abs/2310.08900v1 -"AgentCF: Collaborative Learning with Autonomous Language Agents for - Recommender Systems",http://arxiv.org/abs/2310.09233v1 -LICO: Explainable Models with Language-Image Consistency,http://arxiv.org/abs/2310.09821v1 -"Generalizing Few-Shot Named Entity Recognizers to Unseen Domains with - Type-Related Features",http://arxiv.org/abs/2310.09846v1 -MHz to TeV expectations from scotogenic WIMP dark matter,http://arxiv.org/abs/2310.10421v1 -"Model Selection of Anomaly Detectors in the Absence of Labeled - Validation Data",http://arxiv.org/abs/2310.10461v1 -EvalCrafter: Benchmarking and Evaluating Large Video Generation Models,http://arxiv.org/abs/2310.11440v2 -"Eliminating Reasoning via Inferring with Planning: A New Framework to - Guide LLMs' Non-linear Thinking",http://arxiv.org/abs/2310.12342v1 -AgentTuning: Enabling Generalized Agent Abilities for LLMs,http://arxiv.org/abs/2310.12823v2 -Do Language Models Learn about Legal Entity Types during Pretraining?,http://arxiv.org/abs/2310.13092v1 -"FABULA: Intelligence Report Generation Using Retrieval-Augmented - Narrative Construction",http://dx.doi.org/10.1145/3625007.3627505 -SAM-Med3D,http://arxiv.org/abs/2310.15161v1 -"Videoprompter: an ensemble of foundational models for zero-shot video - understanding",http://arxiv.org/abs/2310.15324v1 -"Learning with Noisy Labels Using Collaborative Sample Selection and - Contrastive Semi-Supervised Learning",http://arxiv.org/abs/2310.15533v1 -"CONTRASTE: Supervised Contrastive Pre-training With Aspect-based Prompts - For Aspect Sentiment Triplet Extraction",http://arxiv.org/abs/2310.15577v1 -"Contextual Normalization Applied to Aircraft Gas Turbine Engine - Diagnosis",http://arxiv.org/abs/cs/0212031v1 -"Characteristics of the Limit Cycle of a Reciprocating Quantum Heat - Engine",http://dx.doi.org/10.1103/PhysRevE.70.046110 -Neural Modeling and Control of Diesel Engine with Pollution Constraints,http://dx.doi.org/10.1007/s10846-005-3806-y -Piezoresistive heat engine and refrigerator,http://dx.doi.org/10.1038/NPHYS1871 -Circuit Design: An inquiry lab activity at Maui Community College,http://arxiv.org/abs/1009.3296v1 -"Application of polynomial vector (pv) processing to improve the - estimation performance of bio diesel in variable compression ratio diesel - engine",http://arxiv.org/abs/1301.1261v1 -More bang for your buck: Towards super-adiabatic quantum engines,http://dx.doi.org/10.1038/srep06208 -User Satisfaction in Competitive Sponsored Search,http://arxiv.org/abs/1310.4098v1 -Collaborative Verification-Driven Engineering of Hybrid Systems,http://dx.doi.org/10.1007/s11786-014-0176-y -"Happy software developers solve problems better: psychological - measurements in empirical software engineering",http://dx.doi.org/10.7717/peerj.289 -"Software Engineers' Attitudes Towards Organizational Change - an - Industrial 
Case Study",http://arxiv.org/abs/1601.05837v1 -Irreversible Brownian heat engine,http://arxiv.org/abs/1606.08720v1 -"Heat Engines running upon a Non-Ideal Fluid Model with Higher - Efficiencies than upon the Ideal Gas Model",http://arxiv.org/abs/1609.08974v4 -"Efficiency versus Speed in Quantum Heat Engines: Rigorous Constraint - from Lieb-Robinson Bound",http://dx.doi.org/10.1103/PhysRevE.96.022138 -Database Engines: Evolution of Greenness,http://dx.doi.org/10.1002/smr.1915 -"A quantum-dot heat engine operating close to the thermodynamic - efficiency limits",http://dx.doi.org/10.1038/s41565-018-0200-5 -"The challenges for Systems Engineers of non-classical quantum - technologies",http://arxiv.org/abs/1710.05643v2 -"Using Blippar Augmented Reality Browser in the Practical Training of - Mechanical Engineers",http://arxiv.org/abs/1807.00279v1 -"Limit of Temporal Resolution in Atomic Force Microscopy: How fast can we - image with atomically-engineered tips while preserving picometer range - spatial resolution?",http://dx.doi.org/10.1103/PhysRevApplied.11.024068 -"Optimizing the reliability of a bank with Logistic Regression and - Particle Swarm Optimization",http://arxiv.org/abs/2004.11122v1 -"Thermodynamics of the mesoscopic thermoelectric heat engine beyond the - linear-response regime",http://dx.doi.org/10.1103/PhysRevE.92.042165 -Old Techniques for New Join Algorithms: A Case Study in RDF Processing,http://arxiv.org/abs/1602.03557v1 -"On the Presence of Green and Sustainable Software Engineering in Higher - Education Curricula",http://arxiv.org/abs/1703.01078v1 -Foraging patterns in online searches,http://dx.doi.org/10.1103/PhysRevE.95.032145 -"Game Development Software Engineering Process Life Cycle: A Systematic - Review",http://dx.doi.org/10.1186/s40411-016-0032-7 -Ethical Interviews in Software Engineering,http://arxiv.org/abs/1906.07993v1 -Thermodynamic uncertainty relation in slowly driven quantum heat engines,http://dx.doi.org/10.1103/PhysRevLett.126.210603 -BigDataBench: a Big Data Benchmark Suite from Web Search Engines,http://arxiv.org/abs/1307.0320v1 -"Optimal performance of generalized heat engines with finite-size baths - of arbitrary multiple conserved quantities beyond i.i.d. 
scaling",http://dx.doi.org/10.1103/PhysRevE.97.012129 -Microscopic Engine Powered by Critical Demixing,http://dx.doi.org/10.1103/PhysRevLett.120.068004 -Qualitative software engineering research -- reflections and guidelines,http://arxiv.org/abs/1712.08341v3 -"Demographic differences in search engine use with implications for - cohort selection",http://arxiv.org/abs/1805.09139v1 -Measurement Based Quantum Heat Engine with Coupled Working Medium,http://dx.doi.org/10.3390/e21111131 -"Influence of technological progress and renewability on the - sustainability of ecosystem engineers populations",http://dx.doi.org/10.3934/mbe.2019173 -Hardware Reverse Engineering: Overview and Open Challenges,http://dx.doi.org/10.1109/IVSW.2017.8031550 -"Online information of vaccines: information quality is an ethical - responsibility of search engines",http://arxiv.org/abs/1912.00898v1 -Analyzing Web Search Behavior for Software Engineering Tasks,http://arxiv.org/abs/1912.09519v3 -"A New Software Framework for Traffic Engineering: Path Cardinality and - the Effect of Multipath on Residual Capacity",http://arxiv.org/abs/2001.05939v1 -"Optimal operation of a three-level quantum heat engine and universal - nature of efficiency",http://dx.doi.org/10.1103/PhysRevResearch.2.043187 -Variations on a Demonic Theme: Szilard's Other Engines,http://arxiv.org/abs/2003.09990v1 -"An Evaluation of Two Commercial Deep Learning-Based Information - Retrieval Systems for COVID-19 Literature",http://arxiv.org/abs/2007.03106v2 -"Detecting Optimization Bugs in Database Engines via Non-Optimizing - Reference Engine Construction",http://dx.doi.org/10.1145/3368089.3409710 -"Understanding and Improving Artifact Sharing in Software Engineering - Research",http://dx.doi.org/10.1007/s10664-021-09973-5 -MAR: A structure-based search engine for models,http://arxiv.org/abs/2008.11858v1 -Data Engineering for HPC with Python,http://arxiv.org/abs/2010.06312v1 -Stochastic Thermodynamic Cycles of a Mesoscopic Thermoelectric Engine,http://dx.doi.org/10.1103/PhysRevB.103.075404 -"Work extraction and performance of colloidal heat engines in - viscoelastic baths",http://dx.doi.org/10.3389/fphy.2021.643333 -Supporting search engines with knowledge and context,http://arxiv.org/abs/2102.06762v1 -Towards a Systematic Engineering of Industrial Domain-Specific Language,http://arxiv.org/abs/2103.09682v1 -Software Engineering for IoT-Driven Data Analytics Applications,http://arxiv.org/abs/2103.11192v1 -"CodeTrans: Towards Cracking the Language of Silicon's Code Through - Self-Supervised Deep Learning and High Performance Computing",http://arxiv.org/abs/2104.02443v2 -"Automated Conformance Testing for JavaScript Engines via Deep Compiler - Fuzzing",http://dx.doi.org/10.1145/3453483.3454054 -On the Future of Cloud Engineering,http://arxiv.org/abs/2108.08685v1 -"Learning Text-Image Joint Embedding for Efficient Cross-Modal Retrieval - with Deep Feature Engineering",http://dx.doi.org/10.1145/3490519 -FuSeBMC v.4: Smart Seed Generation for Hybrid Fuzzing,http://arxiv.org/abs/2112.10627v1 -A geometric bound on the efficiency of irreversible thermodynamic cycles,http://dx.doi.org/10.1103/PhysRevLett.128.230601 -Thermodynamic engine powered by anisotropic fluctuations,http://arxiv.org/abs/2203.07573v2 -"Ultra-coherent nanomechanical resonators based on density phononic - crystal engineering",http://arxiv.org/abs/2207.06703v1 -"Power of a quasi-spin quantum Otto engine at negative effective - temperature",http://arxiv.org/abs/2207.09272v1 -"A Research Software 
Engineering Workflow for Computational Science and - Engineering",http://arxiv.org/abs/2208.07460v1 -Making statistics work: a quantum engine in the BEC-BCS crossover,http://dx.doi.org/10.1038/s41586-023-06469-8 -"Towards Ontology-Based Requirements Engineering for IoT-Supported - Well-Being, Aging and Health",http://dx.doi.org/10.1109/REW56159.2022.00019 -"A systematic literature review of capstone courses in software - engineering",http://arxiv.org/abs/2301.03554v1 -"Exploring the Versal AI engines for accelerating stencil-based - atmospheric advection simulation",http://dx.doi.org/10.1145/3543622.3573047 -"A computational model for the design of a nitrous oxide--paraffin wax - hybrid rocket engine",http://arxiv.org/abs/2302.06725v1 -"Synchronization approach to achieving maximum power and thermal - efficiency for weakly-coupled low-temperature-differential Stirling engines",http://arxiv.org/abs/2302.11308v3 -"Diversity in Software Engineering: A Survey about Scientists from - Underrepresented Groups",http://arxiv.org/abs/2303.05950v2 -"A Case Study on AI Engineering Practices: Developing an Autonomous Stock - Trading System",http://arxiv.org/abs/2303.13216v1 -Reusing Deep Neural Network Models through Model Re-engineering,http://arxiv.org/abs/2304.00245v2 -"Measurement-based quantum Otto engine with a two-spin system coupled by - anisotropic interaction: enhanced efficiency at finite times",http://dx.doi.org/10.1103/PhysRevE.107.054110 -REMaQE -- Reverse Engineering Math Equations from Executables,http://arxiv.org/abs/2305.06902v1 -"Is Our Organization Actually Measuring Productivity? How Contrasting - Organizational and Individual Measures of Engineering Success is an - Opportunity to Drive Engineering Transformation",http://arxiv.org/abs/2305.11030v3 -"FREPA: An Automated and Formal Approach to Requirement Modeling and - Analysis in Aircraft Control Domain",http://arxiv.org/abs/2306.01260v1 -Exploration of technical debt in start-ups,http://dx.doi.org/10.1145/3183519.3183539 -Building a Quantum Engineering Undergraduate Program,http://dx.doi.org/10.1109/TE.2022.3144943 -"Experimental demonstration of quantum effects in the operation of - microscopic heat engines",http://dx.doi.org/10.1103/PhysRevLett.122.110601 -"Software Engineers vs. 
Machine Learning Algorithms: An Empirical Study - Assessing Performance and Reuse Tasks",http://arxiv.org/abs/1802.01096v2 -"A Study about the Knowledge and Use of Requirements Engineering - Standards in Industry",http://dx.doi.org/10.1109/TSE.2021.3087792 -Biological Computation as the Revolution of Complex Engineered Systems,http://arxiv.org/abs/1110.3316v1 -"A mismatch between self-efficacy and performance: Undergraduate women in - engineering tend to have lower self-efficacy despite earning higher grades - than men",http://arxiv.org/abs/2003.06006v1 -Partially Observable Szilard Engines,http://dx.doi.org/10.1088/1367-2630/ac6b30 -"A Recurrent Differentiable Engine for Modeling Tensegrity Robots - Trainable with Low-Frequency Data",http://arxiv.org/abs/2203.00041v2 -GRBs Light Curves - Another Clue on the Inner Engine,http://dx.doi.org/10.1086/341748 -"Quantum, cyclic and particle-exchange heat engines",http://dx.doi.org/10.1016/j.physe.2005.05.038 -"Guided Google: A Meta Search Engine and its Implementation using the - Google Distributed Web Services",http://arxiv.org/abs/cs/0302018v1 -"A Use-Case Driven Approach in Requirements Engineering : The Mammogrid - Project",http://arxiv.org/abs/cs/0402008v1 -Web search engine based on DNS,http://arxiv.org/abs/cs/0405099v1 -A Model-driven Approach for Grid Services Engineering,http://arxiv.org/abs/cs/0509066v2 -The egalitarian effect of search engines,http://dx.doi.org/10.1073/pnas.0605525103 -Characterization of Search Engine Caches,http://arxiv.org/abs/cs/0703083v2 -"Generation of Pulsed Polarization Entangled Two-Photon State via - Temporal and Spectral Engineering",http://dx.doi.org/10.1080/0950034021000011455 -"The Quantum Four Stroke Heat Engine: Thermodynamic Observables in a - Model with Intrinsic Friction",http://dx.doi.org/10.1103/PhysRevE.68.016101 -"Geometrically Engineering the Standard Model: Locally Unfolding Three - Families out of E8",http://dx.doi.org/10.1103/PhysRevD.76.046004 -"Implementation of holonomic quantum computation through engineering and - manipulating environment",http://dx.doi.org/10.1103/PhysRevA.76.062311 -"Computational Simulation and 3D Virtual Reality Engineering Tools for - Dynamical Modeling and Imaging of Composite Nanomaterials",http://arxiv.org/abs/0708.1818v1 -Knowledge Engineering Technique for Cluster Development,http://arxiv.org/abs/0712.1994v1 -"Investigating the Potential of Test-Driven Development for Spreadsheet - Engineering",http://arxiv.org/abs/0801.4802v1 -The Anatomy of Mitos Web Search Engine,http://arxiv.org/abs/0803.2220v2 -"Towards an Effective Intrusion Response Engine Combined with Intrusion - Detection in Ad Hoc Networks",http://arxiv.org/abs/0807.2053v1 -The formation of planetary disks and winds: an ultraviolet view,http://dx.doi.org/10.1007/s10509-008-9894-4 -"Modeling of autoignition and NO sensitization for the oxidation of IC - engine surrogate fuels",http://dx.doi.org/10.1016/j.combustflame.2008.09.009 -"Modeling of NO sensitization of IC engines surrogate fuels auto-ignition - and combustion",http://arxiv.org/abs/0903.4353v1 -JConstHide: A Framework for Java Source Code Constant Hiding,http://arxiv.org/abs/0904.3458v1 -"Stochastic energetics of a Brownian motor and refrigerator driven by - non-uniform temperature",http://dx.doi.org/10.1142/S0217979214500556 -Teaching an Old Elephant New Tricks,http://arxiv.org/abs/0909.1758v1 -Pavideoge: A Metadata Markup Video Structure in Video Search Engine,http://arxiv.org/abs/0909.2496v2 -Type Safe Extensible 
Programming,http://arxiv.org/abs/0910.2654v1 -Rank Based Clustering For Document Retrieval From Biomedical Databases,http://arxiv.org/abs/0912.2307v1 -"Search Engine Optimization Techniques Practiced in Organizations: A - Study of Four Organizations",http://arxiv.org/abs/1006.4558v1 -Digital image exploration at Maui Community College,http://arxiv.org/abs/1009.3297v1 -S-MATE: Secure Coding-based Multipath Adaptive Traffic Engineering,http://arxiv.org/abs/1010.4858v1 -"A Kind of Representation of Common Knowledge and its Application in - Requirements Analysis",http://arxiv.org/abs/1101.5957v1 -User Awareness Measurement Through Social Engineering,http://arxiv.org/abs/1108.2149v1 -TrackMeNot: Enhancing the privacy of Web Search,http://arxiv.org/abs/1109.4677v1 -Do Software Languages Engineers Evaluate their Languages?,http://arxiv.org/abs/1109.6794v1 -"Bounds of Efficiency at Maximum Power for Normal-, Sub- and - Super-Dissipative Carnot-Like Heat Engines",http://dx.doi.org/10.1088/0253-6102/59/2/08 -On Reverse Engineering in the Cognitive and Brain Sciences,http://arxiv.org/abs/1201.4896v1 -Advanced Space Propulsion Based on Vacuum (Spacetime Metric) Engineering,http://arxiv.org/abs/1204.2184v1 -The efficiency and power of the martensite rotor heat engine. I,http://arxiv.org/abs/1206.3733v1 -"Quantum simulation via filtered Hamiltonian engineering: application to - perfect quantum transport in spin networks",http://dx.doi.org/10.1103/PhysRevLett.110.220503 -Fundamentals of Atmospheric Physics for Engineering,http://arxiv.org/abs/1208.3765v2 -Influence of Context on Decision Making during Requirements Elicitation,http://arxiv.org/abs/1210.7101v2 -"Adaptive Bee Colony in an Artificial Bee Colony for Solving Engineering - Design Problems",http://arxiv.org/abs/1211.0957v1 -Interaction-Oriented Software Engineering: Concepts and Principles,http://arxiv.org/abs/1211.4123v1 -How many software engineering professionals hold this certificate?,http://arxiv.org/abs/1211.4347v4 -"Nonlinear dynamics analysis of a membrane Stirling engine: Starting and - stable operation",http://dx.doi.org/10.1016/j.jsv.2009.05.025 -"Coupled thermodynamic-dynamic semi-analytical model of Free Piston - Stirling engines",http://dx.doi.org/10.1016/j.enconman.2010.12.014 -Heat engine driven by purely quantum information,http://dx.doi.org/10.1103/PhysRevLett.111.230402 -"Proceedings 10th International Workshop on Formal Engineering Approaches - to Software Components and Architectures",http://dx.doi.org/10.4204/EPTCS.108 -"Using Exclusive Web Crawlers to Store Better Results in Search Engines' - Database",http://arxiv.org/abs/1305.2686v1 -A nano heat engine beyond the Carnot limit,http://dx.doi.org/10.1103/PhysRevLett.112.030602 -On the Benefit of Information Centric Networks for Traffic Engineering,http://arxiv.org/abs/1311.0951v1 -"Towards Ontological Support for Principle Solutions in Mechanical - Engineering",http://arxiv.org/abs/1312.2359v2 -"Multidisciplinary Engineering Models: Methodology and Case Study in - Spreadsheet Analytics",http://arxiv.org/abs/1401.4582v1 -"Quantum Information Engines with Many-Body States attaining optimal - Extractable Work with Quantum Control",http://dx.doi.org/10.1103/PhysRevA.89.032327 -"A Framework for Enhancing Performance and Handling Run-Time Uncertainty - in Self-Adaptive Systems",http://dx.doi.org/10.5121/ijsea.2014.5106 -"Mathematical Model of Semantic Look - An Efficient Context Driven Search - Engine",http://arxiv.org/abs/1402.7200v1 -"Photospheric emission from long duration 
gamma-ray bursts powered by - variable engines",http://dx.doi.org/10.1093/mnras/stu1016 -The multilevel four-stroke swap engine and its environment,http://dx.doi.org/10.1088/1367-2630/16/9/095003 -A framework for contextual information retrieval from the WWW,http://arxiv.org/abs/1407.6100v1 -"A Noble Methodology for Users Work Process Driven Software Requirements - for Smart Handheld Devices",http://dx.doi.org/10.5121/ijsea.2014.5402 -"Efficiency and Large Deviations in Time-Asymmetric Stochastic Heat - Engines",http://dx.doi.org/10.1088/1367-2630/16/10/102003 -Scaling the Management of Extreme Programming Projects,http://arxiv.org/abs/1409.6604v1 -"Nonequilibrium fluctuations in quantum heat engines: Theory, example, - and possible solid state experiments",http://dx.doi.org/10.1088/1367-2630/17/3/035012 -Clock-Driven Quantum Thermal Engines,http://dx.doi.org/10.1088/1367-2630/17/4/045027 -Total cost of operating an information engine,http://dx.doi.org/10.1088/1367-2630/17/8/085001 -Quantum Otto engine with a spin $1/2$ coupled to an arbitrary spin,http://dx.doi.org/10.1103/PhysRevE.92.022142 -Lipkin-Meshkov-Glick Model in a Quantum Otto Cycle,http://dx.doi.org/10.1140/epjp/i2016-16197-0 -"Optimization in Engine Design via Formal Concept Analysis using Negative - Attributes",http://dx.doi.org/10.5121/csit.2016.60115 -A Model-Driven Engineering Approach for ROS using Ontological Semantics,http://arxiv.org/abs/1601.03998v1 -"Tracing Digital Footprints to Academic Articles: An Investigation of - PeerJ Publication Referral Data",http://arxiv.org/abs/1601.05271v1 -Central Engine Memory of Gamma-Ray Bursts and Soft Gamma-Ray Repeaters,http://dx.doi.org/10.3847/2041-8205/820/2/L32 -Maximum Efficiency of Low-Dissipation Heat Engines at Arbitrary Power,http://dx.doi.org/10.1088/1742-5468/2016/07/073204 -Single-ion quantum Otto engine with always-on bath interaction,http://dx.doi.org/10.1209/0295-5075/118/60003 -Implementing black hole as efficient power plant,http://dx.doi.org/10.1088/0253-6102/71/6/711 -"Non-Gaussian Random Generators in Bacteria Foraging Algorithm for - Multiobjective Optimization",http://dx.doi.org/10.4172/2169-0316.1000182 -Engineering Tagging Languages for DSLs,http://arxiv.org/abs/1606.05112v1 -"How Relevant is the Long Tail? 
A Relevance Assessment Study on Million - Short",http://dx.doi.org/10.1007/978-3-319-44564-9_20 -"A New Method to Optimize Finite Dimensions Thermodynamic Models: - application to an Irreversible Stirling Engine",http://arxiv.org/abs/1607.00814v1 -Josephson Quantum Heat Engine,http://dx.doi.org/10.1103/PhysRevApplied.6.054014 -"Stereotypes in Search Engine Results: Understanding The Role of Local - and Global Factors",http://arxiv.org/abs/1609.05413v2 -"Programming the Universe: The First Commandment of Software Engineering - for all Varieties of Information Systems",http://dx.doi.org/10.1145/2973839.2982567 -PRSP: A Plugin-based Framework for RDF Stream Processing,http://arxiv.org/abs/1701.03854v1 -A quantum Szilard engine without heat from a thermal reservoir,http://dx.doi.org/10.1088/1367-2630/aa8ba1 -Black hole thermodynamics and heat engines in conformal gravity,http://dx.doi.org/10.1142/S0218271817501516 -Complex wavefront engineering with disorder-engineered metasurfaces,http://dx.doi.org/10.1038/s41566-017-0078-z -Testing as an Investment,http://arxiv.org/abs/1708.00992v1 -"Towards Long-endurance Flight: Design and Implementation of a - Variable-pitch Gasoline-engine Quadrotor",http://arxiv.org/abs/1708.02922v1 -Software engineering and the SP Theory of Intelligence,http://arxiv.org/abs/1708.06665v2 -"Supporting Requirements Engineering Research that Industry Needs: The - Naming the Pain in Requirements Engineering Initiative",http://arxiv.org/abs/1710.04630v1 -How PHP Releases Are Adopted in the Wild?,http://dx.doi.org/10.1109/APSEC.2017.13 -"A Systematic Mapping Study on Requirements Engineering in Software - Ecosystems",http://dx.doi.org/10.4018/JITR.2018010104 -"Accelerating AdS black holes as the holographic heat engines in a - benchmarking scheme",http://dx.doi.org/10.1140/epjc/s10052-018-6137-x -"Nonlinear dynamics analysis of a low-temperature-differential kinematic - Stirling heat engine",http://dx.doi.org/10.1209/0295-5075/121/50004 -"Holographic heat engines and static black holes: a general efficiency - formula",http://dx.doi.org/10.1142/S0218271819500305 -Do Programmers Work at Night or During the Weekend?,http://dx.doi.org/10.1145/3180155.3180193 -"Universal constraint for efficiency and power of a low-dissipation heat - engine",http://dx.doi.org/10.1103/PhysRevE.98.042112 -"Natural Language or Not (NLoN) - A Package for Software Engineering Text - Analysis Pipeline",http://dx.doi.org/10.1145/3196398.3196444 -Measurement driven single temperature engine,http://dx.doi.org/10.1103/PhysRevE.98.042122 -"Cultural Influences on Requirements Engineering Process in the Context - of Saudi Arabia",http://arxiv.org/abs/1807.01930v1 -"Building a Sustainable Structure for Research Software Engineering - Activities",http://dx.doi.org/10.1109/eScience.2018.00015 -The Effect of Noise on Sofware Engineers' Performance,http://arxiv.org/abs/1807.04100v1 -Domain Knowledge Discovery Guided by Software Trace Links,http://arxiv.org/abs/1808.05209v1 -Let CONAN tell you a story: Procedural quest generation,http://arxiv.org/abs/1808.06217v1 -"nn-dependability-kit: Engineering Neural Networks for Safety-Critical - Autonomous Driving Systems",http://arxiv.org/abs/1811.06746v2 -"Software Engineering for Fairness: A Case Study with Hyperparameter - Optimization",http://arxiv.org/abs/1905.05786v2 -A Road Map to Bio-inspired Software Engineering,http://dx.doi.org/10.3923/rjit.2016.75.81 -Self-oscillations in an Alpha Stirling Engine: a bifurcation analysis,http://dx.doi.org/10.1137/19M1299293 
-Efficient and tunable Aharonov-Bohm quantum heat engine,http://dx.doi.org/10.1103/PhysRevB.100.235442
-"The Impact of Dynamics of Collaborative Software Engineering on Introverts: A Study Protocol",http://dx.doi.org/10.1145/3379597.3387505
-Value-based Engineering for Ethics by Design,http://arxiv.org/abs/2004.13676v2
-"A First Principles Approach for Data-Efficient System Identification of Spring-Rod Systems via Differentiable Physics Engines",http://arxiv.org/abs/2004.13859v1
-Josephson quantum spin thermodynamics,http://dx.doi.org/10.1088/1361-648X/ac6f3b
-Users' Perception of Search Engine Biases and Satisfaction,http://arxiv.org/abs/2105.02898v1
-Quantum engine efficiency bound beyond the second law of thermodynamics,http://dx.doi.org/10.1038/s41467-017-01991-6
-"Proceedings International Workshop on Formal Engineering approaches to Software Components and Architectures",http://dx.doi.org/10.4204/EPTCS.245
-"Requirements Engineering Practice and Problems in Agile Projects: Results from an International Survey",http://arxiv.org/abs/1703.08360v1
-"Let's hear it from RETTA: A Requirements Elicitation Tool for TrAffic management systems",http://arxiv.org/abs/1707.01927v1
-"An Automated Compatibility Prediction Engine using DISC Theory Based Classification and Neural Networks",http://arxiv.org/abs/1709.00539v1
-NeuralVis: Visualizing and Interpreting Deep Learning Models,http://arxiv.org/abs/1906.00690v1
-Single-atom heat engine as a sensitive thermal probe,http://arxiv.org/abs/2005.06858v1
-"Kaya: A Testing Framework for Blockchain-based Decentralized Applications",http://arxiv.org/abs/2006.01476v1
-"Integrating Deep Learning into CAD/CAE System: Generative Design and Evaluation of 3D Conceptual Wheel",http://dx.doi.org/10.1007/s00158-021-02953-9
-REBD:A Conceptual Framework for Big Data Requirements Engineering,http://dx.doi.org/10.5121/csit.2020.100608
-Technology Readiness Levels for AI & ML,http://arxiv.org/abs/2006.12497v3
-"Application of Statistical Methods in Software Engineering: Theory and Practice",http://arxiv.org/abs/2006.15624v1
-Revisiting the Core Ontology and Problem in Requirements Engineering,http://dx.doi.org/10.1109/RE.2008.13
-"Mapping The Best Practices of XP and Project Management: Well defined approach for Project Manager",http://arxiv.org/abs/1003.4077v2
-"Bounds of efficiency at maximum power for linear, superlinear and sublinear irreversible Carnot-like heat engines",http://dx.doi.org/10.1209/0295-5075/98/40001
-"Segmentation Based Approach to Dynamic Page Construction from Search Engine Results",http://arxiv.org/abs/1202.2617v1
-"Can an Ad-hoc ontology Beat a Medical Search Engine? The Chronious Search Engine case",http://arxiv.org/abs/1203.4494v1
-Challenges and Directions for Engineering Multi-agent Systems,http://arxiv.org/abs/1209.1428v1
-Carnot Cycle at Finite Power: Attainability of Maximal Efficiency,http://dx.doi.org/10.1103/PhysRevLett.111.050601
-Ontology Based Feature Driven Development Life Cycle,http://arxiv.org/abs/1307.4174v1
-"Evaluating the retrieval effectiveness of Web search engines using a representative query sample",http://arxiv.org/abs/1405.2210v1
-Theory of an optomechanical quantum heat engine,http://dx.doi.org/10.1103/PhysRevA.90.023819
-"Verifying Component and Connector Models against Crosscutting Structural Views",http://dx.doi.org/10.1145/2568225.2568237
-"On the Feasibility and Implications of Self-Contained Search Engines in the Browser",http://arxiv.org/abs/1410.4500v1
-Correctness of Backtest Engines,http://arxiv.org/abs/1509.08248v1
-Statistical Engineering: An Idea Whose Time Has Come?,http://arxiv.org/abs/1511.06013v1
-A Survey of Brain Inspired Technologies for Engineering,http://arxiv.org/abs/1610.09882v1
-"The Innovative Behaviour of Software Engineers: Findings from a Pilot Case Study",http://dx.doi.org/10.1145/2961111.2962589
-Extracting work from quantum measurement in Maxwell demon engines,http://dx.doi.org/10.1103/PhysRevLett.118.260603
-"Enabling Embedded Inference Engine with ARM Compute Library: A Case Study",http://arxiv.org/abs/1704.03751v3
-Reverse Engineering of Communications Networks: Evolution and Challenges,http://arxiv.org/abs/1704.05432v1
-Verification of Concurrent Engineering Software Using CSM Models,http://arxiv.org/abs/1704.06351v1
-Strider: A Hybrid Adaptive Distributed RDF Stream Processing Engine,http://arxiv.org/abs/1705.05688v1
-"Introducing Geometric Algebra to Geometric Computing Software Developers: A Computational Thinking Approach",http://arxiv.org/abs/1705.06668v1
-Towards Methods for Model-Based Software Development,http://arxiv.org/abs/1712.02448v2
-Interactions between Health Searchers and Search Engines,http://arxiv.org/abs/1712.03622v1
-"Analysis of the Social Community Based on the Network Growing Model in Open Source Software Community",http://dx.doi.org/10.1109/IWCIA.2015.7449480
-"Software Engineering for Millennials, by Millennials",http://dx.doi.org/10.1145/3194779.3194787
-"On the design of a decision engine for connected vehicles with an application to congestion management",http://arxiv.org/abs/1804.06935v1
-"A Chaos Engineering System for Live Analysis and Falsification of Exception-handling in the JVM",http://dx.doi.org/10.1109/TSE.2019.2954871
-"The theoretical and methodical foundations of usage of information and communication technologies in teaching higher mathematics engineering students in universities of the United States",http://arxiv.org/abs/1809.09557v1
-Thinging Ethics for Software Engineers,http://arxiv.org/abs/1810.02685v1
-"Perspective: Organic electronic materials and devices for neuromorphic engineering",http://dx.doi.org/10.1063/1.5042419
-"Closing the gap between software engineering education and industrial needs",http://arxiv.org/abs/1812.01954v1
-Quantum Synchronisation in Nanoscale Heat Engines,http://dx.doi.org/10.1103/PhysRevE.101.020201
-Quantum Coherence in a Quantum Heat Engine,http://dx.doi.org/10.1088/1751-8121/ab6a6b
-"The First 50 Years of Software Reliability Engineering: A History of SRE with First Person Accounts",http://arxiv.org/abs/1902.06140v1
-"Energy extraction of a chaotic system in a cyclic process: a Szilárd Engine perspective",http://dx.doi.org/10.1088/1742-5468/ab345a
-"Out-of-equilibrium operation of a quantum heat engine: The cost of thermal coupling control",http://dx.doi.org/10.1088/1367-2630/ab725a
-Regular Bardeen AdS Black Hole as a Heat Engine,http://dx.doi.org/10.1016/j.nuclphysb.2020.115166
-Happiness and the productivity of software engineers,http://arxiv.org/abs/1904.08239v1
-"Explainable Software for Cyber-Physical Systems (ES4CPS): Report from the GI Dagstuhl Seminar 19023, January 06-11 2019, Schloss Dagstuhl",http://arxiv.org/abs/1904.11851v1
-Boundary Objects and their Use in Agile Systems Engineering,http://dx.doi.org/10.1002/smr.2166
-"A Feature Based Methodology for Variable Requirements Reverse Engineering",http://dx.doi.org/10.11648/j.ajsea.20190801.11
-Concepts of work in autonomous quantum heat engines,http://dx.doi.org/10.22331/q-2019-10-14-195
-"How Bad Can a Bug Get? An Empirical Analysis of Software Failures in the OpenStack Cloud Computing Platform",http://dx.doi.org/10.1145/3338906.3338916
-"A Study on the Prevalence of Human Values in Software Engineering Publications, 2015-2018",http://arxiv.org/abs/1907.07874v1
-"System-Level Development of a User-Integrated Semi-Autonomous Lawn Mowing System: Problem Overview, Basic Requirements, and Proposed Architecture",http://arxiv.org/abs/1907.09558v1
-TinySearch -- Semantics based Search Engine using Bert Embeddings,http://arxiv.org/abs/1908.02451v1
-Adabot: Fault-Tolerant Java Decompiler,http://arxiv.org/abs/1908.06748v2
-Bound on efficiency of heat engine from uncertainty relation viewpoint,http://dx.doi.org/10.3390/e23040439
-Organizing genome engineering for the gigabase scale,http://dx.doi.org/10.1038/s41467-020-14314-z
-"Modeling, Identification and Control of Model Jet Engines for Jet Powered Robotics",http://dx.doi.org/10.1109/LRA.2020.2970572
-"Approximate Dynamic Programming for Real-time Dispatching and Relocation of Emergency Service Engineers",http://arxiv.org/abs/1910.01428v1
-"BACKUS: Comprehensive High-Performance Research Software Engineering Approach for Simulations in Supercomputing Systems",http://arxiv.org/abs/1910.06415v1
-"Challenges for Inclusion in Software Engineering: The Case of the Emerging Papua New Guinean Society",http://arxiv.org/abs/1911.04016v2
-"Reinforcement Learning-Driven Test Generation for Android GUI Applications using Formal Specifications",http://arxiv.org/abs/1911.05403v2
-"Quantum measurement engines and their relevance for quantum interpretations",http://dx.doi.org/10.1007/s40509-019-00217-2
-"Efficiency fluctuations and noise induced refrigerator-to-heater transition in information engines",http://dx.doi.org/10.1038/s41467-020-14823-x
-"Reaching and violating thermodynamic uncertainty bounds in information engines",http://dx.doi.org/10.1103/PhysRevE.102.032126
-A non-equilibrium quantum many-body Rydberg atom engine,http://dx.doi.org/10.1103/PhysRevLett.124.170602
-Optimization in Software Engineering -- A Pragmatic Approach,http://arxiv.org/abs/1912.02071v1
-"IMPRESS: Improving Engagement in Software Engineering Courses through Gamification",http://dx.doi.org/10.1007/978-3-030-35333-9_47
-Analysis of Software Engineering for Agile Machine Learning Projects,http://arxiv.org/abs/1912.07323v1
-"Engineers Code: reusable open learning modules for engineering computations",http://dx.doi.org/10.1109/MCSE.2020.2976002
-Dataset of Video Game Development Problems,http://arxiv.org/abs/2001.00491v2
-Montage: A Neural Network Language Model-Guided JavaScript Engine Fuzzer,http://arxiv.org/abs/2001.04107v2
-Engineering AI Systems: A Research Agenda,http://arxiv.org/abs/2001.07522v2
-"Authorship Attribution of Source Code: A Language-Agnostic Approach and Applicability in Software Engineering",http://arxiv.org/abs/2001.11593v2
-Critical heat current for operating an entanglement engine,http://dx.doi.org/10.1088/1367-2630/ab9983
-Grain boundary characteristics of Fe-based superconductors,http://dx.doi.org/10.1088/1361-6668/ab73ef
-"New Security Challenges on Machine Learning Inference Engine: Chip Cloning and Model Reverse Engineering",http://arxiv.org/abs/2003.09739v1
-Rapid Reviews in Software Engineering,http://arxiv.org/abs/2003.10006v1
-"Two-level quantum Otto heat engine operating with unit efficiency far from the quasi-static regime under a squeezed reservoir",http://dx.doi.org/10.1088/1361-6455/abcfd9
-"Performance of quantum heat engines under the influence of long-range interactions",http://dx.doi.org/10.1103/PhysRevE.102.012138
-Quantum Software Engineering: Landscapes and Horizons,http://arxiv.org/abs/2007.07047v2
-"Opening the Software Engineering Toolbox for the Assessment of Trustworthy AI",http://arxiv.org/abs/2007.07768v2
-"Intelligent requirements engineering from natural language and their chaining toward CAD models",http://arxiv.org/abs/2007.07825v1
-Beyond Accuracy: Assessing Software Documentation Quality,http://arxiv.org/abs/2007.10744v3
-"On Manually Reverse Engineering Communication Protocols of Linux Based IoT Systems",http://arxiv.org/abs/2007.11981v2
-Energy optimization of two-level quantum Otto machines,http://arxiv.org/abs/2008.05002v1
-Automatic Storage Structure Selection for hybrid Workload,http://arxiv.org/abs/2008.06640v1
-"Deep Learning & Software Engineering: State of Research and Future Directions",http://arxiv.org/abs/2009.08525v1
-"Constraints on the abundance of $0.01\,c$ stellar engines in the Milky Way",http://dx.doi.org/10.3847/1538-4357/abc69c
-An endoreversible quantum heat engine driven by atomic collisions,http://dx.doi.org/10.1038/s41467-021-22222-z
-"A Physics-Informed Machine Learning Approach for Solving Heat Transfer Equation in Advanced Manufacturing and Engineering Applications",http://dx.doi.org/10.1016/j.engappai.2021.104232
-Questions for Data Scientists in Software Engineering: A Replication,http://dx.doi.org/10.1145/3368089.3409717
-"Assessment of Off-the-Shelf SE-specific Sentiment Analysis Tools: An Extended Replication Study",http://dx.doi.org/10.1007/s10664-021-09960-w
-"Matrix Engines for High Performance Computing:A Paragon of Performance or Grasping at Straws?",http://arxiv.org/abs/2010.14373v2
-Spring-Rod System Identification via Differentiable Physics Engine,http://arxiv.org/abs/2011.04910v1
-"Hierarchical spline for time series forecasting: An application to Naval ship engine failure rate",http://dx.doi.org/10.22541/au.159969715.57074848
-"Testing the Stationarity Assumption in Software Effort Estimation Datasets",http://dx.doi.org/10.18293/SEKE2020-159
-"Temperature dependent maximization of work and efficiency in a degeneracy assisted quantum Stirling heat engine",http://dx.doi.org/10.1103/PhysRevE.103.062109
-"Incorporating Vision Bias into Click Models for Image-oriented Search Engine",http://arxiv.org/abs/2101.02459v1
-"The role physics can play in a multi-disciplinary curriculum for non-physics scientists and engineers",http://arxiv.org/abs/2101.12622v1
-"The Impact of Sampling and Rule Set Size on Generated Fuzzy Inference System Predictive Accuracy: Analysis of a Software Engineering Data Set",http://dx.doi.org/10.1007/978-3-642-23960-1_43
-"Machine Learning Model Development from a Software Engineering Perspective: A Systematic Literature Review",http://arxiv.org/abs/2102.07574v1
-Light-matter quantum Otto engine in finite time,http://arxiv.org/abs/2102.10559v1
-"Dispersion-engineered $χ^{(2)}$ nanophotonics: a flexible tool for nonclassical light",http://arxiv.org/abs/2103.02296v3
-"Diagrammatic Formalism for Complex Systems: More than One Way to Eventize a Railcar System",http://arxiv.org/abs/2103.02820v1
-IoT Roadmap: Support for Internet of Things Software Systems Engineering,http://arxiv.org/abs/2103.04969v3
-Designing a Bot for Efficient Distribution of Service Requests,http://arxiv.org/abs/2103.05970v1
-Incoherent control of optical signals; quantum heat engine approach,http://dx.doi.org/10.1103/PhysRevResearch.3.023029
-"Experiences and insights from using Github Classroom to support Project-Based Courses",http://dx.doi.org/10.1109/SEENG53126.2021.00013
-"Robust Machine Learning in Critical Care -- Software Engineering and Medical Perspectives",http://arxiv.org/abs/2103.08291v1
-"Towards Productizing AI/ML Models: An Industry Perspective from Data Scientists",http://dx.doi.org/10.1109/WAIN52551.2021.00027
-Thermodynamically-informed Air-based Soft Heat Engine Design,http://arxiv.org/abs/2103.14157v1
-The Quantum Otto Heat Engine with a relativistically moving thermal bath,http://dx.doi.org/10.1007/s10773-021-04969-9
-Multi-Objective Reconstruction Of Software Architecture,http://dx.doi.org/10.1142/S0218194018500262
-"Challenges Women in Software Engineering Leadership Roles Face: A Qualitative Study",http://arxiv.org/abs/2104.13982v1
-"Detecting race and gender bias in visual representation of AI on web search engines",http://dx.doi.org/10.1007/978-3-030-78818-6_5
-Optimizing Thermodynamic Cycles with Two Finite-Sized Reservoirs,http://dx.doi.org/10.1103/PhysRevE.105.L022101
-"Quantum heat engines with complex working media, complete Otto cycles and heuristics",http://dx.doi.org/10.3390/e23091149
-"Text Mining Undergraduate Engineering Programs' Applications: the Role of Gender, Nationality, and Socio-economic Status",http://arxiv.org/abs/2107.14034v4
-Maximal fluctuation exploitation in Gaussian information engines,http://dx.doi.org/10.1103/PhysRevE.104.044122
-Cloud Native Privacy Engineering through DevPrivOps,http://dx.doi.org/10.1007/978-3-030-99100-5_10
-"Exploring Explainability: A Definition, a Model, and a Knowledge Catalogue",http://arxiv.org/abs/2108.03012v1
-Recommender Systems for Software Project Managers,http://dx.doi.org/10.1145/3463274.3463951
-"On the Relation of Trust and Explainability: Why to Engineer for Trustworthiness",http://arxiv.org/abs/2108.05379v2
-The problem of engines in statistical physics,http://dx.doi.org/10.3390/e23081095
-Software Development Processes in Ocean System Modeling,http://dx.doi.org/10.1142/S1793962322300023
-Term Interrelations and Trends in Software Engineering,http://dx.doi.org/10.1145/3468264.3473132
-Patent-KG: Patent Knowledge Graph Use for Engineering Design,http://arxiv.org/abs/2108.11899v3
-A Reasoning Engine for the Gamification of Loop-Invariant Discovery,http://arxiv.org/abs/2109.01121v1
-Broccoli: Bug localization with the help of text search engines,http://arxiv.org/abs/2109.11902v2
-"Human Capabilities as Guiding Lights for the Field of AI-HRI: Insights from Engineering Education",http://arxiv.org/abs/2110.03026v1
-"Readability and Understandability of Snippets Recommended by General-purpose Web Search Engines: a Comparative Study",http://arxiv.org/abs/2110.07087v1
-"Obtaining efficient thermal engines from interacting Brownian particles under time dependent periodic drivings",http://dx.doi.org/10.1103/PhysRevE.105.024106
-Ontology-based question answering over corporate structured data,http://arxiv.org/abs/2111.04507v1
-What Does the Post-Moore Era Mean for Research Software Engineering?,http://arxiv.org/abs/2111.05999v1
-"Quantum signatures in quadratic optomechanical heat engine with an atom in a tapered trap",http://dx.doi.org/10.1364/JOSAB.472901
-"Teaching Undergraduate Students to Think Like Real-World Systems Engineers: A Technology-Based Hybrid Learning Approach",http://arxiv.org/abs/2111.13559v1
-"Unified trade-off optimization of quantum harmonic Otto engine and refrigerator",http://dx.doi.org/10.1103/PhysRevE.106.024137
-"Machine Learning for Computational Science and Engineering -- a brief introduction and some critical questions",http://arxiv.org/abs/2112.12054v1
-"Theoretical realization of hybrid Weyl state and associated high catalytic performance for hydrogen evolution in NiSi",http://dx.doi.org/10.1016/j.isci.2021.103543
-"MISO hierarchical inference engine satisfying the law of importation with aggregation functions",http://arxiv.org/abs/2112.12808v4
-"Recruiting credible participants for field studies in software engineering research",http://arxiv.org/abs/2112.14186v1
-"Working in Harmony: Towards Integrating RSEs into Multi-Disciplinary CSE Teams",http://arxiv.org/abs/2201.04010v1
-Two-stroke Quantum Measurement Heat Engine,http://arxiv.org/abs/2201.06303v1
-"Just Enough, Just in Time, Just for ""Me"": Fundamental Principles for Engineering IoT-native Software Systems",http://dx.doi.org/10.1145/3510455.3512785
-Model-Based Engineering of CPPS Functions and Code Generation for Skills,http://dx.doi.org/10.1109/ICPS51978.2022.9816919
-Atomistic Engineering of Phonons in Functional Oxide Heterostructures,http://dx.doi.org/10.1002/advs.202103403
-Scenario-Assisted Deep Reinforcement Learning,http://arxiv.org/abs/2202.04337v1
-"Social Engineering Attacks and Defenses in the Physical World vs. Cyberspace: A Contrast Study",http://arxiv.org/abs/2203.04813v1
-"Revisiting Digital Twins: Origins, Fundamentals and Practices",http://arxiv.org/abs/2203.12867v1
-"Application-driven engagement with universities, synergies with other funding agencies",http://arxiv.org/abs/2203.14706v1
-"Exploring Opportunities in Usable Hazard Analysis Processes for AI Engineering",http://arxiv.org/abs/2203.15628v1
-A Catalogue of Concerns for Specifying Machine Learning-Enabled Systems,http://arxiv.org/abs/2204.07662v2
-Public awareness and attitudes towards search engine optimization,http://dx.doi.org/10.1080/0144929X.2022.2056507
-"Understanding Quantum Software Engineering Challenges An Empirical Study on Stack Exchange Forums and GitHub Issues",http://arxiv.org/abs/2205.03181v1
-"Microfluidic cell engineering on high-density microelectrode arrays for assessing structure-function relationships in living neuronal networks",http://dx.doi.org/10.3389/fnins.2022.943310
-"Digital Sovereignty and Software Engineering for the IoT-laden, AI/ML-driven Era",http://arxiv.org/abs/2205.14137v1
-Unruh quantum Otto engine in the presence of a reflecting boundary,http://dx.doi.org/10.1007/JHEP09(2022)105
-"COREQQA -- A COmpliance REQuirements Understanding using Question Answering Tool",http://arxiv.org/abs/2206.10233v1
-"Driving Digital Engineering Integration and Interoperability Through Semantic Integration of Models with Ontologies",http://arxiv.org/abs/2206.10454v1
-"On the Application of Agile Project Management Techniques, V-Model and Recent Software Tools in Postgraduate Theses Supervision",http://arxiv.org/abs/2207.04848v1
-Development of VR Teaching System for Engine Dis-assembly,http://arxiv.org/abs/2207.05265v1
-Spatial anomaly detection with optimal transport,http://arxiv.org/abs/2207.06166v1
-Design of VR Engine Assembly Teaching System,http://arxiv.org/abs/2207.07119v1
-"A Comparison of Source Distribution and Result Overlap in Web Search Engines",http://arxiv.org/abs/2207.07330v1
-Information engine in a nonequilibrium bath,http://dx.doi.org/10.1103/PhysRevLett.131.057101
-Taming Multi-Output Recommenders for Software Engineering,http://arxiv.org/abs/2208.00443v1
-Requirements engineering in open innovation: a research agenda,http://dx.doi.org/10.1145/2785592.2795370
-Requirements Analysis and Management for Benefiting Openness,http://dx.doi.org/10.1109/REW.2016.062
-"Core and Periphery as Closed-System Precepts for Engineering General Intelligence",http://arxiv.org/abs/2208.02837v1
-Quantum thermochemical engines,http://arxiv.org/abs/2208.04132v1
-Quadratic Enhancement in the Reliability of Collective Quantum Engines,http://dx.doi.org/10.1103/PhysRevA.107.L040202
-Diffusion Models Beat GANs on Topology Optimization,http://arxiv.org/abs/2208.09591v2
-Challenges of Implementing Agile Processes in Remote-First Companies,http://arxiv.org/abs/2209.04376v1
-"This is what a pandemic looks like: Visual framing of COVID-19 on search engines",http://arxiv.org/abs/2209.11120v1
-Research Software Engineers: Career Entry Points and Training Gaps,http://dx.doi.org/10.1109/MCSE.2023.3258630
-"Recent Advances in Uncertainty Quantification Methods for Engineering Problems",http://arxiv.org/abs/2211.03012v1
-Causal inference for data centric engineering,http://arxiv.org/abs/2211.13618v1
-Quantum Software Engineering: A New Genre of Computing,http://arxiv.org/abs/2211.13990v1
-Industry Best Practices in Robotics Software Engineering,http://arxiv.org/abs/2212.04877v1
-"Emerging Mobile Phone-based Social Engineering Cyberattacks in the Zambian ICT Sector",http://arxiv.org/abs/2212.13721v1
-"A Mapping of Assurance Techniques for Learning Enabled Autonomous Systems to the Systems Engineering Lifecycle",http://dx.doi.org/10.1109/ICAA52185.2022.00013
-"The Evolution of Web Search User Interfaces -- An Archaeological Analysis of Google Search Engine Result Pages",http://dx.doi.org/10.1145/3576840.3578320
-"Requirements Practices and Gaps When Engineering Human-Centered Artificial Intelligence Systems",http://arxiv.org/abs/2301.10404v1
-Teaching MLOps in Higher Education through Project-Based Learning,http://dx.doi.org/10.1109/ICSE-SEET58685.2023.00015
-"Requirements Rationalization and Synthesis enabled by Model Synchronization",http://arxiv.org/abs/2302.05980v1
-"VEGETA: Vertically-Integrated Extensions for Sparse/Dense GEMM Tile Acceleration on CPUs",http://arxiv.org/abs/2302.08687v2
-"Multi-modal Machine Learning in Engineering Design: A Review and Future Directions",http://arxiv.org/abs/2302.10909v2
-"Parameter-free shape optimization: various shape updates for engineering applications",http://dx.doi.org/10.3390/aerospace10090751
-Quantum Engineering for Energy Applications,http://arxiv.org/abs/2303.01632v1
-"CHESS: A Framework for Evaluation of Self-adaptive Systems based on Chaos Engineering",http://arxiv.org/abs/2303.07283v1
-Thermodynamic engine with a quantum degenerate working fluid,http://dx.doi.org/10.1103/PhysRevResearch.5.L042009
-Optimal performance of voltage-probe quantum heat engines,http://arxiv.org/abs/2304.10942v1
-Stochastic Heat Engine Using Multiple Interacting Active Particles,http://arxiv.org/abs/2304.11867v1
-"Towards a Scalable Proof Engine: A Performant Prototype Rewriting Primitive for Coq",http://arxiv.org/abs/2305.02521v1
-"Empathy Models and Software Engineering -- A Preliminary Analysis and Taxonomy",http://arxiv.org/abs/2305.03941v1
-"Leveraging Machine Learning to Gain Insights on Quantum Thermodynamic Entropy",http://arxiv.org/abs/2305.06177v1
-Large Language Models are Built-in Autoregressive Search Engines,http://arxiv.org/abs/2305.09612v1
-"ChatGPT: A Study on its Utility for Ubiquitous Software Engineering Tasks",http://arxiv.org/abs/2305.16837v1
-"Active-passive mixtures with bulk loading: a minimal active engine in one dimension",http://dx.doi.org/10.1088/1742-5468/acecfa
-"What Pakistani Computer Science and Software Engineering Students Think about Software Testing?",http://dx.doi.org/10.1109/APSEC57350.2022.00087
-EvolveMT: an Ensemble MT Engine Improving Itself with Usage Only,http://arxiv.org/abs/2306.11823v1
-A Quantum Otto Engine with Shortcuts to Thermalization and Adiabaticity,http://arxiv.org/abs/2306.14847v4
-"Navigating the Complexity of Generative AI Adoption in Software Engineering",http://arxiv.org/abs/2307.06081v1
-web crawler strategies for web pages under robot.txt restriction,http://arxiv.org/abs/2308.04689v1
-"Potential of Deep Operator Networks in Digital Twin-enabling Technology for Nuclear System",http://arxiv.org/abs/2308.07523v1
-EdgeFL: A Lightweight Decentralized Federated Learning Framework,http://arxiv.org/abs/2309.02936v1
-"DAnTE: a taxonomy for the automation degree of software engineering tasks",http://arxiv.org/abs/2309.14903v1
-Large Language Models for Software Engineering: Survey and Open Problems,http://arxiv.org/abs/2310.03533v3
-PeaTMOSS: Mining Pre-Trained Models in Open-Source Software,http://arxiv.org/abs/2310.03620v1
-"Introducing High School Students to Version Control, Continuous Integration, and Quality Assurance",http://arxiv.org/abs/2310.03914v1
-"Digital Deception: Generative Artificial Intelligence in Social Engineering and Phishing",http://arxiv.org/abs/2310.13715v1
-"Insight-HXMT and GECAM-C observations of the brightest-of-all-time GRB 221009A",http://arxiv.org/abs/2303.01203v2
-Line Profiles from Different Accretion Engine Geometries,http://arxiv.org/abs/astro-ph/0211236v1
-"A General Architecture for Language Engineering (GATE) - a new approach to Language Engineering R&D",http://arxiv.org/abs/cmp-lg/9601009v2
-Atomic scale engines: Cars and wheels,http://dx.doi.org/10.1103/PhysRevLett.84.6058
-Sculptured Thin Films: Accomplishments and Emerging Uses,http://arxiv.org/abs/cond-mat/0112070v1
-g-factor engineering and control in self-assembled quantum dots,http://dx.doi.org/10.1007/s00339-003-2241-2
-Multiscale Discrete Dislocation Dynamics Plasticity,http://arxiv.org/abs/cond-mat/0509531v1
-Second Product Line Practice Workshop Report,http://arxiv.org/abs/cs/9811007v1
-SATEN: An Object-Oriented Web-Based Revision and Extraction Engine,http://arxiv.org/abs/cs/0003059v1
-On the Information Engine of Circuit Design,http://arxiv.org/abs/cs/0207014v1
-"Towards a Model-Based Framework for Integrating Usability and Software Engineering Life Cycles",http://arxiv.org/abs/cs/0402036v1
-Web pages search engine based on DNS,http://arxiv.org/abs/cs/0403035v1
-Underwater Hacker Missile Wars: A Cryptography and Engineering Contest,http://arxiv.org/abs/cs/0509053v1
-Mass production requires precision engineering,http://arxiv.org/abs/hep-th/9806200v1
-Reverse geometric engineering of singularities,http://dx.doi.org/10.1088/1126-6708/2002/04/052
-Chaotic Combustion in Spark Ignition Engines,http://dx.doi.org/10.1016/S0960-0779(03)00031-6
-The Stirling Engine-Refrigerator: Rich Pedagogy from Applied Physics,http://arxiv.org/abs/physics/0112061v1
-"Spatially selective loading of an optical lattice by light-shift engineering using an auxiliary laser field",http://dx.doi.org/10.1088/1367-2630/8/1/011
-Bacterial self-organisation and computation,http://arxiv.org/abs/q-bio/0512017v1
-Decoherence in a single trapped ion due to engineered reservoir,http://dx.doi.org/10.1088/1464-4266/3/1/301
-Engineering cavity-field states by projection synthesis,http://dx.doi.org/10.1103/PhysRevA.62.043810
-Sundays in a Quantum Engineer's Life,http://arxiv.org/abs/quant-ph/0104140v1
-Quantum Heat Engines Using Superconducting Quantum Circuits,http://dx.doi.org/10.1103/PhysRevLett.97.180402
-Hamiltonian engineering for quantum systems,http://arxiv.org/abs/quant-ph/0602014v2
-Half-metallic silicon nanowires,http://arxiv.org/abs/0704.0109v1
-Conference Summary: The Central Engine of Active Galactic Nuclei,http://arxiv.org/abs/0705.2192v1
-Light and Electromagnetic Waves Teaching in Engineering Education,http://arxiv.org/abs/0711.0452v1
-Looking Beyond Content: Skill development for engineers,http://arxiv.org/abs/0802.2950v1
-Design Patterns for Complex Event Processing,http://arxiv.org/abs/0806.1100v1
-"A Basic Thermodynamic Derivation of the Maximum Overburden Pressure Generated in Frost Heave",http://arxiv.org/abs/0808.2488v1
-Engineering Giant Nonlinearities in Quantum Nanosystems,http://arxiv.org/abs/0809.2993v2
-"Direct Measurement of Piezoelectric Response around Ferroelectric Domain Walls in Crystals with Engineered Domain Configuration",http://dx.doi.org/10.1103/PhysRevB.81.024114
-"Towards a Theory of Requirements Elicitation: Acceptability Condition for the Relative Validity of Requirements",http://arxiv.org/abs/0902.0924v1
-An exactly solvable model of a highly efficient thermoelectric engine,http://arxiv.org/abs/0905.3997v2
-Cooling classical particles with a microcanonical Szilard engine,http://dx.doi.org/10.1103/PhysRevLett.104.245704
-Observation of soliton pulse compression in photonic crystal waveguides,http://dx.doi.org/10.1038/nphoton.2010.261
-Database Reverse Engineering based on Association Rule Mining,http://arxiv.org/abs/1004.3272v1
-"Component Interaction Graph: A new approach to test component composition",http://arxiv.org/abs/1006.2812v1
-How to build a DNA search engine like Google?,http://arxiv.org/abs/1006.4114v4
-"Building Reusable Software Component For Optimization Check in ABAP Coding",http://dx.doi.org/10.5121/ijsea.2010.1303
-"Realization and Test of the Engineering Prototype of the CALICE Tile Hadron Calorimeter",http://arxiv.org/abs/1011.4760v1
-Control System Design Using Finite Laplace Transform Theory,http://arxiv.org/abs/1101.4347v1
-Education for Computational Science and Engineering,http://arxiv.org/abs/1102.4651v3
-"Engineering of radiation of optically active molecules with chiral nano-meta-particles",http://dx.doi.org/10.1209/0295-5075/97/47004
-What role does the third law of thermodynamics play in Szilard engines?,http://arxiv.org/abs/1108.3644v3
-Web Pages Clustering: A New Approach,http://arxiv.org/abs/1108.5703v1
-"Role of the superposition principle for enhancing the efficiency of the quantum-mechanical Carnot engine",http://dx.doi.org/10.1103/PhysRevE.85.011104
-"Advanced engineering design as practiced today from the view point of the CERN Industrial Liaison Officer",http://arxiv.org/abs/1201.3189v1
-A Knowledge Engineering Method for New Product Development,http://dx.doi.org/10.3166/jds.19.117-133
-"What Should Developers Be Aware Of? An Empirical Study on the Directives of API Documentation",http://dx.doi.org/10.1007/s10664-011-9186-4
-Experience on Re-engineering Applying with Software Product Line,http://arxiv.org/abs/1206.4120v1
-Automatic Test Generation for Space,http://dx.doi.org/10.4230/OASIcs.SLATE.2012.185
-"Web-page Prediction for Domain Specific Web-search using Boolean Bit Mask",http://arxiv.org/abs/1206.5584v1
-Green Traffic Engineering for Future Core Networks,http://arxiv.org/abs/1207.0157v1
-"Educating and Training Accelerator Scientists and Technologists for Tomorrow",http://dx.doi.org/10.1142/9789814449953_0012
-An Adaptive Online Ad Auction Scoring Algorithm for Revenue Maximization,http://arxiv.org/abs/1207.4701v1
-An Internet Approach for Engineering Student Exercises,http://arxiv.org/abs/1208.1969v1
-The Oblique Basis Method from an Engineering Point of View,http://dx.doi.org/10.1088/1742-6596/403/1/012005
-"Dependability-Explicit Engineering with Event-B: Overview of Recent Achievements",http://arxiv.org/abs/1210.7032v1
-Scaling Genetic Programming for Source Code Modification,http://arxiv.org/abs/1211.5098v1
-"Formal Verification, Engineering and Business Value",http://dx.doi.org/10.4204/EPTCS.105.1
-An Application of Uncertain Reasoning to Requirements Engineering,http://arxiv.org/abs/1301.6678v1
-"Semantic integration process of business components to support information system designers",http://dx.doi.org/10.5121/ijwest.2013.4104
-Network Engineering for Complex Belief Networks,http://arxiv.org/abs/1302.3591v1
-"The Powerful Model Adpredictor for Search Engine Switching Detection Challenge",http://arxiv.org/abs/1303.2156v1
-Work Issues in Software Engineering,http://arxiv.org/abs/1303.2646v1
-Lowering the Barrier to Reuse through Test-Driven Search,http://dx.doi.org/10.1109/SUITE.2009.5070015
-Efficiency of heat engines coupled to nonequilibrium reservoirs,http://dx.doi.org/10.1209/0295-5075/106/20001
-Room temperature multiferroism in CaTcO$_3$ by interface engineering,http://arxiv.org/abs/1306.3433v1
-Composite Particles and the Szilard Engine,http://arxiv.org/abs/1308.1525v1
-"Experiences from Software Engineering of Large Scale AMR Multiphysics Code Frameworks",http://dx.doi.org/10.5334/jors.am
-"SIED, a Data Privacy Engineering Framework",http://arxiv.org/abs/1309.6576v1
-Misfire Detection in IC Engine using Kstar Algorithm,http://arxiv.org/abs/1310.3717v1
-Semantic Jira - Semantic Expert Finder in the Bug Tracking Tool Jira,http://arxiv.org/abs/1312.5150v1
-"The prospect of using LES and DES in engineering design, and the research required to get there",http://dx.doi.org/10.1098/rsta.2013.0329
-Design a Persian Automated Plagiarism Detector (AMZPPD),http://dx.doi.org/10.14445/22315381/IJETT-V8P280
-"The assertive profile of the Bulgarian students in computer science and computer engineering",http://dx.doi.org/10.5815/ijeme.2014.01.01
-Intermittent Control in Man and Machine,http://arxiv.org/abs/1407.3543v1
-An all-optical nanomechanical heat engine,http://dx.doi.org/10.1103/PhysRevLett.114.183602
-"Making FPGAs Accessible to Scientists and Engineers as Domain Expert Software Programmers with LabVIEW",http://arxiv.org/abs/1408.4715v1
-Validation of the development methodologies,http://arxiv.org/abs/1408.5511v1
-Agile Modeling with the UML,http://arxiv.org/abs/1409.6767v1
-Hierarchical XP,http://arxiv.org/abs/1409.6768v1
-Stochastic Efficiency for Effusion as a Thermal Engine,http://dx.doi.org/10.1209/0295-5075/109/20004
-"Derivative coordinates for analytic tree fractals and fractal engineering",http://arxiv.org/abs/1501.01675v1
-Slice Sampling for Probabilistic Programming,http://arxiv.org/abs/1501.04684v1
-Chapter 9 TISSUE ENGINEERING,http://arxiv.org/abs/1501.05006v1
-Role of measurement-feedback separation in autonomous Maxwell's demons,http://dx.doi.org/10.1088/1367-2630/17/4/045012
-Flatband Engineering of Mobility Edges,http://dx.doi.org/10.1103/PhysRevB.91.235134
-Covalent pathways in engineering h-BN supported graphene,http://arxiv.org/abs/1503.02219v1
-"Quantum Isothermal Reversible Process of Particles in a Box with a Delta Potential",http://dx.doi.org/10.3938/jkps.66.739
-Towards the Ontology Web Search Engine,http://arxiv.org/abs/1505.00755v1
-Magneto-strain-driven quantum engine on a graphene flake,http://dx.doi.org/10.1103/PhysRevE.91.052152
-Targeting engineering synchronization in chaotic systems,http://dx.doi.org/10.1142/S0129183116500066
-Embedded Formative Assessment in the Undergraduate Engineering Classroom,http://arxiv.org/abs/1506.07205v1
-"Aspect OntoMaven - Aspect-Oriented Ontology Development and Configuration With OntoMaven",http://arxiv.org/abs/1507.00212v1
-Implications of MBTI in Software Engineering Education,http://dx.doi.org/10.1145/820127.820185
-What Is Software Engineering?,http://arxiv.org/abs/1508.02031v2
-"Supporting Developers in Porting Software via Combined Textual and Structural Analysis of Software Artifacts",http://arxiv.org/abs/1508.04044v1
-"Quantum optimal environment engineering for efficient photoinduced charge separation",http://arxiv.org/abs/1508.04481v1
-Automation of Smartphone Traffic Generation in a Virtualized Environment,http://arxiv.org/abs/1510.07830v1
-Non-Hermitian tight-binding network engineering,http://dx.doi.org/10.1103/PhysRevA.93.022102
-Maximum efficiency of steady-state heat engines at arbitrary power,http://dx.doi.org/10.1103/PhysRevE.93.050101
-Chiral algebra of Argyres-Douglas theory from M5 brane,http://dx.doi.org/10.1103/PhysRevD.103.065003
-Extracted Social Network Mining,http://arxiv.org/abs/1604.06976v1
-Reference and Structure of Software Engineering Theories,http://arxiv.org/abs/1604.07973v1
-A Theory of Service Dependency,http://dx.doi.org/10.4204/EPTCS.209.9
-Swift: Compiled Inference for Probabilistic Programming Languages,http://arxiv.org/abs/1606.09242v1
-"On Formal Methods for Collective Adaptive System Engineering. {Scalable Approximated, Spatial} Analysis Techniques. Extended Abstract",http://dx.doi.org/10.4204/EPTCS.217.7
-Incentive Engineering Framework for Crowdsourcing Systems,http://arxiv.org/abs/1609.01348v1
-Vortex coronagraphy from self-engineered liquid crystal spin-orbit masks,http://dx.doi.org/10.1364/OL.41.005234
-"Ontologies for Privacy Requirements Engineering: A Systematic Literature Review",http://arxiv.org/abs/1611.10097v1
-Microservices Science and Engineering,http://arxiv.org/abs/1706.07350v1
-"What Happens to Intentional Concepts in Requirements Engineering If Intentional States Cannot Be Known?",http://arxiv.org/abs/1706.10133v1
-"A novel metaheuristic method for solving constrained engineering optimization problems: Drone Squadron Optimization",http://arxiv.org/abs/1708.01368v1
-"DARVIZ: Deep Abstract Representation, Visualization, and Verification of Deep Learning Models",http://dx.doi.org/10.1109/ICSE-NIER.2017.13
-"E-learning Information Technology Based on an Ontology Driven Learning Engine",http://arxiv.org/abs/1710.05912v1
-Thermal bath Engineering for Swift Equilibration,http://dx.doi.org/10.1103/PhysRevE.98.010104
-Mitigating Spreadsheet Model Risk with Python Open Source Infrastructure,http://arxiv.org/abs/1801.09771v1
-"On Decision Support for Remote Industrial Facilities using the Collaborative Engineering Framework",http://arxiv.org/abs/1802.02227v1
-"Efficiency and power of minimally nonlinear irreversible heat engines with broken time-reversal symmetry",http://dx.doi.org/10.3390/e21070717
-"Tunable high-resolution macroscopic self-engineered geometric phase optical elements",http://dx.doi.org/10.1103/PhysRevLett.121.033901
-"Quantum correlations and thermodynamic performances of two-qubit engines with local and collective baths",http://dx.doi.org/10.1103/PhysRevA.98.042102
-Cooperative many-body enhancement of quantum thermal machine power,http://dx.doi.org/10.1088/1367-2630/aaed55
-"Mechanical Engineers Training in Using Cloud and Mobile Services in Professional Activity",http://arxiv.org/abs/1807.00313v1
-"Shortcut-to-adiabaticity Otto engine: A twist to finite-time thermodynamics",http://dx.doi.org/10.1103/PhysRevE.99.022110
-Automatic Generation of a Hybrid Query Execution Engine,http://arxiv.org/abs/1808.05448v1
-"Engineering fidelity of the generalized Pauli channels via legitimate memory kernels",http://dx.doi.org/10.1103/PhysRevA.100.012303
-"Toward Human-Like Summaries Generated from Heterogeneous Software Artefacts",http://arxiv.org/abs/1905.02258v1
-"Orbital Floquet Engineering of Exchange Interactions in Magnetic Materials",http://dx.doi.org/10.1103/PhysRevB.100.220403
-On Motion Control and Machine Learning for Robotic Assembly,http://arxiv.org/abs/1905.11129v1
-"Tuning the Magnetic Ground State by Charge Transfer Energy in SrCoO2.5 via Strain Engineering",http://arxiv.org/abs/1905.11645v1
-Exploratory Test Agents for Stateful Software Systems,http://dx.doi.org/10.1145/3338906.3341458
-"Model-driven Engineering of Safety and Security Systems: A Systematic Mapping Study",http://arxiv.org/abs/2004.08471v1
-Engineering Economics in the Conflux Network,http://arxiv.org/abs/2004.13696v1
-Unifying Requirements and Code: an Example,http://arxiv.org/abs/1602.05395v1
-"Persistence of Physics and Engineering Students via Peer Mentoring, Active Learning, and Intentional Advising",http://dx.doi.org/10.1088/0143-0807/37/6/065702
-Massive Computation for Understanding Core-Collapse Supernova Explosions,http://dx.doi.org/10.1109/MCSE.2016.81
-"Universal Coherence-Induced Power Losses
of Quantum Heat Engines in - Linear Response",http://dx.doi.org/10.1103/PhysRevLett.119.170602 -Stochastic Stirling engine operating in contact with active baths,http://dx.doi.org/10.3390/e19050193 -Concurrent Software Design Based on Constraints on State Diagrams,http://arxiv.org/abs/1703.08242v1 -"Strained graphene based highly efficient quantum heat engine operating - at maximum power",http://dx.doi.org/10.1103/PhysRevE.96.032118 -"General relations between the power, efficiency and dissipation for the - irreversible heat engines in the nonlinear response regime",http://dx.doi.org/10.1103/PhysRevE.97.012141 -Annotation based automatic action processing,http://arxiv.org/abs/1709.07654v2 -Towards a Semantic Search Engine for Scientific Articles,http://dx.doi.org/10.1007/978-3-319-67008-9_54 -Software Engineering Practices for Machine Learning,http://dx.doi.org/10.1109/MC.2022.3160276 -A Taxonomy for Virtual and Augmented Reality in Education,http://arxiv.org/abs/1906.12051v1 -Stochastic Floquet quantum heat engines and stochastic efficiencies,http://dx.doi.org/10.1103/PhysRevE.101.062144 -Stopwords in Technical Language Processing,http://dx.doi.org/10.1371/journal.pone.0254937 -Optomechanical Stirling heat engine driven by feedback-controlled light,http://dx.doi.org/10.1103/PhysRevA.102.053502 -Entropy generation and jet engine optimization,http://arxiv.org/abs/1012.4201v1 -Predicting User Actions in Software Processes,http://arxiv.org/abs/1110.1301v1 -Measurement and Particle Statistics in the Szilard Engine,http://dx.doi.org/10.1038/srep06995 -FDB: A Query Engine for Factorised Relational Databases,http://arxiv.org/abs/1203.2672v1 -"Semi-Automatically Extracting FAQs to Improve Accessibility of Software - Development Knowledge",http://dx.doi.org/10.1109/ICSE.2012.6227139 -Evaluation of Computational Grammar Formalisms for Indian Languages,http://arxiv.org/abs/1209.1301v1 -The Illusion of Requirements in Software Development,http://dx.doi.org/10.1063/1.4792587 -Dimensional analysis using toric ideals,http://arxiv.org/abs/1304.6659v1 -"Proceedings Third International Workshop on Engineering Safety and - Security Systems",http://dx.doi.org/10.4204/EPTCS.150 -Why we need an independent index of the Web,http://arxiv.org/abs/1405.2212v1 -Component Based Software Development: A State of Art,http://arxiv.org/abs/1406.3728v1 -Universal features in the efficiency of ultra hot quantum Otto engines,http://dx.doi.org/10.1209/0295-5075/108/40001 -"Workshop Summary of the 1st International Workshop on Requirements and - Testing (RET'14)",http://arxiv.org/abs/1410.3401v1 -"Stabilizing entanglement via symmetry-selective bath engineering in - superconducting qubits",http://dx.doi.org/10.1103/PhysRevLett.116.240503 -"Automatic Instrument Recognition in Polyphonic Music Using Convolutional - Neural Networks",http://arxiv.org/abs/1511.05520v1 -"Countering Social Engineering through Social Media: An Enterprise - Security Perspective",http://arxiv.org/abs/1511.06915v1 -Engineering Curvature Induced Anisotropy in Thin Ferromagnetic Films,http://dx.doi.org/10.1103/PhysRevLett.119.077203 -Quantum friction: environment engineering perspectives,http://arxiv.org/abs/1612.00573v2 -"Proceedings 11th Doctoral Workshop on Mathematical and Engineering - Methods in Computer Science",http://dx.doi.org/10.4204/EPTCS.233 -"Comments on Improving Transferability Between Different Engineering - Stages in the Development of Automated Material Flow Modules",http://arxiv.org/abs/1612.04409v1 -Role of partition in work extraction 
from multi-particle Szilard Engine,http://arxiv.org/abs/1612.07007v1 -Benchmarking Black Hole Heat Engines,http://dx.doi.org/10.1142/S0218271819500128 -Chaos Engineering,http://dx.doi.org/10.1109/MS.2016.60 -Syntax Aware LSTM Model for Chinese Semantic Role Labeling,http://arxiv.org/abs/1704.00405v2 -Kanban + X: Leveraging Kanban for Focused Improvements,http://arxiv.org/abs/1704.09004v1 -Helical thermoelectrics and refrigeration,http://dx.doi.org/10.1103/PhysRevE.97.022114 -"An Analytical Perspective to Traffic Engineering in Anonymous - Communication Systems",http://arxiv.org/abs/1712.07601v1 -"Protocol and Tools for Conducting Agile Software Engineering Research in - an Industrial-Academic Setting: A Preliminary Study",http://dx.doi.org/10.1145/3193965.3193970 -"Optimisation of Least Squares Algorithm: A Study of Frame Based - Programming Techniques in Horizontal Networks",http://dx.doi.org/10.14445/22315373/IJMTT-V37P526 -Adaptive Mesh Refinement in Analog Mesh Computers,http://arxiv.org/abs/1804.10110v2 -A Cyberinfrastructure for BigData Transportation Engineering,http://arxiv.org/abs/1805.00105v1 -Fifty Years of Software Engineering - or - The View from Garmisch,http://arxiv.org/abs/1805.02742v1 -"How to ""DODGE"" Complex Software Analytics?",http://dx.doi.org/10.1109/TSE.2019.2945020 -Efficiency of a cyclic quantum heat engine with finite-size baths,http://dx.doi.org/10.1103/PhysRevE.100.012122 -The efficiency of quantum engines in the Poschl-Teller oscillator model,http://dx.doi.org/10.1016/j.physe.2019.03.002 -"Operationally accessible bounds on fluctuations and entropy production - in periodically driven systems",http://dx.doi.org/10.1103/PhysRevLett.122.230601 -Efficiency fluctuations of a quantum Otto engine,http://dx.doi.org/10.1103/PhysRevResearch.2.032062 -CupQ: A New Clinical Literature Search Engine,http://dx.doi.org/10.5220/0008385202250232 -"Gender Balance in Computer Science and Engineering in Italian - Universities",http://dx.doi.org/10.1145/3344948.3344966 -Towards an Holistic Definition of Requirements Debt,http://arxiv.org/abs/1907.10887v1 -"Efficiency at the maximum power of the power law dissipative Carnot-like - Heat engines with non-adiabatic dissipation",http://dx.doi.org/10.1088/1572-9494/ab6180 -"An Automated Engineering Assistant: Learning Parsers for Technical - Drawings",http://arxiv.org/abs/1909.08552v1 -Automated Chess Commentator Powered by Neural Chess Engine,http://arxiv.org/abs/1909.10413v1 -"Requirements Engineering for Global Systems: Cultural, Regulatory and - Technical Aspects",http://arxiv.org/abs/1910.05008v1 -Kinetics of Many-Body Reservoir Engineering,http://dx.doi.org/10.1103/PhysRevResearch.2.033231 -Variability-aware Datalog,http://dx.doi.org/10.1007/978-3-030-39197-3_14 -Realistic thermal heat engine model and its generalized efficiency,http://arxiv.org/abs/1912.12949v1 -Score Engineered Logistic Regression,http://arxiv.org/abs/2003.00958v1 -Floquet engineering correlated materials with unpolarized light,http://dx.doi.org/10.1103/PhysRevLett.126.177201 -Stirling engine operating at low temperature difference,http://dx.doi.org/10.1119/10.0000832 -"Power-efficiency-fluctuations trade-off in steady-state heat engines: - The role of interactions",http://dx.doi.org/10.1103/PhysRevE.102.040103 -Modern Design Methodologies and the Development of Mechatronic Products,http://arxiv.org/abs/2007.10962v1 -"Energy-based Modelling of the Feedback Control of Biomolecular Systems - with Cyclic Flow Modulation",http://dx.doi.org/10.1109/TNB.2021.3058440 
-Efficiency large deviation function of quantum heat engines,http://dx.doi.org/10.1088/1367-2630/ac09fe -Quantum finite-time thermodynamics: insight from a single qubit engine,http://dx.doi.org/10.3390/e22111255 -"PRF: A Framework for Building Automatic Program Repair Prototypes for - JVM-Based Languages",http://dx.doi.org/10.1145/3368089.3417929 -Software Engineering Standards for Epidemiological Modeling,http://arxiv.org/abs/2009.09295v1 -"Speckle engineering through singular value decomposition of the - transmission matrix",http://dx.doi.org/10.1103/PhysRevLett.127.093903 -The Harmonic Quantum Szilárd Engine,http://dx.doi.org/10.1119/10.0005946 -"Mind the Gap: On the Relationship Between Automatically Measured and - Self-Reported Productivity",http://arxiv.org/abs/2012.07428v1 -"A Reo Based Solution for Engineering the Coordination Protocols for - Smart Cities",http://arxiv.org/abs/2012.14280v2 -Valuing Evaluation: Methodologies to Bridge Research and Practice,http://dx.doi.org/10.1145/2372233.2372242 -"Protecting qubit coherence by spectrally engineered driving of the spin - environment",http://arxiv.org/abs/2101.09654v1 -Towards Modal Software Engineering,http://arxiv.org/abs/2102.02966v2 -Quartermaster: A Tool for Modeling and Simulating System Degradation,http://arxiv.org/abs/2103.03956v1 -HSEarch: semantic search system for workplace accident reports,http://arxiv.org/abs/2103.12420v1 -"Thermodynamics and Heat Engines of Black Holes with Born-Infeld-type - Electrodynamics",http://dx.doi.org/10.1142/S0217732321501029 -"The study of variability in engineering design, an appreciation and a - retrospective",http://arxiv.org/abs/2103.15478v1 -"Light and thermodynamics: the three-level laser as an endoreversible - heat engine",http://arxiv.org/abs/2104.07063v1 -"Requirements Contracts: Definition, Design, and Analysis",http://arxiv.org/abs/2104.14110v1 -"PSY-TaLiRo: A Python Toolbox for Search-Based Test Generation for - Cyber-Physical Systems",http://arxiv.org/abs/2106.02200v1 -"Lunaport: Math, Mechanics & Transport",http://arxiv.org/abs/2107.14423v3 -Thermal divergences of quantum measurement engine,http://arxiv.org/abs/2109.10796v1 -Using DevOps Toolchains in Agile Model-Driven Engineering,http://arxiv.org/abs/2111.11607v1 -Tracking Patches for Open Source Software Vulnerabilities,http://arxiv.org/abs/2112.02240v2 -(R)SE challenges in HPC,http://arxiv.org/abs/2112.06617v1 -"Performance evaluation of invariant-based inverse engineering by quantum - speed limit",http://dx.doi.org/10.1103/PhysRevA.106.L040401 -Advances of Proof Scores in CafeOBJ,http://dx.doi.org/10.1016/j.scico.2022.102893 -End to End Software Engineering Research,http://arxiv.org/abs/2112.11858v1 -"Explanation by Automated Reasoning Using the Isabelle Infrastructure - Framework",http://arxiv.org/abs/2112.14809v1 -"Sensitivity analysis of a mean-value exergy-based internal combustion - engine model",http://dx.doi.org/10.4271/2022-01-0356 -Black Holes in a Cavity: Heat engine and Joule-Thomson Expansion,http://dx.doi.org/10.1007/s10714-022-02990-9 -Targeted Code Inspection based on Human Errors,http://arxiv.org/abs/2202.02259v1 -"Performance of quantum heat engines via adiabatic deformation of - potential",http://arxiv.org/abs/2202.06651v1 -"Towards Maintainable Platform Software -- Delivery Cost Control in - Continuous Software Development",http://arxiv.org/abs/2203.15396v1 -Measuring AI Systems Beyond Accuracy,http://arxiv.org/abs/2204.04211v1 -"Engineering Majorana corner modes from two-dimensional hexagonal - 
crystals",http://dx.doi.org/10.1103/PhysRevB.106.235429 -"Deep Learning Meets Software Engineering: A Survey on Pre-Trained Models - of Source Code",http://arxiv.org/abs/2205.11739v1 -"Can Requirements Engineering Support Explainable Artificial - Intelligence? Towards a User-Centric Approach for Explainability Requirements",http://arxiv.org/abs/2206.01507v1 -"Quantifying Community Evolution in Developer Social Networks: Proof of - Indices' Properties",http://arxiv.org/abs/2208.10049v1 -Leveraging Artificial Intelligence on Binary Code Comprehension,http://dx.doi.org/10.1145/3551349.3559564 -Pied Piper: Meta Search for Music,http://arxiv.org/abs/2211.07610v1 -Machine Learning for Software Engineering: A Tertiary Study,http://arxiv.org/abs/2211.09425v1 -Applications of statistical causal inference in software engineering,http://dx.doi.org/10.1016/j.infsof.2023.107198 -Preliminary Bias Results in Search Engines,http://arxiv.org/abs/2211.11535v1 -Quantum Heat Engines and the Generalized Uncertainty Principle,http://dx.doi.org/10.29083/HJ.46.03.2023/SC339 -Recent Advances in Software Effort Estimation using Machine Learning,http://arxiv.org/abs/2303.03482v1 -From RSSE to BotSE: Potentials and Challenges Revisited after 15 Years,http://arxiv.org/abs/2304.09308v1 -Quantum field heat engine powered by phonon-photon interactions,http://arxiv.org/abs/2305.06445v1 -"Quantum coherent control of linear and nonlinear thermoelectricity on - graphene nanostructure heat engines",http://arxiv.org/abs/2305.07242v1 -"FORFIS: A forest fire firefighting simulation tool for education and - research",http://arxiv.org/abs/2305.17967v1 -"Statistically Enhanced Learning: a feature engineering framework to - boost (any) learning algorithms",http://arxiv.org/abs/2306.17006v1 -"From Lemons to Peaches: Improving Security ROI through Security Chaos - Engineering",http://dx.doi.org/10.1109/SecDev53368.2022.00021 -Improving Students With Rubric-Based Self-Assessment and Oral Feedback,http://dx.doi.org/10.1109/TE.2011.2172981 -"Reinforcement learning guided fuzz testing for a browser's HTML - rendering engine",http://arxiv.org/abs/2307.14556v1 -The Fission Fragment Rocket Engine for Mars Fast Transit,http://dx.doi.org/10.3389/frspt.2023.1191300 -Quantum Otto engine driven by quantum fields,http://arxiv.org/abs/2308.15528v1 -Excel as a Turing-complete Functional Programming Environment,http://arxiv.org/abs/2309.00115v1 -"Reducing Errors in Excel Models with Component-Based Software - Engineering",http://arxiv.org/abs/2309.00650v1 -"Engineering nonlinear boson-boson interactions using mediating spin - systems",http://arxiv.org/abs/2309.10060v1 -"A Use Case: Reformulating Query Rewriting as a Statistical Machine - Translation Problem",http://arxiv.org/abs/2310.13031v1 -Reservoir Engineering for Classical Nonlinear Fields,http://arxiv.org/abs/2310.14854v1 -"Reprocessed emission line profiles from dense clouds in geometrically - thick accretion engines",http://dx.doi.org/10.1046/j.1365-8711.2001.04333.x -Distribution of dust clouds around the central engine of NGC 1068,http://dx.doi.org/10.1086/505125 -AB Levitator and Electricity Storage,http://arxiv.org/abs/physics/0703013v1 -"Managing Separation of Concerns in Grid Applications Through - Architectural Model Transformations",http://arxiv.org/abs/0707.0761v1 -Knowledge Technologies,http://arxiv.org/abs/0802.3789v1 -A Complexity measure based on Requirement Engineering Document,http://arxiv.org/abs/1006.2840v1 -"Reverse Engineering of Molecular Networks from a Common 
Combinatorial - Approach",http://arxiv.org/abs/1102.4904v1 -Optimizing non-ergodic feedback engines,http://dx.doi.org/10.5506/APhysPolB.44.803 -"Software Engineering as Instrumentation for the Long Tail of Scientific - Software",http://arxiv.org/abs/1309.1806v1 -"Weighted reciprocal of temperature, weighted thermal flux, and their - applications in finite-time thermodynamics",http://dx.doi.org/10.1103/PhysRevE.89.012129 -"An Extreme Learning Machine Approach to Predicting Near Chaotic HCCI - Combustion Phasing in Real-Time",http://arxiv.org/abs/1310.3567v3 -"Rabi model as a quantum coherent heat engine: From quantum biology to - superconducting circuits",http://dx.doi.org/10.1103/PhysRevA.91.023816 -Brownian Carnot engine,http://dx.doi.org/10.1038/nphys3518 -"Nonlinear Model Predictive Control of A Gasoline HCCI Engine Using - Extreme Learning Machines",http://arxiv.org/abs/1501.03969v1 -"The maximum efficiency of nano heat engines depends on more than - temperature",http://dx.doi.org/10.22331/q-2019-08-19-177 -"Ordering Interrogative Questions for Effective Requirements Engineering: - The W6H Pattern",http://dx.doi.org/10.1109/RePa.2015.7407731 -D4M: Bringing Associative Arrays to Database Engines,http://dx.doi.org/10.1109/HPEC.2015.7322472 -"A self-consistent analytical magnetar model: The luminosity of - $γ$-ray burst supernovae is powered by radioactivity",http://dx.doi.org/10.1093/mnras/stw122 -Distributed Technology-Sustained Pervasive Applications,http://arxiv.org/abs/1604.02892v1 -Web Spam Detection Using Multiple Kernels in Twin Support Vector Machine,http://arxiv.org/abs/1605.02917v1 -"Two coupled, driven Ising spin systems working as an Engine",http://dx.doi.org/10.1103/PhysRevE.95.052123 -"A Community's Perspective on the Status and Future of Peer Review in - Software Engineering",http://arxiv.org/abs/1706.07196v2 -"A quantum Otto engine with finite heat baths: energy, correlations, and - degradation",http://dx.doi.org/10.1088/1367-2630/aaba02 -"Optimizing Long Short-Term Memory Recurrent Neural Networks Using Ant - Colony Optimization to Predict Turbine Engine Vibration",http://arxiv.org/abs/1710.03753v1 -DeepStyle: Multimodal Search Engine for Fashion and Interior Design,http://arxiv.org/abs/1801.03002v2 -"Predictive Second Order Sliding Control of Constrained Linear Systems - with Application to Automotive Control Systems",http://arxiv.org/abs/1802.01617v1 -Did We Get It Right? Predicting Query Performance in E-commerce Search,http://arxiv.org/abs/1808.00239v1 -"Using Experience Sampling to link Software Repositories with Emotions - and Work Well-Being",http://dx.doi.org/10.1145/3239235.3239245 -Bayesian Data Analysis in Empirical Software Engineering Research,http://dx.doi.org/10.1109/TSE.2019.2935974 -"ExaHyPE: An Engine for Parallel Dynamically Adaptive Simulations of Wave - Problems",http://dx.doi.org/10.1016/j.cpc.2020.107251 -Adaptive HTAP through Elastic Resource Scheduling,http://arxiv.org/abs/2004.05437v2 -Energy Transformations in a Relativistic Engine,http://dx.doi.org/10.3390/sym13030420 -"The Matter of Chance: Auditing Web Search Results Related to the 2020 - U.S. 
Presidential Primary Elections Across Six Search Engines",http://dx.doi.org/10.1177/08944393211006863 -Prospects for Multi-omics in the Microbial Ecology of Water Engineering,http://arxiv.org/abs/2105.08856v1 -"Role of Carrier Mobility and Band Alignment Engineering on the - Efficiency of Colloidal Quantum Dot Solar Cells",http://arxiv.org/abs/1602.04236v1 -"Survey on Essential and Accidental Real-Time Issues in Software - Engineering",http://arxiv.org/abs/1703.03783v1 -Search Engine Drives the Evolution of Social Networks,http://arxiv.org/abs/1703.05922v1 -"An Analog Neural Network Computing Engine using CMOS-Compatible - Charge-Trap-Transistor (CTT)",http://arxiv.org/abs/1709.06614v4 -"Integrated Optimization of Power Split, Engine Thermal Management, and - Cabin Heating for Hybrid Electric Vehicles",http://arxiv.org/abs/1906.01177v1 -"Hardware Trust and Assurance through Reverse Engineering: A Survey and - Outlook from Image Analysis and Machine Learning Perspectives",http://arxiv.org/abs/2002.04210v2 -The Hetero-functional Graph Theory Toolbox,http://arxiv.org/abs/2005.10006v2 -"Framework for an Integrated Learning Block with CDIO-led Engineering - Education",http://arxiv.org/abs/2006.03150v1 -"PlumeNet: Large-Scale Air Quality Forecasting Using A Convolutional LSTM - Network",http://arxiv.org/abs/2006.09204v1 -"Protection Over Asymmetric Channels, S-MATE: Secure Multipath Adaptive - Traffic Engineering",http://arxiv.org/abs/1012.5997v1 -"On the random access performance of Cell Broadband Engine with graph - analysis application",http://arxiv.org/abs/1105.5881v2 -"On the Weakenesses of Correlation Measures used for Search Engines' - Results (Unsupervised Comparison of Search Engine Rankings)",http://arxiv.org/abs/1107.2691v1 -Open Data: Reverse Engineering and Maintenance Perspective,http://arxiv.org/abs/1202.1656v1 -Statistical Physics: a Short Course for Electrical Engineering Students,http://arxiv.org/abs/1307.5137v1 -Nucleosynthesis of elements in gamma ray bursts engines,http://dx.doi.org/10.1051/0004-6361/201423822 -"Integration of Heterogeneous Modeling Languages via Extensible and - Composable Language Components",http://arxiv.org/abs/1509.04502v1 -"Supporting Defect Causal Analysis in Practice with Cross-Company Data on - Causes of Requirements Engineering Problems",http://dx.doi.org/10.1109/ICSE-SEIP.2017.14 -Survey Research in Software Engineering: Problems and Strategies,http://dx.doi.org/10.1109/ACCESS.2018.2881041 -"Effects of dark energy on the efficiency of charged AdS black holes as - heat engine",http://dx.doi.org/10.1140/epjc/s10052-017-5134-9 -"Are Computer Science and Engineering Graduates Ready for the Software - Industry? 
Experiences from an Industrial Student Training Program",http://arxiv.org/abs/1805.08894v1 -Engineering problems in machine learning systems,http://arxiv.org/abs/1904.00001v2 -Open Science in Software Engineering,http://dx.doi.org/10.1007/978-3-030-32489-6_17 -"Using Social Network Service to determine the Initial User Requirements - for Small Software Businesses",http://arxiv.org/abs/1904.12583v1 -"Diverse interactions and ecosystem engineering stabilize community - assembly",http://dx.doi.org/10.1038/s41467-020-17164-x -Methodological Issues in Observational Studies,http://arxiv.org/abs/1908.04366v1 -Real-time Dispatching and Relocation of Emergency Service Engineers,http://arxiv.org/abs/1910.01427v1 -"Role-Oriented Code Generation in an Engine for Solving Hyperbolic PDE - Systems",http://arxiv.org/abs/1911.06817v2 -"Synergizing Domain Expertise with Self-Awareness in Software Systems: A - Patternized Architecture Guideline",http://dx.doi.org/10.1109/JPROC.2020.2985293 -"Physics-informed machine learning for composition-process-property alloy - design: shape memory alloy demonstration",http://arxiv.org/abs/2003.01878v3 -Thermodynamics of Minimal Coupling Quantum Heat Engines,http://dx.doi.org/10.22331/q-2020-12-23-375 -Internal Geometric Friction in a Kitaev Chain Heat Engine,http://dx.doi.org/10.1103/PhysRevB.102.155423 -Case Survey Studies in Software Engineering Research,http://dx.doi.org/10.1145/3382494.3410683 -"Adoption and Effects of Software Engineering Best Practices in Machine - Learning",http://dx.doi.org/10.1145/3382494.3410681 -Stroboscopic two-stroke quantum heat engines,http://dx.doi.org/10.1103/PhysRevA.102.042217 -"Storage, Indexing, Query Processing, and Benchmarking in Centralized and - Distributed RDF Engines: A Survey",http://arxiv.org/abs/2009.10331v2 -"An Observational Study of Engineering Online Education During the - COVID-19 Pandemic",http://dx.doi.org/10.1371/journal.pone.0250041 -"Study of quantum Otto heat engine using driven-dissipative - Schrödinger equation",http://arxiv.org/abs/2010.04856v2 -"Rooting Formal Methods within Higher Education Curricula for Computer - Science and Software Engineering -- A White Paper",http://arxiv.org/abs/2010.05708v1 -"Maximum efficiency of absorption refrigerators at arbitrary cooling - power",http://dx.doi.org/10.1103/PhysRevE.103.052125 -Using game simulator Software Inc in the Software Engineering education,http://arxiv.org/abs/2012.01127v1 -Conceptual Software Engineering Applied to Movie Scripts and Stories,http://dx.doi.org/10.3844/jcssp.2020.1718.1730 -"Walking Through the Method Zoo: Does Higher Education really meet - Software Industry Demands?",http://dx.doi.org/10.1109/ICSE-SEET.2019.00009 -"Qualifying Software Engineers Undergraduates in DevOps -- Challenges of - Introducing Technical and Non-technical Concepts in a Project-oriented Course",http://arxiv.org/abs/2102.06662v1 -Asset Management in Machine Learning: A Survey,http://arxiv.org/abs/2102.06919v2 -"unzipFPGA: Enhancing FPGA-based CNN Engines with On-the-Fly Weights - Generation",http://arxiv.org/abs/2103.05600v2 -"On Determinism of Game Engines used for Simulation-based Autonomous - Vehicle Verification",http://dx.doi.org/10.1109/TITS.2022.3177887 -Grey Literature in Software Engineering: A Critical Review,http://arxiv.org/abs/2104.13435v2 -"Helping results assessment by adding explainable elements to the deep - relevance matching model",http://arxiv.org/abs/2106.05147v1 -"A Fluids Experiment for Remote Learners to Test the Unsteady Bernoulli - Equation 
Using a Burette",http://arxiv.org/abs/2108.00510v1 -"Janus: A Systems Engineering Approach to the Design of Industrial - Cyber-Physical Systems",http://dx.doi.org/10.1109/INDIN41052.2019.8972051 -Efficient asymmetric collisional Brownian particle engines,http://arxiv.org/abs/2108.01118v1 -"Chaos Engineering For Understanding Consensus Algorithms Performance in - Permissioned Blockchains",http://arxiv.org/abs/2108.08441v1 -"The Influence of Human Aspects on Requirements Engineering-related - Activities: Software Practitioners Perspective",http://arxiv.org/abs/2109.07868v3 -PlumeCityNet: Multi-Resolution Air Quality Forecasting,http://arxiv.org/abs/2110.02661v2 -"Changing Software Engineers' Self-Efficacy with Bootcamps:A Research - Proposal",http://arxiv.org/abs/2110.12241v2 -How to use Persistent Memory in your Database,http://arxiv.org/abs/2112.00425v1 -DevOps and Microservices in Scientific System development,http://dx.doi.org/10.1145/3477314.3507317 -"Development of a Model Predictive Airpath Controller for a Diesel Engine - on a High-Fidelity Engine Model with Transient Thermal Dynamics",http://arxiv.org/abs/2202.12803v1 -"Understanding the role of single-board computers in engineering and - computer science education: A systematic literature review",http://dx.doi.org/10.1002/cae.22439 -Software Engineering for Quantum Programming: How Far Are We?,http://arxiv.org/abs/2203.16969v2 -"Machine Learning Integrated with Model Predictive Control for Imitative - Optimal Control of Compression Ignition Engines",http://arxiv.org/abs/2204.00142v3 -"Relation between fluctuations and efficiency at maximum power for small - heat engines",http://dx.doi.org/10.1103/PhysRevResearch.4.043139 -"Industry-Academia Research Collaboration in Software Engineering: The - Certus Model",http://dx.doi.org/10.1016/j.infsof.2020.106473 -"Cloudprofiler: TSC-based inter-node profiling and high-throughput data - ingestion for cloud streaming workloads",http://arxiv.org/abs/2205.09325v2 -Concept Identification for Complex Engineering Datasets,http://dx.doi.org/10.1016/j.aei.2022.101704 -"Incorporating Failure Knowledge into Design Decisions for IoT Systems: A - Controlled Experiment on Novices",http://arxiv.org/abs/2206.13562v2 -A Brownian cyclic engine operating in a viscoelastic active suspension,http://dx.doi.org/10.1016/j.physa.2022.128342 -Many-body quantum vacuum fluctuation engines,http://arxiv.org/abs/2208.07225v2 -Impacts and Integration of Remote-First Working Environments,http://arxiv.org/abs/2209.04383v1 -"Abstract Images Have Different Levels of Retrievability Per Reverse - Image Search Engine",http://arxiv.org/abs/2211.02115v1 -Moving Media as Photonic Heat Engine and Pump,http://dx.doi.org/10.1103/PhysRevB.107.115406 -"A Morphological, Topological and Mechanical Investigation of Gyroid, - Spinodoid and Dual-Lattice Algorithms as Structural Models of Trabecular Bone",http://dx.doi.org/10.1016/j.jmbbm.2022.105584 -"Component Segmentation of Engineering Drawings Using Graph Convolutional - Networks",http://dx.doi.org/10.1016/j.compind.2023.103885 -Scalable entanglement stabilization with modular reservoir engineering,http://arxiv.org/abs/2301.05725v1 -"Improving Software Engineering in Biostatistics: Challenges and - Opportunities",http://arxiv.org/abs/2301.11791v1 -"Antithesis of Object Orientation: Occurrence-Only Modeling Applied in - Engineering and Medicine",http://arxiv.org/abs/2302.07087v1 -Prevalence of Code Smells in Reinforcement Learning Projects,http://arxiv.org/abs/2303.10236v2 -Applications 
of Causality and Causal Inference in Software Engineering,http://arxiv.org/abs/2303.16989v1 -Evaluating Verifiability in Generative Search Engines,http://arxiv.org/abs/2304.09848v2 -"The Quantum Measurement Spintronic Engine: Using Entanglement to Harvest - Vacuum Fluctuations",http://arxiv.org/abs/2304.13474v3 -"Large-scale information retrieval in software engineering -- an - experience report from industrial application",http://dx.doi.org/10.1007/s10664-015-9410-8 -"Understanding the Privacy Risks of Popular Search Engine Advertising - Systems",http://arxiv.org/abs/2308.15309v3 -"The Role of Communication and Reference Songs in the Mixing Process: - Insights from Professional Mix Engineers",http://arxiv.org/abs/2309.03404v3 -"The Protein Engineering Tournament: An Open Science Benchmark for - Protein Modeling and Design",http://arxiv.org/abs/2309.09955v2 -Insights from an OTTR-centric Ontology Engineering Methodology,http://arxiv.org/abs/2309.13130v1 -Anisotropic Energy Injection from Magnetar Central Engines in Short GRBs,http://arxiv.org/abs/2309.15141v1 -"ChatGPT & Mechanical Engineering: Examining performance on the FE - Mechanical Engineering and Undergraduate Exams",http://arxiv.org/abs/2309.15866v1 -"A pragmatic workflow for research software engineering in computational - science",http://arxiv.org/abs/2310.00960v1 -"An Adaptable IoT Rule Engine Framework for Dataflow Monitoring and - Control Strategies",http://arxiv.org/abs/2310.05493v2 -"Decoding the Threat Landscape : ChatGPT, FraudGPT, and WormGPT in Social - Engineering Attacks",http://dx.doi.org/10.32628/CSEIT2390533 -"Swift observations of GRB050904: the most distant cosmic explosion ever - observed",http://dx.doi.org/10.1051/0004-6361:20065173 -"The circumburst environment of a FRED GRB: study of the prompt emission - and X-ray/optical afterglow of GRB 051111",http://dx.doi.org/10.1051/0004-6361:20065974 -Panchromatic study of GRB 060124: from precursor to afterglow,http://dx.doi.org/10.1393/ncb/i2007-10062-y -"GRB 061121: Broadband spectral evolution through the prompt and - afterglow phases of a bright burst",http://dx.doi.org/10.1086/518821 -"Multi-wavelength observations of the energetic GRB 080810: detailed - mapping of the broadband spectral evolution",http://dx.doi.org/10.1111/j.1365-2966.2009.15462.x -GRB 081008: from burst to afterglow and the transition phase in between,http://dx.doi.org/10.1088/0004-637X/711/2/870 -Challenging GRB models through the broadband dataset of GRB060908,http://dx.doi.org/10.1051/0004-6361/201014994 -"GRB 081007 and GRB 090424: the surrounding medium, outflows and - supernovae",http://dx.doi.org/10.1088/0004-637X/774/2/114 -"Type Ia Supernova Rate Measurements to Redshift 2.5 from CANDELS : - Searching for Prompt Explosions in the Early Universe",http://dx.doi.org/10.1088/0004-6256/148/1/13 -A strong limit on the very-high-energy emission from GRB 150323A,http://dx.doi.org/10.3847/1538-4357/aab371 -"Effects of one valence proton on seniority and angular momentum of - neutrons in neutron-rich $^{122-131}$Sb$_{51}$ isotopes",http://dx.doi.org/10.1103/PhysRevC.99.064302 -"Synchrotron Cooling in Energetic Gamma-Ray Bursts Observed by the Fermi - Gamma-Ray Burst Monitor",http://dx.doi.org/10.1051/0004-6361/201424858 -"A multi-wavelength analysis of a collection of short-duration GRBs - observed between 2012-2015",http://dx.doi.org/10.1093/mnras/stz530 -MasakhaNEWS: News Topic Classification for African languages,http://arxiv.org/abs/2304.09972v2 -Extremal Optimization: Heuristics via 
Co-Evolutionary Avalanches,http://arxiv.org/abs/cond-mat/0006374v1 -Inattainability of Carnot efficiency in the Brownian heat engine,http://dx.doi.org/10.1103/PhysRevE.62.6021 -"Quantum Information Science from the Perspective of a Device and - Materials Engineer",http://arxiv.org/abs/cond-mat/0402341v1 -Efficiency and Fluctuation in Tight-Coupling Model of Molecular Motor,http://dx.doi.org/10.1143/JPSJ.75.063001 -The framework for simulation of dynamics of mechanical aggregates,http://arxiv.org/abs/cs/0701119v1 -Random Matrices,http://arxiv.org/abs/hep-ph/0509286v1 -On Geometric Engineering of N=1 ADE Quiver Models,http://arxiv.org/abs/hep-th/0310230v1 -"A Discrete Four Stroke Quantum Heat Engine Exploring the Origin of - Friction",http://dx.doi.org/10.1103/PhysRevE.65.055102 -Superconducting Power Generation,http://dx.doi.org/10.1109/39.841343 -Logic and thermodynamics: the heat-engine axiomatics of the second law,http://arxiv.org/abs/physics/0303017v1 -Advanced Power Transmission of the Future,http://arxiv.org/abs/physics/0304070v1 -Improved Power System of the Future,http://arxiv.org/abs/physics/0304083v1 -Power Delivery of the Future,http://arxiv.org/abs/physics/0304098v1 -"Decoherence, pointer engineering and quantum state protection",http://arxiv.org/abs/quant-ph/0009024v1 -"Band-Gap Engineering of Phononic Crystals: A Computational Survey of - Two-Dimensional Systems",http://arxiv.org/abs/0708.3669v1 -The Brownian gyrator: a minimal heat engine on the nano-scale,http://dx.doi.org/10.1103/PhysRevLett.99.230602 -Critique of some thermodynamic proofs based on the pump-engine couple,http://arxiv.org/abs/0803.3494v1 -Control Paradigms for Quantum Engineering,http://dx.doi.org/10.1109/ISCCSP.2008.4537363 -Algebraic Change-Point Detection,http://arxiv.org/abs/0912.1178v1 -Quantum-dot Carnot engine at maximum power,http://dx.doi.org/10.1103/PhysRevE.81.041106 -Contents of COMP5541 Winter 2010 Final UUIS SRS and SDD Reports,http://arxiv.org/abs/1006.3259v1 -"Invalidity of prohibition of the perpetual motion engine of the second - kind and the scenario of using these engines for prevention of the ""thermal - death"" on the Earth",http://arxiv.org/abs/1006.3431v1 -"Szilard engine revisited; information from time forward and backward - process",http://dx.doi.org/10.1103/PhysRevE.84.012101 -"Applications of local fractional calculus to engineering in fractal - time-space: Local fractional differential equations with local fractional - derivative",http://arxiv.org/abs/1106.3010v1 -Application Of Data Mining In Bioinformatics,http://arxiv.org/abs/1205.1125v1 -Querying Source Code with Natural Language,http://dx.doi.org/10.1109/ASE.2011.6100076 -Fast population transfer engineering of three-level systems,http://dx.doi.org/10.1103/PhysRevA.86.033405 -The Boost.Build System,http://arxiv.org/abs/1208.6264v1 -Reverse Engineering Quantum Field Theory,http://dx.doi.org/10.1063/1.4773160 -"Application of FCI at engineering students in Bogota: an interpretation - of the answers through a random model of two levels",http://arxiv.org/abs/1210.7277v1 -A toy model of information retrieval system based on quantum probability,http://arxiv.org/abs/1305.5330v1 -Can Research be Taught?,http://arxiv.org/abs/1306.1288v1 -"Eugene Garfield, Francis Narin, and PageRank: The Theoretical Bases of - the Google Search Engine",http://arxiv.org/abs/1312.3872v1 -"Reconstruction Models for Attractors in the Technical and Economic - Processes",http://dx.doi.org/10.14445/22312803/IJCTT-V6N3P128 -Hamiltonian engineering via 
invariants and dynamical algebra,http://dx.doi.org/10.1103/PhysRevA.89.043408 -Research on Study Mechanical Vibrations with Data Acquisition Systems,http://arxiv.org/abs/1403.4508v1 -Human Factors of Formal Methods,http://arxiv.org/abs/1404.7247v1 -Run-time extensibility and librarization of simulation software,http://arxiv.org/abs/1407.2905v1 -"Using crowdsourcing system for creating site-specific statistical - machine translation engine",http://arxiv.org/abs/1409.5502v1 -Limitations of Agile Software Processes,http://arxiv.org/abs/1409.6600v1 -Spreadsheets for Stream Partitions and Windows,http://arxiv.org/abs/1503.04215v1 -Memantic: A Medical Knowledge Discovery Engine,http://arxiv.org/abs/1503.05781v1 -Designing Applications in a Hybrid Cloud,http://dx.doi.org/10.12988/ces.2015.57214 -Visualizing source code in 3D Maya software,http://arxiv.org/abs/1603.00418v1 -"Using a Szilard engine to illustrate the validity of the modified - Jarzynski equality in presence of measurement errors",http://arxiv.org/abs/1609.00486v1 -Dark Modes of Quantum Linear Systems,http://dx.doi.org/10.1109/TAC.2017.2677878 -"Redundancy schemes for engineering coherent systems via a - signature-based approach",http://arxiv.org/abs/1708.07059v1 -Sequential Preference-Based Optimization,http://arxiv.org/abs/1801.02788v1 -Microfluidics for Chemical Synthesis: Flow Chemistry,http://arxiv.org/abs/1802.05611v1 -Automating Requirements Traceability: Two Decades of Learning from KDD,http://arxiv.org/abs/1807.11454v1 -"Needs and Challenges for a Platform to Support Large-scale Requirements - Engineering. A Multiple Case Study",http://arxiv.org/abs/1808.02284v2 -Adaptive filter ordering in Spark,http://arxiv.org/abs/1905.01349v1 -Carnot heat engine efficiency with a paramagnetic gas,http://arxiv.org/abs/1905.06338v1 -"Targeting the Weakest Link: Social Engineering Attacks in Ethereum Smart - Contracts",http://arxiv.org/abs/2105.00132v2 -The Tyranny of Qubits - Quantum Technology's Scalability Bottleneck,http://arxiv.org/abs/1703.05342v1 -Code Reverse Engineering problem for Identification Codes,http://arxiv.org/abs/1105.1601v2 -Maxwell's Demon and Data Compression,http://dx.doi.org/10.1103/PhysRevE.84.061117 -A Survey of Software Reliability Models,http://arxiv.org/abs/1304.4539v1 -"Giovanni de la Fontana, engineer and magician",http://arxiv.org/abs/1304.4588v1 -Expressando Atributos Não-Funcionais em Workflows Científicos,http://arxiv.org/abs/1304.5099v1 -"Towards Microservices and Beyond: An incoming Paradigm Shift in - Distributed Computing",http://arxiv.org/abs/1610.01778v2 -jSET - The Java Software Evolution Tracker,http://arxiv.org/abs/1702.06973v1 -Characteristics of Spreadsheets Developed with the SSMI Methodology,http://arxiv.org/abs/1704.01136v1 -A Cooperative Enterprise Agent Based Control Architecture,http://arxiv.org/abs/1704.02935v1 -"The Web is missing an essential part of infrastructure: an Open Web - Index",http://arxiv.org/abs/1903.03846v1 -Teaching DevOps in academia and industry: reflections and vision,http://arxiv.org/abs/1903.07468v1 -"Fast radio burst counterparts and their implications for the central - engine",http://dx.doi.org/10.3847/1538-4357/ab7dbf -"On the central engine of the fastest-declining Type I supernova - SN2019bkc",http://arxiv.org/abs/1907.13271v1 -The SmartSHARK Ecosystem for Software Repository Mining,http://arxiv.org/abs/2001.01606v1 -LMFAO: An Engine for Batches of Group-By Aggregates,http://arxiv.org/abs/2008.08657v1 -Maximal Steered Coherence Protection by Quantum Reservoir 
Engineering,http://dx.doi.org/10.1103/PhysRevA.102.020402 -The Hardy space from an engineer's perspective,http://arxiv.org/abs/2009.12707v1 -"Preface: New trends in first-passage methods and applications in the - life sciences and engineering",http://dx.doi.org/10.1088/1751-8121/ab81d5 -"A Comparison of Natural Language Understanding Platforms for Chatbots in - Software Engineering",http://dx.doi.org/10.1109/TSE.2021.3078384 -"Understanding and Fixing Complex Faults in Embedded Cyberphysical - Systems",http://dx.doi.org/10.1109/MC.2020.3029975 -"Enumeration and Identification of Unique 3D Spatial Topologies of - Interconnected Engineering Systems Using Spatial Graphs",http://dx.doi.org/10.1115/1.4062978 -Karpov's Queen Sacrifices and AI,http://arxiv.org/abs/2109.08149v1 -"A Minimal Intervention Definition of Reverse Engineering a Neural - Circuit",http://arxiv.org/abs/2110.00889v1 -"Grafana plugin for visualising vote based consensus mechanisms, and - network P2P overlay networks",http://arxiv.org/abs/2112.01082v1 -"Developing a Suitability Assessment Criteria for Software Developers: - Behavioral Assessment Using Psychometric Test",http://dx.doi.org/10.1145/3494885.3494898 -"Can MAD accretion disks launching structured jets explain both GRB and - AGN engines? Magnetically arrested accretion disks launching structured jets - in application to GRB and AGN engines",http://dx.doi.org/10.1051/0004-6361/202244196 -A visual introduction to information theory,http://arxiv.org/abs/2206.07867v1 -Formal Semantics of the Kconfig Language,http://arxiv.org/abs/2209.04916v1 -"Deterministic vs. Non Deterministic Finite Automata in Automata - Processing",http://arxiv.org/abs/2210.10077v1 -Quantum Computing for Data Centric Engineering and Science,http://dx.doi.org/10.1017/dce.2022.36 -"Introduction to Engineering Mathematics and Analysis: Modeling Physical - Systems Using the Language of Mathematics",http://dx.doi.org/10.5399/osu/1152 -"Relational Playground: Teaching the Duality of Relational Algebra and - SQL",http://dx.doi.org/10.1145/3596673.3596978 -"Towards Stirling engine using an optically confined particle subjected - to asymmetric temperature profile",http://dx.doi.org/10.1088/1367-2630/acd94e -Minimization of energy functionals via FEM: implementation of hp-FEM,http://arxiv.org/abs/2309.13028v1 -"Supernovae, Jets, and Collapsars",http://dx.doi.org/10.1086/319698 -"Prompt and delayed emission properties of Gamma-Ray Bursts observed with - BeppoSAX",http://dx.doi.org/10.1086/313316 -"Nature vs. 
Nurture: The Origin of Soft Gamma-ray Repeaters and Anomalous - X-ray Pulsars",http://dx.doi.org/10.1086/319701 -"Theory of ""Jitter"" Radiation from Small-Scale Random Magnetic Fields and - Prompt Emission from Gamma-Ray Burst Shocks",http://dx.doi.org/10.1086/309374 -Implications of the $γ$-ray Polarization of GRB 021206,http://dx.doi.org/10.1088/1475-7516/2003/10/005 -"Prompt GRB spectra: detailed calculations and the effect of pair - production",http://dx.doi.org/10.1086/422989 -Towards a More Standardized Candle Using GRB Energetics and Spectra,http://dx.doi.org/10.1086/430292 -"The ECLAIRs micro-satellite for multi-wavelength studies of gamma-ray - burst prompt emission",http://dx.doi.org/10.1109/TNS.2005.862780 -"Probing the environment in Gamma-ray bursts: the case of an X-ray - precursor, afterglow late onset and wind vs constant density profile in - GRB011121 and GRB011211",http://dx.doi.org/10.1086/428377 -Afterglow Observations Shed New Light on the Nature of X-ray Flashes,http://dx.doi.org/10.1086/431477 -"Early photon-shock interaction in stellar wind: sub-GeV photon flash and - high energy neutrino emission from long GRBs",http://dx.doi.org/10.1086/431473 -"The puzzling case of GRB 990123: prompt emission and broad-band - afterglow modeling",http://dx.doi.org/10.1051/0004-6361:20042532 -"Deceleration of a Relativistic, Photon-Rich Shell: End of - Preacceleration, Damping of MHD Turbulence, and the Emission Mechanism of - Gamma-Ray Bursts",http://dx.doi.org/10.1086/505290 -Multi-Wavelength Studies of the Optically Dark Gamma-Ray Burst 001025A,http://dx.doi.org/10.1086/497948 -Swift XRT Observations of the Afterglow of GRB 050319,http://dx.doi.org/10.1086/499292 -"GRB 050117: Simultaneous Gamma-ray and X-ray Observations with the Swift - Satellite",http://dx.doi.org/10.1086/498443 -"Properties of X-Ray Rich Gamma Ray Bursts and X-Ray Flashes detected - with BeppoSAX and Hete-2",http://dx.doi.org/10.1051/0004-6361:20054501 -"On the interpretation of the spectral--energy correlations in long - Gamma--Ray Bursts",http://dx.doi.org/10.1051/0004-6361:20054211 -"The weak INTEGRAL bursts GRB040223 and GRB040624: an emerging population - of dark afterglows",http://dx.doi.org/10.1051/0004-6361:20054072 -Long Gamma-Ray Burst prompt emission properties as a cosmological tool,http://arxiv.org/abs/astro-ph/0605267v2 -An Optically Dark GRB Observed by HETE-2: GRB 051022,http://dx.doi.org/10.1093/pasj/58.4.L35 -"Multi-Wavelength Observations of GRB 050820A: An Exceptionally Energetic - Event Followed from Start to Finish",http://dx.doi.org/10.1086/508149 -Puzzled by GRB 060218,http://dx.doi.org/10.1111/j.1745-3933.2006.00270.x -"Optical and X-Ray Observations of GRB 060526: A Complex Afterglow - Consistent with An Achromatic Jet Break",http://dx.doi.org/10.1086/510774 -"Gamma Ray Bursts as standard candles to constrain the cosmological - parameters",http://dx.doi.org/10.1088/1367-2630/8/7/123 -"Extreme Properties Of GRB061007: A Highly Energetic Or A Highly - Collimated Burst?",http://dx.doi.org/10.1063/1.2943443 -"Swift Discovery of Gamma-Ray Bursts without Jet Break Feature in their - X-Ray Afterglows",http://dx.doi.org/10.1086/510610 -"REM observations of GRB 060418 and GRB 060607A: the onset of the - afterglow and the initial fireball Lorentz factor determination",http://dx.doi.org/10.1051/0004-6361:20077388 -"The Prompt Gamma-Ray and Afterglow Energies of Short-Duration Gamma-Ray - Bursts",http://dx.doi.org/10.1086/522195 -"The Troublesome Broadband Evolution of GRB 061126: Does a Grey 
Burst - Imply Grey Dust?",http://dx.doi.org/10.1086/523929 -"Axisymmetric Simulations of Rotating Stellar Collapse in Full General - Relativity --- Criteria for Prompt Collapse to Black Holes",http://dx.doi.org/10.1143/PTP.104.325 -Ultra High Energy Cosmic Ray and UHE Neutrino-Z Showering in Dark Halos,http://arxiv.org/abs/hep-ph/0112014v2 -A study of the prompt and afterglow emission of the Short GRB 061201,http://dx.doi.org/10.1063/1.2943468 -"When GRB afterglows get softer, hard components come into play",http://dx.doi.org/10.1063/1.2943448 -"GRB 060607A: A GRB with Bright Asynchronous Early $X$-ray and Optical - Emissions",http://dx.doi.org/10.1111/j.1365-2966.2008.12859.x -"Global characteristics of GRBs observed with INTEGRAL and the inferred - large population of low-luminosity GRBs",http://dx.doi.org/10.1051/0004-6361:20078399 -"The prompt, high resolution spectroscopic view of the ""naked-eye"" - GRB080319B",http://dx.doi.org/10.1088/0004-637X/694/1/332 -"GRB 071003: Broadband Follow-up Observations of a Very Bright Gamma-Ray - Burst in a Galactic Halo",http://dx.doi.org/10.1086/591961 -GRB 070707: the first short gamma-ray burst observed by INTEGRAL,http://dx.doi.org/10.1051/0004-6361:20079295 -"A Comparison of the Afterglows of Short- and Long-Duration Gamma-Ray - Bursts",http://dx.doi.org/10.1088/0004-637X/701/1/824 -"Prompt optical observations of GRBs with ""Pi of the Sky"" system",http://dx.doi.org/10.1063/1.3155907 -"An up-scattered cocoon emission model of Gamma-Ray Burst high-energy - lags",http://dx.doi.org/10.1088/0004-637X/707/2/1404 -SVOM: a new mission for Gamma-Ray Burst Studies,http://dx.doi.org/10.1063/1.3155898 -High Energy Photons From Gamma Ray Bursts,http://arxiv.org/abs/0910.0687v2 -GeV emission from Gamma Ray Bursts: a radiative fireball?,http://dx.doi.org/10.1111/j.1365-2966.2009.16171.x -"The long rapid decay phase of the extended emission from the short GRB - 080503",http://dx.doi.org/10.1111/j.1365-2966.2010.16492.x -Identification and properties of the photospheric emission in GRB090902B,http://dx.doi.org/10.1088/2041-8205/709/2/L172 -Fermi/LAT Gamma Ray Burst emission models and jet properties,http://arxiv.org/abs/1002.3377v1 -"GRBs as standard candles: There is no ""circularity problem"" (and there - never was)",http://dx.doi.org/10.1016/j.newast.2010.08.001 -"Implications of the Fermi-LAT diffuse gamma-ray measurements on - annihilating or decaying Dark Matter",http://dx.doi.org/10.1088/1475-7516/2010/07/008 -"Quark-Novae in Low-Mass X-ray Binaries with massive neutron stars: A - universal model for short-hard Gamma-Ray Bursts",http://dx.doi.org/10.1088/0004-637X/729/1/60 -"Fermi Observations of GRB 090510: A Short Hard Gamma-Ray Burst with an - Additional, Hard Power-Law Component from 10 keV to GeV Energies",http://dx.doi.org/10.1088/0004-637X/716/2/1178 -On the Formation of Multiple Stellar Populations in Globular Clusters,http://dx.doi.org/10.1088/0004-637X/726/1/36 -"The Lick AGN Monitoring Project: Velocity-Delay Maps from the - Maximum-Entropy Method for Arp 151",http://dx.doi.org/10.1088/2041-8205/720/1/L46 -"The Connection Between Thermal and Non-Thermal Emission in Gamma-ray - Bursts: General Considerations and GRB090902B as a Case Study",http://dx.doi.org/10.1111/j.1365-2966.2011.20052.x -Nuclear Star Clusters from Clustered Star Formation,http://dx.doi.org/10.1088/0004-637X/729/1/35 -"Pre-discovery and Follow-up Observations of the Nearby SN 2009nr: - Implications for Prompt Type Ia SNe",http://dx.doi.org/10.1088/0004-637X/726/2/106 -"On 
the Implications of Late Internal Dissipation for Shallow-Decay - Afterglow Emission and Associated High-Energy Gamma-Ray Signals",http://dx.doi.org/10.1088/0004-637X/732/2/77 -"Fermi/GBM observations of the ultra-long GRB 091024: A burst with an - optical flash",http://dx.doi.org/10.1051/0004-6361/201015891 -"Broad band simulation of Gamma Ray Bursts (GRB) prompt emission in - presence of an external magnetic field",http://dx.doi.org/10.1088/1475-7516/2011/12/001 -"The First Limits on the Ultra-high Energy Neutrino Fluence from - Gamma-ray Bursts",http://dx.doi.org/10.1088/0004-637X/736/1/50 -Fall-Back Disks in Long and Short GRBs,http://dx.doi.org/10.1088/0004-637X/734/1/35 -"Prospects for Detecting Gamma-Ray Bursts at Very High Energies with the - Cherenkov Telescope Array",http://dx.doi.org/10.1111/j.1365-2966.2012.21490.x -"Identifying Subclasses of Long Gamma-Ray Bursts with Cumulative Light - Curve Morphology of Prompt Emissions",http://dx.doi.org/10.1093/pasj/65.1.3 -"Luminosity correlations for gamma-ray bursts and implications for their - prompt and afterglow emission mechanisms",http://dx.doi.org/10.1088/0004-637X/758/1/32 -On the Prompt Signals of Gamma Ray Bursts,http://dx.doi.org/10.1051/eas/1361015 -"Gamma-ray burst optical light-curve zoo: comparison with X-ray - observations",http://dx.doi.org/10.1051/0004-6361/201321221 -"High energy emission of GRB 130427A: evidence for inverse Compton - radiation",http://dx.doi.org/10.1088/0004-637X/776/2/95 -Elliptic flow of thermal photons from event-by-event hydrodynamic model,http://dx.doi.org/10.1103/PhysRevC.88.034901 -High-energy fluxes of atmospheric neutrinos,http://arxiv.org/abs/1306.5907v2 -"A structural evaluation of the tungsten isotopes via thermal neutron - capture",http://dx.doi.org/10.1103/PhysRevC.89.014606 -Optical and X-ray Rest-frame Light Curves of the BAT6 sample,http://dx.doi.org/10.1051/0004-6361/201323361 -"GRB 130925A: an ultra-long Gamma Ray Burst with a dust-echo afterglow, - and implications for the origin of the ultra-long GRBs",http://dx.doi.org/10.1093/mnras/stu1459 -"Spectral evolution in gamma-ray bursts: predictions of the internal - shock model and comparison to observations",http://dx.doi.org/10.1051/0004-6361/201322341 -"The formation of NGC 3603 young starburst cluster: ""prompt"" hierarchical - assembly or monolithic starburst?",http://dx.doi.org/10.1093/mnras/stu2445 -A Dark Year for Tidal Disruption Events,http://dx.doi.org/10.1088/0004-637X/809/2/166 -"The r-process nucleosynthesis in the various jet-like explosions of - magnetorotational core-collapse supernovae",http://dx.doi.org/10.1088/0004-637X/810/2/109 -"Constraints on the Ultra-High Energy Neutrino Flux from Gamma-Ray Bursts - from a Prototype Station of the Askaryan Radio Array",http://dx.doi.org/10.1016/j.astropartphys.2016.12.003 -"Adiabatic Mass Loss in Binary Stars. II. 
From Zero-Age Main Sequence to - the Base of the Giant Branch",http://dx.doi.org/10.1088/0004-637X/812/1/40 -"GRB 131014A: a Laboratory to Study the Thermal-Like and Non-Thermal - Emissions in Gamma-Ray Bursts, and the new - L$_\mathrm{i}^\mathrm{nTh}$-E$_\mathrm{peak,i}^\mathrm{nTh,rest}$ relation",http://dx.doi.org/10.1088/0004-637X/814/1/10 -Happy Birthday Swift: Ultra-long GRB141121A and its broad-band Afterglow,http://dx.doi.org/10.1088/0004-637X/812/2/122 -"Multi-messenger astronomy of gravitational-wave sources with flexible - wide-area radio transient surveys",http://dx.doi.org/10.1088/0004-637X/812/2/168 -"Analysis Framework for the Prompt Discovery of Compact Binary Mergers in - Gravitational-wave Data",http://dx.doi.org/10.1103/PhysRevD.95.042001 -A fundamental plane for long gamma-ray bursts with X-ray plateaus,http://dx.doi.org/10.3847/2041-8205/825/2/L20 -"Prompt emission from GRB 150915A in the GeV energy range detected at - ground by the New-Tupi detector: A review",http://arxiv.org/abs/1605.04274v2 -"QCD corrections to vector boson pair production in gluon fusion - including interference effects with off-shell Higgs at the LHC",http://dx.doi.org/10.1007/JHEP07(2016)087 -"Experimental observation of $β$-delayed neutrons from $^{9}$Li as a - way to study short-pulse laser-driven deuteron production",http://arxiv.org/abs/1605.05702v1 -"D-meson production in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV - and in pp collisions at $\sqrt{s}=7$ TeV",http://dx.doi.org/10.1103/PhysRevC.94.054908 -How Special Is GRB 170817A?,http://dx.doi.org/10.3847/2041-8213/aaa66c -"Off-Axis Emission of Short GRB Jets from Double Neutron Star Mergers and - GRB 170817A",http://dx.doi.org/10.1093/mnras/sty2308 -"Direct photon spectrum and elliptic flow produced from Pb+Pb collisions - at $\sqrt{s_{NN}}=2.76$ TeV at the CERN Large Hadron Collider within an - integrated hydrokinetic model",http://dx.doi.org/10.1103/PhysRevC.97.054907 -"Low frequency view of GW 170817/GRB 170817A with the Giant Meterwave - Radio Telescope",http://dx.doi.org/10.3847/1538-4357/aae1a6 -"Off-axis afterglow light curves and images from 2D hydrodynamic - simulations of double-sided GRB jets in a stratified external medium",http://dx.doi.org/10.1093/mnras/sty2454 -"Measurement of Reactor Antineutrino Oscillation Amplitude and Frequency - at RENO",http://dx.doi.org/10.1103/PhysRevLett.121.201801 -Late afterglows of GW/GRB 170817A,http://arxiv.org/abs/1806.11161v3 -Multi-color blackbody emission in GRB 081221,http://dx.doi.org/10.3847/1538-4357/aadc07 -"A luminosity distribution for kilonovae based on short gamma-ray burst - afterglows",http://dx.doi.org/10.1093/mnras/stz891 -Production of Quarkonia and Heavy Flavor States in ATLAS,http://arxiv.org/abs/1905.13185v1 -"A global numerical model of the prompt emission in short gamma-ray - bursts",http://dx.doi.org/10.3847/1538-4357/ac0cf9 -"GRB 140102A: Insight into Prompt Spectral Evolution and Early Optical - Afterglow Emission",http://dx.doi.org/10.1093/mnras/stab1573 -Nucleosynthesis in dynamical and torus ejecta of compact binary mergers,http://arxiv.org/abs/1504.05448v2 -"Associated production of a quarkonium and a Z boson at one loop in a - quark-hadron-duality approach",http://dx.doi.org/10.1007/JHEP10(2016)153 -"Measurement of prompt D$^0$, D$^+$, D$^{*+}$, and D$^+_s$ production in - p-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 5.02 TeV",http://dx.doi.org/10.1007/JHEP12(2019)092 -"Prompt ${J/ψ}$-pair production at the LHC: impact of loop-induced - contributions and of the 
colour-octet mechanism",http://arxiv.org/abs/1906.10049v1 -"Supernovae and their host galaxies -- VI. Normal Type Ia and 91bg-like - supernovae in ellipticals",http://dx.doi.org/10.1093/mnras/stz2585 -"Revisiting the Relationship between the Long GRB Rate and Cosmic Star - Formation History Based on a Large Swift Sample",http://dx.doi.org/10.3847/1538-4365/ab88da -"A Multilevel Empirical Bayesian Approach to Estimating the Unknown - Redshifts of 1366 BATSE Catalog Long-Duration Gamma-Ray Bursts",http://dx.doi.org/10.3847/1538-4357/abb9b7 -"GRB 080503: Implications of a Naked Short Gamma-Ray Burst Dominated by - Extended Emission",http://dx.doi.org/10.1088/0004-637X/696/2/1871 -Time-resolved spectral correlations of long-duration Gamma-Ray Bursts,http://dx.doi.org/10.1111/j.1365-2966.2008.14271.x -The supernova rate and delay time distribution in the Magellanic Clouds,http://dx.doi.org/10.1111/j.1365-2966.2010.16988.x -"Investigations of three, four, and five-particle exit channels of levels - in light nuclei created using a 9C beam",http://dx.doi.org/10.1103/PhysRevC.84.014320 -"Observational constraints on the external shock prior emission - hypothesis of GRBs",http://dx.doi.org/10.1111/j.1365-2966.2012.20611.x -"Constraints on the Synchrotron Shock Model for the Fermi GBM Gamma-Ray - Burst 090820A",http://dx.doi.org/10.1088/0004-637X/741/1/24 -On The Origin Of High Energy Correlations in Gamma-ray Bursts,http://dx.doi.org/10.1088/0004-637X/747/2/146 -"Extended calibration range for prompt photon emission in ion beam - irradiation",http://dx.doi.org/10.1016/j.nima.2014.01.047 -"Radiation Mechanism and Jet Composition of Gamma-Ray Bursts and GeV-TeV - selected Radio Loud Active Galactic Nuclei",http://dx.doi.org/10.1088/2041-8205/774/1/L5 -"Comprehensive nucleosynthesis analysis for ejecta of compact binary - mergers",http://dx.doi.org/10.1093/mnras/stv009 -"Prospects for Characterizing Host Stars of the Planetary System - Detections Predicted for the Korean Microlensing Telescope Network",http://dx.doi.org/10.1088/0004-637X/800/1/58 -The maximum isotropic energy of gamma-ray bursts,http://dx.doi.org/10.3847/1538-4357 -Early X-ray Flares in GRBs,http://dx.doi.org/10.3847/1538-4357/aa9e8b -A Study of the Gamma-Ray Burst Fundamental Plane,http://dx.doi.org/10.3847/1538-4357/aa8a6b -Dielectron production in proton-proton collisions at $\sqrt{s}$ = 7 TeV,http://dx.doi.org/10.1007/JHEP09(2018)064 -"Radiative-capture cross sections for the $^{139}$La($n,γ$) - reactionusing thermal neutrons and structural properties of $^{140}$La",http://dx.doi.org/10.1103/PhysRevC.99.024310 -"A detail study of the LHC and TEVATRON hadron-hadron prompt-photon pair - production experiments in the angular ordering constraint k_t-factorization - approaches",http://dx.doi.org/10.1088/1361-6471/ab2e94 -GRB 190114C: from prompt to afterglow?,http://dx.doi.org/10.1051/0004-6361/201935214 -"Analysis and modelling of the multi-wavelength observations of the - luminous GRB 190114C",http://dx.doi.org/10.3847/2041-8213/ab2ae4 -Searching for GRBs at VHE with MAGIC: the status before CTA,http://arxiv.org/abs/1909.02802v1 -The rise and fall of the high-energy afterglow emission of GRB 180720B,http://dx.doi.org/10.1051/0004-6361/201936765 -"Spectral analysis of Fermi-LAT gamma-ray bursts with known redshift and - their potential use as cosmological standard candles",http://dx.doi.org/10.3847/1538-4357/ab4e11 -"Fission in a microscopic framework: from basic science to support for - 
applications",http://dx.doi.org/10.1051/epjconf/202125600016 -"Low frequency view of GRB 190114C reveals time varying shock - micro-physics",http://dx.doi.org/10.1093/mnras/stab1050 -"AGILE Observations of Two Repeating Fast Radio Bursts with Small - Intrinsic Dispersion Measures",http://dx.doi.org/10.3847/2041-8213/ab720a -"Possibility of Disinfection of SARS-CoV-2 (COVID-19) in Human - Respiratory Tract by Controlled Ethanol Vapor Inhalation",http://arxiv.org/abs/2003.12444v1 -"Studies of $X(3872)$ and $ψ(2S)$ production in $p\bar{p}$ collisions - at 1.96 TeV",http://dx.doi.org/10.1103/PhysRevD.102.072005 -Polarization of GRB Prompt Emission and its Application to POLAR's Data,http://dx.doi.org/10.1088/1674-4527/21/3/055 -Systematics of prompt black-hole formation in neutron star mergers,http://dx.doi.org/10.1103/PhysRevD.103.123004 -"Afterglow Synchrotron Radiations follow the $L_{\rm p, iso}-E_{\rm - p,z}-Γ_0$ relation of Gamma-Ray Bursts? Cases of GRBs 190114C, 130427A, - and 180720B",http://dx.doi.org/10.3847/2041-8213/abc330 -"Analysis of Prospective Super-Symmetry Inherent in the $pp$ Collision - Data at $7$ TeV from CMS Collaboration Using Novel Two-Dimensional - Multifractal-Detrended Fluctuation Analysis Method with Rectangular Scale",http://arxiv.org/abs/2012.00442v1 -"A Time-Of-Flight-Based Reconstruction for Real-Time Prompt-Gamma Imaging - in Protontherapy",http://dx.doi.org/10.1088/1361-6560/ac03ca -Temporal Evolution of Prompt GRB Polarization,http://dx.doi.org/10.1093/mnras/stab1013 -Classification of COVID-19 via Homology of CT-SCAN,http://arxiv.org/abs/2102.10593v1 -A nearby repeating fast radio burst in the direction of M81,http://dx.doi.org/10.3847/2041-8213/abeaa6 -"A Minimalist Dataset for Systematic Generalization of Perception, - Syntax, and Semantics",http://arxiv.org/abs/2103.01403v3 -"Nearby SN-Associated GRB~190829A: Environment, Jet Structure, and VHE - Gamma-Ray Afterglows",http://dx.doi.org/10.3847/1538-4357/ac0c7f -"Multi-wavelength view of the close-by GRB 190829A sheds light on - gamma-ray burst physics",http://dx.doi.org/10.3847/2041-8213/ac6c28 -Urdu & Hindi Poetry Generation using Neural Networks,http://arxiv.org/abs/2107.14587v1 -"Identification of Pediatric Respiratory Diseases Using Fine-grained - Diagnosis System",http://dx.doi.org/10.1016/j.jbi.2021.103754 -"Machine Learning-Based COVID-19 Patients Triage Algorithm using - Patient-Generated Health Data from Nationwide Multicenter Database",http://dx.doi.org/10.1007/s40121-022-00600-4 -Standing on the Shoulders of Giant Frozen Language Models,http://arxiv.org/abs/2204.10019v1 -Flamingo: a Visual Language Model for Few-Shot Learning,http://arxiv.org/abs/2204.14198v2 -"Online triggers for supernova and pre-supernova neutrino detection with - cryogenic detectors",http://dx.doi.org/10.1088/1475-7516/2022/10/024 -"Language Models with Image Descriptors are Strong Few-Shot - Video-Language Learners",http://arxiv.org/abs/2205.10747v4 -"High time resolution search for prompt radio emission from the long GRB - 210419A with the Murchison Widefield Array",http://dx.doi.org/10.1093/mnras/stac1483 -"AGILE Observations of GRB 220101A: A ""New Year's Burst"" with an - Exceptionally Huge Energy Release",http://dx.doi.org/10.3847/1538-4357/ac746c -"What Can Transformers Learn In-Context? 
A Case Study of Simple Function - Classes",http://arxiv.org/abs/2208.01066v3 -"Late-time accretion in neutron star mergers: implications for short - gamma-ray bursts and kilonovae",http://dx.doi.org/10.1093/mnras/stad1336 -"The quest for new correlations in the realm of the Gamma-Ray Burst -- - Supernova connection",http://dx.doi.org/10.3847/1538-4357/ac8b77 -"The structure of the ultrarelativistic prompt emission phase and the - properties of the black hole in GRB 180720B",http://dx.doi.org/10.1140/epjc/s10052-022-10750-x -"Properties of the Prompt Optical Counterpart Arising from the Cooling of - Electrons in Gamma-Ray Bursts",http://arxiv.org/abs/2209.11847v1 -Re-Imagen: Retrieval-Augmented Text-to-Image Generator,http://arxiv.org/abs/2209.14491v3 -"Dynamic Prompt Learning via Policy Gradient for Semi-structured - Mathematical Reasoning",http://arxiv.org/abs/2209.14610v3 -Binding Language Models in Symbolic Languages,http://arxiv.org/abs/2210.02875v2 -Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP,http://arxiv.org/abs/2210.04150v3 -Can Artificial Intelligence Reconstruct Ancient Mosaics?,http://dx.doi.org/10.1080/00393630.2023.2227798 -Upgrade and commissioning of the ALICE muon spectrometer,http://arxiv.org/abs/2210.12431v1 -Multi-lingual Evaluation of Code Generation Models,http://arxiv.org/abs/2210.14868v3 -"eDiff-I: Text-to-Image Diffusion Models with an Ensemble of Expert - Denoisers",http://arxiv.org/abs/2211.01324v5 -"Prompt emission and early optical afterglow of VHE detected GRB 201015A - and GRB 201216C: onset of the external forward shock",http://dx.doi.org/10.3847/1538-4357/aca414 -PAL: Program-aided Language Models,http://arxiv.org/abs/2211.10435v2 -RoentGen: Vision-Language Foundation Model for Chest X-ray Generation,http://arxiv.org/abs/2211.12737v1 -"GPT-3-driven pedagogical agents for training children's curious - question-asking skills",http://dx.doi.org/10.1007/s40593-023-00340-7 -"Navigation as Attackers Wish? Towards Building Byzantine-Robust Embodied - Agents under Federated Learning",http://arxiv.org/abs/2211.14769v3 -"A Cosmological Fireball with Sixteen-Percent Gamma-Ray Radiative - Efficiency",http://dx.doi.org/10.3847/2041-8213/acb99d -Fine-tuned CLIP Models are Efficient Video Learners,http://arxiv.org/abs/2212.03640v3 -"Temporal and Spectral Evolution of Gamma-ray Burst Broad Pulses: - Identification of High Latitude Emission in the Prompt Emission",http://dx.doi.org/10.3847/1538-4357/acc581 -Position-guided Text Prompt for Vision-Language Pre-training,http://arxiv.org/abs/2212.09737v2 -Overview of quarkonium production in ALICE,http://arxiv.org/abs/2212.11524v1 -GRB 200829A: External Shock Origin of the Very Early Prompt Emission?,http://dx.doi.org/10.3847/1538-4357/acaf68 -Diminished Diversity-of-Thought in a Standard Large Language Model,http://arxiv.org/abs/2302.07267v6 -Synchrotron Radiation Dominates the Extremely Bright GRB 221009A,http://dx.doi.org/10.3847/2041-8213/acc84b -Is ChatGPT a Good NLG Evaluator? 
A Preliminary Study,http://arxiv.org/abs/2303.04048v3 -"Translating Radiology Reports into Plain Language using ChatGPT and - GPT-4 with Prompt Learning: Promising Results, Limitations, and Potential",http://arxiv.org/abs/2303.09038v3 -"A broken ""$α$-intensity"" relation caused by the evolving - photosphere emission and the nature of the extraordinarily bright GRB 230307A",http://arxiv.org/abs/2303.11083v3 -GRB-SN Association within the Binary-Driven Hypernova Model,http://dx.doi.org/10.3847/1538-4357/ace721 -"ArguGPT: evaluating, understanding and identifying argumentative essays - generated by GPT models",http://arxiv.org/abs/2304.07666v2 -"The Segment Anything foundation model achieves favorable brain tumor - autosegmentation accuracy on MRI to support radiotherapy treatment planning",http://arxiv.org/abs/2304.07875v1 -"ImpressionGPT: An Iterative Optimizing Framework for Radiology Report - Summarization with ChatGPT",http://arxiv.org/abs/2304.08448v2 -"NaturalSpeech 2: Latent Diffusion Models are Natural and Zero-Shot - Speech and Singing Synthesizers",http://arxiv.org/abs/2304.09116v3 -Segment Anything Model for Medical Images?,http://arxiv.org/abs/2304.14660v4 -VPGTrans: Transfer Visual Prompt Generator across LLMs,http://arxiv.org/abs/2305.01278v2 -"""I'm fully who I am"": Towards Centering Transgender and Non-Binary - Voices to Measure Biases in Open Language Generation",http://dx.doi.org/10.1145/3593013.3594078 -"Towards Explainable In-the-Wild Video Quality Assessment: A Database and - a Language-Prompted Approach",http://dx.doi.org/10.1145/3581783.3611737 -Gene Set Summarization using Large Language Models,http://arxiv.org/abs/2305.13338v2 -Reasoning with Language Model is Planning with World Model,http://arxiv.org/abs/2305.14992v2 -"Taming AI Bots: Controllability of Neural States in Large Language - Models",http://arxiv.org/abs/2305.18449v1 -Unifying (Machine) Vision via Counterfactual World Modeling,http://arxiv.org/abs/2306.01828v1 -AHA!: Facilitating AI Impact Assessment by Generating Examples of Harms,http://arxiv.org/abs/2306.03280v1 -"Transformers as Statisticians: Provable In-Context Learning with - In-Context Algorithm Selection",http://arxiv.org/abs/2306.04637v2 -"Learning Profitable NFT Image Diffusions via Multiple Visual-Policy - Guided Reinforcement Learning",http://arxiv.org/abs/2306.11731v2 -Recommender Systems in the Era of Large Language Models (LLMs),http://arxiv.org/abs/2307.02046v2 -"Speed and Acceleration of CMEs Associated with Sustained Gamma-Ray - Emission Events Observed by Fermi/LAT",http://arxiv.org/abs/2307.05585v1 -KPM: A Flexible and Data-Driven K-Process Model for Nucleosynthesis,http://arxiv.org/abs/2307.05691v1 -"Question Decomposition Improves the Faithfulness of Model-Generated - Reasoning",http://arxiv.org/abs/2307.11768v2 -"Tool Documentation Enables Zero-Shot Tool-Usage with Large Language - Models",http://arxiv.org/abs/2308.00675v1 -"Measurements of direct-photon production in Pb-Pb collisions at - $\sqrt{\rm s_{NN}}=5.02$ TeV and $\sqrt{\rm s_{NN}}=2.76$ TeV with the ALICE - experiment",http://arxiv.org/abs/2308.02401v1 -"GRB Optical and X-ray Plateau Properties Classifier Using Unsupervised - Machine Learning",http://arxiv.org/abs/2308.14288v4 -"A LOFAR prompt search for radio emission accompanying X-ray flares in - GRB 210112A",http://dx.doi.org/10.1093/mnras/stad2670 -"Dielectron production in central Pb$-$Pb collisions at - $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV",http://arxiv.org/abs/2308.16704v1 -"MA-SAM: Modality-agnostic SAM 
Adaptation for 3D Medical Image - Segmentation",http://arxiv.org/abs/2309.08842v1 -Large language models can accurately predict searcher preferences,http://arxiv.org/abs/2309.10621v1 -"ReConcile: Round-Table Conference Improves Reasoning via Consensus among - Diverse LLMs",http://arxiv.org/abs/2309.13007v1 -"MentaLLaMA: Interpretable Mental Health Analysis on Social Media with - Large Language Models",http://arxiv.org/abs/2309.13567v2 -"Natural Language based Context Modeling and Reasoning with LLMs: A - Tutorial",http://arxiv.org/abs/2309.15074v1 -"Reason for Future, Act for Now: A Principled Framework for Autonomous - LLM Agents with Provable Sample Efficiency",http://arxiv.org/abs/2309.17382v2 -The Dawn of LMMs: Preliminary Explorations with GPT-4V(ision),http://arxiv.org/abs/2309.17421v2 -"BooookScore: A systematic exploration of book-length summarization in - the era of LLMs",http://arxiv.org/abs/2310.00785v2 -How FaR Are Large Language Models From Agents with Theory-of-Mind?,http://arxiv.org/abs/2310.03051v1 -"Benchmarking a foundation LLM on its ability to re-label structure names - in accordance with the AAPM TG-263 report",http://arxiv.org/abs/2310.03874v1 -"LoFT: Local Proxy Fine-tuning For Improving Transferability Of - Adversarial Attacks Against Large Language Model",http://arxiv.org/abs/2310.04445v2 -"Prompt-to-OS (P2OS): Revolutionizing Operating Systems and - Human-Computer Interaction with Integrated AI Generative Models",http://arxiv.org/abs/2310.04875v1 -"Bridging Code Semantic and LLMs: Semantic Chain-of-Thought Prompting for - Code Generation",http://arxiv.org/abs/2310.10698v2 -Entity Matching using Large Language Models,http://arxiv.org/abs/2310.11244v1 -Object-aware Inversion and Reassembly for Image Editing,http://arxiv.org/abs/2310.12149v1 -"$Λ$-Split: A Privacy-Preserving Split Computing Framework for - Cloud-Powered Generative AI",http://arxiv.org/abs/2310.14651v1 -"LINC: A Neurosymbolic Approach for Logical Reasoning by Combining - Language Models with First-Order Logic Provers",http://arxiv.org/abs/2310.15164v1 -"Ultralow-power all-optical switching via a chiral Mach-Zehnder - interferometer",http://dx.doi.org/10.1364/OE.453493 -How to Construct a GRB Engine?,http://arxiv.org/abs/astro-ph/0202403v2 -"Integrating HMM-Based Speech Recognition With Direct Manipulation In A - Multimodal Korean Natural Language Interface",http://arxiv.org/abs/cmp-lg/9611005v1 -"Molecular Chemical Engines: Pseudo-Static Processes and the Mechanism of - Energy Transduction",http://dx.doi.org/10.1143/JPSJ.74.2973 -"Charge-transfer interfaces between metal and redox arylamine molecular - films: As probed with anode interfacial engineering approach in single-layer - organic diodes",http://arxiv.org/abs/cond-mat/0508453v1 -Exploring the operation of a tiny heat engine,http://dx.doi.org/10.1016/j.physa.2007.05.035 -Open source software and peer review,http://arxiv.org/abs/cs/0308040v2 -"From General Systems to Soft Systems to Soft Computing: Applications for - Large and Complex Real World Systems",http://arxiv.org/abs/cs/0511018v2 -Checking C++ Programs for Dimensional Consistency,http://arxiv.org/abs/cs/0512026v1 -"A Formal Architecture-Centric Model-Driven Approach for the Automatic - Generation of Grid Applications",http://arxiv.org/abs/cs/0601118v1 -Difficulties in the Implementation of Quantum Computers,http://arxiv.org/abs/cs/0602096v1 -"Linux, Open Source and Unicode",http://arxiv.org/abs/cs/0609024v2 -A XML Schema Definition based Universal User 
Interface,http://arxiv.org/abs/cs/0609025v2 -"Negotiation in collaborative assessment of design solutions: an - empirical study on a Concurrent Engineering process",http://arxiv.org/abs/cs/0702006v1 -A Susy Phase Transition as Central Engine,http://arxiv.org/abs/hep-ph/0501075v1 -"Matrix Models, Geometric Engineering and Elliptic Genera",http://dx.doi.org/10.1088/1126-6708/2008/03/069 -Geo-engineering Gone Awry: A New Partial Solution of Fermi's Paradox,http://arxiv.org/abs/physics/0308058v1 -"Reforming a large lecture modern physics course for engineering majors - using a PER-based design",http://dx.doi.org/10.1063/1.2508685 -"Engineering arbitrary motional ionic state through realistic - intensity-fluctuating laser pulses",http://dx.doi.org/10.1103/PhysRevA.63.053813 -"Kinetic modelling of a surrogate diesel fuel applied to 3D auto-ignition - in HCCI engines",http://arxiv.org/abs/0706.2062v1 -"Wave-front engineering by Huygens-Fresnel principle for nonlinear - optical interactions in domain engineered structures",http://dx.doi.org/10.1103/PhysRevLett.100.063902 -"The Transfer of Knowledge from Physics and Mathematics to Engineering - Applications",http://arxiv.org/abs/0708.2577v1 -"Hepatocyte Aggregates: Methods of Preparation in the Microgravity - Simulating Bioreactor Use in Tissue Engineering",http://arxiv.org/abs/0801.3382v1 -Modeling an efficient Brownian heat engine,http://dx.doi.org/10.1140/epjb/e2008-00308-5 -A Paradigm for Spreadsheet Engineering Methodologies,http://arxiv.org/abs/0802.3919v1 -Microtribological Property of Vertically Aligned Carbon Nanotube Film,http://arxiv.org/abs/0805.0473v1 -Efficiency at maximum power of Feynman's ratchet as a heat engine,http://dx.doi.org/10.1088/1751-8113/41/31/312003 -"Synchronization Engineering: Theoretical Framework and Application to - Dynamical Clustering",http://dx.doi.org/10.1063/1.2927531 -Increase of Software Safety,http://arxiv.org/abs/0807.0161v1 -A new model of the Central Engine of GRB and the Cosmic Jets,http://arxiv.org/abs/0902.2408v2 -Personal report of the 3rd ECMDA-FA'07 conference,http://arxiv.org/abs/0903.3797v1 -"Proposition d'une methode de qualification et de selection d'un logiciel - d'analyse et de suivi du referencement dans les moteurs de recherche",http://arxiv.org/abs/0905.4433v1 -"Predictors Of Java Programming Self Efficacy Among Engineering Students - In A Nigerian University",http://arxiv.org/abs/0909.0074v1 -Pessimistic Testing,http://arxiv.org/abs/0910.0996v1 -"Method Chunks Selection by Multicriteria Techniques: an Extension of the - Assembly-based Approach",http://arxiv.org/abs/0911.1495v1 -Posynomial Geometric Programming Problems with Multiple Parameters,http://arxiv.org/abs/1001.3493v1 -"Handling Overload Conditions In High Performance Trustworthy Information - Retrieval Systems",http://arxiv.org/abs/1004.4460v1 -Rolling Release Siege Engines: Teaching an Old Machine a New Trick,http://arxiv.org/abs/1005.0176v1 -Prospects of nuclear propulsion for space physics,http://arxiv.org/abs/1005.0869v1 -"Spreadsheets Grow Up: Three Spreadsheet Engineering Methodologies for - Large Financial Planning Models",http://arxiv.org/abs/1008.4174v1 -"Reply to comment of Bister and co-authors on the critique of the - dissipative heat engine",http://arxiv.org/abs/1010.5753v1 -Link Spam Detection based on DBSpamClust with Fuzzy C-means Clustering,http://dx.doi.org/10.5121/ijngn.2010.2401 -A Domain Specific Ontology Based Semantic Web Search Engine,http://arxiv.org/abs/1102.0695v1 -"The BinProlog Experience: 
Architecture and Implementation Choices for - Continuation Passing Prolog and First-Class Logic Engines",http://arxiv.org/abs/1102.1178v1 -"Search-based software test data generation using evolutionary - computation",http://arxiv.org/abs/1103.0125v1 -"A study of the existing libraries to read from configuration files (from - C++)",http://arxiv.org/abs/1103.3021v1 -"High-Throughput Biologically Optimized Search Engineering Approach to - Synthetic Biology",http://arxiv.org/abs/1103.5490v1 -"Efficiency at maximum power of minimally nonlinear irreversible heat - engines",http://dx.doi.org/10.1209/0295-5075/97/10004 -A Process Algebra for Supervisory Coordination,http://dx.doi.org/10.4204/EPTCS.60.3 -Large-scale Complex IT Systems,http://arxiv.org/abs/1109.3444v1 -"A Framework for Prefetching Relevant Web Pages using Predictive - Prefetching Engine (PPE)",http://arxiv.org/abs/1109.6206v1 -Tackling the testing migration problem with SAT-Solvers,http://arxiv.org/abs/1204.2974v1 -A Primer on Differential Forms,http://arxiv.org/abs/1206.3323v1 -Fluctuation relations for heat engines in time-periodic steady states,http://dx.doi.org/10.1088/1751-8113/45/46/465001 -Fermi velocity engineering in graphene by substrate modification,http://dx.doi.org/10.1038/srep00590 -A Survey on Web Spam Detection Methods: Taxonomy,http://dx.doi.org/10.5121/ijnsa.2012.4510 -"Use of Repositories and its Significance for Engineering Education / El - Uso de Repositorios y su Importancia para la Educación en Ingeniería",http://arxiv.org/abs/1210.5224v2 -"Soliton compression to few-cycle pulses with a high quality factor by - engineering cascaded quadratic nonlinearities",http://dx.doi.org/10.1364/OE.20.027071 -"Study on the Availability Prediction of the Reconfigurable Networked - Software System",http://arxiv.org/abs/1210.7600v1 -Shark: SQL and Rich Analytics at Scale,http://arxiv.org/abs/1211.6176v1 -Bat Algorithm: A Novel Approach for Global Engineering Optimization,http://dx.doi.org/10.1108/02644401211235834 -Entropic anomaly and maximal efficiency of microscopic heat engines,http://dx.doi.org/10.1103/PhysRevE.87.050102 -"An ontology-based approach for semantics ranking of the web search - engines results",http://dx.doi.org/10.1109/ICMCS.2012.6320318 -"Information model for model driven safety requirements management of - complex systems",http://dx.doi.org/10.1007/978-3-642-15654-0 -Analytical model for Stirling cycle machine design,http://dx.doi.org/10.1016/j.enconman.2010.02.010 -Requirements Issues in SoC and SoS,http://arxiv.org/abs/1301.5470v1 -Preliminary Energy Considerations in a Relativistic Engine,http://arxiv.org/abs/1302.2537v3 -An Empirical Study of Path Feasibility Queries,http://arxiv.org/abs/1302.4798v1 -Knowledge Engineering for Large Belief Networks,http://arxiv.org/abs/1302.6839v1 -Top-Down and Bottom-Up Approach for Model-Based Testing of Product Lines,http://dx.doi.org/10.4204/EPTCS.111.7 -Statistical Proof Pattern Recognition: Automated or Interactive?,http://arxiv.org/abs/1303.1419v1 -Effect of Query Formation on Web Search Engine Results,http://arxiv.org/abs/1303.1695v1 -RuleRunner technical report,http://arxiv.org/abs/1306.0810v1 -Software Testing Models Against Information Security Requirements,http://arxiv.org/abs/1306.1958v1 -Case Study Based Software Engineering Project Development: State of Art,http://dx.doi.org/10.5120/7834-1132 -"A Case-Study on Teaching Undergraduate-Level Software Engineering Course - Using Inverted-Classroom, Large-Group, Real-Client and Studio-Based - Instruction 
Model",http://arxiv.org/abs/1309.0714v1 -A Hybrid Model of a Genetic Regulatory Network in Mammalian Sclera,http://dx.doi.org/10.4204/EPTCS.125.8 -Binding energies of composite boson clusters using the Szilard engine,http://arxiv.org/abs/1309.6493v2 -Powerful energy harvester based on resonant-tunneling quantum wells,http://dx.doi.org/10.1088/1367-2630/15/9/095021 -An NMF solution for the Flowgraphs case at the TTC 2013,http://dx.doi.org/10.4204/EPTCS.135.5 -"Applying AOSE Concepts to Model Crosscutting Variability in Variant-Rich - Processes",http://dx.doi.org/10.1109/SEAA.2011.58 -"An NMF solution for the Petri Nets to State Charts case study at the TTC - 2013",http://dx.doi.org/10.4204/EPTCS.135.12 -M3: An Open Model for Measuring Code Artifacts,http://arxiv.org/abs/1312.1188v1 -"Java File Security System (JFSS) Evaluation Using Software Engineering - Approaches",http://arxiv.org/abs/1312.1817v1 -Optimal Efficiency of a Noisy Quantum Heat Engine,http://dx.doi.org/10.1103/PhysRevE.90.012119 -"Can pseudocomplementary peptide nucleic acid nucleases (pcPNANs) be a - new tool for genetic engineering?",http://arxiv.org/abs/1401.8223v4 -Quality-aware Approach for Engineering Self-adaptive Software Systems,http://dx.doi.org/10.5121/csit.2014.4117 -Software-defined Quantum Communication Systems,http://dx.doi.org/10.1117/1.OE.53.8.086103 -Fluctuation Relation for Quantum Heat Engines and Refrigerators,http://dx.doi.org/10.1088/1751-8113/47/24/245001 -High-temperature ferroelectricity in SrTiO$_{3}$ crystals,http://arxiv.org/abs/1404.5272v1 -A formal experiment to assess the efficacy of certification standards,http://arxiv.org/abs/1404.7542v1 -"A Methodology for the Diagnostic of Aircraft Engine Based on Indicators - Aggregation",http://dx.doi.org/10.1007/978-3-319-08976-8_11 -"Cyber-Physical Systems -- eine Herausforderung an die - Automatisierungstechnik?",http://arxiv.org/abs/1409.0385v1 -"Enablers and Impediments for Collaborative Research in Software Testing: - An Empirical Exploration",http://arxiv.org/abs/1409.0759v2 -"Assessment of classification techniques on predicting success or failure - of Software reusability",http://arxiv.org/abs/1409.2223v1 -Descriptive Control Theory: A Proposal,http://arxiv.org/abs/1409.3560v2 -"A New Course on Creativity in an Engineering Program: Foundations and - Issues",http://dx.doi.org/10.1109/IDAM.2014.6912706 -Engineering Autonomous Driving Software,http://arxiv.org/abs/1409.6579v1 -"Software & Systems Engineering Process and Tools for the Development of - Autonomous Driving Intelligence",http://arxiv.org/abs/1409.7121v1 -Functional Time Series Models for Ultrafine Particle Distributions,http://arxiv.org/abs/1412.1843v1 -"Multi-Platform Generative Development of Component & Connector Systems - using Model and Code Libraries",http://arxiv.org/abs/1412.2962v1 -The Influence of the Generator's License on Generated Artifacts,http://arxiv.org/abs/1412.2963v1 -Benchmarking Obfuscators of Functionality,http://arxiv.org/abs/1501.02885v1 -Ground-state Stabilization of Open Quantum Systems by Dissipation,http://dx.doi.org/10.1016/j.automatica.2015.11.041 -Quantum Equivalence and Quantum Signatures in Heat Engines,http://dx.doi.org/10.1103/PhysRevX.5.031044 -Superradiant Quantum Heat Engine,http://dx.doi.org/10.1038/srep12953 -"Proceedings 12th International Workshop on Formal Engineering approaches - to Software Components and Architectures",http://dx.doi.org/10.4204/EPTCS.178 -"On the transition period of implementing new mathematics curriculum for - Foundation 
Engineering students",http://arxiv.org/abs/1503.08921v1 -A Review Paper: Noise Models in Digital Image Processing,http://arxiv.org/abs/1505.03489v1 -"The Affect of Software Developers: Common Misconceptions and - Measurements",http://dx.doi.org/10.1109/CHASE.2015.23 -Algorithm Engineering in Robust Optimization,http://arxiv.org/abs/1505.04901v3 -Experience Report: Developing the Servo Web Browser Engine using Rust,http://arxiv.org/abs/1505.07383v1 -"A Statistical Analysis of the Performance of First Year Engineering - Students at UNSW Canberra and the Impact of the State where They Undertook - Year 12 Study",http://arxiv.org/abs/1506.07400v1 -Design Patterns for Self Adaptive Systems Engineering,http://dx.doi.org/10.5121/ijsea.2015.6402 -Three-terminal heat engine and refrigerator based on superlattices,http://dx.doi.org/10.1016/j.physe.2015.08.002 -"Radio Afterglow Rebrightening: Evidence for Multiple Active Phases in - Gamma-Ray Burst Central Engines",http://dx.doi.org/10.1007/s10509-015-2488-z -A single-atom heat engine,http://dx.doi.org/10.1126/science.aad6320 -"Artificial quantum thermal bath: Engineering temperature for a many-body - quantum system",http://dx.doi.org/10.1103/PhysRevA.94.052301 -Efficiency at and near Maximum Power of Low-Dissipation Heat Engines,http://dx.doi.org/10.1103/PhysRevE.92.052125 -"Generating Domain-Specific Transformation Languages for Component & - Connector Architecture Descriptions",http://arxiv.org/abs/1510.08981v1 -Born-Infeld AdS Black Holes as Heat Engines,http://dx.doi.org/10.1088/0264-9381/33/13/135001 -Model-based Hazard and Impact Analysis,http://arxiv.org/abs/1512.02759v1 -"Mechanical Self-Assembly of a Strain-Engineered Flexible Layer: - Wrinkling, Rolling, and Twisting",http://arxiv.org/abs/1512.04967v1 -"Operational characteristics of single particle heat engines and - refrigerators with time asymmetric protocol",http://dx.doi.org/10.1142/S0217979216502192 -"On Which Skills do Indian Universities Evaluate Software Engineering - Students?",http://arxiv.org/abs/1601.01796v1 -A multipurpose information engine that can go beyond the Carnot limit,http://dx.doi.org/10.1088/1742-5468/2016/10/103207 -The power of a critical heat engine,http://dx.doi.org/10.1038/ncomms11895 -"Proceedings of the 13th International Workshop on Formal Engineering - Approaches to Software Components and Architectures",http://dx.doi.org/10.4204/EPTCS.205 -Document Selection in a Distributed Search Engine Architecture,http://dx.doi.org/10.5829/idosi.mejsr.2015.23.07.22398 -"Attainability of maximum work and the reversible efficiency from - minimally nonlinear irreversible heat engines",http://arxiv.org/abs/1604.01912v4 -An Aerodynamic Analysis of a Robustly Redesigned Modern Aero-Engine Fan,http://arxiv.org/abs/1604.02345v1 -"Inverse engineering rigorous adiabatic Hamiltonian for non-Hermitian - system",http://dx.doi.org/10.1103/PhysRevA.94.053421 -"Black hole central engine for ultra-long gamma-ray burst 111209A and its - associated supernova 2011kl",http://dx.doi.org/10.3847/0004-637X/826/2/141 -Thermoelectric study of dissipative quantum dot heat engines,http://dx.doi.org/10.1103/PhysRevB.94.165416 -Luminescence engineering in plasmonic meta-surfaces,http://arxiv.org/abs/1606.03491v1 -One qubit and one photon -- the simplest polaritonic heat engine,http://dx.doi.org/10.1103/PhysRevA.94.063852 -Designing a High Performance Parallel Personal Cluster,http://arxiv.org/abs/1607.04077v1 -"NPCs as People, Too: The Extreme AI Personality 
Engine",http://arxiv.org/abs/1609.04879v1 -Interface Engineering in La0.67Sr0.33MnO3-SrTiO3 Heterostructures,http://arxiv.org/abs/1609.04976v1 -"Using CMA-ES for tuning coupled PID controllers within models of - combustion engines",http://arxiv.org/abs/1609.06741v4 -"TMI! How Knowledge Platforms Tame the Information Overload and Advance - Global Development Through Technology",http://arxiv.org/abs/1609.08753v1 -A Differentiable Physics Engine for Deep Learning in Robotics,http://arxiv.org/abs/1611.01652v2 -"Towards Guidelines for Preventing Critical Requirements Engineering - Problems",http://dx.doi.org/10.1109/SEAA.2016.50 -"A new fractional operator of variable order: application in the - description of anomalous diffusion",http://dx.doi.org/10.1016/j.physa.2017.04.054 -"When Students Choose to Use Event-B in their Software Engineering - Projects",http://arxiv.org/abs/1611.10160v1 -"Applying empirical software engineering to software architecture: - challenges and lessons learned",http://dx.doi.org/10.1007/s10664-009-9121-0 -Second law of thermodynamics with quantum memory,http://dx.doi.org/10.1103/PhysRevA.96.042304 -X-Ray Radiation Generation in Heat Engines Using Electron Cooling,http://arxiv.org/abs/1701.07637v1 -"Interfacing Automatic Proof Agents in Atelier B: Introducing ""iapa""",http://dx.doi.org/10.4204/EPTCS.240.6 -Enhancing Android Application Bug Reporting,http://dx.doi.org/10.1145/2786805.2807557 -"The upside of noise: engineered dissipation as a resource in - superconducting circuits",http://dx.doi.org/10.1088/2058-9565/aa7e5d -Building a Structured Query Engine,http://arxiv.org/abs/1710.00454v1 -"Proceedings 2nd International Workshop on Causal Reasoning for Embedded - and safety-critical Systems Technologies",http://dx.doi.org/10.4204/EPTCS.259 -"Effect of Lubricant Contaminants on Tribological Characteristics During - Boundary Lubrication Reciprocating Sliding",http://arxiv.org/abs/1710.04448v1 -"Reverse Engineering Camouflaged Sequential Integrated Circuits Without - Scan Access",http://arxiv.org/abs/1710.10474v1 -"Reverse and Forward Engineering of Local Voltage Control in Distribution - Networks",http://dx.doi.org/10.1109/TAC.2020.2994184 -"Setting the Foundation for Scientific Inquiry and Computational Thinking - in Early Childhood using Lego Machines and Mechanism Education Kit",http://arxiv.org/abs/1801.06042v1 -"Establishing phase diagram for the band engineering in p-type PbTe/SnTe - from elementary electronic structure understanding",http://arxiv.org/abs/1801.08662v1 -"Extreme Asymmetry in Metasurfaces via Evanescent Fields Engineering: - Angular-Asymmetric Absorption",http://dx.doi.org/10.1103/PhysRevLett.121.256802 -"TaintAssembly: Taint-Based Information Flow Control Tracking for - WebAssembly",http://arxiv.org/abs/1802.01050v1 -Psychological Safety and Norm Clarity in Software Engineering Teams,http://arxiv.org/abs/1802.01378v1 -Ways of Applying Artificial Intelligence in Software Engineering,http://arxiv.org/abs/1802.02033v2 -Replication studies considered harmful,http://dx.doi.org/10.1145/3183399.3183423 -An autonomous single-piston engine with a quantum rotor,http://dx.doi.org/10.1088/2058-9565/aac40d -"Formal Analysis of Galois Field Arithmetics - Parallel Verification and - Reverse Engineering",http://arxiv.org/abs/1802.06870v1 -"Practical Pulse Engineering: Gradient Ascent Without Matrix - Exponentiation",http://arxiv.org/abs/1802.07147v2 -Is the macronova in GW170817 powered by the central engine?,http://dx.doi.org/10.3847/1538-4357/aac4a8 
-Experimental characterization of a spin quantum heat engine,http://dx.doi.org/10.1103/PhysRevLett.123.240601 -"Quantum engineering of transistors based on 2D materials - heterostructures",http://dx.doi.org/10.1038/s41565-018-0082-6 -"Central Engine-Powered Bright X-ray Flares in Short Gamma-Ray Bursts: A - Hint of Black Hole-Neutron Star Merger?",http://dx.doi.org/10.3847/1538-4357/aaba14 -"Efficiency at Maximum Power of Laser Quantum Heat Engine Enhanced by - Noise-Induced Coherence",http://dx.doi.org/10.1103/PhysRevE.97.042120 -"Transient temperature calculation method for complex fluid-solid heat - transfer problems with scattering boundary conditions",http://arxiv.org/abs/1806.01790v1 -Logic Programming as a Service,http://dx.doi.org/10.1017/S1471068418000364 -"Detecting Speech Act Types in Developer Question/Answer Conversations - During Bug Repair",http://arxiv.org/abs/1806.05130v3 -"Holographic Heat Engines, Entanglement Entropy, and Renormalization - Group Flow",http://dx.doi.org/10.1088/1361-6382/aaf1f1 -Disk formation in the collapse of supramassive neutron stars,http://dx.doi.org/10.1093/mnras/sty2181 -"Determining water mass flow control strategies for a turbocharged SI - engine using a two-stage calculation method",http://arxiv.org/abs/1806.08711v1 -Semi-automatically optimized calibration of internal combustion engines,http://arxiv.org/abs/1806.10980v2 -Hypertree Decompositions Revisited for PGMs,http://arxiv.org/abs/1807.00886v1 -RACK: Code Search in the IDE using Crowdsourced Knowledge,http://arxiv.org/abs/1807.04479v1 -Thinging for Software Engineers,http://arxiv.org/abs/1807.10673v1 -"Development of theory and methods of use of information and - communication technologies in teaching mathematics of engineering - specialities students in the United States",http://arxiv.org/abs/1808.05125v1 -"Massive gravity with Lorentz symmetry breaking: black holes as heat - engines",http://dx.doi.org/10.1142/S0217732318501778 -An Academic's Observations from a Sabbatical at Google,http://dx.doi.org/10.1145/3177748 -"Efficient Volumetric Absorption Solar Thermal Platforms Employing - Thermally Stable - Solar Selective Nanofluids Engineered from Used Engine Oil",http://arxiv.org/abs/1811.00302v1 -"Efficiency of a quantum Otto heat engine operating under a reservoir at - effective negative temperatures",http://dx.doi.org/10.1103/PhysRevLett.122.240602 -"VIEW: A Virtual Interactive Web-based Learning Environment for - Engineering",http://arxiv.org/abs/1811.07463v1 -Floquet-engineered quantum state manipulation in a noisy qubit,http://dx.doi.org/10.1103/PhysRevA.100.012341 -Machine learning-guided directed evolution for protein engineering,http://arxiv.org/abs/1811.10775v2 -Perceiving Physical Equation by Observing Visual Scenarios,http://arxiv.org/abs/1811.12238v1 -A Core Ontology for Privacy Requirements Engineering,http://arxiv.org/abs/1811.12621v1 -Identifying collaborators in large codebases,http://arxiv.org/abs/1905.06782v1 -"The African Wildlife Ontology tutorial ontologies: requirements, design, - and content",http://arxiv.org/abs/1905.09519v1 -"Collective heat capacity for quantum thermometry and quantum engine - enhancements",http://dx.doi.org/10.1088/1367-2630/aba463 -"The multi-objective optimisation of breakwaters using evolutionary - approach",http://arxiv.org/abs/2004.03010v2 -Compositional Formal Analysis Based on Conventional Engineering Models,http://arxiv.org/abs/2004.03666v1 -Defect Engineering of Two-dimensional Molybdenum 
Disulfide,http://dx.doi.org/10.1002/chem.202000286 -"Extended Abstract of Performance Analysis and Prediction of Model - Transformation",http://dx.doi.org/10.1145/3358960.3383769 -"Data Engineering for Data Analytics: A Classification of the Issues, and - Case Studies",http://arxiv.org/abs/2004.12929v1 -"Development and Application of Sentiment Analysis Tools in Software - Engineering: A Systematic Literature Review",http://dx.doi.org/10.1145/3463274.3463328 -Assessing Semantic Frames to Support Program Comprehension Activities,http://arxiv.org/abs/2105.05981v1 -Feature Interactions on Steroids: On the Composition of ML Models,http://arxiv.org/abs/2105.06449v1 -Work Systems Modeling Library,http://arxiv.org/abs/2105.07419v1 -Da Vinci -- Architecture-Driven Business Solutions,http://arxiv.org/abs/2105.09255v1 -Non-Floquet engineering in periodically driven non-Hermitian systems,http://dx.doi.org/10.1088/1361-648X/ac7c4e -Communication channel prioritization in a publish-subscribe architecture,http://dx.doi.org/10.1109/SEARIS.2015.7854100 -"Potential Errors and Test Assessment in Software Product Line - Engineering",http://dx.doi.org/10.4204/EPTCS.180.4 -Work measurement in an optomechanical quantum heat engine,http://dx.doi.org/10.1103/PhysRevA.92.033854 -Strain engineering in semiconducting two-dimensional crystals,http://dx.doi.org/10.1088/0953-8984/27/31/313201 -An Exact Efficiency Formula for Holographic Heat Engines,http://dx.doi.org/10.3390/e18040120 -"Optimal performance of periodically driven, stochastic heat engines - under limited control",http://dx.doi.org/10.1103/PhysRevE.93.042112 -"Testing Quality Requirements of a System-of-Systems in the Public Sector - - Challenges and Potential Remedies",http://arxiv.org/abs/1602.05618v1 -Engineering matter interactions using squeezed vacuum,http://dx.doi.org/10.1103/PhysRevX.7.021041 -"Paperstack - A Novel Lean-Interactive System for Documentation Sharing - in Maritime Industries",http://dx.doi.org/10.1109/CTS.2013.6567285 -"Modelling politics in requirements engineering: adding emoji to existing - notations",http://arxiv.org/abs/1703.06101v1 -Dynamical engineering of interactions in qudit ensembles,http://dx.doi.org/10.1103/PhysRevLett.119.183603 -Worse Than Spam: Issues In Sampling Software Developers,http://dx.doi.org/10.1145/2961111.2962628 -Role of quantum correlations in light-matter quantum heat engines,http://dx.doi.org/10.1103/PhysRevA.96.052119 -"What Looks Good with my Sofa: Multimodal Search Engine for Interior - Design",http://dx.doi.org/10.15439/2017F56 -An efficient non-linear Feshbach engine,http://dx.doi.org/10.1088/1367-2630/aa9cd8 -"Pole Placement Approach to Coherent Passive Reservoir Engineering for - Storing Quantum Information",http://dx.doi.org/10.1007/s11768-017-7020-2 -"Model-driven Engineering IDE for Quality Assessment of Data-intensive - Applications",http://dx.doi.org/10.1145/3053600.3053633 -"Engineering topological phases with a three-dimensional nodal-loop - semimetal",http://dx.doi.org/10.1103/PhysRevB.96.235424 -Feature Engineering for Predictive Modeling using Reinforcement Learning,http://arxiv.org/abs/1709.07150v1 -"Implementation-independent sufficient condition of the Knill-Laflamme - type for the autonomous protection of logical qudits by strong engineered - dissipation",http://dx.doi.org/10.1103/PhysRevA.98.012317 -Near-field three-terminal thermoelectric heat engine,http://dx.doi.org/10.1103/PhysRevB.97.125422 -"Phase-space interference in extensive and non-extensive quantum heat - 
engines",http://dx.doi.org/10.1103/PhysRevE.97.042127 -Spec-QP: Speculative Query Planning for Joins over Knowledge Graphs,http://arxiv.org/abs/1711.07581v1 -Soft Sides of Software,"http://dx.doi.org/10.1016/j.infsof.2017.07.011," -Would You Like to Motivate Software Testers? Ask Them How,http://dx.doi.org/10.1109/ESEM.2017.16 -"Wiki-MetaSemantik: A Wikipedia-derived Query Expansion Approach based on - Network Properties",http://arxiv.org/abs/1711.08730v1 -"Heat engine efficiency and Joule-Thomson expansion of non-linear charged - AdS black hole in massive gravity",http://arxiv.org/abs/1906.05557v1 -Unruh Quantum Otto heat engine with level degeneracy,http://dx.doi.org/10.1016/j.physletb.2020.135201 -"Thermal Stability, P-V Criticality and Heat Engine of Charged Rotating - Accelerating Black Holes",http://dx.doi.org/10.1007/s10714-022-02904-9 -"Computer-Supported Collaborative Learning in Software Engineering - Education: A Systematic Mapping Study",http://arxiv.org/abs/1906.10710v1 -Toward Maximum Grip Process Modeling in Software Engineering,http://arxiv.org/abs/1906.11157v1 -Tag Clouds for Object-Oriented Source Code Visualization,http://arxiv.org/abs/1906.11914v1 -"The Separator, a Two-Phase Oil and Water Gravity CPS Separator Testbed",http://arxiv.org/abs/2002.00945v1 -The Four Pillars of Research Software Engineering,http://dx.doi.org/10.1109/MS.2020.2973362 -Importance-Driven Deep Learning System Testing,http://arxiv.org/abs/2002.03433v1 -"Lifting Interpretability-Performance Trade-off via Automated Feature - Engineering",http://arxiv.org/abs/2002.04267v1 -Caveats in Eliciting Mobile App Requirements,http://arxiv.org/abs/2002.08458v1 -MNN: A Universal and Efficient Inference Engine,http://arxiv.org/abs/2002.12418v1 -"Rebooting Neuromorphic Hardware Design -- A Complexity Engineering - Approach",http://arxiv.org/abs/2005.00522v2 -Titanium and Iron in the Cassiopeia A Supernova Remnant,http://dx.doi.org/10.3847/1538-4357/ab8ade -A Human Dimension of Hacking: Social Engineering through Social Media,http://dx.doi.org/10.1088/1757-899X/790/1/012040 -"A Scientific Information Extraction Dataset for Nature Inspired - Engineering",http://arxiv.org/abs/2005.07753v2 -"Lessons learned in a decade of research software engineering GPU - applications",http://arxiv.org/abs/2005.13227v1 -"Vyasa: A High-Performance Vectorizing Compiler for Tensor Convolutions - on the Xilinx AI Engine",http://arxiv.org/abs/2006.01331v1 -How (Un)Happiness Impacts on Software Engineers in Agile Teams?,http://dx.doi.org/10.5121/ijsea.2020.11303 -"Summarising Big Data: Common GitHub Dataset for Software Engineering - Challenges",http://dx.doi.org/10.17776/csj.728932 -"Physical Education and English Language Arts Based K-12 Engineering - Outreach in Software Defined Networking (Extended Version)",http://arxiv.org/abs/2006.05545v1 -"IReEn: Reverse-Engineering of Black-Box Functions via Iterative Neural - Program Synthesis",http://arxiv.org/abs/2006.10720v2 -"Engineering light absorption at critical coupling via bound states in - the continuum",http://dx.doi.org/10.1364/JOSAB.419191 -Symbolic Execution and Debugging Synchronization,http://arxiv.org/abs/2006.16601v1 -Agent Based Approaches to Engineering Autonomous Space Software,http://dx.doi.org/10.4204/EPTCS.20.6 -Maximum-power quantum-mechanical Carnot engine,http://dx.doi.org/10.1103/PhysRevE.83.041117 -"Standardization of information systems development processes and banking - industry adaptations",http://dx.doi.org/10.5121/ijsea.2011.2201 -Two-dimensional Inside-out 
Eaton Lens: Wave Properties and Design,http://arxiv.org/abs/1105.5340v1 -"Compact Remnant Mass Function: Dependence on the Explosion Mechanism and - Metallicity",http://dx.doi.org/10.1088/0004-637X/749/1/91 -"Non-adiabatic Fast Control of Mixed States based on Lewis-Riesenfeld - Invariant",http://dx.doi.org/10.1143/JPSJ.81.024007 -"Spin chains for robust state transfer: Modified boundary couplings vs. - completely engineered chains",http://dx.doi.org/10.1103/PhysRevA.85.012318 -"Branching actin network remodeling governs the force-velocity - relationship",http://arxiv.org/abs/1111.6611v1 -Beyond The Desktop Spreadsheet,http://arxiv.org/abs/1111.6870v1 -Efficiency at maximum power of thermally coupled heat engines,http://dx.doi.org/10.1103/PhysRevE.85.041144 -Engineered Open Systems and Quantum Simulations with Atoms and Ions,http://arxiv.org/abs/1203.6595v1 -"Note on Combinatorial Engineering Frameworks for Hierarchical Modular - Systems",http://arxiv.org/abs/1304.0030v1 -A Literature Survey on Empirical Evidence in Software Engineering,http://arxiv.org/abs/1304.1002v1 -Knowledge Engineering Within A Generalized Bayesian Framework,http://arxiv.org/abs/1304.3076v1 -"Correlations of heat and charge currents in quantum-dot thermoelectric - engines",http://dx.doi.org/10.1088/1367-2630/15/12/125001 -"Boosting work characteristics and overall heat engine performance via - shortcuts to adiabaticity: quantum and classical systems",http://dx.doi.org/10.1103/PhysRevE.88.062122 -Hybrid Microwave-Cavity Heat Engine,http://dx.doi.org/10.1103/PhysRevLett.112.076803 -Seeking the Principles of Sustainable Software Engineering,http://arxiv.org/abs/1405.4464v3 -"A Hitchhiker's Guide to Search-Based Software Engineering for Software - Product Lines",http://arxiv.org/abs/1406.2823v1 -Energy Sources and Light Curves of Macronovae,http://dx.doi.org/10.1088/0004-637X/802/2/119 -"Uncovering the Perfect Place: Optimising Workflow Engine Deployment in - the Cloud",http://arxiv.org/abs/1410.5976v1 -"A short review on techniques for processes and process simulation of - scaffold-free tissue engineering",http://arxiv.org/abs/1511.00320v1 -Requirements Engineering for General Recommender Systems,http://arxiv.org/abs/1511.05262v4 -Using Search Engine Technology to Improve Library Catalogs,http://arxiv.org/abs/1511.05808v1 -Gauss-Bonnet Black Holes and Holographic Heat Engines Beyond Large N,http://dx.doi.org/10.1088/0264-9381/33/21/215009 -"Vibration-induced coherence enhancement of the performance of a - biological quantum heat engine",http://dx.doi.org/10.1103/PhysRevE.94.052101 -"In Quest for Proper Mediums for Technology Transfer in Software - Engineering",http://dx.doi.org/10.1109/ESEM.2015.7321203 -Quantum Performance of Thermal Machines over Many Cycles,http://dx.doi.org/10.1103/PhysRevLett.118.050601 -Single Particle Brownian Heat Engine With Microadiabaticity,http://arxiv.org/abs/1612.09130v1 -Scalability Analysis of the RADAR Decision Support Tool,http://arxiv.org/abs/1702.02977v1 -Flipping a Graduate-Level Software Engineering Foundations Course,http://arxiv.org/abs/1702.07069v1 -"Floquet engineering of long-range p-wave superconductivity: Beyond the - high-frequency limit",http://dx.doi.org/10.1103/PhysRevB.96.155438 -"The use of controlled vocabularies in requirements engineering - activities: a protocol for a systematic literature review",http://arxiv.org/abs/1704.00822v1 -"Holographic heat engines: general considerations and rotating black - holes",http://dx.doi.org/10.1088/1361-6382/aa7f0f -"Numerical 
methods to prevent pressure oscillations in transcritical - flows",http://arxiv.org/abs/1704.02637v2 -Social Network Analysis of yahoo web-search engine query logs,http://arxiv.org/abs/1705.01410v1 -Work and power fluctuations in a critical heat engine,http://dx.doi.org/10.1103/PhysRevE.96.030102 -Exact Requirements Engineering for Developing Business Process Models,http://arxiv.org/abs/1705.03883v1 -"How do Practitioners Perceive the Relevance of Requirements Engineering - Research? An Ongoing Study",http://arxiv.org/abs/1705.06013v3 -Second Law based definition of passivity/activity of devices,http://dx.doi.org/10.1016/j.physleta.2017.08.039 -"Michaelis-Menten at 100 and allosterism at 50: driving molecular motors - in a hailstorm with noisy ATPase engines and allosteric transmission",http://dx.doi.org/10.1111/febs.12596 -"Auditing Search Engines for Differential Satisfaction Across - Demographics",http://dx.doi.org/10.1145/3041021.3054197 -Hypertree Decompositions Revisited for PGMs,http://arxiv.org/abs/1804.01640v1 -Teaching Requirements Engineering Concepts using Case-Based Learning,http://arxiv.org/abs/1804.01770v1 -Cycling tames power fluctuations near optimum efficiency,http://dx.doi.org/10.1103/PhysRevLett.121.120601 -"Characterizing the Usage, Evolution and Impact of Java Annotations in - Practice",http://dx.doi.org/10.1109/TSE.2019.2910516 -"Requirements and Assessment of Languages and Frameworks for Adaptation - Models",http://dx.doi.org/10.1007/978-3-642-29645-1_18 -Glimpses of Space-Time Beyond the Singularities Using Supercomputers,http://dx.doi.org/10.1109/MCSE.2018.042781324 -"State of the Art Optical Character Recognition of 19th Century Fraktur - Scripts using Open Source Engines",http://arxiv.org/abs/1810.03436v1 -INFODENS: An Open-source Framework for Learning Text Representations,http://arxiv.org/abs/1810.07091v1 -Power and Efficiency of a Thermal Engine with a Coherent Bath,http://dx.doi.org/10.1103/PhysRevE.100.032129 -"Raman Spectroscopy of Diesel and Gasoline Engine-Out Soot Using - Different Laser Power",http://arxiv.org/abs/1810.10701v1 -"Mining Treatment-Outcome Constructs from Sequential Software Engineering - Data",http://arxiv.org/abs/1901.05604v1 -Implications of non-Markovian dynamics on information-driven engine,http://arxiv.org/abs/1902.06153v3 -Photonic Engineering for CV-QKD over Earth-Satellite Channels,http://dx.doi.org/10.1109/ICC.2019.8762003 -"Engineering Transport in Manganites by Tuning Local Non-Stoichiometry in - Grain Boundaries",http://dx.doi.org/10.1002/adma.201805360 -"Research Software Development & Management in Universities: Case Studies - from Manchester's RSDS Group, Illinois' NCSA, and Notre Dame's CRC",http://dx.doi.org/10.1109/SE4Science.2019.00009 -"A Serious Game for Introducing Software Engineering Ethics to University - Students",http://arxiv.org/abs/1903.01333v1 -"What Makes Research Software Sustainable? 
An Interview Study With - Research Software Engineers",http://arxiv.org/abs/1903.06039v1 -"A Methodology for Using GitLab for Software Engineering Learning - Analytics",http://arxiv.org/abs/1903.06772v1 -"What software engineering can learn from research on affect in social - psychology",http://arxiv.org/abs/1903.07381v1 -"Proposal of a realistic stochastic rotor engine based on electron - shuttling",http://dx.doi.org/10.1103/PhysRevApplied.12.024001 -Existential Ontology and Thinging Modeling in Software Engineering,http://arxiv.org/abs/1903.10822v1 -Estimation and Verification of Partially-Observed Discrete-Event Systems,http://arxiv.org/abs/1903.11413v1 -"Optimization performance of quantum Otto heat engines and refrigerators - with squeezed thermal reservoirs",http://arxiv.org/abs/1903.11931v2 -Engineering Effective Hamiltonians,http://dx.doi.org/10.1088/1367-2630/ab4525 -Controlling Quantum Transport via Dissipation Engineering,http://dx.doi.org/10.1103/PhysRevLett.123.180402 -Deep reinforcement learning for robust quantum optimization,http://dx.doi.org/10.1103/PhysRevA.100.042314 -Understanding the Engines and Progenitors of Gamma-Ray Bursts,http://dx.doi.org/10.1140/epja/i2019-12818-y -Engineering Token Economy with System Modeling,http://arxiv.org/abs/1907.00899v1 -Quantum heat engine with a quadratically coupled optomechanical system,http://dx.doi.org/10.1364/JOSAB.36.003000 -Optimal cycles for low-dissipation heat engines,http://dx.doi.org/10.1103/PhysRevLett.124.110606 -Thermodynamic Geometry of Microscopic Heat Engines,http://dx.doi.org/10.1103/PhysRevLett.124.040602 -"Semantic interoperability and characterization of data provenance in - computational molecular engineering",http://dx.doi.org/10.1021/acs.jced.9b00739 -RCE: An Integration Environment for Engineering and Science,http://arxiv.org/abs/1908.03461v2 -"A Simple Recommender Engine for Matching Final-Year Project Student with - Supervisor",http://arxiv.org/abs/1908.03475v1 -"Requirements Engineering for Machine Learning: Perspectives from Data - Scientists",http://arxiv.org/abs/1908.04674v1 -Challenges in Survey Research,http://dx.doi.org/10.1007/978-3-030-32489-6_4 -"Semantic Source Code Search: A Study of the Past and a Glimpse at the - Future",http://arxiv.org/abs/1908.06738v2 -"Analyzing the Context of Bug-Fixing Changes in the OpenStack Cloud - Computing Platform",http://arxiv.org/abs/1908.11297v1 -"Context-aware Deep Model for Entity Recommendation in Search Engine at - Alibaba",http://arxiv.org/abs/1909.04493v1 -Query Obfuscation Semantic Decomposition,http://arxiv.org/abs/1909.05819v2 -Engineering Self-adaptive Authorisation Infrastructures,http://dx.doi.org/10.1007/978-981-13-2185-6_3 -Engineering generalized Gibbs ensembles with trapped ions,http://dx.doi.org/10.1103/PhysRevResearch.3.033142 -"Enhanced optomechanical entanglement and cooling via dissipation - engineering",http://dx.doi.org/10.1103/PhysRevA.101.063836 -"Studying Software Engineering Patterns for Designing Machine Learning - Systems",http://arxiv.org/abs/1910.04736v2 -"Towards systems tissue engineering: elucidating the dynamics, spatial - coordination, and individual cells driving emergent behaviors",http://arxiv.org/abs/1910.06884v1 -"A Novel Approach for Optimal Trajectory Design with Multiple Operation - Modes of Propulsion System, Part 1",http://arxiv.org/abs/1910.09109v1 -"Optical Activity from the Exciton Aharonov-Bohm Effect: A Floquet - Engineering Approach",http://dx.doi.org/10.1021/acs.jpcc.9b10030 -"Autonomics: In Search of a 
Foundation for Next Generation Autonomous - Systems",http://dx.doi.org/10.1073/pnas.2003162117 -Layer engineered interlayer excitons,http://dx.doi.org/10.1126/sciadv.abh0863 -Non-commutative space engine: a boost to thermodynamic processes,http://dx.doi.org/10.1142/S0217732321501741 -QualiBD Tool: Implementation Details,http://arxiv.org/abs/1912.03866v1 -Many-body quantum heat engines with shortcuts to adiabaticity,http://dx.doi.org/10.1103/PhysRevResearch.2.023145 -"Coulomb-Engineered Heterojunctions and Dynamical Screening in Transition - Metal Dichalcogenide Monolayers",http://dx.doi.org/10.1103/PhysRevB.102.115111 -A single layer artificial neural network with engineered bacteria,http://arxiv.org/abs/2001.00792v1 -Coulomb Engineering of two-dimensional Mott materials,http://dx.doi.org/10.1038/s41699-023-00408-x -"Reliable and interoperable computational molecular engineering: 2. - Semantic interoperability based on the European Materials and Modelling - Ontology",http://arxiv.org/abs/2001.04175v1 -A Formal Development Cycle for Security Engineering in Isabelle,http://arxiv.org/abs/2001.08983v1 -Documentation of Machine Learning Software,http://arxiv.org/abs/2001.11956v1 -"SAFE: Scalable Automatic Feature Engineering Framework for Industrial - Tasks",http://arxiv.org/abs/2003.02556v3 -"General description for nonequilibrium steady states in periodically - driven dissipative quantum systems",http://dx.doi.org/10.1126/sciadv.abb4019 -"JS-son -- A Lean, Extensible JavaScript Agent Programming Library",http://arxiv.org/abs/2003.04690v1 -"Vec2Face: Unveil Human Faces from their Blackbox Features in Face - Recognition",http://arxiv.org/abs/2003.06958v1 -The Floquet Engineer's Handbook,http://arxiv.org/abs/2003.08252v2 -"Control Reconfiguration of Dynamical Systems for Improved Performance - via Reverse- and Forward-engineering",http://arxiv.org/abs/2003.09279v3 -Brownian heat engine with active reservoirs,http://dx.doi.org/10.1103/PhysRevE.102.032116 -"Secure Non-Orthogonal Multiple Access: An Interference Engineering - Perspective",http://arxiv.org/abs/2003.13488v2 -"Nonequilibrium many-body quantum engine driven by time-translation - symmetry breaking",http://dx.doi.org/10.1103/PhysRevLett.125.240602 -"Rabi Spectroscopy and Sensitivity of a Floquet Engineered Optical - Lattice Clock",http://dx.doi.org/10.1088/0256-307X/38/7/073201 -"Understanding coordination in global software engineering: A - mixed-methods study on the use of meetings and Slack",http://dx.doi.org/10.1016/j.jss.2020.110717 -Reducing Misinformation in Query Autocompletions,http://arxiv.org/abs/2007.02620v2 -Quantum speed limit for robust state characterization and engineering,http://dx.doi.org/10.1103/PhysRevA.102.042606 -More than Code: Contributions in Scrum Software Engineering Teams,http://dx.doi.org/10.1145/3387940.3392241 -Failures and Fixes: A Study of Software System Incident Response,http://arxiv.org/abs/2008.11192v1 -"Efficiency gain and bidirectional operation of quantum engines with - decoupled internal levels",http://dx.doi.org/10.1103/PhysRevE.104.044133 -"A Survey of Requirement Engineering Process in Android Application - Development",http://arxiv.org/abs/2008.13113v1 -Reverse-engineering Bar Charts Using Neural Networks,http://arxiv.org/abs/2009.02491v1 -"FLFE: A Communication-Efficient and Privacy-Preserving Federated Feature - Engineering Framework",http://arxiv.org/abs/2009.02557v1 -"Irradiation of Nanostrained Monolayer WSe$_2$ for Site-Controlled - Single-Photon Emission up to 150 
K",http://dx.doi.org/10.1038/s41467-021-23709-5 -Photon-Number-Dependent Hamiltonian Engineering for Cavities,http://dx.doi.org/10.1103/PhysRevApplied.15.044026 -"Measuring affective states from technical debt: A psychoempirical - software engineering experiment",http://arxiv.org/abs/2009.10660v3 -"Optimal energy conversion through anti-adiabatic driving breaking - time-reversal symmetry",http://dx.doi.org/10.1103/PhysRevResearch.3.013237 -Uncovering the fragility of large-scale engineering project networks,http://dx.doi.org/10.1140/epjds/s13688-021-00291-w -"Hydrogel-based Bio-nanomachine Transmitters for Bacterial Molecular - Communications",http://dx.doi.org/10.1145/3416006.3431271 -"Edge mode engineering for optimal ultracoherent silicon nitride - membranes",http://dx.doi.org/10.1063/5.0031626 -"A Generic Approach to Detect Design Patterns in Model Transformations - Using a String-Matching Algorithm",http://arxiv.org/abs/2010.04759v1 -Underdamped Active Brownian Heat Engine,http://dx.doi.org/10.1103/PhysRevE.102.060101 -"Quantum Cycle in Relativistic Non-Commutative Space with Generalized - Uncertainty Principle correction",http://dx.doi.org/10.1016/j.physa.2021.126365 -DIFER: Differentiable Automated Feature Engineering,http://arxiv.org/abs/2010.08784v3 -The Case for Hop-by-Hop Traffic Engineering,http://arxiv.org/abs/2010.13198v1 -CoroBase: Coroutine-Oriented Main-Memory Database Engine,http://dx.doi.org/10.14778/3430915.3430932 -Maximizing power and velocity of an information engine,http://dx.doi.org/10.1073/pnas.2023356118 -"An engineer's brief introduction to microwave quantum optics and a - single-port state-space representation",http://arxiv.org/abs/2011.06734v2 -Bayesian Mass Averaging in Rigs and Engines,http://arxiv.org/abs/2011.09240v2 -Engineering symmetry breaking in two-dimensional layered materials,http://dx.doi.org/10.1038/s42254-020-00276-0 -"Uniaxial Néel Vector Control in Perovskite Oxide Thin Films by - Anisotropic Strain Engineering",http://dx.doi.org/10.1103/PhysRevB.103.224435 -"Resolving code smells in software product line using refactoring and - reverse engineering",http://arxiv.org/abs/2011.14283v1 -"Low Cost, Educational Internal Combustion Engine Electronic Control Unit - Hardware-in-the-Loop Test Systems",http://arxiv.org/abs/2012.00928v1 -The CROSS Incubator: A Case Study for funding and training RSEs,http://arxiv.org/abs/2012.01144v1 -Time-Aware Models for Software Effort Estimation,http://dx.doi.org/10.18293/SEKE2020-083 -"Linear Regression Evaluation of Search Engine Automatic Search - Performance Based on Hadoop and R",http://arxiv.org/abs/2012.02629v1 -"Hiperfact: In-Memory High Performance Fact Processing -- Rethinking the - Rete Inference Algorithm",http://arxiv.org/abs/2012.02710v1 -"Self-consistency of optimizing finite-time Carnot engines with the - low-dissipation model",http://dx.doi.org/10.1088/1572-9494/ac2cb8 -ROPfuscator: Robust Obfuscation with ROP,http://arxiv.org/abs/2012.09163v2 -"Fake News Data Collection and Classification: Iterative Query Selection - for Opaque Search Engines with Pseudo Relevance Feedback",http://arxiv.org/abs/2012.12498v2 -"Experiential Learning Approach for Software Engineering Courses at - Higher Education Level",http://arxiv.org/abs/2012.14178v2 -"Tissue Engineering for Periodontal Ligament Regeneration: Biomechanical - Specifications",http://dx.doi.org/10.1115/1.4048810 -Design Knowledge Representation with Technology Semantic Network,http://dx.doi.org/10.1017/pds.2021.104 -Exploring the Role of Creativity in 
Software Engineering,http://arxiv.org/abs/2101.00837v1 -"Awareness of Secure Coding Guidelines in the Industry -- A first data - analysis",http://arxiv.org/abs/2101.02085v1 -"EdgeWorkflowReal: An Edge Computing based Workflow Execution Engine for - Smart Systems",http://arxiv.org/abs/2102.00234v1 -Quantum Computers: Engines for Next Industrial Revolution,http://arxiv.org/abs/2102.04459v1 -"Pulse-engineered Controlled-V gate and its applications on - superconducting quantum device",http://dx.doi.org/10.1109/TQE.2022.3170008 -Fuzzing Symbolic Expressions,http://dx.doi.org/10.1109/ICSE43902.2021.00071 -"Towards Utility-based Prioritization of Requirements in Open Source - Environments",http://arxiv.org/abs/2102.08638v1 -"Software Engineering for Internet of Things: The Practitioner's - Perspective",http://arxiv.org/abs/2102.10708v3 -Data Engineering for Everyone,http://arxiv.org/abs/2102.11447v1 -"A Brief Survey of Current Software Engineering Practices in Continuous - Integration and Automated Accessibility Testing",http://arxiv.org/abs/2103.00097v1 -"Roosterize: Suggesting Lemma Names for Coq Verification Projects Using - Deep Learning",http://arxiv.org/abs/2103.01346v2 -"Stop Building Castles on a Swamp! The Crisis of Reproducing Automatic - Search in Evidence-based Software Engineering",http://arxiv.org/abs/2103.01381v1 -"Sustaining Research Software via Research Software Engineers and - Professional Associations",http://dx.doi.org/10.1109/BoKSS52540.2021.00016 -Thin-film radiative thermal diode with large rectification,http://dx.doi.org/10.1103/PhysRevApplied.16.014069 -Towards Artefact-based Requirements Engineering for Data-Centric Systems,http://arxiv.org/abs/2103.05233v1 -"PGD-based advanced nonlinear multiparametric regressions for - constructing metamodels at the scarce-data limit",http://arxiv.org/abs/2103.05358v1 -"Re-Imagining Performance Reviews: Automated Dashboards for Continuous - Visibility of Engineers Performance",http://arxiv.org/abs/2103.06245v1 -"Challenges and Governance Solutions for Data Science Services based on - Open Data and APIs",http://arxiv.org/abs/2103.07290v1 -"MLOps Challenges in Multi-Organization Setup: Experiences from Two - Real-World Cases",http://arxiv.org/abs/2103.08937v1 -"Fundamental Theory of the Evolution Force: Gene Engineering Utilizing - Synthetic Evolution Artificial Intelligence",http://arxiv.org/abs/2103.09998v1 -Defining Utility Functions for Multi-Stakeholder Self-Adaptive Systems,http://dx.doi.org/10.1007/978-3-030-73128-1_8 -"A Review & Framework for Modeling Complex Engineered System Development - Processes",http://arxiv.org/abs/2103.12820v2 -Universal Bounds on Fluctuations in Continuous Thermal Machines,http://dx.doi.org/10.1103/PhysRevLett.127.190603 -"Towards Tool-Support for Interactive-Machine Learning Applications in - the Android Ecosystem",http://arxiv.org/abs/2103.14852v1 -"Modeling the fast optical transient SN 2019bkc/ATLAS19dqr with a central - engine and implication for its origin",http://dx.doi.org/10.1088/1674-4527/21/8/200 -Extractive Multi Product-Line Engineering,http://arxiv.org/abs/2104.05602v1 -"Two-qubit gates in a trapped-ion quantum computer by engineering - motional modes",http://arxiv.org/abs/2104.13870v1 -A Taxonomy of Data Quality Challenges in Empirical Software Engineering,http://dx.doi.org/10.1109/ASWEC.2013.21 -Finite-time performance of a single-ion quantum Otto engine,http://dx.doi.org/10.1103/PhysRevE.103.032144 -gazel: Supporting Source Code Edits in Eye-Tracking 
Studies,http://dx.doi.org/10.1109/ICSE-Companion52605.2021.00038 -An Introduction to the Transmon Qubit for Electromagnetic Engineers,http://arxiv.org/abs/2106.11352v1 -Publication Bias: A Detailed Analysis of Experiments Published in ESEM,http://dx.doi.org/10.1145/3383219.3383233 -Multilevel quantum thermodynamic swap engines,http://dx.doi.org/10.1103/PhysRevA.104.012217 -"Leveraging Team Dynamics to Predict Open-source Software Projects' - Susceptibility to Social Engineering Attacks",http://arxiv.org/abs/2106.16067v3 -"From England to Italy: the intriguing story of Poli's engine for the - King of Naples",http://dx.doi.org/10.1007/s00016-021-00277-1 -Rail Topology Ontology: A Rail Infrastructure Base Ontology,http://dx.doi.org/10.1007/978-3-030-88361-4_35 -Promises and Perils of Inferring Personality on GitHub,http://arxiv.org/abs/2107.05829v2 -"Thermodynamics of a continuous quantum heat engine: Interplay between - population and coherence",http://dx.doi.org/10.1103/PhysRevA.104.042203 -"Defect engineering of magnetic ground state in EuTiO$_3$ epitaxial thin - films",http://dx.doi.org/10.1111/jace.17870 -"Ranking labs-of-origin for genetically engineered DNA using Metric - Learning",http://arxiv.org/abs/2107.07878v1 -Quantum Otto engines at relativistic energies,http://dx.doi.org/10.1088/1367-2630/ac2756 -Trajectory control using an information engine,http://dx.doi.org/10.1117/12.2593992 -Verifying Time Complexity of Binary Search using Dafny,http://dx.doi.org/10.4204/EPTCS.338.9 -"Empirical Analysis on Effectiveness of NLP Methods for Predicting Code - Smell",http://arxiv.org/abs/2108.04656v1 -"HPTMT Parallel Operators for High Performance Data Science & Data - Engineering",http://arxiv.org/abs/2108.06001v1 -"A User-Study Protocol for Evaluation of Formal Verification Results and - their Explanation",http://arxiv.org/abs/2108.06376v1 -Entanglement Engineering by Transmon Qubit in a Circuit QED,http://arxiv.org/abs/2109.00316v1 -"DRAFT-What you always wanted to know but could not find about - block-based environments",http://arxiv.org/abs/2110.03073v1 -A Review of Physics-based Machine Learning in Civil Engineering,http://arxiv.org/abs/2110.04600v2 -"KernelHaven -- An Experimentation Workbench for Analyzing Software - Product Lines",http://dx.doi.org/10.1145/3183440.3183480 -"Collaboration Challenges in Building ML-Enabled Systems: Communication, - Documentation, Engineering, and Process",http://arxiv.org/abs/2110.10234v4 -"JavaBERT: Training a transformer-based model for the Java programming - language",http://arxiv.org/abs/2110.10404v1 -Driving the Herd: Search Engines as Content Influencers,http://dx.doi.org/10.1145/3459637.3482334 -Chaos Engineering of Ethereum Blockchain Clients,http://dx.doi.org/10.1145/3611649 -Single-Item Fashion Recommender: Towards Cross-Domain Recommendations,http://dx.doi.org/10.1109/ICEE55646.2022.9827421 -Coherence enhanced quantum-dot heat engine,http://arxiv.org/abs/2111.09582v1 -Formal verification of space systems designed with TASTE,http://arxiv.org/abs/2111.10132v1 -"Assessment of nacre-like ceramics in replacement to Ni superalloys in - aircraft's engines",http://dx.doi.org/10.1016/j.susmat.2021.e00363 -"Pulsed multireservoir engineering for a trapped ion with applications to - state synthesis and quantum Otto cycles",http://dx.doi.org/10.1088/1367-2630/ac5131 -"Agility in Software 2.0 -- Notebook Interfaces and MLOps with Buttresses - and Rebars",http://arxiv.org/abs/2111.14142v1 -"Report on A Formally-Founded Model-Based Approach to Engineer - 
Self-Adaptive Systems",http://arxiv.org/abs/2112.06198v1 -What can Data-Centric AI Learn from Data and ML Engineering?,http://arxiv.org/abs/2112.06439v1 -"Data-Driven Models for Control Engineering Applications Using the - Koopman Operator",http://dx.doi.org/10.1109/AIRC56195.2022.9836980 -Geometrical Quantum Chemical Engine,http://arxiv.org/abs/2112.12370v2 -Machine Learning Application Development: Practitioners' Insights,http://arxiv.org/abs/2112.15277v1 -"Geometric thermodynamic uncertainty relation in periodically driven - thermoelectric heat engine",http://dx.doi.org/10.1103/PhysRevB.105.115428 -"Harmonica: A Framework for Semi-automated Design and Implementation of - Blockchain Applications",http://dx.doi.org/10.1002/inst.12358 -"Controlling magnetic frustration in 1T-TaS$_2$ via Coulomb engineered - long-range interactions",http://dx.doi.org/10.1088/1361-648X/ac9812 -A Method of Sequential Log-Convex Programming for Engineering Design,http://arxiv.org/abs/2201.08436v1 -1-2-3 Reproducibility for Quantum Software Experiments,http://arxiv.org/abs/2201.12031v1 -Nonunitary Gate Operations by Dissipation Engineering,http://dx.doi.org/10.1088/2058-9565/ac98dd -Thematic Domain Analysis for Ocean Modeling,http://dx.doi.org/10.1016/j.envsoft.2022.105323 -Artificial Intelligence Powered Material Search Engine,http://dx.doi.org/10.1016/j.matpr.2022.01.120 -"Engineering Interlayer Hybridization in Energy Space via Dipolar - Overlayers",http://dx.doi.org/10.1088/0256-307X/40/8/087303 -"Investigating Explainability of Generative AI for Code through - Scenario-based Design",http://arxiv.org/abs/2202.04903v1 -"Natural Language in Requirements Engineering for Structure Inference -- - An Integrative Review",http://arxiv.org/abs/2202.05065v1 -Engineered Dissipation for Quantum Information Science,http://arxiv.org/abs/2202.05280v2 -Video Game Project Management Anti-patterns,http://arxiv.org/abs/2202.06183v2 -Eco-engineering controls vegetation trends in southwest China karst,http://dx.doi.org/10.1016/j.scitotenv.2021.145160 -Banyan: A Scoped Dataflow Engine for Graph Query Service,http://arxiv.org/abs/2202.12530v1 -Reproducibility and Performance: Why Choose?,http://arxiv.org/abs/2203.07953v1 -Towards a Roadmap on Software Engineering for Responsible AI,http://arxiv.org/abs/2203.08594v1 -Recruiting Software Engineers on Prolific,http://arxiv.org/abs/2203.14695v1 -Performance of the collective three-level quantum thermal engine,http://dx.doi.org/10.1103/PhysRevA.105.043708 -"Problem reports and team maturity in agile automotive software - development",http://dx.doi.org/10.1145/3528579.3529173 -"Integrating User Experience into Agile -- An Experience Report on Lean - UX and Scrum",http://dx.doi.org/10.1145/3510456.3514156 -Quality Assurance in the Context of Contemporary Software Practice,http://arxiv.org/abs/2205.00149v1 -Dynamical heat engines with non--Markovian reservoirs,http://dx.doi.org/10.1103/PhysRevResearch.4.033233 -"Engineered Josephson Parametric Amplifier in quantum two-modes squeezed - radar",http://arxiv.org/abs/2205.06344v1 -The Ising critical quantum Otto engine,http://dx.doi.org/10.1088/1367-2630/ac963b -"Assessing the Quality of Computational Notebooks for a Frictionless - Transition from Exploration to Production",http://dx.doi.org/10.1145/3510454.3517055 -"Finite-time quantum Otto engine with a squeezed thermal bath: Role of - quantum coherence and squeezing in the performance and fluctuations",http://arxiv.org/abs/2205.13290v1 -"Microscopic low-dissipation heat engine via 
shortcuts to adiabaticity - and shortcuts to isothermality",http://dx.doi.org/10.1103/PhysRevE.106.064117 -"About Digital Twins, agents, and multiagent systems: a - cross-fertilisation journey",http://dx.doi.org/10.1007/978-3-031-20179-0_8 -Towards a Roadmap for Trustworthy Dynamic Systems-of-Systems,http://arxiv.org/abs/2206.06008v1 -"Measurement and applications of position bias in a marketplace search - engine",http://arxiv.org/abs/2206.11720v2 -Rivendell: Project-Based Academic Search Engine,http://arxiv.org/abs/2206.12926v1 -Reflecting on Recurring Failures in IoT Development,http://dx.doi.org/10.1145/3551349.3559545 -ITLingo Research Initiative in 2022,http://arxiv.org/abs/2206.14553v1 -"""Communication Is a Scarce Resource!'': A Summary of CHASE'22 Conference - Discussions",http://arxiv.org/abs/2207.00054v1 -"Open-source software for electrical engineering applications requiring - consideration of electrodynamics: elecode",http://arxiv.org/abs/2207.06908v2 -"Pareto-optimal cycles for power, efficiency and fluctuations of quantum - heat engines using reinforcement learning",http://dx.doi.org/10.1103/PhysRevResearch.5.L022017 -"Editorial: Special Issue on Collaborative Aspects of Open Data in - Software Engineering",http://dx.doi.org/10.1109/MS.2021.3118123 -The Value and Use of Data in Chemical Engineering Practice,http://arxiv.org/abs/2208.03105v1 -When malloc() Never Returns NULL -- Reliability as an Illusion,http://dx.doi.org/10.1109/ISSREW55968.2022.00035 -"Industrial Requirements for Supporting AI-Enhanced Model-Driven - Engineering",http://arxiv.org/abs/2208.13421v1 -Obtaining efficient collisional engines via velocity dependent drivings,http://dx.doi.org/10.1103/PhysRevE.106.064125 -Mixed Reality for Mechanical Design and Assembly Planning,http://arxiv.org/abs/2209.01252v1 -"Making the black-box brighter: interpreting machine learning algorithm - for forecasting drilling accidents",http://dx.doi.org/10.1016/j.petrol.2022.111041 -Data Management Challenges for Internet-scale 3D Search Engines,http://arxiv.org/abs/2209.03913v2 -Pitfalls and Guidelines for Using Time-Based Git Data,http://arxiv.org/abs/2209.04511v1 -"Thermodynamics and Fluctuations in Quantum Heat Engines under Reservoir - Squeezing",http://arxiv.org/abs/2209.05885v2 -"Design Guidelines for Improving User Experience in Industrial - Domain-Specific Modelling Languages",http://arxiv.org/abs/2209.14060v1 -StacerBot: A Stacktrace Search Engine for Stack Overflow,http://arxiv.org/abs/2209.14422v1 -Consensus-Free Spreadsheet Integration,http://arxiv.org/abs/2209.14457v1 -Requirements Engineering for Machine Learning: A Review and Reflection,http://arxiv.org/abs/2210.00859v1 -Biphoton engineering using modal spatial overlap on-chip,http://dx.doi.org/10.1364/OL.471346 -Biomimicry in Nanotechnology: A Comprehensive Review,http://arxiv.org/abs/2210.16811v1 -"Universal optimization efficiency and bounds of Carnot-like heat engines - and refrigerators under shortcuts to isothermality",http://arxiv.org/abs/2211.01773v2 -"Carnot, Stirling, Ericsson stochastic heat engines: Efficiency at - maximum power",http://dx.doi.org/10.1103/PhysRevE.108.014123 -Novelty in news search: a longitudinal study of the 2020 US elections,http://arxiv.org/abs/2211.04746v1 -"Thermodynamic uncertainty relation in nondegenerate and degenerate maser - heat engines",http://arxiv.org/abs/2211.08377v2 -Engineering Monosemanticity in Toy Models,http://arxiv.org/abs/2211.09169v1 -"Maintainability and evolvability of control software in machine and -
plant manufacturing -- An industrial survey",http://dx.doi.org/10.1016/j.conengprac.2018.08.007 -"MICOSE4aPS: Industrially Applicable Maturity Metric to Improve - Systematic Reuse of Control Software",http://dx.doi.org/10.1145/3467896 -Character Simulation Using Imitation Learning With Game Engine Physics,http://arxiv.org/abs/2301.02123v1 -"Recommending Root-Cause and Mitigation Steps for Cloud Incidents using - Large Language Models",http://arxiv.org/abs/2301.03797v2 -Impact of Engine Nacelle Flow on Buffet,http://arxiv.org/abs/2301.05889v1 -"Software startup within a university -- producing industry-ready - graduates",http://arxiv.org/abs/2301.07020v1 -"G-Rank: Unsupervised Continuous Learn-to-Rank for Edge Devices in a P2P - Network",http://arxiv.org/abs/2301.12530v1 -Tool interoperability for model-based systems engineering,http://arxiv.org/abs/2302.03503v2 -Improving Performance of Quantum Heat Engines by Free Evolution,http://arxiv.org/abs/2302.07003v1 -"Quantum-enhanced performance in superconducting Andreev-reflection - engines",http://arxiv.org/abs/2302.09414v2 -"A Model-driven Approach for Continuous Performance Engineering in - Microservice-based Systems",http://dx.doi.org/10.1016/j.jss.2021.111084 -"Newton's Third Law in the Framework of Special Relativity for Charged - Bodies Part 3: Time Dependent Engines",http://arxiv.org/abs/2302.10229v2 -"Detection and Amelioration of Social Engineering Vulnerability in - Contingency Table Data using an Orthogonalised Log-linear Analysis",http://arxiv.org/abs/2302.13532v1 -"Learning coherences from nonequilibrium fluctuations in a quantum heat - engine",http://arxiv.org/abs/2302.13717v1 -SAINE: Scientific Annotation and Inference Engine of Scientific Research,http://arxiv.org/abs/2302.14468v2 -Category Theory for Autonomous Robots: The Marathon 2 Use Case,http://arxiv.org/abs/2303.01152v2 -"From Playground Swings to Sway Control of Cranes: An Active Pendulum - Experiment",http://dx.doi.org/10.1177/03064190231159330 -"Requirements Engineering Framework for Human-centered Artificial - Intelligence Software Systems",http://arxiv.org/abs/2303.02920v2 -A Qualitative and Quantitative Analysis of Container Engines,http://arxiv.org/abs/2303.04080v1 -Higher-Order Methods for Hamiltonian Engineering Pulse Sequence Design,http://arxiv.org/abs/2303.07374v1 -Dataflow graphs as complete causal graphs,http://arxiv.org/abs/2303.09552v1 -"A Meta-Summary of Challenges in Building Products with ML Components -- - Collecting Experiences from 4758+ Practitioners",http://arxiv.org/abs/2304.00078v1 -"Architectural Support for Software Performance in Continuous Software - Engineering: A Systematic Mapping Study",http://arxiv.org/abs/2304.02489v1 -"GraphBinMatch: Graph-based Similarity Learning for Cross-Language Binary - and Source Code Matching",http://arxiv.org/abs/2304.04658v1 -AI Safety Subproblems for Software Engineering Researchers,http://arxiv.org/abs/2304.14597v3 -"Addressing Age-Related Accessibility Needs of Senior Users Through - Model-Driven Engineering",http://dx.doi.org/10.1109/CHASE58964.2023.00021 -"The AR/VR Technology Stack: A Central Repository of Software Development - Libraries, Platforms, and Tools",http://dx.doi.org/10.13140/RG.2.2.10465.17769 -Maximum Power of Coupled-Qubit Otto Engines,http://arxiv.org/abs/2305.08440v1 -Robust Hamiltonian Engineering for Interacting Qudit Systems,http://arxiv.org/abs/2305.09757v1 -"Is googling risky? 
A study on risk perception and experiences of adverse - consequences in web search",http://dx.doi.org/10.1002/ASI.24802 -Mathematics-assisted directed evolution and protein engineering,http://arxiv.org/abs/2306.04658v1 -"A Linearly Convergent GAN Inversion-based Algorithm for Reverse - Engineering of Deceptions",http://arxiv.org/abs/2306.04756v1 -"Enhancing the purity of single photons in parametric down-conversion - through simultaneous pump-beam and crystal-domain engineering",http://arxiv.org/abs/2306.15569v2 -ChatGPT vs SBST: A Comparative Assessment of Unit Test Suite Generation,http://arxiv.org/abs/2307.00588v1 -DeepOnto: A Python Package for Ontology Engineering with Deep Learning,http://arxiv.org/abs/2307.03067v1 -Exploring the Relationship Between Personality Traits and User Feedback,http://arxiv.org/abs/2307.12036v1 -A Taxonomy for Requirements Engineering and Software Test Alignment,http://dx.doi.org/10.1145/2523088 -A quantum Stirling heat engine operating in finite time,http://dx.doi.org/10.1103/PhysRevA.108.012220 -"Cloud Render Farm Services Discovery Using NLP And Ontology Based - Knowledge Graph",http://arxiv.org/abs/2307.13604v1 -Sources of Opacity in Computer Systems: Towards a Comprehensive Taxonomy,http://arxiv.org/abs/2307.14232v1 -"Revisiting the Performance-Explainability Trade-Off in Explainable - Artificial Intelligence (XAI)",http://arxiv.org/abs/2307.14239v1 -"A New Perspective on Evaluation Methods for Explainable Artificial - Intelligence (XAI)",http://arxiv.org/abs/2307.14246v1 -"Using Machine Learning To Identify Software Weaknesses From Software - Requirement Specifications",http://arxiv.org/abs/2308.05558v1 -"Exploring the Optimal Cycle for Quantum Heat Engine using Reinforcement - Learning",http://arxiv.org/abs/2308.06794v1 -"Summary of the 3rd International Workshop on Requirements Engineering - and Testing",http://dx.doi.org/10.1145/2934240.2934249 -Observation of multiple steady states with engineered dissipation,http://arxiv.org/abs/2308.13235v1 -Strain Engineering for High-Performance Phase Change Memristors,http://arxiv.org/abs/2308.13637v1 -"TASEP: A Collaborative Social Engineering Tabletop Role-Playing Game to - Prevent Successful Social Engineering Attacks",http://dx.doi.org/10.1145/3600160.3605005 -"Anisotropy-assisted thermodynamic advantage of a local-spin thermal - machine",http://arxiv.org/abs/2309.04757v1 -Quantum Ising Heat Engines: A mean field study,http://arxiv.org/abs/2309.06301v2 -"Using Large Language Models for Knowledge Engineering (LLMKE): A Case - Study on Wikidata",http://arxiv.org/abs/2309.08491v1 -Floquet engineering of black phosphorus upon below-gap pumping,http://dx.doi.org/10.1103/PhysRevLett.131.116401 -"Creation of flexible spin-caloritronic material with giant transverse - thermoelectric conversion by nanostructure engineering",http://arxiv.org/abs/2310.10233v1 -Diagrammatic Modelling of Causality and Causal Relations,http://arxiv.org/abs/2310.11042v1 -FindZebra: A search engine for rare diseases,http://dx.doi.org/10.1016/j.ijmedinf.2013.01.005 -"How to Apply Markov Chains for Modeling Sequential Edit Patterns in - Collaborative Ontology-Engineering Projects",http://dx.doi.org/10.1016/j.ijhcs.2015.07.006 -"Discovering Beaten Paths in Collaborative Ontology-Engineering Projects - using Markov Chains",http://dx.doi.org/10.1016/j.jbi.2014.06.004 -"Unveiling the Engines of Fast Radio Bursts, Super-Luminous Supernovae, - and Gamma-Ray Bursts",http://dx.doi.org/10.1093/mnras/sty2417 -"Familia: A Configurable Topic Modeling 
Framework for Industrial Text - Engineering",http://arxiv.org/abs/1808.03733v2 -Quantum state engineering by nonlinear quantum interference,http://dx.doi.org/10.1103/PhysRevA.102.033718 -Monitoring quantum Otto engines,http://dx.doi.org/10.1103/PRXQuantum.2.040328 -Data Quality in Empirical Software Engineering: A Targeted Review,http://dx.doi.org/10.1145/2460999.2461024 -"The Scalability-Efficiency/Maintainability-Portability Trade-off in - Simulation Software Engineering: Examples and a Preliminary Systematic - Literature Review",http://dx.doi.org/10.1109/SE-HPCCSE.2016.008 -"A Common Central Engine for Long Gamma Ray Bursts and Type Ib/c - Supernovae?",http://dx.doi.org/10.1093/mnras/stx2083 -"Gamifying the Escape from the Engineering Method Prison - An Innovative - Board Game to Teach the Essence Theory to Future Project Managers and - Software Engineers",http://dx.doi.org/10.1109/ICE.2018.8436340 -Corrosion Resistance of Sulfur-Selenium Alloy Coatings,http://arxiv.org/abs/2009.02451v1 -"The Diversity of Gamification Evaluation in the Software Engineering - Education and Industry: Trends, Comparisons and Gaps",http://arxiv.org/abs/2102.05089v1 -"JEST: N+1-version Differential Testing of Both JavaScript Engines and - Specification",http://arxiv.org/abs/2102.07498v2 -"Developing a New Tool to Implement Computer-Supported Active Learning - Strategies in the Engineering Classroom",http://arxiv.org/abs/2104.10118v1 -"Eco-Coasting Strategies Using Road Grade Preview: Evaluation and Online - Implementation Based on Mixed Integer Model Predictive Control",http://arxiv.org/abs/2111.07377v3 -"Quantum Computing: Fundamentals, Trends and Perspectives for Chemical - and Biochemical Engineers",http://arxiv.org/abs/2201.02823v1 -"FuSeBMC v4: Improving code coverage with smart seeds via BMC, fuzzing - and static analysis",http://arxiv.org/abs/2206.14068v2 -"The Tiny Triplet Finder as a Versatile Track Segment Seeding Engine for - Trigger Systems",http://arxiv.org/abs/2305.09834v1 -The Supernova Ring Revisited II: Distance to the LMC,http://dx.doi.org/10.1086/176290 -In Search of a Source for the 320 Eev Fly's Eye Cosmic Ray,http://dx.doi.org/10.1086/175345 -"The First Second of a Type-II Supernova: Convection, Accretion, and - Shock Propagation",http://dx.doi.org/10.1086/309604 -The Thermonuclear Explosion Of Chandrasekhar Mass White Dwarfs,http://dx.doi.org/10.1086/303544 -ASCA observations of the nearby galaxies Dwingeloo 1 and Maffei 1,http://dx.doi.org/10.1093/mnras/286.2.349 -ISO observations of the radio-loud BALQSO 1556+3517,http://arxiv.org/abs/astro-ph/9711314v1 -An Old Cluster in NGC 6822,http://dx.doi.org/10.1086/300389 -"Synchrotron and SSC Emission and the Blast-Wave Model of Gamma-Ray - Bursts",http://dx.doi.org/10.1086/306789 -On the spin-down of Be stars,http://arxiv.org/abs/astro-ph/9805104v1 -"Energy-Dependent Gamma-Ray Burst Peak Durations and Blast-Wave - Deceleration",http://dx.doi.org/10.1086/306451 -A gamma ray burst with small contamination,http://dx.doi.org/10.1086/311674 -Galactic chemical evolution of Ba-peak elements,http://arxiv.org/abs/astro-ph/9809207v1 -Galactic Gamma Halo by Heavy Neutrino annihilations?,http://dx.doi.org/10.1016/S0927-6505(99)00094-8 -Bars and Boxy/Peanut-Shaped Bulges: An Observational Point of View,http://arxiv.org/abs/astro-ph/9901246v1 -Mass Limits For Black Hole Formation,http://dx.doi.org/10.1086/307647 -On the Heavy Relic Neutrino - Galactic Gamma Halo Connection,http://arxiv.org/abs/astro-ph/9902327v1 -GRB 990123: Reverse and Internal Shock 
Flashes and Late Afterglow,http://dx.doi.org/10.1046/j.1365-8711.1999.02800.x -A Compact Fireball Model of Gamma Ray Bursts,http://dx.doi.org/10.1086/308245 -GRB990123: The Case for Saturated Comptonization,http://dx.doi.org/10.1086/312100 -Identifying Gamma-Ray Burst Remnants in Nearby Galaxies,http://dx.doi.org/10.1086/308682 -"Cross-correlation of Tenerife data with Galactic templates - evidence - for spinning dust?",http://dx.doi.org/10.1086/312384 -Fitting Formulae for Cross Sections of Tidal Capture Binary Formation,http://arxiv.org/abs/astro-ph/9905115v1 -"""No High Energy Emission"" GRB Class Is Attributable to Brightness Bias",http://arxiv.org/abs/astro-ph/9905319v1 -"Intrinsic Parameters of GRB990123 from Its Prompt Optical Flash and - Afterglow",http://dx.doi.org/10.1046/j.1365-8711.2000.03922.x -H_2 Absorption and Fluorescence for Gamma Ray Bursts in Molecular Clouds,http://dx.doi.org/10.1086/308581 -Gamma-ray Bursts - A Puzzle Being Resolved,http://dx.doi.org/10.1016/S0370-1573(00)00036-3 -"Evidence for an Early High-Energy Afterglow Observed with BATSE from - GRB980923",http://dx.doi.org/10.1086/312285 -Acceleration at Relativistic Shocks in Gamma-Ray Bursts,http://arxiv.org/abs/astro-ph/9910128v1 -Compton Dragged Gamma--Ray Bursts associated with Supernovae,http://dx.doi.org/10.1086/312452 -X-ray Emission of Supernova 1998bw in the Error Box of GRB980425,http://arxiv.org/abs/astro-ph/9910236v1 -"Distribution of Caustic-Crossing Intervals for Galactic Binary-Lens - Microlensing Events",http://dx.doi.org/10.1046/j.1365-8711.2000.03293.x -"A Possible Explanation of the Radio Afterglow of GRB980519: The Dense - Medium Effect",http://dx.doi.org/10.1046/j.1365-8711.2000.03660.x -"What can BeppoSAX tell us about short GRBs: An update from the Subsecond - GRB Project",http://dx.doi.org/10.1063/1.1361500 -Prompt Optical Observations of Gamma-ray Bursts,http://dx.doi.org/10.1086/312567 -On the intensity of the cosmic X-ray background,http://dx.doi.org/10.1046/j.1365-8711.2000.03733.x -"Search for High Energy Neutrino Emission from Gamma-Ray Bursts with the - A ntarctic Muon and Neutrino Detector Array (AMANDA)",http://arxiv.org/abs/astro-ph/0008255v1 -Implications for quintessence models from MAXIMA-1 and BOOMERANG-98,http://dx.doi.org/10.1086/318904 -"On the role of extinction in failed gamma-ray burst optical/IR - afterglows",http://dx.doi.org/10.1046/j.1365-8711.2002.05076.x -"Chemical Evolution of the Galaxy: the G-dwarf problem and radioactive - chronology revisited taking account of the Thick Disk",http://dx.doi.org/10.1142/9789812810830_0053 -The Trans-Relativistic Blast Wave Model for SN 1998bw and GRB 980425,http://dx.doi.org/10.1063/1.1419631 -Sub-arcsecond imaging of SiO in the HH 211 protostellar jet,http://dx.doi.org/10.1086/321463 -Early afterglows as probes for the reionization epoch,http://dx.doi.org/10.1007/10853853_52 -Emission and Structure of Compact Fireballs,http://arxiv.org/abs/astro-ph/0104082v1 -"The Inverse Compton Emission Spectra in the Very Early Afterglows of - Gamma-Ray Bursts",http://dx.doi.org/10.1086/321608 -Reionization of an Inhomogeneous Universe,http://arxiv.org/abs/astro-ph/0108176v1 -"Limits on the early afterglow phase of gamma-ray burst sources from - TAROT-1",http://dx.doi.org/10.1051/0004-6361:20011203 -Did Very Massive Stars Pre-enrich and Reionize the Universe?,http://dx.doi.org/10.1086/337996 -REM - Rapid Eye Mount. 
A fast slewing robotized infrared telescope,http://arxiv.org/abs/astro-ph/0109411v1 -Advanced Compton Telescope Designs and SN Science,http://dx.doi.org/10.1016/S1387-6473(02)00210-5 -"Transient Absorption Features in GRBs and Their Implications for GRB - Progenitors",http://dx.doi.org/10.1086/338496 -Substructure in CDM Halos and the Heating of Stellar Disks,http://dx.doi.org/10.1142/9789812778017_0019 -Precursors and electron-positron pair loading from erupting fireballs,http://dx.doi.org/10.1046/j.1365-8711.2002.05176.x -The Supernova-GRB Connection,http://arxiv.org/abs/astro-ph/0112168v2 -Acceleration of GRB outflows by Poynting flux dissipation,http://dx.doi.org/10.1051/0004-6361:20020390 -"Efficient acceleration and radiation in Poynting flux powered GRB - outflows",http://dx.doi.org/10.1051/0004-6361:20020839 -In flight performance and first results of FREGATE,http://dx.doi.org/10.1063/1.1579292 -Observation of Gamma-ray Bursts with INTEGRAL,http://arxiv.org/abs/astro-ph/0205071v1 -"The expected thermal precursors of gamma-ray bursts in the internal - shock model",http://dx.doi.org/10.1046/j.1365-8711.2002.05875.x -Delayed Flashes from Counter Jets of Gamma Ray Bursts,http://dx.doi.org/10.1086/375261 -"Log-Normal Distributions in Cygnus X-1: Possible Physical Link with - Gamma-Ray Bursts and Blazars",http://dx.doi.org/10.1093/pasj/54.5.L69 -The ROTSE-III Robotic Telescope System,http://dx.doi.org/10.1086/345490 -"A reinterpretation of Volcano Ranch lateral distribution measurements to - infer the mass composition of cosmic rays",http://dx.doi.org/10.1016/S0920-5632(03)80387-0 -On the structures in the afterglow peak emission of gamma ray bursts,http://dx.doi.org/10.1086/345804 -"Supranova Model for the Delayed Reddened Optical Excesses Detected in - Several GRBs",http://arxiv.org/abs/astro-ph/0211300v1 -X-Ray Lines and Absorption Edges in GRBs and Their Afterglows,http://arxiv.org/abs/astro-ph/0212034v1 -"Observing scattered X-ray radiation from gamma-ray bursts: a way to - measure their collimation angles",http://dx.doi.org/10.1051/0004-6361:20021825 -SiO Observations of the NGC 1333 IRAS 2A Protostellar Jet,http://arxiv.org/abs/astro-ph/0212247v1 -Near-Infrared Photometric Survey of Proto-Planetary Nebula Candidates,http://dx.doi.org/10.1086/373928 -"On the carriers of the celestial infrared vibrational bands and their - excitation mechanisms",http://arxiv.org/abs/astro-ph/0302023v1 -The Characteristics of Magnetic CVs in the Period Gap,http://arxiv.org/abs/astro-ph/0302046v1 -The expected photospheric emission of GRBs in the internal shock model,http://arxiv.org/abs/astro-ph/0303288v1 -Interaction of GRB Fireballs with Ambient Medium,http://arxiv.org/abs/astro-ph/0305118v1 -Using globular clusters to test gravity in the weak acceleration regime,http://dx.doi.org/10.1051/0004-6361:20030762 -Superlong Gamma-Ray Bursts,http://arxiv.org/abs/astro-ph/0305503v1 -The origin of peculiar jet-torus structure in the Crab nebula,http://dx.doi.org/10.1046/j.1365-8711.2003.07097.x -"Addendum to ""Coherent radio pulses from GEANT generated electromagnetic - showers in ice""",http://dx.doi.org/10.1103/PhysRevD.69.047101 -An Off-Axis Jet Model For GRB980425 and Low Energy Gamma-Ray Bursts,http://dx.doi.org/10.1086/378736 -GRB021125: the first GRB imaged by INTEGRAL,http://dx.doi.org/10.1051/0004-6361:20031155 -"Compton drag as a mechanism for very high linear polarization in - Gamma-Ray Bursts",http://dx.doi.org/10.1111/j.1365-2966.2004.07387.x -"GRB optical and IR rapid follow-up with the 2 m 
Liverpool Robotic - Telescope",http://arxiv.org/abs/astro-ph/0310137v1 -Radio Identification of the X-ray Jet in the z=4.3 Quasar GB 1508+5714,http://dx.doi.org/10.1086/381366 -"The Liverpool Telescope: Rapid follow-up observation of Targets of - opportunity with a 2 m robotic telescope",http://dx.doi.org/10.1016/j.nuclphysbps.2004.04.055 -Low frequency radio and X-ray properties of core-collapse supernovae,http://dx.doi.org/10.1007/3-540-26633-X_19 -GRB980425 in the Off-Axis Jet Model of the Standard GRBs,http://dx.doi.org/10.1063/1.1810877 -"Inhomogeneous reheating scenario with low scale inflation and/or MSSM - flat directions",http://dx.doi.org/10.1088/1475-7516/2004/03/006 -Spectral analysis of 50 GRBs detected by HETE-2,http://dx.doi.org/10.1063/1.1810806 -"On Hadronic Models for the Anomalous $γ$-ray Emission Component in - GRB 941017",http://dx.doi.org/10.1063/1.1810907 -Gamma-Ray Bursts and Cosmology,http://arxiv.org/abs/astro-ph/0312277v1 -"New constraints on the mass composition of cosmic rays above 10^17 eV - from Volcano Ranch measurements",http://dx.doi.org/10.1016/j.astropartphys.2004.04.009 -"Peak Energy-Isotropic Energy Relation in the Off-Axis Gamma-Ray Burst - Model",http://dx.doi.org/10.1086/421084 -"Does the Lunar Surface Still Offer Value As a Site for Astronomical - Observatories?",http://arxiv.org/abs/astro-ph/0401274v2 -Spectral Evolution of Two High-Energy Gamma-Ray Bursts,http://dx.doi.org/10.1029/156GM29 -"Detection of polarization from the E^4Π-A^4Π system of FeH in - sunspot spectra",http://dx.doi.org/10.1086/383224 -On the Kinetic Energy and Radiative Efficiency of Gamma-Ray Bursts,http://dx.doi.org/10.1086/423026 -Preliminary INTEGRAL Analysis of GRB040106,http://arxiv.org/abs/astro-ph/0404126v2 -Gamma ray bursts and the origin of galactic positrons,http://dx.doi.org/10.1016/j.physletb.2006.03.022 -Muon and Tau Neutrinos Spectra from Solar Flares,http://dx.doi.org/10.1088/1009-9271/3/S1/75 -Intrinsic spectra and energetics of cosmological Gamma-Ray Bursts,http://dx.doi.org/10.1088/1009-9271/3/S1/455 -Gamma-Ray Burst Polarization: Limits from RHESSI Measurements,http://dx.doi.org/10.1086/423163 -"The collimation--corrected GRB energies correlate with the peak energy - of their $νF_ν$ spectrum",http://dx.doi.org/10.1086/424913 -MGGPOD: a Monte Carlo Suite for Gamma-Ray Astronomy,http://arxiv.org/abs/astro-ph/0406159v1 -Gamma-Ray Bursts Observed with the Spectrometer SPI Onboard INTEGRAL,http://dx.doi.org/10.1063/1.1810921 -The potential of INTEGRAL for the detection of high redshift GRBs,http://dx.doi.org/10.1051/0004-6361:20034493 -GRB 970228 Within the EMBH Model,http://dx.doi.org/10.1063/1.1810880 -Correlated X-ray and Optical Variability in V404 Cyg in Quiescence,http://dx.doi.org/10.1086/424005 -Gamma-ray burst internal shocks with magnetization,http://dx.doi.org/10.1111/j.1365-2966.2004.08263.x -Molecular gas at high redshift: jet-induced star formation?,http://dx.doi.org/10.1086/424843 -Approaching the dynamics of hot nucleons in supernovae,http://dx.doi.org/10.1016/j.nuclphysa.2005.05.016 -"The Ultraviolet flash accompanying GRBs from neutron-rich internal - shocks",http://dx.doi.org/10.1086/426476 -Propagation of High Energy Galactic Cosmic Rays,http://arxiv.org/abs/astro-ph/0411338v1 -Precursor activity in bright long BATSE gamma-Ray Bursts,http://dx.doi.org/10.1111/j.1365-2966.2005.08687.x -Emergence of a filamentary structure in the fireball from GRB spectra,http://dx.doi.org/10.1142/S0218271805006201 -Exploiting the neutronization burst of a
galactic supernova,http://dx.doi.org/10.1103/PhysRevD.71.063003 -An Off-Axis Model for GRB 031203,http://dx.doi.org/10.1086/431237 -Neutrino-Dominated Accretion and Supernovae,http://dx.doi.org/10.1086/431354 -"Infrared afterglow of GRB041219 as a result of reradiation on dust in a - circumstellar cloud",http://dx.doi.org/10.1007/s10511-005-0035-2 -"A New Relation between GRB Rest-Frame Spectra and Energetics and Its - Utility on Cosmology",http://arxiv.org/abs/astro-ph/0504052v3 -Cosmic Rays from Gamma Ray Bursts in the Galaxy,http://dx.doi.org/10.1086/432663 -Is Thermal Emission in Gamma-Ray Bursts Ubiquitous?,http://dx.doi.org/10.1086/431239 -Global properties of X-ray afterglows of GRB,http://dx.doi.org/10.1393/ncc/i2005-10088-2 -"A compact binary merger model for the short, hard GRB 050509b",http://dx.doi.org/10.1086/496882 -A Reverse-Shock Model for the Early Afterglow of GRB 050525A,http://dx.doi.org/10.1086/466523 -Inverse Compton X-ray Flare from GRB Reverse Shock,http://dx.doi.org/10.1086/510198 -"Deriving stellar properties from photometry: maximizing information - content and minimizing biases",http://arxiv.org/abs/astro-ph/0506278v1 -The host of GRB/XRF 030528 - an actively star forming galaxy at z=0.782,http://dx.doi.org/10.1051/0004-6361:20053773 -Wolf-Rayet and O Star Runaway Populations from Supernovae,http://dx.doi.org/10.1111/j.1365-2966.2005.09536.x -Local Pancake Defeats Axis of Evil,http://arxiv.org/abs/astro-ph/0509039v2 -Triggered Star Formation by Massive Stars,http://dx.doi.org/10.1086/510893 -"The Case for Anisotropic Afterglow Efficiency within Gamma-Ray Burst - Jets",http://dx.doi.org/10.1086/503667 -"Theoretical interpretation of luminosity and spectral properties of GRB - 031203",http://dx.doi.org/10.1086/498614 -"Detection of a very bright optical flare from a gamma-ray burst at - redshift 6.29",http://dx.doi.org/10.1086/501048 -The theory of spectral evolution of the GRB prompt emission,http://dx.doi.org/10.1086/498697 -"Shallow Decay of Early X-ray Afterglows from Inhomogeneous Gamma-Ray - Burst Jets",http://dx.doi.org/10.1086/503384 -"No Universality for Electron's Power-Law Index (p) in Gamma-Ray Bursts - and Other Relativistic Sources",http://dx.doi.org/10.1111/j.1365-2966.2006.10768.x -Gamma Ray Bursts as cosmological tools,http://dx.doi.org/10.1063/1.2141844 -"An improved redshift indicator for Gamma-Ray Bursts, based on the prompt - emission",http://dx.doi.org/10.1063/1.2207877 -The role of kink instability in Poynting-flux dominated jets,http://dx.doi.org/10.1051/0004-6361:20054107 -The Complete Spectral Catalog of Bright BATSE Gamma-Ray Bursts,http://dx.doi.org/10.1086/505911 -Theories of GRB Early Afterglow,http://dx.doi.org/10.1063/1.2207895 -"GRB 050315: A step in the proof of the uniqueness of the overall GRB - structure",http://dx.doi.org/10.1063/1.2207866 -INTEGRAL observations of the blazar 3C454.3 in outburst,http://dx.doi.org/10.1051/0004-6361:200600017 -The present and the future of cosmology with Gamma Ray Bursts,http://dx.doi.org/10.1142/9789812773548_0012 -Constraining the GRB Collimation with a Survey for Orphan Afterglows,http://dx.doi.org/10.1063/1.2207929 -GRB 060218/SN 2006aj: A Gamma-Ray Burst and Prompt Supernova at z=0.0335,http://dx.doi.org/10.1086/505177 -The superburst recurrence time in luminous persistent LMXBs,http://dx.doi.org/10.1051/0004-6361:20064884 -Thermal Emission from a Hot Cocoon Surrounding the Jet of XRF 060218,http://arxiv.org/abs/astro-ph/0606565v1 -"The prompt optical/near-infrared flare of GRB 050904: the 
most luminous - transient ever detected",http://dx.doi.org/10.1086/511066 -"Contribution of GRB Emission to the GeV Extragalactic Diffuse Gamma-Ray - Flux",http://dx.doi.org/10.1063/1.2207868 -Constraining the environment of GRB 990712 through emission line fluxes,http://dx.doi.org/10.1051/0004-6361:20065672 -The Onset of Gamma-Ray Burst Afterglow,http://dx.doi.org/10.1086/510203 -Nonlinear electrodynamics and the Pioneer 10/11 spacecraft anomaly,http://dx.doi.org/10.1209/0295-5075/77/19001 -"Observations of Comet 9P/Tempel 1 with the Keck 1 HIRES Instrument - During Deep Impact",http://dx.doi.org/10.1016/j.icarus.2006.07.030 -A dynamical origin for early mass segregation in young star clusters,http://dx.doi.org/10.1086/511763 -Type Ia Supernova Rate in the Galactic Center Region,http://arxiv.org/abs/astro-ph/0609566v2 -"Swift GRBs and the Ep,i - Eiso correlation",http://dx.doi.org/10.1393/ncb/i2007-10064-9 -Multi-wavelength variability of the magnetar 4U 0142+61,http://dx.doi.org/10.1086/507605 -Gamma Ray Bursts: Cosmic Rulers for the High-Redshift Universe?,http://dx.doi.org/10.1098/rsta.2006.1991 -Wind circumburst density profile: a linear E_p-E_gamma correlation,http://dx.doi.org/10.1393/ncb/i2007-10302-2 -Understanding the Spins of Young Stars,http://arxiv.org/abs/astro-ph/0701648v1 -Swift Observations of GRB 051109B,http://dx.doi.org/10.1393/ncb/i2007-10324-8 -GRB 060218 and the outliers with respect to the Ep-Eiso correlation,http://dx.doi.org/10.1142/9789812709653_0024 -"Gamma-ray Burst UV/optical afterglow polarimetry as a probe of Quantum - Gravity",http://dx.doi.org/10.1111/j.1365-2966.2007.11576.x -GRBs spectral correlations and their cosmological use,http://dx.doi.org/10.1098/rsta.2006.1976 -MGGPOD: A Monte Carlo Suite for Gamma Ray Astronomy -- Version 1.1,http://arxiv.org/abs/astro-ph/0702623v1 -"Jitter radiation as a possible mechanism for Gamma-Ray Burst afterglows. 
- Spectra and lightcurves",http://dx.doi.org/10.1086/520701 -"Jitter radiation in gamma-ray bursts and their afterglows: emission and - self-absorption",http://dx.doi.org/10.1111/j.1365-2966.2008.13007.x -On aligning trees,http://arxiv.org/abs/cmp-lg/9707016v1 -Quantum Monte Carlo calculations for integer-doped Fullerides,http://arxiv.org/abs/cond-mat/9804059v1 -Subsurface charge accumulation imaging of a quantum Hall liquid,http://dx.doi.org/10.1038/32112 -"Dual Vortex Theory of Strongly Interacting Electrons: Non-Fermi Liquid - to the (Hard) Core",http://dx.doi.org/10.1103/PhysRevB.61.6307 -Pinholes May Mimic Tunneling,http://dx.doi.org/10.1063/1.1344220 -Correcting the quantum clock: conditional sojourn times,http://dx.doi.org/10.1209/epl/i2002-00665-7 -Trapping Dynamics with Gated Traps: Stochastic Resonance-Like Phenomenon,http://dx.doi.org/10.1016/S0375-9601(00)00724-6 -"Percolative conductivity and critical exponents in mixed-valent - manganites",http://dx.doi.org/10.1103/PhysRevB.63.140418 -"Spectral Representation Theory for Dielectric Behavior of Nonspherical - Cell Suspensions",http://dx.doi.org/10.1088/0253-6102/38/1/113 -"Effect of Non Gaussian Noises on the Stochastic Resonance-Like - Phenomenon in Gated Traps",http://dx.doi.org/10.1016/S0167-2789(02)00505-5 -"""Spin-Disentangled"" Exact Diagonalization of Repulsive Hubbard Systems: - Superconducting Pair Propagation",http://dx.doi.org/10.1088/0953-8984/14/43/103 -Exact Born Approximation for the Spin-Boson Model,http://arxiv.org/abs/cond-mat/0304118v2 -"Evidence for a Structurally-driven Insulator-to-metal Transition in VO2: - a View from the Ultrafast Timescale",http://dx.doi.org/10.1103/PhysRevB.70.161102 -"Resonating Valence Bond Mechanism of Impurity Band Superconductivity in - Diamond",http://arxiv.org/abs/cond-mat/0404286v3 -"Similarity and contrasts between thermodynamic properties at the - critical point of liquid alkali metals and of electron-hole droplets",http://dx.doi.org/10.1103/PhysRevB.66.073314 -"A `checkerboard' electronic crystal state in lightly hole-doped - Ca_{2-x}Na_{x}CuO_{2}Cl_{2}",http://dx.doi.org/10.1038/nature02861 -Order and disorder in columnar joints,http://dx.doi.org/10.1209/epl/i2004-10408-x -Multiple Particle Scattering in Quantum Point Contacts,http://dx.doi.org/10.1103/PhysRevB.72.121312 -The XY Model and the Berezinskii-Kosterlitz-Thouless Phase Transition,http://arxiv.org/abs/cond-mat/0512356v1 -"System Description for a Scalable, Fault-Tolerant, Distributed Garbage - Collector",http://arxiv.org/abs/cs/0207036v1 -"A Framework for Combining Defeasible Argumentation with Labeled - Deduction",http://arxiv.org/abs/cs/0405107v1 -"Non-Newtonian Dynamic Gravitational Field from The Longitudinally - Asymmetric Rotating Objects",http://arxiv.org/abs/gr-qc/9706048v1 -"An Isolated Gravitational Dipole Moment Placed at The Center of the Two - Mass Pole Model Universe",http://arxiv.org/abs/gr-qc/9707016v2 -Spacetime and the Philosophical Challenge of Quantum Gravity,http://arxiv.org/abs/gr-qc/9903072v1 -"Brief Comments on ``The Shapiro Conjecture, Prompt or Delayed Collapse - ?'' by Miller, Suen and Tobias",http://arxiv.org/abs/gr-qc/9909059v1 -Head-on/Near Head-on Collisions of Neutron Stars With a Realistic EOS,http://dx.doi.org/10.1103/PhysRevD.67.104001 -"General Relativistic Decompression of Binary Neutron Stars During - Dynamic Inspiral",http://dx.doi.org/10.1103/PhysRevD.75.024001 -"Measurements of the bb~ production cross section and forward backward - asymmetry at centre-of-mass energies 
above the Z pole at LEP",http://dx.doi.org/10.1016/S0370-2693(00)00676-6 -"Searches for prompt light gravitino signatures in e+e- Collisions at - sqrt{s}=189GeV",http://dx.doi.org/10.1016/S0370-2693(01)00101-0 -"Observation of Double cc bar Production in e+ e- Annihilation at sqrt{s} - ~ 10.6 GeV",http://dx.doi.org/10.1103/PhysRevLett.89.142001 -Bose-Einstein Correlations of pi0 Pairs from Hadronic Z0 Decays,http://dx.doi.org/10.1016/S0370-2693(03)00337-X -Heavy Flavor Production at the Tevatron,http://dx.doi.org/10.1063/1.1807298 -"Measurement of the Forward-Backward Asymmetries of e+e- -> Z -> b bbar - and e+e- -> Z -> c cbar using prompt leptons",http://dx.doi.org/10.1140/epjc/s2004-01708-6 -"Estimation of charm production cross section in hadronic interactions at - high energies",http://arxiv.org/abs/hep-ex/0404032v1 -"Improved Measurement of the b-bbar Production Cross Section in 920 GeV - Fixed-Target Proton-Nucleus Collisions",http://dx.doi.org/10.1103/PhysRevD.73.052005 -Cosmological and Astrophysical Bounds on Neutrino Masses and Lifetimes,http://arxiv.org/abs/hep-ph/9212240v1 -"Empirical Determination of the Very High Energy Heavy Quark Cross - Section from Non-Accelerator Data",http://dx.doi.org/10.1103/PhysRevD.49.2310 -Gluon fragmentation into polarized charmonium,http://dx.doi.org/10.1103/PhysRevD.51.R2039 -"Soft Photons in Hadron-Hadron Collisions: Synchrotron Radiation from the - QCD Vacuum?",http://dx.doi.org/10.1007/BF01564830 -Spin symmetry predictions for heavy quarkonia alignment,http://dx.doi.org/10.1016/0370-2693(94)01658-Y -Pinning down the Glue in the Proton,http://dx.doi.org/10.1016/0370-2693(95)00646-3 -Color-octet quarkonia production,http://dx.doi.org/10.1103/PhysRevD.53.150 -Charm Production and High Energy Atmospheric Muon and Neutrino Fluxes,http://dx.doi.org/10.1016/0927-6505(96)00033-3 -Double gluon fragmentation to $J/ψ$ pairs at the Tevatron,http://dx.doi.org/10.1016/0370-2693(95)01592-2 -Color-octet quarkonia production II,http://dx.doi.org/10.1103/PhysRevD.53.6203 -"The Case of $α_s$: $Z$ versus Low Energies, or How Nature Prompts - us of New Physics",http://dx.doi.org/10.1142/S0217751X9600153X -Extracting $R_b$ and $R_c$ Without Flavor Tagging,http://dx.doi.org/10.1103/PhysRevLett.76.3259 -Production of Heavy Quarkonium in High Energy Colliders,http://dx.doi.org/10.1146/annurev.nucl.46.1.197 -Color-Octet $J/ψ$ Production in the $Υ$ Decay,http://dx.doi.org/10.1103/PhysRevD.54.929 -Decoupling or nondecoupling: is that the $R_b$ question?,http://dx.doi.org/10.1103/PhysRevD.54.1176 -Associated Production of Charm and a Hard Photon,http://arxiv.org/abs/hep-ph/9604447v1 -Physics in Charm Energy Region,http://arxiv.org/abs/hep-ph/9609303v1 -"Extraction of parton distributions and $α_s$ from DIS data within - the Bayesian treatment of systematic errors",http://dx.doi.org/10.1007/s100520050763 -J/psi Production at the LHC,http://dx.doi.org/10.1016/S0920-5632(97)00187-4 -"Isolated Photons at Hadron Colliders at O($αα_s^2$) (I): Spin - Averaged Case",http://dx.doi.org/10.1016/S0550-3213(97)00339-8 -Recent Developments in Low x Physics,http://arxiv.org/abs/hep-ph/9702223v1 -"Quarkonium production: velocity-scaling rules and long-distance matrix - elements",http://dx.doi.org/10.1142/S0217751X97002103 -"Color-Octet Contributions to J/psi Photoproduction via Fragmentation at - HERA",http://dx.doi.org/10.1016/S0370-2693(97)01126-X -Neutrinos Must be Tachyons,http://arxiv.org/abs/hep-ph/9704311v4 -Charmonium Production via Fragmentation at DESY
HERA,http://dx.doi.org/10.1103/PhysRevD.56.5820 -"TEVATRON-HERA Colour-Octet Charmonium Anomaly Versus Higher-Order QCD - Effects",http://dx.doi.org/10.1007/s100520050359 -Parton distributions: a new global analysis,http://dx.doi.org/10.1007/s100520050220 -Polarized J/psi production at CLEO,http://arxiv.org/abs/hep-ph/9804455v1 -Measuring the Broken Phase Sphaleron Rate Nonperturbatively,http://dx.doi.org/10.1103/PhysRevD.59.014503 -"LHC Reach for Gauge Mediated Supersymmetry Breaking Models Via Prompt - Photon Channels",http://dx.doi.org/10.1016/S0370-2693(98)00794-1 -A One-Scale Model of Dynamical Supersymmetry Breaking,http://dx.doi.org/10.1103/PhysRevD.60.015004 -Bremsstrahlung of a Quark Propagating through a Nucleus,http://dx.doi.org/10.1103/PhysRevC.59.1609 -Charmonium Suppression in Heavy Ion Collisions by Prompt Gluons,http://dx.doi.org/10.1016/S0370-2693(98)01440-3 -"Determination of Radiative Widths of Scalar Mesons from Experimental - Results on $γγ\toππ$",http://dx.doi.org/10.1007/s100520050509 -Prompt atmospheric neutrinos and muons: NLO vs LO QCD predictions,http://dx.doi.org/10.1103/PhysRevD.61.036005 -K* nucleon hyperon form factors and nucleon strangeness,http://dx.doi.org/10.1103/PhysRevC.61.055206 -"Prompt atmospheric neutrinos and muons: dependence on the gluon - distribution function",http://dx.doi.org/10.1103/PhysRevD.61.056011 -Higher-Order Effects on Inelastic J/psi Photoproduction,http://arxiv.org/abs/hep-ph/9907315v1 -"The forward photon production and the gluonic content of the real and - virtual photon at the HERA collider",http://dx.doi.org/10.1016/S0920-5632(00)00149-3 -Longitudinal Spin Dependence of Massive Lepton Pair Production,http://arxiv.org/abs/hep-ph/0001190v1 -Recoil and Threshold Corrections in Short-distance Cross Sections,http://dx.doi.org/10.1103/PhysRevD.63.114018 -"""Free"" Constituent Quarks and Dilepton Production in Heavy Ion - Collisions",http://dx.doi.org/10.1134/1.1446567 -"Results from Bottomonia Production at the Tevatron and Prospects for the - LHC",http://dx.doi.org/10.1016/S0550-3213(01)00053-0 -"The Instanton/Sphaleron Mechanism of Prompt Gluon Production in High - Energy Heavy Ion Collisions at RHIC",http://dx.doi.org/10.1016/S0370-2693(01)00892-9 -Hard Thermal Photon Production in Relativistic Heavy Ion Collisions,http://dx.doi.org/10.1016/S0370-2693(01)00525-1 -Black Holes at the LHC,http://dx.doi.org/10.1103/PhysRevLett.87.161602 -Towards a global analysis of polarized parton distributions,http://dx.doi.org/10.1103/PhysRevD.64.114007 -"Unifying aspects of polarization of vector mesons from hard production - in DIS and at Tevatron",http://dx.doi.org/10.1016/S0375-9474(01)01495-6 -Discovering New Physics in the Decays of Black Holes,http://dx.doi.org/10.1103/PhysRevLett.88.181801 -"J/psi Inclusive Production in nu N Neutral-Current Deep-Inelastic - Scattering",http://dx.doi.org/10.1016/S0550-3213(02)00401-7 -Update on very light CP-odd scalar in the Two-Higgs-Doublet Model,http://dx.doi.org/10.1103/PhysRevD.66.075006 -"Prompt Multi-Gluon Production in High Energy Collisions from Singular - Yang-Mills Solutions",http://dx.doi.org/10.1103/PhysRevD.67.014005 -Prompt Quark Production by exploding Sphalerons,http://dx.doi.org/10.1103/PhysRevD.67.014006 -"Large-$p_T$ Inclusive $π^0$ Production in Heavy-Ion Collisions at RHIC - and LHC",http://dx.doi.org/10.1016/S0375-9474(03)01447-7 -"Z$ decay into a bottom quark, a light sbottom and a light gluino",http://dx.doi.org/10.1103/PhysRevD.67.015005 -"Measuring the Spectra of High Energy 
Neutrinos with a Kilometer-Scale - Neutrino Telescope",http://dx.doi.org/10.1103/PhysRevD.67.013001 -"Catastrophic rearrangement of a compact star due to the quark core - formation",http://dx.doi.org/10.1016/S0370-2693(02)03108-8 -Longitudinal virtual photons and the interference terms in ep collisions,http://arxiv.org/abs/hep-ph/0211112v1 -Charmonium production in polarized high-energy collisions,http://dx.doi.org/10.1103/PhysRevD.68.034017 -Sudakov resummation of multiparton QCD cross sections,http://dx.doi.org/10.1016/j.physletb.2003.09.068 -"The Constituent Quark Model Revisited - Quark Masses, New Predictions - for Hadron Masses and KN Pentaquark",http://arxiv.org/abs/hep-ph/0307243v2 -Prompt J/psi production in charged-current deep-inelastic scattering,http://dx.doi.org/10.1016/j.nuclphysb.2003.10.039 -Supernova prompt neutronization neutrinos and neutrino magnetic moments,http://dx.doi.org/10.1088/1475-7516/2003/12/007 -Turbulent Thermalization,http://dx.doi.org/10.1103/PhysRevD.70.043538 -Visible Effects of the Hidden Sector,http://dx.doi.org/10.1103/PhysRevD.70.045023 -"Prospects of Searches for Neutral, Long-Lived Particles which Decay to - Photons using Timing at CDF",http://dx.doi.org/10.1103/PhysRevD.70.114032 -Inelastic J/psi and Upsilon hadroproduction,http://dx.doi.org/10.1140/epjc/s2004-02090-1 -Identifying the neutrino mass hierarchy with supernova neutrinos,http://arxiv.org/abs/hep-ph/0412100v1 -"Production of Forward Rapidity Photons in High Energy Heavy Ion - Collisions",http://dx.doi.org/10.1016/j.nuclphysa.2005.02.156 -Transverse single spin asymmetries in photon production,http://dx.doi.org/10.1016/j.physletb.2005.03.008 -Threshold resummation for the prompt-photon cross section revisited,http://dx.doi.org/10.1103/PhysRevD.72.014014 -"J/psi, psi' and Upsilon Production at Hadron Colliders: a review",http://dx.doi.org/10.1142/S0217751X06033180 -Charmonium Production at High Energy in the k_T-Factorization Approach,http://dx.doi.org/10.1103/PhysRevD.73.074022 -Real and Imaginary Elements of Fermion Mass Matrices,http://dx.doi.org/10.1016/j.nuclphysb.2006.07.021 -"Saturation Physics in Ultra High Energy Cosmic Rays: Heavy Quark - Production",http://dx.doi.org/10.1088/1126-6708/2007/04/028 -"New Quark Relations for Hadron Masses and Magnetic Moments - A Challenge - for Explanation from QCD",http://dx.doi.org/10.1016/j.physletb.2007.04.063 -"On the transport equations of cosmic neutrinos passing through Earth and - secondary nu_mu fluxes",http://dx.doi.org/10.1103/PhysRevD.74.103006 -Simulations of Cold Electroweak Baryogenesis: Finite time quenches,http://dx.doi.org/10.1088/1126-6708/2007/01/034 -Identifying the neutrino mass hierarchy with supernova neutrinos,http://arxiv.org/abs/hep-ph/0701060v1 -"The leading particle effect from light quark fragmentation in charm - hadroproduction",http://dx.doi.org/10.1140/epjc/s10052-007-0227-5 -On Admissible Gauges for Constrained Systems,http://dx.doi.org/10.1103/PhysRevD.53.2160 -C*algebras and differential geometry,http://arxiv.org/abs/hep-th/0101093v1 -"D-Brane Monodromies, Derived Categories and Boundary Linear Sigma Models",http://arxiv.org/abs/hep-th/0206242v1 -Free Energy of the Two-Matrix Model/dToda Tau-Function,http://dx.doi.org/10.1016/j.nuclphysb.2003.07.029 -"Adventures in Thermal Duality (II): Towards a Duality-Covariant String - Thermodynamics",http://dx.doi.org/10.1103/PhysRevD.70.126006 -Lie Particle And Its Batalin-Tyutin Extension,http://dx.doi.org/10.1016/j.physletb.2005.12.021 -"N=2 Superparticles, RR Fields 
and Noncommutative Structures of - (super)-Spacetime",http://dx.doi.org/10.1140/epjcd/s2006-03-002-6 -Precision Cosmology and the Landscape,http://arxiv.org/abs/hep-th/0610211v3 -"Integrally closed ideals in two-dimensional regular local rings are - multiplier ideals",http://arxiv.org/abs/math/0212002v2 -Logged Rewriting for Monoids,http://arxiv.org/abs/math/0507344v1 -Divisibility of countable metric spaces,http://arxiv.org/abs/math/0510254v1 -"Positive polynomials in scalar and matrix variables, the spectral - theorem and optimization",http://arxiv.org/abs/math/0612103v1 -Spectroscopy of $^{194}$Po,http://dx.doi.org/10.1103/PhysRevC.52.R1723 -The 8Li Calibration Source for the Sudbury Neutrino Observatory,http://dx.doi.org/10.1016/S0168-9002(02)00860-4 -Flash of Prompt Photons from the Early Stage of Heavy-Ion Collisions,http://dx.doi.org/10.1063/1.59569 -"Angular Momenta of Even-Even Fragments in the Neutronless Fission of - $^{252}$Cf",http://dx.doi.org/10.1103/PhysRevC.60.034613 -"A Quasi-Classical Model of Intermediate Velocity Particle Production in - Asymmetric Heavy Ion Reactions",http://dx.doi.org/10.1103/PhysRevC.65.054613 -Echos of the liquid-gas phase transition in multifragmentation,http://dx.doi.org/10.1016/S0375-9474(01)01675-X -The Origin of Large-p_T pi^0 Suppression at RHIC,http://dx.doi.org/10.1016/S0370-2693(03)00536-7 -The Role of Produced Hadrons in J/psi Suppression,http://dx.doi.org/10.1016/S0375-9474(02)01511-7 -Heavy Quark Radiative Energy Loss - Applications to RHIC,http://dx.doi.org/10.1088/0954-3899/30/8/086 -Isospin Dynamics in Heavy Ion Collisions,http://arxiv.org/abs/nucl-th/0607035v1 -Exotic fission properties of highly neutron-rich Uranium isotopes,http://dx.doi.org/10.1007/s12043-008-0007-2 -"Existence threshold for the ac-driven damped nonlinear Schrödinger - solitons",http://dx.doi.org/10.1016/S0167-2789(99)00055-X -Thermodynamic Equilibrium in Open Chemical Systems,http://arxiv.org/abs/physics/0004035v3 -Wakefield Band Partitioning In Linac Structures,http://arxiv.org/abs/physics/0208086v1 -"Equation of State of Chemical System: From True Equilibrium to True - Chaos",http://arxiv.org/abs/physics/0209078v1 -Role of Molecular Dissociation in Feshbach-Interacting 85Rb Condensates,http://arxiv.org/abs/physics/0209083v1 -"A new insight into the negative-mass paradox of gravity and the - accelerating universe",http://arxiv.org/abs/physics/0308038v1 -Universality in snowflake aggregation,http://dx.doi.org/10.1029/2004GL020363 -"Geminate recombination of hydroxyl radicals generated in 200 nm - photodissociation of aqueous hydrogen peroxide",http://dx.doi.org/10.1016/j.cplett.2003.11.062 -"Fourth Generation Nuclear Weapons: Military effectiveness and collateral - effects",http://arxiv.org/abs/physics/0510071v5 -Efficient Response to Cascading Disaster Spreading,http://dx.doi.org/10.1103/PhysRevE.75.056107 -"Liquid drop splashing on smooth, rough and textured surfaces",http://dx.doi.org/10.1103/PhysRevE.75.056316 -Chirality and Protein Folding,http://dx.doi.org/10.1088/0953-8984/17/18/013 -"Virus evolution : the emergence of new ideas (and re-emergence of old - ones)",http://arxiv.org/abs/q-bio/0604034v1 -"Entanglement and teleportation of macroscopic continuous variables by - superconducting devices",http://arxiv.org/abs/quant-ph/0504153v1 -"The Inner Limit of Quantum Brownian Evolution and its Relevance to - Positivity",http://arxiv.org/abs/quant-ph/0505017v2 -"On distinguishability, orthogonality, and violations of the second law: - contradictory assumptions,
contrasting pieces of knowledge",http://arxiv.org/abs/quant-ph/0505229v2 -The Sampling Theorem and Coherent State Systems in Quantum Mechanics,http://dx.doi.org/10.1088/0031-8949/74/2/004 -Rotational Control and Selective Isotope Alignment by Ultrashort Pulses,http://arxiv.org/abs/quant-ph/0601197v1 -Subspace Confinement: How good is your qubit?,http://dx.doi.org/10.1088/1367-2630/9/10/384 -Entanglement in Many-Body Systems,http://dx.doi.org/10.1103/RevModPhys.80.517 -"Calculation of prompt diphoton production cross sections at Tevatron and - LHC energies",http://dx.doi.org/10.1103/PhysRevD.76.013009 -"Does the Second Caustic Ring of Dark Matter Cause the Monoceros Ring of - Stars ?",http://dx.doi.org/10.1103/PhysRevD.76.023505 -"A search for X-ray counterparts of the millisecond pulsars in the - globular cluster M28 (NGC 6626)",http://arxiv.org/abs/0705.0119v1 -The interplay of university and industry through the FP5 network,http://dx.doi.org/10.1088/1367-2630/9/6/183 -Learning to Bluff,http://arxiv.org/abs/0705.0693v1 -Extrasolar planet taxonomy: a new statistical approach,http://dx.doi.org/10.1086/519760 -Multi-year search for a diffuse flux of muon neutrinos with AMANDA-II,http://dx.doi.org/10.1103/PhysRevD.76.042008 -"Hydrodynamic Collimation of Relativistic Outflows: Semianalytic - Solutions and Application to Gamma-Ray Bursts",http://dx.doi.org/10.1086/522668 -Topological reconstruction of open charm mesons using electron tagging,http://arxiv.org/abs/0705.2089v1 -Delayed neutrons measurement at the MEGAPIE target,http://arxiv.org/abs/0705.3738v1 -Hadronic Final States and Spectroscopy in ep Collisions at HERA,http://arxiv.org/abs/0705.4625v1 -"The spatially resolved host of GRB 060505 and implications for the - nature of the progenitor",http://arxiv.org/abs/0706.0674v1 -"The GRB afterglow onset observed by REM: fireball Lorentz factor and - afterglow fluence",http://arxiv.org/abs/0706.1772v1 -"Flare magnetic reconnection and relativistic particles in the 2003 - October 28 event",http://dx.doi.org/10.1051/0004-6361:20066966 -"Ultracold Bose gases in time-dependent 1D superlattices: response and - quasimomentum structure",http://dx.doi.org/10.1103/PhysRevA.76.053614 -"The complete catalogue of gamma-ray bursts observed by the Wide Field - Cameras on board BeppoSAX",http://dx.doi.org/10.1051/0004-6361:20077495 -Evidence of Exponential Decay Emission in the Swift Gamma-ray Bursts,http://dx.doi.org/10.1086/521640 -"Dust-scattered X-ray halos around two Swift gamma-ray bursts: GRB 061019 - and GRB 070129",http://dx.doi.org/10.1051/0004-6361:20077968 -"Branching ratio measurement of $K_S\to γγ$ decay using a pure - $\ks$ beam in the KLOE detector",http://arxiv.org/abs/0707.3933v1 -Alfven Wave-Driven Supernova Explosion,http://dx.doi.org/10.1086/533515 -Unstable GRB photospheres and electron-positron annihilation lines,http://dx.doi.org/10.1086/524405 -"Closure Relations for Electron-Positron Pair-Signatures in Gamma-Ray - Bursts",http://dx.doi.org/10.1086/527667 -"Observation of GRBs by the MAGIC Telescope, Status and Outlook",http://arxiv.org/abs/0709.1380v1 -Multiwavelength Gamma-Ray Bursts Observations with ECLAIRs,http://dx.doi.org/10.1063/1.2774837 -A Search for Single Radio Pulses and Bursts from Southern AXPs,http://dx.doi.org/10.1063/1.2900156 -Observation of an unexpected hardening in the spectrum of GRB 021206,http://dx.doi.org/10.1086/526410 -Foundations of multiple black hole evolutions,http://dx.doi.org/10.1103/PhysRevD.77.024034 -"Echo Emission From Dust Scattering and X-Ray 
Afterglows of Gamma-Ray - Bursts",http://dx.doi.org/10.1086/527047 -"Feedback Processes [in Massive Star Formation]: A Theoretical - Perspective",http://arxiv.org/abs/0711.4047v1 -"On features of the radiation from an electron moving along a helix - inside a cylindrical hole in a homogeneous dielectric",http://dx.doi.org/10.1016/j.nimb.2008.02.002 -GeV emission from Gamma-Ray Burst afterglows,http://dx.doi.org/10.1111/j.1365-2966.2008.12950.x -Charm Production in DPMJET,http://dx.doi.org/10.1088/1475-7516/2008/06/003 -"Reappraising Transition Region Line Widths in light of Recent Alfvén - Wave Discoveries",http://dx.doi.org/10.1086/528682 -Comparative Analysis of Control Strategies,http://arxiv.org/abs/0801.0746v1 -An RMHD study of transition between prompt and afterglow GRB phases,http://arxiv.org/abs/0801.1325v1 -RHESSI Spectral Fits of Swift GRBs,http://dx.doi.org/10.1063/1.2943432 -Radio Wavelength Transients: Current and Emerging Prospects,http://dx.doi.org/10.1002/asna.200710935 -"Tracing the distribution and evolution of Iron in the IntraCluster - Medium",http://dx.doi.org/10.1393/ncb/i2008-10446-5 -"Long Term Evolution of Magnetic Turbulence in Relativistic Collisionless - Shocks",http://dx.doi.org/10.1142/S021827180801339X -"Dominant two-loop electroweak corrections to the hadroproduction of a - pseudoscalar Higgs boson and its photonic decay",http://dx.doi.org/10.1103/PhysRevD.78.011303 -Laminating lattices with symmetrical glue,http://arxiv.org/abs/0802.0730v1 -"Nonthermal Synchrotron and Synchrotron Self-Compton Emission from GRBs: - Predictions for {\em Swift} and {\em GLAST}",http://dx.doi.org/10.1063/1.2943490 -Screening High-z GRBs with BAT Prompt Emission Properties,http://dx.doi.org/10.1063/1.2943435 -"Ghost contributions to charmonium production in polarized high-energy - collisions",http://dx.doi.org/10.1103/PhysRevD.77.117501 -"Brightening of an Accretion Disk Due to Viscous Dissipation of - Gravitational Waves During the Coalescence of Supermassive Black Holes",http://dx.doi.org/10.1103/PhysRevLett.101.041101 -Stochastically Induced Gamma-Ray Burst Wakefield Processes,http://dx.doi.org/10.1086/589648 -"Evolution from a molecular Rydberg gas to an ultracold plasma in a - seeded supersonic expansion of NO",http://dx.doi.org/10.1103/PhysRevLett.101.205005 -"Monitoring the Thermal Power of Nuclear Reactors with a Prototype Cubic - Meter Antineutrino Detector",http://dx.doi.org/10.1063/1.2899178 -On the Formation of Compact Stellar Disks Around Sgr A*,http://dx.doi.org/10.1086/591471 -"Halperin-Saslow modes as the origin of the low temperature anomaly in - $NiGa_2S_4$",http://dx.doi.org/10.1103/PhysRevB.79.140402 -"Are Self-explaining and Coached Problem Solving More Effective When Done - by Pairs of Students Than Alone?",http://arxiv.org/abs/0805.4223v1 -The Quirky Collider Signals of Folded Supersymmetry,http://dx.doi.org/10.1103/PhysRevD.78.075028 -Productive Dialog During Collaborative Problem Solving,http://arxiv.org/abs/0806.0599v1 -Diffraction of fast atoms and molecules from surfaces,http://dx.doi.org/10.1103/PhysRevB.78.155408 -Clifford Algebra of Nonrelativistic Phase Space and the Concept of Mass,http://dx.doi.org/10.1088/1751-8113/42/4/045204 -"Optical and gamma-ray emissions from internal forward-reverse shocks: - application to GRB 080319B?",http://dx.doi.org/10.1088/0004-637X/692/2/1662 -Hunting the lightest lightest neutralinos,http://dx.doi.org/10.1103/PhysRevD.78.023507 -Properties of Gamma-Ray Burst Progenitor 
Stars,http://dx.doi.org/10.1126/science.1159003 -An explanation of the solar transition region,http://dx.doi.org/10.1086/591470 -Short Hard Gamma Ray Bursts And Their Afterglows,http://dx.doi.org/10.1088/0004-637X/693/1/311 -Prospects for all-optical ultrafast muon acceleration,http://dx.doi.org/10.1088/0741-3335/51/2/024006 -"Riding the Spiral Waves: Implications of Stellar Migration for the - Properties of Galactic Disks",http://dx.doi.org/10.1086/592231 -"Curvature Effect of a Non-Power-Law Spectrum and Spectral Evolution of - GRB X-Ray Tails",http://dx.doi.org/10.1088/0004-637X/690/1/L10 -ATLAS reach for Quarkonium cross section and polarization measurement,http://dx.doi.org/10.1016/j.nuclphysbps.2009.01.020 -"An Anomalous Extra Z Prime from Intersecting Branes with Drell-Yan and - Direct Photons at the LHC",http://dx.doi.org/10.1016/j.nuclphysb.2009.01.016 -"Prompt TeV Emission from Cosmic Rays Accelerated by Gamma Ray Bursts - Interacting with Surrounding Stellar Wind",http://dx.doi.org/10.1088/0004-637X/691/1/L37 -"GeV Emission from neutron-rich internal shocks of some long Gamma-ray - Bursts",http://dx.doi.org/10.1111/j.1365-2966.2008.13578.x -Prospects for non-standard SUSY searches at LHC,http://arxiv.org/abs/0810.3210v1 -Detection of node group membership in networks with group overlap,http://dx.doi.org/10.1140/epjb/e2008-00418-0 -Semiclassical limits of quantized coordinate rings,http://arxiv.org/abs/0812.1612v1 -"The very slow expansion of an ultracold plasma formed in a seeded - supersonic molecular beam of NO",http://dx.doi.org/10.1103/PhysRevA.79.062706 -"Gamma-ray burst observations with new generation imaging atmospheric - Cerenkov Telescopes in the FERMI era",http://dx.doi.org/10.1063/1.3125779 -"Correlations between Lag, Duration, Peak Luminosity, Hardness, and - Asymmetry in Long GRB Pulses",http://dx.doi.org/10.1063/1.3155923 -Gamma-Ray Burst Pulse Correlations as Redshift Indicators,http://dx.doi.org/10.1063/1.3155957 -The cannonball model of long GRBs - overview,http://dx.doi.org/10.1063/1.3141570 -"Indication of Two Classes in the Swift Short Gamma-Ray Bursts from the - XRT X-Ray Afterglow Light Curves",http://dx.doi.org/10.1063/1.3155860 -"Right-left asymmetry of radiation from fission induced by polarised - neutrons",http://arxiv.org/abs/0902.1387v1 -"Quantifying effective slip length over micropatterned hydrophobic - surfaces",http://dx.doi.org/10.1063/1.3266505 -"The initial Lorentz factors of fireballs inferred from the early X-ray - data of SWIFT GRBs",http://dx.doi.org/10.1051/0004-6361/200811361 -How Dark Matter Reionized The Universe,http://dx.doi.org/10.1103/PhysRevD.80.035007 -Evidence of an initially magnetically dominated outflow in GRB 080916C,http://dx.doi.org/10.1088/0004-637X/700/2/L65 -On the Tate spectrum of tmf at the prime 2,http://arxiv.org/abs/0904.3687v3 -"Simulations of galactic disks including an additional dark baryonic - component",http://arxiv.org/abs/0904.4638v1 -Correlated optical and gamma emissions from GRB 081126,http://dx.doi.org/10.1088/0004-637X/697/1/L18 -"How old are SN Ia Progenitor Systems? 
New Observational Constraints on - the Distribution of Time Delays from GALEX",http://dx.doi.org/10.1111/j.1365-2966.2009.15065.x -"Slow Heating Model of Gamma-Ray Burst: Photon Spectrum and Delayed - Emission",http://dx.doi.org/10.1088/0004-637X/705/2/1714 -"Signatures of a Maxwellian Component in Shock-Accelerated Electrons in - GRBs",http://dx.doi.org/10.1111/j.1365-2966.2009.15454.x -"About possible contribution of intrinsic charm component to inclusive - spectra of charmed mesons",http://dx.doi.org/10.1088/0954-3899/37/5/055004 -Radiation pressure mixing of large dust grains in protoplanetary disks,http://dx.doi.org/10.1038/nature08032 -Test of a LYSO matrix with an electron beam,http://dx.doi.org/10.1016/j.nima.2009.09.015 -Shopping Uncertainties in a Mobile and Social Context,http://arxiv.org/abs/0906.2307v1 -"Exact non-circular symmetric N-skyrmions in helical magnets without - inversion symmetry",http://arxiv.org/abs/0906.3188v1 -Variability in the Prompt Emission of Swift-BAT Gamma-Ray Bursts,http://arxiv.org/abs/0906.3193v1 -Polarization signature of gamma-ray bursts from fragmented fireballs,http://dx.doi.org/10.1088/0004-637X/700/2/L141 -Master Robotic Net,http://dx.doi.org/10.1155/2010/349171 -GRB Observations with the MAGIC Telescopes,http://arxiv.org/abs/0907.1001v1 -Atmospheric lepton fluxes at ultrahigh energies,http://dx.doi.org/10.1088/1475-7516/2009/09/008 -Quantum Non-Locality and Universe,http://arxiv.org/abs/0907.1414v1 -"GRB afterglow plateaus and Gravitational Waves: multi-messenger - signature of a millisecond magnetar?",http://dx.doi.org/10.1088/0004-637X/702/2/1171 -"The Fermi gamma-ray spectrum of the inner galaxy: Implications for - annihilating dark matter",http://arxiv.org/abs/0907.3953v1 -"Quarkonium plus prompt-photon associated hadroproduction and nuclear - shadowing",http://dx.doi.org/10.1140/epjc/s10052-010-1312-8 -"Causal structure and algebraic classification of area metric spacetimes - in four dimensions",http://dx.doi.org/10.1016/j.aop.2010.04.008 -GRB 080916C and GRB 090510: the high energy emission and the afterglow,http://dx.doi.org/10.1088/0004-637X/706/1/L33 -"Lorentz Factor Constraint from the very early external shock of the - gamma-ray burst ejecta",http://dx.doi.org/10.1111/j.1365-2966.2009.15863.x -GRB090111: extra soft steep decay emission and peculiar re-brightening,http://dx.doi.org/10.1111/j.1745-3933.2009.00747.x -Photon 2009: Summary of Theory Talks,http://arxiv.org/abs/0909.0419v1 -Testing the Gamma-Ray Burst Pulse Start Conjecture,http://dx.doi.org/10.1088/0004-637X/705/1/372 -"Constraining relativistic protons and magnetic fields in galaxy clusters - through radio and gamma-ray observations : the case of A2256",http://dx.doi.org/10.1051/0004-6361/200913177 -Production measurements at LHCb with the first data,http://arxiv.org/abs/0909.5596v1 -"High-energy emission as a test of the prior emission model for gamma-ray - burst afterglows",http://dx.doi.org/10.1111/j.1745-3933.2009.00799.x -"Degeneracy: a link between evolvability, robustness and complexity in - biological systems",http://dx.doi.org/10.1186/1742-4682-7-6 -Why did life emerge?,http://dx.doi.org/10.1017/S1473550408004308 -"Probing the Heavy Flavor Content in t tbar Events and Using t tbar - Events as a Calibration Tool at CMS",http://arxiv.org/abs/0910.3329v1 -"Measurement of the Inclusive Isolated Prompt Photon Cross Section in - ppbar Collisions at sqrt{s} = 1.96 TeV using the CDF Detector",http://dx.doi.org/10.1103/PhysRevD.80.111106 -Gamma Ray Bursts: back to the 
blackboard,http://arxiv.org/abs/0911.0349v4 -"Causality constraints in AdS/CFT from conformal collider physics and - Gauss-Bonnet gravity",http://dx.doi.org/10.1007/JHEP04(2010)007 -"Thermonuclear explosions of rapidly rotating white dwarfs - II. - Detonations",http://dx.doi.org/10.1051/0004-6361/200912033 -Prompt Decays of General Neutralino NLSPs at the Tevatron,http://dx.doi.org/10.1007/JHEP05(2010)105 -Higher-Twist Dynamics in Large Transverse Momentum Hadron Production,http://dx.doi.org/10.1103/PhysRevLett.105.062002 -Multi-lepton Signatures of a Hidden Sector in Rare B Decays,http://dx.doi.org/10.1103/PhysRevD.83.054005 -"Diphoton production at Tevatron in the quasi-multi-Regge-kinematics - approach",http://dx.doi.org/10.1103/PhysRevD.80.114016 -Estimating the Prompt Electromagnetic Luminosity of a Black Hole Merger,http://dx.doi.org/10.1088/0004-637X/709/2/774 -"Diffuse gamma ray constraints on annihilating or decaying Dark Matter - after Fermi",http://dx.doi.org/10.1016/j.nuclphysb.2010.07.010 -High energy leptons from muons in transit,http://dx.doi.org/10.1103/PhysRevD.81.053003 -"Detecting the QCD phase transition in the next Galactic supernova - neutrino burst",http://dx.doi.org/10.1103/PhysRevD.81.103005 -"An External Inverse Compton Emission Model of Gamma-Ray Burst - High-Energy Lags",http://arxiv.org/abs/0912.3277v1 -"Charmonium suppression in the presence of dissipative forces in a - strongly coupled quark-gluon plasma",http://dx.doi.org/10.1140/epjc/s10052-010-1318-2 -Coherent bremsstrahlung and GDR width from 252Cf cold fission,http://dx.doi.org/10.1016/j.physletb.2010.05.079 -Radiative Signatures of Relativistic Shocks,http://dx.doi.org/10.1088/2041-8205/710/1/L16 -B Physics and Quarkonia studies with early ATLAS data,http://arxiv.org/abs/1001.3806v1 -Continuum Electrostatics in Cell Biology,http://arxiv.org/abs/1002.1411v1 -"The Ep,i - Eiso correlation and Fermi Gamma-Ray Bursts",http://arxiv.org/abs/1002.2232v1 -"The LAT Low-Energy technique for Fermi Gamma-Ray Bursts spectral - analysis",http://arxiv.org/abs/1002.2617v2 -Field Ionization of Cold Atoms near the Wall of a Single Carbon Nanotube,http://dx.doi.org/10.1103/PhysRevLett.104.133022 -Heavy-Flavor Measurements by the PHENIX Experiment at RHIC,http://dx.doi.org/10.1088/0954-3899/37/9/094012 -Ultrafast melting of a charge-density wave in the Mott insulator 1T-TaS2,http://dx.doi.org/10.1103/PhysRevLett.105.187401 -Fermi-LAT results on Galactic Plane gamma-ray Transient Sources,http://dx.doi.org/10.1142/9789814374552_0116 -"Inconsistency of Breathing Mode Extensions of Maximal Five-Dimensional - Supergravity Embedding",http://dx.doi.org/10.1007/JHEP06(2012)067 -A multispectral and multiscale view of the Sun,http://arxiv.org/abs/1005.5175v1 -"p p -> J/psi+Upsilon+X as a clean probe to the quarkonium production - mechanism",http://dx.doi.org/10.1103/PhysRevD.83.054015 -"Higher-twist contributions to large pT hadron production in hadronic - collisions",http://arxiv.org/abs/1006.4045v1 -Simulating chemistry using quantum computers,http://dx.doi.org/10.1146/annurev-physchem-032210-103512 -Gamma ray emission from magnetized relativistic GRB outflows,http://dx.doi.org/10.1051/0004-6361/201015212 -"Pinwheel VBS state and triplet excitations in the two-dimensional - deformed kagome lattice",http://dx.doi.org/10.1038/nphys1761 -Delta Gamma_d: A Forgotten Null Test of the Standard Model,http://dx.doi.org/10.1088/0954-3899/38/1/015007 -A simple Monte Carlo model for crowd 
dynamics,http://dx.doi.org/10.1103/PhysRevE.82.026111 -Heavy Quark Production from Relativistic Heavy Ion Collisions,http://dx.doi.org/10.1088/0954-3899/37/11/115006 -R CrA SMM1A: Fragmentation in A Prestellar Core,http://dx.doi.org/10.1088/2041-8205/720/2/L169 -On the probability of a random lattice avoiding a large convex set,http://dx.doi.org/10.1112/plms/pdr021 -"J/psi (psi') production at the Tevatron and LHC at O(α_s^4v^4) in - nonrelativistic QCD",http://dx.doi.org/10.1103/PhysRevLett.106.042002 -Polaronic conductivity in the photoinduced phase of 1T-TaS2,http://dx.doi.org/10.1103/PhysRevLett.106.016401 -Magnetic Field and Flavor Effects on the Gamma-Ray Burst Neutrino Flux,http://dx.doi.org/10.1103/PhysRevD.83.067303 -Gamma Ray Bursts: basic facts and ideas,http://dx.doi.org/10.1017/S1743921310016364 -X-ray pulsations from the radio-quiet gamma-ray pulsar in CTA 1,http://dx.doi.org/10.1088/2041-8205/725/1/L6 -Electromagnetic extraction of energy from merging black holes,http://arxiv.org/abs/1010.6254v2 -"Studies of Stellar Collapse and Black Hole Formation with the - Open-Source Code GR1D",http://dx.doi.org/10.1063/1.3485130 -On Selective Unboundedness of VASS,http://dx.doi.org/10.4204/EPTCS.39.1 -"Hadron plus photon production in polarized hadronic collisions at - next-to-leading order accuracy",http://dx.doi.org/10.1103/PhysRevD.83.074022 -Gamma-Z box contributions to parity violating elastic e-p scattering,http://dx.doi.org/10.1103/PhysRevD.83.113007 -"Instruments of RT-2 Experiment onboard CORONAS-PHOTON and their test and - evaluation IV: Background Simulations using GEANT-4 Toolkit",http://dx.doi.org/10.1007/s10686-010-9208-z -"Sub-Photospheric Emission from Relativistic Radiation Mediated Shocks in - GRBs",http://dx.doi.org/10.1088/0004-637X/733/2/85 -Integration through Generalized Sequences,http://arxiv.org/abs/1101.4695v1 -Spectral evolution and the onset of the X-ray GRB afterglow,http://dx.doi.org/10.1063/1.3621751 -"Gravitational energy as dark energy: Cosmic structure and apparent - acceleration",http://arxiv.org/abs/1102.2045v1 -"Spectral components in the bright, long GRB 061007: properties of the - photosphere and the nature of the outflow",http://dx.doi.org/10.1111/j.1365-2966.2011.18582.x -The thermal roots of correlation-based complexity,http://arxiv.org/abs/1103.2481v1 -Complex network analysis of water distribution systems,http://dx.doi.org/10.1063/1.3540339 -Gamma-ray bursts from magnetized collisionally-heated jets,http://dx.doi.org/10.1088/0004-637X/738/1/77 -Fortnightly Fluctuations in the O-C Diagram of CS 1246,http://dx.doi.org/10.1111/j.1365-2966.2011.18645.x -Rare B decays and Tevatron top-pair asymmetry,http://dx.doi.org/10.1088/0954-3899/38/11/115008 -"Solitons and Physics of the Lysogenic to Lytic Transition in - Enterobacteria Lambda Phage",http://arxiv.org/abs/1104.2252v1 -Final excitation energy of fission fragments,http://dx.doi.org/10.1103/PhysRevC.83.061601 -Discovery of a gamma-ray burst with an associated supernova,http://arxiv.org/abs/1104.3087v1 -"Empirical Encounters with Computational Irreducibility and - Unpredictability",http://arxiv.org/abs/1104.3421v3 -Light Stop NLSPs at the Tevatron and LHC,http://dx.doi.org/10.1007/JHEP08(2011)049 -"Direct Photon Production in Proton-Nucleus and Nucleus-Nucleus - Collisions",http://arxiv.org/abs/1106.0146v1 -The Fermi view of gamma-ray bursts,http://dx.doi.org/10.1016/j.crhy.2011.02.006 -"High Lundquist Number Resistive MHD Simulations of Magnetic - Reconnection: Searching for Secondary Island 
Formation",http://arxiv.org/abs/1106.0521v1 -A Monopole Instanton-Like Effect in the ABJM Model,http://dx.doi.org/10.1142/S0217751X11053833 -Optimal Bounds in Parametric LTL Games,http://dx.doi.org/10.4204/EPTCS.54.11 -A Random Walk with Drift: Interview with Peter J. Bickel,http://dx.doi.org/10.1214/09-STS300 -Universality and properties of neutron star type I critical collapses,http://dx.doi.org/10.1088/0264-9381/28/15/155002 -Correlations of Heavy Quarks Produced at Large Hadron Collider,http://dx.doi.org/10.1088/0954-3899/39/2/025001 -"Bias-induced destruction of ferromagnetism and disorder effects in - GaMnAs heterostructures",http://arxiv.org/abs/1108.2108v1 -"Topologically Protected Extended States in Disordered Quantum Spin-Hall - Systems without Time-Reversal Symmetry",http://dx.doi.org/10.1103/PhysRevB.85.075115 -"Summed Parallel Infinite Impulse Response (SPIIR) Filters For - Low-Latency Gravitational Wave Detection",http://dx.doi.org/10.1103/PhysRevD.86.024012 -Iterated splitting and the classification of knot tunnels,http://arxiv.org/abs/1108.3671v2 -"Probing nuclear parton densities and parton energy loss processes - through photon + heavy-quark jet production in p-A and A-A collisions",http://dx.doi.org/10.1088/0954-3899/38/12/124187 -"Extension of an Exponential Light Curve GRB Pulse Model Across Energy - Bands",http://dx.doi.org/10.1111/j.1365-2966.2011.19838.x -Low Energy INTEGRAL Positrons from eXciting Dark Matter,http://arxiv.org/abs/1109.3747v1 -"A Simple Energy-Dependent Model for GRB Pulses with Interesting Physical - Implications",http://dx.doi.org/10.1063/1.3621743 -"Scintillation and charge extraction from the tracks of energetic - electrons in superfluid helium-4",http://dx.doi.org/10.1088/1748-0221/7/01/P01002 -Around Gaia Alerts in 20 questions,http://dx.doi.org/10.1017/S1743921312001305 -Remnants of Binary White Dwarf Mergers,http://dx.doi.org/10.1088/0004-637X/746/1/62 -ANDaNA: Anonymous Named Data Networking Application,http://arxiv.org/abs/1112.2205v2 -Clifford Algebras in Symplectic Geometry and Quantum Mechanics,http://dx.doi.org/10.1007/s10701-012-9634-z -On Bayes' theorem for improper mixtures,http://dx.doi.org/10.1214/11-AOS892 -Simulation of high energy emission from gamma-ray bursts,http://arxiv.org/abs/1112.3933v1 -"Spectral and temporal analysis of the joint Swift/BAT-Fermi/GBM GRB - sample",http://dx.doi.org/10.1111/j.1365-2966.2012.21411.x -Measuring market liquidity: An introductory survey,http://arxiv.org/abs/1112.6169v1 -"Local conductivity and the role of vacancies around twin walls of - (001)-BiFeO3 thin films",http://dx.doi.org/10.1063/1.4746073 -Research and Development for a Gadolinium Doped Water Cherenkov Detector,http://arxiv.org/abs/1201.1017v1 -Nuclear reactions in hot astrophysical plasmas with $T>10^{10}$ K,http://dx.doi.org/10.1142/S0218271812500095 -"SSC Emission as the Origin of the Gamma Ray Afterglow Observed in GRB - 980923",http://dx.doi.org/10.1088/0004-637X/751/1/33 -"QCD critical point in the strong coupling lattice QCD and during black - hole formation",http://arxiv.org/abs/1201.6206v1 -"On the polarimetric signature of emerging magnetic loops in the - quiet-Sun",http://dx.doi.org/10.1088/2041-8205/747/2/L36 -"A comment on chiral restoration at finite baryon density in - hyperspherical unit cells",http://arxiv.org/abs/1204.2800v1 -"First Evidence of Globular Cluster Formation from the Ejecta of Prompt - Type Ia Supernovae",http://dx.doi.org/10.1088/2041-8205/751/2/L35 -Hybrid Monte Carlo simulation on the graphene 
hexagonal lattice,http://arxiv.org/abs/1204.5424v1 -"Comment on Phys. Rev. Lett. 108, 191802 (2012): ""Observation of Reactor - Electron Antineutrino Disappearance in the RENO Experiment""",http://arxiv.org/abs/1205.5626v1 -One-Way Speed of Light Measurements Without Clock Synchronisation,http://arxiv.org/abs/1206.0790v2 -Multiparty Cloud Computation,http://arxiv.org/abs/1206.3717v1 -Fermi Large Area Telescope observations of GRB 110625A,http://dx.doi.org/10.1088/0004-637X/754/2/117 -"Measurement of the inclusive ttgamma cross section at sqrt(s) = 7 TeV - with the ATLAS detector",http://arxiv.org/abs/1206.5696v1 -"Turbulence induced additional deceleration in relativistic shock wave - propagation: implications for gamma-ray burst",http://dx.doi.org/10.1007/s10509-012-1160-0 -"High-Precision Measurement of the 19Ne Half-Life and Implications for - Right-Handed Weak Currents",http://dx.doi.org/10.1103/PhysRevLett.109.042301 -"Possible observation of parametrically amplified coherent phasons in - K0.3MoO3 using time-resolved extreme-ultraviolet ARPES",http://dx.doi.org/10.1103/PhysRevB.88.045104 -"Linear subspaces, symbolic powers and Nagata type conjectures",http://dx.doi.org/10.1016/j.aim.2013.10.029 -"Binary Black-Hole Mergers in Magnetized Disks: Simulations in Full - General Relativity",http://dx.doi.org/10.1103/PhysRevLett.109.221102 -"Heterogeneous length of stay of hosts' movements and spatial epidemic - spread",http://arxiv.org/abs/1207.4746v1 -The chemistry of extragalactic carbon stars,http://dx.doi.org/10.1111/j.1365-2966.2012.21771.x -Particle Production in pA Collisions and QCD Saturation,http://dx.doi.org/10.1016/j.nuclphysa.2012.12.082 -New Sensitivity to Solar WIMP Annihilation using Low-Energy Neutrinos,http://dx.doi.org/10.1103/PhysRevD.88.055005 -"The extremely high peak energy of GRB 110721A in the context of a - dissipative photosphere synchrotron emission model",http://dx.doi.org/10.1088/2041-8205/761/2/L18 -"Density-functional calculations of the electronic structure and lattice - dynamics of superconducting LaO$_{0.5}$F$_{0.5}$BiS$_{2}$: Evidence for an - electron-phonon interaction near the charge-density-wave instability",http://dx.doi.org/10.1103/PhysRevB.87.115124 -Open heavy flavor and J/psi at RHIC and LHC,http://dx.doi.org/10.1016/j.nuclphysa.2012.12.102 -"A theory of photospheric emission from relativistic, collimated outflows",http://dx.doi.org/10.1093/mnras/sts219 -Nanodrop impact on solid surfaces,http://dx.doi.org/10.1063/1.4790807 -Dynamical Capture Binary Neutron Star Mergers,http://dx.doi.org/10.1088/2041-8205/760/1/L4 -"INTEGRAL and Swift observations of the Be X-ray binary 4U 1036-56 (RX - J1037.5-5647) and its possible relation with gamma-ray transients",http://dx.doi.org/10.1088/0004-637X/761/1/49 -"CGC predictions for p+A collisions at the LHC and signature of QCD - saturation",http://dx.doi.org/10.1016/j.physletb.2012.11.066 -Whole Genome Sequencing: Innovation Dream or Privacy Nightmare?,http://arxiv.org/abs/1210.4820v6 -"Search for doubly-charged Higgs bosons in like-sign dilepton final - states at sqrt(s) = 7 TeV with the ATLAS detector",http://dx.doi.org/10.1140/epjc/s10052-012-2244-2 -"Klein-Gordon equations for energy-momentum of relativistic particle in - rapidity space",http://dx.doi.org/10.1134/S1063778813090214 -Long-lived heavy quarks : a review,http://arxiv.org/abs/1210.6369v2 -"D meson nuclear modification factors in Pb-Pb collisions at sqrt(s_NN) = - 2.76 TeV with the ALICE detector",http://dx.doi.org/10.1016/j.nuclphysa.2013.02.096 
-Gamma Ray Array Detector Trigger Sub-System,http://arxiv.org/abs/1211.1087v2 -EPS09s and EKS98s: Impact parameter dependent nPDF sets,http://dx.doi.org/10.1016/j.nuclphysa.2013.02.183 -"Probing the ""$μ$ from $ν$"" supersymmetric standard model with - displaced multileptons from the decay of a Higgs boson at the LHC",http://dx.doi.org/10.1103/PhysRevD.88.015009 -Nucleosynthesis in Type I X-ray Bursts,http://dx.doi.org/10.1016/j.ppnp.2012.11.002 -"Testing Lorentz invariance with neutrino bursts from supernova - neutronization",http://dx.doi.org/10.1103/PhysRevD.87.017302 -"Role of the cluster structure of $^7$Li in the dynamics of fragment - capture",http://dx.doi.org/10.1016/j.physletb.2012.11.064 -"On external shock model to explain the high-energy emission: GRB 940217, - GRB 941017 and GRB 970217A",http://dx.doi.org/10.1063/1.4772351 -"A search for prompt lepton-jets in pp collisions at sqrt(s) = 7 TeV with - the ATLAS detector",http://dx.doi.org/10.1016/j.physletb.2013.01.034 -Inclusive hadron and photon production at LHC in dipole momentum space,http://dx.doi.org/10.1103/PhysRevD.87.074023 -Muon Physics in ALICE: The MFT Upgrade Project,http://dx.doi.org/10.1088/1742-6596/446/1/012054 -BRST-invariant boundary conditions and strong ellipticity,http://dx.doi.org/10.1103/PhysRevD.88.104039 -"Photospheric emission as the dominant radiation mechanism in - long-duration gamma-ray bursts",http://dx.doi.org/10.1088/0004-637X/765/2/103 -Extremely long hard bursts observed by Konus-Wind,http://dx.doi.org/10.1063/1.2943422 -Gamma-ray polarization induced by cold electrons via Compton processes,http://dx.doi.org/10.1088/0004-637X/769/1/70 -"Non-classicality of the molecular vibrations assisting exciton energy - transfer at room temperature",http://dx.doi.org/10.1038/ncomms4012 -A Complete Model for R-parity Violation,http://dx.doi.org/10.1103/PhysRevD.88.055023 -Spontaneous synchrony in power-grid networks,http://dx.doi.org/10.1038/NPHYS2535 -"Adiabatic measurements of magneto-caloric effects in pulsed high - magnetic fields up to 55 T",http://dx.doi.org/10.1063/1.4811798 -"Measurement of the X(3872) production cross section via decays to J/psi - pi pi in pp collisions at sqrt(s) = 7 TeV",http://dx.doi.org/10.1007/JHEP04(2013)154 -INTEGRAL Results on Gamma-Ray Bursts,http://arxiv.org/abs/1302.4847v1 -"Towards detailed tomography of high energy heavy-ion collisions by - $γ$-jet",http://dx.doi.org/10.1016/j.physletb.2013.06.029 -Prompt VERITAS Observations Triggered by High Energy Fermi-LAT Photons,http://arxiv.org/abs/1303.2155v1 -Techniques for targeted Fermi-GBM follow-up of gravitational-wave events,http://arxiv.org/abs/1303.2174v2 -Short GRBs and dark matter seeding in neutron stars,http://dx.doi.org/10.1088/0004-637X/768/2/145 -Scintillation of Liquid Helium for Low-Energy Nuclear Recoils,http://dx.doi.org/10.1103/PhysRevC.88.025805 -The polarized Gamma-Ray Burst GRB 061122,http://dx.doi.org/10.1093/mnras/stt439 -"Ultra-relativistic, neutrino driven flows in GRBs: A double transonic - flow solution in Schwarzschild spacetime",http://dx.doi.org/10.1088/0004-637X/770/2/159 -Very Low Energy Supernovae from Neutrino Mass Loss,http://dx.doi.org/10.1088/0004-637X/769/2/109 -On the Scission Point Configuration of Fissioning Nuclei,http://arxiv.org/abs/1303.7473v1 -Sound and light from fractures in scintillators,http://dx.doi.org/10.1103/PhysRevLett.111.154301 -The AGILE Science Alert System,http://arxiv.org/abs/1305.5389v2 -On the neutrino non-detection of GRB 
130427A,http://dx.doi.org/10.1088/2041-8205/772/1/L4 -"The Story of Telebrain: A multi-performer telematic platform for - performatization",http://arxiv.org/abs/1305.6332v1 -Prospects for the detection of GRBs with HAWC,http://dx.doi.org/10.1016/j.nima.2013.09.013 -The Chills and Thrills of Whole Genome Sequencing,http://arxiv.org/abs/1306.1264v5 -A parallel Heap-Cell Method for Eikonal equations,http://arxiv.org/abs/1306.4743v2 -"Production of charged heavy quarkonium-like states at the LHC and the - Tevatron",http://dx.doi.org/10.1088/0253-6102/61/3/14 -Two populations of gamma-ray burst radio afterglows,http://dx.doi.org/10.1088/0004-637X/776/2/106 -Why is the superconducting Tc so high in rare-earth-doped CaFe2As2?,http://dx.doi.org/10.1080/14786435.2014.913116 -"Long GRBs and massive stellar explosions from frame dragging around - rotating black holes",http://arxiv.org/abs/1309.0101v1 -Advances in the Logical Representation of Lexical Semantics,http://arxiv.org/abs/1309.1014v1 -"Hot medium effects on J/psi production in p+Pb collisions at - sqrt{s_{NN}}=5.02 TeV",http://dx.doi.org/10.1016/j.physletb.2013.12.016 -"Massively Parallel Computing and the Search for Jets and Black Holes at - the LHC",http://dx.doi.org/10.1016/j.nima.2014.01.038 -Factorization Properties of Leamer Monoids,http://dx.doi.org/10.1007/s00233-014-9578-z -"Polarization properties of photospheric emission from relativistic, - collimated outflows",http://dx.doi.org/10.1093/mnras/stu457 -"Angular momentum evolution of young low-mass stars and brown dwarfs: - observations and theory",http://dx.doi.org/10.2458/azu_uapress_9780816531240-ch019 -Long Period Variable Stars in the Globular Cluster M5,http://arxiv.org/abs/1310.0594v1 -"Investigation of collective radial expansion and stopping in heavy ion - collisions at Fermi energies",http://dx.doi.org/10.1103/PhysRevC.89.034608 -"Weaving, Bending, Patching, Mending the Fabric of Reality: A Cognitive - Science Perspective on Worldview Inconsistency",http://arxiv.org/abs/1310.3765v1 -Supernova Early Warning in Daya Bay Reactor Neutrino Experiment,http://arxiv.org/abs/1310.5783v2 -The High Altitude Water Cherenkov Observatory,http://dx.doi.org/10.1007/s13538-014-0225-7 -Single-Layer MoS2 Phototransistors,http://dx.doi.org/10.1021/nn2024557 -Covert Ephemeral Communication in Named Data Networking,http://arxiv.org/abs/1311.2517v1 -"An Excursion-Theoretic Approach to Regulator's Bank Reorganization - Problem",http://arxiv.org/abs/1311.3019v1 -"$^3$He impurities and mass transport through solid $^4$He: a universal - temperature dependence and flux extinction",http://arxiv.org/abs/1311.4913v2 -An external-shock model for GRB afterglow 130427A,http://dx.doi.org/10.1093/mnras/stt1792 -Measurements of Quarkonium Production and Polarization at CMS,http://arxiv.org/abs/1311.7490v1 -Next-to-leading order diphoton+2-jet production at the LHC,http://arxiv.org/abs/1312.0592v1 -"Mesons, PANDA and the scalar glueball",http://dx.doi.org/10.1088/1742-6596/503/1/012010 -The hard X-ray shortages prompted by the clock bursts in GS 1826--238,http://dx.doi.org/10.1088/0004-637X/782/1/40 -The theory of pulsar wind nebulae,http://dx.doi.org/10.1142/S2010194514601604 -Quantum curves,http://dx.doi.org/10.1007/s00220-015-2287-y -"D-meson nuclear modification factor and v$_2$ in Pb-Pb collisions at the - LHC",http://dx.doi.org/10.1088/1742-6596/509/1/012080 -"The Very Bright and Nearby GRB 130427A: The Extra Hard Spectral - Component and Implications for Very High-energy Gamma-ray Observations of - 
Gamma-ray Bursts",http://dx.doi.org/10.1142/S2010194514601744 -"i-TED: A novel concept for high-sensitivity (n,$γ$) cross section - measurements",http://arxiv.org/abs/1401.2083v4 -Bulk scalar field in warped extra dimensional models,http://dx.doi.org/10.1103/PhysRevD.89.126001 -Rational homotopy theory of automorphisms of manifolds,http://arxiv.org/abs/1401.4096v3 -"Characterization of the low temperature properties of a simplified - protein model",http://dx.doi.org/10.1103/PhysRevE.89.012705 -"Nuclear parton density modifications from low-mass lepton pair - production at the LHC",http://dx.doi.org/10.1016/j.nuclphysa.2014.03.024 -"Next-to-Leading-Order study on the associate production of - $J/ψ+γ$ at the LHC",http://dx.doi.org/10.1103/PhysRevD.89.114018 -Cloaked Gamma Ray Bursts,http://dx.doi.org/10.1088/2041-8205/787/2/L32 -"Levels of Abstraction and the Apparent Contradictory Philosophical - Legacy of Turing and Shannon",http://arxiv.org/abs/1402.1099v1 -"Initial state radiation effects in inclusive $J/ψ$ production at B - factories",http://dx.doi.org/10.1007/JHEP04(2014)182 -New information on photon fragmentation functions,http://dx.doi.org/10.1140/epjc/s10052-014-3009-x -Rectification of electronic heat current by a hybrid thermal diode,http://dx.doi.org/10.1038/nnano.2015.11 -An efficiency dependency parser using hybrid approach for tamil language,http://arxiv.org/abs/1403.6381v1 -"Similar radiation mechanism in gamma-ray bursts and blazars: evidence - from two luminosity correlations",http://dx.doi.org/10.1088/2041-8205/786/1/L8 -"Gamma-ray burst afterglow plateau break time - luminosity correlations - favour thick shell models over thin shell models",http://dx.doi.org/10.1093/mnras/stu1921 -Establishing Global Policies over Decentralized Online Social Networks,http://arxiv.org/abs/1404.1848v1 -"A quantum mechanical approach to establishing the magnetic field - orientation from a maser Zeeman profile",http://dx.doi.org/10.1093/mnras/stu429 -A prompt extra component in the high energy spectrum of GRB 131108A,http://arxiv.org/abs/1407.0238v1 -Measuring Team Creativity Through Longitudinal Social Signals,http://arxiv.org/abs/1407.0440v1 -Implications of an axino LSP for naturalness,http://dx.doi.org/10.1103/PhysRevD.90.035020 -"The Effect of Doppler Broadening on the $6.3 \ PeV$ $W^-$ Resonance in - $\barν_e e^-$ Collisions",http://arxiv.org/abs/1407.4415v1 -"Inverse cascade of non-helical magnetic turbulence in a relativistic - fluid",http://dx.doi.org/10.1088/2041-8205/794/2/L26 -"The magnetization degree of the outflow powering the highly-polarized - reverse shock emission of GRB 120308A",http://dx.doi.org/10.1088/0004-637X/798/1/3 -Black hole spin down in GRB observations and Cosmology,http://arxiv.org/abs/1407.6550v3 -"A FLUKA Study of $β$-delayed Neutron Emission for the Ton-size - DarkSide Dark Matter Detector",http://arxiv.org/abs/1407.6628v2 -Associated-quarkonium production,http://arxiv.org/abs/1407.7372v1 -Polaritonic Feshbach Resonance,http://dx.doi.org/10.1038/nphys2999 -The Value of Using Big Data Technologies in Computational Social Science,http://arxiv.org/abs/1408.3170v1 -On Spectral Evolution and Temporal Binning in Gamma-Ray Bursts,http://dx.doi.org/10.1093/mnras/stu1925 -Tools and Techniques for Efficient High-Level System Design on FPGAs,http://arxiv.org/abs/1408.4797v1 -Parametric Linear Dynamic Logic,http://dx.doi.org/10.4204/EPTCS.161.8 -"Tunable Band gap of Iron-Doped Lanthanum-Modified Bismuth Titanate - Synthesized by the Thermal Decomposition of a 
Secondary Phase",http://dx.doi.org/10.3938/jkps.66.1371 -"Depth image hand tracking from an overhead perspective using partially - labeled, unbalanced data: Development and real-world testing",http://dx.doi.org/10.3109/17483107.2015.1027304 -A Uniform History for Galaxy Evolution,http://dx.doi.org/10.1088/0004-637X/796/1/25 -K-Knuth Equivalence for Increasing Tableaux,http://arxiv.org/abs/1409.6659v2 -"On the Dynamics of Ultra Compact X-ray Binaries: 4U 1850-087, 4U 0513-40 - and M15 X-2",http://dx.doi.org/10.1088/0004-637X/798/2/117 -Photon-Jet cross sections in Deep-Inelastic Scattering,http://dx.doi.org/10.1140/epjc/s10052-015-3296-x -Astrophysical implications of the proton-proton cross section updates,http://dx.doi.org/10.1016/j.physletb.2015.01.033 -"$η_c$ production at LHC and indications on the understanding of - $J/ψ$ production",http://dx.doi.org/10.1103/PhysRevLett.114.092005 -"Vortex Nucleation in a Dissipative Variant of the Nonlinear - Schrödinger Equation under Rotation",http://arxiv.org/abs/1412.0615v1 -Universal power law governing pedestrian interactions,http://dx.doi.org/10.1103/PhysRevLett.113.238701 -New results from RENO and prospects with RENO-50,http://arxiv.org/abs/1412.2199v2 -Quantum cascade laser frequency stabilisation at the sub-Hz level,http://dx.doi.org/10.1038/nphoton.2015.93 -"On a random search tree: asymptotic enumeration of vertices by distance - from leaves",http://arxiv.org/abs/1412.2796v3 -"Laser Acceleration and Deflection of 96.3 keV Electrons with a Silicon - Dielectric Structure",http://arxiv.org/abs/1412.5730v1 -Proton structure from hard p-p processes at high energies,http://arxiv.org/abs/1412.8030v1 -Quenched dynamics of the momentum distribution of the unitary Bose gas,http://dx.doi.org/10.1007/s00601-015-0971-2 -"Fitting the Fermi-LAT GeV excess: on the importance of the propagation - of electrons from dark matter",http://arxiv.org/abs/1501.07485v1 -Probing Massive Stars Around Gamma-Ray Burst Progenitors,http://dx.doi.org/10.1093/mnras/stv1677 -"Quark and lepton mixing matrices: manifestations of a violated mirror - symmetry",http://arxiv.org/abs/1502.01501v2 -"Semantics-based services for a low carbon society: An application on - emissions trading system data and scenarios management",http://dx.doi.org/10.1016/j.envsoft.2014.11.007 -Albatross: a Privacy-Preserving Location Sharing System,http://arxiv.org/abs/1502.03407v1 -"Measurement of J/psi and psi(2S) prompt double-differential cross - sections in pp collisions at sqrt(s) = 7 TeV",http://dx.doi.org/10.1103/PhysRevLett.114.191802 -Temporal properties of bright BGO GRBs detected by Fermi,http://arxiv.org/abs/1502.04714v1 -"Ultra-Close Encounters of Stars With Massive Black Holes: Tidal - Disruption Events With Prompt Hyperaccretion",http://dx.doi.org/10.1088/2041-8205/805/2/L19 -"Mid-Infrared Plasmonic Platform based on Heavily Doped Epitaxial - Ge-on-Si: Retrieving the Optical Constants of Thin Ge Epilayers",http://dx.doi.org/10.1109/IRMMW-THz.2014.6956438 -"Tunable spin-orbit coupling synthesized with a modulating gradient - magnetic field",http://dx.doi.org/10.1038/srep18983 -Numerical Models of Blackbody-Dominated GRBs,http://arxiv.org/abs/1502.07134v1 -"Vibronic resonances facilitate excited state coherence in light - harvesting proteins at room temperature",http://dx.doi.org/10.1021/acs.jpclett.5b02058 -Gamma-ray burst jets: uniform or structured?,http://arxiv.org/abs/1503.01131v1 -Observations of Gamma-ray Bursts with ASTRO-H and Fermi,http://arxiv.org/abs/1503.01182v1 -Modelling 
the Structure and Dynamics of Science Using Books,http://arxiv.org/abs/1503.03287v1 -A note on probability and Hilbert's VI problem,http://arxiv.org/abs/1503.05602v2 -Phenomenology of a Long-Lived LSP with R-Parity Violation,http://dx.doi.org/10.1007/JHEP08(2015)016 -"Classification methods for noise transients in advanced - gravitational-wave detectors",http://dx.doi.org/10.1088/0264-9381/32/21/215012 -"Diphoton production at Tevatron and the LHC in the NLO* approximation of - the Parton Reggeization Approach",http://dx.doi.org/10.1103/PhysRevD.92.094033 -"Design, characterization, and sensitivity of the supernova trigger - system at Daya Bay",http://dx.doi.org/10.1016/j.astropartphys.2015.10.011 -Photon-tagged and B-meson-tagged b-jet production at the LHC,http://dx.doi.org/10.1016/j.physletb.2015.09.029 -Emergence of bimodality in controlling complex networks,http://dx.doi.org/10.1038/ncomms3002 -"Parameterized Linear Temporal Logics Meet Costs: Still not Costlier than - LTL (full version)",http://arxiv.org/abs/1505.06953v5 -"Measurement of differential $J/ψ$ production cross-sections and - forward-backward ratio in p+Pb collisions with the ATLAS detector",http://dx.doi.org/10.1103/PhysRevC.92.034904 -"Shifting the Quantum-Classical Boundary: Theory and Experiment for - Statistically Classical Optical Fields",http://dx.doi.org/10.1364/OPTICA.2.000611 -"Neutrino oscillation from the beam with Gaussian-like energy - distribution",http://arxiv.org/abs/1506.06836v1 -Effectively Stable Dark Matter,http://arxiv.org/abs/1507.00828v1 -RIMES: Embedding Interactive Multimedia Exercises in Lecture Videos,http://dx.doi.org/10.1145/2702123.2702186 -Gamma-Ray Burst observations with Fermi,http://arxiv.org/abs/1507.03478v1 -"Frustrated fragmentation and re-aggregation in nuclei: a non-equilibrium - description in spallation",http://dx.doi.org/10.1103/PhysRevC.92.034607 -The observation of light nuclei at ALICE and the X(3872) conundrum,http://dx.doi.org/10.1103/PhysRevD.92.034028 -"A Hybrid High-Order method for Leray-Lions elliptic equations on general - meshes",http://arxiv.org/abs/1508.01918v3 -Scattering matrix invariants of Floquet topological insulators,http://dx.doi.org/10.1103/PhysRevB.93.075405 -Rapid Bayesian position reconstruction for gravitational-wave transients,http://dx.doi.org/10.1103/PhysRevD.93.024013 -All-sky sensitivity of HAWC to Gamma-Ray Bursts,http://arxiv.org/abs/1508.04120v2 -"Using User Generated Online Photos to Estimate and Monitor Air Pollution - in Major Cities",http://dx.doi.org/10.1145/2808492.2808564 -"Theory of fission detector signals in reactor measurements - detailed - calculations",http://arxiv.org/abs/1508.05032v2 -A new version of the event generator Sibyll,http://arxiv.org/abs/1510.00568v1 -Optimality of Rate Balancing in Wireless Sensor Networks,http://dx.doi.org/10.1109/TSP.2016.2551691 -Search for the dark photon in $π^0$ decays,http://arxiv.org/abs/1510.02632v1 -Light-matter micro-macro entanglement,http://dx.doi.org/10.1103/PhysRevLett.116.190502 -Recent QCD results from the Tevatron,http://dx.doi.org/10.1051/epjconf/20159503036 -"Theoretical corrections and world data for the superallowed ft values in - the beta decays of 42Ti, 46Cr, 50Fe and 54Ni",http://dx.doi.org/10.1103/PhysRevC.92.055505 -"Ground state tuning of the metal-insulator transition by compositional - variations in BaIr1-xRuxO3(0...",... -"...D^0 pi - Sigma decay",http://dx.doi.org/10.1103/PhysRevC.98.025202 -"Information Security in Health Care Centre Using Cryptography and 
Steganography",http://arxiv.org/abs/1803.05593v1 -Gravitational waves from asymmetric oscillon dynamics?,http://dx.doi.org/10.1103/PhysRevD.98.024040 -Mobile Device Type Substitution,http://dx.doi.org/10.1145/3191740 -"Search for Exotic Gluonic States in the Nucleus, A Letter of Intent to - Jefferson Lab PAC 44",http://arxiv.org/abs/1803.11206v1 -The hadronic interaction model Sibyll-2.3c and inclusive lepton fluxes,http://dx.doi.org/10.1103/PhysRevD.100.103018 -Displaced vertices as probes of sterile neutrino mixing at the LHC,http://dx.doi.org/10.1103/PhysRevD.98.035012 -"National debts and government deficits within European Monetary Union: - Statistical evidence of economic issues",http://arxiv.org/abs/1806.07830v1 -"Breaking the Limits in Urban Video Monitoring: Massive Crowd Sourced - Surveillance over Vehicles",http://arxiv.org/abs/1806.09171v1 -The price of debiasing automatic metrics in natural language evaluation,http://arxiv.org/abs/1807.02202v1 -Prompt neutrinos and intrinsic charm at SHiP,http://dx.doi.org/10.1007/JHEP02(2019)077 -"Microfluidic study of effects of flow velocity and nutrient - concentration on biofilm accumulation and adhesive strength in a microchannel",http://arxiv.org/abs/1807.03241v1 -"BlackCAT CubeSat: A Soft X-ray Sky Monitor, Transient Finder, and Burst - Detector for High-energy and Multimessenger Astrophysics",http://arxiv.org/abs/1807.03333v1 -Near-barrier Photofission in $^{232}$Th and $^{238}$U,http://dx.doi.org/10.1103/PhysRevC.98.054609 -Gamma-ray emission of hot astrophysical plasmas,http://dx.doi.org/10.1103/PhysRevD.99.063007 -ScoutBot: A Dialogue System for Collaborative Navigation,http://arxiv.org/abs/1807.08074v1 -"Neutrino emission from the direction of the blazar TXS 0506+056 prior to - the IceCube-170922A alert",http://dx.doi.org/10.1126/science.aat2890 -Digital Blues: An Investigation into the Use of Bluetooth Protocols,http://arxiv.org/abs/1808.02153v1 -"Distinguishing the nature of comparable-mass neutron star binary systems - with multimessenger observations: GW170817 case study",http://dx.doi.org/10.1103/PhysRevD.100.063021 -On-the-Fly Mapping of New Pulsars,http://dx.doi.org/10.3847/1538-3881/aadd02 -"A review on substellar objects beyond the deuterium burning mass limit: - planets, brown dwarfs or what?",http://arxiv.org/abs/1808.07798v1 -"Theory-Driven Automated Content Analysis of Suicidal Tweets : Using - Typicality-Based Classification for LDA Dataset",http://arxiv.org/abs/1808.08331v1 -Rule-based OWL Modeling with ROWLTab Protege Plugin,http://arxiv.org/abs/1808.10108v1 -"Linking gravitational waves and X-ray phenomena with joint LISA and - Athena observations",http://arxiv.org/abs/1811.00050v2 -The Mu-MASS (MuoniuM lAser SpectroScopy) experiment,http://dx.doi.org/10.1007/s10751-018-1525-z -"Holographic integration of $T \bar{T}$ and $J \bar{T}$ via $O(d,d)$",http://dx.doi.org/10.1007/JHEP03(2019)168 -Atmospheric Muons Measured with IceCube,http://dx.doi.org/10.1051/epjconf/201920808007 -Development of the detectors for the DeeMe experiment,http://arxiv.org/abs/1811.04235v5 -"$R$-parity Violating Decays of Wino Chargino and Wino Neutralino LSPs - and NLSPs at the LHC",http://arxiv.org/abs/1811.05581v2 -"Angular decorrelations in $γ+ 2 jet$ events at high energies in - the parton Reggeization approach",http://dx.doi.org/10.1142/S0217732319502663 -Flexoelectret: An Electret with Tunable Flexoelectric-like Response,http://dx.doi.org/10.1103/PhysRevLett.122.148001 -"SModelS v1.2: long-lived particles, combination of signal 
regions, and - other novelties",http://dx.doi.org/10.1016/j.cpc.2019.07.013 -Algae Detection Using Computer Vision and Deep Learning,http://arxiv.org/abs/1811.10847v1 -Radio Forensics Could Unmask Nearby Off-axis Gamma-ray Bursts,http://dx.doi.org/10.1093/mnras/stz719 -"Constructing Trivariate B-splines with Positive Jacobian by Pillow - Operation and Geometric Iterative Fitting",http://arxiv.org/abs/1811.12597v1 -"FlowSAN: Privacy-enhancing Semi-Adversarial Networks to Confound - Arbitrary Face-based Gender Classifiers",http://arxiv.org/abs/1905.01388v1 -"PBH remnants as dark matter produced in thermal, matter and - runaway-quintessence post-inflationary scenarios",http://dx.doi.org/10.1103/PhysRevD.100.083512 -Random Self-Similar Trees: A mathematical theory of Horton laws,http://arxiv.org/abs/1905.02629v2 -"Flux measurement of fast neutrons in the Gran Sasso underground - laboratory",http://dx.doi.org/10.1140/epjc/s10052-019-7247-9 -Solving the gamma-ray radiative transfer equation for supernovae,http://dx.doi.org/10.1093/mnras/stz1367 -First Results of Arar-Magnetometer Station in Saudi Arabia,http://dx.doi.org/10.1016/j.asr.2019.12.009 -"Next-to-leading power threshold effects for inclusive and exclusive - processes with final state jets",http://dx.doi.org/10.1007/JHEP03(2020)106 -"When to reply? Context Sensitive Models to Predict Instructor - Interventions in MOOC Forums",http://arxiv.org/abs/1905.10851v1 -SSHFD: Single Shot Human Fall Detection with Occluded Joints Resilience,http://arxiv.org/abs/2004.00797v2 -"A Pilot Study of Catching High-$z$ GRBs and Exploring Circumburst - Environment in the Forthcoming SVOM Era",http://dx.doi.org/10.1088/1674-4527/20/8/124 -CLICTD: A monolithic HR-CMOS sensor chip for the CLIC silicon tracker,http://arxiv.org/abs/2004.02537v1 -"Adversarial Genetic Programming for Cyber Security: A Rising Application - Domain Where GP Matters",http://dx.doi.org/10.1007/s10710-020-09389-y -Online Information Search During COVID-19,http://arxiv.org/abs/2004.07183v2 -How long does a lockdown need to be?,http://arxiv.org/abs/2004.11633v1 -"Fuzzy Logic Based Integration of Web Contextual Linguistic Structures - for Enriching Conceptual Visual Representations",http://dx.doi.org/10.1109/TETCI.2018.2849417 -"Size properties of the largest fragments produced in the framework of - the statistical multifragmentation model",http://arxiv.org/abs/2004.12860v1 -Indirect Identification of Psychosocial Risks from Natural Language,http://arxiv.org/abs/2004.14554v1 -"Inelastic dark matter, small scale problems, and the XENON1T excess",http://dx.doi.org/10.1007/JHEP10(2021)135 -"Do You Even Need Attention? 
A Stack of Feed-Forward Layers Does - Surprisingly Well on ImageNet",http://arxiv.org/abs/2105.02723v1 -On the Ethical Limits of Natural Language Processing on Legal Text,http://arxiv.org/abs/2105.02751v3 -Hamiltonian Formulation of Higher Rank Symmetric Gauge Theories,http://dx.doi.org/10.1140/epjc/s10052-021-09964-2 -"Instance-aware Remote Sensing Image Captioning with Cross-hierarchy - Attention",http://arxiv.org/abs/2105.04996v1 -"RetGen: A Joint framework for Retrieval and Grounded Text Generation - Modeling",http://arxiv.org/abs/2105.06597v4 -"Dark matter interacting via a massive spin-2 mediator in warped - extra-dimensions",http://dx.doi.org/10.1007/JHEP11(2021)036 -The Greedy Algorithm is \emph{not} Optimal for On-Line Edge Coloring,http://arxiv.org/abs/2105.06944v1 -"Impacts of Time-of-Use Rate Changes on the Electricity Bills of - Commercial Consumers",http://dx.doi.org/10.1109/PESGM46819.2021.9638125 -Collider Prospects for Muon $g-2$ in General Two Higgs Doublet Model,http://dx.doi.org/10.1103/PhysRevD.104.075036 -Algebraic properties of face algebras,http://dx.doi.org/10.1142/S0219498823500767 -"Climate Action During COVID-19 Recovery and Beyond: A Twitter Text - Mining Study",http://arxiv.org/abs/2105.12190v1 -Database Workload Characterization with Query Plan Encoders,http://arxiv.org/abs/2105.12287v1 -"Estimating air quality co-benefits of energy transition using machine - learning",http://arxiv.org/abs/2105.14318v1 -Search for the dark photon in $π^0$ decays,http://arxiv.org/abs/1504.00607v2 -Double tidal disruptions in galactic nuclei,http://dx.doi.org/10.1088/2041-8205/805/1/L4 -"Thermal photon radiation in high multiplicity p+Pb collisions at the - Large Hadron Collider",http://dx.doi.org/10.1103/PhysRevLett.116.072301 -Diphotons from Diaxions,http://dx.doi.org/10.1007/JHEP05(2016)077 -"Adaptation Logic for HTTP Dynamic Adaptive Streaming using - Geo-Predictive Crowdsourcing",http://dx.doi.org/10.1007/s00530-016-0525-6 -Two worlds collide: Interacting shells in AdS spacetime and chaos,http://dx.doi.org/10.1103/PhysRevD.94.024003 -The Most Luminous Supernovae,http://dx.doi.org/10.3847/2041-8205/820/2/L38 -The promise of multi-band gravitational wave astronomy,http://dx.doi.org/10.1103/PhysRevLett.116.231102 -Violet emission from bulk Si prompted by surface plasmon polaritons,http://arxiv.org/abs/1608.00793v1 -"Multiview Cauchy Estimator Feature Embedding for Depth and Inertial - Sensor-Based Human Action Recognition",http://arxiv.org/abs/1608.02183v2 -"Stochastic Gravitational-Wave Background due to Primordial Binary Black - Hole Mergers",http://dx.doi.org/10.1103/PhysRevLett.117.201102 -"All-nitride and In-free Al$_x$Ga$_{1-x}$N:Mn/GaN distributed Bragg - reflectors for the near-infrared",http://dx.doi.org/10.1038/srep42697 -"Searching for SUSY and decaying gravitino DM at the LHC and Fermi-LAT - with the $μν$SSM",http://arxiv.org/abs/1608.07912v1 -How bad is selfish routing in practice?,http://arxiv.org/abs/1703.01599v2 -"System-Theoretic Performance Metrics for Low-Inertia Stability of Power - Networks",http://arxiv.org/abs/1703.02646v1 -"Eclipse, transit and occultation geometry of planetary systems at - exo-syzygy",http://dx.doi.org/10.1093/mnras/stx614 -"Automatic Skin Lesion Analysis using Large-scale Dermoscopy Images and - Deep Residual Networks",http://arxiv.org/abs/1703.04197v2 -"Dynamical compensation and structural identifiability: analysis, - implications, and reconciliation",http://dx.doi.org/10.1371/journal.pcbi.1005878 -"D-meson azimuthal anisotropy in 
mid-central Pb-Pb collisions at - $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV",http://dx.doi.org/10.1103/PhysRevLett.120.102301 -"Optimization of the final settings for the Space-borne Hard X-ray - Compton Polarimeter POLAR",http://arxiv.org/abs/1707.02291v1 -Automatic Understanding of Image and Video Advertisements,http://arxiv.org/abs/1707.03067v1 -"First-order spatial coherence measurements in a thermalized - two-dimensional photonic quantum gas",http://dx.doi.org/10.1038/s41467-017-00270-8 -"Conditional Independence, Conditional Mean Independence, and Zero - Conditional Covariance",http://arxiv.org/abs/1707.04802v2 -Gamma-ray bursts and their use as cosmic probes,http://arxiv.org/abs/1707.05214v2 -Application of Superhalogens in the Design of Organic Superconductors,http://arxiv.org/abs/1707.09606v1 -Some extensions of theorems of Knörrer and Herzog-Popescu,http://arxiv.org/abs/1709.01916v2 -"Automated Dyadic Data Recorder (ADDR) Framework and Analysis of Facial - Cues in Deceptive Communication",http://arxiv.org/abs/1709.02414v1 -"Connecting nth order generalised quantum Rabi models: Emergence of - nonlinear spin-boson coupling via spin rotations",http://dx.doi.org/10.1038/s41534-018-0096-9 -Prompt photon production and photon-jet correlations at the LHC,http://dx.doi.org/10.1007/JHEP03(2018)081 -Galactic evolution of Copper in the light of NLTE computations,http://dx.doi.org/10.1093/mnras/stx2526 -Synthesizing SystemC Code from Delay Hybrid CSP,http://arxiv.org/abs/1709.09019v1 -Inclusive production of vector quarkonia at the LHC,http://arxiv.org/abs/1711.00377v1 -Possible origin of shoulder in the reactor antineutrino spectrum,http://arxiv.org/abs/1711.02801v1 -"Interpreting GRB170817A as a giant flare from a jet-less double - neutron-star merger",http://dx.doi.org/10.1051/0004-6361/201732259 -"Charmonia production from $b$-hadron decays at LHC with - $k_T$-factorization: $J/ψ$, $ψ(2S)$ and $J/ψ+ Z$",http://dx.doi.org/10.1140/epjc/s10052-017-5489-y -ADVISE: Symbolism and External Knowledge for Decoding Advertisements,http://arxiv.org/abs/1711.06666v2 -Mechanisms of Coulomb dissociation processes,http://arxiv.org/abs/1711.06741v1 -"Why ""Redefining Statistical Significance"" Will Not Improve - Reproducibility and Could Make the Replication Crisis Worse",http://arxiv.org/abs/1711.07801v1 -"Physics of the saturation of particle acceleration in relativistic - magnetic reconnection",http://dx.doi.org/10.1093/mnras/sty452 -Rating the online review rating system using Yelp,http://arxiv.org/abs/1711.09737v2 -Curriculum Q-Learning for Visual Vocabulary Acquisition,http://arxiv.org/abs/1711.10837v1 -"The PhotoBook Dataset: Building Common Ground through Visually-Grounded - Dialogue",http://arxiv.org/abs/1906.01530v2 -In Situ Cane Toad Recognition,http://dx.doi.org/10.1109/DICTA.2018.8615780 -"Data Conversion in Area-Constrained Applications: the Wireless - Network-on-Chip Case",http://dx.doi.org/10.1109/DCIS.2018.8681465 -"Are there any challenges in the charmonia production and polarization at - the LHC?",http://dx.doi.org/10.1103/PhysRevD.100.114021 -"Addressing behavioral change towards energy efficiency in European - educational buildings",http://dx.doi.org/10.1109/GIOTS.2017.8016258 -"A Hybrid Precipitation Prediction Method based on Multicellular Gene - Expression Programming",http://arxiv.org/abs/1906.08852v1 -"Quantitative Verification of Neural Networks And its Security - Applications",http://arxiv.org/abs/1906.10395v1 -TMD parton shower effects in associated $γ$ + jet production at 
LHC,http://dx.doi.org/10.1103/PhysRevD.100.034028 -"Tactile Hallucinations on Artificial Skin Induced by Homeostasis in a - Deep Boltzmann Machine",http://arxiv.org/abs/1906.10592v2 -Data Consortia,http://arxiv.org/abs/1906.11803v1 -Detecting Fake News with Capsule Neural Networks,http://arxiv.org/abs/2002.01030v1 -Large Numbers in Holography,http://arxiv.org/abs/2002.05354v1 -"The use of Convolutional Neural Networks for signal-background - classification in Particle Physics experiments",http://dx.doi.org/10.1051/epjconf/202024506003 -"Development of an advanced Compton telescope for MeV-range gamma-ray - astronomy",http://arxiv.org/abs/2002.11586v1 -Hematite at its thinnest limit,http://dx.doi.org/10.1088/2053-1583/ab6d79 -"Delayed coincidence in electron-neutrino capture on gallium for neutrino - spectroscopy",http://dx.doi.org/10.1016/j.astropartphys.2020.102519 -Towards Controllable Biases in Language Generation,http://arxiv.org/abs/2005.00268v2 -An Imitation Game for Learning Semantic Parsers from User Interaction,http://arxiv.org/abs/2005.00689v3 -KinGDOM: Knowledge-Guided DOMain adaptation for sentiment analysis,http://arxiv.org/abs/2005.00791v2 -The X(3872) as a mass distribution,http://arxiv.org/abs/2005.01531v1 -"Classification of pediatric pneumonia using chest X-rays by functional - regression",http://arxiv.org/abs/2005.03243v1 -Competitive Algorithms for Minimizing the Maximum Age-of-Information,http://arxiv.org/abs/2005.05873v1 -Rheological basis of skeletal muscle work loops,http://arxiv.org/abs/2005.07238v3 -"A Risk Assessment of a Pretrial Risk Assessment Tool: Tussles, - Mitigation Strategies, and Inherent Limits",http://arxiv.org/abs/2005.07299v1 -Efficient Network Function Backup by Update Piggybacking,http://arxiv.org/abs/2005.07580v1 -Reflections on the Erd\H {o}s Discrepancy Problem,http://arxiv.org/abs/2005.14283v1 -"Characterization of water-based liquid scintillator for Cherenkov and - scintillation separation",http://dx.doi.org/10.1140/epjc/s10052-020-8418-4 -Characterization of new eco friendly gas mixtures based on HFO for RPCs,http://arxiv.org/abs/2006.00331v1 -Debiased Sinkhorn barycenters,http://arxiv.org/abs/2006.02575v1 -"The importance of being discrete: on the inaccuracy of continuous - approximations in auction theory",http://arxiv.org/abs/2006.03016v3 -"$O(n)$ Connections are Expressive Enough: Universal Approximability of - Sparse Transformers",http://arxiv.org/abs/2006.04862v2 -CNN-Based Semantic Change Detection in Satellite Imagery,http://dx.doi.org/10.1007/978-3-030-30493-5_61 -"Search for Axionlike-Particle-Induced Prompt Gamma-Ray Emission from - Extragalactic Core-Collapse Supernovae with the Fermi Large Area Telescope",http://dx.doi.org/10.1103/PhysRevLett.124.231101 -"On the environment-destructive probabilistic trends: a perceptual and - behavioral study on video game players",http://arxiv.org/abs/2006.09706v1 -"An Exploratory Study of Argumentative Writing by Young Students: A - Transformer-based Approach",http://arxiv.org/abs/2006.09873v1 -A computer program to simulate the response of SiPMs,http://arxiv.org/abs/2006.11150v1 -"Fast Mixing of Multi-Scale Langevin Dynamics under the Manifold - Hypothesis",http://arxiv.org/abs/2006.11166v2 -"High-resolution millimeter-wave spectroscopy of CH$_2$DCl: paving the - way for future astronomical observations of chloromethane isotopologues",http://dx.doi.org/10.1016/j.jqsrt.2020.106982 -"Identify Influential Nodes in Online Social Network for Brand - Communication",http://arxiv.org/abs/2006.14104v1 
-"Dark Matter Annihilation Can Produce a Detectable Antihelium Flux - through $\barΛ_b$ Decays",http://dx.doi.org/10.1103/PhysRevLett.126.101101 -The M-Sigma Project,http://arxiv.org/abs/0811.0594v1 -The Chinese-French SVOM mission for Gamma-Ray Burst studies,http://arxiv.org/abs/0811.1154v1 -Synthesis of infinite-layer LaNiO2 films by metal-organic deposition,http://dx.doi.org/10.1016/j.physc.2009.05.104 -"Is J 133658.3-295105 a Radio Source at z >= 1.0 or at the Distance of M - 83?",http://dx.doi.org/10.1088/0004-6256/136/6/2468 -Observation of Geo-Neutrinos,http://dx.doi.org/10.1016/j.physletb.2010.03.051 -"Observation of the growth of a magnetic vortex in the transition layer - of a mildly relativistic oblique plasma shock",http://dx.doi.org/10.1063/1.3493627 -"Observations, theory and implications of thermal emission from gamma-ray - bursts",http://arxiv.org/abs/1003.2582v1 -Microscopic Description of Nuclear Fission Dynamics,http://dx.doi.org/10.1088/0954-3899/37/6/064037 -Reconstructing Dark Matter Properties via Gamma-Rays with Fermi-LAT,http://arxiv.org/abs/1012.0217v1 -"Gamma-Ray Burst Observations at High Energy with the Fermi Large Area - Telescope",http://arxiv.org/abs/1012.0558v1 -"η' Multiplicity and the Witten-Veneziano relation at finite - temperature",http://dx.doi.org/10.1103/PhysRevD.84.016006 -"J/psi production at sqrt(s)=1.96 and 7 TeV: Color-Singlet Model, NNLO* - and polarisation",http://dx.doi.org/10.1088/0954-3899/38/12/124110 -"Isolated photon production in $\sqrt{s_{NN}}$ = 2.76 TeV PbPb collisions - as a function of transverse energy and reaction centrality",http://dx.doi.org/10.1088/0954-3899/38/12/124179 -"Toward Early-Warning Detection of Gravitational Waves from Compact - Binary Coalescence",http://dx.doi.org/10.1088/0004-637X/748/2/136 -"Infrared and radio study of the W43 cluster: resolved binaries and - non-thermal emission",http://dx.doi.org/10.1051/0004-6361/201117296 -"Spectral-Temporal Simulations of Internal Dissipation Models of - Gamma-Ray Bursts",http://dx.doi.org/10.1088/0004-637X/739/2/103 -Zone-Center Dynamical Matrix in Magnetoelectrics,http://dx.doi.org/10.1103/PhysRevB.84.214428 -Precise measurement of prompt photon emission for carbon ion therapy,http://dx.doi.org/10.1088/1748-0221/7/03/P03001 -"SSC Emission as Explanation of The Gamma Ray Afterglow Observed in GRB - 980923",http://arxiv.org/abs/1110.6421v1 -Time-dependent CP asymmetries in charm decays,http://arxiv.org/abs/1111.0168v1 -"Measurement of charm production at central rapidity in proton-proton - collisions at $\sqrt{s} = 7$ TeV",http://dx.doi.org/10.1007/JHEP01(2012)128 -"Initial Systematic Investigations of the Landscape of Low Layer NAHE - Variation Extensions",http://arxiv.org/abs/1111.1917v3 -A double component in the prompt emission of GRB 090618,http://arxiv.org/abs/1111.2230v1 -An observational imprint of the Collapsar model of long Gamma Ray Bursts,http://dx.doi.org/10.1088/0004-637X/749/2/110 -"Discovery of a Wolf-Rayet Star Through Detection of its Photometric - Variability",http://dx.doi.org/10.1088/0004-6256/143/6/136 -The Impact of Fermi on the Study of Gamma-ray Bursts,http://arxiv.org/abs/1111.3378v1 -"Self consistent, absolute calibration technique for photon number - resolving detectors",http://dx.doi.org/10.1364/OE.19.023249 -"DECENT: A Decentralized Architecture for Enforcing Privacy in Online - Social Networks",http://arxiv.org/abs/1111.5377v2 -The structure of coevolving infection networks,http://dx.doi.org/10.1209/0295-5075/97/18003 -MIS: a MIRIAD 
Interferometry Singledish toolkit,http://arxiv.org/abs/1202.1030v1 -Single- and Two-Component GRB Spectra in the Fermi GBM-LAT Energy Range,http://dx.doi.org/10.1088/0004-637X/755/1/12 -"Splitting of 3d quaternion dimensions into 2d-sells and a ""world screen - technology""",http://arxiv.org/abs/1202.2995v1 -"Correlation between the isotropic energy and the peak energy at zero - fluence for the individual pulses of GRBs: towards an universal physical - correlation for the prompt emission",http://dx.doi.org/10.1088/0004-637X/749/2/132 -"Discriminating Minimal SUGRA and Minimal Gauge Mediation Models at the - Early LHC",http://dx.doi.org/10.1007/JHEP04(2012)003 -"Optimization of a Mu2e production solenoid heat and radiation shield - using MARS15",http://arxiv.org/abs/1202.3946v1 -"On the sensitivity of the dijet asymmetry to the physics of jet - quenching",http://dx.doi.org/10.1103/PhysRevC.85.064908 -Jitter Self-Compton Process: GeV Emission of GRB 100728A,http://dx.doi.org/10.1088/0004-637X/748/2/135 -A Note on Solar Cycle Length during the Medieval Climate Anomaly,http://dx.doi.org/10.1007/s11207-012-9964-1 -"Suppression of high transverse momentum D mesons in central Pb-Pb - collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV",http://dx.doi.org/10.1007/JHEP09(2012)112 -"Improving the Prompt Electromagnetic Energy Component of Jet Energy - Resolution with pi0 Fitting in High Granularity Electromagnetic Calorimeters",http://arxiv.org/abs/1203.2577v1 -The Astro-WISE approach to quality control for astronomical data,http://dx.doi.org/10.1007/s10686-012-9296-z -Thermalization at intermediate coupling,http://dx.doi.org/10.1103/PhysRevLett.110.101601 -"Nuclear modification of high transverse momentum particle production in - p+A collisions at RHIC and LHC",http://dx.doi.org/10.1016/j.physletb.2012.10.046 -The universal enveloping algebra of the Witt algebra is not noetherian,http://arxiv.org/abs/1304.0114v3 -Probing Curvature Effects in the Fermi GRB 110920,http://dx.doi.org/10.1088/0004-637X/778/1/3 -Singular superspaces,http://dx.doi.org/10.1007/s00209-014-1323-5 -Testing Quantum Gravity by Quantum Light,http://dx.doi.org/10.1103/PhysRevLett.110.213601 -Wireless sensor network technology for moisture monitoring of wood,http://arxiv.org/abs/1307.0952v1 -"Multiple scattering effects on inclusive particle production in the - large-x regime",http://dx.doi.org/10.1103/PhysRevD.88.054010 -LHC jet suppression of light and heavy flavor observables,http://dx.doi.org/10.1016/j.physletb.2014.05.053 -"Preparing for an Explosion: Hydrodynamic Instabilities and Turbulence in - Presupernovae",http://dx.doi.org/10.1088/0004-637X/785/2/82 -Prompt merger collapse and the maximum mass of neutron stars,http://dx.doi.org/10.1103/PhysRevLett.111.131101 -Linear response of tripartite entanglement to infinitesimal noise,http://dx.doi.org/10.1016/j.aop.2014.06.017 -"Heavy flavour in nucleus-nucleus collisions at RHIC and LHC: a Langevin - approach",http://dx.doi.org/10.1051/epjconf/20146604004 -"AIDSS-HR: An Automated Intelligent Decision Support System for Enhancing - the Performance of Employees",http://arxiv.org/abs/1307.8335v1 -A search for lines in the bright X-ray afterglow of GRB120711A,http://dx.doi.org/10.1051/0004-6361/201321604 -Slow-light enhanced gain in active photonic crystal waveguides,http://dx.doi.org/10.1038/ncomms6039 -Super-Planckian excursions of the inflaton and quantum corrections,http://dx.doi.org/10.1142/S0217732315400088 -"Temporal and spectral disentanglement of laser-driven electron tunneling - 
emission from a solid",http://dx.doi.org/10.1038/srep35877 -Stars as resonant absorbers of gravitational waves,http://dx.doi.org/10.1093/mnrasl/slu136 -Causality and chance in relativistic quantum field theories,http://dx.doi.org/10.1016/j.shpsb.2014.03.002i -RHIC and LHC jet suppression in non-central collisions,http://dx.doi.org/10.1016/j.physletb.2014.08.063 -Production of Tetraquarks at the LHC,http://dx.doi.org/10.1103/PhysRevD.90.034003 -"The Time Structure of Hadronic Showers in Calorimeters with Scintillator - and with Gas Readout",http://dx.doi.org/10.1088/1742-6596/587/1/012037 -"Localizing gravitational wave sources with optical telescopes and - combining electromagnetic and gravitational wave data",http://dx.doi.org/10.1007/978-3-319-10488-1_5 -"Magneto-optics of general pseudospin-s two-dimensional Dirac-Weyl - fermions",http://dx.doi.org/10.1103/PhysRevB.90.035405 -"Convective turbulence in a Rayleigh-Benard system with and without - rotation in the infinite Prandtl number limit",http://arxiv.org/abs/1406.2232v1 -"Investigating Binary Black Hole Mergers with Principal Component - Analysis",http://dx.doi.org/10.1007/978-3-319-10488-1_24 -"Temporal dynamics of stimulated emission with applications in nuclear - quantum optics",http://dx.doi.org/10.1103/PhysRevA.91.053810 -Weak Values are Interference Phenomena,http://dx.doi.org/10.1103/PhysRevA.91.032116 -Systems for Near Real-Time Analysis of Large-Scale Dynamic Graphs,http://arxiv.org/abs/1410.1903v1 -"A Review of CUDA, MapReduce, and Pthreads Parallel Computing Models",http://arxiv.org/abs/1410.4453v1 -Simulation of Black Hole Collisions in Asymptotically AdS Spacetimes,http://dx.doi.org/10.1103/PhysRevLett.114.081601 -"New insights on Saturn's formation from its nitrogen isotopic - composition",http://dx.doi.org/10.1088/2041-8205/796/2/L28 -"The Scission-Point Configuration within the Two-Center Shell Model Shape - Parameterization",http://dx.doi.org/10.1103/PhysRevC.90.054607 -New Results from RENO and The 5 MeV Excess,http://dx.doi.org/10.1063/1.4915563 -"$Υ(nS)$ and $χ_b(nP)$ production at hadron colliders in - nonrelativistic QCD",http://dx.doi.org/10.1103/PhysRevD.94.014028 -Existence of traversable wormholes in the spherical stellar systems,http://dx.doi.org/10.1007/s10509-016-2803-3 -"Prospects for Gamma-Ray Bursts detection by the Cherenkov Telescope - Array",http://arxiv.org/abs/1509.01438v1 -Dynamic concurrent van Emde Boas array,http://arxiv.org/abs/1509.06948v1 -"Parameterized Linear Temporal Logics Meet Costs: Still not Costlier than - LTL",http://dx.doi.org/10.4204/EPTCS.193.11 -"Hyperinstantons, the Beltrami Equation, and Triholomorphic Maps",http://dx.doi.org/10.1002/prop.201500061 -Latest nH analysis in the Double Chooz experiment,http://arxiv.org/abs/1511.00068v1 -"Optimizing Global Coronal Magnetic Field Models Using Image-Based - Constraints",http://dx.doi.org/10.3847/0004-637X/820/2/113 -High-Performance Computing with Quantum Processing Units,http://arxiv.org/abs/1511.04386v1 -"A search for prompt lepton-jets in $pp$ collisions at $\sqrt{s}=$ 8 TeV - with the ATLAS detector",http://dx.doi.org/10.1007/JHEP02(2016)062 -Dark Matter Annihilation Decay at The LHC,http://dx.doi.org/10.1103/PhysRevD.93.035024 -Signal Processing and Electronic Noise in LZ,http://dx.doi.org/10.1088/1748-0221/11/03/C03029 -A non-equilibrium microscopic description of spallation,http://dx.doi.org/10.1393/ncc/i2016-16384-8 -"Low-Γ jets from Compact Binary Mergers as Candidate - Electromagnetic Counterparts to Gravitational Wave 
Sources",http://dx.doi.org/10.1017/S174392131601228X -Not-that-heavy Majorana neutrino signals at the LHC,http://arxiv.org/abs/1610.03894v2 -"The cosmic-ray ground-level enhancements of 29 September 1989 and 20 - January 2005",http://arxiv.org/abs/1610.04635v1 -On the IceCube spectral anomaly,http://dx.doi.org/10.1088/1475-7516/2016/12/045 -"Automated assessment of non-native learner essays: Investigating the - role of linguistic features",http://dx.doi.org/10.1007/s40593-017-0142-3 -A microprocessor based on a two-dimensional semiconductor,http://dx.doi.org/10.1038/ncomms14948 -"Post-Outburst Radio Observations of the High Magnetic Field Pulsar PSR - J1119-6127",http://dx.doi.org/10.3847/2041-8213/834/1/L2 -Holographic Photon Production and Anisotropic Flow,http://arxiv.org/abs/1612.05114v2 -Measurement of CP asymmetries in $D^0 \rightarrow hh$ decays,http://arxiv.org/abs/1612.05118v1 -Electromagnetic radiation and collectivity in small quark-gluon droplets,http://arxiv.org/abs/1612.05464v1 -A Fast High-Voltage Switching Multiwire Proportional Chamber,http://arxiv.org/abs/1612.08329v1 -Learning Visual N-Grams from Web Data,http://arxiv.org/abs/1612.09161v2 -"Measurement of D-meson production at mid-rapidity in pp collisions at - $\mathbf{\sqrt{s}=7}$ TeV",http://dx.doi.org/10.1140/epjc/s10052-017-5090-4 -"Acceleration of low-latency gravitational wave searches using - Maxwell-microarchitecture GPUs",http://arxiv.org/abs/1702.02256v1 -A short introduction to secrecy and verifiability for elections,http://arxiv.org/abs/1702.03168v3 -"Northwest Africa 5958: a weakly altered CM-related ungrouped chondrite, - not a CI3",http://dx.doi.org/10.1111/maps.12628 -Information-thermodynamics of Quantum Generalized Measurements,http://arxiv.org/abs/1702.07164v1 -Vector boson-tagged jet production in heavy ion collisions at the LHC,http://dx.doi.org/10.1103/PhysRevC.96.014912 -"Thermodynamic Effects of Single-Qubit Operations in Silicon-Based - Quantum Computing",http://dx.doi.org/10.1016/j.physleta.2018.05.027 -Deep Learning based Isolated Arabic Scene Character Recognition,http://arxiv.org/abs/1704.06821v1 -"Micro-plasticity and recent insights from intermittent and small-scale - plasticity",http://arxiv.org/abs/1704.07297v1 -"Semi-Automated & Collaborative Online Training Module For Improving - Communication Skills",http://dx.doi.org/10.1145/3090097 -The Pragmatics of Indirect Commands in Collaborative Discourse,http://arxiv.org/abs/1705.03454v2 -QCD Results from HERA,http://arxiv.org/abs/1705.05204v1 -"Learning from Clinical Judgments: Semi-Markov-Modulated Marked Hawkes - Processes for Risk Prognosis",http://arxiv.org/abs/1705.05267v1 -Exploiting the Structure via Sketched Gradient Algorithms,http://arxiv.org/abs/1705.05348v2 -Latest ALICE results of photon and jet measurements,http://arxiv.org/abs/1705.06800v1 -"User Selection and Widely Linear Multiuser Precoding for One-dimensional - Signalling",http://arxiv.org/abs/1705.09985v1 -Neural Embeddings of Graphs in Hyperbolic Space,http://arxiv.org/abs/1705.10359v1 -"Systematizing Genome Privacy Research: A Privacy-Enhancing Technologies - Perspective",http://arxiv.org/abs/1712.02193v2 -"Statistics students' identification of inferential model elements within - contexts of their own invention",http://dx.doi.org/10.1007/s11858-018-0986-5 -"Characterization of Germanium Detectors for the Measurement of the - Angular Distribution of Prompt gamma-rays at the ANNRI in the MLF of the - J-PARC",http://dx.doi.org/10.1088/1748-0221/13/02/P02018 -"Constraints 
On Short, Hard Gamma-Ray Burst Beaming Angles From - Gravitational Wave Observations",http://dx.doi.org/10.3847/1538-4357/aab847 -Detection of the thermal component in GRB 160107A,http://dx.doi.org/10.1093/pasj/psx152 -"Revealing Short GRB Jet Structure and Dynamics with Gravitational Wave - Electromagnetic Counterparts",http://dx.doi.org/10.1017/S1743921318000169 -Gravitational-wave luminosity of binary neutron stars mergers,http://dx.doi.org/10.1103/PhysRevLett.120.111101 -"Linear Block Coding for Efficient Beam Discovery in Millimeter Wave - Communication Networks",http://arxiv.org/abs/1712.07161v1 -A Mixture of Matrix Variate Bilinear Factor Analyzers,http://arxiv.org/abs/1712.08664v3 -"Constraints on the intrinsic charm content of the proton from recent - ATLAS data",http://arxiv.org/abs/1712.09096v2 -Build up of a subject classification system from collective intelligence,http://dx.doi.org/10.3938/NPSM.68.647 -Patterns of variability in supercritical hadronic systems,http://dx.doi.org/10.1093/mnras/sty833 -Knockout driven fragmentation of porphyrins,http://dx.doi.org/10.1039/C7CP01583F -"Hydrogenated pyrene: Statistical single-carbon loss below the knockout - threshold",http://dx.doi.org/10.1140/epjd/e2016-60735-3 -"A MAD Explanation for the Correlation between Bulk Lorentz Factor and - Minimum Variability Timescale",http://dx.doi.org/10.1093/mnras/sty1030 -"The Relevance of Text and Speech Features in Automatic Non-native - English Accent Identification",http://arxiv.org/abs/1804.05689v1 -Photons from thermalizing matter in heavy ion collisions,http://dx.doi.org/10.1016/j.nuclphysa.2018.07.013 -Bolzano's measurable numbers: are they real?,http://arxiv.org/abs/1805.02237v1 -Strategy-Proof Incentives for Predictions,http://arxiv.org/abs/1805.04867v3 -Conversations Gone Awry: Detecting Early Signs of Conversational Failure,http://arxiv.org/abs/1805.05345v1 -Quantum dark solitons in the one-dimensional Bose gas,http://dx.doi.org/10.1103/PhysRevA.99.043632 -"A Talker Ensemble: the University of Wrocław's Entry to the NIPS 2017 - Conversational Intelligence Challenge",http://arxiv.org/abs/1805.08032v1 -Constraints on anharmonic corrections of Fuzzy Dark Matter,http://dx.doi.org/10.1007/JHEP08(2018)073 -Prediction of the second peak in the afterglow of GW170817,http://arxiv.org/abs/1805.08338v1 -Fast Dynamic Routing Based on Weighted Kernel Density Estimation,http://arxiv.org/abs/1805.10807v2 -"Beam Discovery Using Linear Block Codes for Millimeter Wave - Communication Networks",http://arxiv.org/abs/1805.12009v1 -"Challenges for Measuring Usefulness of Interactive IR Systems with - Log-based Approaches",http://arxiv.org/abs/1809.02413v1 -Usable Differential Privacy: A Case Study with PSI,http://arxiv.org/abs/1809.04103v1 -The First Glitch in a Central Compact Object Pulsar: 1E 1207.4-5209,http://dx.doi.org/10.3847/1538-4357/aae152 -Quantum Rolling Friction,http://dx.doi.org/10.1103/PhysRevLett.123.120401 -"A Study of Background Conditions for Sphinx--The Satellite-Borne - Gamma-Ray Burst Polarimeter",http://dx.doi.org/10.3390/galaxies6020050 -"Nanoparticle-lipid interaction: Job scattering plots to differentiate - vesicle aggregation from supported lipid bilayer formation",http://dx.doi.org/10.3390/colloids2040050 -Spoken Pass-Phrase Verification in the i-vector Space,http://arxiv.org/abs/1809.11068v1 -"District heating systems under high CO2 emission prices: the role of the - pass-through from emission cost to electricity prices",http://arxiv.org/abs/1810.02109v1 -"Statistical Study of 
the Swift X-ray Flash and X-ray Rich Gamma-Ray - Bursts",http://dx.doi.org/10.3847/1538-4357/aadcf8 -"AARTFAAC Flux Density Calibration and Northern Hemisphere Catalogue at - 60 MHz",http://dx.doi.org/10.1093/mnras/sty2810 -"Modeling and Analysis of Wildfire Detection using Wireless Sensor - Network with Poisson Deployment",http://arxiv.org/abs/1810.07511v1 -Fat Jet Signature of a Heavy Neutrino at Lepton Collider,http://dx.doi.org/10.1103/PhysRevD.100.015012 -The remaining parts for the long-standing J/psi polarization puzzle,http://dx.doi.org/10.1103/PhysRevD.99.014044 -Distilling with Performance Enhanced Students,http://arxiv.org/abs/1810.10460v2 -A PMU-based Multivariate Model for Classifying Power System Events,http://arxiv.org/abs/1812.00246v1 -"Revisiting constraints on 3+1 active-sterile neutrino mixing using - IceCube data",http://dx.doi.org/10.1007/JHEP03(2019)203 -Closing the light gluino gap with electron-proton colliders,http://dx.doi.org/10.1103/PhysRevD.99.055011 -"Probing Neutrino Dirac Mass in Left-Right Symmetric Models at the LHC - and Next Generation Colliders",http://dx.doi.org/10.1103/PhysRevD.99.055042 -"Atmospheric Charm, QCD and Neutrino Astronomy",http://arxiv.org/abs/1812.02248v1 -What is the Effect of Importance Weighting in Deep Learning?,http://arxiv.org/abs/1812.03372v3 -Long live the Higgs portal!,http://dx.doi.org/10.1007/JHEP02(2019)140 -Modeling Temporal Evidence from External Collections,http://dx.doi.org/10.1145/3289600.3290966 -"Direct photon production and flow at low transverse momenta in pp, p-Pb - and Pb-Pb collisions",http://arxiv.org/abs/1812.08104v1 -Pan-Cancer Epigenetic Biomarker Selection from Blood Samples Using SAS,http://arxiv.org/abs/1812.09203v1 -Ag-Au alloys BCS-like Superconductors?,http://arxiv.org/abs/1812.09308v1 -The second Higgs at the lifetime frontier,http://arxiv.org/abs/1812.09315v2 -Intent Detection and Slots Prompt in a Closed-Domain Chatbot,http://arxiv.org/abs/1812.10628v2 -Eff Directly in OCaml,http://dx.doi.org/10.4204/EPTCS.285.2 -Pathwise McKean-Vlasov Theory with Additive Noise,http://dx.doi.org/10.1214/20-AAP1560 -"Emergent Commensurability from Hilbert Space Truncation in Fractional - Quantum Hall Fluids",http://dx.doi.org/10.1103/PhysRevB.100.241302 -"The complete study on the inclusive production of Υ + γ - at the LHC",http://dx.doi.org/10.1103/PhysRevD.99.096010 -"Gauss's Law and the Source for Poisson's Equation in Modified Gravity - with Varying G",http://dx.doi.org/10.1093/mnras/stz120 -"A Bleeding Digital Heart: Identifying Residual Data Generation from - Smartphone Applications Interacting with Medical Devices",http://arxiv.org/abs/1901.03724v1 -"Friend, Collaborator, Student, Manager: How Design of an AI-Driven Game - Level Editor Affects Creators",http://dx.doi.org/10.1145/3290605.3300854 -"Centrality dependence of the direct photon multiplicity in heavy ion - collisions",http://arxiv.org/abs/1901.07019v1 -GRB 190114C: An Upgraded Legend,http://arxiv.org/abs/1901.07505v2 -"Phenomenological NLO analysis of eta(c) production at the LHC in the - collider and fixed-target modes",http://dx.doi.org/10.1016/j.nuclphysb.2019.114662 -"An analogue of the Gibbons-Hawking Ansatz for quaternionic Kähler - spaces",http://arxiv.org/abs/1901.11166v3 -"Natural Language Processing, Sentiment Analysis and Clinical Analytics",http://arxiv.org/abs/1902.00679v1 -"Preliminary model of the outer disk of RU Lup presently showing only - four dark gaps",http://arxiv.org/abs/1902.01222v2 -"Plasmas in Gamma-Ray Bursts: particle 
acceleration, magnetic fields, - radiative Processes and environments",http://arxiv.org/abs/1902.02562v1 -Thresholding normally distributed data creates complex networks,http://dx.doi.org/10.1103/PhysRevE.101.062302 -Rearranging absolutely convergent well-ordered series in Banach spaces,http://arxiv.org/abs/1902.08846v1 -"Multiplicity dependence of heavy-flavour correlations with charged - particle and collective effects in p--Pb collisions at - $\sqrt{s_\mathrm{NN}}$= 5.02 TeV with ALICE at LHC",http://arxiv.org/abs/1903.00649v1 -Linking lepton number violation with $B$ anomalies,http://arxiv.org/abs/1903.01799v1 -Mars Missions Failure Report Assortment: Review and Conspectus,http://dx.doi.org/10.2514/6.2020-3541 -Learning from Synthetic Data for Crowd Counting in the Wild,http://arxiv.org/abs/1903.03303v1 -Critical hysteresis on dilute triangular lattice,http://dx.doi.org/10.1103/PhysRevE.99.062136 -"Evaluating the Contextual Integrity of Privacy Regulation: Parents' IoT - Toy Privacy Norms Versus COPPA",http://arxiv.org/abs/1903.05152v1 -"Lost Silence: An emergency response early detection service through - continuous processing of telecommunication data streams",http://arxiv.org/abs/1903.05372v1 -Correlation of heavy and light flavours in simulations,http://dx.doi.org/10.3390/universe5050118 -"The Configuration of the Perivascular System Transporting Macromolecules - in the CNS (PREPRINT)",http://dx.doi.org/10.3389/fnins.2019.00511 -Inferring Which Medical Treatments Work from Reports of Clinical Trials,http://arxiv.org/abs/1904.01606v2 -"Bakry-Émery Ricci curvature of doubly warped product of weighted - spaces",http://arxiv.org/abs/1904.04134v3 -"Evidence in favor of Single Parton Scattering mechanism in $Υ$ - and $D$ associated production at the LHC",http://dx.doi.org/10.1103/PhysRevD.99.096021 -Reflections on the search for particle dark matter by direct experiments,http://arxiv.org/abs/1904.05711v1 -Analysis of overfitting in the regularized Cox model,http://dx.doi.org/10.1088/1751-8121/ab375c -"Brain on the 3D Visual Art through Virtual Reality; Introducing - Neuro-Art in a Case Investigation",http://arxiv.org/abs/1904.06645v1 -Cyberbullying and Traditional Bullying in Greece: An Empirical Study,http://arxiv.org/abs/1904.07188v1 -"A Multi-Authority Attribute-Based Signcryption Scheme with Efficient - Revocation for Smart Grid Downlink Communication",http://arxiv.org/abs/1904.11105v1 -"The Pinpoint Comets: 133P/Elst-Pizarro, 249P/LINEAR, 331P/Gibbs, 62412 - and 6478 Gault",http://arxiv.org/abs/1907.01096v1 -Is the X(3872) a bound state ?,http://dx.doi.org/10.1088/1674-1137/43/12/124107 -"The MicroBooNE continuous readout stream for detection of supernova - neutrinos",http://dx.doi.org/10.1088/1742-6596/1312/1/012006 -"Strongly interacting dark sectors in the early Universe and at the LHC - through a simplified portal",http://dx.doi.org/10.1007/JHEP01(2020)162 -"Broken adiabaticity induced by Lifshitz transition in MoS$_2$ and WS$_2$ - single layers",http://dx.doi.org/10.1038/s42005-020-0299-1 -"Physics in the information age: qualitative methods (with examples from - quantum mechanics)",http://dx.doi.org/10.1088/1361-6404/ab7c6a -"To catch a long-lived particle: hit selection towards a regional - hardware track trigger implementation",http://dx.doi.org/10.1088/1748-0221/14/11/P11009 -"Decaying dark matter at IceCube and its signature on High Energy gamma - experiments",http://dx.doi.org/10.1088/1475-7516/2019/11/046 -Quantum-memory-assisted entropic uncertainty 
relations,http://dx.doi.org/10.1002/andp.201900124 -Nanoparticles manipulation in 3D nanotips excited with plasmonic vortex,http://dx.doi.org/10.1364/OL.384899 -"IceCube search for high-energy neutrinos produced in the precursor - stages of gamma-ray bursts",http://arxiv.org/abs/1908.06653v1 -Libra: Is it Really about Money?,http://arxiv.org/abs/1908.07474v2 -"Gain More for Less: The Surprising Benefits of QoS Management in - Constrained NDN Networks",http://dx.doi.org/10.1145/3357150.3357404 -"Dissipative dynamics of atomic and molecular Rydberg gases: Avalanche to - ultracold plasma states of strong coupling",http://dx.doi.org/10.1088/1361-6455/ab604f -Improving Neural Story Generation by Targeted Common Sense Grounding,http://arxiv.org/abs/1908.09451v2 -Campana points of bounded height on vector group compactifications,http://dx.doi.org/10.1112/plms.12391 -The Woman Worked as a Babysitter: On Biases in Language Generation,http://arxiv.org/abs/1909.01326v2 -Searching for a solar relaxion/scalar with XENON1T and LUX,http://dx.doi.org/10.1103/PhysRevD.100.095021 -Say What I Want: Towards the Dark Side of Neural Dialogue Models,http://arxiv.org/abs/1909.06044v3 -"Robust, Expressive, and Quantitative Linear Temporal Logics: Pick any - Two for Free",http://dx.doi.org/10.4204/EPTCS.305.1 -Lighting the Dark: The Evolution of the Post-Inflationary Universe,http://dx.doi.org/10.1103/PhysRevLett.124.061301 -Preference-Based Learning for Exoskeleton Gait Optimization,http://arxiv.org/abs/1909.12316v3 -SAT vs CSP: a commentary,http://arxiv.org/abs/1910.00128v1 -Few-shot tweet detection in emerging disaster events,http://arxiv.org/abs/1910.02290v1 -GRB X-Ray Flare Properties among Different GRB Subclasses,http://dx.doi.org/10.3847/1538-4357/ab3e75 -"Binary Neutron Star Mergers with Missing Electromagnetic Counterparts as - Manifestations of Mirror World",http://dx.doi.org/10.1016/j.physletb.2020.135402 -Biomolecular NMR at 1.2 GHz,http://arxiv.org/abs/1910.07462v1 -"Precision of analytical approximations in calculations of Atmospheric - Leptons",http://arxiv.org/abs/1910.08676v1 -"Scalable Inference for Nonparametric Hawkes Process Using - Pólya-Gamma Augmentation",http://arxiv.org/abs/1910.13052v1 -Multiple-parameter estimation in a sagnac interferometer,http://arxiv.org/abs/1911.02324v1 -Knockdown of human AMPK using the CRISPR-Cas9 genome-editing system,http://dx.doi.org/10.1007/978-1-4939-7598-3_11 -"Prompt hadroproduction of $η_c(1S,2S)$ in the $k_T$-factorization - approach",http://dx.doi.org/10.1007/JHEP02(2020)037 -Neutron Ghost Imaging,http://dx.doi.org/10.1103/PhysRevA.101.053844 -"Multiscale analysis of nutrient uptake by plant roots with sparse - distribution of root hairs: Nonstandard scaling",http://arxiv.org/abs/1911.06293v2 -Hypergraph Contextuality,http://dx.doi.org/10.3390/e21111107 -A charming ICECUBE discover?,http://arxiv.org/abs/1911.07240v1 -"Neutrino Production Associated with Late Bumps in Gamma-Ray Bursts and - Potential Contribution to Diffuse Flux at IceCube",http://dx.doi.org/10.3847/1538-4357/ab6bcf -Coherent control for qubit state readout,http://dx.doi.org/10.1088/1367-2630/ab9982 -DDNet: Dual-path Decoder Network for Occlusion Relationship Reasoning,http://arxiv.org/abs/1911.11582v3 -"Ultra slow electron holes in collisionless plasmas: stability at high - ion temperature",http://dx.doi.org/10.1063/1.5121530 -"Understanding the Impact of On-chip Communication on DNN Accelerator - Performance",http://arxiv.org/abs/1912.01664v1 -"Domain-adaptive Crowd Counting via 
High-quality Image Translation and - Density Reconstruction",http://arxiv.org/abs/1912.03677v3 -"A Conceptual Design Study of a Compact Photon Source (CPS) for Jefferson - Lab",http://dx.doi.org/10.1016/j.nima.2020.163429 -Search for Long-Lived Heavy Neutrinos at the LHC with a VBF Trigger,http://dx.doi.org/10.1140/epjc/s10052-020-8188-z -Forecasting significant stock price changes using neural networks,http://dx.doi.org/10.1007/s00521-020-04942-3 -"CME -Associated Energetic Ions at 0.23 AU -- Consideration of the - Auroral Pressure Cooker Mechanism Operating in the Low Corona as a Possible - Energization Process",http://dx.doi.org/10.3847/1538-4365/ab63cc -"Inclusive $J/ψ$ and $η_c$ production in $Υ$ decay at - $\mathcal{O}(α_s^5)$ in nonrelativistic QCD factorization",http://dx.doi.org/10.1103/PhysRevD.101.074002 -"Scalable Fine-grained Generated Image Classification Based on Deep - Metric Learning",http://arxiv.org/abs/1912.11082v1 -"Pruning Deep Convolutional Neural Networks Architectures with Evolution - Strategy",http://dx.doi.org/10.1016/j.ins.2020.11.009 -Direct Observation of Quantum Percolation Dynamics,http://arxiv.org/abs/2001.00268v1 -"Attitude Determination and Estimation using Vector Observations: Review, - Challenges and Comparative Results",http://arxiv.org/abs/2001.03787v3 -Real-time Dynamics of Plasma Balls from Holography,http://dx.doi.org/10.1103/PhysRevLett.124.191601 -Informing the Design of Privacy-Empowering Tools for the Connected Home,http://dx.doi.org/10.1145/3313831.3376264 -"The analytic structure of amplitudes on backgrounds from gauge - invariance and the infra-red",http://dx.doi.org/10.1007/JHEP04(2020)078 -"Angular distribution of fragments in neutron-induced nuclear fission at - energies 1-200 MeV: data, theoretical models and relevant problems",http://dx.doi.org/10.1051/epjconf/202125600003 -Self Destructing Atomic DM,http://dx.doi.org/10.1103/PhysRevD.104.035010 -"Integrability of point-vortex dynamics via symplectic reduction: a - survey",http://dx.doi.org/10.1007/s40598-020-00162-8 -"Advertisers Jump on Coronavirus Bandwagon: Politics, News, and Business",http://arxiv.org/abs/2003.00923v1 -Collective vibrations of a hydrodynamic active lattice,http://dx.doi.org/10.1098/rspa.2020.0155 -Pixel-Level Self-Paced Learning for Super-Resolution,http://arxiv.org/abs/2003.03113v2 -"Generating Emotionally Aligned Responses in Dialogues using Affect - Control Theory",http://arxiv.org/abs/2003.03645v2 -Recent progress on superconductors with time-reversal symmetry breaking,http://dx.doi.org/10.1088/1361-648X/abaa06 -"Meta-Learning GNN Initializations for Low-Resource Molecular Property - Prediction",http://arxiv.org/abs/2003.05996v2 -"A Novel Jamming Attacks Detection Approach Based on Machine Learning for - Wireless Communication",http://dx.doi.org/10.1109/ICOIN48656.2020.9016462 -"Neural Network-Optimized Channel Estimator and Training Signal Design - for MIMO Systems with Few-Bit ADCs",http://dx.doi.org/10.1109/LSP.2020.3012794 -"Central exclusive $χ_{c,b}$ production at high energy colliders and - gluon saturation approach",http://dx.doi.org/10.1016/j.physletb.2020.135492 -Decoding Imagined Speech using Wavelet Features and Deep Neural Networks,http://dx.doi.org/10.1109/INDICON47234.2019.9028925 -"In Silico Investigations on the Potential Inhibitors for COVID-19 - Protease",http://arxiv.org/abs/2003.10642v2 -"A time series method to analyze incidence pattern and estimate - reproduction number of COVID-19",http://arxiv.org/abs/2003.10655v1 -Cosmological string 
backgrounds from super Poisson-Lie T-plurality,http://dx.doi.org/10.1016/j.nuclphysb.2020.115110 -Diffusive photospheres in gamma-ray bursts,http://dx.doi.org/10.1093/mnras/staa868 -Estimating Treatment Effects with Observed Confounders and Mediators,http://arxiv.org/abs/2003.11991v3 -Production mechanisms of open-heavy flavor mesons,http://dx.doi.org/10.1103/PhysRevD.101.094020 -"Hindering loads prompt clustered configurations that enhance stability - during cargo transport by multiple Kinesin-1",http://arxiv.org/abs/2007.02206v2 -"Bespoke vs. Prêt-à-Porter Lottery Tickets: Exploiting Mask - Similarity for Trainable Sub-Network Finding",http://arxiv.org/abs/2007.04091v1 -"Fast Molecular Compression by a Hyperthermal Collision Gives - Bond-Selective Mechanochemistry",http://dx.doi.org/10.1103/PhysRevLett.126.056001 -Exploring rapid transient detection with the Athena Wide Field Imager,http://arxiv.org/abs/2007.05548v1 -Gradient waveform design for tensor-valued encoding in diffusion MRI,http://arxiv.org/abs/2007.07631v2 -"Principled Selection of Baseline Covariates to Account for Censoring in - Randomized Trials with a Survival Endpoint",http://arxiv.org/abs/2007.08190v1 -Improving rigid 3D calibration for robotic surgery,http://dx.doi.org/10.1109/TMRB.2020.3033670 -CACTI: Captcha Avoidance via Client-side TEE Integration,http://arxiv.org/abs/2007.10397v1 -Flowing cryogenic liquid target for terahertz wave generation,http://dx.doi.org/10.1063/5.0023106 -Dark Matter Spectra from the Electroweak to the Planck Scale,http://dx.doi.org/10.1007/JHEP06(2021)121 -RoboTed: a case study in Ethical Risk Assessment,http://arxiv.org/abs/2007.15864v2 -Newly reducible polynomial iterates,http://arxiv.org/abs/2008.01222v1 -Hadronic effects on charmonium elliptic flows in heavy-ion collisions,http://dx.doi.org/10.1103/PhysRevC.103.064910 -"Multimodal Deep Generative Models for Trajectory Prediction: A - Conditional Variational Autoencoder Approach",http://arxiv.org/abs/2008.03880v2 -"Ionospheric response to Strong Geomagnetic Storms during 2000-2005: An - IMF clock angle perspective",http://dx.doi.org/10.1029/2020RS007061 -Floquet second-order topological insulators in non-Hermitian systems,http://dx.doi.org/10.1103/PhysRevB.103.L041115 -"Estimates for the single-spin asymmetries in $p^{\uparrow}p \to J/ψ - X$ process at PHENIX RHIC and SPD NICA",http://dx.doi.org/10.1103/PhysRevD.104.016008 -"Experts and authorities receive disproportionate attention on Twitter - during the COVID-19 crisis",http://arxiv.org/abs/2008.08364v1 -Revisiting the production of $J/ψ$ pairs at the LHC,http://dx.doi.org/10.1140/epjc/s10052-020-08631-2 -Single diffractive production of open heavy flavor mesons,http://dx.doi.org/10.1103/PhysRevD.102.076020 -Muonization of supernova matter,http://dx.doi.org/10.1103/PhysRevD.102.123001 -Shell structure of $^{43}$S and collapse of the $N=28$ shell closure,http://dx.doi.org/10.1103/PhysRevC.102.034325 -"GPU-based Self-Organizing Maps for Post-Labeled Few-Shot Unsupervised - Learning",http://arxiv.org/abs/2009.03665v1 -Variational wavefunctions for Sachdev-Ye-Kitaev models,http://dx.doi.org/10.1103/PhysRevResearch.3.023020 -"Foodbot: A Goal-Oriented Just-in-Time Healthy Eating Interventions - Chatbot",http://dx.doi.org/10.1145/3421937.3421960 -Improving Language Generation with Sentence Coherence Objective,http://arxiv.org/abs/2009.06358v1 -"Optical study of PKS B1322-110, the intra-hour variable radio source",http://dx.doi.org/10.3847/1538-4357/abaaaf -The Radicalization Risks of GPT-3 
and Advanced Neural Language Models,http://arxiv.org/abs/2009.06807v1 -"Frequency-based Multi Task learning With Attention Mechanism for Fault - Detection In Power Systems",http://arxiv.org/abs/2009.06825v1 -"Fast oscillations, collisionless relaxation, and spurious evolution of - supernova neutrino flavor",http://dx.doi.org/10.1103/PhysRevD.102.103017 -Content Planning for Neural Story Generation with Aristotelian Rescoring,http://arxiv.org/abs/2009.09870v2 -Gradients of Connectivity as Graph Fourier Bases of Brain Activity,http://arxiv.org/abs/2009.12567v1 -Gamma Ray Bursts: Not so Much Deadlier than We Thought,http://dx.doi.org/10.1093/mnras/staa3364 -"Energy conditions in $f(Q,T)$ gravity",http://dx.doi.org/10.1088/1402-4896/abaddc -"Linking Threat Tactics, Techniques, and Patterns with Defensive - Weaknesses, Vulnerabilities and Affected Platform Configurations for Cyber - Hunting",http://arxiv.org/abs/2010.00533v2 -Absolute X-ray energy measurement using a high-accuracy angle encoder,http://dx.doi.org/10.1107/S1600577520014526 -"Deep learning algorithms for solving high dimensional nonlinear backward - stochastic differential equations",http://arxiv.org/abs/2010.01319v3 -Acrostic Poem Generation,http://arxiv.org/abs/2010.02239v1 -Interpretable Sequence Classification via Discrete Optimization,http://arxiv.org/abs/2010.02819v1 -Understanding Fundamental Tradeoffs in Nanomechanical Resonant Sensors,http://dx.doi.org/10.1063/5.0035254 -Decoding Methods for Neural Narrative Generation,http://arxiv.org/abs/2010.07375v2 -Combining outlier analysis algorithms to identify new physics at the LHC,http://arxiv.org/abs/2010.07940v1 -Coregular submanifolds and Poisson submersions,http://arxiv.org/abs/2010.09058v2 -Summary-Oriented Question Generation for Informational Queries,http://dx.doi.org/10.18653/v1/2021.dialdoc-1.11 -Sample Efficient Reinforcement Learning with REINFORCE,http://arxiv.org/abs/2010.11364v2 -Activation Map Adaptation for Effective Knowledge Distillation,http://arxiv.org/abs/2010.13500v2 -"Dominance of $γ$-$γ$ electron-positron pair creation in a - plasma driven by high-intensity lasers",http://arxiv.org/abs/2010.14583v2 -"Speaker De-identification System using Autoencoders and Adversarial - Training",http://arxiv.org/abs/2011.04696v1 -"Similarity-Based Clustering for Enhancing Image Classification - Architectures",http://arxiv.org/abs/2011.04728v3 -"Resource Constrained Dialog Policy Learning via Differentiable Inductive - Logic Programming",http://arxiv.org/abs/2011.05457v1 -"Forecasting Emergency Department Capacity Constraints for COVID - Isolation Beds",http://arxiv.org/abs/2011.06058v1 -"Matrix Moments in a Real, Doubly Correlated Algebraic Generalization of - the Wishart Model",http://dx.doi.org/10.1088/1751-8121/abe428 -A First Look at COVID-19 Messages on WhatsApp in Pakistan,http://arxiv.org/abs/2011.09145v2 -Screening for breakthroughs,http://arxiv.org/abs/2011.10090v7 -Collaborative Storytelling with Large-scale Neural Language Models,http://arxiv.org/abs/2011.10208v1 -Modeling the Evolution of Retina Neural Network,http://arxiv.org/abs/2011.12448v2 -"Enhancement of giant refrigerant capacity in Ho$_{1-x}$Gd$_{x}$B$_{2}$ - alloys (0.1 $\leq$ x $\leq$ 0.4)",http://dx.doi.org/10.1016/j.jallcom.2021.158881 -"Multi-task MR Imaging with Iterative Teacher Forcing and Re-weighted - Deep Learning",http://arxiv.org/abs/2011.13614v1 -"Revisiting the progenitor of the low-luminosity type II-plateau - supernova, SN 2008bk",http://dx.doi.org/10.1051/0004-6361/202039546 
-CTRLsum: Towards Generic Controllable Text Summarization,http://arxiv.org/abs/2012.04281v1 -"Contribution of Secondary Neutrinos from Line-of-sight Cosmic Ray - Interactions to the IceCube Diffuse Astrophysical Flux",http://dx.doi.org/10.3847/1538-4357/abf830 -Near Real-Time Social Distance Estimation in London,http://arxiv.org/abs/2012.07751v4 -"Electric Vehicle Aggregator as an Automatic Reserves Provider in the - European Market Setting",http://arxiv.org/abs/2012.11158v1 -"Nonreversible MCMC from conditional invertible transforms: a complete - recipe with convergence guarantees",http://arxiv.org/abs/2012.15550v2 -Behavior Change in Response to Subreddit Bans and External Events,http://arxiv.org/abs/2101.01793v1 -Cybersecurity of Industrial Cyber-Physical Systems: A Review,http://arxiv.org/abs/2101.03564v1 -Droplet Splashing on Rough Surfaces,http://dx.doi.org/10.1103/PhysRevFluids.6.043604 -Ajalon: Simplifying the Authoring of Wearable Cognitive Assistants,http://arxiv.org/abs/2101.05766v1 -"Personalised Recommendations in Mental Health Apps: The Impact of - Autonomy and Data Sharing",http://arxiv.org/abs/2101.08375v1 -Collaborative Teacher-Student Learning via Multiple Knowledge Transfer,http://arxiv.org/abs/2101.08471v2 -"Strong In-plane Anisotropy in the Electronic Properties of Doped - Transition Metal Dichalcogenides exhibited in W1-xNbxS2",http://dx.doi.org/10.1103/PhysRevB.103.245410 -Local Coherence of Hearts Associated with Thomason Filtrations,http://arxiv.org/abs/2101.12064v3 -Learning to Isolate Muons,http://dx.doi.org/10.1007/JHEP10(2021)200 -Exploring the Limits of Few-Shot Link Prediction in Knowledge Graphs,http://arxiv.org/abs/2102.03419v1 -"Applications of Teaching Secondary Mathematics in Undergraduate - Mathematics Courses",http://arxiv.org/abs/2102.04537v1 -Delayed Radio Flares from a Tidal Disruption Event,http://dx.doi.org/10.1038/s41550-021-01300-8 -"Theoretical limits in detachment strength for axisymmetric bi-material - adhesives",http://arxiv.org/abs/2102.11324v3 -Sentiment Analysis of Persian-English Code-mixed Texts,http://arxiv.org/abs/2102.12700v1 -"Eliciting and Analysing Users' Envisioned Dialogues with Perfect Voice - Assistants",http://dx.doi.org/10.1145/3411764.3445536 -Model-Agnostic Defense for Lane Detection against Adversarial Attack,http://arxiv.org/abs/2103.00663v1 -Towards Personalized Federated Learning,http://dx.doi.org/10.1109/TNNLS.2022.3160699 -"Emotion Ratings: How Intensity, Annotation Confidence and Agreements are - Entangled",http://arxiv.org/abs/2103.01667v1 -Remember What You Want to Forget: Algorithms for Machine Unlearning,http://arxiv.org/abs/2103.03279v2 -"X-Ray Scattering from Light-Driven Spin Fluctuations in a Doped Mott - Insulator",http://dx.doi.org/10.1038/s42005-021-00715-z -Extending Contrastive Learning to Unsupervised Coreset Selection,http://arxiv.org/abs/2103.03574v2 -"There Once Was a Really Bad Poet, It Was Automated but You Didn't Know - It",http://arxiv.org/abs/2103.03775v1 -On single-point inversions of magnetic dipole lines in the corona,http://dx.doi.org/10.3847/1538-4357/abebd8 -Knots are Generic Stable Phases in Semiflexible Polymers,http://dx.doi.org/10.1021/acs.macromol.0c02584 -"Identification of neutrino bursts associated to supernovae with - Real-time Test Statistic (RTS$^2$) method",http://dx.doi.org/10.1051/0004-6361/202141305 -"Contribution of $SU(3)$ quadratic Casimir squared to rotational bands in - the interacting boson model",http://arxiv.org/abs/2103.10822v1 -"Let's Ask Students About Their 
Programs, Automatically",http://dx.doi.org/10.1109/ICPC52881.2021.00054 -Macroscopic Magneto-Chiroptical Metasurfaces,http://dx.doi.org/10.1063/5.0050797 -SD-VEC: Software-Defined Vehicular Edge Computing with Ultra-Low Latency,http://arxiv.org/abs/2103.14225v1 -"Misinformation Warning Labels: Twitter's Soft Moderation Effects on - COVID-19 Vaccine Belief Echoes",http://arxiv.org/abs/2104.00779v1 -"Monte Carlo execution time estimation for Privacy-preserving Distributed - Function Evaluation protocols",http://arxiv.org/abs/2104.01281v1 -"Adaptive Mutual Supervision for Weakly-Supervised Temporal Action - Localization",http://arxiv.org/abs/2104.02357v1 -Probabilistic Box Embeddings for Uncertain Knowledge Graph Reasoning,http://arxiv.org/abs/2104.04597v1 -Latent-Optimized Adversarial Neural Transfer for Sarcasm Detection,http://arxiv.org/abs/2104.09261v3 -Well-posedness of Hibler's dynamical sea-ice model,http://dx.doi.org/10.1007/s00332-022-09803-y -Analyzing COVID-19 Tweets with Transformer-based Language Models,http://arxiv.org/abs/2104.10259v3 -"The external pallidum: think locally, act globally",http://arxiv.org/abs/2104.10795v2 -Weyl Semimetal Made Ideal with a Crystal of Raman Light and Atoms,http://dx.doi.org/10.1016/j.scib.2021.04.038 -"Critical collapse of a spherically symmetric ultrarelativistic fluid in - $2+1$ dimensions",http://dx.doi.org/10.1103/PhysRevD.103.124055 -"An Algorithm to Effect Prompt Termination of Myopic Local Search on - Kauffman-s NK Landscape",http://arxiv.org/abs/2104.12620v2 -Neutrino events within muon bundles at neutrino telescopes,http://dx.doi.org/10.1016/j.astropartphys.2021.102646 -MergeDistill: Merging Pre-trained Language Models using Distillation,http://arxiv.org/abs/2106.02834v1 -"Generate, Annotate, and Learn: NLP with Synthetic Text",http://arxiv.org/abs/2106.06168v3 -"Geodesic structure and quasinormal modes of a tidally perturbed - spacetime",http://dx.doi.org/10.1103/PhysRevD.104.024004 -Causal State Updates in Real Scalar Quantum Field Theory,http://dx.doi.org/10.1103/PhysRevD.105.025003 -"Knowledgeable or Educated Guess? Revisiting Language Models as Knowledge - Bases",http://arxiv.org/abs/2106.09231v1 -Label prompt for multi-label text classification,http://arxiv.org/abs/2106.10076v2 -Graceful Degradation and Related Fields,http://arxiv.org/abs/2106.11119v2 -"CharacterChat: Supporting the Creation of Fictional Characters through - Conversation and Progressive Manifestation with a Chatbot",http://dx.doi.org/10.1145/3450741.3465253 -Mixtures of Deep Neural Experts for Automated Speech Scoring,http://dx.doi.org/10.21437/Interspeech.2020-1055 -Dr. 
Watson type Artificial Intellect (AI) Systems,http://arxiv.org/abs/2106.13322v1
-"Reformulation of a likelihood approach to fake-lepton estimation in the
-  framework of Bayesian inference",http://dx.doi.org/10.1016/j.nima.2021.165939
-"Mechanism of electron-beam manipulation of single dopant atoms in
-  silicon",http://dx.doi.org/10.1021/acs.jpcc.1c03549
-"Efficient and tunable blue light generation using lithium niobate
-  nonlinear photonics",http://arxiv.org/abs/2107.01171v1
-Interpretation of neutral charm mesons near threshold as unparticles,http://dx.doi.org/10.1103/PhysRevLett.128.032002
-An Initial Investigation of Non-Native Spoken Question-Answering,http://arxiv.org/abs/2107.04691v1
-"Doping-induced dielectric catastrophe prompts free-carrier release in
-  organic semiconductors",http://arxiv.org/abs/2107.05920v1
-"Mixed reality technologies for people with dementia: Participatory
-  evaluation methods",http://arxiv.org/abs/2107.07336v1
-Group Contrastive Self-Supervised Learning on Graphs,http://arxiv.org/abs/2107.09787v1
-"Preliminary investigation into how limb choice affects kinesthetic
-  perception",http://arxiv.org/abs/2107.11174v1
-Semantic Communications for Speech Recognition,http://arxiv.org/abs/2107.11190v2
-"Calculation of kinetic parameters $β_{\mathit{eff}}$ and $Λ$
-  with modified open source Monte Carlo code OpenMC(TD)",http://arxiv.org/abs/2107.11197v1
-Model Checking Algorithms for Hyperproperties,http://dx.doi.org/10.1007/978-3-030-67067-2_1
-"Segmentation in Style: Unsupervised Semantic Image Segmentation with
-  Stylegan and CLIP",http://arxiv.org/abs/2107.12518v2
-Self-Driving Cars and Driver Alertness,http://arxiv.org/abs/2107.14036v1
-"Observation of non-Hermitian topological Anderson insulator in quantum
-  dynamics",http://dx.doi.org/10.1038/s41467-022-30938-9
-"Probing Virtual ALPs by Precision Phase Measurements: Time-Varying
-  Magnetic Field Background",http://dx.doi.org/10.1088/1475-7516/2023/04/036
-Uniform Sampling over Episode Difficulty,http://arxiv.org/abs/2108.01662v2
-Constraints on the very high energy Gamma-Ray emission with HAWC,http://dx.doi.org/10.22323/1.395.0921
-On Azadkia-Chatterjee's conditional dependence coefficient,http://arxiv.org/abs/2108.06827v2
-CIGLI: Conditional Image Generation from Language & Image,http://arxiv.org/abs/2108.08955v1
-"Gamma-ray and Optical Observations of Repeating Fast Radio Bursts with
-  VERITAS",http://dx.doi.org/10.22323/1.395.0857
-Heavy QCD Axion at Belle II: Displaced and Prompt Signals,http://dx.doi.org/10.1103/PhysRevD.105.L071701
-"CoverTheFace: face covering monitoring and demonstrating using deep
-  learning and statistical shape analysis",http://arxiv.org/abs/2108.10430v1
-"Azimuthal correlations of D mesons with charged particles in simulations
-  with the ALICE experiment",http://dx.doi.org/10.3390/particles4040037
-"Rota's program on algebraic operators, rewriting systems and
-  Gröbner-Shirshov bases",http://arxiv.org/abs/2108.11823v1
-N24News: A New Dataset for Multimodal News Classification,http://arxiv.org/abs/2108.13327v4
-"From Movement Kinematics to Object Properties: Online Recognition of
-  Human Carefulness",http://dx.doi.org/10.1007/978-3-030-90525-5_6
-"ConQX: Semantic Expansion of Spoken Queries for Intent Detection based
-  on Conditioned Text Generation",http://arxiv.org/abs/2109.00729v1
-"Simultaneous quantification and changepoint detection of point source
-  gas emissions using recursive Bayesian inference",http://arxiv.org/abs/2109.01603v1
-Data Efficient Masked Language Modeling for Vision and Language,http://arxiv.org/abs/2109.02040v1
-"Detection of Insider Threats using Artificial Intelligence and
-  Visualisation",http://dx.doi.org/10.1109/NetSoft48620.2020.9165337
-"Pitch Angle Anisotropy Controls Particle Acceleration and Cooling in
-  Radiative Relativistic Plasma Turbulence",http://dx.doi.org/10.1103/PhysRevLett.127.255102
-A Recipe For Arbitrary Text Style Transfer with Large Language Models,http://arxiv.org/abs/2109.03910v4
-"BERT, mBERT, or BiBERT? A Study on Contextualized Embeddings for Neural
-  Machine Translation",http://arxiv.org/abs/2109.04588v1
-A Digital Forensics Investigation of a Smart Scale IoT Ecosystem,http://arxiv.org/abs/2109.05518v1
-"$\rm{Λ^{+}_{c}}$ production cross section in pp and p--Pb
-  collisions down to $p_{\rm T}$ = 0 at $\sqrt{s_{\rm NN}}$ = 5.02 TeV measured
-  with ALICE",http://arxiv.org/abs/2109.05949v1
-Radiogenic neutron background in reactor neutrino experiments,http://dx.doi.org/10.1103/PhysRevD.104.092006
-Can Language Models be Biomedical Knowledge Bases?,http://arxiv.org/abs/2109.07154v1
-"The Language Model Understood the Prompt was Ambiguous: Probing
-  Syntactic Uncertainty Through Generation",http://arxiv.org/abs/2109.07848v1
-Towards Zero-Label Language Learning,http://arxiv.org/abs/2109.09193v1
-Detection of Small Scale Components in Power Law Spectra,http://arxiv.org/abs/2109.10032v1
-"Finite-key Analysis for Quantum Conference Key Agreement with Asymmetric
-  Channels",http://dx.doi.org/10.1088/2058-9565/ac1e00
-SD-QA: Spoken Dialectal Question Answering for the Real World,http://arxiv.org/abs/2109.12072v1
-"High-energy spectra of the atmospheric neutrinos: predictions and
-  measurements",http://arxiv.org/abs/2109.13000v2
-Probing Particle Acceleration through Gamma-ray Solar Flare Observations,http://arxiv.org/abs/2109.13535v1
-"Radio Loud vs. Radio Quiet Gamma-ray Bursts: the Role of Binary
-  Progenitors",http://dx.doi.org/10.3847/1538-4357/ac54b3
-Grounding Predicates through Actions,http://arxiv.org/abs/2109.14718v2
-Continuous Compliance using Calculated Event Log Layers,http://arxiv.org/abs/2110.00411v1
-A powerful e^{+-} outflow driven by a proto-strange quark star,http://dx.doi.org/10.3847/1538-4357/ac2d2f
-Revisiting Self-Training for Few-Shot Learning of Language Model,http://arxiv.org/abs/2110.01256v1
-"AI Chains: Transparent and Controllable Human-AI Interaction by Chaining
-  Large Language Model Prompts",http://arxiv.org/abs/2110.01691v3
-Traffic control Management System and Collision Avoidance System,http://arxiv.org/abs/2110.01830v1
-How BPE Affects Memorization in Transformers,http://arxiv.org/abs/2110.02782v2
-Workload-Aware Materialization of Junction Trees,http://arxiv.org/abs/2110.03475v1
-Inferring Offensiveness In Images From Natural Language Supervision,http://arxiv.org/abs/2110.04222v1
-"GRB variabilities and following gravitational waves induced by
-  gravitational instability in NDAFs",http://dx.doi.org/10.1093/mnras/stab2989
-"Noether charges: the link between empirical significance of symmetries
-  and non-separability",http://arxiv.org/abs/2110.07208v1
-"Establishing the Non-Primordial Origin of Black Hole-Neutron Star
-  Mergers",http://dx.doi.org/10.3847/1538-4357/ac66da
-Gravitational Waves from GRB Core Spindown,http://dx.doi.org/10.1093/mnras/stab2888
-"Multitask Adaptation by Retrospective Exploration with Learned World
-  Models",http://arxiv.org/abs/2110.13241v1
-Distilling Relation Embeddings from Pre-trained Language Models,http://dx.doi.org/10.18653/v1/2021.emnlp-main.712
-Template Filling for Controllable Commonsense Reasoning,http://arxiv.org/abs/2111.00539v3
-"Recent Advances in Natural Language Processing via Large Pre-Trained
-  Language Models: A Survey",http://arxiv.org/abs/2111.01243v1
-Status of the DEAP-3600 experiment,http://dx.doi.org/10.1088/1742-6596/2156/1/012070
-Sexism Identification in Tweets and Gabs using Deep Neural Networks,http://arxiv.org/abs/2111.03612v1
-Machine-in-the-Loop Rewriting for Creative Image Captioning,http://arxiv.org/abs/2111.04193v2
-"Photophysics of Deep Blue Acridane- and Benzonitrile-Based Emitter
-  Employing Thermally Activated Delayed Fluorescence",http://dx.doi.org/10.1021/acs.jpcc.8b08716
-Shifts of prepotentials (with an appendix by Michele Vergne),http://dx.doi.org/10.21468/SciPostPhys.12.5.177
-Solving Linear Algebra by Program Synthesis,http://arxiv.org/abs/2111.08171v1
-NLP based grievance redressal system for Indian Railways,http://arxiv.org/abs/2111.08999v1
-Segmentation of Lung Tumor from CT Images using Deep Supervision,http://arxiv.org/abs/2111.09262v1
-The strangest lifetime: A bizarre story of $τ(Ω_c^0)$,http://dx.doi.org/10.1016/j.scib.2021.11.025
-"Universal Captioner: Inducing Content-Style Separation in
-  Vision-and-Language Model Training",http://arxiv.org/abs/2111.12727v2
-SQUID: Deep Feature In-Painting for Unsupervised Anomaly Detection,http://arxiv.org/abs/2111.13495v3
-Local Edge Dynamics and Opinion Polarization,http://dx.doi.org/10.1145/3539597.3570442
-"Chemical Identification and Indexing in PubMed Articles via BERT and
-  Text-to-Text Approaches",http://arxiv.org/abs/2111.15622v1
-"Talking about responsible quantum: Awareness is the absolute minimum...
-  that we need to do",http://arxiv.org/abs/2112.01378v4
-"Takagi-Sugeno Fuzzy Modeling and Control for Effective Robotic
-  Manipulator Motion",http://dx.doi.org/10.32604/cmc.2022.022451
-"Dedicated Triggers for Displaced Jets using Timing Information from
-  Electromagnetic Calorimeter at HL-LHC",http://dx.doi.org/10.1007/JHEP08(2022)254
-Step-unrolled Denoising Autoencoders for Text Generation,http://arxiv.org/abs/2112.06749v3
-"Bethe-Heitler signature in proton synchrotron models for gamma-ray
-  bursts",http://dx.doi.org/10.3847/1538-4357/ac85b7
-"Data-driven chimney fire risk prediction using machine learning and
-  point process tools",http://arxiv.org/abs/2112.07257v1
-Intrinsic signal optoretinography of dark adaptation kinetics,http://arxiv.org/abs/2112.07838v1
-Know Thy Strengths: Comprehensive Dialogue State Tracking Diagnostics,http://arxiv.org/abs/2112.08321v3
-"Four Attitudes Towards Singularities in the Search for a Theory of
-  Quantum Gravity",http://arxiv.org/abs/2112.08531v1
-"NewsClaims: A New Benchmark for Claim Detection from News with Attribute
-  Knowledge",http://arxiv.org/abs/2112.08544v4
-Reconsidering the Past: Optimizing Hidden States in Language Models,http://arxiv.org/abs/2112.08653v1
-Reframing Human-AI Collaboration for Generating Free-Text Explanations,http://arxiv.org/abs/2112.08674v2
-Few-Shot Semantic Parsing with Language Models Trained On Code,http://arxiv.org/abs/2112.08696v2
-Gravitational Waves and Conformal Time Transformations,http://dx.doi.org/10.1016/j.aop.2022.168833
-"Interplay between Resonant Leptogenesis, Neutrinoless Double Beta Decay
-  and Collider Signals in a Model with Flavor and CP Symmetries",http://arxiv.org/abs/2112.09710v2
-"Primordial black holes and scalar induced gravitational waves from the
-  $E$ model with a Gauss-Bonnet term",http://dx.doi.org/10.1103/PhysRevD.105.063539
-Depletion of atmospheric neutrino fluxes from parton energy loss,http://dx.doi.org/10.1016/j.physletb.2022.137541
-Distributed Machine Learning and the Semblance of Trust,http://arxiv.org/abs/2112.11040v1
-Domain-Aware Continual Zero-Shot Learning,http://arxiv.org/abs/2112.12989v1
-A General Glivenko-Gödel Theorem for Nuclei,http://dx.doi.org/10.4204/EPTCS.351.4
-On some Foundational Aspects of Human-Centered Artificial Intelligence,http://arxiv.org/abs/2112.14480v1
-"Buy Now, Pay Later (BNPL)...On Your Credit Card",http://arxiv.org/abs/2201.01758v5
-"Multimodal Representations Learning Based on Mutual Information
-  Maximization and Minimization and Identity Embedding for Multimodal Sentiment
-  Analysis",http://arxiv.org/abs/2201.03969v2
-Is magnetically dominated outflow required to explain GRBs?,http://dx.doi.org/10.1093/mnras/stac757
-"Storm Surges as Seen by Coastal and Spaceborne Radars: Case Studies in
-  British Columbia",http://arxiv.org/abs/2201.06770v1
-"Language Models as Zero-Shot Planners: Extracting Actionable Knowledge
-  for Embodied Agents",http://arxiv.org/abs/2201.07207v2
-A Latent-Variable Model for Intrinsic Probing,http://arxiv.org/abs/2201.08214v2
-Exploring the acceptability of digital contact tracing for UK students,http://arxiv.org/abs/2201.08650v1
-NAS-VAD: Neural Architecture Search for Voice Activity Detection,http://dx.doi.org/10.21437/Interspeech.2022-975
-Endpoint Detection for Streaming End-to-End Multi-talker ASR,http://arxiv.org/abs/2201.09979v1
-"Investigating the impact of free energy based behavior on human in
-  human-agent interaction",http://arxiv.org/abs/2201.10164v1
-Describing Differences between Text Distributions with Natural Language,http://arxiv.org/abs/2201.12323v2
-Security Analysis of Mobile Banking Application in Qatar,http://arxiv.org/abs/2202.00582v2
-TRGP: Trust Region Gradient Projection for Continual Learning,http://arxiv.org/abs/2202.02931v1
-"Mapping the power-law decay of high-harmonic spectra from laser-plasma
-  interactions",http://dx.doi.org/10.1063/5.0087854
-Neutrinos from Gamma-ray Bursts,http://arxiv.org/abs/2202.06480v1
-Student Dangerous Behavior Detection in School,http://arxiv.org/abs/2202.09550v2
-Retrieval Augmented Classification for Long-Tail Visual Recognition,http://arxiv.org/abs/2202.11233v1
-Capturing Failures of Large Language Models via Human Cognitive Biases,http://arxiv.org/abs/2202.12299v2
-Time-coded Spiking Fourier Transform in Neuromorphic Hardware,http://dx.doi.org/10.1109/TC.2022.3162708
-"AugESC: Dialogue Augmentation with Large Language Models for Emotional
-  Support Conversation",http://arxiv.org/abs/2202.13047v3
-Audio Self-supervised Learning: A Survey,http://arxiv.org/abs/2203.01205v1
-"Birth cluster simulations of planetary systems with multiple
-  super-Earths: initial conditions for white dwarf pollution drivers",http://dx.doi.org/10.1093/mnras/stac602
-Theory of polar domains in moiré heterostructures,http://dx.doi.org/10.1103/PhysRevB.105.235445
-"ClarET: Pre-training a Correlation-Aware Context-To-Event Transformer
-  for Event-Centric Generation and Classification",http://arxiv.org/abs/2203.02225v2
-"Detection of Parasitic Eggs from Microscopy Images and the emergence of
-  a new dataset",http://arxiv.org/abs/2203.02940v1
-"Bayesian Bilinear Neural Network for Predicting the Mid-price Dynamics
-  in Limit-Order Book Markets",http://arxiv.org/abs/2203.03613v2
-"Evaluating feasibility of batteries for second-life applications using
-  machine learning",http://arxiv.org/abs/2203.04249v2
-ELLE: Efficient Lifelong Pre-training for Emerging Data,http://arxiv.org/abs/2203.06311v2
-"PromptChainer: Chaining Large Language Model Prompts through Visual
-  Programming",http://arxiv.org/abs/2203.06566v1
-Track-Based Triggers for Exotic Signatures,http://arxiv.org/abs/2203.07314v2
-In-Context Learning for Few-Shot Dialogue State Tracking,http://arxiv.org/abs/2203.08568v3
-"Speaker Information Can Guide Models to Better Inductive Biases: A Case
-  Study On Predicting Code-Switching",http://arxiv.org/abs/2203.08979v1
-"Reinforcement Learning based Voice Interaction to Clear Path for Robots
-  in Elevator Environment",http://arxiv.org/abs/2203.09844v2
-Probing Factually Grounded Content Transfer with Factual Ablation,http://arxiv.org/abs/2203.10133v2
-"Telling Stories from Computational Notebooks: AI-Assisted Presentation
-  Slides Creation for Presenting Data Science Work",http://dx.doi.org/10.1145/3491102.3517615
-"A Prompt Array Keeps the Bias Away: Debiasing Vision-Language Models
-  with Adversarial Learning",http://arxiv.org/abs/2203.11933v4
-Unified Structure Generation for Universal Information Extraction,http://arxiv.org/abs/2203.12277v1
-"Secure Multi-Party Delegated Authorisation For Access and Sharing of
-  Electronic Health Records",http://arxiv.org/abs/2203.12837v1
-"Can Unsupervised Knowledge Transfer from Social Discussions Help
-  Argument Mining?",http://arxiv.org/abs/2203.12881v1
-Make-A-Scene: Scene-Based Text-to-Image Generation with Human Priors,http://arxiv.org/abs/2203.13131v1
-STaR: Bootstrapping Reasoning With Reasoning,http://arxiv.org/abs/2203.14465v2
-"FlexR: Few-shot Classification with Language Embeddings for Structured
-  Reporting of Chest X-rays",http://arxiv.org/abs/2203.15723v2
-"How Pre-trained Language Models Capture Factual Knowledge? A
-  Causal-Inspired Analysis",http://arxiv.org/abs/2203.16747v1
-"BioBART: Pretraining and Evaluation of A Biomedical Generative Language
-  Model",http://arxiv.org/abs/2204.03905v2
-"Show, Don't Tell: Demonstrations Outperform Descriptions for
-  Schema-Guided Task-Oriented Dialogue",http://dx.doi.org/10.18653/v1/2022.naacl-main.336
-"Domain-Oriented Prefix-Tuning: Towards Efficient and Generalizable
-  Fine-tuning for Zero-Shot Dialogue Summarization",http://arxiv.org/abs/2204.04362v1
-UniDU: Towards A Unified Generative Dialogue Understanding Framework,http://arxiv.org/abs/2204.04637v2
-"Generative Biomedical Entity Linking via Knowledge Base-Guided
-  Pre-training and Synonyms-Aware Fine-tuning",http://arxiv.org/abs/2204.05164v3
-"What do Toothbrushes do in the Kitchen? How Transformers Think our World
-  is Structured",http://arxiv.org/abs/2204.05673v1
-Impossible Triangle: What's Next for Pre-trained Language Models?,http://arxiv.org/abs/2204.06130v2
-"Rows from Many Sources: Enriching row completions from Wikidata with a
-  pre-trained Language Model",http://arxiv.org/abs/2204.07014v1
-"Where to Go for the Holidays: Towards Mixed-Type Dialogs for
-  Clarification of User Goals",http://arxiv.org/abs/2204.07299v1
-"Polling Latent Opinions: A Method for Computational Sociolinguistics
-  Using Transformer Language Models",http://arxiv.org/abs/2204.07483v2
-Wound Severity Classification using Deep Neural Network,http://arxiv.org/abs/2204.07942v1
-Opal: Multimodal Image Generation for News Illustration,http://arxiv.org/abs/2204.09007v3
-Can Voters Detect Errors on Their Printed Ballots? Absolutely,http://arxiv.org/abs/2204.09780v1
-"Nanodiamond quantum sensors reveal temperature variation associated to
-  hippocampal neurons firing",http://dx.doi.org/10.1002/advs.202202014
-"Subscriptions and external links help drive resentful users to
-  alternative and extremist YouTube videos",http://arxiv.org/abs/2204.10921v2
-Radio sky reveals primordial electron-proton interactions,http://arxiv.org/abs/2204.13711v1
-A Digital Twin Framework for Cyber Security in Cyber-Physical Systems,http://arxiv.org/abs/2204.13859v1
-"QRelScore: Better Evaluating Generated Questions with Deeper
-  Understanding of Context-aware Relevance",http://arxiv.org/abs/2204.13921v1
-"EFT Diagrammatica II: Tracing the UV origin of bosonic D6 CPV and D8
-  SMEFT operators",http://dx.doi.org/10.1007/JHEP08(2022)190
-"A Polyhedral Approach to Least Cost Influence Maximization in Social
-  Networks",http://arxiv.org/abs/2205.01274v2
-Finding patterns in Knowledge Attribution for Transformers,http://arxiv.org/abs/2205.01366v2
-"XLTime: A Cross-Lingual Knowledge Transfer Framework for Temporal
-  Expression Extraction",http://arxiv.org/abs/2205.01757v1
-Implicit N-grams Induced by Recurrence,http://arxiv.org/abs/2205.02724v1
-The Drift of #MyBodyMyChoice Discourse on Twitter,http://dx.doi.org/10.1145/3501247.3531570
-Extracting Latent Steering Vectors from Pretrained Language Models,http://arxiv.org/abs/2205.05124v1
-Inferential Tasks as an Evaluation Technique for Visualization,http://arxiv.org/abs/2205.05712v1
-Can Foundation Models Wrangle Your Data?,http://arxiv.org/abs/2205.09911v2
-"All Birds with One Stone: Multi-task Text Classification for Efficient
-  Inference with One Forward Pass",http://arxiv.org/abs/2205.10744v1
-"Instruction Induction: From Few Examples to Natural Language Task
-  Descriptions",http://arxiv.org/abs/2205.10782v1
-The Curious Case of Control,http://arxiv.org/abs/2205.12113v2
-"Secuer: ultrafast, scalable and accurate clustering of single-cell
-  RNA-seq data",http://dx.doi.org/10.1371/journal.pcbi.1010753
-Are Large Pre-Trained Language Models Leaking Your Personal Information?,http://arxiv.org/abs/2205.12628v2
-"Optimal Multi-robot Formations for Relative Pose Estimation Using Range
-  Measurements",http://arxiv.org/abs/2205.14263v1
-"Exploring students' backtracking behaviors in digital textbooks and its
-  relationship to learning styles",http://arxiv.org/abs/2205.14822v2
-Unbalanced CO-Optimal Transport,http://arxiv.org/abs/2205.14923v3
-NEWTS: A Corpus for News Topic-Focused Summarization,http://arxiv.org/abs/2205.15661v1
-"Visual Clues: Bridging Vision and Language Foundations for Image
-  Paragraph Captioning",http://arxiv.org/abs/2206.01843v2
-"Long-term quantification and characterisation of wind farm noise
-  amplitude modulation",http://dx.doi.org/10.1016/j.measurement.2021.109678
-PrivHAR: Recognizing Human Actions From Privacy-preserving Lens,http://dx.doi.org/10.1007/978-3-031-19772-7_19
-Quantum Advantage in Cryptography,http://dx.doi.org/10.2514/1.J062267
-Competing magnetic phases in LnSbTe (Ln = Ho and Tb),http://dx.doi.org/10.1021/acs.inorgchem.2c01711
-"Retrospective, Observational Studies for Estimating Vaccine Effects on
-  the Secondary Attack Rate of SARS-CoV-2",http://arxiv.org/abs/2206.07495v1
-Evaluating and Inducing Personality in Pre-trained Language Models,http://arxiv.org/abs/2206.07550v2
-How Adults Understand What Young Children Say,http://arxiv.org/abs/2206.07807v3
-"Self-Generated In-Context Learning: Leveraging Auto-regressive Language
-  Models as a Demonstration Generator",http://arxiv.org/abs/2206.08082v1
-Towards the Generation of Musical Explanations with GPT-3,http://arxiv.org/abs/2206.08264v1
-"Object Localization Assistive System Based on CV and Vibrotactile
-  Encoding",http://arxiv.org/abs/2206.09432v1
-"DualCoOp: Fast Adaptation to Multi-Label Recognition with Limited
-  Annotations",http://arxiv.org/abs/2206.09541v1
-"Fewer Errors, but More Stereotypes? The Effect of Model Size on Gender
-  Bias",http://arxiv.org/abs/2206.09860v1
-"Accounting for objective lens autofluorescence in quantum emitter
-  measurements",http://arxiv.org/abs/2206.10705v1
-Faint debris disk peering through superflare light echo,http://dx.doi.org/10.3847/2041-8213/ac7b24
-Numerical Simulations of Dark Matter Admixed Neutron Star Binaries,http://arxiv.org/abs/2206.10887v1
-"Human-AI communication for human-human communication: Applying
-  interpretable unsupervised anomaly detection to executive coaching",http://arxiv.org/abs/2206.10987v1
-Hierarchical nuclear norm penalization for multi-view data,http://arxiv.org/abs/2206.12891v1
-"Adaptive Multi-view Rule Discovery for Weakly-Supervised Compatible
-  Products Prediction",http://dx.doi.org/10.1145/3534678.3539208
-"Expressive, Variable, and Controllable Duration Modelling in TTS",http://arxiv.org/abs/2206.14165v1
-Strong Lensing Source Reconstruction Using Continuous Neural Fields,http://arxiv.org/abs/2206.14820v2
-"Milestones of research activity in quantum computing: EPS grand
-  challenges",http://arxiv.org/abs/2207.02857v1
-New Local Explorations of the Unitary Coupled Cluster Energy Landscape,http://arxiv.org/abs/2207.04105v2
-"TIPS: Transaction Inclusion Protocol with Signaling in DAG-based
-  Blockchain",http://arxiv.org/abs/2207.04841v1
-Exploring Length Generalization in Large Language Models,http://arxiv.org/abs/2207.04901v2
-Metal Borohydrides as high-$T_{c}$ ambient pressure superconductors,http://dx.doi.org/10.1103/PhysRevB.107.L060501
-Convolutional Bypasses Are Better Vision Transformer Adapters,http://arxiv.org/abs/2207.07039v3
-"Uncovering a hidden black hole binary from secular eccentricity
-  variations of a tertiary star",http://dx.doi.org/10.1103/PhysRevD.106.123010
-Language Model Cascades,http://arxiv.org/abs/2207.10342v2
-Democratizing Ethical Assessment of Natural Language Generation Models,http://arxiv.org/abs/2207.10576v2
-"Photons' scattering in a relativistic plasma with velocity shear:
-  generation of high energy power-law spectra",http://dx.doi.org/10.3847/2041-8213/acaefa
-PirouNet: Creating Dance through Artist-Centric Deep Learning,http://arxiv.org/abs/2207.12126v2
-"Rapid localization of gravitational wave sources from compact binary
-  coalescences using deep learning",http://arxiv.org/abs/2207.14522v1
-Testing Relational Understanding in Text-Guided Image Generation,http://arxiv.org/abs/2208.00005v1
-"Energetic particle loss mechanisms in reactor-scale equilibria close to
-  quasisymmetry",http://dx.doi.org/10.1088/1741-4326/ac9b07
-"EURADOS Working Group 6, Computational Dosimetry, a history of promoting
-  good practice via intercomparisons and training",http://dx.doi.org/10.1016/j.radmeas.2022.106829
-Aesthetic Bot: Interactively Evolving Game Maps on Twitter,http://arxiv.org/abs/2208.05017v2
-"RealityTalk: Real-Time Speech-Driven Augmented Presentation for AR Live
-  Storytelling",http://dx.doi.org/10.1145/3526113.3545702
-"Explainable Artificial Intelligence for Assault Sentence Prediction in
-  New Zealand",http://arxiv.org/abs/2208.06981v1
-FALSE: Fake News Automatic and Lightweight Solution,http://arxiv.org/abs/2208.07686v1
-ILLUME: Rationalizing Vision-Language Models through Human Interactions,http://arxiv.org/abs/2208.08241v4
-"Intention estimation from gaze and motion features for human-robot
-  shared-control object manipulation",http://arxiv.org/abs/2208.08688v1
-"Evaluating and Crafting Datasets Effective for Deep Learning With Data
-  Maps",http://arxiv.org/abs/2208.10033v2
-Open and hidden heavy-flavor production in small systems with ALICE,http://arxiv.org/abs/2208.10254v1
-Few-Shot Table-to-Text Generation with Prefix-Controlled Generator,http://arxiv.org/abs/2208.10709v1
-"FactMix: Using a Few Labeled In-domain Examples to Generalize to
-  Cross-domain Named Entity Recognition",http://arxiv.org/abs/2208.11464v2
-"Learning from Unlabeled 3D Environments for Vision-and-Language
-  Navigation",http://arxiv.org/abs/2208.11781v1
-"Building the Intent Landscape of Real-World Conversational Corpora with
-  Extractive Question-Answering Transformers",http://arxiv.org/abs/2208.12886v2
-"A Multi-Format Transfer Learning Model for Event Argument Extraction via
-  Variational Information Bottleneck",http://arxiv.org/abs/2208.13017v3
-Stock Market Prediction using Natural Language Processing -- A Survey,http://arxiv.org/abs/2208.13564v1
-PGNAA Spectral Classification of Metal with Density Estimations,http://dx.doi.org/10.1109/TNS.2023.3242626
-"Generation of arbitrary abruptly autofusing circular Airy Gaussian
-  vortex vector modes",http://arxiv.org/abs/2208.14638v1
-MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model,http://arxiv.org/abs/2208.15001v1
-Incorporating Task-specific Concept Knowledge into Script Learning,http://arxiv.org/abs/2209.00068v3
-"Exploring Effective Information Utilization in Multi-Turn Topic-Driven
-  Conversations",http://arxiv.org/abs/2209.00250v2
-"In conversation with Artificial Intelligence: aligning language models
-  with human values",http://arxiv.org/abs/2209.00731v2
-"Reconstructing Action-Conditioned Human-Object Interactions Using
-  Commonsense Knowledge Priors",http://arxiv.org/abs/2209.02485v1
-Time reflection and refraction in synthetic frequency dimension,http://dx.doi.org/10.1103/PhysRevResearch.5.L012046
-"Coherence, superposition, and Löwdin symmetric orthogonalization",http://dx.doi.org/10.1088/1751-8121/acec20
-"Text-Free Learning of a Natural Language Interface for Pretrained Face
-  Generators",http://arxiv.org/abs/2209.03953v1
-"Managing a blockchain-based platform ecosystem for industry-wide
-  adoption: The case of TradeLens",http://dx.doi.org/10.1016/j.techfore.2022.121981
-"Nanosecond Photoemission near the Potential Barrier of a Schottky
-  Emitter",http://dx.doi.org/10.1103/PhysRevApplied.19.014035
-Leveraging Language Foundation Models for Human Mobility Forecasting,http://arxiv.org/abs/2209.05479v2
-On the Relation between Sensitivity and Accuracy in In-context Learning,http://arxiv.org/abs/2209.07661v2
-Can There be Art Without an Artist?,http://arxiv.org/abs/2209.07667v2
-Truth and Falsity in Buridan's Bridge,http://arxiv.org/abs/2209.10625v2
-"Selecting Better Samples from Pre-trained LLMs: A Case Study on Question
-  Generation",http://arxiv.org/abs/2209.11000v1
-Spatial model personalization in Gboard,http://dx.doi.org/10.1145/3546737
-"Moral Mimicry: Large Language Models Produce Moral Rationalizations
-  Tailored to Political Identity",http://arxiv.org/abs/2209.12106v2
-WinoDict: Probing language models for in-context word acquisition,http://arxiv.org/abs/2209.12153v1
-"A Case Report On The ""A.I. Locked-In Problem"": social concerns with
-  modern NLP",http://arxiv.org/abs/2209.12687v1
-"Draw Your Art Dream: Diverse Digital Art Synthesis with Multimodal
-  Guided Diffusion",http://arxiv.org/abs/2209.13360v2
-"Co-Writing Screenplays and Theatre Scripts with Language Models: An
-  Evaluation by Industry Professionals",http://arxiv.org/abs/2209.14958v1
-"SmallCap: Lightweight Image Captioning Prompted with Retrieval
-  Augmentation",http://arxiv.org/abs/2209.15323v2
-"Multi-stage Progressive Compression of Conformer Transducer for
-  On-device Speech Recognition",http://dx.doi.org/10.21437/Interspeech.2022-10582
-"MALM: Mixing Augmented Language Modeling for Zero-Shot Machine
-  Translation",http://arxiv.org/abs/2210.00320v1
-"A Comprehensive Study of Bright Fermi-GBM Short Gamma-Ray Bursts: II.
-  Very Short Burst and Its Implications",http://dx.doi.org/10.3390/universe8100512
-Risk-graded Safety for Handling Medical Queries in Conversational AI,http://arxiv.org/abs/2210.00572v1
-"Dancing with the Unexpected and Beyond: The Use of AI Assistance in
-  Design Fiction Creation",http://arxiv.org/abs/2210.00829v1
-"Language Models Are Greedy Reasoners: A Systematic Formal Analysis of
-  Chain-of-Thought",http://arxiv.org/abs/2210.01240v4
-Enabling a Zero Trust Architecture in a 5G-enabled Smart Grid,http://arxiv.org/abs/2210.01739v2
-"clip2latent: Text driven sampling of a pre-trained StyleGAN using
-  denoising diffusion and CLIP",http://arxiv.org/abs/2210.02347v1
-Learning to Reason With Relational Abstractions,http://arxiv.org/abs/2210.02615v2
-"Rapid reconstruction of compact binary sources using meshfree
-  approximation",http://arxiv.org/abs/2210.02706v1
-Language Models are Multilingual Chain-of-Thought Reasoners,http://arxiv.org/abs/2210.03057v1
-CLIP model is an Efficient Continual Learner,http://arxiv.org/abs/2210.03114v1
-"SpeechUT: Bridging Speech and Text with Hidden-Unit for Encoder-Decoder
-  Based Speech-Text Pre-training",http://arxiv.org/abs/2210.03730v1
-"ConvFinQA: Exploring the Chain of Numerical Reasoning in Conversational
-  Finance Question Answering",http://arxiv.org/abs/2210.03849v1
-"Data-Efficiency with a Single GPU: An Exploration of Transfer Methods
-  for Small Language Models",http://arxiv.org/abs/2210.03871v1
-On some features of the solar proton event on 2021 October 28 (GLE73),http://dx.doi.org/10.1093/mnras/stac2843
-CORE: A Retrieve-then-Edit Framework for Counterfactual Data Generation,http://arxiv.org/abs/2210.04873v2
-"Probing Commonsense Knowledge in Pre-trained Language Models with
-  Sense-level Precision and Expanded Vocabulary",http://arxiv.org/abs/2210.06376v1
-Large Language Models are few(1)-shot Table Reasoners,http://arxiv.org/abs/2210.06710v2
-Explanations from Large Language Models Make Small Reasoners Better,http://arxiv.org/abs/2210.06726v1
-Patterns of Structural Reflection in the large-cardinal hierarchy,http://arxiv.org/abs/2210.07120v1
-"Enabling Classifiers to Make Judgements Explicitly Aligned with Human
-  Values",http://arxiv.org/abs/2210.07652v1
-"Language Generation Models Can Cause Harm: So What Can We Do About It?
-  An Actionable Survey",http://arxiv.org/abs/2210.07700v2
-On $\mathcal{I}$-covering images of metric spaces,http://arxiv.org/abs/2210.08544v1
-Systematicity in GPT-3's Interpretation of Novel English Noun Compounds,http://arxiv.org/abs/2210.09492v1
-"Language Does More Than Describe: On The Lack Of Figurative Speech in
-  Text-To-Image Models",http://arxiv.org/abs/2210.10578v1
-"DALLE-2 is Seeing Double: Flaws in Word-to-Concept Mapping in Text2Image
-  Models",http://arxiv.org/abs/2210.10606v1
-"TabLLM: Few-shot Classification of Tabular Data with Large Language
-  Models",http://arxiv.org/abs/2210.10723v2
-Pre-training Language Models with Deterministic Factual Knowledge,http://arxiv.org/abs/2210.11165v1
-"TANGO: Text-driven Photorealistic and Robust 3D Stylization via Lighting
-  Decomposition",http://arxiv.org/abs/2210.11277v2
-"Discriminatory and orthogonal feature learning for noise robust keyword
-  spotting",http://dx.doi.org/10.1109/LSP.2022.3203911
-Large Language Models Can Self-Improve,http://arxiv.org/abs/2210.11610v2
-"Boosting Natural Language Generation from Instructions with
-  Meta-Learning",http://arxiv.org/abs/2210.11617v1
-"Approaches to Identify Vulnerabilities to Misinformation: A Research
-  Agenda",http://arxiv.org/abs/2210.11647v1
-"Exploring The Landscape of Distributional Robustness for Question
-  Answering Models",http://arxiv.org/abs/2210.12517v1
-Code4Struct: Code Generation for Few-Shot Event Structure Prediction,http://arxiv.org/abs/2210.12810v2
-"""Covid vaccine is against Covid but Oxford vaccine is made at Oxford!""
-  Semantic Interpretation of Proper Noun Compounds",http://arxiv.org/abs/2210.13039v1
-Inducer-tuning: Connecting Prefix-tuning and Adapter-tuning,http://arxiv.org/abs/2210.14469v1
-"Improving Adversarial Robustness with Self-Paced Hard-Class Pair
-  Reweighting",http://arxiv.org/abs/2210.15068v2
-Gendered Mental Health Stigma in Masked Language Models,http://arxiv.org/abs/2210.15144v2
-"Can language models handle recursively nested grammatical structures? A
-  case study on comparing models and humans",http://arxiv.org/abs/2210.15303v3
-The Fisher-Rao Loss for Learning under Label Noise,http://dx.doi.org/10.1007/s41884-022-00076-8
-"Generate, Discriminate and Contrast: A Semi-Supervised Sentence
-  Representation Learning Framework",http://arxiv.org/abs/2210.16798v1
-"Pneg: Prompt-based Negative Response Generation for Dialogue Response
-  Selection Task",http://arxiv.org/abs/2210.17238v1
-Generating Sequences by Learning to Self-Correct,http://arxiv.org/abs/2211.00053v1
-"Learning a Condensed Frame for Memory-Efficient Video Class-Incremental
-  Learning",http://arxiv.org/abs/2211.00833v1
-"Audio Language Modeling using Perceptually-Guided Discrete
-  Representations",http://arxiv.org/abs/2211.01223v2
-"Contextual information integration for stance detection via
-  cross-attention",http://arxiv.org/abs/2211.01874v2
-Truthful Matching with Online Items and Offline Agents,http://arxiv.org/abs/2211.02004v1
-Do Users Write More Insecure Code with AI Assistants?,http://arxiv.org/abs/2211.03622v2
-Knowledge Retrieval for Robotic Cooking,http://arxiv.org/abs/2211.04524v2
-Detecting Euphemisms with Literal Descriptions and Visual Imagery,http://arxiv.org/abs/2211.04576v1
-"SETGen: Scalable and Efficient Template Generation Framework for
-  Groupwise Medical Image Registration",http://arxiv.org/abs/2211.05622v1
-"Optimizing Trigger-Level Track Reconstruction for Sensitivity to Exotic
-  Signatures",http://dx.doi.org/10.1007/JHEP02(2023)034
-The Sun and Space Weather,http://dx.doi.org/10.3390/atmos13111781
-"Retrieval-Augmented Generative Question Answering for Event Argument
-  Extraction",http://arxiv.org/abs/2211.07067v1
-Cloning Ideology and Style using Deep Learning,http://arxiv.org/abs/2211.07712v1
-Will Large-scale Generative Models Corrupt Future Datasets?,http://arxiv.org/abs/2211.08095v2
-"FolkScope: Intention Knowledge Graph Construction for E-commerce
-  Commonsense Discovery",http://arxiv.org/abs/2211.08316v2
-"Analyse der Entwicklungstreiber militärischer Schwarmdrohnen durch
-  Natural Language Processing",http://arxiv.org/abs/2211.09680v1
-"Audio Anti-spoofing Using a Simple Attention Module and Joint
-  Optimization Based on Additive Angular Margin Loss and Meta-learning",http://dx.doi.org/10.21437/Interspeech.2022-904
-"Bidirectional Generation of Structure and Properties Through a Single
-  Molecular Foundation Model",http://arxiv.org/abs/2211.10590v4
-Unsupervised Explanation Generation via Correct Instantiations,http://arxiv.org/abs/2211.11160v1
-TCBERT: A Technical Report for Chinese Topic Classification BERT,http://arxiv.org/abs/2211.11304v1
-"Is the Elephant Flying? Resolving Ambiguities in Text-to-Image
-  Generative Models",http://arxiv.org/abs/2211.12503v1
-Open-vocabulary Attribute Detection,http://arxiv.org/abs/2211.12914v2
-"Learning to Suggest Breaks: Sustainable Optimization of Long-Term User
-  Engagement",http://arxiv.org/abs/2211.13585v2
-Solving math word problems with process- and outcome-based feedback,http://arxiv.org/abs/2211.14275v1
-Understanding BLOOM: An empirical study on diverse NLP tasks,http://arxiv.org/abs/2211.14865v2
-BJTU-WeChat's Systems for the WMT22 Chat Translation Task,http://arxiv.org/abs/2211.15009v1
-Validating Large Language Models with ReLM,http://arxiv.org/abs/2211.15458v2
-Multiresolution Textual Inversion,http://arxiv.org/abs/2211.17115v1
-Open Relation and Event Type Discovery with Type Abstraction,http://arxiv.org/abs/2212.00178v1
-Towards Practical Few-shot Federated NLP,http://dx.doi.org/10.1145/3578356.3592575
-Improving Zero-Shot Models with Label Distribution Priors,http://arxiv.org/abs/2212.00784v1
-"General Framework for Self-Supervised Model Priming for
-  Parameter-Efficient Fine-tuning",http://arxiv.org/abs/2212.01032v1
-Cross-Modal Mutual Learning for Cued Speech Recognition,http://arxiv.org/abs/2212.01083v2
-"PartSLIP: Low-Shot Part Segmentation for 3D Point Clouds via Pretrained
-  Image-Language Models",http://arxiv.org/abs/2212.01558v2
-Grounded Keys-to-Text Generation: Towards Factual Open-Ended Generation,http://arxiv.org/abs/2212.01956v1
-In-context Examples Selection for Machine Translation,http://arxiv.org/abs/2212.02437v1
-M-VADER: A Model for Diffusion with Multimodal Context,http://arxiv.org/abs/2212.02936v2
-Sunspot periodicity,http://arxiv.org/abs/2212.03249v1
-GONG third generation camera: Detector selection and feasibility study,http://arxiv.org/abs/2212.03963v1
-Transport in deformed centrosymmetric networks,http://dx.doi.org/10.1103/PhysRevE.106.064112
-TRBLLmaker -- Transformer Reads Between Lyrics Lines maker,http://arxiv.org/abs/2212.04917v1
-SmartBrush: Text and Shape Guided Object Inpainting with Diffusion Model,http://arxiv.org/abs/2212.05034v1
-"Structured information extraction from complex scientific text with
-  fine-tuned large language models",http://arxiv.org/abs/2212.05238v1
-The Turing Deception,http://arxiv.org/abs/2212.06721v2
-Diverse Demonstrations Improve In-context Compositional Generalization,http://arxiv.org/abs/2212.06800v3
-"A fine-grained comparison of pragmatic language understanding in humans
-  and language models",http://arxiv.org/abs/2212.06801v2
-Toxic Liquidation Spirals,http://arxiv.org/abs/2212.07306v2
-2-charge circular fuzz-balls and their perturbations,http://arxiv.org/abs/2212.07504v1
-"CLAM: Selective Clarification for Ambiguous Questions with Generative
-  Language Models",http://arxiv.org/abs/2212.07769v2
-ROSCOE: A Suite of Metrics for Scoring Step-by-Step Reasoning,http://arxiv.org/abs/2212.07919v2
-"On Second Thought, Let's Not Think Step by Step! Bias and Toxicity in
-  Zero-Shot Reasoning",http://arxiv.org/abs/2212.08061v2
-ALERT: Adapting Language Models to Reasoning Tasks,http://arxiv.org/abs/2212.08286v2
-Teaching Small Language Models to Reason,http://arxiv.org/abs/2212.08410v3
-Point-E: A System for Generating 3D Point Clouds from Complex Prompts,http://arxiv.org/abs/2212.08751v1
-"MIGA: A Unified Multi-task Generation Framework for Conversational
-  Text-to-SQL",http://arxiv.org/abs/2212.09278v1
-Z-ICL: Zero-Shot In-Context Learning with Pseudo-Demonstrations,http://arxiv.org/abs/2212.09865v2
-Low $p_T$ direct photon production at RHIC measured with PHENIX,http://arxiv.org/abs/2212.09953v1
-Language Modeling with Latent Situations,http://arxiv.org/abs/2212.10012v1
-On the Role of Parallel Data in Cross-lingual Transfer Learning,http://arxiv.org/abs/2212.10173v1
-"SoK: Analysis of Root Causes and Defense Strategies for Attacks on
-  Microarchitectural Optimizations",http://arxiv.org/abs/2212.10221v1
-"Go-tuning: Improving Zero-shot Learning Abilities of Smaller Language
-  Models",http://arxiv.org/abs/2212.10461v1
-"ZEROTOP: Zero-Shot Task-Oriented Semantic Parsing using Large Language
-  Models",http://arxiv.org/abs/2212.10815v1
-Chatbots in a Botnet World,http://arxiv.org/abs/2212.11126v2
-"Not Just Pretty Pictures: Toward Interventional Data Augmentation Using
-  Text-to-Image Generators",http://arxiv.org/abs/2212.11237v3
-TextBox 2.0: A Text Generation Library with Pre-trained Language Models,http://arxiv.org/abs/2212.13005v1
-"Sign reversal of the AC and DC supercurrent diode effect and
-  0-$π$-like transitions in ballistic Josephson junctions",http://dx.doi.org/10.1038/s41565-023-01451-x
-LAMBADA: Backward Chaining for Automated Reasoning in Natural Language,http://arxiv.org/abs/2212.13894v2
-Maximizing Use-Case Specificity through Precision Model Tuning,http://arxiv.org/abs/2212.14206v1
-"Linear programming word problems formulation using EnsembleCRF NER
-  labeler and T5 text generator with data augmentations",http://arxiv.org/abs/2212.14657v1
-"ChatGPT Makes Medicine Easy to Swallow: An Exploratory Case Study on
-  Simplified Radiology Reports",http://arxiv.org/abs/2212.14882v1
-Rethinking with Retrieval: Faithful Large Language Model Inference,http://arxiv.org/abs/2301.00303v1
-The IXPE view of GRB 221009A,http://dx.doi.org/10.3847/2041-8213/acba17
-Critical Perspectives: A Benchmark Revealing Pitfalls in PerspectiveAPI,http://arxiv.org/abs/2301.01874v1
-"Visual Estimation of Fingertip Pressure on Diverse Surfaces using Easily
-  Captured Data",http://arxiv.org/abs/2301.02310v2
-"Deep Breath: A Machine Learning Browser Extension to Tackle Online
-  Misinformation",http://arxiv.org/abs/2301.03301v1
-"Pre-merger sky localization of gravitational waves from binary neutron
-  star mergers using deep learning",http://arxiv.org/abs/2301.03558v1
-"Counteracts: Testing Stereotypical Representation in Pre-trained
-  Language Models",http://arxiv.org/abs/2301.04347v3
-Memory Augmented Large Language Models are Computationally Universal,http://arxiv.org/abs/2301.04589v1
-"KAER: A Knowledge Augmented Pre-Trained Language Model for Entity
-  Resolution",http://arxiv.org/abs/2301.04770v1
-Is AI Art Another Industrial Revolution in the Making?,http://arxiv.org/abs/2301.05133v1
-"CLIP the Gap: A Single Domain Generalization Approach for Object
-  Detection",http://arxiv.org/abs/2301.05499v2
-Latent Autoregressive Source Separation,http://arxiv.org/abs/2301.08562v1
-"The Entoptic Field Camera as Metaphor-Driven Research-through-Design
-  with AI Technologies",http://dx.doi.org/10.1145/3544548.3581175
-The Next Chapter: A Study of Large Language Models in Storytelling,http://arxiv.org/abs/2301.09790v3
-"ETHNO-DAANN: Ethnographic Engagement Classification by Deep Adversarial
-  Transfer Learning",http://arxiv.org/abs/2301.10229v1
-"Designing Data: Proactive Data Collection and Iteration for Machine
-  Learning",http://arxiv.org/abs/2301.10319v2
-"Towards a Unified Model for Generating Answers and Explanations in
-  Visual Question Answering",http://arxiv.org/abs/2301.10799v2
-ThoughtSource: A central hub for large language model reasoning data,http://arxiv.org/abs/2301.11596v5
-SEGA: Instructing Diffusion using Semantic Dimensions,http://arxiv.org/abs/2301.12247v1
-Crawling the Internal Knowledge-Base of Language Models,http://arxiv.org/abs/2301.12810v1
-"PromptMix: Text-to-image diffusion models enhance the performance of
-  lightweight networks",http://arxiv.org/abs/2301.12914v2
-Faithful Chain-of-Thought Reasoning,http://arxiv.org/abs/2301.13379v3
-"The Flan Collection: Designing Data and Methods for Effective
-  Instruction Tuning",http://arxiv.org/abs/2301.13688v2
-Benchmarking Large Language Models for News Summarization,http://arxiv.org/abs/2301.13848v1
-Debiasing Vision-Language Models via Biased Prompts,http://arxiv.org/abs/2302.00070v2
-Exploring Semantic Perturbations on Grover,http://arxiv.org/abs/2302.00509v1
-"Designing, Synthesizing and Modeling Active Fluids",http://dx.doi.org/10.1063/5.0096955
-Multimodal Chain-of-Thought Reasoning in Language Models,http://arxiv.org/abs/2302.00923v4
-SceneScape: Text-Driven Consistent Scene Generation,http://arxiv.org/abs/2302.01133v2
-On the Robustness of Randomized Ensembles to Adversarial Perturbations,http://arxiv.org/abs/2302.01375v3
-"Towards Few-Shot Identification of Morality Frames using In-Context
-  Learning",http://arxiv.org/abs/2302.02029v1
-Divide and Compose with Score Based Generative Models,http://arxiv.org/abs/2302.02272v1
-Capturing Topic Framing via Masked Language Modeling,http://arxiv.org/abs/2302.03183v1
-"What do Language Models know about word senses? Zero-Shot WSD with
-  Language Models and Domain Inventories",http://arxiv.org/abs/2302.03353v1
-"Pre-train, Prompt and Recommendation: A Comprehensive Survey of Language
-  Modelling Paradigm Adaptations in Recommender Systems",http://arxiv.org/abs/2302.03735v3
-"A Vector Quantized Approach for Text to Speech Synthesis on Real-World
-  Spontaneous Speech",http://arxiv.org/abs/2302.04215v1
-"Spatiotemporal factor models for functional data with application to
-  population map forecast",http://arxiv.org/abs/2302.04412v2
-Reliable event rates for disease mapping,http://arxiv.org/abs/2302.04582v1
-A Novel Approach for Auto-Formulation of Optimization Problems,http://arxiv.org/abs/2302.04643v1
-"Exploring the Cognitive Dynamics of Artificial Intelligence in the
-  Post-COVID-19 and Learning 3.0 Era: A Case Study of ChatGPT",http://arxiv.org/abs/2302.04818v1
-MaskSketch: Unpaired Structure-guided Masked Image Generation,http://arxiv.org/abs/2302.05496v1
-Adding Conditional Control to Text-to-Image Diffusion Models,http://arxiv.org/abs/2302.05543v2
-"Characterizing Attribution and Fluency Tradeoffs for Retrieval-Augmented
-  Large Language Models",http://arxiv.org/abs/2302.05578v2
-MarioGPT: Open-Ended Text2Level Generation through Large Language Models,http://arxiv.org/abs/2302.05981v2
-Towards Agile Text Classifiers for Everyone,http://arxiv.org/abs/2302.06541v2
-"Gradient-Based Automated Iterative Recovery for Parameter-Efficient
-  Tuning",http://arxiv.org/abs/2302.06598v1
-Guiding Pretraining in Reinforcement Learning with Large Language Models,http://arxiv.org/abs/2302.06692v2
-STREET: A Multi-Task Structured Reasoning and Explanation Benchmark,http://arxiv.org/abs/2302.06729v1
-Residual Policy Learning for Vehicle Control of Autonomous Racing Cars,http://arxiv.org/abs/2302.07035v2
-"PrefixMol: Target- and Chemistry-aware Molecule Design via Prefix
-  Embedding",http://arxiv.org/abs/2302.07120v1
-"A Friendly Face: Do Text-to-Image Systems Rely on Stereotypes when the
-  Input is Under-Specified?",http://arxiv.org/abs/2302.07159v1
-"Conversational AI-Powered Design: ChatGPT as Designer, User, and Product",http://arxiv.org/abs/2302.07406v1
-Keep it Neutral: Using Natural Language Inference to Improve Generation,http://arxiv.org/abs/2302.08577v1
-Epidemic control in networks with cliques,http://dx.doi.org/10.1103/PhysRevE.107.054304
-Paint it Black: Generating paintings from text descriptions,http://arxiv.org/abs/2302.08808v1
-Natural Response Generation for Chinese Reading Comprehension,http://arxiv.org/abs/2302.08817v2
-"Like a Good Nearest Neighbor: Practical Content Moderation with Sentence
-  Transformers",http://arxiv.org/abs/2302.08957v2
-"Language-Specific Representation of Emotion-Concept Knowledge Causally
-  Supports Emotion Inference",http://arxiv.org/abs/2302.09582v4
-On Einstein's last bid to keep a stationary cosmology,http://dx.doi.org/10.1142/9789811269776_0294
-"Conversational Text-to-SQL: An Odyssey into State-of-the-Art and
-  Challenges Ahead",http://arxiv.org/abs/2302.11054v1
-K-Diag: Knowledge-enhanced Disease Diagnosis in Radiographic Imaging,http://arxiv.org/abs/2302.11557v2
-"CHiLL: Zero-shot Custom Interpretable Feature Extraction from Clinical
-  Notes with Large Language Models",http://arxiv.org/abs/2302.12343v2
-"ProofNet: Autoformalizing and Formally Proving Undergraduate-Level
-  Mathematics",http://arxiv.org/abs/2302.12433v1
-Unsupervised Discovery of Semantic Latent Directions in Diffusion Models,http://arxiv.org/abs/2302.12469v1
-Human-in-the-Loop Schema Induction,http://dx.doi.org/10.18653/v1/2023.acl-demo.1
-Topic-Selective Graph Network for Topic-Focused Summarization,http://arxiv.org/abs/2302.13106v1
-"Exploring Opinion-unaware Video Quality Assessment with Semantic
-  Affinity Criterion",http://arxiv.org/abs/2302.13269v1
-"Navigating the Grey Area: Expressions of Overconfidence and Uncertainty
-  in Language Models",http://arxiv.org/abs/2302.13439v1
-Weighted Sampling for Masked Language Modeling,http://arxiv.org/abs/2302.14225v2
-H-AES: Towards Automated Essay Scoring for Hindi,http://arxiv.org/abs/2302.14635v1
-In-Context Instruction Learning,http://arxiv.org/abs/2302.14691v1
-"How Robust is GPT-3.5 to Predecessors? A Comprehensive Study on Language
-  Understanding Tasks",http://arxiv.org/abs/2303.00293v1
-Rethinking Efficient Tuning Methods from a Unified Perspective,http://arxiv.org/abs/2303.00690v1
-"UDAPDR: Unsupervised Domain Adaptation via LLM Prompting and
-  Distillation of Rerankers",http://arxiv.org/abs/2303.00807v3
-Symmetrical Anisotropy Enables Dynamic Diffraction Control in Photonics,http://arxiv.org/abs/2303.02482v1
-"Model Sketching: Centering Concepts in Early-Stage Machine Learning
-  Model Design",http://dx.doi.org/10.1145/3544548.3581290
-"LIDA: A Tool for Automatic Generation of Grammar-Agnostic Visualizations
-  and Infographics using Large Language Models",http://arxiv.org/abs/2303.02927v3
-ADELT: Transpilation Between Deep Learning Frameworks,http://arxiv.org/abs/2303.03593v1
-Lformer: Text-to-Image Generation with L-shape Block Parallel Decoding,http://arxiv.org/abs/2303.03800v1
-"GRB 221009A, its precursor and two afterglows in the Fermi data",http://arxiv.org/abs/2303.03855v2
-"Leveraging Pre-trained AudioLDM for Text to Sound Generation: A
-  Benchmark Study",http://arxiv.org/abs/2303.03857v2
-"ChatGPT: Beginning of an End of Manual Linguistic Data Annotation? Use
-  Case of Automatic Genre Identification",http://arxiv.org/abs/2303.03953v2
-ELODIN: Naming Concepts in Embedding Spaces,http://arxiv.org/abs/2303.04001v2
-"Exploiting Asymmetry for Synthetic Training Data Generation: SynthIE and
-  the Case of Information Extraction",http://arxiv.org/abs/2303.04132v1
-"Corner Detection Based on Multi-directional Gabor Filters with
-  Multi-scales",http://arxiv.org/abs/2303.04334v1
-Can large language models build causal graphs?,http://arxiv.org/abs/2303.05279v1
-"Open-Ended Medical Visual Question Answering Through Prefix Tuning of
-  Language Models",http://arxiv.org/abs/2303.05977v2
-"Task and Motion Planning with Large Language Models for Object
-  Rearrangement",http://arxiv.org/abs/2303.06247v4
-Consistency Analysis of ChatGPT,http://arxiv.org/abs/2303.06273v2
-Universal Instance Perception as Object Discovery and Retrieval,http://arxiv.org/abs/2303.06674v2
-"Advantage of Hardy's Nonlocal Correlation in Reverse Zero-Error Channel
-  Coding",http://arxiv.org/abs/2303.06848v1
-Audio Visual Language Maps for Robot Navigation,http://arxiv.org/abs/2303.07522v2
-"Exploring ChatGPT's Ability to Rank Content: A Preliminary Study on
-  Consistency with Human Preferences",http://arxiv.org/abs/2303.07610v1
-Query2doc: Query Expansion with Large Language Models,http://arxiv.org/abs/2303.07678v2
-A Picture is Worth a Thousand Words: Language Models Plan from Pixels,http://arxiv.org/abs/2303.09031v1
-LERF: Language Embedded Radiance Fields,http://arxiv.org/abs/2303.09553v1
-"More Robust Schema-Guided Dialogue State Tracking via Tree-Based
-  Paraphrase Ranking",http://arxiv.org/abs/2303.09905v1
-"Label Name is Mantra: Unifying Point Cloud Segmentation across
-  Heterogeneous Datasets",http://arxiv.org/abs/2303.10585v1
-SKED: Sketch-guided Text-based 3D Editing,http://arxiv.org/abs/2303.10735v4
-"Controllable Ancient Chinese Lyrics Generation Based on Phrase Prototype
-  Retrieving",http://arxiv.org/abs/2303.11005v1
-"Cascaded Latent Diffusion Models for High-Resolution Chest X-ray
-  Synthesis",http://arxiv.org/abs/2303.11224v1
-SVDiff: Compact Parameter Space for Diffusion Fine-Tuning,http://arxiv.org/abs/2303.11305v4
-Text2Tex: Text-driven Texture Synthesis via Diffusion Models,http://arxiv.org/abs/2303.11396v1
-Preparing Unprepared Students For Future Learning,http://arxiv.org/abs/2303.11960v1
-Text2Room: Extracting Textured 3D Meshes from 2D Text-to-Image Models,http://arxiv.org/abs/2303.11989v2
-Grading Conversational Responses Of Chatbots,http://arxiv.org/abs/2303.12038v1
-"On orbit performance of the solar flare trigger for the Hinode EUV
-  Imaging Spectrometer",http://arxiv.org/abs/2303.13155v1
-"Measurement of inclusive J/$ψ$ pair production cross section in pp
-  collisions at $\sqrt{s} = 13$ TeV",http://dx.doi.org/10.1103/PhysRevC.108.045203
-Ablating Concepts in Text-to-Image Diffusion Models,http://arxiv.org/abs/2303.13516v3
-End-to-End Diffusion Latent Optimization Improves Classifier Guidance,http://arxiv.org/abs/2303.13703v2
-"Overview of the ICASSP 2023 General Meeting Understanding and Generation
-  Challenge (MUG)",http://arxiv.org/abs/2303.13932v1
-"Machine Psychology: Investigating Emergent Capabilities and Behavior in
-  Large Language Models Using Psychological Methods",http://arxiv.org/abs/2303.13988v4
-"The Dark Side of Algorithms? The Effect of Recommender Systems on Online
-  Investor Behaviors",http://arxiv.org/abs/2303.14263v1
-Conceptual diagrams in Quantum Mechanics,http://arxiv.org/abs/2303.14306v1
-Nonequilibrium Fractional Josephson Effect,http://dx.doi.org/10.1103/PhysRevLett.131.126301
-WinCLIP: Zero-/Few-Shot Anomaly Classification and Segmentation,http://arxiv.org/abs/2303.14814v1
-"LMCanvas: Object-Oriented Interaction to Personalize Large Language
-  Model-Powered Writing Environments",http://arxiv.org/abs/2303.15125v1
-The Stable Signature: Rooting Watermarks in Latent Diffusion Models,http://arxiv.org/abs/2303.15435v2
-"Typhoon: Towards an Effective Task-Specific Masking Strategy for
-  Pre-trained Language Models",http://arxiv.org/abs/2303.15619v1
-Explicit Planning Helps Language Models in Logical Reasoning,http://arxiv.org/abs/2303.15714v3
-Evaluation of ChatGPT for NLP-based Mental Health Applications,http://arxiv.org/abs/2303.15727v1
-When Brain-inspired AI Meets AGI,http://arxiv.org/abs/2303.15935v1
-Hallucinations in Large Multilingual Translation Models,http://arxiv.org/abs/2303.16104v1
-"ChatGPT is a Knowledgeable but Inexperienced Solver: An Investigation of
-  Commonsense Problem in Large Language Models",http://arxiv.org/abs/2303.16421v1
-"Bi-directional Training for Composed Image Retrieval via Text Prompt
-  Learning",http://arxiv.org/abs/2303.16604v1
-UFOs: Just Hot Air or Something Meteor?,http://arxiv.org/abs/2303.17103v1
-Active User Identification in Fast Fading Massive Random Access Channels,http://arxiv.org/abs/2303.17543v1
-"CAMEL: Communicative Agents for ""Mind"" Exploration of Large Scale
-  Language Model Society",http://arxiv.org/abs/2303.17760v1
-Can AI Put Gamma-Ray Astrophysicists Out of a Job?,http://arxiv.org/abs/2303.17853v2
-"Towards ""Anytime, Anywhere"" Community Learning and Engagement around the
-  Design of Public Sector AI",http://arxiv.org/abs/2304.00167v2
-"Parents and Children: Distinguishing Multimodal DeepFakes from Natural
-  Images",http://arxiv.org/abs/2304.00500v1
-Eight Things to Know about Large Language Models,http://arxiv.org/abs/2304.00612v1
-Inspecting and Editing Knowledge Representations in Language Models,http://arxiv.org/abs/2304.00740v2
-"RegionPLC: Regional Point-Language Contrastive Learning for Open-World
-  3D Scene Understanding",http://arxiv.org/abs/2304.00962v2
-Invariant-mass spectroscopy in projectile fragmentation reactions,http://arxiv.org/abs/2304.01124v1
-"Rolling the Dice: Imagining Generative AI as a Dungeons & Dragons
-  Storytelling Companion",http://arxiv.org/abs/2304.01860v1
-REFINER: Reasoning Feedback on Intermediate Representations,http://arxiv.org/abs/2304.01904v1
-Physics-Inspired Interpretability Of Machine Learning Models,http://arxiv.org/abs/2304.02381v1
-"""It's Weird That it Knows What I Want"": Usability and Interactions with
-  Copilot for Novice Programmers",http://arxiv.org/abs/2304.02491v1
-CT Multi-Task Learning with a Large Image-Text (LIT) Model,http://arxiv.org/abs/2304.02649v1
-Opportunities and challenges of ChatGPT for design knowledge management,http://arxiv.org/abs/2304.02796v1
-Inst-Inpaint: Instructing to Remove Objects with Diffusion Models,http://arxiv.org/abs/2304.03246v2
-Training-Free Layout Control with Cross-Attention Guidance,http://arxiv.org/abs/2304.03373v1
-RoSteALS: Robust Steganography using Autoencoder Latent Space,http://arxiv.org/abs/2304.03400v1
-Towards Unified Scene Text Spotting based on Sequence Generation,http://arxiv.org/abs/2304.03435v1
-Interpretable Unified Language Checking,http://arxiv.org/abs/2304.03728v1
-A Preliminary Evaluation of ChatGPT for Zero-shot Dialogue Understanding,http://arxiv.org/abs/2304.04256v1
-"PlantDet: A benchmark for Plant Detection in the Three-Rivers-Source
-  Region",http://arxiv.org/abs/2304.04963v3
-Multi-step Jailbreaking Privacy Attacks on ChatGPT,http://arxiv.org/abs/2304.05197v2
-Bayesian Optimization of Catalysts With In-context Learning,http://arxiv.org/abs/2304.05341v1
-"The Wall Street Neophyte: A Zero-Shot Analysis of ChatGPT Over
-  MultiModal Stock Movement Prediction Challenges",http://arxiv.org/abs/2304.05351v2
-"Can ChatGPT and Bard Generate Aligned Assessment Items? A Reliability
-  Analysis against Human Performance",http://dx.doi.org/10.37074/jalt.2023.6.1.28
-"Galactic ChitChat: Using Large Language Models to Converse with
-  Astronomy Literature",http://arxiv.org/abs/2304.05406v2
-SAMM (Segment Any Medical Model): A 3D Slicer Integration to SAM,http://arxiv.org/abs/2304.05622v3
-"Continuous Human Activity Recognition using a MIMO Radar for
-  Transitional Motion Analysis",http://arxiv.org/abs/2304.06173v1
-Learning Controllable 3D Diffusion Models from Single-view Images,http://arxiv.org/abs/2304.06700v1
-"WYTIWYR: A User Intent-Aware Framework with Multi-modal Inputs for
-  Visualization Retrieval",http://arxiv.org/abs/2304.06991v1
-Text-Conditional Contextualized Avatars For Zero-Shot Personalization,http://arxiv.org/abs/2304.07410v1
-In-situ quantification of gamma-ray and beta-only emitting radionuclides,http://arxiv.org/abs/2304.07632v1
-Language Guided Local Infiltration for Interactive Image Retrieval,http://arxiv.org/abs/2304.07747v1
-"The language of sounds unheard: Exploring musical timbre semantics of
-  large language models",http://arxiv.org/abs/2304.07830v3
-"InstructUIE: Multi-task Instruction Tuning for Unified Information
-  Extraction",http://arxiv.org/abs/2304.08085v1
-"Quantum Estimation of the Stokes Vector Rotation for a General
-  Polarimetric Transformation",http://arxiv.org/abs/2304.08258v1
-Synthetic Data from Diffusion Models Improves ImageNet Classification,http://arxiv.org/abs/2304.08466v1
-"When SAM Meets Medical Images: An Investigation of Segment Anything
-  Model (SAM) on Multi-phase Liver Tumor Segmentation",http://arxiv.org/abs/2304.08506v5
-Vectorlike leptons and long-lived bosons at the LHC,http://dx.doi.org/10.1007/JHEP07(2023)079
-Generative Disco: Text-to-Video Generation for Music Visualization,http://arxiv.org/abs/2304.08551v2
-"Exoskeleton for the Mind: Exploring Strategies Against Misinformation
-  with a Metacognitive Agent",http://dx.doi.org/10.1145/3582700.3582725
-"AoI-Delay Tradeoff in Mobile Edge Caching: A Mixed-Order
-  Drift-Plus-Penalty Algorithm",http://arxiv.org/abs/2304.08781v2
-Revisiting k-NN for Fine-tuning Pre-trained Language Models,http://arxiv.org/abs/2304.09058v2
-"Solving Math Word Problems by Combining Language Models With Symbolic
-  Solvers",http://arxiv.org/abs/2304.09102v1
-How Secure is Code Generated by ChatGPT?,http://arxiv.org/abs/2304.09655v1
-"Dream Recording Through Non-invasive Brain-Machine Interfaces and
-  Generative AI-assisted Multimodal Software",http://arxiv.org/abs/2304.09858v1
-A Latent Space Theory for Emergent Abilities in Large Language Models,http://arxiv.org/abs/2304.09960v3
-"On the Independence of Association Bias and Empirical Fairness in
-  Language Models",http://arxiv.org/abs/2304.10153v1
-A data augmentation perspective on diffusion models and retrieval,http://arxiv.org/abs/2304.10253v1
-"MATOQ: a Monte Carlo Simulation of Electron Transport in
-  Environmental-friendly Gas Mixtures for Resistive Plate Chambers",http://arxiv.org/abs/2304.10307v1
-"Supporting Qualitative Analysis with Large Language Models: Combining
-  Codebook with GPT-3 for Deductive Coding",http://dx.doi.org/10.1145/3581754.3584136
-"Text2Seg: Remote Sensing Image Semantic Segmentation via Text-Guided
-  Visual Foundation Models",http://arxiv.org/abs/2304.10597v1
-PiXi: Password Inspiration by Exploring Information,http://arxiv.org/abs/2304.10728v2
-Can GPT-4 Perform Neural Architecture Search?,http://arxiv.org/abs/2304.10970v4
-"DIN-SQL: Decomposed In-Context Learning of Text-to-SQL with
-  Self-Correction",http://arxiv.org/abs/2304.11015v2
-Improved Diffusion-based Image Colorization via Piggybacked Models,http://arxiv.org/abs/2304.11105v1
-LaMP: When Large Language Models Meet Personalization,http://arxiv.org/abs/2304.11406v2
-Pandemic Data Quality Modelling: A Bayesian Approach,http://arxiv.org/abs/2304.11562v1
-Track Anything: Segment Anything Meets Videos,http://arxiv.org/abs/2304.11968v2
-Creating Large Language Model Resistant Exams: Guidelines and Strategies,http://arxiv.org/abs/2304.12203v1
-"Understanding and Predicting Human Label Variation in Natural Language
-  Inference through Explanation",http://arxiv.org/abs/2304.12443v1
-"ChatLLM Network: More brains, More intelligence",http://arxiv.org/abs/2304.12998v1
-Answering Questions by Meta-Reasoning over Multiple Chains of Thought,http://arxiv.org/abs/2304.13007v3
-Zero-Shot Slot and Intent Detection in Low-Resource Languages,http://arxiv.org/abs/2304.13292v1
-"Multimodal Grounding for Embodied AI via Augmented Reality Headsets for
-  Natural Language Driven Task Planning",http://arxiv.org/abs/2304.13676v1
-Customized Segment Anything Model for Medical Image Segmentation,http://arxiv.org/abs/2304.13785v2
-"Translate to Disambiguate: Zero-shot Multilingual Word Sense
-  Disambiguation with Pretrained Language Models",http://arxiv.org/abs/2304.13803v1
-"Multi-Party Chat: Conversational Agents in Group Settings with Humans
-  and Models",http://arxiv.org/abs/2304.13835v3
-GazeSAM: What You See is What You Segment,http://arxiv.org/abs/2304.13844v1
-Transferring Procedural Knowledge across Commonsense Tasks,http://arxiv.org/abs/2304.13867v2
-"SweCTRL-Mini: a data-transparent Transformer-based large language model
-  for controllable text generation in Swedish",http://arxiv.org/abs/2304.13994v3
-"Putting People in Their Place: Affordance-Aware Human Insertion into
-  Scenes",http://arxiv.org/abs/2304.14406v1
-Prometheus: An Open-Source Neutrino Telescope Simulation,http://arxiv.org/abs/2304.14526v1
-"Human Activity Recognition Using Self-Supervised Representations of
-  Wearable Data",http://arxiv.org/abs/2304.14912v1
-"Explainable Verbal Reasoner Plus (EVR+): A Natural Language Reasoning
-  Framework that Supports Diverse Compositional Reasoning",http://arxiv.org/abs/2305.00061v1
-DSEC-MOS: Segment Any Moving Object with Moving Ego Vehicle,http://arxiv.org/abs/2305.00126v1
-Decomposition Enhances Reasoning via Self-Evaluation Guided Decoding,http://arxiv.org/abs/2305.00633v2
-"Large Linguistic Models: Analyzing theoretical linguistic abilities of
-  LLMs",http://arxiv.org/abs/2305.00948v2
-"Countable Borel treeable equivalence relations are classifiable by
-  $\ell_1$",http://arxiv.org/abs/2305.01049v1
-"Mitigating Approximate Memorization in Language Models via Dissimilarity
-  Learned Policy",http://arxiv.org/abs/2305.01550v1
-"How to Unleash the Power of Large Language Models for Few-shot Relation
-  Extraction?",http://arxiv.org/abs/2305.01555v4
-"Discern and Answer: Mitigating the Impact of Misinformation in
-  Retrieval-Augmented Models with Discriminators",http://arxiv.org/abs/2305.01579v1
-"Fears about AI-mediated communication are grounded in different
-  expectations for one's own versus others' use",http://arxiv.org/abs/2305.01670v1
-"AV-SAM: Segment Anything Model Meets Audio-Visual Localization and
-  Segmentation",http://arxiv.org/abs/2305.01836v1
-"Search for pairs of muons with small displacements in $pp$ collisions at
-  $\sqrt{s} = 13$ TeV with the ATLAS detector",http://dx.doi.org/10.1016/j.physletb.2023.138172
-"The Benefits of Label-Description Training for Zero-Shot Text
-  Classification",http://arxiv.org/abs/2305.02239v2
-"Visual Chain of Thought: Bridging Logical Gaps with Multimodal
-  Infillings",http://arxiv.org/abs/2305.02317v1
-"Informing Innovation Management: Linking Leading R&D Firms and Emerging
-  Technologies",http://arxiv.org/abs/2305.02476v1
-"Designing Parent-child-robot Interactions to Facilitate In-Home Parental
-  Math Talk with Young Children",http://arxiv.org/abs/2305.02525v1
-"Distilled density matrices of holographic PEE from thread-state
-  correspondence",http://arxiv.org/abs/2305.02895v1
-The Role of Global and Local Context in Named Entity Recognition,http://dx.doi.org/10.18653/v1/2023.acl-short.62
-Can Large Language Models Transform Computational Social Science?,http://arxiv.org/abs/2305.03514v1
-"Retrieval Augmented Chest X-Ray Report Generation using OpenAI GPT
-  models",http://arxiv.org/abs/2305.03660v1
-Attacking Pre-trained Recommendation,http://arxiv.org/abs/2305.03995v1
-"Algorithmic Bias, Generalist Models,and Clinical Medicine",http://arxiv.org/abs/2305.04008v1
-Improving Cross-Task Generalization with Step-by-Step Instructions,http://arxiv.org/abs/2305.04429v1
-Accelerated Stochastic Optimization Methods under Quasar-convexity,http://arxiv.org/abs/2305.04736v2
-"SkillQG: Learning to Generate Question for Reading Comprehension
-  Assessment",http://arxiv.org/abs/2305.04737v1
-"HistAlign: Improving Context Dependency in Language Generation by
-  Aligning with History",http://arxiv.org/abs/2305.04782v1
-"On the Origin of Acoustic Spin and Elastic Spin: Uncovering Hidden Wave
-  Spin of Scalar Fields with Higher-Order Derivative Lagrangian",http://arxiv.org/abs/2305.04939v1
-Revisiting Relation Extraction in the era of Large Language Models,http://arxiv.org/abs/2305.05003v1
-Unit-level mixed effects models for conditional extremes,http://arxiv.org/abs/2305.05106v3
-"Socio-Technical Security Modelling: Analysis of State-of-the-Art,
-  Application, and Maturity in Critical Industrial Infrastructure
-  Environments/Domains",http://arxiv.org/abs/2305.05108v1
-Thermodynamic route of Nb3Sn nucleation: Role of oxygen,http://dx.doi.org/10.1063/5.0157659
-Generating Phishing Attacks using ChatGPT,http://arxiv.org/abs/2305.05133v1
-"Dialogue Planning via Brownian Bridge Stochastic Process for
-  Goal-directed Proactive Dialogue",http://arxiv.org/abs/2305.05290v1
-"RAAD: LIGHT-1 CubeSat's Payload for the Detection of Terrestrial
-  Gamma-Ray Flashes",http://arxiv.org/abs/2305.05434v2
-An Empirical Study on the Robustness of the Segment Anything Model (SAM),http://arxiv.org/abs/2305.06422v2
-"Chain-of-Dictionary Prompting Elicits Translation in Large Language
-  Models",http://arxiv.org/abs/2305.06575v3
-Overinformative Question Answering by Humans and Machines,http://arxiv.org/abs/2305.07151v1
-Exploring Zero and Few-shot Techniques for Intent Classification,http://arxiv.org/abs/2305.07157v1
-ZARA: Improving Few-Shot Self-Rationalization for Small Language Models,http://arxiv.org/abs/2305.07355v2
-"WEDGE: A multi-weather autonomous driving dataset built from generative
-  vision-language models",http://arxiv.org/abs/2305.07528v1
-Generative AI: Implications and Applications for Education,http://arxiv.org/abs/2305.07605v3
-Learning to Generalize for Cross-domain QA,http://arxiv.org/abs/2305.08208v2
-"Cross-Modality Time-Variant Relation Learning for Generating Dynamic
-  Scene Graphs",http://arxiv.org/abs/2305.08522v1
-Natural Language Decomposition and Interpretation of Complex Utterances,http://arxiv.org/abs/2305.08677v1
-A Reproducible Extraction of Training Images from Diffusion Models,http://arxiv.org/abs/2305.08694v1
-"PMIndiaSum: Multilingual and Cross-lingual Headline Summarization for
-  Languages in India",http://arxiv.org/abs/2305.08828v2
-The Weighted Möbius Score: A Unified Framework for Feature Attribution,http://arxiv.org/abs/2305.09204v1
-"Consistent Multi-Granular Rationale Extraction for Explainable Multi-hop
-  Fact Verification",http://arxiv.org/abs/2305.09400v1
-SoundStorm: Efficient Parallel Audio Generation,http://arxiv.org/abs/2305.09636v1
-"Selective Amnesia: A Continual Learning Approach to Forgetting in Deep
-  Generative Models",http://arxiv.org/abs/2305.10120v2
-"Boosting Distress Support Dialogue Responses with Motivational
-  Interviewing Strategy",http://arxiv.org/abs/2305.10195v1
-"UniEX: An Effective and Efficient Framework for Unified Information
-  Extraction via a Span-extractive Perspective",http://arxiv.org/abs/2305.10306v3
-"AI Friends: A Design Framework for AI-Powered Creative Programming for
-  Youth",http://arxiv.org/abs/2305.10412v1
-"OR-NeRF: Object Removing from 3D Scenes Guided by Multiview Segmentation
-  with Neural Radiance Fields",http://arxiv.org/abs/2305.10503v3
-Statistical Knowledge Assessment for Generative Language Models,http://arxiv.org/abs/2305.10519v1
-"Smiling Women Pitching Down: Auditing Representational and
-  Presentational Gender Biases in Image Generative AI",http://arxiv.org/abs/2305.10566v1
-"Zero-Day Backdoor Attack against Text-to-Image Diffusion Models via
-  Personalization",http://arxiv.org/abs/2305.10701v1
-"X-IQE: eXplainable Image Quality Evaluation for Text-to-Image Generation
-  with Visual Large Language Models",http://arxiv.org/abs/2305.10843v2
-Generalized Multiple Intent Conditioned Slot Filling,http://arxiv.org/abs/2305.11023v1
-"Preparation of cavity Fock state superpositions by reinforcement
-  learning exploiting measurement back-action",http://arxiv.org/abs/2305.11047v1
-"RoomDreamer: Text-Driven 3D Indoor Scene Synthesis with Coherent
-  Geometry and Texture",http://arxiv.org/abs/2305.11337v1
-Data Redaction from Conditional Generative Models,http://arxiv.org/abs/2305.11351v1
-MD3: The Multi-Dialect Dataset of Dialogues,http://arxiv.org/abs/2305.11355v1
-"Visualizing Linguistic Diversity of Text Datasets Synthesized by Large
-  Language Models",http://arxiv.org/abs/2305.11364v2
-"Phonetic and Prosody-aware Self-supervised Learning Approach for
-  Non-native Fluency Scoring",http://arxiv.org/abs/2305.11438v1
-"Efficient Cross-Lingual Transfer for Chinese Stable Diffusion with
-  Images as Pivots",http://arxiv.org/abs/2305.11540v1
-Introspective Tips: Large Language Model for In-Context Decision Making,http://arxiv.org/abs/2305.11598v1
-"S$^3$HQA: A Three-Stage Approach for Multi-hop Text-Table Hybrid
-  Question Answering",http://arxiv.org/abs/2305.11725v1
-"Enhancing Chemistry Learning with ChatGPT and Bing Chat as Agents to - Think With: A Comparative Case Study",http://arxiv.org/abs/2305.11890v1 -Deep Learning Approaches to Lexical Simplification: A Survey,http://arxiv.org/abs/2305.12000v1 -"LogiCoT: Logical Chain-of-Thought Instruction-Tuning Data Collection - with GPT-4",http://arxiv.org/abs/2305.12147v1 -Evaluating the Performance of Large Language Models on GAOKAO Benchmark,http://arxiv.org/abs/2305.12474v2 -Retrieving Texts based on Abstract Descriptions,http://arxiv.org/abs/2305.12517v2 -"Reflective Linguistic Programming (RLP): A Stepping Stone in - Socially-Aware AGI (SocialAGI)",http://arxiv.org/abs/2305.12647v1 -"Quantifying Association Capabilities of Large Language Models and Its - Implications on Privacy Leakage",http://arxiv.org/abs/2305.12707v1 -"ChatGPT to Replace Crowdsourcing of Paraphrases for Intent - Classification: Higher Diversity and Comparable Model Robustness",http://arxiv.org/abs/2305.12947v2 -Distilling ChatGPT for Explainable Automated Student Answer Assessment,http://arxiv.org/abs/2305.12962v2 -ControlVideo: Training-free Controllable Text-to-Video Generation,http://arxiv.org/abs/2305.13077v1 -"Improved Compositional Generalization by Generating Demonstrations for - Meta-Learning",http://arxiv.org/abs/2305.13092v1 -"SCITAB: A Challenging Benchmark for Compositional Reasoning and Claim - Verification on Scientific Tables",http://arxiv.org/abs/2305.13186v3 -"Matcher: Segment Anything with One Shot Using All-Purpose Feature - Matching",http://arxiv.org/abs/2305.13310v1 -Type-to-Track: Retrieve Any Object via Prompt-based Tracking,http://arxiv.org/abs/2305.13495v3 -"Can ChatGPT Detect Intent? Evaluating Large Language Models for Spoken - Language Understanding",http://arxiv.org/abs/2305.13512v2 -"Query Structure Modeling for Inductive Logical Reasoning Over Knowledge - Graphs",http://arxiv.org/abs/2305.13585v1 -"LLM-empowered Chatbots for Psychiatrist and Patient Simulation: - Application and Evaluation",http://arxiv.org/abs/2305.13614v1 -On the Risk of Misinformation Pollution with Large Language Models,http://arxiv.org/abs/2305.13661v1 -Towards Legally Enforceable Hate Speech Detection for Public Forums,http://arxiv.org/abs/2305.13677v1 -"Efficient Open Domain Multi-Hop Question Answering with Few-Shot Data - Synthesis",http://arxiv.org/abs/2305.13691v1 -"ChatGPT-EDSS: Empathetic Dialogue Speech Synthesis Trained from - ChatGPT-derived Context Word Embeddings",http://arxiv.org/abs/2305.13724v1 -Aligning Large Language Models through Synthetic Feedback,http://arxiv.org/abs/2305.13735v2 -"L-SA: Learning Under-Explored Targets in Multi-Target Reinforcement - Learning",http://arxiv.org/abs/2305.13741v1 -Goal-Driven Explainable Clustering via Language Descriptions,http://arxiv.org/abs/2305.13749v1 -Generating Data for Symbolic Language with Large Language Models,http://arxiv.org/abs/2305.13917v1 -"IfQA: A Dataset for Open-domain Question Answering under Counterfactual - Presuppositions",http://arxiv.org/abs/2305.14010v1 -Does ChatGPT have Theory of Mind?,http://arxiv.org/abs/2305.14020v2 -"CTQScorer: Combining Multiple Features for In-context Example Selection - for Machine Translation",http://arxiv.org/abs/2305.14105v2 -Dr.ICL: Demonstration-Retrieved In-context Learning,http://arxiv.org/abs/2305.14128v1 -"EASE: An Easily-Customized Annotation System Powered by Efficiency - Enhancement Mechanisms",http://arxiv.org/abs/2305.14169v1 -Revisiting Machine Translation for Cross-lingual 
Classification,http://arxiv.org/abs/2305.14240v1 -"Learning to Generate Novel Scientific Directions with Contextualized - Literature-based Discovery",http://arxiv.org/abs/2305.14259v3 -Automatic Model Selection with Large Language Models for Reasoning,http://arxiv.org/abs/2305.14333v2 -"Having Beer after Prayer? Measuring Cultural Bias in Large Language - Models",http://arxiv.org/abs/2305.14456v1 -"Dancing Between Success and Failure: Edit-level Simplification - Evaluation using SALSA",http://arxiv.org/abs/2305.14458v2 -Are Large Language Models Robust Zero-shot Coreference Resolvers?,http://arxiv.org/abs/2305.14489v1 -"Unraveling ChatGPT: A Critical Analysis of AI-Generated Goal-Oriented - Dialogues and Annotations",http://arxiv.org/abs/2305.14556v1 -"Self-Checker: Plug-and-Play Modules for Fact-Checking with Large - Language Models",http://arxiv.org/abs/2305.14623v1 -Testing Causal Models of Word Meaning in GPT-3 and -4,http://arxiv.org/abs/2305.14630v1 -"Mastering the ABCDs of Complex Questions: Answer-Based Claim - Decomposition for Fine-grained Self-Evaluation",http://arxiv.org/abs/2305.14750v1 -Allies: Prompting Large Language Model with Beam Search,http://arxiv.org/abs/2305.14766v3 -ChatGPT and Simple Linguistic Inferences: Blind Spots and Blinds,http://arxiv.org/abs/2305.14785v1 -Adapting Language Models to Compress Contexts,http://arxiv.org/abs/2305.14788v1 -Drafting Event Schemas using Language Models,http://arxiv.org/abs/2305.14847v1 -Large Language Model Distillation Doesn't Need a Teacher,http://arxiv.org/abs/2305.14864v1 -Coverage-based Example Selection for In-Context Learning,http://arxiv.org/abs/2305.14907v1 -"Large Language Models are Effective Table-to-Text Generators, - Evaluators, and Feedback Providers",http://arxiv.org/abs/2305.14987v1 -Is GPT-4 a Good Data Analyst?,http://arxiv.org/abs/2305.15038v2 -"A Monte Carlo Language Model Pipeline for Zero-Shot Sociopolitical Event - Extraction",http://arxiv.org/abs/2305.15051v1 -"A Mechanistic Interpretation of Arithmetic Reasoning in Language Models - using Causal Mediation Analysis",http://arxiv.org/abs/2305.15054v2 -"AutoPlan: Automatic Planning of Interactive Decision-Making Tasks With - Large Language Models",http://arxiv.org/abs/2305.15064v2 -"InpaintNeRF360: Text-Guided 3D Inpainting on Unbounded Neural Radiance - Fields",http://arxiv.org/abs/2305.15094v1 -"DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion - Models",http://arxiv.org/abs/2305.15194v1 -"MultiFusion: Fusing Pre-Trained Models for Multi-Lingual, Multi-Modal - Image Generation",http://arxiv.org/abs/2305.15296v1 -Training on Thin Air: Improve Image Classification with Generated Data,http://arxiv.org/abs/2305.15316v1 -Gorilla: Large Language Model Connected with Massive APIs,http://arxiv.org/abs/2305.15334v1 -Uncovering and Quantifying Social Biases in Code Generation,http://arxiv.org/abs/2305.15377v1 -TOAST: Transfer Learning via Attention Steering,http://arxiv.org/abs/2305.15542v2 -Unsupervised Semantic Correspondence Using Stable Diffusion,http://arxiv.org/abs/2305.15581v1 -"Revisiting non-English Text Simplification: A Unified Multilingual - Benchmark",http://arxiv.org/abs/2305.15678v1 -"On the Planning Abilities of Large Language Models -- A Critical - Investigation",http://arxiv.org/abs/2305.15771v1 -Linguistic Properties of Truthful Response,http://arxiv.org/abs/2305.15875v2 -"Understanding the Capabilities of Large Language Models for Automated - Planning",http://arxiv.org/abs/2305.16151v1 -"Don't Trust ChatGPT when Your 
Question is not in English: A Study of - Multilingual Abilities and Types of LLMs",http://arxiv.org/abs/2305.16339v2 -AdaPlanner: Adaptive Planning from Feedback with Language Models,http://arxiv.org/abs/2305.16653v1 -Detect Any Shadow: Segment Anything for Video Shadow Detection,http://arxiv.org/abs/2305.16698v1 -Can large language models generate salient negative statements?,http://arxiv.org/abs/2305.16755v2 -Incentive Mechanism for Uncertain Tasks under Differential Privacy,http://arxiv.org/abs/2305.16793v1 -Do GPTs Produce Less Literal Translations?,http://arxiv.org/abs/2305.16806v4 -Songs Across Borders: Singable and Controllable Neural Lyric Translation,http://arxiv.org/abs/2305.16816v1 -"Distributional Reinforcement Learning with Dual Expectile-Quantile - Regression",http://arxiv.org/abs/2305.16877v1 -Volume Singularities in General Relativity,http://arxiv.org/abs/2305.16995v2 -"ControlVideo: Adding Conditional Control for One Shot Text-to-Video - Editing",http://arxiv.org/abs/2305.17098v1 -Entailment as Robust Self-Learner,http://arxiv.org/abs/2305.17197v1 -"Beyond Positive Scaling: How Negation Impacts Scaling Trends of Language - Models",http://arxiv.org/abs/2305.17311v1 -"SwiftSage: A Generative Agent with Fast and Slow Thinking for Complex - Interactive Tasks",http://arxiv.org/abs/2305.17390v1 -AIMS: All-Inclusive Multi-Level Segmentation,http://arxiv.org/abs/2305.17768v1 -"Ask an Expert: Leveraging Language Models to Improve Strategic Reasoning - in Goal-Oriented Dialogue Models",http://arxiv.org/abs/2305.17878v1 -Fairness of ChatGPT,http://arxiv.org/abs/2305.18569v1 -"Policy Gradient Algorithms for Robust MDPs with Non-Rectangular - Uncertainty Sets",http://arxiv.org/abs/2305.19004v2 -"Machine Learning Applications in Cascading Failure Analysis in Power - Systems: A Review",http://arxiv.org/abs/2305.19390v1 -IDAS: Intent Discovery with Abstractive Summarization,http://arxiv.org/abs/2305.19783v1 -"On the effect of angular momentum on the prompt cusp formation via the - gravitational collapse",http://dx.doi.org/10.1016/j.dark.2023.101259 -Self-Verification Improves Few-Shot Clinical Information Extraction,http://arxiv.org/abs/2306.00024v1 -Multilingual Multi-Figurative Language Detection,http://arxiv.org/abs/2306.00121v1 -Automated Annotation with Generative AI Requires Validation,http://arxiv.org/abs/2306.00176v1 -"Diffusion Brush: A Latent Diffusion Model-based Editing Tool for - AI-generated Images",http://arxiv.org/abs/2306.00219v1 -Preference-grounded Token-level Guidance for Language Model Fine-tuning,http://arxiv.org/abs/2306.00398v2 -"Revisiting Event Argument Extraction: Can EAE Models Learn Better When - Being Aware of Event Co-occurrences?",http://arxiv.org/abs/2306.00502v1 -Predicting the Quality of Revisions in Argumentative Writing,http://arxiv.org/abs/2306.00667v1 -In-Context Learning User Simulators for Task-Oriented Dialog Systems,http://arxiv.org/abs/2306.00774v1 -"Interpretable Math Word Problem Solution Generation Via Step-by-step - Planning",http://arxiv.org/abs/2306.00784v1 -Birth of a Transformer: A Memory Viewpoint,http://arxiv.org/abs/2306.00802v1 -"Exploring the Versatility of Zero-Shot CLIP for Interstitial Lung - Disease Classification",http://arxiv.org/abs/2306.01111v2 -"Enhancing the Driver's Comprehension of ADS's System Limitations: An HMI - for Providing Request-to-Intervene Trigger Information",http://arxiv.org/abs/2306.01328v1 -Utilizing ChatGPT to Enhance Clinical Trial Enrollment,http://arxiv.org/abs/2306.02077v1 -Large Language Model 
Augmented Narrative Driven Recommendations,http://arxiv.org/abs/2306.02250v2 -"OWQ: Lessons learned from activation outliers for weight quantization in - large language models",http://arxiv.org/abs/2306.02272v2 -Exposing Bias in Online Communities through Large-Scale Language Models,http://arxiv.org/abs/2306.02294v1 -"Prompt to be Consistent is Better than Self-Consistent? Few-Shot and - Zero-Shot Fact Verification with Pre-trained Language Models",http://arxiv.org/abs/2306.02569v1 -Stable Diffusion is Unstable,http://arxiv.org/abs/2306.02583v2 -Vocoder drift in x-vector-based speaker anonymization,http://arxiv.org/abs/2306.02892v1 -"Guided scenarios with simulated expert personae: a remarkable strategy - to perform cognitive work",http://arxiv.org/abs/2306.03104v1 -"Prompting Large Language Models to Reformulate Queries for Moment - Localization",http://arxiv.org/abs/2306.03422v1 -Automatic Assessment of Oral Reading Accuracy for Reading Diagnostics,http://dx.doi.org/10.21437/Interspeech.2023-1681 -Natural Language Commanding via Program Synthesis,http://arxiv.org/abs/2306.03460v1 -"Augmenting Reddit Posts to Determine Wellness Dimensions impacting - Mental Health",http://arxiv.org/abs/2306.04059v1 -"ChatGPT is fun, but it is not funny! Humor is still challenging Large - Language Models",http://arxiv.org/abs/2306.04563v1 -"Assessing Phrase Break of ESL Speech with Pre-trained Language Models - and Large Language Models",http://arxiv.org/abs/2306.04980v1 -"Energy-Efficient Downlink Semantic Generative Communication with - Text-to-Image Generators",http://arxiv.org/abs/2306.05041v1 -"Robot Task Planning Based on Large Language Model Representing Knowledge - with Directed Graph Structures",http://arxiv.org/abs/2306.05171v1 -SyncDiffusion: Coherent Montage via Synchronized Joint Diffusions,http://arxiv.org/abs/2306.05178v2 -"Multiplication of the orbital angular momentum of phonon polaritons via - sublinear dispersion",http://arxiv.org/abs/2306.05209v1 -"ChatGPT for Us: Preserving Data Privacy in ChatGPT via Dialogue Text - Ambiguation to Expand Mental Health Care Delivery",http://arxiv.org/abs/2306.05552v1 -"Ultrafast internal conversion and photochromism in gas-phase - salicylideneaniline",http://arxiv.org/abs/2306.05645v1 -RePaint-NeRF: NeRF Editting via Semantic Masks and Diffusion Models,http://arxiv.org/abs/2306.05668v1 -The Lagrangian Numerical Relativity code SPHINCS_BSSN_v1.0,http://dx.doi.org/10.3389/fams.2023.1236586 -"AutoSAM: Adapting SAM to Medical Images by Overloading the Prompt - Encoder",http://arxiv.org/abs/2306.06370v1 -Face0: Instantaneously Conditioning a Text-to-Image Model on a Face,http://arxiv.org/abs/2306.06638v1 -Augmenting Greybox Fuzzing with Generative AI,http://arxiv.org/abs/2306.06782v1 -"The BEA 2023 Shared Task on Generating AI Teacher Responses in - Educational Dialogues",http://arxiv.org/abs/2306.06941v1 -Explaining CLIP through Co-Creative Drawings and Interaction,http://arxiv.org/abs/2306.07429v1 -SayTap: Language to Quadrupedal Locomotion,http://arxiv.org/abs/2306.07580v3 -Soft Language Clustering for Multilingual Model Pre-training,http://arxiv.org/abs/2306.07610v1 -"ChatGPT vs Human-authored Text: Insights into Controllable Text - Summarization and Sentence Style Transfer",http://arxiv.org/abs/2306.07799v1 -Questioning the Survey Responses of Large Language Models,http://arxiv.org/abs/2306.07951v2 -"Language models are not naysayers: An analysis of language models on - negation benchmarks",http://arxiv.org/abs/2306.08189v1 -"Assessing the 
Effectiveness of GPT-3 in Detecting False Political - Statements: A Case Study on the LIAR Dataset",http://arxiv.org/abs/2306.08190v1 -On the Robustness of Latent Diffusion Models,http://arxiv.org/abs/2306.08257v1 -Anticipatory Music Transformer,http://arxiv.org/abs/2306.08620v1 -Constructing polylogarithms on higher-genus Riemann surfaces,http://arxiv.org/abs/2306.08644v2 -Top Secrets: Long-Lived ALPs in Top Production,http://arxiv.org/abs/2306.08686v1 -VidEdit: Zero-Shot and Spatially Aware Text-Driven Video Editing,http://arxiv.org/abs/2306.08707v1 -Enhanced Sampling with Machine Learning: A Review,http://arxiv.org/abs/2306.09111v2 -Undetectable Watermarks for Language Models,http://arxiv.org/abs/2306.09194v1 -Propagating Knowledge Updates to LMs Through Distillation,http://arxiv.org/abs/2306.09306v1 -"The Big Data Myth: Using Diffusion Models for Dataset Generation to - Train Deep Detection Models",http://arxiv.org/abs/2306.09762v1 -"Energy-Based Cross Attention for Bayesian Context Update in - Text-to-Image Diffusion Models",http://arxiv.org/abs/2306.09869v2 -"Learning to Summarize and Answer Questions about a Virtual Robot's Past - Actions",http://arxiv.org/abs/2306.09922v1 -"Friend or Foe? Exploring the Implications of Large Language Models on - the Science System",http://arxiv.org/abs/2306.09928v1 -"Assigning AI: Seven Approaches for Students, with Prompts",http://arxiv.org/abs/2306.10052v1 -"Towards social generative AI for education: theory, practices and ethics",http://arxiv.org/abs/2306.10063v1 -"Object counting from aerial remote sensing images: application to - wildlife and marine mammals",http://arxiv.org/abs/2306.10439v1 -MotionGPT: Finetuned LLMs are General-Purpose Motion Generators,http://arxiv.org/abs/2306.10900v1 -Multilingual Few-Shot Learning via Language Model Retrieval,http://arxiv.org/abs/2306.10964v1 -Dual-Gated Fusion with Prefix-Tuning for Multi-Modal Relation Extraction,http://arxiv.org/abs/2306.11020v1 -OpenP5: Benchmarking Foundation Models for Recommendation,http://arxiv.org/abs/2306.11134v1 -"A Novel Counterfactual Data Augmentation Method for Aspect-Based - Sentiment Analysis",http://arxiv.org/abs/2306.11260v3 -"TrustGPT: A Benchmark for Trustworthy and Responsible Large Language - Models",http://arxiv.org/abs/2306.11507v1 -On Compositionality and Improved Training of NADO,http://arxiv.org/abs/2306.11825v1 -Fast Segment Anything,http://arxiv.org/abs/2306.12156v1 -Noble gas functional defect with unusual relaxation pattern in solids,http://arxiv.org/abs/2306.12252v1 -A Multimodal Prototypical Approach for Unsupervised Sound Classification,http://arxiv.org/abs/2306.12300v2 -"Overview of Robust and Multilingual Automatic Evaluation Metrics for - Open-Domain Dialogue Systems at DSTC 11 Track 4",http://arxiv.org/abs/2306.12794v3 -System-Level Natural Language Feedback,http://arxiv.org/abs/2306.13588v1 -"Exploring the Potential of AI-Generated Synthetic Datasets: A Case Study - on Telematics Data with ChatGPT",http://arxiv.org/abs/2306.13700v1 -Retrieving Supporting Evidence for LLMs Generated Answers,http://arxiv.org/abs/2306.13781v1 -When SAM Meets Sonar Images,http://arxiv.org/abs/2306.14109v1 -"Hairy Black Holes: Non-existence of Short Hairs and Bound on Light Ring - Size",http://arxiv.org/abs/2306.14193v1 -"Let's Do a Thought Experiment: Using Counterfactuals to Improve Moral - Reasoning",http://arxiv.org/abs/2306.14308v1 -Addressing Cold Start Problem for End-to-end Automatic Speech Scoring,http://arxiv.org/abs/2306.14310v1 -Product Information Extraction 
using ChatGPT,http://arxiv.org/abs/2306.14921v1 -"Searches for supersymmetric particles with prompt decays with the ATLAS - detector",http://arxiv.org/abs/2306.15014v1 -"Sea Change in Software Development: Economic and Productivity Analysis - of the AI-Powered Developer Lifecycle",http://arxiv.org/abs/2306.15033v1 -"Spherical particle orbits around a rotating black hole in massive - gravity",http://dx.doi.org/10.3390/sym15081485 -Measuring the continuous research impact of a researcher: The Kz index,http://arxiv.org/abs/2306.15677v1 -"Stone Needle: A General Multimodal Large-scale Model Framework towards - Healthcare",http://arxiv.org/abs/2306.16034v1 -"Joint constraint on the jet structure from the short GRB population and - GRB 170817A",http://arxiv.org/abs/2306.16795v1 -"DisasterResponseGPT: Large Language Models for Accelerated Plan of - Action Development in Disaster Response Scenarios",http://arxiv.org/abs/2306.17271v1 -"DeepTagger: Knowledge Enhanced Named Entity Recognition for Web-Based - Ads Queries",http://arxiv.org/abs/2306.17413v1 -"RBSR: Efficient and Flexible Recurrent Network for Burst - Super-Resolution",http://arxiv.org/abs/2306.17595v2 -Training-free Object Counting with Prompts,http://arxiv.org/abs/2307.00038v2 -"Queer People are People First: Deconstructing Sexual Identity - Stereotypes in Large Language Models",http://arxiv.org/abs/2307.00101v1 -"AIGCIQA2023: A Large-scale Image Quality Assessment Database for AI - Generated Images: from the Perspectives of Quality, Authenticity and - Correspondence",http://arxiv.org/abs/2307.00211v2 -"BatGPT: A Bidirectional Autoregessive Talker from Generative Pre-trained - Transformer",http://arxiv.org/abs/2307.00360v2 -Large Language Models Enable Few-Shot Clustering,http://arxiv.org/abs/2307.00524v1 -"CollabKG: A Learnable Human-Machine-Cooperative Information Extraction - Toolkit for (Event) Knowledge Graph Construction",http://arxiv.org/abs/2307.00769v1 -Evaluating Shutdown Avoidance of Language Models in Textual Scenarios,http://arxiv.org/abs/2307.00787v1 -"MVDiffusion: Enabling Holistic Multi-view Image Generation with - Correspondence-Aware Diffusion",http://arxiv.org/abs/2307.01097v4 -"Exploring the In-context Learning Ability of Large Language Model for - Biomedical Concept Linking",http://arxiv.org/abs/2307.01137v1 -"Multilingual Language Models are not Multicultural: A Case Study in - Emotion",http://arxiv.org/abs/2307.01370v2 -Chain of Thought Prompting Elicits Knowledge Augmentation,http://arxiv.org/abs/2307.01640v1 -Jailbroken: How Does LLM Safety Training Fail?,http://arxiv.org/abs/2307.02483v1 -"Building Cooperative Embodied Agents Modularly with Large Language - Models",http://arxiv.org/abs/2307.02485v1 -A luminous precursor in the extremely bright GRB 230307A,http://arxiv.org/abs/2307.02996v2 -Extracting Multi-valued Relations from Language Models,http://arxiv.org/abs/2307.03122v2 -New Early Dark Energy as a solution to the $H_0$ and $S_8$ tensions,http://arxiv.org/abs/2307.03481v1 -"DWReCO at CheckThat! 
2023: Enhancing Subjectivity Detection through - Style-based Data Sampling",http://arxiv.org/abs/2307.03550v1 -SVIT: Scaling up Visual Instruction Tuning,http://arxiv.org/abs/2307.04087v2 -"Towards Automated Cyber Range Design: Characterizing and Matching - Demands to Supplies",http://dx.doi.org/10.1109/CSR57506.2023.10224940 -Search for $K^+$ decays into the $π^+e^+e^-e^+e^-$ final state,http://arxiv.org/abs/2307.04579v2 -MultiQG-TI: Towards Question Generation from Multi-modal Sources,http://arxiv.org/abs/2307.04643v1 -Atmospheric muons at PeV energies in radio neutrino detectors,http://dx.doi.org/10.1088/1475-7516/2023/10/043 -RoCo: Dialectic Multi-Robot Collaboration with Large Language Models,http://arxiv.org/abs/2307.04738v1 -"LaunchpadGPT: Language Model as Music Visualization Designer on - Launchpad",http://arxiv.org/abs/2307.04827v2 -"Unleashing the Potential of Regularization Strategies in Learning with - Noisy Labels",http://arxiv.org/abs/2307.05025v1 -"Summary characteristics for multivariate function-valued spatial point - process attributes",http://arxiv.org/abs/2307.05101v1 -"T2I-CompBench: A Comprehensive Benchmark for Open-world Compositional - Text-to-image Generation",http://arxiv.org/abs/2307.06350v1 -On the Effective Horizon of Inverse Reinforcement Learning,http://arxiv.org/abs/2307.06541v1 -"Unsupervised Calibration through Prior Adaptation for Text - Classification using Large Language Models",http://dx.doi.org/10.26615/issn.2603-2821.2023_002 -"Leveraging Vision-Language Foundation Models for Fine-Grained Downstream - Tasks",http://arxiv.org/abs/2307.06795v1 -"Domain-Agnostic Tuning-Encoder for Fast Personalization of Text-To-Image - Models",http://arxiv.org/abs/2307.06925v1 -C3: Zero-shot Text-to-SQL with ChatGPT,http://arxiv.org/abs/2307.07306v1 -Can Large Language Models Empower Molecular Property Prediction?,http://arxiv.org/abs/2307.07443v1 -"Can I say, now machines can think?",http://arxiv.org/abs/2307.07526v1 -"Controlling periodic Fano resonances of quantum acoustic waves with a - giant atom coupled to microwave waveguide",http://arxiv.org/abs/2307.07949v1 -"Unleashing the Potential of LLMs for Quantum Computing: A Study in - Quantum Architecture Design",http://arxiv.org/abs/2307.08191v1 -Identity-Preserving Aging of Face Images via Latent Diffusion Models,http://arxiv.org/abs/2307.08585v1 -"GEAR: Augmenting Language Models with Generalizable and Efficient Tool - Resolution",http://arxiv.org/abs/2307.08775v1 -Adapting an ASR Foundation Model for Spoken Language Assessment,http://dx.doi.org/10.21437/SLaTE.2023-20 -"ChatSpot: Bootstrapping Multimodal LLMs via Precise Referring - Instruction Tuning",http://arxiv.org/abs/2307.09474v1 -"Unmaking AI Imagemaking: A Methodological Toolkit for Critical - Investigation",http://arxiv.org/abs/2307.09753v1 -LDP: Language-driven Dual-Pixel Image Defocus Deblurring Network,http://arxiv.org/abs/2307.09815v2 -XSkill: Cross Embodiment Skill Discovery,http://arxiv.org/abs/2307.09955v2 -Survey on Controlable Image Synthesis with Deep Learning,http://arxiv.org/abs/2307.10275v1 -TokenFlow: Consistent Diffusion Features for Consistent Video Editing,http://arxiv.org/abs/2307.10373v2 -"Language-based Action Concept Spaces Improve Video Self-Supervised - Learning",http://arxiv.org/abs/2307.10922v2 -MediaGPT : A Large Language Model For Chinese Media,http://arxiv.org/abs/2307.10930v2 -"Generator-Retriever-Generator: A Novel Approach to Open-domain Question - Answering",http://dx.doi.org/10.1186/s40537-023-00802-8 -"Prompt-Based Zero- 
and Few-Shot Node Classification: A Multimodal - Approach",http://arxiv.org/abs/2307.11572v1 -"RLCD: Reinforcement Learning from Contrast Distillation for Language - Model Alignment",http://arxiv.org/abs/2307.12950v2 -"Limits on dark matter annihilation in prompt cusps from the isotropic - gamma-ray background",http://arxiv.org/abs/2307.13023v1 -How to use LLMs for Text Analysis,http://arxiv.org/abs/2307.13106v1 -Nonlinear effects in many-body van der Waals interactions,http://arxiv.org/abs/2307.13607v1 -"Large Language Models are Competitive Near Cold-start Recommenders for - Language- and Item-based Preferences",http://arxiv.org/abs/2307.14225v1 -How Can Large Language Models Help Humans in Design and Manufacturing?,http://arxiv.org/abs/2307.14377v1 -"Spatial orientation of the fission fragment intrinsic spins and their - correlations",http://arxiv.org/abs/2307.14455v1 -Angular dependence of the atmospheric neutrino flux with IceCube data,http://arxiv.org/abs/2307.14728v1 -A LLM Assisted Exploitation of AI-Guardian,http://arxiv.org/abs/2307.15008v1 -EnSolver: Uncertainty-Aware CAPTCHA Solver Using Deep Ensembles,http://arxiv.org/abs/2307.15180v1 -"Investigating the Learning Behaviour of In-context Learning: A - Comparison with Supervised Learning",http://arxiv.org/abs/2307.15411v2 -LLMs4OL: Large Language Models for Ontology Learning,http://arxiv.org/abs/2307.16648v2 -Controlling Geometric Abstraction and Texture for Artistic Images,http://arxiv.org/abs/2308.00148v1 -Cosmic time and the initial state of the universe,http://arxiv.org/abs/2308.00371v1 -"SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step - Reasoning",http://arxiv.org/abs/2308.00436v3 -Structural Embeddings of Tools for Large Language Models,http://arxiv.org/abs/2308.00447v1 -"Detecting Cloud Presence in Satellite Images Using the RGB-based CLIP - Vision-Language Model",http://arxiv.org/abs/2308.00541v1 -"Constraining decaying very heavy dark matter from galaxy clusters with - 14 year Fermi-LAT data",http://arxiv.org/abs/2308.00589v1 -"Teaching Smaller Language Models To Generalise To Unseen Compositional - Questions",http://arxiv.org/abs/2308.00946v2 -Do Multilingual Language Models Think Better in English?,http://arxiv.org/abs/2308.01223v1 -Holy Grail 2.0: From Natural Language to Constraint Models,http://arxiv.org/abs/2308.01589v1 -"Baby's CoThought: Leveraging Large Language Models for Enhanced - Reasoning in Compact Models",http://arxiv.org/abs/2308.01684v2 -Ambient Adventures: Teaching ChatGPT on Developing Complex Stories,http://arxiv.org/abs/2308.01734v1 -"The Capability of Large Language Models to Measure Psychiatric - Functioning",http://arxiv.org/abs/2308.01834v1 -Thespian: Multi-Character Text Role-Playing Game Agents,http://arxiv.org/abs/2308.01872v1 -Training Data Protection with Compositional Diffusion Models,http://arxiv.org/abs/2308.01937v2 -Learning to Paraphrase Sentences to Different Complexity Levels,http://arxiv.org/abs/2308.02226v1 -"Performance of Large Language Models in a Computer Science Degree - Program",http://arxiv.org/abs/2308.02432v1 -Language models as master equation solvers,http://arxiv.org/abs/2308.02514v1 -"EduChat: A Large-Scale Language Model-based Chatbot System for - Intelligent Education",http://arxiv.org/abs/2308.02773v1 -Pre-Trained Large Language Models for Industrial Control,http://arxiv.org/abs/2308.03028v1 -"Automatically Correcting Large Language Models: Surveying the landscape - of diverse self-correction strategies",http://arxiv.org/abs/2308.03188v2 -"A 
Clustering Approach for Remotely Sensed Data in the Western United - States",http://arxiv.org/abs/2308.03227v1 -"Balanced Face Dataset: Guiding StyleGAN to Generate Labeled Synthetic - Face Image Dataset for Underrepresented Group",http://arxiv.org/abs/2308.03495v1 -ChatSim: Underwater Simulation with Natural Language Prompting,http://arxiv.org/abs/2308.04029v2 -Gentopia: A Collaborative Platform for Tool-Augmented LLMs,http://arxiv.org/abs/2308.04030v1 -FLIRT: Feedback Loop In-context Red Teaming,http://arxiv.org/abs/2308.04265v1 -"Big Bang, Low Bar -- Risk Assessment in the Public Arena",http://arxiv.org/abs/2308.04440v2 -"Simulation of the 2E Technique on neutron multiplicity measurement as a - function of fragment mass in spontaneous fission of 252Cf",http://arxiv.org/abs/2308.04550v1 -"Sci-CoT: Leveraging Large Language Models for Enhanced Knowledge - Distillation in Small Models for Scientific QA",http://arxiv.org/abs/2308.04679v1 -CLEVA: Chinese Language Models EVAluation Platform,http://arxiv.org/abs/2308.04813v2 -"Exploring Ligand-to-Metal Charge-transfer States in the - Photo-Ferrioxalate System using Excited-State Specific Optimization",http://arxiv.org/abs/2308.04932v1 -"Usability Assessment of the OnlyKey Hardware Two-Factor Authentication - Key Among Low Vision or Blind Users",http://arxiv.org/abs/2308.05582v1 -"Optimizing transformer-based machine translation model for single GPU - training: a hyperparameter ablation study",http://arxiv.org/abs/2308.06017v1 -"Masked-Attention Diffusion Guidance for Spatially Controlling - Text-to-Image Generation",http://arxiv.org/abs/2308.06027v1 -Improving Joint Speech-Text Representations Without Alignment,http://arxiv.org/abs/2308.06125v1 -Self-Alignment with Instruction Backtranslation,http://arxiv.org/abs/2308.06259v2 -Three Ways of Using Large Language Models to Evaluate Chat,http://arxiv.org/abs/2308.06502v1 -"Determining the Fundamental Failure Modes in Ni-rich Lithium Ion Battery - Cathodes",http://arxiv.org/abs/2308.06537v1 -"Ground Manipulator Primitive Tasks to Executable Actions using Large - Language Models",http://arxiv.org/abs/2308.06810v2 -"CodeHelp: Using Large Language Models with Guardrails for Scalable - Support in Programming Classes",http://arxiv.org/abs/2308.06921v1 -Approximating Human-Like Few-shot Learning with GPT-based Compression,http://arxiv.org/abs/2308.06942v1 -ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate,http://arxiv.org/abs/2308.07201v1 -AI Text-to-Behavior: A Study In Steerability,http://arxiv.org/abs/2308.07326v1 -RIFO: Pushing the Efficiency of Programmable Packet Schedulers,http://arxiv.org/abs/2308.07442v1 -"ST-MLP: A Cascaded Spatio-Temporal Linear Framework with - Channel-Independence Strategy for Traffic Forecasting",http://arxiv.org/abs/2308.07496v1 -"Domain Adaptation via Minimax Entropy for Real/Bogus Classification of - Astronomical Alerts",http://arxiv.org/abs/2308.07538v1 -"TEST: Text Prototype Aligned Embedding to Activate LLM's Ability for - Time Series",http://arxiv.org/abs/2308.08241v1 -"Chat-3D: Data-efficiently Tuning Large Language Model for Universal - Dialogue of 3D Scenes",http://arxiv.org/abs/2308.08769v1 -Exploring Demonstration Ensembling for In-context Learning,http://arxiv.org/abs/2308.08780v2 -"Language-enhanced RNR-Map: Querying Renderable Neural Radiance Field - maps with natural language",http://arxiv.org/abs/2308.08854v1 -Linearity of Relation Decoding in Transformer Language Models,http://arxiv.org/abs/2308.09124v1 -"Enhancing Reasoning 
Capabilities of Large Language Models: A Graph-Based - Verification Approach",http://arxiv.org/abs/2308.09267v3 -SAMedOCT: Adapting Segment Anything Model (SAM) for Retinal OCT,http://arxiv.org/abs/2308.09331v2 -ChatHaruhi: Reviving Anime Character in Reality via Large Language Model,http://arxiv.org/abs/2308.09597v1 -"On the Effectiveness of LayerNorm Tuning for Continual Learning in - Vision Transformers",http://arxiv.org/abs/2308.09610v1 -A Theory of Topological Derivatives for Inverse Rendering of Geometry,http://arxiv.org/abs/2308.09865v1 -TDG: Text-guided Domain Generalization,http://arxiv.org/abs/2308.09931v1 -"Unilaterally Aggregated Contrastive Learning with Hierarchical - Augmentation for Anomaly Detection",http://arxiv.org/abs/2308.10155v1 -"FoodGPT: A Large Language Model in Food Testing Domain with Incremental - Pre-training and Knowledge Graph Prompt",http://arxiv.org/abs/2308.10173v1 -Prediction of Pneumonia and COVID-19 Using Deep Neural Networks,http://arxiv.org/abs/2308.10368v1 -"FairBench: A Four-Stage Automatic Framework for Detecting Stereotypes - and Biases in Large Language Models",http://arxiv.org/abs/2308.10397v1 -"Evaluating Large Language Models on Graphs: Performance Insights and - Comparative Analysis",http://arxiv.org/abs/2308.11224v2 -"Identification of time-correlated neutrino clusters in populations of - astrophysical transient sources",http://dx.doi.org/10.22323/1.444.1507 -Global Analysis of the ALP Effective Theory,http://arxiv.org/abs/2308.11703v1 -"KnowledGPT: Enhancing Large Language Models with Retrieval and Storage - Access on Knowledge Bases",http://arxiv.org/abs/2308.11761v1 -Blending-NeRF: Text-Driven Localized Editing in Neural Radiance Fields,http://arxiv.org/abs/2308.11974v2 -Attention-Based Acoustic Feature Fusion Network for Depression Detection,http://arxiv.org/abs/2308.12478v1 -"Fall Detection using Knowledge Distillation Based Long short-term memory - for Offline Embedded and Low Power Devices",http://arxiv.org/abs/2308.12481v1 -"Towards Communication-Efficient Model Updating for On-Device - Session-Based Recommendation",http://arxiv.org/abs/2308.12777v1 -Towards Realistic Unsupervised Fine-tuning with CLIP,http://arxiv.org/abs/2308.12919v1 -Dense Text-to-Image Generation with Attention Modulation,http://arxiv.org/abs/2308.12964v1 -"Joint Modeling of Feature, Correspondence, and a Compressed Memory for - Video Object Segmentation",http://arxiv.org/abs/2308.13505v1 -"A unified perspective on Poincaré and Galilei relativity: I. 
Special - relativity",http://arxiv.org/abs/2308.13529v1 -ORES: Open-vocabulary Responsible Visual Synthesis,http://arxiv.org/abs/2308.13785v1 -Exploring Large Language Models for Knowledge Graph Completion,http://arxiv.org/abs/2308.13916v3 -SalesBot 2.0: A Human-Like Intent-Guided Chit-Chat Dataset,http://arxiv.org/abs/2308.14266v1 -Machine Unlearning Methodology base on Stochastic Teacher Network,http://arxiv.org/abs/2308.14322v1 -Using ChatGPT as a Static Application Security Testing Tool,http://arxiv.org/abs/2308.14434v1 -360-Degree Panorama Generation from Few Unregistered NFoV Images,http://dx.doi.org/10.1145/3581783.3612508 -MagicAvatar: Multimodal Avatar Generation and Animation,http://arxiv.org/abs/2308.14748v1 -"Quantifying and Analyzing Entity-level Memorization in Large Language - Models",http://arxiv.org/abs/2308.15727v1 -Internal activities in a solar filament and heating to its threads,http://arxiv.org/abs/2308.15747v1 -"Prompting Vision Language Model with Knowledge from Large Language Model - for Knowledge-Based VQA",http://arxiv.org/abs/2308.15851v1 -"Zero-shot Inversion Process for Image Attribute Editing with Diffusion - Models",http://arxiv.org/abs/2308.15854v2 -"A common origin of multi-messenger spectral anomaly of galactic cosmic - rays",http://arxiv.org/abs/2308.15866v1 -"Edge-Assisted Lightweight Region-of-Interest Extraction and Transmission - for Vehicle Perception",http://arxiv.org/abs/2308.16417v1 -MVDream: Multi-view Diffusion for 3D Generation,http://arxiv.org/abs/2308.16512v2 -Language-Conditioned Path Planning,http://arxiv.org/abs/2308.16893v1 -"DiffuGen: Adaptable Approach for Generating Labeled Image Datasets using - Stable Diffusion Models",http://arxiv.org/abs/2309.00248v1 -Explainability for Large Language Models: A Survey,http://arxiv.org/abs/2309.01029v2 -UniSA: Unified Generative Framework for Sentiment Analysis,http://arxiv.org/abs/2309.01339v1 -Open Sesame! 
Universal Black Box Jailbreaking of Large Language Models,http://arxiv.org/abs/2309.01446v2 -HAGRID -- High Accuracy GRB Rapid Inference with Deep learning,http://dx.doi.org/10.22323/1.444.0724 -"Marked spatial point processes: current state and extensions to point - processes on linear networks",http://arxiv.org/abs/2309.01511v1 -"Recognition of Heat-Induced Food State Changes by Time-Series Use of - Vision-Language Model for Cooking Robot",http://arxiv.org/abs/2309.01528v2 -GRASS: Unified Generation Model for Speech-to-Semantic Tasks,http://arxiv.org/abs/2309.02780v2 -"Knowledge Solver: Teaching LLMs to Search for Domain Knowledge from - Knowledge Graphs",http://arxiv.org/abs/2309.03118v1 -Gender-specific Machine Translation with Large Language Models,http://arxiv.org/abs/2309.03175v1 -Feature Enhancer Segmentation Network (FES-Net) for Vessel Segmentation,http://arxiv.org/abs/2309.03535v1 -Chasing Consistency in Text-to-3D Generation from a Single Image,http://arxiv.org/abs/2309.03599v1 -Exploring an LM to generate Prolog Predicates from Mathematics Questions,http://arxiv.org/abs/2309.03667v2 -Learning from Demonstration via Probabilistic Diagrammatic Teaching,http://arxiv.org/abs/2309.03835v2 -Zero-Shot Audio Captioning via Audibility Guidance,http://arxiv.org/abs/2309.03884v1 -UQ at #SMM4H 2023: ALEX for Public Health Analysis with Social Media,http://arxiv.org/abs/2309.04213v2 -Retrieving Evidence from EHRs with LLMs: Possibilities and Challenges,http://arxiv.org/abs/2309.04550v1 -"MMHQA-ICL: Multimodal In-context Learning for Hybrid Question Answering - over Text, Tables and Images",http://arxiv.org/abs/2309.04790v1 -Efficient Emotional Adaptation for Audio-Driven Talking-Head Generation,http://arxiv.org/abs/2309.04946v2 -"From Artificially Real to Real: Leveraging Pseudo Data from Large - Language Models for Low-Resource Molecule Discovery",http://arxiv.org/abs/2309.05203v1 -Evaluating the Deductive Competence of Large Language Models,http://arxiv.org/abs/2309.05452v1 -Zero-Shot Co-salient Object Detection Framework,http://arxiv.org/abs/2309.05499v2 -"Characterizing Latent Perspectives of Media Houses Towards Public - Figures",http://arxiv.org/abs/2309.06112v1 -"Identifying multiwavelength counterparts to astrophysical neutrino - events",http://dx.doi.org/10.22323/1.444.1473 -"Leveraging Large Language Models and Weak Supervision for Social Media - data annotation: an evaluation using COVID-19 self-reported vaccination - tweets",http://arxiv.org/abs/2309.06503v1 -"Exploring the Benefits of Differentially Private Pre-training and - Parameter-Efficient Fine-tuning for Table Transformers",http://arxiv.org/abs/2309.06526v1 -"Text Encoders Lack Knowledge: Leveraging Generative LLMs for - Domain-Specific Semantic Textual Similarity",http://arxiv.org/abs/2309.06541v1 -"A Novel Low-Cost, Recyclable, Easy-to-Build Robot Blimp For Transporting - Supplies in Hard-to-Reach Locations",http://arxiv.org/abs/2309.06682v1 -PILOT: A Pre-Trained Model-Based Continual Learning Toolbox,http://arxiv.org/abs/2309.07117v1 -Unbiased Face Synthesis With Diffusion Models: Are We There Yet?,http://arxiv.org/abs/2309.07277v1 -Less is More for Long Document Summary Evaluation by LLMs,http://arxiv.org/abs/2309.07382v1 -"EXFOR-based simultaneous evaluation for neutron-induced fission cross - section of plutonium-242",http://dx.doi.org/10.1080/00223131.2023.2267070 -"Detecting Misinformation with LLM-Predicted Credibility Signals and Weak - Supervision",http://arxiv.org/abs/2309.07601v1 -"Computer says 'no': 
Exploring systemic hiring bias in ChatGPT using an - audit approach",http://arxiv.org/abs/2309.07664v1 -Tree of Uncertain Thoughts Reasoning for Large Language Models,http://arxiv.org/abs/2309.07694v1 -"CATfOOD: Counterfactual Augmented Training for Improving Out-of-Domain - Performance and Calibration",http://arxiv.org/abs/2309.07822v2 -"Safety-Tuned LLaMAs: Lessons From Improving the Safety of Large Language - Models that Follow Instructions",http://arxiv.org/abs/2309.07875v2 -"Measuring the Quality of Text-to-Video Model Outputs: Metrics and - Dataset",http://arxiv.org/abs/2309.08009v1 -Empowering Private Tutoring by Chaining Large Language Models,http://arxiv.org/abs/2309.08112v1 -"Casteist but Not Racist? Quantifying Disparities in Large Language Model - Bias between India and the West",http://arxiv.org/abs/2309.08573v1 -"ICLEF: In-Context Learning with Expert Feedback for Explainable Style - Transfer",http://arxiv.org/abs/2309.08583v1 -Anchor Points: Benchmarking Models with Much Fewer Examples,http://arxiv.org/abs/2309.08638v1 -Stack-and-Delay: a new codebook pattern for music generation,http://arxiv.org/abs/2309.08804v1 -"X-PARADE: Cross-Lingual Textual Entailment and Information Divergence - across Paragraphs",http://arxiv.org/abs/2309.08873v1 -A Statistical Turing Test for Generative Models,http://arxiv.org/abs/2309.08913v1 -"Multimodal Multi-Hop Question Answering Through a Conversation Between - Tools and Efficiently Finetuned Large Language Models",http://arxiv.org/abs/2309.08922v1 -Universal Metric Learning with Parameter-Efficient Transfer Learning,http://arxiv.org/abs/2309.08944v1 -"CLIPUNetr: Assisting Human-robot Interface for Uncalibrated Visual - Servoing Control with CLIP-driven Referring Expression Segmentation",http://arxiv.org/abs/2309.09183v1 -UGC: Unified GAN Compression for Efficient Image-to-Image Translation,http://arxiv.org/abs/2309.09310v1 -"Distributional Estimation of Data Uncertainty for Surveillance Face - Anti-spoofing",http://arxiv.org/abs/2309.09485v1 -Adapting Large Language Models via Reading Comprehension,http://arxiv.org/abs/2309.09530v1 -CB-Whisper: Contextual Biasing Whisper using TTS-based Keyword Spotting,http://arxiv.org/abs/2309.09552v1 -Instruction-Following Speech Recognition,http://arxiv.org/abs/2309.09843v1 -Context is Environment,http://arxiv.org/abs/2309.09888v2 -Neutrinos from Earth-Bound Dark Matter Annihilation,http://arxiv.org/abs/2309.10032v1 -"Natural Language Dataset Generation Framework for Visualizations Powered - by Large Language Models",http://arxiv.org/abs/2309.10245v2 -"Prompt, Condition, and Generate: Classification of Unsupported Claims - with In-Context Learning",http://arxiv.org/abs/2309.10359v1 -In-Context Learning for Text Classification with Many Labels,http://arxiv.org/abs/2309.10954v1 -Investigating Personalization Methods in Text to Music Generation,http://arxiv.org/abs/2309.11140v1 -Simulation of the response of SiPMs Part II: with saturation effects,http://arxiv.org/abs/2309.11153v1 -"Retrieve-Rewrite-Answer: A KG-to-Text Enhanced LLMs Framework for - Knowledge Graph Question Answering",http://arxiv.org/abs/2309.11206v2 -CPLLM: Clinical Prediction with Large Language Models,http://arxiv.org/abs/2309.11295v1 -"DISC-LawLLM: Fine-tuning Large Language Models for Intelligent Legal - Services",http://arxiv.org/abs/2309.11325v2 -"Generative Agent-Based Modeling: Unveiling Social System Dynamics - through Coupling Mechanistic Models with Generative Artificial Intelligence",http://arxiv.org/abs/2309.11456v1 -A Neural 
TTS System with Parallel Prosody Transfer from Unseen Speakers,http://dx.doi.org/10.21437/Interspeech.2023-1032 -"SAM-OCTA: A Fine-Tuning Strategy for Applying Foundation Model to OCTA - Image Segmentation Tasks",http://arxiv.org/abs/2309.11758v1 -"Privacy-Preserving In-Context Learning with Differentially Private - Few-Shot Generation",http://arxiv.org/abs/2309.11765v1 -"DimCL: Dimensional Contrastive Learning For Improving Self-Supervised - Learning",http://dx.doi.org/10.1109/ACCESS.2023.3236087 -"Evaluating Large Language Models for Document-grounded Response - Generation in Information-Seeking Dialogues",http://arxiv.org/abs/2309.11838v1 -"Reranking for Natural Language Generation from Logical Forms: A Study - based on Large Language Models",http://arxiv.org/abs/2309.12294v1 -ForceSight: Text-Guided Mobile Manipulation with Visual-Force Goals,http://arxiv.org/abs/2309.12312v2 -"Cultural Alignment in Large Language Models: An Explanatory Analysis - Based on Hofstede's Cultural Dimensions",http://arxiv.org/abs/2309.12342v1 -Studying and improving reasoning in humans and machines,http://arxiv.org/abs/2309.12485v1 -Automatic Answerability Evaluation for Question Generation,http://arxiv.org/abs/2309.12546v1 -Insights into the properties of GRBs with TeV emission,http://arxiv.org/abs/2309.12789v1 -"BAMBOO: A Comprehensive Benchmark for Evaluating Long Text Modeling - Capacities of Large Language Models",http://arxiv.org/abs/2309.13345v1 -"Grounding Description-Driven Dialogue State Trackers with - Knowledge-Seeking Turns",http://arxiv.org/abs/2309.13448v1 -"A SAM-based Solution for Hierarchical Panoptic Segmentation of Crops and - Weeds Competition",http://arxiv.org/abs/2309.13578v1 -"Does the ""most sinfully decadent cake ever"" taste good? Answering Yes/No - Questions from Figurative Contexts",http://arxiv.org/abs/2309.13748v1 -Towards General-Purpose Text-Instruction-Guided Voice Conversion,http://arxiv.org/abs/2309.14324v1 -"Physics of Language Models: Part 3.2, Knowledge Manipulation",http://arxiv.org/abs/2309.14402v1 -Depolarized Holography with Polarization-multiplexing Metasurface,http://arxiv.org/abs/2309.14668v1 -"Legal Question-Answering in the Indian Context: Efficacy, Challenges, - and Potential of Modern AI Models",http://arxiv.org/abs/2309.14735v2 -"A Democratic Platform for Engaging with Disabled Community in Generative - AI Development",http://arxiv.org/abs/2309.14921v1 -"Attention Satisfies: A Constraint-Satisfaction Lens on Factual Errors of - Language Models",http://arxiv.org/abs/2309.15098v1 -ChatCounselor: A Large Language Models for Mental Health Support,http://arxiv.org/abs/2309.15461v1 -Ultra High Energy Cosmic Rays from Tidal Disruption Events,http://arxiv.org/abs/2309.15644v2 -"How to Catch an AI Liar: Lie Detection in Black-Box LLMs by Asking - Unrelated Questions",http://arxiv.org/abs/2309.15840v1 -"MedEdit: Model Editing for Medical Question Answering with External - Knowledge Bases",http://arxiv.org/abs/2309.16035v1 -Can the Query-based Object Detector Be Designed with Fewer Stages?,http://arxiv.org/abs/2309.16306v1 -Can LLMs Effectively Leverage Graph Structural Information: When and Why,http://arxiv.org/abs/2309.16595v2 -"KV Inversion: KV Embeddings Learning for Text-Conditioned Real Image - Action Editing",http://arxiv.org/abs/2309.16608v1 -"Depthwise Hyperparameter Transfer in Residual Networks: Dynamics and - Scaling Limit",http://arxiv.org/abs/2309.16620v1 -"I Wish to Have an Argument: Argumentative Reasoning in Large Language - 
Models",http://arxiv.org/abs/2309.16938v1 -"LLM-Deliberation: Evaluating LLMs with Interactive Multi-Agent - Negotiation Games",http://arxiv.org/abs/2309.17234v1 -Text-image Alignment for Diffusion-based Perception,http://arxiv.org/abs/2310.00031v2 -"A Prefrontal Cortex-inspired Architecture for Planning in Large Language - Models",http://arxiv.org/abs/2310.00194v1 -RelBERT: Embedding Relations with Language Models,http://arxiv.org/abs/2310.00299v2 -"In-Context Learning in Large Language Models: A Neuroscience-inspired - Analysis of Representations",http://arxiv.org/abs/2310.00313v2 -"Comics for Everyone: Generating Accessible Text Descriptions for Comic - Strips",http://arxiv.org/abs/2310.00698v1 -No Offense Taken: Eliciting Offensiveness from Language Models,http://arxiv.org/abs/2310.00892v1 -"uSee: Unified Speech Enhancement and Editing with Conditional Diffusion - Models",http://arxiv.org/abs/2310.00900v1 -"Automated Evaluation of Classroom Instructional Support with LLMs and - BoWs: Connecting Global Predictions to Specific Feedback",http://arxiv.org/abs/2310.01132v1 -Detection of long-lasting aurora-like radio emission above a sunspot,http://arxiv.org/abs/2310.01240v1 -On the Generalization of Training-based ChatGPT Detection Methods,http://arxiv.org/abs/2310.01307v2 -"SYRAC: Synthesize, Rank, and Count",http://arxiv.org/abs/2310.01662v3 -"One model to rule them all ? Towards End-to-End Joint Speaker - Diarization and Speech Recognition",http://arxiv.org/abs/2310.01688v1 -Adaptive Functional Principal Component Analysis,http://arxiv.org/abs/2310.01760v1 -"Navigating Cultural Chasms: Exploring and Unlocking the Cultural POV of - Text-To-Image Models",http://arxiv.org/abs/2310.01929v1 -Language Models Represent Space and Time,http://arxiv.org/abs/2310.02207v1 -"Ophiuchus: Scalable Modeling of Protein Structures through Hierarchical - Coarse-graining SO(3)-Equivariant Autoencoders",http://arxiv.org/abs/2310.02508v1 -The philosophical problems of implementing superselection rules,http://arxiv.org/abs/2310.03014v1 -Multimodal Question Answering for Unified Information Extraction,http://arxiv.org/abs/2310.03017v1 -Misusing Tools in Large Language Models With Visual Adversarial Examples,http://arxiv.org/abs/2310.03185v1 -"Can Language Models Employ the Socratic Method? 
Experiments with Code - Debugging",http://arxiv.org/abs/2310.03210v1 -"Digital Twin-Empowered Smart Attack Detection System for 6G Edge of - Things Networks",http://arxiv.org/abs/2310.03554v1 -HeaP: Hierarchical Policies for Web Actions using LLMs,http://arxiv.org/abs/2310.03720v1 -Improved Baselines with Visual Instruction Tuning,http://arxiv.org/abs/2310.03744v1 -"Bridging Low-level Geometry to High-level Concepts in Visual Servoing of - Robot Manipulation Task Using Event Knowledge Graphs and Vision-Language - Models",http://arxiv.org/abs/2310.03932v1 -A Language-Agent Approach to Formal Theorem-Proving,http://arxiv.org/abs/2310.04353v1 -"Beyond Text: A Deep Dive into Large Language Models' Ability on - Understanding Graph Data",http://arxiv.org/abs/2310.04944v1 -"Walking Down the Memory Maze: Beyond Context Limit through Interactive - Reading",http://arxiv.org/abs/2310.05029v1 -"Unleashing the Multilingual Encoder Potential: Boosting Zero-Shot - Performance via Probability Calibration",http://arxiv.org/abs/2310.05069v2 -"MenatQA: A New Dataset for Testing the Temporal Comprehension and - Reasoning Abilities of Large Language Models",http://arxiv.org/abs/2310.05157v1 -An Investigation of LLMs' Inefficacy in Understanding Converse Relations,http://arxiv.org/abs/2310.05163v1 -Towards Optimizing with Large Language Models,http://arxiv.org/abs/2310.05204v1 -What do larger image classifiers memorise?,http://arxiv.org/abs/2310.05337v1 -"Explaining the Complex Task Reasoning of Large Language Models with - Template-Content Structure",http://arxiv.org/abs/2310.05452v1 -"Terminology-Aware Translation with Constrained Decoding and Large - Language Model Prompting",http://arxiv.org/abs/2310.05824v1 -Transformers and Large Language Models for Chemistry and Drug Discovery,http://arxiv.org/abs/2310.06083v1 -"OptiMUS: Optimization Modeling Using mip Solvers and large language - models",http://arxiv.org/abs/2310.06116v1 -Hexa: Self-Improving for Knowledge-Grounded Dialogue System,http://arxiv.org/abs/2310.06404v2 -"The Limits of ChatGPT in Extracting Aspect-Category-Opinion-Sentiment - Quadruples: A Comparative Analysis",http://arxiv.org/abs/2310.06502v1 -"ObjectComposer: Consistent Generation of Multiple Objects Without - Fine-tuning",http://arxiv.org/abs/2310.06968v1 -Extended Wigner's friend paradoxes do not require nonlocal correlations,http://arxiv.org/abs/2310.06976v1 -"Violation of Expectation via Metacognitive Prompting Reduces Theory of - Mind Prediction Error in Large Language Models",http://arxiv.org/abs/2310.06983v1 -LLM4Vis: Explainable Visualization Recommendation using ChatGPT,http://arxiv.org/abs/2310.07652v2 -Composite Backdoor Attacks Against Large Language Models,http://arxiv.org/abs/2310.07676v1 -"To Build Our Future, We Must Know Our Past: Contextualizing Paradigm - Shifts in Natural Language Processing",http://arxiv.org/abs/2310.07715v1 -Interpretable Diffusion via Information Decomposition,http://arxiv.org/abs/2310.07972v1 -"Harnessing Large Language Models' Empathetic Response Generation - Capabilities for Online Mental Health Counselling Support",http://arxiv.org/abs/2310.08017v1 -"Exploring Large Language Models for Multi-Modal Out-of-Distribution - Detection",http://arxiv.org/abs/2310.08027v1 -"Regularity from $p$-harmonic potentials to $\infty$-harmonic potentials - in convex ring domains",http://arxiv.org/abs/2310.08093v2 -Fine-Grained Annotation for Face Anti-Spoofing,http://arxiv.org/abs/2310.08142v1 -"Would you trust a vehicle merging into your lane? 
-"Towards Better Evaluation of Instruction-Following: A Case-Study in
-  Summarization",http://arxiv.org/abs/2310.08394v2
-LLM-augmented Preference Learning from Natural Language,http://arxiv.org/abs/2310.08523v1
-"Transformers as Decision Makers: Provable In-Context Reinforcement
-  Learning via Supervised Pretraining",http://arxiv.org/abs/2310.08566v1
-Predicting Lung Cancer's Metastats' Locations Using Bioclinical Model,http://arxiv.org/abs/2310.08596v1
-"""Im not Racist but..."": Discovering Bias in the Internal Knowledge of
-  Large Language Models",http://arxiv.org/abs/2310.08780v1
-Probing axion-like particles at the Electron-Ion Collider,http://arxiv.org/abs/2310.08827v1
-"Hypernymy Understanding Evaluation of Text-to-Image Models via WordNet
-  Hierarchy",http://arxiv.org/abs/2310.09247v1
-"Topological Data Analysis in smart manufacturing processes -- A survey
-  on the state of the art",http://arxiv.org/abs/2310.09319v1
-"Estimating Uncertainty in Multimodal Foundation Models using Public
-  Internet Data",http://arxiv.org/abs/2310.09926v1
-Character-LLM: A Trainable Agent for Role-Playing,http://arxiv.org/abs/2310.10158v1
-"DemoNSF: A Multi-task Demonstration-based Generative Framework for Noisy
-  Slot Filling Task",http://arxiv.org/abs/2310.10169v1
-"Battle of the Large Language Models: Dolly vs LLaMA vs Vicuna vs Guanaco
-  vs Bard vs ChatGPT -- A Text-to-SQL Parsing Comparison",http://arxiv.org/abs/2310.10190v1
-Scene Graph Conditioning in Latent Diffusion,http://arxiv.org/abs/2310.10338v1
-Contextual Data Augmentation for Task-Oriented Dialog Systems,http://arxiv.org/abs/2310.10380v1
-"DemoSG: Demonstration-enhanced Schema-guided Generation for Low-resource
-  Event Extraction",http://arxiv.org/abs/2310.10481v1
-UNO-DST: Leveraging Unlabelled Data in Zero-Shot Dialogue State Tracking,http://arxiv.org/abs/2310.10492v1
-"Evaluation and improvement of Segment Anything Model for interactive
-  histopathology image segmentation",http://arxiv.org/abs/2310.10493v1
-"Utilising a Large Language Model to Annotate Subject Metadata: A Case
-  Study in an Australian National Research Data Catalogue",http://arxiv.org/abs/2310.11318v1
-"An Empirical Study of Translation Hypothesis Ensembling with Large
-  Language Models",http://arxiv.org/abs/2310.11430v1
-"MAGNIFICo: Evaluating the In-Context Learning Ability of Large Language
-  Models to Generalize to Novel Interpretations",http://arxiv.org/abs/2310.11634v1
-"Zero-shot Faithfulness Evaluation for Text Summarization with Foundation
-  Language Model",http://arxiv.org/abs/2310.11648v1
-"A Comprehensive Evaluation of Large Language Models on Legal Judgment
-  Prediction",http://arxiv.org/abs/2310.11761v1
-"A Tale of Pronouns: Interpretability Informs Gender Bias Mitigation for
-  Fairer Instruction-Tuned Machine Translation",http://arxiv.org/abs/2310.12127v1
-"PrivInfer: Privacy-Preserving Inference for Black-box Large Language
-  Model",http://arxiv.org/abs/2310.12214v3
-"Does Quarkonia Suppression serve as a probe for the deconfinement in
-  small systems?",http://arxiv.org/abs/2310.12267v1
-"Not All Countries Celebrate Thanksgiving: On the Cultural Dominance in
-  Large Language Models",http://arxiv.org/abs/2310.12481v1
-"Is ChatGPT a Financial Expert? Evaluating Language Models on Financial
-  Natural Language Processing",http://arxiv.org/abs/2310.12664v1
-Label-Aware Automatic Verbalizer for Few-Shot Text Classification,http://arxiv.org/abs/2310.12778v1
-Are Large Language Models Geospatially Knowledgeable?,http://arxiv.org/abs/2310.13002v1
-Compositional preference models for aligning LMs,http://arxiv.org/abs/2310.13011v1
-"Weakly-Supervised Semantic Segmentation with Image-Level Labels: from
-  Traditional Models to Foundation Models",http://arxiv.org/abs/2310.13026v1
-"CycleNet: Rethinking Cycle Consistency in Text-Guided Diffusion for
-  Image Manipulation",http://arxiv.org/abs/2310.13165v1
-CXR-CLIP: Toward Large Scale Chest X-ray Language-Image Pre-training,http://dx.doi.org/10.1007/978-3-031-43895-0_10
-Test-Time Self-Adaptive Small Language Models for Question Answering,http://arxiv.org/abs/2310.13307v1
-"Robust Training for Conversational Question Answering Models with
-  Reinforced Reformulation Generation",http://arxiv.org/abs/2310.13505v1
-"Teaching Language Models to Self-Improve through Interactive
-  Demonstrations",http://arxiv.org/abs/2310.13522v1
-"Retrieval-Augmented Neural Response Generation Using Logical Reasoning
-  and Relevance Scoring",http://arxiv.org/abs/2310.13566v1
-A Simple Baseline for Knowledge-Based Visual Question Answering,http://arxiv.org/abs/2310.13570v2
-"Long-Form Speech Translation through Segmentation with Finite-State
-  Decoding Constraints on Large Language Models",http://arxiv.org/abs/2310.13678v2
-Ecologically Valid Explanations for Label Variation in NLI,http://arxiv.org/abs/2310.13850v1
-Towards a General Framework for Continual Learning with Pre-training,http://arxiv.org/abs/2310.13888v1
-"Large Language Models and Multimodal Retrieval for Visual Word Sense
-  Disambiguation",http://arxiv.org/abs/2310.14025v1
-QA-NatVer: Question Answering for Natural Logic-based Fact Verification,http://arxiv.org/abs/2310.14198v1
-"Customising General Large Language Models for Specialised Emotion
-  Recognition Tasks",http://arxiv.org/abs/2310.14225v1
-Merging Generated and Retrieved Knowledge for Open-Domain QA,http://arxiv.org/abs/2310.14393v1
-PaRaDe: Passage Ranking using Demonstrations with Large Language Models,http://arxiv.org/abs/2310.14408v1
-"Monte Carlo Thought Search: Large Language Model Querying for Complex
-  Scientific Reasoning in Catalyst Design",http://arxiv.org/abs/2310.14420v1
-"Which Prompts Make The Difference? Data Prioritization For Efficient
-  Human LLM Evaluation",http://arxiv.org/abs/2310.14424v1
-InstructExcel: A Benchmark for Natural Language Instruction in Excel,http://arxiv.org/abs/2310.14495v1
-"NormDial: A Comparable Bilingual Synthetic Dialog Dataset for Modeling
-  Social Norm Adherence and Violation",http://arxiv.org/abs/2310.14563v1
-Reasoning about Ambiguous Definite Descriptions,http://arxiv.org/abs/2310.14657v1
-"API-Assisted Code Generation for Question Answering on Varied Table
-  Structures",http://arxiv.org/abs/2310.14687v1
-"Tree of Clarifications: Answering Ambiguous Questions with
-  Retrieval-Augmented Large Language Models",http://arxiv.org/abs/2310.14696v1
-MCC-KD: Multi-CoT Consistent Knowledge Distillation,http://arxiv.org/abs/2310.14747v2
-"Microscopic theory for nonequilibrium correlation functions in dense
-  active fluids",http://arxiv.org/abs/2310.14812v1
-"Assessing Step-by-Step Reasoning against Lexical Negation: A Case Study
-  on Syllogism",http://arxiv.org/abs/2310.14868v1
-"Intuitive Multilingual Audio-Visual Speech Recognition with a
-  Single-Trained Model",http://arxiv.org/abs/2310.14946v1
-"LLM-Based Agent Society Investigation: Collaboration and Confrontation
-  in Avalon Gameplay",http://arxiv.org/abs/2310.14985v1
-"Statistical Depth for Ranking and Characterizing Transformer-Based Text
-  Embeddings",http://arxiv.org/abs/2310.15010v1
-"The BLA Benchmark: Investigating Basic Language Abilities of Pre-Trained
-  Multimodal Models",http://arxiv.org/abs/2310.15061v1
-LLM-in-the-loop: Leveraging Large Language Model for Thematic Analysis,http://arxiv.org/abs/2310.15100v1
-"Exploring the Potential of Large Language Models in Generating
-  Code-Tracing Questions for Introductory Programming Courses",http://arxiv.org/abs/2310.15317v1
-GPT-4 as an Effective Zero-Shot Evaluator for Scientific Figure Captions,http://arxiv.org/abs/2310.15405v1
-"Retrieval-based Knowledge Transfer: An Effective Approach for Extreme
-  Large Language Model Compression",http://arxiv.org/abs/2310.15594v1
-Quiescent times in Gamma-Ray-Bursts: hints of a dormant inner engine,http://dx.doi.org/10.1086/519728
-"An Objectives-Driven Process for Selecting Methods to Support
-  Requirements Engineering Activities",http://dx.doi.org/10.1109/SEW.2005.18
-Electrostatic AB-Ramjet Space Propulsion,http://arxiv.org/abs/physics/0701072v1
-"Reverse engineering time discrete finite dynamical systems: A feasible
-  undertaking?",http://dx.doi.org/10.1371/journal.pone.0004939.
-Knowware: the third star after Hardware and Software,http://arxiv.org/abs/0711.4309v1
-"Adiabatic expansion, early x-ray data and the central engine in GRBs",http://dx.doi.org/10.1111/j.1365-2966.2009.14584.x
-Correlation of Expert and Search Engine Rankings,http://arxiv.org/abs/0809.2851v2
-"JDATATRANS for Array Obfuscation in Java Source Code to Defeat Reverse
-  Engineering from Decompiled Codes",http://arxiv.org/abs/0809.3503v1
-"A Novel Bid Optimizer for Sponsored Search Auctions based on Cooperative
-  Game Theory",http://arxiv.org/abs/0906.4764v2
-"Making the road by searching - A search engine based on Swarm
-  Information Foraging",http://arxiv.org/abs/0911.3979v1
-X-ray flares from propagation instabilities in long Gamma-Ray Burst jets,http://dx.doi.org/10.1111/j.1745-3933.2010.00984.x
-"Evolutionary Mechanics: new engineering principles for the emergence of
-  flexibility in a dynamic and uncertain world",http://arxiv.org/abs/1101.4103v1
-"Ranking of Wikipedia articles in search engines revisited: Fair ranking
-  for reasonable quality?",http://dx.doi.org/10.1002/asi.21423
-"The Pollution Effect: Optimizing Keyword Auctions by Favoring Relevant
-  Advertising",http://arxiv.org/abs/1109.6263v1
-Ordinary Search Engine Users Carrying Out Complex Search Tasks,http://arxiv.org/abs/1206.1492v3
-"Entanglement engineering and topological protection by discrete-time
-  quantum walks",http://dx.doi.org/10.1088/0953-4075/46/10/104005
-"Universality of energy conversion efficiency for optimal tight-coupling
-  heat engines and refrigerators",http://dx.doi.org/10.1088/1751-8113/46/40/402001
-Temporal Decomposition Studies of GRB Lightcurves,http://dx.doi.org/10.1051/eas/1361005
-IntelligentWeb Agent for Search Engines,http://arxiv.org/abs/1310.4774v1
-XQuery Streaming by Forest Transducers,http://dx.doi.org/10.1109/ICDE.2014.6816714
-"A test of the millisecond magnetar central engine model of GRBs with
-  Swift data",http://dx.doi.org/10.1088/0004-637X/785/1/74
-"Detecting Requirements Defects Utilizing A Mathematical Framework for
-  Behavior Engineering",http://dx.doi.org/10.7321/jscse.v3.n3.29
-"Relativistic supernovae have shorter-lived central engines or more
-  extended progenitors: the case of SN\,2012ap",http://dx.doi.org/10.1088/0004-637X/797/2/107
-An Analysis of Research in Software Engineering: Assessment and Trends,http://arxiv.org/abs/1407.4903v1
-The Handbook of Engineering Self-Aware and Self-Expressive Systems,http://arxiv.org/abs/1409.1793v3
-"Quantum Fuel with Multilevel Atomic Coherence for Ultrahigh Specific
-  Work in a Photonic Carnot Engine",http://dx.doi.org/10.1103/PhysRevE.93.012145
-Join Processing for Graph Patterns: An Old Dog with New Tricks,http://arxiv.org/abs/1503.04169v2
-"The Implementation of Hadoop-based Crawler System and Graphlite-based
-  PageRank-Calculation In Search Engine",http://arxiv.org/abs/1506.00130v1
-"Identifying Student Difficulties with Entropy, Heat Engines, and the
-  Carnot Cycle",http://dx.doi.org/10.1103/PhysRevSTPER.11.020116
-GIFT: A Real-time and Scalable 3D Shape Search Engine,http://arxiv.org/abs/1604.01879v2
-"Threshold-Dependent Camouflaged Cells to Secure Circuits Against Reverse
-  Engineering Attacks",http://arxiv.org/abs/1605.00684v1
-Heat engine in the three-dimensional spacetime,http://dx.doi.org/10.1007/JHEP03(2017)010
-"Where do we stand in requirements engineering improvement today? First
-  results from a mapping study",http://dx.doi.org/10.1145/2652524.2652555
-"Towards the Assessment of Stress and Emotional Responses of a
-  Salutogenesis-Enhanced Software Tool Using Psychophysiological Measurements",http://dx.doi.org/10.1109/SEmotion.2017.4
-"Evolution of statistical analysis in empirical software engineering
-  research: Current state and steps forward",http://arxiv.org/abs/1706.00933v7
-"Controlled Experiments with Student Participants in Software
-  Engineering: Preliminary Results from a Systematic Mapping Study",http://arxiv.org/abs/1708.04662v1
-Quantum statistics of a single-atom Scovil-Schulz-DuBois heat engine,http://dx.doi.org/10.1103/PhysRevA.96.063806
-"Band Structure Engineering of 2D Materials using Patterned Dielectric
-  Superlattices",http://dx.doi.org/10.1038/s41565-018-0138-7
-"Rapid Testing of IaaS Resource Management Algorithms via Cloud
-  Middleware Simulation",http://arxiv.org/abs/1801.09484v1
-"Digitalization of Swedish Government Agencies - A Perspective Through
-  the Lens of a Software Development Census",http://arxiv.org/abs/1802.00312v2
-"Quantum heat engine based on trapped Bose gases: Its maximum efficiency
-  can approach the Carnot value at finite power",http://arxiv.org/abs/1803.03734v2
-SEAT: A Taxonomy to Characterize Automation in Software Engineering,http://arxiv.org/abs/1803.09536v1
-"BPjs --- a framework for modeling reactive systems using a scripting
-  language and BP",http://arxiv.org/abs/1806.00842v1
-Behavior-based evaluation of session satisfaction,http://arxiv.org/abs/1806.08130v2
-"Bilgisayar Muhendisligi Egitiminde Teknoloji Egilimlerinin Takip
-  Edilmesi",http://arxiv.org/abs/1807.07571v2
-"Thermodynamics of charged accelerating AdS black holes and holographic
-  heat engines",http://dx.doi.org/10.1007/JHEP02(2019)144
-Instantly Deployable Expert Knowledge - Networks of Knowledge Engines,http://arxiv.org/abs/1811.02964v1
-"Digitally Capturing Physical Prototypes During Early-Stage Engineering
-  Design Projects for Initial Analysis of Project Output and Progression",http://arxiv.org/abs/1905.01950v2
-"On Applying Machine Learning/Object Detection Models for Analysing
-  Digitally Captured Physical Prototypes from Engineering Design Projects",http://arxiv.org/abs/1905.03697v1
-"A Deeper Investigation of the Importance of Wikipedia Links to the
-  Success of Search Engines",http://arxiv.org/abs/2004.10265v1
-"Conversations with Search Engines: SERP-based Conversational Response
-  Generation",http://arxiv.org/abs/2004.14162v2
-"An Exploratory Study of Hardware Reverse Engineering Technical and
-  Cognitive Processes",http://arxiv.org/abs/2105.14943v1
-Measurement-based Formulation of Quantum Heat Engine,http://dx.doi.org/10.1103/PhysRevA.95.032132
-Collaborative Real Time Coding or How to Avoid the Dreaded Merge,http://arxiv.org/abs/1504.06741v1
-"Creativity Training for Future Engineers: Preliminary Results from an
-  Educative Experience",http://arxiv.org/abs/1602.02643v1
-Designing Privacy-aware Internet of Things Applications,http://arxiv.org/abs/1703.03892v5
-"A Visual Narrative Path from Switching to Resuming a Requirements
-  Engineering Task",http://arxiv.org/abs/1707.01921v1
-Deep Character-Level Click-Through Rate Prediction for Sponsored Search,http://dx.doi.org/10.1145/3077136.3080811
-"Knock Intensity Distribution and a Stochastic Control Framework for
-  Knock Control",http://arxiv.org/abs/1906.05773v1
-Evaluating the Information Security Awareness of Smartphone Users,http://arxiv.org/abs/1906.10229v1
-Temporal Discounting in Software Engineering: A Replication Study,http://arxiv.org/abs/1906.11072v1
-"The Second Plateau in X-ray Afterglow Providing Additional Evidence for
-  Rapidly Spinning Magnetars as the GRB Central Engine",http://dx.doi.org/10.3847/1538-4357/ab8f91
-"Enhancing Software Development Process (ESDP) using Data Mining
-  Integrated Environment",http://arxiv.org/abs/2005.02652v1
-"Towards a Decentralized Digital Engineering Assets Marketplace:
-  Empowered by Model-based Systems Engineering and Distributed Ledger
-  Technology",http://arxiv.org/abs/2005.05415v1
-"An Empirical Study of Bots in Software Development -- Characteristics
-  and Challenges from a Practitioner's Perspective",http://dx.doi.org/10.1145/3368089.3409680
-An Empirical Study of Software Exceptions in the Field using Search Logs,http://arxiv.org/abs/2006.00385v1
-"Semi-Streaming Architecture: A New Design Paradigm for CNN
-  Implementation on FPGAs",http://arxiv.org/abs/2006.08759v1
-"Robotics Software Engineering: A Perspective from the Service Robotics
-  Domain",http://dx.doi.org/10.1145/3368089.3409743
-"The Marginalized Identities of Sense-makers: Reframing Engineering
-  Student Retention",http://dx.doi.org/10.1109/FIE.2010.5673158
-An Analysis of Chinese Search Engine Filtering,http://arxiv.org/abs/1107.3794v1
-"Energy Requirement of Control: Comments on Szilard's Engine and
-  Maxwell's Demon",http://dx.doi.org/10.1209/0295-5075/98/68001
-"Hyper-accreting black hole as GRB central engine. I: Baryon loading in
-  GRB jets",http://dx.doi.org/10.1088/0004-637X/765/2/125
-Variability Time Scales of Long and Short GRBs,http://arxiv.org/abs/1307.7618v1
-On the Reverse Engineering of the Citadel Botnet,http://dx.doi.org/10.1007/978-3-319-05302-8_25
-Information Spread in a Connected World,http://arxiv.org/abs/1406.7538v1
-"An exploratory study of the suitability of UML-based aspect modeling
-  techniques with respect to their integration into Model-Driven Engineering
-  context",http://arxiv.org/abs/1410.3582v1
-"Material degradation due to moisture and temperature. Part 1:
-  Mathematical model, analysis, and analytical solutions",http://dx.doi.org/10.1007/s00161-016-0511-4
-Debris Engine: A Potential Thruster for Space Debris Removal,http://arxiv.org/abs/1511.07246v2
-"On the Pragmatic Design of Literature Studies in Software Engineering:
-  An Experience-based Guideline",http://arxiv.org/abs/1612.03583v1
-Reverse-engineering biological networks from large data sets,http://arxiv.org/abs/1705.06370v2
-Testing Scientific Software: A Systematic Literature Review,http://arxiv.org/abs/1804.01954v1
-"Towards Reproducible Research: Automatic Classification of Empirical
-  Requirements Engineering Papers",http://arxiv.org/abs/1804.02956v1
-"Numerical simulations of the jet dynamics and synchrotron radiation of
-  binary neutron star merger event GW170817/GRB170817A",http://dx.doi.org/10.3847/1538-4357/aacf9c
-"#ILookLikeAnEngineer: Using Social Media Based Hashtag Activism
-  Campaigns as a Lens to Better Understand Engineering Diversity Issues",http://arxiv.org/abs/1805.01971v1
-"Engineering a Full Gamut of Structural Colors in All-Dielectric
-  Mesoporous Network Metamaterials",http://dx.doi.org/10.1021/acsphotonics.7b01569
-Decentralized Search on Decentralized Web,http://arxiv.org/abs/1809.00939v1
-An Agile Software Engineering Method to Design Blockchain Applications,http://arxiv.org/abs/1809.09596v1
-"Secure Deep Learning Engineering: A Software Quality Assurance
-  Perspective",http://arxiv.org/abs/1810.04538v1
-"Physics-Driven Regularization of Deep Neural Networks for Enhanced
-  Engineering Design and Analysis",http://dx.doi.org/10.1115/1.4044507
-"Inferencing into the void: problems with implicit populations Comments
-  on `Empirical software engineering experts on the use of students and
-  professionals in experiments'",http://dx.doi.org/10.1007/s10664-018-9655-0
-Software Engineering Challenges of Deep Learning,http://dx.doi.org/10.1109/SEAA.2018.00018
-STORE: Security Threat Oriented Requirements Engineering Methodology,http://arxiv.org/abs/1901.01500v1
-"An Experience Report On Applying Software Testing Academic Results In
-  Industry: We Need Usable Automated Test Generation",http://dx.doi.org/10.1007/s10664-017-9570-9
-An Empirically Evaluated Checklist for Surveys in Software Engineering,http://arxiv.org/abs/1901.09850v1
-"Do We Preach What We Practice? Investigating the Practical Relevance of
-  Requirements Engineering Syllabi - The IREB Case",http://arxiv.org/abs/1902.01822v3
-Quality quantification in Systems Engineering from the Qualimetry Eye,http://arxiv.org/abs/1902.02997v1
-"Quantum phase properties of photon added and subtracted displaced Fock
-  states",http://dx.doi.org/10.1002/andp.201900141
-Reverse engineering neural networks from many partial recordings,http://arxiv.org/abs/1907.01588v1
-Robust Dynamic Hamiltonian Engineering of Many-Body Spin Systems,http://dx.doi.org/10.1103/PhysRevX.10.031002
-"Automatic Generation of Atomic Consistency Preserving Search Operators
-  for Search-Based Model Engineering",http://arxiv.org/abs/1907.05647v1
-"Spatial Flow-Field Approximation Using Few Thermodynamic Measurements
-  Part I: Formulation and Area Averaging",http://arxiv.org/abs/1908.03431v1
-"Learning the best thermoelectric nanoscale heat engines through evolving
-  network topology",http://dx.doi.org/10.1038/s42005-021-00553-z
-FAIR and Open Computer Science Research Software,http://arxiv.org/abs/1908.05986v1
-Architecture Definition in Complex System Design Using Model Theory,http://arxiv.org/abs/1909.06809v5
-Engineering for a Science-Centric Experimentation Platform,http://arxiv.org/abs/1910.03878v1
-"Software Engineering Education Beyond the Technical: A Systematic
-  Literature Review",http://arxiv.org/abs/1910.09865v1
-Deep Learning of Subsurface Flow via Theory-guided Neural Network,http://dx.doi.org/10.1016/j.jhydrol.2020.124700
-"Feature engineering workflow for activity recognition from synchronized
-  inertial measurement units",http://dx.doi.org/10.1007/978-981-15-3651-9_20
-"Engineered Defects to Modulate Fracture Strength of Single Layer MoS2:
-  An Atomistic Study",http://dx.doi.org/10.1016/j.physb.2020.412219
-"Reservoir engineering with arbitrary temperatures for spin systems and
-  quantum thermal machine with maximum efficiency",http://dx.doi.org/10.1103/PhysRevResearch.2.043419
-A realistic non-local heat engine based on Coulomb coupled systems,http://dx.doi.org/10.1063/5.0007347
-Evaluating the Apperception Engine,http://arxiv.org/abs/2007.05367v1
-Modeling Thermodynamic Trends of Rotating Detonation Engines,http://dx.doi.org/10.1063/5.0023972
-"Flame/flow dynamics at the piston surface of an IC engine measured by
-  high-speed PLIF and PTV",http://dx.doi.org/10.1016/j.proci.2018.06.215
-"Reversible engineering the spin-orbit coupling of monolayer MoS2 via
-  laser irradiation under controlled gas atmospheres",http://arxiv.org/abs/2009.09565v2
-What Makes a Popular Academic AI Repository?,http://dx.doi.org/10.1007/s10664-020-09916-6
-"It's the Journey Not the Destination: Building Genetic Algorithms
-  Practitioners Can Trust",http://arxiv.org/abs/2010.06406v1
-Comparing the Results of Replications in Software Engineering,http://arxiv.org/abs/2011.02861v1
-"Ethics in the Software Development Process: From Codes of Conduct to
-  Ethical Deliberation",http://dx.doi.org/10.1007/s13347-021-00451-w
-"How do Practitioners Perceive the Relevance of Requirements Engineering
-  Research?",http://dx.doi.org/10.1109/TSE.2020.3042747
-"Data-Efficient Learning for Complex and Real-Time Physical Problem
-  Solving using Augmented Simulation",http://arxiv.org/abs/2011.07193v2
-Exploration in Algorithm Engineering: Modeling Algorithms,http://arxiv.org/abs/2012.01908v1
-"Tip-induced nano-engineering of strain, bandgap, and exciton dynamics in
-  2D semiconductors",http://arxiv.org/abs/2012.03493v1
-"A novel machine learning-based optimization algorithm (ActivO) for
-  accelerating simulation-driven engine design",http://arxiv.org/abs/2012.04649v2
-Technology Readiness Levels for Machine Learning Systems,http://dx.doi.org/10.1038/s41467-022-33128-9
-The Daily Life of Software Engineers during the COVID-19 Pandemic,http://arxiv.org/abs/2101.04363v1
-"Robustness of on-device Models: Adversarial Attack to Deep Learning
-  Models on Android Apps",http://arxiv.org/abs/2101.04401v2
-Assets in Software Engineering: What are they after all?,http://dx.doi.org/10.1016/j.jss.2022.111485
-Deep reinforcement learning for quantum Hamiltonian engineering,http://dx.doi.org/10.1103/PhysRevApplied.18.024033
-"Rethinking complexity for software code structures: A pioneering study
-  on Linux kernel code repository",http://arxiv.org/abs/2103.00821v1
-Practices for Engineering Trustworthy Machine Learning Applications,http://arxiv.org/abs/2103.00964v1
-Wireframe-Based UI Design Search Through Image Autoencoder,http://dx.doi.org/10.1145/3391613
-"Characterizing and Detecting Mismatch in Machine-Learning-Enabled
-  Systems",http://arxiv.org/abs/2103.14101v1
-Socio-Technical Grounded Theory for Software Engineering,http://dx.doi.org/10.1109/TSE.2021.3106280
-A Requirements Engineering Technology for the IoT Software Systems,http://arxiv.org/abs/2103.14348v1
-"A Robust CNN Framework with Dual Feedback Feature Accumulation for
-  Detecting Pneumonia Opacity from Chest X-ray Images",http://arxiv.org/abs/2103.14461v1
-FONTNET: On-Device Font Understanding and Prediction Pipeline,http://arxiv.org/abs/2103.16150v1
-RASAECO: Requirements Analysis of Software for the AECO Industry,http://arxiv.org/abs/2106.08644v1
-Supporting AI Engineering on the IoT Edge through Model-Driven TinyML,http://dx.doi.org/10.1109/COMPSAC54236.2022.00140
-"Eighty Years of the Finite Element Method: Birth, Evolution, and Future",http://arxiv.org/abs/2107.04960v1
-Optimal finite-time Brownian Carnot engine,http://dx.doi.org/10.1103/PhysRevE.105.L052103
-"Structural Design Recommendations in the Early Design Phase using
-  Machine Learning",http://dx.doi.org/10.1007/978-981-19-1280-1_12
-"Keep Your Stakeholders Engaged: Interactive Vision Videos in
-  Requirements Engineering",http://arxiv.org/abs/2108.04576v1
-"Requirements-Aided Automatic Test Case Generation for Industrial
-  Cyber-physical Systems",http://dx.doi.org/10.1109/ICECCS.2015.32.
-"Towards Mapping Control Theory and Software Engineering Properties using - Specification Patterns",http://dx.doi.org/10.1109/ACSOS-C52956.2021.00067 -"A Parallel Tempering Approach for Efficient Exploration of the - Verification Tradespace in Engineered Systems",http://arxiv.org/abs/2109.11704v1 -"Characterizing the Experience of Subjects in Software Engineering - Studies",http://arxiv.org/abs/2110.02835v1 -"Using Personality Detection Tools for Software Engineering Research: How - Far Can We Go?",http://dx.doi.org/10.1145/3491039 -"A Methodology for Developing a Verifiable Aircraft Engine Controller - from Formal Requirements",http://arxiv.org/abs/2110.09277v1 -Deep Generative Models in Engineering Design: A Review,http://arxiv.org/abs/2110.10863v4 -Analysis of the first Genetic Engineering Attribution Challenge,http://dx.doi.org/10.1038/s41467-022-35032-8 -"Deep metric learning improves lab of origin prediction of genetically - engineered plasmids",http://arxiv.org/abs/2111.12606v1 -"Unconventional Floquet topological phases from quantum engineering of - band inversion surfaces",http://dx.doi.org/10.1103/PRXQuantum.3.040312 -Multilingual training for Software Engineering,http://dx.doi.org/10.1145/3510003.3510049 -"Efficient FPGA-based ECDSA Verification Engine for Permissioned - Blockchains",http://arxiv.org/abs/2112.02229v1 -"Security Orchestration, Automation, and Response Engine for Deployment - of Behavioural Honeypots",http://dx.doi.org/10.1109/DSC54232.2022.9888808 -Dynamical learning of a photonics quantum-state engineering process,http://dx.doi.org/10.1117/1.AP.3.6.066002 -"Focus Areas, Themes, and Objectives of Non-Functional Requirements in - DevOps: A Systematic Mapping Study",http://arxiv.org/abs/2201.06524v1 -Bus Factor In Practice,http://arxiv.org/abs/2202.01523v1 -"A longitudinal case study on the effects of an evidence-based software - engineering training",http://dx.doi.org/10.1145/3510456.3514150 -"Engine-fed Kilonovae (Mergernovae) -- I. 
-Forecasting SQL Query Cost at Twitter,http://dx.doi.org/10.1109/IC2E52221.2021.00030
-The Vision of Self-Evolving Computing Systems,http://arxiv.org/abs/2204.06825v1
-"Software Engineering Approaches for TinyML based IoT Embedded Vision: A
-  Systematic Literature Review",http://dx.doi.org/10.1145/3528227.3528569
-"Virtual Reality Applications in Software Engineering Education: A
-  Systematic Review",http://arxiv.org/abs/2204.12008v1
-"An Empirical Evaluation of Flow Based Programming in the Machine
-  Learning Deployment Context",http://arxiv.org/abs/2204.12781v1
-"Benefits of Feedforward for Model Predictive Airpath Control of Diesel
-  Engines",http://arxiv.org/abs/2205.05630v1
-"Literature Review to Collect Conceptual Variables of Scenario Methods
-  for Establishing a Conceptual Scenario Framework",http://arxiv.org/abs/2205.08290v1
-"The Role of Emotional Intelligence in Handling Requirements Changes in
-  Software Engineering",http://arxiv.org/abs/2206.11603v1
-"Performance analysis of quantum harmonic Otto engine and refrigerator
-  under a trade-off figure of merit",http://arxiv.org/abs/2207.03374v2
-"A Strongly Correlated Quantum-Dot Heat Engine with Optimal Performance:
-  An Non-equilibrium Green's function Approach",http://dx.doi.org/10.1002/pssb.202200608
-Energy-Exergy Analysis and Optimal Design of a Hydrogen Turbofan Engine,http://dx.doi.org/10.31219/osf.io/su9pb
-"On Evaluating Self-Adaptive and Self-Healing Systems using Chaos
-  Engineering",http://dx.doi.org/10.1109/ACSOS55765.2022.00018
-Web3 Challenges and Opportunities for the Market,http://arxiv.org/abs/2209.02446v1
-"The Science Gateway Community Institute's Consulting Services Program:
-  Lessons for Research Software Engineering Organizations",http://arxiv.org/abs/2209.03958v1
-"Are Machine Programming Systems using Right Source-Code Measures to
-  Select Code Repositories?",http://arxiv.org/abs/2209.11946v1
-Dynamical Control of Quantum Heat Engines Using Exceptional Points,http://dx.doi.org/10.1038/s41467-022-33667-1
-The BlackParrot BedRock Cache Coherence System,http://arxiv.org/abs/2211.06390v1
-Quality Assurance in MLOps Setting: An Industrial Perspective,http://arxiv.org/abs/2211.12706v2
-"Spatially-resolved Thermometry from Line-of-Sight Emission Spectroscopy
-  via Machine Learning",http://arxiv.org/abs/2212.07836v1
-"A Data Source Dependency Analysis Framework for Large Scale Data Science
-  Projects",http://arxiv.org/abs/2212.07951v1
-"Thermodynamic geometry of ideal quantum gases: a general framework and a
-  geometric picture of BEC-enhanced heat engines",http://dx.doi.org/10.1088/1367-2630/acc966
-"Beyond Statistical Similarity: Rethinking Metrics for Deep Generative
-  Models in Engineering Design",http://arxiv.org/abs/2302.02913v4
-"""Software is the easy part of Software Engineering"" -- Lessons and
-  Experiences from A Large-Scale, Multi-Team Capstone Course",http://arxiv.org/abs/2302.05536v1
-"Persona-based Assessment of Software Engineering Student Research
-  Projects: An Experience Report",http://arxiv.org/abs/2302.05618v1
-"Transforming First-Year Calculus Teaching for Engineering Students --
-  Blocks with Field Specific Examples, Problems, and Exams",http://arxiv.org/abs/2302.05904v1
-"An Empirical Study of Pre-Trained Model Reuse in the Hugging Face Deep
-  Learning Model Registry",http://arxiv.org/abs/2303.02552v1
-"PTMTorrent: A Dataset for Mining Open-source Pre-trained Model
-  Packages",http://arxiv.org/abs/2303.08934v1
-"Engineering Software Systems for Quantum Computing as a Service: A
-  Mapping Study",http://arxiv.org/abs/2303.14713v1
-Cyclic quantum engines enhanced by strong bath coupling,http://dx.doi.org/10.1103/PhysRevApplied.20.024038
-"Understanding the Influence of Motivation on Requirements
-  Engineering-related Activities",http://arxiv.org/abs/2304.08074v1
-Immunohistochemistry Biomarkers-Guided Image Search for Histopathology,http://arxiv.org/abs/2304.12424v1
-Comparing Software Developers with ChatGPT: An Empirical Investigation,http://arxiv.org/abs/2305.11837v2
-"Term-Sets Can Be Strong Document Identifiers For Auto-Regressive Search
-  Engines",http://arxiv.org/abs/2305.13859v2
-Superhydrophobicity of Auxetic Metamaterials,http://arxiv.org/abs/2306.02916v1
-Psychological Aspects of Pair Programming,http://dx.doi.org/10.1145/3593434.3593458
-"Injection rate of cylinder lubrication oil in large two-stroke marine
-  diesel engines using a common rail lubrication system",http://arxiv.org/abs/2307.03408v1
-"Requirements Traceability: Recovering and Visualizing Traceability Links
-  Between Requirements and Source Code of Object-oriented Software Systems",http://arxiv.org/abs/2307.05188v1
-"Mechanical modeling of the maturation process for tissue-engineered
-  implants: application to biohybrid heart valves",http://arxiv.org/abs/2307.12439v1
-"Automatic Feature Engineering for Time Series Classification: Evaluation
-  and Discussion",http://dx.doi.org/10.1109/IJCNN54540.2023.10191074
-"Finding the Optimum Design of Large Gas Engines Prechambers Using CFD
-  and Bayesian Optimization",http://arxiv.org/abs/2308.01743v1
-"Summary of 2nd International Workshop on Requirements Engineering and
-  Testing (RET)",http://dx.doi.org/10.1109/ICSE.2015.351
-"Delphic Costs and Benefits in Web Search: A utilitarian and historical
-  analysis",http://arxiv.org/abs/2308.07525v1
-Software Development in Startup Companies: The Greenfield Startup Model,http://dx.doi.org/10.1109/TSE.2015.2509970
-"On Using Information Retrieval to Recommend Machine Learning Good
-  Practices for Software Engineers",http://arxiv.org/abs/2308.12095v2
-Privacy engineering through obfuscation,http://arxiv.org/abs/2308.12514v1
-"Improving homology-directed repair by small molecule agents for genetic
-  engineering in unconventional yeast? -- Learning from the engineering of
-  mammalian systems",http://arxiv.org/abs/2308.15510v1
-"Copiloting the Copilots: Fusing Large Language Models with Completion
-  Engines for Automated Program Repair",http://arxiv.org/abs/2309.00608v2
-"Kelvin Waves, Klein-Kramers and Kolmogorov Equations, Path-Dependent
-  Financial Instruments: Survey and New Results",http://arxiv.org/abs/2309.04547v1
-"Adaptive Model Predictive Control for Engine-Driven Ducted Fan Lift
-  Systems using an Associated Linear Parameter Varying Model",http://arxiv.org/abs/2309.12552v1
-Guess & Sketch: Language Model Guided Transpilation,http://arxiv.org/abs/2309.14396v1
-"Revisiting Sentiment Analysis for Software Engineering in the Era of
-  Large Language Models",http://arxiv.org/abs/2310.11113v2
-Fixing detailed balance in ancilla-based dissipative state engineering,http://arxiv.org/abs/2310.12539v1
-A Relativistic Type Ibc Supernova Without a Detected Gamma-ray Burst,http://dx.doi.org/10.1038/nature08714
-"Technology-Enabled Nurturing of Creativity and Innovation: A Specific
-  Illustration from an Undergraduate Engineering Physics Course",http://arxiv.org/abs/1308.2434v1
-"An OpenCL-based Monte Carlo dose calculation engine (oclMC) for coupled
-  photon-electron transport",http://dx.doi.org/10.1118/1.4924473
-Evaluating Hive and Spark SQL with BigBench,http://arxiv.org/abs/1512.08417v2
-"We Don't Need Another Hero? The Impact of ""Heroes"" on Software
-  Development",http://dx.doi.org/10.1145/3183519.3183549
-"Blindsight: Blinding EM Side-Channel Leakage using Built-In Fully
-  Integrated Inductive Voltage Regulator",http://arxiv.org/abs/1802.09096v1
-Thermodynamic principles and implementations of quantum machines,http://dx.doi.org/10.1007/978-3-319-99046-0_2
-"The Who, What, How of Software Engineering Research: A Socio-Technical
-  Framework",http://dx.doi.org/10.1007/s10664-020-09858-z
-A survey on dragonfly algorithm and its applications in engineering,http://dx.doi.org/10.1007/s12065-021-00659-x
-"Multi-keyword multi-click advertisement option contracts for sponsored
-  search",http://arxiv.org/abs/1307.4980v7
-"Fluid-structure interaction modelling and stabilisation of a
-  patient-specific arteriovenous access fistula",http://dx.doi.org/10.1007/s10237-017-0973-8
-"A Method to Assess and Argue for Practical Significance in Software
-  Engineering",http://arxiv.org/abs/1809.09849v7
-"Optimal power and efficiency of single quantum dot heat engines: theory
-  and experiment",http://dx.doi.org/10.1103/PhysRevB.99.235432
-"PHANTOM: Curating GitHub for engineered software projects using
-  time-series clustering",http://dx.doi.org/10.1007/s10664-020-09825-8
-"Can Machine Learning Identify Governing Laws For Dynamics in Complex
-  Engineered Systems ? : A Study in Chemical Engineering",http://arxiv.org/abs/1907.07755v1
-"Hardening of Artificial Neural Networks for Use in Safety-Critical
-  Applications -- A Mapping Study",http://arxiv.org/abs/1909.03036v1
-Fairway: A Way to Build Fair ML Software,http://dx.doi.org/10.1145/3368089.3409697
-"Predictors of Well-being and Productivity among Software Professionals
-  during the COVID-19 Pandemic -- A Longitudinal Study",http://dx.doi.org/10.1007/s10664-021-09945-9
-Understanding Peer Review of Software Engineering Papers,http://arxiv.org/abs/2009.01209v2
-"Quantum advantage in a molecular spintronic engine that harvests thermal
-  fluctuation energy",http://dx.doi.org/10.1002/adma.202206688
-Customer Support Ticket Escalation Prediction using Feature Engineering,http://dx.doi.org/10.1007/s00766-018-0297-y
-Machine Learning assisted Chimera and Solitary states in Networks,http://dx.doi.org/10.3389/fphy.2021.513969
-Quantum engineering with hybrid magnonics systems and materials,http://arxiv.org/abs/2102.03222v1
-"On Understanding the Relation of Knowledge and Confidence to
-  Requirements Quality",http://arxiv.org/abs/2103.02187v1
-"FeatureEnVi: Visual Analytics for Feature Engineering Using Stepwise
-  Selection and Semi-Automatic Extraction Approaches",http://dx.doi.org/10.1109/TVCG.2022.3141040
-"Experimental investigation of thermal boundary layers and associated
-  heat loss for transient engine-relevant processes using HRCARS and phosphor
-  thermometry",http://dx.doi.org/10.1016/j.combustflame.2021.111567
-"The Effects of Human Aspects on the Requirements Engineering Process: A
-  Systematic Literature Review",http://dx.doi.org/10.1109/TSE.2021.3051898
-Enhancing business process execution with a context engine,http://dx.doi.org/10.1108/BPMJ-06-2017-0160
-Senatus -- A Fast and Accurate Code-to-Code Recommendation Engine,http://dx.doi.org/10.1145/3524842.3527947
-"Full-privacy secured search engine empowered by efficient genome-mapping
-  algorithms",http://arxiv.org/abs/2201.00696v2
-"Aggregate effects of advertising decisions: a complex systems look at
-  search engine advertising via an experimental study",http://dx.doi.org/10.1108/IntR-10-2017-0377
-"The Best of Both Worlds: Combining Learned Embeddings with Engineered
-  Features for Accurate Prediction of Correct Patches",http://arxiv.org/abs/2203.08912v2
-"Can in-home laboratories foster learning, self-efficacy, and motivation
-  during the COVID-19 pandemic? -- A case study in two engineering programs",http://arxiv.org/abs/2203.16465v1
-"Deep Learning based Model Predictive Control for Compression Ignition
-  Engines",http://arxiv.org/abs/2204.00139v2
-"Transfer learning based physics-informed neural networks for solving
-  inverse problems in engineering structures under different loading scenarios",http://dx.doi.org/10.1016/j.cma.2022.115852
-"Towards Runtime Monitoring of Complex System Requirements for Autonomous
-  Driving Functions",http://dx.doi.org/10.4204/EPTCS.371.4
-"Low-cost Efficient Wireless Intelligent Sensor (LEWIS) for Engineering,
-  Research, and Education",http://arxiv.org/abs/2303.13688v1
-"Understanding Self-Efficacy in the Context of Software Engineering: A
-  Qualitative Study in the Industry",http://arxiv.org/abs/2305.17106v2
-"Divide and Conquer the EmpiRE: A Community-Maintainable Knowledge Graph
-  of Empirical Research in Requirements Engineering",http://arxiv.org/abs/2306.16791v1
-Software Startups -- A Research Agenda,http://dx.doi.org/10.5277/e-Inf160105
-The emission spectra of radioweak quasars. I. The farinfrared emission,http://arxiv.org/abs/astro-ph/9306022v1
-Implication of Temporal Structure in GRB,http://arxiv.org/abs/astro-ph/9702093v1
-Probing the Central Engine of the Narrow-Line Seyfert 1 Galaxies,http://arxiv.org/abs/astro-ph/0112388v1
-The AGN Paradigm: Radio-Quiet Objects,http://arxiv.org/abs/astro-ph/0211228v1
-Gamma-ray bursts: Restarting the Engine,http://dx.doi.org/10.1086/496881
-Quantum Reservoir Engineering,http://arxiv.org/abs/atom-ph/9603002v1
-A simple denoising algorithm using wavelet transform,http://arxiv.org/abs/chao-dyn/9912038v2
-Adapting the Core Language Engine to French and Spanish,http://arxiv.org/abs/cmp-lg/9605015v1
-Heat Transfer in Lattice BGK Modeled Fluid,http://dx.doi.org/10.1007/BF02179969
-Self-Consistent Microscopic Model of Fluctuation-Induced Transport,http://dx.doi.org/10.1103/PhysRevLett.74.10
-A simple model for Maxwell's demon type information engine,http://dx.doi.org/10.1103/PhysRevE.53.2957
-Magnetic Particles,http://arxiv.org/abs/cond-mat/9806082v1
-Chiral Mesophases of DNA,http://arxiv.org/abs/cond-mat/9812128v1
-"Annular Long Josephson Junctions in a Magnetic Field: Engineering and
-  Probing the Fluxon Interaction Potential",http://arxiv.org/abs/cond-mat/9911437v1
-Optimizing the classical heat engine,http://dx.doi.org/10.1103/PhysRevLett.85.232
-"Density-Matrix Algorithm for Phonon Hilbert Space Reduction in the
-  Numerical Diagonalization of Quantum Many-Body Systems",http://dx.doi.org/10.1007/978-3-642-56034-7_12
-Single-electron control of Wigner crystallization,http://arxiv.org/abs/cond-mat/0112221v1
-"Boltzmann theory of engineered anisotropic magnetoresistance in
-  (Ga,Mn)As",http://dx.doi.org/10.1063/1.1523160
-"Quantum phase transition and engineering in two-component BEC in optical
-  lattices",http://dx.doi.org/10.1142/S0217984903005846
-"Quasi-one-dimensional ballistic ring in crossed high-frequency electric
-  fields",http://arxiv.org/abs/cond-mat/0404611v1
-"Entanglement and quantum state engineering in the optically driven
-  two-electron double-dot structure",http://dx.doi.org/10.1103/PhysRevA.72.022344
-"Ab-initio simulations on growth and interface properties of epitaxial
-  oxides on silicon",http://dx.doi.org/10.1016/j.mee.2005.04.100
-Engineering Fano resonances in discrete networks,http://dx.doi.org/10.1103/PhysRevE.72.056611
-General Relativity in Electrical Engineering,http://dx.doi.org/10.1088/1367-2630/8/10/247
-"Transition Metal-Ethylene Complexes as High-Capacity Hydrogen Storage
-  Media",http://dx.doi.org/10.1103/PhysRevLett.97.226102
-"Using Query Mediators for Distributed Searching in Federated Digital
-  Libraries",http://arxiv.org/abs/cs/9902020v1
-"The Structure of Weighting Coefficient Matrices of Harmonic Differential
-  Quadrature and Its Applications",http://arxiv.org/abs/cs/9904003v1
-The Use of Instrumentation in Grammar Engineering,http://arxiv.org/abs/cs/0011020v2
-Intelligent Anticipated Exploration of Web Sites,http://arxiv.org/abs/cs/0111012v1
-Software Validation using Power Profiles,http://arxiv.org/abs/cs/0201028v1
-The efficient generation of unstructured control volumes in 2D and 3D,http://arxiv.org/abs/cs/0202038v1
-"Experimental Software Schedulability Estimation For Varied Processor
-  Frequencies",http://arxiv.org/abs/cs/0302033v1
-Grid-Enabling Natural Language Engineering By Stealth,http://arxiv.org/abs/cs/0304028v1
-BOA: Framework for Automated Builds,http://arxiv.org/abs/cs/0306080v1
-"Contributions to the Development and Improvement of a Regulatory and
-  Pre-Regulatory Digitally System for the Tools within Flexible Fabrication
-  Systems",http://arxiv.org/abs/cs/0307054v1
-"WebTeach in practice: the entrance test to the Engineering faculty in
-  Florence",http://arxiv.org/abs/cs/0310013v1
-Robust Dialogue Understanding in HERALD,http://arxiv.org/abs/cs/0410058v1
-"Where the Rubber Meets the Sky: Bridging the Gap between Databases and
-  Science",http://arxiv.org/abs/cs/0502011v1
-Fast Recompilation of Object Oriented Modules,http://arxiv.org/abs/cs/0506035v1
-The Workshop - Implementing Well Structured Enterprise Applications,http://arxiv.org/abs/cs/0506050v1
-Software Architecture Overview,http://arxiv.org/abs/cs/0507061v1
-COMODI: Architecture for a Component-Based Scientific Computing System,http://arxiv.org/abs/cs/0509003v1
-Building a logical model in the machining domain for CAPP expert systems,http://arxiv.org/abs/cs/0606027v1
-"Enterprise Content Management: Theory and Engineering for Entire
-  Lifecycle Support",http://arxiv.org/abs/cs/0607122v1
-Enterprise Portal Development Tools: Problem-Oriented Approach,http://arxiv.org/abs/cs/0607123v1
-"The evolution of the parametric models of drawings (modules) in the
-  enterprises reconstruction CAD system",http://arxiv.org/abs/cs/0611045v1
-"Environment of development of the programs of parametric creating of the
-  drawings in CAD-system of renovation of the enterprises",http://arxiv.org/abs/cs/0611083v1
-Coupling Methodology within the Software Platform Alliances,http://arxiv.org/abs/cs/0611127v1
-Parallel Programming with Matrix Distributed Processing,http://arxiv.org/abs/hep-lat/0505005v1
-"Equivalence of Geometric Engineering and Hanany-Witten via Fractional
-  Branes",http://dx.doi.org/10.1016/S0550-3213(98)00509-4
-On N=1 gauge models from geometric engineering in M-theory,http://dx.doi.org/10.1088/0264-9381/20/23/002
-On ADE Quiver Models and F-Theory Compactification,http://dx.doi.org/10.1088/0305-4470/39/29/024
-"Metastable Vacua, Geometrical Engineering and MQCD Transitions",http://dx.doi.org/10.1088/1126-6708/2007/02/020
-"Output Feedback Control of Jet Engine Stall and Surge Using Pressure
-  Measurements",http://arxiv.org/abs/math/0007101v1
-"Equations, inequations and inequalities characterizing the
-  configurations of two real projective conics",http://dx.doi.org/10.1007/s00200-006-0023-8
-Reverse engineering small 4-manifolds,http://dx.doi.org/10.2140/agt.2007.7.2103
-Nonperiodic Oscillations of Pressure in a Spark Ignition Engine,http://dx.doi.org/10.1142/S0218127404010084
-"Cycle-to-Cycle Fluctuations of Burned Fuel Mass in Spark Ignition
-  Combustion Engines",http://arxiv.org/abs/nlin/0312068v1
-"Estimation of a Noise Level Using Coarse-Grained Entropy of Experimental
-  Time Series of Internal Pressure in a Combustion Engine",http://dx.doi.org/10.1016/j.chaos.2004.06.057
-"A Numerical Study of a Simple Stochastic/Deterministic Model of
-  Cycle-to-Cycle Combustion Fluctuations in Spark Ignition Engines",http://arxiv.org/abs/nlin/0405054v1
-"Wavelet Analysis of Cycle-to-Cycle Pressure Variations in an Internal
-  Combustion Engine",http://arxiv.org/abs/nlin/0607041v1
-Reference Design Project Book: NUSEL-Homestake,http://arxiv.org/abs/nucl-ex/0308015v1
-"Improving the effectiveness of introductory physics service courses:
-  Bridging to engineering courses",http://arxiv.org/abs/physics/0107001v1
-Power Systems of the Future,http://arxiv.org/abs/physics/0304116v1
-Engineering of directional emission from photonic crystal waveguides,http://dx.doi.org/10.1063/1.1870133
-Traffic Flow Theory,http://arxiv.org/abs/physics/0507126v1
-Geophysical tomography in engineering geology: an overview,http://dx.doi.org/10.2174/1874262900903010030
-"Industrial Strength Software in Computer Based Engineering Education
-  (CBEE): a Case Study",http://arxiv.org/abs/physics/0612184v1
-Antarctica: A Southern Hemisphere Windpower Station?,http://arxiv.org/abs/physics/0701055v1
-Ocean Terracing,http://arxiv.org/abs/physics/0701100v1
-Lake Titicaca - Physics of an Inherited Hydropower Macroproject Proposal,http://arxiv.org/abs/physics/0703182v1
-de-Broglie Wave-Front Engineering,http://dx.doi.org/10.1103/PhysRevA.62.033612
-Quantum mechanical spectral engineering by scaling intertwining,http://dx.doi.org/10.1238/Physica.Regular.064a00177
-"Electron g-factor Engineering in III-V Semiconductors for Quantum
-  Communications",http://arxiv.org/abs/quant-ph/0102056v2
-Entanglement and entropy engineering of atomic two-qubit states,http://dx.doi.org/10.1103/PhysRevLett.90.047905
-"Efficient engineering of multi-atom entanglement through single-photon
-  detections",http://dx.doi.org/10.1103/PhysRevLett.90.253601
diff --git a/data/prompt_engineering_reviewed.csv b/data/prompt_engineering_reviewed.csv
deleted file mode 100644
index a365a81..0000000
--- a/data/prompt_engineering_reviewed.csv
+++ /dev/null
@@ -1,21 +0,0 @@
-Title, Link, Labels
-"The Creativity of Text-to-Image Generation", http://dx.doi.org/10.1145/3569219.3569352, Applying prompts/prompt engineering
-"Beyond Traditional Teaching: The Potential of Large Language Models and Chatbots in Graduate Engineering Education", http://dx.doi.org/10.32388/MD04B0, Analysis of model;Applying prompts/prompt engineering;Agents;Ethics/Consequences
-"Collective Creativity in Crowd Ideation", http://dx.doi.org/10.1145/3411764.3445782, Survey;Analysis of technique;Applying prompts/prompt engineering;Agents
-"Massive prompt cusps: A new signature of warm dark matter", http://dx.doi.org/10.1093/mnrasl/slad043, Analysis of technique;Analysis of model
-"Scientific Literature Text Mining and the Case for Open Access", http://dx.doi.org/10.21428/14888, New model
- "Don't Complete It! Preventing Unhelpful Code Completion for Productive and Sustainable Neural Code Completion Systems", https://arxiv.org/pdf/2209.05948v2.pdf, Analysis of Technique;New Model;Applying Prompts/prompt Engineering
- "Unleashing the potential of prompt engineering in Large Language Models: a comprehensive review", https://arxiv.org/abs/2310.08129v1, Survey;Analysis of Technique;Applying Prompts/prompt Engineering;Agents
- "LoGoPrompt: Synthetic Text Images Can Be Good Visual Prompts for Vision-Language Models", https://arxiv.org/pdf/2309.01155v2.pdf, Survey;Analysis of Technique;Applying Promtps/Prompt Engineering
- "Prompt Engineering For Students of Medicine and Their Teachers", https://arxiv.org/pdf/2308.11628v1.pdf, Survey;Analysis of Technique;Applying prompts/prompt engineering
- "A Systematic Survey of Prompt Engineering on Vision-Language Foundation Models", https://arxiv.org/pdf/2307.12980v1.pdf, Survey;Other modality than text;Applying Prompts/prompt Engineering
-"Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models", https://arxiv.org/abs/2106.13353v2, New Technique;Analysis of Technique;Applying Prompts/Prompt Engineering
-"Tailored Visions: Enhancing Text-to-Image Generation with Personalized Prompt Rewriting", https://arxiv.org/abs/2310.08129v1, New Technique;Analysis of Technique;Other Modality than Text;Applying Prompts/Prompt Engineering
-"Large Language Models in the Workplace: A Case Study on Prompt Engineering for Job Type Classification", https://arxiv.org/abs/2303.07142v3, Applying Promtps/Prompt Engineering;Analysis of Model
-"Open-Ended Instructable Embodied Agents with Memory-Augmented Large Language Models", https://arxiv.org/abs/2310.15127v1, New model;New Technique;Agents
-"Addressing Compiler Errors: Stack Overflow or Large Language Models?", https://arxiv.org/abs/2307.10793v1, Analysis of model;Applying Prompts/Prompt Engineering
-"Prompt Learning for Action Recognition", http://arxiv.org/abs/2305.12437v1, New Technique; Agents; Analysis of technique; Applying prompts/prompt engineering; Other modality than text
-"Effective Structured Prompting by Meta-Learning and Representative Verbalizer",http://arxiv.org/abs/2306.00618v1, New Technique; Analysis of technique
-"Invalid Logic, Equivalent Gains: The Bizarreness of Reasoning in Language Model Prompting",http://arxiv.org/abs/2307.10573v2, Analysis of technique
-"High-Fidelity Lake Extraction via Two-Stage Prompt Enhancement: Establishing a Novel Baseline and Benchmark",http://arxiv.org/abs/2308.08443v1, Analysis of technique; Other modality than text; Applying prompts/prompt engineering
-Structured Prompt Tuning,http://arxiv.org/abs/2205.12309v1, New Technique
\ No newline at end of file
diff --git a/data/prompts/few_shot/vanilla_five-shot_STEM.txt b/data/prompts/few_shot/vanilla_five-shot_STEM.txt
deleted file mode 100644
index 6cf8cc3..0000000
--- a/data/prompts/few_shot/vanilla_five-shot_STEM.txt
+++ /dev/null
@@ -1,39 +0,0 @@
-Question: A 0.217 g sample of HgO (molar mass = 217 g) reacts with excess iodide ions according to the reaction shown above. Titration of the resulting solution requires how many mL of 0.10 M HCl to reach equivalence point?
-Choices:
-(A) 1.0 mL
-(B) 10 mL
-(C) 20 mL
-(D) 50 mL
-Answer: (C)
-
-Question: Many Web browsers allow users to open anonymous windows. During a browsing session in an anonymous window, the browser does not record a browsing history or a list of downloaded files. When the anonymous window is exited, cookies created during the session are deleted. Which of the following statements about browsing sessions in an anonymous window is true?
-Choices:
-(A) The activities of a user browsing in an anonymous window will not be visible to people who monitor the user's network, such as the system administrator.
-(B) Items placed in a Web store's shopping cart for future purchase during the anonymous browsing session will not be saved on the user's computer.
-(C) A user will not be able to log in to e-mail or social media accounts during the anonymous browsing session.
-(D) A user browsing in an anonymous window will be protected from viruses launched from any web sites visited or files downloaded.
-Answer: (B)
-
-Question: A point pole has a strength of 4π * 10^-4 weber. The force in newtons on a point pole of 4π * 1.5 * 10^-4 weber placed at a distance of 10 cm from it will be
-Choices:
-(A) 15 N.
-(B) 20 N.
-(C) 7.5 N.
-(D) 3.75 N.
-Answer: (A)
-
-Question: Joe was in charge of lights for a dance. The red light blinks every two seconds, the yellow light every three seconds, and the blue light every five seconds. If we include the very beginning and very end of the dance, how many times during a seven minute dance will all the lights come on at the same time? (Assume that all three lights blink simultaneously at the very beginning of the dance.)
-Choices:
-(A) 3
-(B) 5
-(C) 6
-(D) 15
-Answer: (D)
-
-Question: The pleura
-Choices:
-(A) have no sensory innervation.
-(B) are separated by a 2 mm space.
-(C) extend into the neck.
-(D) are composed of respiratory epithelium.
-Answer: (C)
diff --git a/data/prompts/few_shot/vanilla_five-shot_humanities.txt b/data/prompts/few_shot/vanilla_five-shot_humanities.txt
deleted file mode 100644
index fa4d8c4..0000000
--- a/data/prompts/few_shot/vanilla_five-shot_humanities.txt
+++ /dev/null
@@ -1,42 +0,0 @@
-Question: Turtles live long lives and are happy creatures, unless they are injured.
-Choices:
-(A) (L • H) ≡ I
-(B) (L • H) ∨ I
-(C) L • (H ∨ I)
-(D) L • (H ⊃ R)
-Answer: (B)
-
-Question: A son owed a creditor $5,000. The son's father contacted the creditor and told him that he wanted to pay the son's debt. The father signed a document that stated the father would pay the son's debt at a rate of $500 a month for 10 months. The creditor made no written or oral commitment to forbear to sue the son to collect the $5,000 debt, and the father made no oral or written request for any such forbearance. For the next five months, the father made and the creditor accepted the $500 monthly payments as agreed. During that period, the creditor, in fact, did forbear to take any legal action against the son. However, the father then informed the creditor that he would make no further payments on the debt. Which of the following is the most persuasive argument that the father is liable to the creditor under the terms of their agreement?
-Choices:
-(A) The father's promise and the creditor's reliance thereon, if proved, gave rise to a valid claim by the creditor against the father based on the doctrine of promissory estoppel.
-(B) Because it was foreseeable that the father's promise would induce the creditor to forbear taking any action against the son, such forbearance was, as a matter of law, a bargained-for consideration for the father's promise.
-(C) The father's five payments to the creditor totaling $2,500 manifested a serious intent on the father's part to be contractually bound, and such manifestation is generally recognized as an effective substitute for consideration.
-(D) By assuming the antecedent debt obligation that the son owed to the creditor, the father became a surety whose promise to the creditor was enforceable, since it was in writing and supported by adequate consideration.
-Answer: (A)
-
-Question: This question refers to the following information.
-""""Society in every state is a blessing, but government even in its best state is but a necessary evil; in its worst state an intolerable one; for when we suffer, or are exposed to the same miseries by a government, which we might expect in a country without government, our calamity is heightened by reflecting that we furnish the means by which we suffer. Government, like dress, is the badge of lost innocence; the palaces of kings are built on the ruins of the bowers of paradise. For were the impulses of conscience clear, uniform, and irresistibly obeyed, man would need no other lawgiver; but that not being the case, he finds it necessary to surrender up a part of his property to furnish means for the protection of the rest; and this he is induced to do by the same prudence which in every other case advises him out of two evils to choose the least. Wherefore, security being the true design and end of government, it unanswerably follows that whatever form thereof appears most likely to ensure it to us, with the least expense and greatest benefit, is preferable to all others.""""
-Thomas Paine, Common Sense, 1776
-Which of the following """"miseries"""" alluded to above were most condemned by Anti-Federalists of the post-Revolutionary era?
-Choices:
-(A) Organized response to Bacon's Rebellion
-(B) Federal response to Shays's Rebellion
-(C) Federal response to Pontiac's Rebellion
-(D) Federal response to the Whiskey Rebellion
-Answer: (D)
-
-Question: Which of the following is true of a valid categorical syllogism?
-Choices:
-(A) The minor premise must deny the antecedent
-(B) The major premise must affirm the consequent
-(C) The middle term must be used in at least one premise in a universal or unqualified sense
-(D) All of the above
-Answer: (C)
-
-Question: How can the Upanishads be characterized?
-Choices:
-(A) Ritual texts
-(B) Philosophical texts
-(C) Hymns
-(D) Origin stories
-Answer: (B)
diff --git a/data/prompts/few_shot/vanilla_five-shot_other.txt b/data/prompts/few_shot/vanilla_five-shot_other.txt
deleted file mode 100644
index e7e6588..0000000
--- a/data/prompts/few_shot/vanilla_five-shot_other.txt
+++ /dev/null
@@ -1,47 +0,0 @@
-Question: In contrast to _______, _______ aim to reward favourable behaviour by companies. The success of such campaigns have been heightened through the use of ___________, which allow campaigns to facilitate the company in achieving _________ .
-Choices:
-(A) Buycotts, Boycotts, Blockchain technology, Charitable donations
-(B) Buycotts, Boycotts, Digital technology, Increased Sales
-(C) Boycotts, Buyalls, Blockchain technology, Charitable donations
-(D) Boycotts, Buycotts, Digital technology, Increased Sales
-Answer: (D)
-
-Question: In the assessment of the hand function which of the following is true?
-Choices: -(A) Abduction of the thumb is supplied by spinal root T2 -(B) Opposition of the thumb by opponens policis is supplied by spinal root T1 -(C) Finger adduction is supplied by the median nerve -(D) Finger abduction is mediated by the palmar interossei -Answer: (B) - -Question: As of 2015, since 1990 forests have ____ in Europe and have ____ in Africa and the Americas. -Choices: -(A) increased, increased -(B) increased, decreased -(C) decreased, increased -(D) decreased, decreased -Answer: (B) - -Question: What characteristic is not a key feature of the 'open systems' model of management? -Choices: -(A) Morale -(B) Innovation -(C) Growth resource -(D) Adaptation -Answer: (A) - -Question: When older adults move to a new state after retirement, which of the following is the more likely destination? -Choices: -(A) Texas -(B) California -(C) Hawaii -(D) Vermont -Answer: (A) - -Question: Which of these songs was a Top 10 hit for the rock band The Police? -Choices: -(A) 'Radio Ga-Ga' -(B) 'Ob-la-di Ob-la-da' -(C) 'De Do Do Do De Da Da Da' -(D) 'In-a-Gadda-Da-Vida' -Answer: (C) diff --git a/data/prompts/few_shot/vanilla_five-shot_social_sciences.txt b/data/prompts/few_shot/vanilla_five-shot_social_sciences.txt deleted file mode 100644 index 417c3b3..0000000 --- a/data/prompts/few_shot/vanilla_five-shot_social_sciences.txt +++ /dev/null @@ -1,39 +0,0 @@ -Question: Which of the following is not a problem associated with official statistics on strike action? -Choices: -(A) most strikes go unnoticed by employers and the mass media -(B) not all industrial disputes will be reported by the employer -(C) the definition of strikes excludes those that involve fewer than ten workers or last less than one day -(D) it is hard to compare strikes that were measured in different ways -Answer: (A) - -Question: The realm of policy decisions concerned primarily with relations between the United States and the rest of the world is known as -Choices: -(A) terrorism policy. -(B) economic policy. -(C) foreign policy. -(D) international policy. -Answer: (C) - -Question: In terms of Hofstede’s (1980) five cultural dimensions, the United States scores at the top of the scale on: -Choices: -(A) individualism and power distance. -(B) individualism. -(C) power distance and masculinity. -(D) uncertainty avoidance. -Answer: (B) - -Question: For a stationary autoregressive process, shocks will -Choices: -(A) Eventually die away -(B) Persist indefinitely -(C) Grow exponentially -(D) Never occur -Answer: (A) - -Question: Which of the following statements is NOT accurate regarding the services provided by local governments in the United States? -Choices: -(A) Duplication of efforts occurs often. -(B) Social problems of the central city spill over into the surrounding residential suburbs. -(C) Inefficiency in providing services occurs often. -(D) One neighborhood's efforts to reduce pollution are always supported by neighboring communities. -Answer: (D) diff --git a/data/prompts/zero_shot/CoT_zero_shot.txt b/data/prompts/zero_shot/CoT_zero_shot.txt deleted file mode 100644 index c190358..0000000 --- a/data/prompts/zero_shot/CoT_zero_shot.txt +++ /dev/null @@ -1 +0,0 @@ -Let's think step by step. diff --git a/data/prompts/zero_shot/plan_and_solve_zero_shot.txt b/data/prompts/zero_shot/plan_and_solve_zero_shot.txt deleted file mode 100644 index 43a2977..0000000 --- a/data/prompts/zero_shot/plan_and_solve_zero_shot.txt +++ /dev/null @@ -1 +0,0 @@ -Let's first understand the problem and devise a plan to solve the problem. 
Then, let's carry out the plan and solve the problem step by step. diff --git a/data/prompts/zero_shot/thread_of_thoughts.txt b/data/prompts/zero_shot/thread_of_thoughts.txt deleted file mode 100644 index a2faa1d..0000000 --- a/data/prompts/zero_shot/thread_of_thoughts.txt +++ /dev/null @@ -1 +0,0 @@ -Walk me through this context in manageable parts step by step, summarizing and analyzing as we go. diff --git a/data/prompts/zero_shot/vanilla_zero_shot_1.txt b/data/prompts/zero_shot/vanilla_zero_shot_1.txt deleted file mode 100644 index 5218515..0000000 --- a/data/prompts/zero_shot/vanilla_zero_shot_1.txt +++ /dev/null @@ -1 +0,0 @@ -Solve the following problem and return your answer as "A", "B", "C" or "D" with quotes surrounding the correct letter. diff --git a/data/prompts/zero_shot/vanilla_zero_shot_2.txt b/data/prompts/zero_shot/vanilla_zero_shot_2.txt deleted file mode 100644 index ec5218c..0000000 --- a/data/prompts/zero_shot/vanilla_zero_shot_2.txt +++ /dev/null @@ -1 +0,0 @@ -Answer the question with quotes surrounding the final answer as such: "A", "B", "C" or "D". diff --git a/data/prompts/zero_shot/vanilla_zero_shot_3.txt b/data/prompts/zero_shot/vanilla_zero_shot_3.txt deleted file mode 100644 index 26afb90..0000000 --- a/data/prompts/zero_shot/vanilla_zero_shot_3.txt +++ /dev/null @@ -1 +0,0 @@ -Solve the problem and return "A", "B", "C" or "D". \ No newline at end of file diff --git a/data/semantic_scholar_data/semantic_scholar_4_to_6_.csv b/data/semantic_scholar_data/semantic_scholar_4_to_6_.csv deleted file mode 100644 index eae9057..0000000 --- a/data/semantic_scholar_data/semantic_scholar_4_to_6_.csv +++ /dev/null @@ -1,192 +0,0 @@ -Title,Model,Probability,Reasoning -align and prompt: video-and-language pre-training with entity prompts,gpt-4-1106-preview,4,"The abstract describes a pre-training framework for video-and-language tasks, focusing on cross-modal alignment and introducing a novel prompting entity modeling concept. Although the study involves 'entity prompts,' it primarily concentrates on video-text interaction rather than exploring 'hard prefix prompts' as may be suggested by prompt engineering in a language model context. The relevance to prompt engineering is secondary and indirect, mainly connected through the novel use of prompts for entity modeling within a multimodal framework, not as a comprehensive study of prompt engineering itself." -why do pretrained language models help in downstream tasks? an analysis of head and prompt tuning,gpt-4-1106-preview,6,"The paper's abstract describes an analysis of head tuning and prompt tuning, which are highly relevant to the study of prompt engineering as a concept. Prompt tuning particularly involves the process of adjusting prompts to achieve better performance in downstream tasks, which is a subset of prompt engineering. However, the abstract suggests a specific focus on the theoretical underpinnings of why pretrained language models are effective, using generative models like HMMs for analysis. The relevance to prompt engineering is therefore significant but not completely aligned, as it does not explicitly address the systematic review of 'hard prefix prompts' or the practical aspect of designing prompts, which might be expected from a 'comprehensive systematic review on hard prefix prompts'." -modeling prompt adherence in student essays,gpt-4-1106-preview,6,"The study is only moderately relevant to prompt engineering. 
It focuses on modeling prompt adherence in student essays and introduces a corpus and scoring method, which could potentially inform the development of prompts in educational settings. However, prompt adherence is just one aspect of prompt engineering, and the study's scope is limited to student essays rather than a broader application within engineering prompts for AI or human-computer interactions. Therefore, while relevant, it does not wholly represent prompt engineering as a comprehensive field." -how novices use llm-based code generators to solve cs1 coding tasks in a self-paced learning environment,gpt-4-1106-preview,4,"While the presented study does not directly focus on 'hard prefix prompts' or prompt engineering, it does investigate the use of prompts by novice programmers in an educational setting when interacting with a Large Language Model (LLM)-based code generator like Codex. Since prompt crafting is a substantial part of this interaction, and the properties of these prompts are analyzed, the study has some relevance to prompt engineering. However, its primary focus seems to be on the educational implications and usage patterns of the LLM rather than developing or understanding the specific prompt engineering strategies to improve interaction with LLMs." -"reason for future, act for now: a principled framework for autonomous llm agents with provable sample efficiency",gpt-4-1106-preview,5,"The abstract provided discusses a framework for improving the way large language models act and reason over time, with a focus on learning and planning within Bayesian adaptive Markov decision processes. Although this is related to how prompts might be engineered to elicit particular responses from LLMs, it doesn't specifically mention 'hard prefix prompts' or address prompt engineering techniques in a systematic review context. Therefore, while aspects of this framework could potentially inform prompt engineering strategies to some extent (hence not a 0 rating), the relevance to the study of prompt engineering, particularly that of 'hard prefix prompts,' is only tangentially related. Therefore, a middle score reflects this partial relevance." -hide and seek (has): a lightweight framework for prompt privacy protection,gpt-4-1106-preview,5,"The provided abstract focuses on privacy protection in the context of using large language models by introducing the HaS (Hide and Seek) framework, which is relevant to the broader field of responsible AI usage and prompt engineering to a certain degree. It discusses techniques for anonymization and de-anonymization, which could indirectly affect the way prompts are engineered to ensure privacy. However, the main concern of the study is privacy protection rather than methodologies for optimizing or understanding the construction of prompts (hard prefix prompts) in prompt engineering studies. As a result, it holds moderate relevance as it touches upon the privacy aspect of user inputs (prompts) but does not directly deal with the study or advancement of prompt-engineering techniques." -gpt-3-driven pedagogical agents for training children's curious question-asking skills,gpt-4-1106-preview,6,"The relevance to prompt engineering study is moderate. While the focus of this paper appears to be on using large language models to encourage children to ask more curious questions, and it involves a natural language prompting approach, the connection to 'hard prefix prompts' specifically is not directly mentioned. 
Prompt engineering is certainly a component of training these models for pedagogical purposes, but the abstract does not provide information about a systematic review of prompt engineering or hard prefix prompts explicitly. It suggests using prompting methods for practical applications rather than studying the prompts themselves." -surreal vr pong: llm approach to game design,gpt-4-1106-preview,4,"The title and abstract provided discuss the application of generative models and computational creativity in the context of 3D game design and propose the integration of prompt-based creation into the gameplay itself. While it does not specifically focus on 'hard prefix prompts,' it does touch on prompt engineering by suggesting that prompts can be an element within game mechanics. This indicates some relevance to the study of prompt engineering, but it is not a direct or comprehensive examination of hard prefix prompts in systematic reviews or other studies." -reconcile: round-table conference improves reasoning via consensus among diverse llms,gpt-4-1106-preview,4,"The study presents a multi-agent system for improving consensus and reasoning among Large Language Models (LLMs), which touches on the field of prompt engineering indirectly through the use of 'discussion prompts'. While it does not address hard prefix prompts directly, the mention of prompts as a means for agent communication suggests relevance to prompt design and its impact on model performance. Therefore, it is somewhat relevant to studies in prompt engineering, especially those exploring the interaction dynamics and prompt-response behavior within and between models." -prompting large language models for zero-shot domain adaptation in speech recognition,gpt-4-1106-preview,4,"The abstract touches on using a domain-specific text prompt for zero-shot domain adaptation in speech recognition with a large language model, which involves prompt engineering for a narrowly defined purpose. It highlights utilizing prompts for performance improvement in a specific AI task, which is relevant to the study of prompt engineering. However, it does not directly address a 'systematic review on hard prefix prompts' or cover the broader implications and methodologies of prompt engineering, thus only partially relevant." -interactive data synthesis for systematic vision adaptation via llms-aigcs collaboration,gpt-4-1106-preview,6,"The abstract provided for the study 'interactive data synthesis for systematic vision adaptation via llms-aigcs collaboration' indicates an exploration of the collaboration between language models (LLMs) and artificial intelligence generated content (AIGC) models for more controllable image generation, which is aligned with the practice of prompt engineering. However, the focus seems to be on data augmentation for vision tasks rather than solely on the systematic review of 'hard prefix prompts' in prompt engineering. Although prompt engineering is relevant to the work described, as it is necessary for guiding the LLMs in this process, the absence of a direct and explicit focus on a review of prompt engineering techniques, specifically hard prefix prompts, results in a moderate rating on the relevance scale." -systematic rectification of language models via dead-end analysis,gpt-4-1106-preview,6,"The study presents a method for detoxification of language model outputs, which is tangentially related to prompt engineering. 
While the main focus is not on the development of prompts, the detoxification process could impact how prompts are engineered by reducing the probability of generating toxic responses and altering the token selection process. This can be relevant in creating safer and more effective prompts. However, the study does not directly address hard prefix prompts or systematic reviews of prompt engineering strategies, so the rating reflects moderate relevance rather than full alignment with the prompt engineering field." -chatrule: mining logical rules with large language models for knowledge graph reasoning,gpt-4-1106-preview,5,"The described paper presents a novel framework called ChatRule, which utilizes large language models to generate logical rules for knowledge graph reasoning. While this application indirectly relates to prompt engineering, as it involves leveraging LLMs to generate content based on structured prompts from knowledge graphs, the focus is more on the application in knowledge graphs and logical rule mining rather than on the study of hard prefix prompts in a general context. Therefore, its relevance to a comprehensive systematic review on hard prefix prompts in prompt engineering may be considered moderate, as the principles could potentially inform prompt engineering techniques, but it is not directly aligned with the review's core subject." -zero-shot prompting for code complexity prediction using github copilot,gpt-4-1106-preview,6,"The relevance of this study to prompt engineering is somewhat indirect. The study investigates the capacity of GitHub Copilot, which leverages a Large Language Model, to predict code complexity in a zero-shot manner. While this addresses the model's ability to understand and generate responses in a specific technical domain without prior training, it does not directly explore the engineering or optimization of prompts (i.e., hard prefix prompts). However, the study does touch on a key aspect of prompt-based interactions with AI, which is the model's performance on tasks with no fine-tuning. This suggests relevance in terms of understanding the capabilities and limitations of LLMs like GPT3 when prompted with untrained tasks, which is a component of prompt engineering." -investigating causal understanding in llms,gpt-4-1106-preview,6,"The study discussed in the abstract is only partially relevant to prompt engineering since the investigation focuses on the causal understanding capabilities of LLMs rather than specifically on 'hard prefix prompts.' However, the research touches on how varying prompt presentations can affect LLM responses, which is related to the concept of prompt engineering. The relevance lies in understanding the influence of presentation form, both in hard prefix prompts and other types of prompting. The rating is not higher because the study does not directly focus on 'hard prefix prompts,' which seems to be the specific area of interest." -evaluating the text-to-sql capabilities of large language models,gpt-4-1106-preview,6,"The abstract describes an empirical study on the capability of a language model, Codex, to interpret and convert natural language into SQL queries. The study focuses on performance evaluation and comparison with state-of-the-art models in terms of few-shot learning, which is facilitated by providing a few in-domain examples in the prompt. 
Although the abstract does not explicitly mention the term 'prompt engineering,' the essence of evaluating the impact of tailored prompts on the model's performance is captured in the process of providing 'in-domain examples'. This could be considered a form of prompt engineering, as it involves crafting prompts to improve task-specific performance. Hence, the study has relevance to the broader field of prompt engineering, specifically regarding how prompts can enable large language models to understand and generate structured queries like SQL. However, the focus is not on 'hard prefix prompts' or a comprehensive systematic review on such, which would be more directly related to the prompt engineering study described in the prompt, thus warranting a moderate rating rather than a high one." -persistent anti-muslim bias in large language models,gpt-4-1106-preview,6,"While the study is highly relevant to the broader field of AI ethics and bias in machine learning models, its direct relevance to 'prompt engineering' is moderate. It touches on the concept of 'adversarial text prompts' as a means to counteract bias in language models, which does fall under the scope of prompt engineering. However, the study's primary focus is on the identification and analysis of bias, rather than on the engineering of prompts as a method for directing or improving the language model's outputs. More specifically, it does not address 'hard prefix prompts' in the systematic review sense but does explore the dynamic between prompt construction and model responses related to bias." -augesc: dialogue augmentation with large language models for emotional support conversation,gpt-4-1106-preview,4,"The study described does involve prompts as it discusses leveraging large language models for dialogue augmentation, specifically in the context of emotional support conversation. The prompt engineering aspect is present in the sense that the researchers instruct the model to complete dialogues which could be considered a form of a 'prompt'. However, hard prefix prompts, which imply a specific approach to structuring prompts to elicit desired responses, are not directly mentioned. This suggests that while the study is related to prompt design and usage, it may not focus on the 'hard prefix prompts' aspect extensively, leading to a moderate relevance rating." -retroformer: retrospective large language agents with policy gradient optimization,gpt-4-1106-preview,6,"The abstract describes a study related to optimizing large language agents through policy gradient optimization, which indirectly involves engineering of prompts because it mentions the automatic tuning of language agent prompts based on environment feedback. While this does not specifically target 'hard prefix prompts,' it is relevant to the broader field of prompt engineering as it involves refining prompts to improve agent performance. However, the lack of direct mention of 'hard prefix prompts' or a comprehensive systematic review of them justifies a moderate rating rather than a high one." -leveraging large language models for mental health prediction via online text data,gpt-4-1106-preview,6,"The title and abstract indicate that this study involves leveraging large language models (LLMs) for mental health prediction tasks by analyzing online text data, which is related to the application of LLMs, but it doesn't specifically mention 'hard prefix prompts' or 'prompt engineering' as the central theme. 
However, the use of zero-shot and few-shot prompting, along with instruction finetuning, falls under the broader category of prompt engineering techniques. Therefore, while the study is tangentially relevant to prompt engineering because it involves designing inputs for LLMs to perform specific tasks, it is not focused on a comprehensive systematic review of hard prefix prompts, which makes it only moderately relevant." -can large language models empower molecular property prediction?,gpt-4-1106-preview,4,"The study focuses on the application of Large Language Models (LLMs) for molecular property prediction using SMILES text, which demonstrates a use case for LLMs that is adjacent to the concept of prompt engineering. Although it deals with prompting LLMs for in-context learning and involves the generation of explanations, which are relevant techniques in prompt engineering, the study's primary aim is not a systematic review of prompt engineering itself, nor does it specifically address 'hard prefix prompts'. Therefore, its relevance to a comprehensive systematic review on hard prefix prompts is tangentially related but not directly aligned, warranting a moderate relevance rating." -generating data for symbolic language with large language models,gpt-4-1106-preview,6,"The abstract indicates that the paper is closely related to the use of prompts in the context of LLMs for data generation, specifically in the area of symbolic language tasks. While the study does not directly focus on 'hard prefix prompts' as stipulated in the prompt engineering study question, it does explore 'informative prompt' design in order to steer the LLM's data generation process. This suggests a strong relevance to the practice of prompt engineering and the optimization of prompts for specific tasks in LLMs. However, the lack of explicit focus on 'hard prefix prompts' reduces the direct relevance to the systematic review concerning that specific aspect of prompt engineering." -quantifying memorization across neural language models,gpt-4-1106-preview,4,"While the abstract discusses the issue of memorization in language models, which is indirectly related to how models respond to prompts, it does not directly address prompt engineering, particularly the study of 'hard prefix prompts.' The information provided is relevant to the construction and reliability of prompts in the context of avoiding the elicitation of memorized data, but it does not specifically focus on engineering prompts for systematic review that the original query suggests. Therefore, the relevance is moderate but not directly applicable to the study of hard prefix prompts in prompt engineering." -denseclip: language-guided dense prediction with context-aware prompting,gpt-4-1106-preview,4,"While the study described in the abstract does involve a form of 'prompting' by using contextual language information to guide a model, this is applied in the scope of visual representation learning and not in the explicit context of 'hard prefix prompts' for text-based language models, which is often what is referred to in prompt engineering studies. Therefore, its relevance to prompt engineering study is tangential rather than directly applicable." -speechprompt v2: prompt tuning for speech classification tasks,gpt-4-1106-preview,6,"The paper is relevant to prompt engineering as it discusses prompt tuning, a technique integral to prompt engineering that involves fine-tuning a language model (LM) using prompts to better perform specific tasks. 
Although the main focus is on speech classification tasks and not solely on hard prefix prompts, it still offers insights into the larger field of prompt engineering, particularly how prompts are used to improve performance and efficiency for various tasks in speech processing. The paper does not directly address a 'comprehensive systematic review on hard prefix prompts,' but the technology it explores falls within the broader scope of prompt engineering studies." -prompting visual-language models for efficient video understanding,gpt-4-1106-preview,6,"The study pertains to efficient adaptation mechanisms for pre-trained visual-language models specifically for video understanding tasks. It suggests a methodology for fine-tuning the models, possibly including the use of prompts to align pre-training objectives with video-related tasks. While it doesn't directly address 'hard prefix prompts', the adaptation of pre-trained models using prompts is a related area of research. The relevance is therefore moderate, as the study could potentially inform prompt engineering practices in multi-modal contexts, even though it does not focus on a systematic review of hard prefix prompts." -reducing sentiment bias in language models via counterfactual evaluation,gpt-4-1106-preview,4,"The study deals with reducing sentiment bias in language models by using a form of counterfactual evaluation, which is related to how prompts might internalize biases present in training data. While it touches on the area of prompt engineering by considering how the conditioning context affects model output, its primary focus is on bias quantification and reduction rather than on the systematic review of 'hard prefix prompts' or the structure and impact of prompt design itself." -simultaneous translation and paraphrase for language education,gpt-4-1106-preview,4,"The study presents work on the generation of translations and paraphrases, which touches upon prompt engineering indirectly through the creation of diverse language sets for training models. However, the primary focus appears to be on translation and paraphrasing rather than prompt engineering itself. It can be relevant to prompt engineering in the context of designing effective prompts for language translation tasks but does not directly address the systematic review of 'hard prefix prompts' for prompt engineering studies." -generative visual prompt: unifying distributional control of pre-trained generative models,gpt-4-1106-preview,6,"The study presents a framework called Generative Visual Prompt (PromptGen) to exercise distributional control over pre-trained generative models. While it does not directly relate to 'hard prefix prompts' that are typically associated with language models and their prompting techniques, the concept of manipulating the output of generative models using prompts (here in the form of external model knowledge) is related to the broader topic of prompt engineering. The focus on controlling generative models aligns with the idea of influencing model behavior through prompts, hence the relevance to the field of prompt engineering. However, it lacks a direct connection to 'hard prefix prompts' in systematic review context and instead deals with a different application of prompting in the visual domain. Thus, the relevance is moderate, and the rating is given a 6." 
-dream3d: zero-shot text-to-3d synthesis using 3d shape prior and text-to-image diffusion models,gpt-4-1106-preview,4,"The paper discusses the use of text prompts for zero-shot text-to-3D synthesis, which involves aspects of prompt engineering as it requires the optimization of text prompts to generate 3D shapes. However, the core focus is on the synthesis of 3D structures from text descriptions rather than on the study of hard prefix prompts specifically. The relevance to prompt engineering is tangential and mainly related to optimizing text prompts within a specific context of 3D content generation." -unleashing the power of visual prompting at the pixel level,gpt-4-1106-preview,6,"The paper describes a study focused on visual prompting as a technique to adapt pre-trained models for recognition tasks, which is related to prompting in a broad sense. However, the query asks for a comprehensive systematic review on hard prefix prompts, which typically refers to textual prompt engineering where specific text prompts are designed to guide language models. Although visual prompting shares similar goals in terms of adapting models to new tasks, it does so in the domain of images rather than text. Hence, the relevance is moderate because the methods and outcomes may have conceptual parallels to textual prompt engineering, but do not directly address or review hard text-based prefix prompts." -being comes from not-being: open-vocabulary text-to-motion generation with wordless training,gpt-4-1106-preview,4,"While the abstract indicates that this study was inspired by prompt learning in NLP and involves the reformulation of input text into a 'prompt' for a generator, the primary focus is on text-to-motion generation rather than on prompt engineering for language models or systematic reviews of 'hard prefix prompts.' The connection to prompt engineering is tangential and based more on a conceptual inspiration than on a direct study or analysis of prompts in the context of text or language processing." -a web-based environment for documentation and sharing of engineering design knowledge,gpt-4-1106-preview,5,"The abstract describes an ontological knowledge-base designed to aid in the engineering design process by prompting engineers to document and share information efficiently. Although it mentions the use of prompts to drive certain behaviors within the engineering design process, the focus is not specifically on the study of 'hard prefix prompts' in the context of 'prompt engineering' as it relates to AI or machine learning. The paper seems to be more aligned with knowledge management and ontological structures in engineering rather than the specific study of designing and engineering prompts for AI systems. Therefore, it is somewhat relevant due to its use of prompting mechanisms but not directly concerned with the study at hand." -what initiates evidence‐based reasoning?: situations that prompt students to support their design ideas and decisions,gpt-4-1106-preview,6,"The document is somewhat relevant to the study of prompt engineering as it discusses the situations that lead students to use evidence-based reasoning, which is an important factor in understanding how to structure prompts to elicit informed responses. Although it focuses on evidence-based reasoning in the context of engineering education rather than the specific area of 'hard prefix prompts', understanding the broader principles of how prompts can initiate certain types of thinking is pertinent to prompt engineering." 
-a prompt-aware neural network approach to content-based scoring of non-native spontaneous speech,gpt-4-1106-preview,4,"The study focuses on using neural network techniques to assess non-native spontaneous speech, which includes using prompts as a condition for the model. Although this involves engineering a model to interact with prompts, the core emphasis is on automatic assessment rather than on the systematic review or deep exploration of 'hard prefix prompts', which would be central to prompt engineering studies. Therefore, the relevance is moderate as it only touches on prompt-related aspects within a broader application context." -how do different reflection prompts affect engineering students’ academic performance and engagement?,gpt-4-1106-preview,6,"The abstract describes a study that relates to prompt engineering in the educational sense and not in the AI field. It addresses the effectiveness of different types of reflection prompts (generic versus specific) on students' performance and engagement in an engineering course context. While it is not directly about the engineering of AI-based prompt systems, the insights regarding how specificity in prompts can influence outcomes may be partially relevant to the nuances involved in designing prompts for AI systems. However, the primary focus of the study on academic performance and engagement of engineering students limits the relevance to prompt engineering in AI. Thus, the rating reflects moderate relevance due to indirect connections that could be drawn between educational prompting strategies and AI prompt design considerations." -v2p: vision-to-prompt based multi-modal product summary generation,gpt-4-1106-preview,6,"The paper presents a multi-modal product summary generation framework that uses a Generative Pre-trained Language Model with prompts derived from visual attributes, which aligns with the concept of prompt engineering in the sense that it involves designing prompts for guiding text generation. However, the focus seems to be more on the multi-modal interaction and summary generation, rather than on the systematic study of hard prefixes or prompt structures themselves. Therefore, while it is relevant due to its use of prompts, it may not directly address the nuances of prompt engineering as pertains to hard prefix prompts specifically, hence the rating of 6." -"don’t prompt, search! mining-based zero-shot learning with language models",gpt-4-1106-preview,6,"The paper discusses the limitation of the traditional prompt-based approach for zero-shot learning with language models and offers an alternative mining-based approach. It touches upon the subject of how prompts are used and their sensitivity to the task, which is relevant to prompt engineering studies. However, the primary focus seems to be on the mining technique rather than the engineering or optimization of prompts themselves. Therefore, while it relates to the field of prompt engineering, it does so from a perspective of finding an alternative to hard-coded prompts, rather than improving or systematically reviewing them." -prompt-tuned code language model as a neural knowledge base for type inference in statically-typed partial code,gpt-4-1106-preview,6,"The study presents an approach that incorporates elements of prompt engineering by fine-tuning a language model with a specific task-oriented prompt ('pre-train, prompt and predict' paradigm). 
Although the primary focus is not on prompt engineering for natural language processing, but rather type inference within code, the use of prompts to guide the model suggests relevance. However, it is specialized for code language models which may not fully align with more generalized prompt engineering studies." -prompt deep light-weight vessel segmentation network (plvs-net),gpt-4-1106-preview,6,"The relevance to prompt engineering in this study is moderate. The use of 'prompt blocks' within the network architecture indicates an innovation related to how the network processes information, which might be relevant to prompt engineering in the broader sense of designing inputs that improve the performance of a neural network. However, the primary focus appears to be on biomedical image segmentation, rather than the development or study of prompting methods for natural language processing or other general AI applications. Thus, while the term 'prompt' is used, it may not directly align with the typical context of prompt engineering, which is often related to improving AI responses or behavior based on textual input." -the utility of an evidence-based lecture and clinical prompt as methods to improve quality of care in colorectal cancer screening,gpt-4-1106-preview,4,"The study appears to investigate the effectiveness of clinical prompts in a medical setting, which tangentially relates to the concept of prompt engineering. While not directly studying 'hard prefix prompts' or prompt engineering for AI or computational systems, the principle of using prompts to improve performance outcomes has some relevance to the broader field of study. However, the specific application to colorectal cancer screening and the focus on evidence-based lectures differentiates this from the typical context of prompt engineering in technology, which usually refers to the designing of inputs to elicit desired responses from AI models or systems." -blended diffusion for text-driven editing of natural images,gpt-4-1106-preview,6,"The paper's relevance to prompt engineering is moderate as it deals with the application of language prompts in the context of image editing. Even though the main focus is on the use of natural language prompts to direct image edits, which is related to how prompts are engineered to guide machine learning models, it is not specifically focused on the study of 'hard prefix prompts' or the structure and efficacy of prompts in a general sense. The relevance comes from the intersection with prompt engineering in the domain of combining text prompts with image processing models, which may offer insights into how to better design prompts for specific tasks like image editing. However, without a direct analysis on the design, structure, or impact of the prompts themselves, its relevance is not maximal." -cora: adapting clip for open-vocabulary detection with region prompting and anchor pre-matching,gpt-4-1106-preview,4,"The abstract describes an approach to improve open-vocabulary detection by using region prompting in combination with a visual-language model, which could be relevant to prompt engineering in that it involves the adaptation of prompts to improve recognition tasks. However, the focus is on object detection and adapting existing models to new tasks, rather than investigating the systematic study of hard prefix prompts specifically. 
While the method of region prompting could potentially inform prompt engineering practices, the direct relevance to the study of hard prefix prompts is tangential." -information and communication technology based prompting for treatment compliance for people with serious mental illness.,gpt-4-1106-preview,5,"The provided abstract discusses the use of ICT-based prompting to improve treatment compliance in people with serious mental illness, which aligns with the broader concept of 'prompts' in behavior modification. However, the term 'hard prefix prompts' typically refers to a specific approach in natural language processing or AI-related prompt engineering, which is not the focus of this study. Therefore, the relevance is moderate as it deals with prompts in a different context than what 'prompt engineering study' typically would imply in technological or AI research." -bloom+1: adding language support to bloom for zero-shot prompting,gpt-4-1106-preview,4,"The provided document abstract pertains to language model adaptation, specifically for the BLOOM model, and how it is applied to zero-shot prompting in new languages. While the study addresses issues relevant to language models and prompting, it does not directly deal with the engineering of prompts, especially with 'hard prefix prompts' as mentioned in the original query. The relevance lies in the broader context of zero-shot learning and language adaptation, which can impact the effectiveness of prompts in multiple languages. However, since it doesn't focus on the specific design or structuring of prompts, or the concept of 'hard prefix prompts', the rating is moderately low." -using simple technology to prompt multistep tasks in the home for people with dementia: an exploratory study comparing prompting formats,gpt-4-1106-preview,5,"The study provides insights into the design of prompts for a specific user group (people with dementia) and highlights that the effectiveness of prompts can be context-dependent, which offers a partial relevance to the general field of prompt engineering. However, the study is focused on cognitive impairment and lacks a direct connection to the broader concepts and methodologies of engineering prompts for software or AI interactions. Therefore, the relevance is moderate." -large language models are state-of-the-art evaluators of translation quality,gpt-4-1106-preview,5,"The study focuses on the use of large language models for evaluating translation quality, which indirectly relates to prompt engineering through the application of zero-shot prompting and comparison of prompt variants. However, it is more centered on the application of language models for translation assessments rather than the principles or effects of prompt engineering itself. Although understanding how different prompts impact the quality evaluation by a language model is relevant, the core of the study is translation quality assessment rather than prompt engineering." -virtual prompt pre-training for prototype-based few-shot relation extraction,gpt-4-1106-preview,4,"While the title suggests the study involves 'virtual prompt pre-training', which pertains to a technique potentially related to prompt engineering in the context of machine learning, the lack of abstract and TLDR makes it difficult to assess its direct relevance to prompt engineering, particularly to 'hard prefix prompts'. 
The relevance is expected to be moderate as it mentions prototypes and few-shot relation extraction which may involve prompt design but does not explicitly focus on hard prefix prompts as per the provided information." -visual prompt tuning for few-shot text classification,gpt-4-1106-preview,6,"The paper abstract introduces a novel method of Visual Prompt Tuning for few-shot text classification that utilizes vision-language pre-training models, which is somewhat relevant to prompt engineering as it involves a form of prompt tuning. However, the primary focus is on incorporating visual elements rather than exclusively on texts or verbal prompts, which traditionally constitute 'prompt engineering' in language models. The relevance rating is given a moderate score because it deals with tuning aspects pertinent to the deployment of large-scale language models, although it does not directly address 'hard prefix prompts' as described in the original study topic." -augprompt: knowledgeable augmented-trigger prompt for few-shot event classification,gpt-4-1106-preview,5,"The title suggests that the study is related to prompt engineering as it mentions 'augmented-trigger prompt' which implies a method of prompt design for enhanced performance in an NLP task (few-shot event classification). However, without an abstract or TLDR, it is challenging to assess the depth of relevance to prompt engineering, hence a middle-of-the-road rating is given. More information would be required for a more accurate rating." -prompt-based self-training framework for few-shot named entity recognition,gpt-4-1106-preview,5,"The title suggests the study involves 'prompt-based' methodology, which is relevant to prompt engineering. However, without more information from the abstract or TLDR, it's challenging to determine the extent of relevance to hard prefix prompts specifically. The study focuses on few-shot named entity recognition, which may involve prompts, but it is unclear how systematically the prompts are reviewed or engineered in the study. A neutral score reflects this partial relevance based on the information provided." -zero- and few-shot event detection via prompt-based meta learning,gpt-4-1106-preview,6,"The study discusses a meta-learning framework for zero- and few-shot event detection, employing cloze-based prompts within the methodology. Prompt-based approaches are relevant to prompt engineering, as they involve the design of input structures that facilitate model learning and generalization to new tasks. However, the focus on event detection and a meta-learning framework makes this work only partially related to the core study of hard prefix prompts in prompt engineering, hence the rating is moderate." -few-shot composition learning for image retrieval with prompt tuning,gpt-4-1106-preview,6,"The study includes techniques related to prompt tuning and the development of a visual prompt within the context of image retrieval, which is indirectly related to prompt engineering in natural language processing (NLP). While prompt tuning is a concept used in NLP, this study applies it to a visual domain and focuses on compositional learning and few-shot learning mechanisms, which are somewhat tangential to the typical studies on hard prefix prompts in text-based models. The relevance is moderate because the study does show the application of prompt tuning concepts but in a different domain and does not directly address hard prefix prompts in the context of NLP." 
-balanced distributed augmentation for multi-label few shot learning with prototypical network,gpt-4-1106-preview,4,"The abstract indicates that the study involves novel pipeline for automating the prompt generation, which is somewhat relevant to prompt engineering, particularly if the automated generation includes what could be considered 'hard prefix prompts.' However, the main focus of the paper appears to be on data augmentation techniques and sentiment analysis for few-shot learning rather than directly on prompt engineering. The relevance is therefore moderate and not the primary emphasis of the research." -structure pretraining and prompt tuning for knowledge graph transfer,gpt-4-1106-preview,4,"The abstract describes a study on a knowledge graph pretraining model (KGTransformer) and its application across different knowledge graph-related tasks, which is related to machine learning and transfer learning. The use of 'prompt-tuning' with task data as a 'triple prompt' indicates a form of prompt engineering, but the focus seems to be more on the application of this mechanism for task-specific KG interactions, rather than a comprehensive study of the prompt engineering concept itself. The relevance to prompt engineering study is therefore present but not central to the paper's core contribution, hence the moderate rating." -[cls] token is all you need for zero-shot semantic segmentation,gpt-4-1106-preview,4,"The given abstract pertains to a study on zero-shot semantic segmentation using [CLS] tokens from the CLIP model, which isn't directly related to prompt engineering. However, the use of [CLS] tokens as auxiliary prompts for the visual encoder suggests some relevance to the understanding of how prompts can influence AI models. The rating is not higher because the primary focus of the study is on image segmentation, not on prompt engineering itself." -the unreliability of explanations in few-shot in-context learning,gpt-4-1106-preview,6,"The study seems to address a part of prompt engineering by examining how 'prompting' GPT-3 with explanations affects its performance on certain reasoning tasks, which is relevant to understanding how different types of prompts influence large language models. However, it primarily focuses on the reliability of explanations produced by GPT-3 and their use in validating predictions post-hoc, which is one aspect of prompt engineering. The study does not directly address 'hard prefix prompts' or a comprehensive systematic review of them. Therefore, while not fully aligned, it does contribute to the broader topic of prompt engineering by discussing the impact of explanatory prompts." -retrieving visual facts for few-shot visual question answering,gpt-4-1106-preview,6,"The abstract describes a research study where a language model is prompted with facts retrieved from an image, to improve the performance of few-shot visual question answering systems. While it does not directly address 'hard prefix prompts' as in the study of prompts in the context of natural language processing, it does involve the process of selecting specific information (facts from an image) to inform the prompting process for a language model. Thus, it demonstrates relevance to prompt engineering by showing how tailored information can be used to elicit better responses from a model. 
However, because it focuses primarily on image-based data and facts rather than text-based prompting, it is not fully centered on 'prompt engineering' as typically understood within NLP, hence the mid-range rating." -"machine translation with large language models: prompting, few-shot learning, and fine-tuning with qlora",gpt-4-1106-preview,4,"The abstract discusses machine translation using large language models and evaluates different methodologies including zero-shot prompting, which is closely related to prompt engineering. However, the focus on QLoRA fine-tuning indicates a greater emphasis on the fine-tuning process rather than on prompt engineering itself. The relevance is present but not central to the topic of prompt engineering, therefore a moderate rating reflects the connection without overstating its focus." -short answer grading using one-shot prompting and text similarity scoring model,gpt-4-1106-preview,5,"The relevance of the study to prompt engineering is moderate. The study involves the use of a large language model for one-shot prompting, which is relevant to the broader field of prompt engineering as it relies on effectively prompting a language model to perform a task—in this case, grading short answers. However, the study specifically focuses on an application of language models for automated grading rather than the systematic review of hard prefix prompts. The relevance is not direct but tangentially related due to the use of prompting techniques within the ASAG model." -metaprompting: learning to learn better prompts,gpt-4-1106-preview,6,"The abstract describes research on prompting methods in natural language processing, specifically focusing on moving from 'hard prompts' to 'soft prompts' and proposing a new method called MetaPrompting that utilizes meta-learning for better prompt initialization. Although the study is highly relevant to the broader topic of prompt engineering, the specific term 'hard prefix prompts' is not the main focus of this abstract. Instead, the research emphasizes soft prompting and the improvement of prompt initialization. Hence, the relevance to 'hard prefix prompts' is indirect, as the study seems to address the transition from hard to soft prompts and the advancement of soft prompt techniques." -learning to paraphrase sentences to different complexity levels,gpt-4-1106-preview,4,"While the study presented in the abstract does touch upon prompting strategies, which are part of prompt engineering, its focus seems to be more on the creation and use of datasets for sentence simplification, complexification, and paraphrasing. Prompt engineering generally refers to the design, testing, and optimization of prompts to improve performance of language models. The abstract indicates that the research includes experimentation on prompting strategies, which is relevant to prompt engineering; however, the main emphasis appears to be on dataset development and performance benchmarks rather than the intricate details of prompt engineering itself. Therefore, the relevance to prompt engineering study is moderate." -application of cognitive rehabilitation theory to the development of smart prompting technologies,gpt-4-1106-preview,4,"While the study addresses the use of prompting technologies, which is a form of human-computer interaction, it primarily focuses on cognitive rehabilitation and assistive technologies for older adults with cognitive impairments. 
The relevance to 'prompt engineering' in the context of hard prefix prompts and systematic review is tangential. The study could be peripherally related to prompt engineering in the way it seeks to optimize the design of prompts for a specific application (assistive technology), but it does not directly study or review the more general field of prompt engineering, especially as it might relate to conversational AI, machine learning or data input systems." -application of cognitive rehabilitation theory to the development of smart prompting technologies,gpt-4-1106-preview,4,"The document appears to discuss the application of cognitive rehabilitation theory to the development of smart prompting technologies for assisting older adults with cognitive impairments. While it does touch upon the design and effectiveness of prompts (which is indirectly related to prompt engineering), the focus is more on the application of CRT in the development of assistive technologies rather than on a comprehensive systematic review of hard prefix prompts or on the specifics of engineering prompt systems. Thus, the relevance to prompt engineering study, particularly in the context of a comprehensive systematic review on hard prefix prompts, is moderately low, warranting a rating of 4." -task effects on linguistic complexity and accuracy: a large-scale learner corpus analysis employing natural language processing techniques,gpt-4-1106-preview,6,"While the title 'task effects on linguistic complexity and accuracy: a large-scale learner corpus analysis employing natural language processing techniques' does not specifically mention 'prompt engineering' or 'hard prefix prompts', the abstract indicates relevance by discussing the influence of tasks or prompts on linguistic performance in second language acquisition. Prompt engineering is crucial in designing effective tasks that can elicit the desired complexity and accuracy in language learning, which is pertinent to the study at hand. However, since the focus is more broadly on task effects rather than the specifics of engineering prompts, especially 'hard prefix prompts', the relevance is moderate." -automatic code summarization via chatgpt: how far are we?,gpt-4-1106-preview,6,"The abstract discusses evaluating ChatGPT's performance on code summarization tasks using specific prompts, which can be considered a form of 'prompt engineering.' Prompt engineering involves crafting prompts to guide a model towards specific desired outputs or behaviors. While the abstract does not focus exclusively on 'hard prefix prompts,' it does entail exploring appropriate prompts to improve ChatGPT's performance. Therefore, it touches upon aspects of prompt engineering which are relevant to the study of how prompts affect an LLM's output, even though it isn't focused specifically on the systematic review of hard prefix prompts as a topic." -"augmented behavioral annotation tools, with application to multimodal datasets and models: a systematic review",gpt-4-1106-preview,6,"The systematic review discusses the evolution of annotation tools, which are a fundamental part of creating datasets for machine learning, and mentions the increasing emphasis on prompt engineering in the context of training sophisticated multimodal datasets. While the main focus of the paper is on annotation methods and not specifically on hard prefix prompts, the implications for prompt engineering in the context of adding qualitative fine-tuning to models is relevant. 
This indicates a moderate level of relevance to prompt engineering studies, especially in the context of how these annotation tools may impact the future of prompt engineering as part of machine learning model development." -automatic essay scoring method based on multi-scale features,gpt-4-1106-preview,4,"The study discusses a method for automated essay scoring (AES) that integrates Sentence-BERT for sentence vectorization, deep neural networks, and shallow linguistic features, which includes prompt-related features. Although prompt-related features are mentioned, the focus is on scoring essays rather than engineering prompts which suggests a tangential connection to prompt engineering study. The method addresses the extraction and integration of features in AES, which is peripherally related to understanding prompts in the context of their relevance to essays but does not constitute a comprehensive systematic review on hard prefix prompts. Therefore, the relevance to prompt engineering study is moderate." -embracing ai for better quality engineering,gpt-4-1106-preview,5,"The provided text briefly mentions 'prompt engineering for testing use cases', indicating that prompt engineering is indeed part of the study in the context of quality engineering with AI. However, the focus seems to be on a broader application of AI in quality engineering and does not provide specific details on hard prefix prompts or a comprehensive systematic review on such prompts in engineering study. Therefore, the relevance is moderate, as it touches on the subject but does not delve deeply into it." -telehealth intensive care unit nurse surveillance of sepsis,gpt-4-1106-preview,6,"The article is somewhat relevant to prompt engineering study, as it involves the development of a 'sepsis prompt' that integrates usability and human factors engineering standards. Although the focus is on the medical application of prompts, rather than on the hard prefix prompts often used in machine learning or computational contexts, the principles of design, usability testing, and alert optimization could provide insights into prompt engineering methodologies. The evaluation of sensory processing, cognitive processing, and user satisfaction has parallels in the design of effective prompts in other fields. However, the specific application to a telehealth ICU scenario and sepsis surveillance limits the direct applicability to general prompt engineering studies." -"artificial intelligence in engineering and society: blue skies, black holes, and the job of requirements engineers (keynote)",gpt-4-1106-preview,6,"The abstract provides a comprehensive overview of artificial intelligence's impact on engineering and society, touching briefly on the use of large language models to address requirements engineering problems which may involve prompt engineering to some extent. However, the focus on prompt engineering, particularly in the context of 'hard prefix prompts,' is not explicit or central to the abstract. It mentions the potential for using prompts to check requirements completeness and to generate models, suggesting some relevance to the field of prompt engineering. The relevance rating is above average because prompt engineering is definitely a subset of the topics covered, but since the abstract does not focus on the systematic review of hard prefix prompts specifically, it does not score higher." 
-smart-llm: smart multi-agent robot task planning using large language models,gpt-4-1106-preview,6,"The study mentioned revolves around the use of Large Language Models (LLMs) for converting high-level instructions into task plans for multi-robot operations, which includes the use of programmatic LLM prompts within the few-shot prompting paradigm. This suggests a relevance to prompt engineering as it involves designing and using prompts to achieve specific outcomes with LLMs. However, the focus is more on robotics and task planning rather than solely on the study of prompt engineering techniques, hence the rating is a medium 6 out of 10 for relevance to prompt engineering studies specifically focused on hard prefix prompts."
-prompting large language models with speech recognition abilities,gpt-4-1106-preview,4,"The described study focuses on extending the capabilities of large language models to perform automatic speech recognition by integrating an audio encoder. It does not primarily concentrate on the study or application of hard prefix prompts in the context of prompt engineering. However, because prompt engineering can involve methods for effectively instructing or incorporating additional modalities (like audio) into language models, this paper indirectly relates to the broader field of prompt engineering. The relevance is not direct as it doesn't address hard prefix prompts specifically, but the insights from such a study could potentially influence prompt engineering strategies for multimodal models."
-satisfiability-aided language models using declarative prompting,gpt-4-1106-preview,6,"The abstract details a novel approach to improve reasoning capabilities of large language models (LLMs) by using satisfiability-aided language modeling (SatLM). Although it does not specifically mention 'hard prefix prompts' or 'prompt engineering,' the integration of an automated theorem prover to enhance the model's problem-solving abilities indirectly relates to the broader field of prompt engineering, where devising the right prompts to elicit desired outcomes from language models is crucial. The approach of generating a declarative task specification could be seen as part of the prompt engineering process, since it involves guiding the LLM to produce useful outputs for theorem proving. However, the lack of explicit focus on prompt engineering techniques limits the relevance to a comprehensive systematic review on hard prefix prompts, thus warranting a middling score."
-query expansion by prompting large language models,gpt-4-1106-preview,6,"The abstract describes using various prompts, including Chain-of-Thought, in the context of query expansion leveraging Large Language Models (LLMs). The relevance to prompt engineering is clear since it specifically mentions the study of different types of prompts to optimize the performance of LLMs in a search-related task. However, it does not directly address 'hard prefix prompts,' indicating a comprehensive systematic review on a subset of prompt engineering but not covering the full scope that might be suggested by the prompt 'a comprehensive systematic review on hard prefix prompts.' Therefore, while it is relevant due to its focus on prompt types and their effect on LLMs, it's not exactly aligned with the outlined study of hard prefix prompts."
-how is chatgpt's behavior changing over time?,gpt-4-1106-preview,4,"The provided abstract and TLDR focus on the changes in the behavior of large language models (LLMs) like GPT-3.5 and GPT-4 over time across various tasks. While it is not directly related to a 'systematic review on hard prefix prompts' in prompt engineering, the study's insights into the performance variability and amenability to different prompting techniques (like chain-of-thought prompting) have indirect relevance to prompt engineering. Knowing how model performance can change over time is valuable for designing and updating prompts to maintain or improve LLMs' effectiveness. However, the focus is not specifically on prompt engineering with hard prefixes, which would make the relevance partial and thus results in a moderate rating."
-gpt-ner: named entity recognition via large language models,gpt-4-1106-preview,5,"The relevance to 'prompt engineering study' is moderate. While the abstract discusses a method to adapt large language models (LLMs) for named entity recognition (NER) by transforming it into a text generation task, which implicitly involves engineering prompts (special tokens @@##) for entity extraction, the main focus is on overcoming the shortcomings of LLMs for NER tasks and not specifically on the study of prompt engineering as a field. The self-verification strategy mentioned does relate to the usage of prompts to verify generated content, which is relevant, but the paper does not seem to be centered on prompt engineering as a comprehensive topic."
-adaptive test generation using a large language model,gpt-4-1106-preview,5,"The relevance to prompt engineering study is moderate. While the abstract discusses the use of a Large Language Model (Codex) for automated test generation, which involves prompting the model with certain inputs to produce desired outputs (tests in this case), the study is focused on practical application rather than a systematic review of prompt engineering techniques or the study of 'hard prefix prompts' specifically. The process involves an adaptive prompting mechanism to improve test generation, which is somewhat related to prompt engineering studies. Therefore, the relevance is rated a 5, as it addresses some elements of prompt design but does not specifically target a comprehensive review or study of prompt engineering methodologies."
-on the risk of misinformation pollution with large language models,gpt-4-1106-preview,4,"While the paper addresses the use of large language models for generating misinformation and explores defensive strategies such as prompting, it is not specifically focused on prompt engineering study with regards to hard prefix prompts. The mention of prompting as a defense strategy does lend some relevance, but because the primary focus is on misinformation and not the systematic review of hard prefix prompts in prompt engineering, the relevance to the specific prompt engineering study is moderate to low."
-codehelp: using large language models with guardrails for scalable support in programming classes,gpt-4-1106-preview,6,"Although the study does not focus on 'hard prefix prompts' specifically within the context of prompt engineering, it is related to the field in a broader sense. It examines the use of prompting strategies in the context of a tool called CodeHelp that utilizes large language models to assist students. The relevance rating is above average because understanding how prompts are engineered to generate non-solution revealing outputs in an educational setting can contribute valuable insights to prompt engineering research, especially in terms of designing controllable and ethical AI. However, as the paper's primary focus is on the deployment and effects of an educational tool rather than the systematic review of prompt engineering techniques, it is not rated higher."
-reviewergpt? an exploratory study on using large language models for paper reviewing,gpt-4-1106-preview,4,"While the study explores the use of large language models (LLMs) in the context of scientific paper reviewing, which requires sophisticated prompting strategies, it does not specifically focus on 'hard prefix prompts' as the main subject of investigation. The relevance to prompt engineering is present as the research touches upon how different prompts can lead to different performance outcomes by the LLM (e.g., prompting with specific questions versus general review requests). However, since the core study does not concentrate on the engineering of prompts and their systematic review but rather on the application of LLMs in a specific task of paper reviewing, the paper is moderately relevant rather than highly relevant."
-graphologue: exploring large language model responses with interactive diagrams,gpt-4-1106-preview,6,"The study is relevant to prompt engineering to a moderate degree. It does not directly deal with hard prefix prompts but explores the broader area of improving interactions with Large Language Models (LLMs) using novel prompting strategies and interface designs. By introducing an interactive system, Graphologue, which converts LLM responses into diagrams, it touches upon enhancing the efficacy of prompts and the ways in which users can solicit and handle information from an LLM. The connection to prompt engineering lies in the fact that extracting entities and relationships for diagrams requires careful prompt design to ensure that the LLM provides structured responses suitable for graphical representation. Although the focus is not on 'hard prefix prompts', the study does contribute to the field of prompt engineering by demonstrating alternative ways to optimize user interactions with LLMs."
-visualizing linguistic diversity of text datasets synthesized by large language models,gpt-4-1106-preview,4,"The abstract presented describes a tool, LinguisticLens, which is not directly related to the study of 'hard prefix prompts' in prompt engineering. However, the tool's function of analyzing syntactic diversity of LLM-generated datasets can have tangential relevance to understanding how different prompting methods, including hard prefix prompts, might influence the generative outcomes of LLMs. Therefore, while the primary focus of the abstract is on visualization and analysis of textual diversity rather than on prompt engineering, the insights from such a tool could potentially inform prompt engineering studies to some extent, which warrants a moderate relevance rating."
-feasibility of using the privacy-preserving large language model vicuna for labeling radiology reports.,gpt-4-1106-preview,6,"The provided abstract discusses the application of a large language model (LLM), Vicuna, for labeling radiography reports in a manner that preserves patient privacy. The relevance to prompt engineering lies in the mention of 'using a single-step or multistep prompting strategy' which indicates that prompts were designed and tested to achieve the desired outcome. The study evaluates the efficacy of these prompting strategies against established benchmarks. However, the study is not focused on prompt engineering itself, but rather on the application of prompts in a specific domain (medical report analysis). This means that while prompt engineering is a component of the study, the focus is not on the systematic review of 'hard prefix prompts,' but on the feasibility and efficacy of running a privacy-preserving LLM locally for practical applications in healthcare. Therefore, the relevance is moderate, as insights into prompt engineering can be gleaned but are not the central focus of the study."
-forward-backward reasoning in large language models for mathematical verification,gpt-4-1106-preview,6,"The study presents an innovative approach to using large language models for mathematical verification through FOBAR, which involves prompt engineering to some extent by integrating backward reasoning into the prompts to verify answers. While it doesn't directly address 'hard prefix prompts' in prompt engineering, the use of CoT prompting and the integration of answer verification templates are related to the techniques used in prompt engineering to improve AI performance. It shows the importance of prompt design in eliciting correct outputs from models. The relevance score isn't higher because it doesn't specifically discuss or review hard prefix prompts, which is the focus of the prompt engineering study mentioned."
-"camel: communicative agents for ""mind"" exploration of large scale language model society",gpt-4-1106-preview,6,"The abstract indicates that the paper is related to 'inception prompting' which is a form of prompt engineering as it involves guiding language models. However, the main focus seems to be on the cooperative behavior of communicative agents rather than hard prefix prompts. The relevance is moderate because while the paper touches on prompt engineering, it does not appear to conduct a 'comprehensive systematic review on hard prefix prompts' as specified in the original prompt."
-meta-learning the difference: preparing large language models for efficient adaptation,gpt-4-1106-preview,6,"The abstract discusses ways to adapt large pretrained language models to be more efficient in tasks such as dialogue completion, summarization, and multi-domain language modeling, focusing on model weight differences and structural changes without extensive finetuning. This is relevant to prompt engineering because it touches on the efficiency of adapting models to specific tasks, which is a significant aspect of prompt engineering. However, the text does not directly address 'hard prefix prompts' or their systematic review, thus it is moderately relevant but not a perfect match for the topic of prompt engineering study."
-explainability for large language models: a survey,gpt-4-1106-preview,5,"The paper's focus on explainability for large language models (LLMs) is indirectly relevant to prompt engineering study because understanding how LLMs work can inform the design of more effective prompts. However, the paper does not directly address prompt engineering or specifically hard prefix prompts. The relevance is moderate since insights into explainability might overlap with some aspects of prompt engineering, such as understanding model behavior and improving performance through better prompts, without being the central focus."
-adapting large language models via reading comprehension,gpt-4-1106-preview,6,"The study explores a novel method of training large language models using domain-specific reading comprehension texts, which could indirectly relate to prompt engineering by enhancing the model's ability to understand and respond to prompts more effectively in different domains. However, the study does not directly address the systematic review of 'hard prefix prompts' which would be the focus of an engineering study on prompt format and structure. Thus, the relevance is moderate as the improvements in domain-specific prompting could benefit from such a training approach, but it is not centrally focused on prompt engineering itself."
-react: synergizing reasoning and acting in language models,gpt-4-1106-preview,6,"The paper discusses an integrated approach where reasoning and acting are combined within LLMs, which is related to prompt engineering in the sense that it explores how to effectively prompt models to engage in both cognitive processes. Although it doesn't directly address 'hard prefix prompts,' it does deal with the broader topic of prompting LLMs to improve performance, suggesting some relevance. However, its focus on the 'ReAct' system's development and evaluation on specific tasks may not provide in-depth insights into the particular strategies used for engineering prompts, hence the rating isn't higher."
-milan: masked image pretraining on language assisted representation,gpt-4-1106-preview,4,"The abstract describes an approach to masked image pretraining using language-assisted representation, rather than directly involving 'hard prefix prompts' in the conventional sense associated with language models or text-based prompting. However, the mention of 'prompting decoder architecture' suggests an involvement of prompting mechanisms, which could be related to the topic of prompt engineering in a broader sense. Hence, it receives a moderate relevance rating due to the possible connection through the architecture design that involves prompts in the image-language pretraining context, but it does not directly focus on prompt engineering studies for text-based models or the specific concept of 'hard prefix prompts'."
-are hard examples also harder to explain? a study with human and model-generated explanations,gpt-4-1106-preview,6,"The study touches on aspects of prompting when it investigates the size and hardness of the test samples and their impact on the quality of explanations generated by both humans and GPT-3. This is indirectly related to prompt engineering, as the quality of outputs generated by LLMs may depend on the prompt's difficulty level, which can inform prompt engineering strategies. However, the study's primary focus is on the explainability and the comparison of human and model-generated explanations, not specifically on engineering prompts to improve LLM performance."
-a comprehensive survey on pretrained foundation models: a history from bert to chatgpt,gpt-4-1106-preview,5,"The provided abstract and TLDR are related to Pretrained Foundation Models (PFMs) and cover a wide range of aspects including their history, applications, and challenges across different data modalities. While prompt engineering is not directly mentioned, the study's focus on 'zero shot or few shot prompting' used by models like ChatGPT suggests an indirect relation to the topic. Thus, the paper may contain insights relevant for understanding the broader context of prompt engineering, particularly in how PFMs accommodate prompt-based interactions. However, since prompt engineering is a more specific discipline focused on the design and optimization of prompts to effectively leverage models like GPT or BERT, and the summary does not explicitly address hard prefix prompts or prompt engineering techniques, it receives a moderate rating of relevance."
-reading comprehension quiz generation using generative pre-trained transformers,gpt-4-1106-preview,4,"The study is related to the application of AI in the educational domain, specifically using a pre-trained transformer model (GPT-3) for quiz generation which is a type of prompt engineering. However, it does not specifically focus on 'hard prefix prompts' but rather on the general capability of transformer models to generate educational content. The relevance to prompt engineering is present since quiz generation can be considered a form of prompt design, yet it is not focused on the systematic review of prompts or their optimization, which would make it highly relevant to prompt engineering studies."
-codegen: an open large language model for code with multi-turn program synthesis,gpt-4-1106-preview,6,"The abstract describes research on program synthesis using large language models, particularly focusing on a new model called CODEGEN. The relevance to prompt engineering is moderate because it touches on the use of prompts specifying subproblems in a multi-step paradigm for program synthesis. This suggests that different prompt structures (such as multi-turn prompts) can significantly affect the performance of code generation tasks, which is a part of the broader area of prompt engineering. However, the abstract does not specifically discuss 'hard prefix prompts' or provide a systematic review of prompt engineering, so it is only partially relevant to the specified topic of a comprehensive systematic review on hard prefix prompts."
-in-context autoencoder for context compression in a large language model,gpt-4-1106-preview,6,"The abstract describes a method for compressing long contexts into compact memory slots, which, although not directly related to 'hard prefix prompts', does pertain to the broader field of prompt engineering by allowing for more efficient handling of prompts in large language models. This efficiency can impact how prompts are created, managed, and used in large language models. The connection to prompt engineering lies in the fine-tuning on instruction data, which likely would involve crafting prompts to generate desirable responses. However, since the primary focus appears to be context compression rather than prompt engineering techniques or the study of 'hard prefix prompts' specifically, the rating is not higher."
-decomposed soft prompt guided fusion enhancing for compositional zero-shot learning,gpt-4-1106-preview,4,"The abstract presents a study on a method for compositional zero-shot learning (CZSL) using a framework called Decomposed Fusion with Soft Prompt (DFSP). While it is related to engineering prompts in the context of vision-language models, which indeed falls under the broader category of prompt engineering, it isn't directly focused on hard prefix prompts as mentioned in the initial request. The paper's relevance is therefore not exact but tangentially related since it involves the construction of vector combinations of learnable soft prompts, which can be considered a part of prompt engineering. However, the method described diverges from the original topic of 'hard prefix prompts,' which typically implies a non-modifiable text input for models, as opposed to the learnable prompts discussed here."
-prompt-guided zero-shot anomaly action recognition using pretrained deep skeleton features,gpt-4-1106-preview,4,"The study is somewhat related to prompt engineering as it incorporates user prompt-guided zero-shot learning which hints at the use of prompts to guide the anomaly detection model. However, the focus is primarily on skeleton-based anomaly detection and the usage of prompts seems to be a part of the overall anomaly score calculation rather than the core study of different prompt engineering techniques or hard prefix prompts. Therefore, the relevance is moderate."
-socratic models: composing zero-shot multimodal reasoning with language,gpt-4-1106-preview,5,"The abstract discusses the use of Socratic Models (SMs) for zero-shot multimodal reasoning which relates to the field of prompt engineering in that it involves effective prompting to enable communication and information exchange between models. While it's not explicitly focused on 'hard prefix prompts' as mentioned in the study prompt, the concept of multimodal-informed prompting falls within the broader scope of prompt engineering. Therefore, the relevance to prompt engineering is moderate but not directly aligned with the specific topic of hard prefix prompts."
-partslip: low-shot part segmentation for 3d point clouds via pretrained image-language models,gpt-4-1106-preview,5,"While the abstract indicates the use of a pretrained image-language model, GLIP, in the context of 3D part segmentation leveraging multi-view priors and few-shot prompt tuning, it does not directly address prompt engineering study or the investigation of hard prefix prompts. However, the mention of 'few-shot prompt tuning' suggests a relevant connection to the disciplines of prompt engineering and the model's ability to interpret and process language-based inputs, which may overlap with the interests of those studying prompt design and effectiveness. Thus, the relevance is moderate as it sits at the intersection of neural language models and their application in visual tasks, without focusing explicitly on the study of prompt engineering."
-few-shot anaphora resolution in scientific protocols via mixtures of in-context experts,gpt-4-1106-preview,6,"The study presents MICE, a method for few-shot anaphora resolution using in-context learning, which is relevant to prompt engineering in that it involves conditioning language models on specific inputs for desired outputs. The focus on in-context learning and efficiency in handling long sequences could inform strategies in prompt engineering, especially for complex tasks like anaphora resolution. However, the study is not directly focused on designing or optimizing prompts (i.e., 'hard prefix prompts'), but rather on a specific application of in-context learning. As such, the relevance is moderate but not high."
-what language model architecture and pretraining objective work best for zero-shot generalization?,gpt-4-1106-preview,5,"While the abstract provided does not directly address prompt engineering or the study of hard prefix prompts specifically, it discusses related aspects of language model performance such as zero-shot generalization, model architectures, and pretraining objectives. Understanding how different architectures and objectives contribute to a model's ability to understand and process prompts is relevant to prompt engineering. However, since the focus is not on prompt engineering itself or on systematic reviews of prompts, the relevance is moderate."
-a topic-based prompt learning method for zero-shot stance detection,gpt-4-1106-preview,4,"While the study involves the use of prompts to determine the stance detection ability, it is focused more on the classification and processing of language with respect to stance detection, rather than the creation or systematic review of hard prefix prompts in the context of prompt engineering. Since prompt engineering typically refers to methods for improving language model responses, and this paper seems to touch on related concepts without being squarely focused on prompt engineering, it receives a moderate rating."
-go-tuning: improving zero-shot learning abilities of smaller language models,gpt-4-1106-preview,4,"The abstract discusses a method to improve zero-shot learning of smaller language models, which indirectly pertains to prompt engineering, as it may influence the way prompts are designed to interact with these models. However, the focus is on the self-supervised learning approach and the update of language models rather than the systematic study or design of hard prefix prompts specifically."
-zero-shot recommendation as language modeling,gpt-4-1106-preview,6,"The abstract indicates a recommendation system that operates using pre-trained language models and unstructured text corpora, which is tangentially related to prompt engineering as it involves using language models in an innovative application. However, the focus on recommendation systems and matrix factorization suggests that the study does not directly address the creation or manipulation of prompts (i.e., the 'hard prefix prompts' mentioned in the original prompt). Therefore, the relevance is moderate because while it deals with language models, it may not directly contribute to our understanding of prompt engineering in the context of a comprehensive systematic review."
-sam.md: zero-shot medical image segmentation capabilities of the segment anything model,gpt-4-1106-preview,5,"The title and abstract provided discuss a model that utilizes prompting (SAM) for image segmentation tasks, which is relevant to the concept of prompt engineering as it involves the use of prompts to direct the behavior of AI models. However, the focus is mainly on the zero-shot learning capabilities of SAM in medical image segmentation, rather than a systematic review of 'hard prefix prompts' in a broader context. The relevance to prompt engineering is moderate because it showcases an application of prompts in a specialized domain but does not address prompt engineering study in a comprehensive manner."
-enabling calibration in the zero-shot inference of large vision-language models,gpt-4-1106-preview,4,"The abstract presents a study focused on the calibration of vision-language models, particularly CLIP, in the context of zero-shot inference. While the research addresses aspects such as prompt choice, its core contribution lies in proposing a modified temperature scaling method for calibrating the models rather than in-depth analysis or methodology development for 'prompt engineering' itself. The mention of prompt as one of the variables does increase the relevance to 'prompt engineering,' yet since it is not the main focus of the study, the relevance is moderate."
-vision-language models are zero-shot reward models for reinforcement learning,gpt-4-1106-preview,6,"The abstract describes the use of vision-language models (VLMs) as zero-shot reward models in reinforcement learning, which includes a component of prompt engineering by providing text prompts to specify tasks. Although the main focus is on reinforcement learning and the efficacy of VLMs in this context, the mention of using 'minimal prompt engineering' indicates that there is a relevance to the study of crafting prompts. However, the primary emphasis is not on the systematic review of 'hard prefix prompts' or the intricacies of prompt engineering methods, which would be required for a higher relevance score."
-zero-shot text classification via self-supervised tuning,gpt-4-1106-preview,6,"The abstract discusses a novel approach to zero-shot text classification using self-supervised learning, which includes an alternative prompting method where the model learns to predict the first sentence of a paragraph. This is relevant to prompt engineering as it touches on the use of prompts to improve language model performance without relying on large-scale annotated data. However, the focus is more on the self-supervised learning aspect and the specific learning objective, rather than a deep dive into prompt engineering or hard prefix prompts specifically. Therefore, the relevance is moderate."
-harnessing the zero-shot power of instruction-tuned large language model in end-to-end speech recognition,gpt-4-1106-preview,6,"The abstract deals with the utilization of an instruction-tuned large language model within the context of ASR, which relates to prompt engineering in the sense that precise instructions are used to guide the LLM. However, the focus is more on the application of LLMs for improving ASR rather than on the study or optimization of the prompts themselves (i.e., hard prefix prompts or prompt engineering techniques). The relevance is moderate because it showcases an implementation of prompt-instructed LLMs, but it does not directly address a systematic review or study on prompt engineering."
-interaction-aware prompting for zero-shot spatio-temporal action detection,gpt-4-1106-preview,6,"The study describes the use of prompting as a mechanism to obtain more appropriate text features for zero-shot spatio-temporal action detection, which falls under the broader scope of prompt engineering. However, the context is very specialized and focuses more on the application to a specific domain (video processing and action detection) rather than the study of hard prefix prompts in general. The relevance is moderate because it deals with an application of prompts in a machine learning system, but it does not directly address a 'comprehensive systematic review on hard prefix prompts' as the original query specifies."
-zero-textcap: zero-shot framework for text-based image captioning,gpt-4-1106-preview,4,"The abstract discusses the Zero-TextCap model for text-based image captioning. It touches on prompt engineering indirectly by mentioning the generation of candidate sentences from the prompt 'Image of' and the refinement process for improving caption quality and diversity. However, the main focus is on image captioning and OCR technology, rather than prompt engineering. The relevance to prompt engineering study is moderate because it deals with a specific use of prompts within a different field of study, i.e., text-based image captioning. The study is more relevant to the fields of computer vision and natural language processing than to the study of prompt engineering in general."
-are soft prompts good zero-shot learners for speech recognition?,gpt-4-1106-preview,6,"The abstract discusses 'soft prompts' in the context of automatic speech recognition and zero-shot learning, which is related to the field of prompt engineering, as it involves the manipulation of prompts to enhance model performance. However, the prompt specifically asks about 'hard prefix prompts,' and this study focuses on 'soft prompts,' not 'hard' ones. Therefore, the study is relevant to the broader field of prompt engineering but not directly relevant to the specified subset of 'hard prefix prompts.' The relevance rating acknowledges the connection to prompt engineering while also recognizing the divergence from the specified topic of 'hard prefix prompts'."
-blended-nerf: zero-shot object generation and blending in existing neural radiance fields,gpt-4-1106-preview,5,"The presented work, Blended-NeRF, involves some aspects of prompt engineering, such as the use of text prompts to guide the editing of 3D scenes. This suggests a connection to natural language processing and the translation of text instructions to visual modifications. However, the focus seems to be more on the application of 3D neural radiance fields and the integration of new objects in existing scenes rather than on the detailed study of prompt engineering itself. Therefore, the relevance to prompt engineering as a primary study objective appears to be moderate."
-language models as zero-shot trajectory generators,gpt-4-1106-preview,4,"While the abstract discusses the usage of Large Language Models for trajectory generation in robotics, which would require careful crafting of prompts to interact with the model effectively, the focus on 'hard prefix prompts' in the context of a comprehensive systematic review is not directly addressed. Although the principles of prompt engineering could be applied to formulate the inputs for GPT-4 in this study, the abstract does not specifically mention or concentrate on 'hard prefix prompts', nor does it suggest a systematic review of such prompts. Therefore, the relevance is moderate as the concept of prompting is involved, but not specific to the requested area of study."
-zeroprompt: streaming acoustic encoders are zero-shot masked lms,gpt-4-1106-preview,5,"The study presents a technique called ZeroPrompt that is applied to streaming acoustic encoders, which is tangentially relevant to 'prompt engineering' since it involves what can be described as a prompting strategy. However, the core of the study focuses on streaming ASR (Automatic Speech Recognition) models and improving their latency, which is not directly related to the systematic review of 'hard prefix prompts' in the traditional sense of prompt engineering for language models. Therefore, the relevance is moderate as it deals with prompts in a different context than what 'prompt engineering study' might typically imply, which is often associated with text-based language model prompting."
-zero-shot text-driven physically interpretable face editing,gpt-4-1106-preview,4,"The paper discusses text-driven face editing and involves the use of text prompts to guide the image editing process. Its relevance to prompt engineering is in the use of the CLIP model which involves understanding and correlating text descriptions to visual content. However, the primary focus of the paper seems to be on face editing using a novel method rather than on the study or improvement of prompt engineering techniques themselves. Therefore, it has some relevance due to the application of text prompts, but it is not a direct study on prompt engineering."
-multi-view vision-prompt fusion network: can 2d pre-trained model boost 3d point cloud data-scarce learning?,gpt-4-1106-preview,4,"The abstract discusses the fusion of 2D pre-trained models with 3D point cloud data through a novel network (MvNet) for few-shot 3D classification, which includes aspects of prompt learning inspired by NLP. Although the application is primarily for 3D classification in computer vision and not for prompt engineering in a textual context, the inspiration from prompt learning and the mention of using prompts to describe prior knowledge for image models suggests some relevance to the topic of prompt engineering study. However, since the primary focus is not on textual or linguistic prompts but on prompts that bridge 3D and 2D model data, the relevance is moderate but not high."
-b-pet: the pet model with parameter-efficient learning,gpt-4-1106-preview,4,"The abstract provided discusses the B-pet model, which focuses on few-shot learning (FSL), parameter efficiency, and storage reductions for model training and deployment. This involves the concept of 'prompt learning' as a component of the PET model, indicating some relevance to prompt engineering. However, the main content is centered on fine-tuning efficiency and parameter freezing, not directly on the systematic study or development of prompting methods. Consequently, relevance is limited to the aspect of 'prompt learning' in the context of the broader FSL and model efficiency discussions."
-grass: unified generation model for speech-to-semantic tasks,gpt-4-1106-preview,4,"The paper is relevant to prompt engineering to some extent as it involves generating target text conditioned on a task-related prompt for audio data. Although it does focus on utilizing prompts for refining the production of target text, which is an aspect of prompt engineering, it specifically addresses speech-to-semantic tasks rather than hard prefix prompts within a text-input domain. Therefore, while it has some relevance due to the usage of prompts in the model's training and task execution, it is not a direct study on hard prefix prompts, reducing its relevance to the specific area of prompt engineering under review."
-ontotype: ontology-guided zero-shot fine-grained entity typing with weak supervision from pre-trained language models,gpt-4-1106-preview,6,"The paper discusses a method which leverages pre-trained language models (PLMs) for fine-grained entity typing (FET) and specifically mentions how it ensembles multiple PLM prompting results, suggesting a novel use of prompts in model processing. While the main focus of the study is on FET and it introduces OntoType, a zero-shot ontology-guided FET method, the paper still has relevance to prompt engineering since it deals with generating and refining prompts for PLMs to improve typing resolution. The significance of prompt engineering is not the central theme of the paper, but prompts play a significant role in the described methodology, which aligns with how prompts can be engineered to work with ontological structures. Therefore, the paper is somewhat relevant to prompt engineering but not directly focused on it."
-lt at semeval-2023 task 1: effective zero-shot visual word sense disambiguation approaches using external knowledge sources,gpt-4-1106-preview,6,"The paper abstract is partially relevant to prompt engineering study as it discusses different textual prompting strategies as they relate to multi-modal machine learning and zero-shot capabilities. However, the main focus seems to be on Visual Word Sense Disambiguation (VWSD) using pre-trained visiolinguistic models and external knowledge sources, rather than a direct emphasis on hard prefix prompts or a comprehensive analysis of prompt engineering. The relevance rating of 6 reflects that prompt engineering is a supporting concept in the study rather than the primary focus."
-odor descriptor understanding through prompting,gpt-4-1106-preview,6,"The study addresses a niche aspect of prompt engineering by focusing on generating word embeddings specific to olfactory descriptors, which implies a form of prompt optimization for a specialized application. The relevance to prompt engineering is moderate because it deals with improving the interaction between an NLP model and domain-specific language, which is an important aspect of prompt engineering. However, the paper does not seem to offer a broad investigation into hard prefix prompts or their systematic review, but rather presents practical methods for a specific type of prompting to improve performance in a specialized benchmark."
-winning solution for the cvpr2023 visual anomaly and novelty detection challenge: multimodal prompting for data-centric anomaly detection,gpt-4-1106-preview,6,"The study discusses the use of multimodal prompts within the context of zero-shot anomaly segmentation, which is related to the field of prompt engineering due to the involvement of customized prompts for model adaptation. While the study may not directly address 'hard prefix prompts', the concept of utilizing expert knowledge and context to create prompts for foundation models exhibits a component of prompt-engineering techniques. This relevance is not direct, as prompt engineering typically involves text-based language prompts for natural language models as opposed to prompts for visual anomaly detection; hence, the rating is above the midpoint but not fully aligned with the focus on prompt engineering."
-a fine-grained comparison of pragmatic language understanding in humans and language models,gpt-4-1106-preview,6,"The study addresses a comparison of pragmatic language understanding in both humans and language models, which indirectly relates to prompt engineering since the effectiveness of prompts can be influenced by a model's ability to deal with pragmatics and non-literal meanings. However, the study does not focus specifically on prompt engineering or on the design, structure, or optimization of prompts ('hard prefix prompts'), therefore the relevance is not direct and merits a mid-range rating."
-knowledge-in-context: towards knowledgeable semi-parametric language models,gpt-4-1106-preview,4,"While the discussed paper presents a novel semi-parametric language model architecture that is closely related to enhancing the performance of language models, it does not directly address prompt engineering, especially with regards to 'hard prefix prompts.' The architecture indeed involves prompting in the broader sense, as it uses prompts to generate output answers, but the main focus of the study lies in knowledge integration and model efficiency, rather than on the design or study of prompts themselves. Therefore, the relevance to prompt engineering is somewhat tangential and not the central theme of the paper."
-speechx: neural codec language model as a versatile speech transformer,gpt-4-1106-preview,5,"While the abstract does discuss the use of audio-text prompts for speech generation and how SpeechX leverages task-dependent prompting for various speech tasks, it does not specifically address hard prefix prompts in the context of prompt engineering within the text generation domain, which is generally implied by prompt engineering. There is relevance to prompts and task-dependent prompting, but not directly to the study of hard prefix prompts in a systematic review sense, hence the intermediate score."
-an investigation of llms' inefficacy in understanding converse relations,gpt-4-1106-preview,5,"The abstract presents a study on how LLMs process and understand converse relations, which relates to their semantic understanding capabilities. While this does touch on the issue of understanding structured semantics and could have indirect implications for prompt engineering (e.g., designing prompts that account for the converse relations might improve LLMs' performance), the study is not directly focused on prompt engineering or the effectiveness of hard prefix prompts. Thus, the relevance is moderate as the findings might inform prompt engineering strategies indirectly, but it is not the central theme of the study."
-zero-shot generalization in dialog state tracking through generative question answering,gpt-4-1106-preview,6,"The abstract discusses the use of a generative question-answering framework with a conditional language model for improving dialog state tracking, which indirectly relates to prompt engineering in that it deals with the generation of language model queries (which can be considered as prompts) for unseen constraints and slots. The system is designed to interpret natural language queries, akin to how prompts are used to extract information from language models. However, the specific focus of the study is not on prompt engineering itself or on the systematic review of 'hard prefix prompts', but rather on the application of a generative language model to dialog systems for zero-shot adaptation. Therefore, while the study is relevant to the general field of language model applications (and thus has some relevance to prompt engineering), it does not directly address the subject of prompt engineering in relation to hard prefix prompts."
-transformers as algorithms: generalization and stability in in-context learning,gpt-4-1106-preview,6,"The study examines in-context learning and generalization in transformer models, which is relevant to prompt engineering as it relates to how these models use input prompts to infer outputs. However, the focus on 'hard prefix prompts' is not specifically addressed, meaning the study might offer insights related to prompt engineering more broadly rather than hard prefix prompts in particular. Therefore, the relevance is moderate."
-differentially private in-context learning,gpt-4-1106-preview,6,"The study touches on the deployment of large language models (LLMs) and their adaptation to new tasks, which relates to prompt engineering in the broader sense of preparing LMs for specific applications. However, the focus is primarily on maintaining privacy via Differentially Private In-context Learning (DP-ICL), and not on prompt engineering techniques such as 'hard prefix prompts'. Although prompt engineering may rely on data privacy principles when integrating private data, the abstract lacks a direct mention or analysis of 'hard prefix prompts', yielding a moderate relevance score."
-reward modeling for mitigating toxicity in transformer-based language models,gpt-4-1106-preview,4,"While the study focuses on mitigating toxicity in language models, which is related to improving AI behavior and output quality, it is tangential to the specific topic of 'prompt engineering', particularly 'hard prefix prompts'. Prompt engineering involves crafting inputs to guide AI models more effectively, whereas this study seems centered on a method (Reinforce-Detoxify) for reducing toxicity. Although related, it is not a direct study of prompt engineering techniques, thus the moderate rating reflects this indirect relevance."
-context-based narrative generation transformer (ngen-transformer),gpt-4-1106-preview,6,"The abstract indicates that the paper discusses a text generation model, the NGen-Transformer, which is relevant to natural language processing and prompt engineering to some extent. It emphasizes the context assimilation capabilities of the architecture, which aligns with the concept of prompt engineering as it involves providing context or prompts for generating text. Although the paper appears to focus more on the model's architecture for story generation rather than on the systematic study of hard prefix prompts, the use of prompts (in the form of user-defined context) and performance evaluation on a prompt-based dataset (WritingPrompts) makes it moderately relevant to prompt engineering studies."
-fp-detr: detection transformer advanced,gpt-4-1106-preview,5,"The paper 'fp-detr: detection transformer advanced' mentions the use of a concept analogous to prompts in NLP, where query positional embeddings serve as 'visual prompts' to assist in object detection. While this indicates a potential crossover concept with prompt engineering, it's specific to the visual domain rather than the textual domain typically associated with prompt engineering in NLP studies. Therefore, the relevance is moderate as it offers insight into how the idea of prompts can be applied in different contexts, but it does not address hard prefix prompts or their systematic review in NLP applications specifically."
-"tryage: real-time, intelligent routing of user prompts to large language models",gpt-4-1106-preview,4,"While the described paper, 'tryage: real-time, intelligent routing of user prompts to large language models,' indirectly relates to the field of prompt engineering by addressing optimal model selection based on input prompts, it does not explicitly focus on 'hard prefix prompts' or the systematic review of these prompts. Prompt engineering generally refers to the design of input prompts to achieve better performance or more relevant responses from language models. The paper's relevance to prompt engineering is in its ability to select the best-suited model for a given prompt, which could be a component of a larger prompt engineering strategy. However, the absence of specific focus on 'hard prefix prompts' or systematic review thereof limits the relevance score."
-tempo: prompt-based generative pre-trained transformer for time series forecasting,gpt-4-1106-preview,5,"The relevance to prompt engineering study is moderate. The described TEMPO framework does incorporate 'selection-based prompts' which indicates some element of prompt engineering. However, the core focus is on time series forecasting using generative transformers rather than the systematic review or study of hard prefix prompts in general. Therefore, the relevance is partial as it pertains to adapting prompts for time series tasks specifically rather than prompt engineering as a broader field."
-phenaki: variable length video generation from open domain textual description,gpt-4-1106-preview,4,"The abstract describes a model, Phenaki, which deals with generating videos from textual descriptions using a novel representation and learning approach. This is relevant to prompt engineering to the extent that it involves creating prompts (textual descriptions) that are used to generate content (videos). However, the focus of the study appears to be more on video synthesis and representation learning rather than on the design or optimization of the textual prompts themselves ('hard prefix prompts'). Therefore, the relevance is moderate, indicating a tangential connection to prompt engineering, particularly in how text prompts are used to generate complex media like videos, rather than a direct study on the engineering of prompts."
-can language models automate data wrangling?,gpt-4-1106-preview,5,"The content seems to address the utilization of language models for data wrangling tasks, and while it does imply a certain level of task design and user interaction with language models (which could be related to prompt engineering), the focus on data wrangling rather than prompt design specifically for eliciting desired outputs from a language model suggests that this isn't a comprehensive study on hard prefix prompts. There is potential crossover in terms of understanding how prompts work in the context of data wrangling, but it is not directly about prompt engineering."
-textdiffuser: diffusion models as text painters,gpt-4-1106-preview,4,"While the study introduces TextDiffuser, which involves generating images from text prompts and might have indirect applications in understanding and improving how models handle text prompts, the main focus is on image generation and enhancing text coherence within visual content. The mention of prompts relates more to the input for image generation rather than the study of prompt engineering itself. Therefore, the relevance to prompt engineering study is moderate as the techniques developed could be tangentially useful, but it is not the central theme of the research."
-vector representations of idioms in conversational systems,gpt-4-1106-preview,5,"The study is partially relevant to prompt engineering as it touches on how training on specific language constructs (in this case, idioms) improves the performance of conversational systems. While it does not directly address 'hard prefix prompts' or the systematic review of such prompts, understanding how idiomatic expressions are handled by NLP systems can inform prompt design strategies and might be leveraged in the creation of more sophisticated prompts. This relevance is more tangential than direct to the focus of prompt engineering, thus warranting a mid-range rating."
-generalization properties of retrieval-based models,gpt-4-1106-preview,4,"While the abstract discusses retrieval-based models and their generalization properties, which are relevant to the broader field of machine learning and could potentially be applied to prompt engineering with respect to selecting the best prompts in a retrieval-based manner, it does not directly address 'hard prefix prompts' or prompt engineering specifically. Therefore, its relevance to a systematic review on hard prefix prompts in prompt engineering studies is tangential rather than central."
-attention satisfies: a constraint-satisfaction lens on factual errors of language models,gpt-4-1106-preview,4,"The abstract provides insights into the internal mechanisms of Large Language Models (LLMs) related to factual accuracy, which is peripherally relevant to prompt engineering. Prompt engineering often involves crafting prompts to elicit accurate and reliable responses from a model. The study's focus on the attention mechanism and factual accuracy can be indirectly useful in understanding how prompts might be structured to improve the likelihood of factually correct outputs. However, the study does not directly investigate hard prefix prompts or prompt engineering techniques, therefore the rating isn't higher."
-gpt4mia: utilizing generative pre-trained transformer (gpt-3) as a plug-and-play transductive model for medical image analysis,gpt-4-1106-preview,4,"The relevance of the paper to prompt engineering is tangential rather than direct. It discusses using GPT-3 as a tool for medical image analysis, which implies a level of prompt engineering in structuring the interaction between the language model and the image analysis tasks. The paper's focus on technical treatments for efficiency and effectiveness might involve innovative prompt design strategies, which is pertinent to prompt engineering. However, because the central theme is the application of GPT-3 to medical image analysis rather than prompt engineering itself, the relevance is moderate."
-pretraining data mixtures enable narrow model selection capabilities in transformer models,gpt-4-1106-preview,5,"The study relates indirectly to prompt engineering, as it deals with the ability of transformer models to perform in-context learning and adapt to new tasks based on their pretraining data mixture. Prompt engineering typically involves designing prompts to elicit desired behaviors or responses from LLMs. The relevance lies in understanding how different pretraining data affects the model's response to prompts, which is crucial for prompt engineers. However, the study does not explicitly focus on 'hard prefix prompts' which would be more directly aligned with prompt engineering, thus the rating reflects a moderate relevance."
-plato-ad: a unified advertisement text generation framework with multi-task prompt learning,gpt-4-1106-preview,4,"The abstract discusses PLATO-Ad, a framework for online advertisement text generation that incorporates multi-task prompt learning, which is related to prompt engineering. However, it does not focus specifically on 'hard prefix prompts' or systematic reviews of them, but rather the application of prompt learning to advertisement text generation. The relevance to prompt engineering exists due to the implementation of prompts in the learning process, but because the focus is on a specific application and not on a broad study or review of prompts, the rating is not higher."
-rgb-t tracking via multi-modal mutual prompt learning,gpt-4-1106-preview,6,"The study uses the concept of 'prompt learning' in the context of computer vision, specifically for RGB-T tracking, which demonstrates an application of prompt engineering to improve the interaction between different data modalities (visible and thermal images) and enhance the model's performance. The relevance to prompt engineering is evident in the design of the 'lightweight prompter' and the use of attention mechanisms as a form of information transfer, which can be seen as a specialized application of prompts in machine learning. However, the study does not directly focus on hard prefix prompts or their systematic review, which limits its relevance to the specific area of prompt engineering referred to in the original query. It is more related to the application and implementation of prompts in a practical task rather than the study of prompt engineering itself."
-prefixmol: target- and chemistry-aware molecule design via prefix embedding,gpt-4-1106-preview,6,"The provided title and abstract refer to a generative model using 'prefix embeddings,' which can be seen as a form of prompt engineering, albeit in a different domain (molecular design rather than text generation). The concept of prefix embeddings as contextual prompts shares a conceptual similarity with prefix prompts in text-based models, as they both aim to guide the generation process under specific conditions. However, the application is quite niche and specific to chemistry and drug design, which means the focus is not on prompt engineering in the general sense but is instead applied in a specialized context. Therefore, the relevance to prompt engineering studies is moderate but not direct, as it uses similar concepts in a domain-specific application."
-towards a unified view on visual parameter-efficient transfer learning,gpt-4-1106-preview,4,"While the study presents a framework in parameter-efficient transfer learning (PETL) and investigates prefix-tuning, it does so in the context of vision models rather than language models, which is the primary domain for prompt engineering. However, the concept of a 'hard prefix prompt' isn't directly addressed, but the methods and findings could be considered somewhat relevant for those interested in the extension of prompt engineering concepts to the vision domain. Thus, relevance is moderate but not directly aligned with the specific focus of hard prefix prompts in prompt engineering studies."
-evaluating adaptive pedagogical agents' prompting strategies effect on students' emotions,gpt-4-1106-preview,5,"The relevance to prompt engineering is moderate, as the study examines the impact of different prompting strategies on students' emotions within an Intelligent Tutoring System (ITS). While not directly focused on 'hard prefix prompts' or prompt engineering in the AI language model sense, the research does explore how different types of prompts can influence user experience and engagement, which can be parallel to how prompts are engineered to guide AI behavior. However, the specific connection to 'hard prefix prompts' in prompt engineering is not made, which limits the direct relevance to the topic."
-impact of different pedagogical agents' adaptive self-regulated prompting strategies on learning with metatutor,gpt-4-1106-preview,5,"The study focuses on the effect of prompting strategies on learning outcomes within an educational tool, which marginally relates to prompt engineering as it deals with the design and effectiveness of prompts. Prompt engineering specifically pertains to the construction and optimization of prompts to improve the performance of artificial intelligence systems. While the study on pedagogical agents' prompting strategies is adjacent to this domain, its direct application to prompt engineering in AI is not clear. Therefore, the relevance is moderate."
-smart homes for people with alzheimer's disease: adapting prompting strategies to the patient's cognitive profile,gpt-4-1106-preview,6,"The study's relevance to prompt engineering is moderate. While it does not directly address 'hard prefix prompts' in the context of machine learning or computational prompt engineering, it deals with the adaptation of prompts (cognitive assistance) to users' needs, which parallels the customization aspect of prompt engineering. Furthermore, the development of guidelines for effective prompting strategies and an experimental protocol has some commonalities with the principles of designing and testing prompts in AI systems. However, the application is specific to smart homes and Alzheimer's patients and may not fully translate to the broader field of prompt engineering study."
-understanding the effect of in-video prompting on learners and instructors,gpt-4-1106-preview,4,"While the abstract discusses in-video prompting, which is a form of engagement tactic within an educational context, it does not specifically mention or focus on 'hard prefix prompts' or the systematic review of prompt engineering. The study is relevant to the broader context of prompt design and use in learning environments but does not directly address the topic of a comprehensive review of hard prefix prompts in prompt engineering. Therefore, it has some relevance due to its focus on the effects of prompts in an instructional setting but falls short of directly addressing the specified topic of hard prefix prompts."
-effects of a progressive prompting-based educational game on second graders' mathematics learning performance and behavioral patterns,gpt-4-1106-preview,6,"The study focuses on the use of prompting strategies within a game-based learning environment, which is tangentially related to the broader concept of 'prompt engineering' in that it involves the design of prompts to guide users (learners) towards specific outcomes. However, 'prompt engineering' typically refers to designing prompts to interact with AI systems or computer models, rather than human students. Therefore, while the educational prompting strategy is a form of prompt design and may share underlying principles with prompt engineering for AI, it is not a direct study on 'hard prefix prompts' as the context differs. The relevance is moderate because the skills and insights from designing effective prompts for education might be applicable to prompt engineering for AI in developing user instructions or interactions."
-considering student choice when selecting instructional strategies: a comparison of three prompting systems.,gpt-4-1106-preview,6,"The study touches upon the effectiveness of prompting systems in educational settings, which is tangentially relevant to prompt engineering as it involves the use of prompts to enhance learning outcomes. However, prompt engineering typically focuses on improving the interaction with AI models and systems, rather than instructional strategies for human learning. Despite the different context, principles from studying human response to prompts could be insightful for designing AI prompts, thus earning a moderate relevance rating." -can chatgpt understand too? a comparative study on chatgpt and fine-tuned bert,gpt-4-1106-preview,6,"The abstract addresses the understanding ability of ChatGPT as compared to fine-tuned BERT models and mentions the use of advanced prompting strategies to improve ChatGPT's understanding. While the main focus is on the comparative analysis of model performance, the mention of prompting strategies implies some relevance to prompt engineering. However, the abstract does not offer a detailed exploration or direct focus on hard prefix prompts or their systematic review, which reduces its direct relevance to the specified topic of prompt engineering study." -boosting static resource leak detection via llm-based resource-oriented intention inference,gpt-4-1106-preview,6,"The provided abstract outlines research on 'InferROI,' a system designed to detect resource leaks in code using large language models (LLMs) for intention inference. Though this approach employs prompts to guide the LLM toward inferring intentions from code snippets, it is indirectly relevant to prompt engineering. The use of prompts is in the context of static analysis in software engineering, while prompt engineering generally refers to designing prompts to accurately elicit specific responses from language models. Since this research involves instructing an LLM via prompts, it could offer some insights into prompt design and effectiveness; hence, it is given a moderate relevance rating. However, it does not focus on prompt engineering as a primary study area, which is why the rating is not higher." -fake news in sheep's clothing: robust fake news detection against llm-empowered style attacks,gpt-4-1106-preview,6,"The relevance to prompt engineering study in the context of 'hard prefix prompts' is moderately substantial as the abstract describes the use of 'style-oriented reframing prompts' which are a form of prompts used in engaging with Language Models (LLMs). Although the main focus is on fake news detection and style-agnostic approaches to improve robustness against camouflage attempts by LLMs, the application of prompts is directly related to the mechanics of how LLMs are manipulated or interacted with to produce or detect certain styles of content. Therefore, while the primary topic is not a comprehensive systematic review of hard prefix prompts, the paper relates to one aspect of prompt engineering—using prompts to reframe content style to train a more robust detection model." -cgsmp: controllable generative summarization via multimodal prompt,gpt-4-1106-preview,5,"The abstract discusses the use of a multimodal approach to reduce hallucination in Natural Language Generation (NLG) and improve the quality of abstractive summarization, relating to language model performance and prompt design to some extent. 
However, the focus here is on the use of multimodal (image and text) inputs rather than on the study of 'hard prefix prompts' specifically. While prompt engineering is a broader field that includes various methods to control language model outputs, this paper seems to address only a subset of that field related to multimodal interaction and controllability. Therefore, the relevance to prompt engineering study is moderate, as it could provide insights into one aspect of the field without directly focusing on hard prefix prompts." -revisiting the plastic surgery hypothesis via large language models,gpt-4-1106-preview,5,"The abstract describes how Large Language Models (LLMs) can be utilized for Automated Program Repair (APR) and discusses the relevance of the plastic surgery hypothesis in this context. The mention of 'prompting strategy' indicates some level of relevance to prompt engineering, as it suggests that the study explores how to effectively use prompts to improve model performance. However, the focus seems to be on the application of LLM-based APR rather than on the study of prompt engineering itself. Therefore, the relevance to prompt engineering is moderate, as the paper likely touches on elements of prompt engineering as part of APR, but is not centered on prompt engineering as its primary topic of investigation." -instructprotein: aligning human and protein language via knowledge instruction,gpt-4-1106-preview,6,"The abstract describes InstructProtein, a large language model trained for bidirectional human and protein language comprehension, which involves specialized prompt engineering to facilitate this unique form of language alignment. Prompt engineering is relevant here, as it is necessary to construct instructions that enable the model to translate between human language and protein sequences. The knowledge graph-based instruction framework mentioned can be seen as an advanced form of prompt engineering, designed to overcome issues of annotation imbalance and instruction deficits. However, the content is more focused on the application within a bioinformatics context rather than prompt engineering as a standalone subject. Therefore, while prompt engineering is a component of the research, the paper is not primarily about prompt engineering in the broader sense but rather a specific application of it." -evaluation of gpt-3.5 and gpt-4 for supporting real-world information needs in healthcare delivery,gpt-4-1106-preview,6,"The abstract highlights the need for further research in prompt engineering to improve the performance of large language models (LLMs) in healthcare settings. It mentions the variability in the quality of responses by GPT-3.5 and GPT-4 to specific information needs, which implies that there is room for improvement in how prompts are designed to achieve better results. This is relevant to the study of prompt engineering since it suggests that better-designed prompts could potentially lead to more accurate and useful responses from LLMs. However, the abstract does not directly focus on 'hard prefix prompts' but rather on the broader application of LLMs in healthcare. Therefore, it is somewhat relevant but not fully focused on prompt engineering, hence the rating of 6." -chatspot: bootstrapping multimodal llms via precise referring instruction tuning,gpt-4-1106-preview,4,"The study primarily focuses on improving human-AI interactivity within multimodal large language models by introducing a more sophisticated method of instruction via referring prompts. 
While this does involve some form of prompt engineering, specifically in relation to how the model receives and understands instructions, it is not strictly concerned with 'hard prefix prompts' as it seems to combine multiple input modalities (language, clicks, drag-and-drop, drawings). The relevance is thus moderate because it does intersect with the concept of prompt design and efficacy but does not explicitly address the engineering of hard-coded text prompts within a linguistic context." -chill: zero-shot custom interpretable feature extraction from clinical notes with large language models,gpt-4-1106-preview,6,"The described study focuses on using expert-crafted queries to generate interpretable features from health records, which indirectly relates to prompt engineering since it involves crafting queries (prompts) for a model to generate useful outputs. However, the study applies the technique for feature extraction from clinical notes rather than the systematic review of 'hard prefix prompts,' which is more specific to improving prompt engineering methods or understanding their efficacy. Therefore, the relevance is moderate but not directly focused on the prompt engineering field as defined by the initial prompt." -interleaving pre-trained language models and large language models for zero-shot nl2sql generation,gpt-4-1106-preview,4,"The abstract discusses the development of a framework (ZeroNL2SQL) that involves using prompts to guide language models for a specialized task (NL2SQL generation). Although the specific term 'hard prefix prompts' is not used, the concept of using prompts to direct language model behavior is central to the study. This indicates some relevance to the study of prompt engineering but not directly focused on hard prefix prompts or a systematic review of them. Therefore, it is somewhat relevant to prompt engineering but not fully aligned with a comprehensive systematic review on that specific topic." -divknowqa: assessing the reasoning ability of llms via open-domain question answering over knowledge base and text,gpt-4-1106-preview,4,"The study focuses on the retrieval capabilities of Large Language Models and how they can be grounded on heterogeneous knowledge sources for better question-answering performances. While it relates to prompt engineering in the broader context of machine learning and enhancing LLMs' interactions with external data, the study's primary concern is not with hard prefix prompts directly but rather with improving the information retrieval process, which is a component of the system that supports effective prompting. Therefore, its relevance to prompt engineering, specifically to a systematic review of hard prefix prompts, is tangential rather than central." -scpatcher: mining crowd security discussions to enrich secure coding practices,gpt-4-1106-preview,6,"The paper discusses SCPatcher, a tool that uses Prompt Learning with a Large Language Model to improve secure coding practices by mining crowd security discussions. Although the primary focus is on enhancing secure coding, the use of Prompt Learning is relevant to the study of prompt engineering. However, the paper does not specifically focus on 'hard prefix prompts' as implied by the term 'prompt engineering study.' Therefore, the relevance to prompt engineering is secondary and not central to the main objective of the paper, resulting in a moderate rating of relevance." -can large language models be good path planners? 
a benchmark and investigation on spatial-temporal reasoning,gpt-4-1106-preview,6,"The title and abstract indicate research dealing with how large language models can handle tasks requiring spatial-temporal reasoning, which includes the analysis of few-shot prompting methodologies. These methodologies are a subset of prompt engineering, as they explore how to design prompts that enable language models to perform spatial reasoning tasks. While the focus is not explicitly on 'hard prefix prompts' as the prompt engineering study may suggest, few-shot prompting, as part of prompt engineering, is relevant because the study discusses the effectiveness of different prompting techniques. Therefore, the study is indirectly related to the broader field of prompt engineering but does not directly address the comprehensive systematic review on hard prefix prompts." -amortizing intractable inference in large language models,gpt-4-1106-preview,4,"The provided abstract discusses the use of amortized Bayesian inference to sample from intractable posterior distributions in autoregressive large language models (LLMs) and touches on chain-of-thought reasoning as a latent variable modeling problem. While this research is related to the functioning and fine-tuning of LLMs, it does not directly address 'hard prefix prompts' or any aspect of prompt engineering. However, the methods developed in this work for fine-tuning LLMs could indirectly benefit prompt engineering by enabling more efficient adaptation of models to specific tasks, which is why the relevance rating is not at the lowest end of the scale." -fabricator: an open source toolkit for generating labeled training data with teacher llms,gpt-4-1106-preview,4,"The relevance to prompt engineering is moderate. The abstract discusses the use of LLMs (Large Language Models) to generate labeled data for training other NLP models, which does involve prompting the LLM to produce specific outputs. The process of designing these prompts to effectively direct the LLM's output towards useful labeled data creation is related to 'prompt engineering.' However, the abstract does not specifically mention 'hard prefix prompts' nor does it focus on a comprehensive systematic review of such. Therefore, while the topic is related to prompt engineering, it does not fully align with a 'comprehensive systematic review on hard prefix prompts.' Thus, the topic is moderately relevant, but not directly on point." -scalable multi-robot collaboration with large language models: centralized or decentralized systems?,gpt-4-1106-preview,6,"The abstract describes research on planning frameworks for pre-trained large language models (LLMs) in multi-robot task scenarios, addressing token efficiency, which relates to the token budget and potentially to prompt construction. While the study isn't focused specifically on 'hard prefix prompts', it does engage with prompt engineering in the context of task planning for robots using LLMs. The relevance to prompt engineering is indirect through its exploration of token-efficient LLM frameworks and mention of prompting techniques, which could include prompt design or optimization. However, the core focus is on the application within robotics rather than the systematic study or review of prompt engineering itself."
-the impact of scaffolding prompts on the collaborative problem solving of ill-structured tasks by undergraduate engineering student groups,gpt-4-1106-preview,5,"The study seems to focus on scaffolding prompts in the context of collaborative problem solving for ill-structured tasks, rather than hard prefix prompts specifically. Nonetheless, the research is relevant to the field of prompt engineering to some extent because it explores how certain types of prompts can affect the problem-solving abilities of engineering students. This could indirectly inform studies or practices within prompt engineering, especially concerning the design of prompts that facilitate learning and problem-solving in educational settings." -exploring the impacts of cognitive and metacognitive prompting on students’ scientific inquiry practices within an e-learning environment,gpt-4-1106-preview,4,"While the study focuses on the use of prompts to enhance scientific inquiry in an educational context, and thus tangentially touches upon the concept of prompting, it does not directly address prompt engineering related to natural language processing or AI prompt design. The relevance lies in the investigation of prompt effectiveness, which could be conceptually extended to prompt engineering for AI systems. However, the study's primary focus on educational cognitive and metacognitive prompts limits its direct applicability to prompt engineering study, specifically regarding hard prefix prompts." -autotutor: a tutor with dialogue in natural language,gpt-4-1106-preview,4,"While the 'autotutor' paper focuses on a system that uses dialogue in natural language, which is tangentially related to prompt engineering in the sense that it deals with natural language processing and potentially the design of prompts for tutorial purposes, it does not directly address 'hard prefix prompts' or systematic reviews related to prompt engineering studies. The connection to prompt engineering is more incidental as it relates to dialogue patterns and design, which might apply to the field but are not centrally concerned with the systematic approaches to hard prefix prompts specifically. Therefore, the relevance is moderate, as some of the underlying principles may be applicable, but the core subject of the study diverges from the specific focus on prompt engineering." -which prompts make the difference? data prioritization for efficient human llm evaluation,gpt-4-1106-preview,5,"The provided title and abstract describe a study focused on the optimization of human evaluations of large language models through data prioritization, which is indirectly relevant to prompt engineering. Prompt engineering typically involves constructing prompts to elicit specific outputs from language models but does not directly address the question of human evaluators in the loop. However, the study's implications for improving the efficiency of model evaluation can indirectly influence the prompt engineering process by refining the human feedback loop connected to prompt-tuning performance. This makes it somewhat relevant to prompt engineering, especially in the scope of human-in-the-loop evaluations and performance measurement. Nonetheless, the study does not seem to address 'hard prefix prompts' or any specific prompt engineering methodologies, which limits its direct relevance to the field of prompt engineering."
-uniex: an effective and efficient framework for unified information extraction via a span-extractive perspective,gpt-4-1106-preview,4,"The abstract mentions the use of schema-based prompts within the UniEX framework for universal information extraction, which touches upon the aspect of utilizing prompts in AI tasks. However, it does not specifically address 'hard prefix prompts' or conduct a 'comprehensive systematic review' on prompt engineering. The focus appears to be on information extraction tasks and improving their efficiency through a unified extractive framework rather than on the study of prompt engineering itself. Consequently, it has some relevance due to the mention of prompts but is not centrally focused on prompt engineering studies." -multimodal multi-task stealth assessment for reflection-enriched game-based learning,gpt-4-1106-preview,4,"The study mentioned does not directly address 'hard prefix prompts' or 'prompt engineering' as it appears to be more focused on game-based learning environments and using a stealth assessment framework for educational purposes. The relevance comes from the use of in-game reflection prompts and the multifaceted assessment of student responses which tangentially touches upon the concept of prompts and reflection in learning. However, it does not engage with the specific study of engineering prompts in the context of AI systems or conversational models, which would be necessary for a higher relevance rating." -visual chain of thought: bridging logical gaps with multimodal infillings,gpt-4-1106-preview,6,"The study introduces VCoT, which uses a form of prompt engineering by leveraging chain of thought prompting which is relevant to prompt engineering study. However, the focus is more on multimodal integration and recursive infillings to improve reasoning in sequential data, rather than on prompt engineering with hard prefixes specifically. The relevance is moderate because it does involve prompt engineering techniques, though it is not focused on the systematic review of hard prefix prompts." -making a case for spatial prompting in human-robot communication,gpt-4-1106-preview,4,"This paper is somewhat relevant to prompt engineering in that it discusses communication strategies with robots, which could include developing prompts for human-robot interaction. However, it focuses on 'spatial prompting' and non-verbal communication cues, which is a different area than 'hard prefix prompts,' which are typically textual or verbal in nature and used in language model interactions. The study's relevance to prompt engineering is tangential and not directly aligned with the concept of hard prefix prompts in language models or more conventional prompting techniques." -the smartweb corpora: multimodal access to the web in natural environments,gpt-4-1106-preview,4,"The description indicates that the chapter discusses a prompting scheme called SitPro, a recording technique, and properties of created corpora. While the mention of a prompting scheme suggests relevance to prompt engineering, there is no explicit mention of 'hard prefix prompts' or a systematic review approach. The relevance appears to be tangential rather than directly focused on prompt engineering as it pertains to pre-determined structured prompts. The rating reflects moderate relevance due to the connections to prompts and data acquisition which could be applicable to prompt engineering studies but lacks specificity regarding 'hard prefix prompts'." 
-beyond bounding box: multimodal knowledge learning for object detection,gpt-4-1106-preview,5,"The paper deals with the use of language prompts for improving object detection in machine learning, indicating relevance to prompt engineering in that it involves designing prompts to facilitate learning. However, the study's primary focus is on multimodal knowledge learning in object detection, rather than on prompt engineering specifically. It discusses the creation and use of prompts as part of the method but does not center around designing or systematically reviewing hard prefix prompts, which would be more directly related to prompt engineering studies." -meta learning to bridge vision and language models for multimodal few-shot learning,gpt-4-1106-preview,4,"The presented abstract discusses a multimodal meta-learning approach to bridge vision and language models, aiming to improve few-shot learning by automatically adapting to new tasks. The relevance to 'prompt engineering' is tangential since the abstract mentions induction of tasks without hand-engineering and could relate to auto-generating or tuning prompts in a broad sense. However, it deals more with meta-learning and the interplay between different modalities than the specific study of hard prefix prompts as described in the initial request. Therefore, it is only moderately related to prompt engineering as the focus of the paper is on model adaptation and few-shot learning rather than prompt design or engineering." -a prompt-based multimodal tabular transformer encoder for medical intervention duration estimation,gpt-4-1106-preview,6,"The study introduces a prompt-based approach within a medical context, focusing on a multimodal deep learning framework for medical intervention duration estimation. While it does not directly address 'prompt engineering' in the broader sense, the use of prompts in conjunction with a pre-trained sentence encoder indicates an application of prompt engineering principles. Hence, the relevance is moderate, as it shows an example of how prompts can be interfaced with other machine learning components, but the study is specific to medical interventions and does not cover prompt engineering as a standalone subject." -mpt: multimodal prompt tuning for event detection,gpt-4-1106-preview,5,"The presented abstract discusses a multimodal Prompt Tuning approach for event detection that incorporates both images and text, which pertains to prompt engineering in the context of modal feature aggregation and the use of soft prompts. However, the primary focus is on event detection using multimodal inputs rather than on hard prefix prompts specifically. The mention of 'prompt tuning' indicates relevance to prompt engineering, but the absence of discussion about hard prefix prompts specifically results in a moderate relevance rating." -what matters in training a gpt4-style language model with multimodal inputs?,gpt-4-1106-preview,5,"The abstract discusses various factors that affect the training of a GPT4-style multimodal language model, among which the influence of diversified prompts on the instruction-following ability of the trained models is mentioned. This indicates some relevance to prompt engineering, as understanding how prompts affect model performance is a subset of prompt engineering. However, the focus of the study includes a broader range of topics such as network structures, training data, and benchmarks, which are not exclusively concerned with prompt engineering. 
Hence, the rating is at the midpoint to reflect this partial relevance." -multimodal prompt transformer with hybrid contrastive learning for emotion recognition in conversation,gpt-4-1106-preview,6,"The abstract provided references the use of a 'Multimodal Prompt Transformer' which may imply some relevance to prompt engineering, particularly as it pertains to encoding textual features and facilitating multimodal fusion in the context of emotion recognition. However, the primary focus of the study is on emotion recognition in conversation rather than on hard prefix prompts or prompt engineering in a broader sense. Prompt engineering typically involves the strategic design of input prompts to produce desired outputs from AI models, which is a tangential aspect of the described research. Therefore, while there is some relevance, it is not the core subject of the study." -a comparison of prompt delays with trial-and-error instruction in conditional discrimination training,gpt-4-1106-preview,4,"The study focuses on 'prompt delays' within the context of conditional discrimination training, which is relevant to learning processes and instruction strategies but does not directly address 'prompt engineering' as related to computational models or hard prefix prompts. However, considering that 'prompt delays' could potentially be related to the timing and delivery aspects of prompts in computational terms, the study might offer some indirect insights useful for prompt engineering, especially in the nuanced aspects of timing and response effectiveness. Therefore, a moderate relevance rating is provided." -an automated prompting system for smart environments,gpt-4-1106-preview,4,"While the document seems to deal with automation and smart systems, which could involve some form of prompt engineering, the focus on 'hard prefix prompts' is unclear without further content. A 'fully automating prompting system' suggests relevance to automated prompt generation, but the extent to which this aligns with 'hard prefix prompts' is not specified. The relevance rating could be higher if the paper's approach to prompting systems includes or overlaps with the structured method of prompt engineering implied by hard prefix prompts." -prompt cache: modular attention reuse for low-latency inference,gpt-4-1106-preview,5,"The abstract discusses Prompt Cache which is related to the optimization of language model inference using attention state reuse in prompts; however, it does not directly address a 'hard prefix prompt' which might suggest a fixed prompt structure or content in the systematic review. The approach outlined in the abstract is relevant to prompt engineering, as it deals with efficient prompt handling in LLMs, but it doesn't focus on prompt content or structure, thereby making it moderately relevant." -cocomo: computational consciousness modeling for generative and ethical ai,gpt-4-1106-preview,4,"The mention of 'prompt template formulation' implies some relevance to the area of prompt engineering, as this involves crafting inputs that guide the behavior of AI models. However, the CoCoMo model appears to focus more broadly on ethical and emotional intelligence in AI, rather than specifically on the study of 'hard prefix prompts' in prompt engineering. The relevance is present but not the primary focus of the study." 
-let me check the examples: enhancing demonstration learning via explicit imitation,gpt-4-1106-preview,6,"The abstract discusses Imitation-Demo, a method to enhance demonstration learning for prompt-based predictions. While it does not directly mention 'hard prefix prompts,' it addresses prompt-demonstration dependencies and the optimization of prompt-based learning, which is relevant to prompt engineering. However, since the focus is on imitation and contrastive learning mechanisms rather than the systematic study of hard prefix prompts, the relevance is moderate rather than high." -recruiting patients and collecting data for an observational study using computerised record pop-up prompts: the prog-res study,gpt-4-1106-preview,4,"The study described in the abstract demonstrates the practical application of electronic prompts in the context of patient recruitment and data collection for medical studies, which indirectly relates to prompt engineering as it showcases a real-world use case of prompts facilitating a task, in this case recruitment and data collection. However, the study is centered around improving operational aspects of medical research rather than exploring the theoretical or methodological aspects of prompt design, development, or optimization in automated systems or artificial intelligence, which are the central themes of prompt engineering. The relevance is therefore moderate as the conceptual link exists but is tangential to the principal focus of prompt engineering as a field."
diff --git a/data/semantic_scholar_data/semantic_scholar_gpt_relevant.csv b/data/semantic_scholar_data/semantic_scholar_gpt_relevant.csv
deleted file mode 100644
index c7ea544..0000000
--- a/data/semantic_scholar_data/semantic_scholar_gpt_relevant.csv
+++ /dev/null
@@ -1,1272 +0,0 @@
-Title,Model,Probability,Reasoning -"""do anything now"": characterizing and evaluating in-the-wild jailbreak prompts on large language models",gpt-4-1106-preview,7,"The paper provides valuable insights into jailbreak prompts, which are a specific type of adversarial prompt that attempts to circumvent the safeguards of large language models. Understanding the characteristics and evaluation of these prompts is relevant to prompt engineering because it guides the development of prompts that can resist misuse and prompts that align better with human values. Although it focuses on the adversarial aspect rather than constructive prompt engineering, the findings can inform the broader field of prompt engineering, particularly in designing robust and safe systems. Therefore, the study is quite relevant but not entirely centered on prompt engineering in its purest form, hence the rating of 7." -latent jailbreak: a benchmark for evaluating text safety and output robustness of large language models,gpt-4-1106-preview,9,"The paper focuses on evaluating the safety and robustness of large language models (LLMs) using a benchmark that entails analysis of prompt design, which is highly relevant to prompt engineering. Specifically, it investigates how malicious instructions embedded within prompts affect the LLM's behavior. This is crucial for understanding how different prompt structures (position of instructions, word replacements, and instruction replacements) influence the model's output, aligning closely with the broader field of prompt engineering that aims to optimize the interaction with LLMs.
The systematic review mentioned in the query would likely cover such research, as it is integral to understanding how 'hard prefixes' or fixed parts of prompts can affect the LLM's outputs. The only reason it does not get a full 10 is that the study does not solely focus on the engineering aspect of prompts but also on the safety and ethical concerns related to prompts." -fuzzllm: a novel and universal fuzzing framework for proactively discovering jailbreak vulnerabilities in large language models,gpt-4-1106-preview,7,"The relevance of 'fuzzllm: a novel and universal fuzzing framework for proactively discovering jailbreak vulnerabilities in large language models' to prompt engineering is notable. While the primary focus is on discovering vulnerabilities through fuzz testing, the utilization of templates to understand the structure of prompts and the identification of features within these prompts are directly related to the study of prompt engineering. The process of ensuring that prompts do not lead to service guideline violations requires a deep understanding of how different prompts are constructed and how they interact with LLMs. Therefore, the research indirectly contributes to the field of prompt engineering by seeking ways to prevent manipulative prompts from eliciting undesired responses. However, the study does not directly address hard prefix prompts or the systematic review of such prompts, which would be more central to a targeted prompt engineering study." -latent jailbreak: a test suite for evaluating both text safety and output robustness of large language models,gpt-4-1106-preview,7,"The paper's primary focus is on evaluating the safety and robustness of large language models (LLMs), which is relevant to prompt engineering as it deals with how different prompts (including those that contain harmful or malicious content) can affect the performance of LLMs. The concept of 'latent jailbreak' and the creation of a benchmark that includes 'malicious instruction embedding' directly relate to the study of prompts, particularly 'hard prefixes', which could be considered a form of adversarial input designed to test the limits of the model's behavior. This relevance is crucial because ensuring that models perform consistently well and generate safe content across a variety of prompt types is a key aspect of prompt engineering. However, it does not directly discuss the 'hard prefix prompts' in a systematic review context but rather the safety and robustness in a broader sense, hence the rating does not reach the maximum." -"tricking llms into disobedience: understanding, analyzing, and preventing jailbreaks",gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it addresses the manipulation of prompts to achieve unintended model behaviors, which is a critical aspect of prompt design and engineering. Understanding how to prevent these 'jailbreaks' is crucial for developing more secure and reliable prompt engineering practices. The study provides insights into the vulnerabilities of current models and offers potential solutions, which directly contribute to the field of prompt engineering. The rating is not a full 10 because the study is focused more on security and mitigation than on the broader aspects of prompt engineering, such as the optimization of prompts for various tasks or the generation of more sophisticated prompts for improved performance."
-jailbreaking black box large language models in twenty queries,gpt-4-1106-preview,8,"The abstract discusses an algorithm (PAIR) for generating 'semantic jailbreaks' using adversarial methods on large language models (LLMs) such as GPT-3.5/4, Vicuna, and PaLM-2. This is highly relevant to prompt engineering because understanding and preventing adversarial manipulation of LLMs is crucial for developing more effective and secure prompts. It is directly related to the field as it explores the vulnerabilities in the current engineering of prompts and how they can be exploited. The abstract, however, does not specifically address 'hard prefix prompts', which are a subset of prompts within prompt engineering, hence not warranting a full score of 10." -autodan: automatic and interpretable adversarial attacks on large language models,gpt-4-1106-preview,8,"The paper describes an adversarial attack method named 'AutoDAN' that is highly relevant to prompt engineering as it involves the generation of attack prompts, a form of input manipulation which is a key aspect of prompt engineering. This research contributes to a deeper understanding of Large Language Model vulnerabilities and strategies that can be used to manipulate model outputs, which is pertinent to the field of prompt engineering. However, the study is focused specifically on adversarial attacks rather than the broader topic of 'hard prefix prompts,' therefore it does not fully align with systematic review studies on prompt engineering techniques in general, which might include non-adversarial methods and a wider range of applications. Hence, the rating is high but not at the maximum." -visual prompt tuning,gpt-4-1106-preview,7,"The topic of Visual Prompt Tuning (VPT) is relevant to the prompt engineering study since it deals with the adaptation of pre-trained models, which is a core concept in prompt engineering. However, VPT specifically addresses the visual domain and large-scale Transformer models in vision, which differs from the 'hard prefix prompts' that typically relate to textual input. Despite this difference, the underlying principles of efficient tuning and the introduction of new parameters to influence model behavior without extensive retraining are concepts shared with prompt engineering. This cross-domain relevance is valuable but not directly tied to the initial study of 'hard prefix prompts', hence the rating of 7." -conditional prompt learning for vision-language models,gpt-4-1106-preview,7,"The abstract describes a study that focuses on prompt learning, specifically in vision-language models, which is highly relevant to the field of prompt engineering. The study introduces Conditional Context Optimization (CoCoOp), which is a method for improving the generalization of learned prompts over unseen classes. While this is directly related to prompt engineering, it is specifically tailored to vision-language models, and not directly focused on 'hard prefix prompts' which the original prompt suggests. Therefore, the relevance rating is not a perfect 10, as 'hard prefix prompts' might imply a different subset of prompt engineering concerned with text prompts in NLP. Nevertheless, the concepts studied are transferable to prompt engineering more broadly, warranting a relatively high rating." 
-prompt-to-prompt image editing with cross attention control,gpt-4-1106-preview,8,"The provided abstract describes a study closely related to 'prompt engineering,' as it involves a framework for editing images using text prompts, which directly entails understanding and manipulating prompts for precise outcomes. The emphasis on cross-attention layers as a mechanism for controlling the relationship between text prompts and the spatial layout of images is particularly relevant to the field of prompt engineering, as it is concerned with the fine-tuned influence of textual input on generative models. While the study is not specifically about 'hard prefix prompts,' it contributes to the broader field of prompt engineering by showing how textual prompts can be used to control and manipulate the output of synthesis models. The 2-point deduction accounts for the specific focus on imagery rather than a systematic review of hard prefix prompts in various contexts." -p-tuning: prompt tuning can be comparable to fine-tuning across scales and tasks,gpt-4-1106-preview,9,"The provided abstract is highly relevant to prompt engineering study as it discusses 'prompt tuning', which is a method within the field of prompt engineering in natural language understanding (NLU). It compares prompt tuning with fine-tuning, highlighting its efficiency and effectiveness across different tasks and model scales, and introduces an advanced version named 'P-Tuning v2'. This research contributes to the understanding of how continuous prompts can be optimized and sheds light on prompt engineering as a potentially universal method for NLU tasks, making it a significant resource for studying prompt engineering methods." -"pre-train, prompt, and predict: a systematic survey of prompting methods in natural language processing",gpt-4-1106-preview,9,"The article provides a detailed survey of prompt-based learning in natural language processing, which is directly relevant to prompt engineering. It covers the adaptation of language models to new tasks using prompts, which is a core concept in prompt engineering. The systematic review and organization of research along with the introduction of a unified set of mathematical notations for existing work are valuable for understanding the breadth and depth of prompt-based methods, making it highly relevant to the study of prompt engineering. Moreover, the article's release of resources like NLPedia–Pretrain aids further research and accessibility. The rating is not a perfect 10 because it might not exclusively focus on 'hard prefix prompts' as the prompt engineering study inquires but generally covers prompting methods in NLP." -learning to prompt for open-vocabulary object detection with vision-language model,gpt-4-1106-preview,8,"The abstract details a novel method, detection prompt (DetPro), which focuses on learning continuous prompts for open-vocabulary object detection, indicating an application of prompt engineering in vision-language models. The relevance is high because it directly tackles the challenge of designing effective prompts to improve model performance. However, it might not cover the theoretical foundations or a wider range of prompt engineering applications, hence not a full score." 
-"recommendation as language processing (rlp): a unified pretrain, personalized prompt & predict paradigm (p5)",gpt-4-1106-preview,7,"The abstract describes a 'Pretrain, Personalized Prompt, and Predict Paradigm' (P5) which is closely related to the concept of hard prefix prompts in prompt engineering. The study's emphasis on personalized prompts and instruction-based recommendation indicates that it deals with the design and utilization of prompts to elicit desired behaviors from a language model, which is a core element of prompt engineering. However, because the abstract specifically focuses on recommendation tasks and does not explicitly mention 'hard prefix prompts' as a category or detail the systematic review elements that might be expected from a 'comprehensive systematic review,' it does not fully align with a study exclusively centered on hard prefix prompts. Despite this, the principles discussed are relevant to the broader field of prompt engineering." -an information-theoretic approach to prompt engineering without ground truth labels,gpt-4-1106-preview,9,"The article presents a technique for prompt engineering, which is highly relevant to the study of prompt engineering. It focuses on a method that maximizes mutual information between input and output to select effective prompts without labeled data or model access. This method is innovative in the field of prompt engineering as it bypasses the need for substantial labeled datasets and the necessity to tweak model parameters. However, the title does not specifically mention 'hard prefix prompts,' so it may not be entirely focused on 'hard prefix prompts' as the type of prompts being engineered, which is why it doesn't receive a perfect 10." -prompt distribution learning,gpt-4-1106-preview,9,"The abstract and TLDR indicate that the study deals with prompt distribution learning, which is directly related to prompt engineering. It focuses on adapting pre-trained models to downstream tasks by learning prompt distributions, a technique relevant to constructing and using prompts to improve model performance. This is highly pertinent to studies in prompt engineering, which aims to optimize how models interact with prompts for better task performance. Although the term 'hard prefix prompts' is not explicitly mentioned, the overall concept of learning and utilizing prompts makes this study considerably relevant." -ignore previous prompt: attack techniques for language models,gpt-4-1106-preview,8,"The provided abstract is highly relevant to the subject of 'prompt engineering,' as it directly discusses PromptInject, a methodology for adversarial prompt composition designed to exploit vulnerabilities in transformer-based language models like GPT-3. This pertains to the broader category of prompt engineering by showcasing methods of prompting that could lead to model misalignment, thus revealing long-tail risks. Understanding these attack techniques is crucial for developing more robust prompt engineering practices, although the specific focus on 'hard prefix prompts' is not directly mentioned." -language models that seek for knowledge: modular search & generation for dialogue and prompt completion,gpt-4-1106-preview,7,"While the abstract provided doesn't directly address 'hard prefix prompts' or 'prompt engineering' specifically, it does pertain to the broader subject area of how language models can be improved to generate more factual and relevant responses. 
The research on modular search and generation in the context of dialogue and prompt completion is relevant to prompt engineering as it impacts the effectiveness of the prompts in eliciting accurate and meaningful responses from language models. Therefore, the rating is relatively high due to the indirect relevance of improving language model outputs, which is a fundamental aspect of prompt engineering." -test-time prompt tuning for zero-shot generalization in vision-language models,gpt-4-1106-preview,9,"The abstract describes a study directly related to prompt engineering, specifically the dynamic tuning of prompts for vision-language models to enhance zero-shot generalization. Although the provided text doesn't explicitly mention 'hard prefix prompts,' it discusses an advanced concept of prompt optimization at test-time which is highly relevant to the broader field of prompt engineering. The method's ability to adapt prompts using a single test sample fits well within the study of how prompts can be engineered and optimized to improve model performance, particularly in zero-shot settings." -p-tuning v2: prompt tuning can be comparable to fine-tuning universally across scales and tasks,gpt-4-1106-preview,9,"The abstract discusses the concept of prompt tuning in the context of Natural Language Understanding (NLU) and proposes a new method called P-Tuning v2, indicating a significant advancement in the field of prompt engineering. The stated goals of matching the performance of full model fine-tuning with a fraction of tuned parameters make it highly relevant. The only reason it is not rated a perfect 10 is that the abstract does not specifically mention 'hard prefix prompts', but it is likely that the methodology could be applied to or has implications for such prompts, hence the high rating." -diffusiondb: a large-scale prompt gallery dataset for text-to-image generative models,gpt-4-1106-preview,8,"The abstract describes a dataset (DiffusionDB) focused on the synthesis of text-to-image generation using prompts in natural language, which includes the study of syntactic and semantic characteristics of prompts. This relates closely to prompt engineering, as it involves analyzing how prompts influence the outputs of generative models and finding optimal prompts to achieve desired results. The only reason it does not score a perfect 10 is because 'hard prefix prompts' which the initial prompt specified are not mentioned, so it may not cover the specific focus on 'hard prefix prompts'. Nonetheless, it is highly relevant for studies about prompt engineering in the broader context of text-to-image generative models." -learning to prompt for continual learning,gpt-4-1106-preview,8,"The abstract discusses an approach to continual learning that focuses on using prompts as learnable parameters within a memory space to guide model predictions and manage knowledge. This is highly relevant to prompt engineering because it directly deals with the optimization and efficacy of prompts in a machine learning context. However, it is not a 'comprehensive systematic review on hard prefix prompts,' as the prompt specifies, but rather a presentation of a novel framework for continual learning using prompts, which is why the rating is not a perfect 10." 
-prompt-aligned gradient for prompt tuning,gpt-4-1106-preview,9,"The abstract describes a study focused on improving prompt tuning methods for vision-language models, presenting a new approach called Prompt-aligned Gradient (ProGrad) specifically designed to prevent the loss of general knowledge during the fine-tuning process. This is highly relevant to prompt engineering as it addresses a significant challenge in the field—maintaining the balance between task-specific adaptation and the retention of pre-trained capabilities. The paper shows potential advancements in prompt tuning, which is a core aspect of prompt engineering, hence the high relevance rating."
-domain adaptation via prompt learning,gpt-4-1106-preview,7,"The abstract describes a study on 'domain adaptation via prompt learning (DAPrompt)', which is relevant to the field of prompt engineering, as it specifically focuses on the use of prompts in unsupervised domain adaptation. The relevance is not at the maximum because the study concentrates on a particular application of prompt learning (i.e., unsupervised domain adaptation) rather than a comprehensive overview or systematic review of hard prefix prompts in prompt engineering. Nonetheless, it contributes valuable insights into prompt engineering by illustrating how prompts can dynamically adapt classifiers to different domains, which is a significant aspect of the study area."
-hyperprompt: prompt-based task-conditioning of transformers,gpt-4-1106-preview,9,"The provided text is highly relevant to prompt engineering study as it directly addresses a novel architecture called 'HyperPrompt' for prompt-based task-conditioning in Transformers, which is a key area in the field of prompt engineering. The text discusses the efficiency and effectiveness of HyperPrompt in the context of few-shot learning and multi-task learning, benchmarks that are essential for evaluating prompt-based methods. The relevance is not rated a full 10 only because the specific term 'hard prefix prompts' is not directly mentioned, although the description strongly suggests relevance to that concept."
-prompt for extraction? paie: prompting argument interaction for event argument extraction,gpt-4-1106-preview,8,"The provided abstract describes a model (PAIE) that leverages prompt tuning as part of its methodology for Event Argument Extraction (EAE). The model's use of prompts to guide span selection and capture argument interactions is highly relevant to the study of prompt engineering, as it applies prompt-based methods for a specific NLP task. The paper also discusses extractive prompt tuning strategies and their effectiveness, which contributes to the understanding of prompt engineering. However, it does not specifically address 'hard prefix prompts' which might be a more specialized aspect within the field of prompt engineering, hence the rating isn't a full 10."
-promda: prompt-based data augmentation for low-resource nlu tasks,gpt-4-1106-preview,8,"The paper's focus on 'Prompt-based Data Augmentation' and the method of training 'small-scale Soft Prompts' in PLMs directly relates to the concept of prompt engineering, a technique used to interface with and extract specific behaviors from language models. While the paper might not explicitly cover a 'hard prefix prompt,' it does deal with the broader topic of how prompts can be engineered and utilized to improve NLU tasks, which makes it highly relevant to studies within prompt engineering."
-no more fine-tuning? an experimental evaluation of prompt tuning in code intelligence,gpt-4-1106-preview,8,"The abstract discusses prompt tuning as an alternative to fine-tuning in the context of code intelligence tasks. Prompt tuning is highly relevant to prompt engineering studies since it involves designing and inserting prompts that aid the pre-trained models in adapting to specific tasks. This specific paper evaluates the efficiency of prompt tuning over fine-tuning, which is a core topic within prompt engineering research. Although it focuses on code intelligence tasks and not 'hard prefix prompts' specifically, the principles and findings can have implications for prompt engineering in general. The relevance could be higher if the study specifically addressed hard prefix prompts or a broader range of prompt engineering techniques."
-personalized prompt learning for explainable recommendation,gpt-4-1106-preview,8,"The given title and abstract focus on 'prompt learning', particularly in the context of explainable recommendation systems. It is highly relevant to prompt engineering since prompt learning is a crucial aspect of tailoring prompts to improve the performance of AI models, such as pre-trained transformer models mentioned in the text. Moreover, the paper discusses innovative approaches (discrete and continuous prompt learning) and training strategies, which are essential for advancing the field of prompt engineering. The rating is not a full 10 because the study specifically addresses the use of prompt learning for explainable recommendations rather than a broad systematic review on 'hard prefix prompts' in general, implying a more focused domain application rather than a comprehensive study across multiple domains or types of prompts."
-towards unified conversational recommender systems via knowledge-enhanced prompt learning,gpt-4-1106-preview,7,"The abstract discusses the integration of recommendation and conversation modules in a conversational recommender system using a prompt learning paradigm, which is particularly relevant to prompt engineering. The use of knowledge-enhanced prompts and the unification of different subtasks into the prompt learning framework makes it pertinent to the study of how prompts can be designed to improve performance in AI systems. Although the primary focus is on conversational recommender systems rather than prompt engineering in general, the methodology and implications for the field of prompt engineering are significant enough to warrant a relevance rating of 7."
-bridge-prompt: towards ordinal action understanding in instructional videos,gpt-4-1106-preview,7,"The paper describes an approach that involves 'reformulating individual action labels as integrated text prompts,' which relates to the concept of incorporating linguistic structures (prompts) to enhance the understanding of actions in videos. This suggests an innovative use of prompt engineering to bridge the semantic gap between actions, which is relevant to the study of prompts in the context of machine learning. However, this application is specific to action recognition in video data and does not address 'hard prefix prompts' directly, which is why the relevance rating is not higher."
-prompt consistency for zero-shot task generalization,gpt-4-1106-preview,9,"The title and abstract describe a study focused on improving zero-shot task generalization by regularizing prompt consistency, which is highly relevant to prompt engineering. Prompt engineering involves the careful design of prompts to elicit the desired responses from language models, and this paper directly addresses and proposes a method for enhancing performance in this area. The relevance is not rated a full 10 because the study may not explicitly be about 'hard prefix prompts' as mentioned in the primary query but it does contribute significantly to the broader field of prompt engineering."
-promptcap: prompt-guided task-aware image captioning,gpt-4-1106-preview,8,"The article describes 'PromptCap', a model that utilizes natural-language prompts to generate image captions that are tailored to assist large language models in performing visual question answering tasks. While the primary focus is on image captioning to aid knowledge-based VQA, the use of prompts to guide the model's output is directly related to prompt engineering. The research showcases how carefully engineered prompts can significantly enhance the performance of language models in understanding and responding to visual content. Therefore, the study has high relevance to prompt engineering, particularly in the context of integrating textual and visual information. However, it does not directly address hard prefix prompts in a systematic review, which is why the rating is not a perfect 10."
-spot: better frozen model adaptation through soft prompt transfer,gpt-4-1106-preview,9,"The abstract describes a study directly relevant to prompt engineering, as it focuses on the use of prompts to enhance performance in natural language processing tasks through a method known as Soft Prompt Transfer (SPoT). The relevance is high because it involves leveraging soft prompts for model adaptation which is a specific aspect of prompt engineering. Moreover, it suggests a systematic approach to understanding task transferability, which can contribute significant insights into the field of prompt engineering. The only reason it does not receive a full 10 is that the abstract does not mention 'hard prefix prompts' which was the specific focus of the systematic review mentioned in the prompt."
-prompt programming for large language models: beyond the few-shot paradigm,gpt-4-1106-preview,9,"The abstract discusses advanced concepts in prompt programming and evaluates the effectiveness of 0-shot prompts in comparison to few-shot prompts using GPT-3. It underlines the significant impact of prompt design on language model performance and outcomes. The introduction of 'metaprompt' suggests a forward-thinking approach in prompt engineering, indicating a relevance to the study of prompt engineering. The score is not a perfect 10 because the abstract doesn't specifically mention 'hard prefix prompts,' but the overall discussion is highly pertinent to the field of prompt engineering."
-coda-prompt: continual decomposed attention-based prompting for rehearsal-free continual learning,gpt-4-1106-preview,9,"The paper is highly relevant to the field of prompt engineering as it discusses a novel approach for producing dynamic prompts through an attention-based key-query mechanism, specifically for continual learning in computer vision. This study directly addresses the issue of prompt generation in the context of large-scale pre-trained models and presents a solution for improving accuracy without the need for data rehearsal. Although it may not exclusively focus on 'hard prefix prompts', the concept of input-conditioned prompt components is a valuable contribution to prompt engineering studies, making it almost entirely pertinent to the field."
-prompt learning with optimal transport for vision-language models,gpt-4-1106-preview,9,"The paper is highly relevant to prompt engineering, as it directly addresses the challenge of creating efficient prompts for vision-language models, which is a subset of the broader field of prompt engineering. The utilization of optimal transport to match vision and text modalities is a novel approach in learning multiple prompts, which aligns with the topic of systematic review on hard prefix prompts by exploring alternative strategies to enhance prompt effectiveness. The only reason the rating is not a full 10 is that the abstract does not explicitly mention 'hard prefix prompts', suggesting that the study might not be solely focused on that specific aspect of prompt engineering."
-idpg: an instance-dependent prompt generation method,gpt-4-1106-preview,9,"The provided abstract directly pertains to prompt engineering within the realm of NLP transfer learning, making it highly relevant. The novel method of Instance-Dependent Prompt Generation (IDPG) is a significant contribution to prompt engineering because it introduces variability and personalization of prompts for different input instances. The effectiveness of this method is demonstrated through experiments on various NLU tasks, situating the paper at the forefront of prompt engineering research. The reason for not awarding a perfect 10 is that the study does not explicitly mention 'hard prefix prompts', but the concept of IDPG seems inherently related to the engineering of task-specific prompts which would include hard prefix prompts among others."
-continual prompt tuning for dialog state tracking,gpt-4-1106-preview,7,"The abstract provided discusses 'Continual Prompt Tuning' which is a method for task adaptation in dialog systems that includes learning and storing prompt token embeddings to prevent catastrophic forgetting. Although not directly stated as 'hard prefix prompts,' the methodology is closely related to prompt engineering as it involves the manipulation of prompts to improve the performance of a pre-trained model in continual learning scenarios. This concept is relevant to the study of prompt engineering because it explores ways to effectively utilize prompts in a dynamic and evolving context, which is a crucial aspect of advanced prompt engineering strategies. However, the rating is not a full 10 because it is not directly focused on 'hard prefix prompts' specifically, which is narrower in scope compared to the broader concept of prompt engineering."
-exploring the universal vulnerability of prompt-based learning paradigm,gpt-4-1106-preview,9,"The abstract describes a study that directly investigates the vulnerabilities of the prompt-based learning paradigm, which is highly relevant to prompt engineering. The focus on triggers that exploit these vulnerabilities is critical for understanding the limitations and potential risks associated with prompts in language models. While not focused on creating or optimizing prompts, it is fundamentally related to their integrity and security, which is an essential aspect of prompt engineering studies."
-how many data points is a prompt worth?,gpt-4-1106-preview,9,"The abstract describes a study focusing on comparing the effectiveness of using prompts versus generic model heads in fine-tuning pretrained models for classification tasks. It specifically aims to quantify the benefits of prompts when working with limited data. Since the study investigates the impact of prompting on model performance across different tasks and data sizes, it contributes valuable insights to the field of prompt engineering. The high rating reflects the direct relevance of the findings to understanding how prompts can improve machine learning models, which is a core aspect of prompt engineering research. However, the rating is not a full 10 because it does not cover the breadth of prompt engineering, such as the design and optimization of prompts, which also includes areas beyond fine-tuning for classification tasks."
-knowprompt: knowledge-aware prompt-tuning with synergistic optimization for relation extraction,gpt-4-1106-preview,8,"The paper presents an advancement in prompt-tuning for the specific application of relation extraction. It introduces KnowPrompt, a technique that effectively incorporates domain knowledge into prompt templates, which is highly relevant to studies on prompt engineering. Although the focus is on relation extraction and not hard prefix prompts, the concepts of knowledge-aware prompts and learnable virtual type words are innovative contributions to the field of prompt-tuning as a whole. The lower score is because it does not directly address 'hard prefix prompts' as described in the original broad request, but it is still significantly relevant to the broader subject of prompt engineering."
-knowledgeable prompt-tuning: incorporating knowledge into prompt verbalizer for text classification,gpt-4-1106-preview,9,"The paper directly relates to the field of prompt engineering by introducing a novel approach to improve prompt tuning performance for text classification tasks. This approach involves integrating external knowledge into the verbalizer component of the tuning process, which is a specific technique within the broader area of prompt engineering. This is highly relevant as it targets one of the fundamental challenges in the field, which is to optimize the interaction between pre-trained language models and task-specific prompts. The rating is not a full 10 because it does not cover 'hard prefix prompts' specifically, but focuses more broadly on knowledgeable prompt-tuning, which may or may not include hard prefixes."
-pro-tuning: unified prompt tuning for vision tasks,gpt-4-1106-preview,8,"The abstract discusses the concept of prompt tuning, termed 'Pro-tuning', which is highly relevant to the field of prompt engineering as it applies prompt-based learning principles to computer vision tasks. While the principle is derived from work in natural language processing, the adaptation to vision models suggests a cross-disciplinary application of prompt engineering techniques, which is pertinent to the broader study of how prompts can be engineered for different types of models across fields. The relevance is not rated a full 10 as the study is specific to computer vision and may not cover all aspects of 'hard prefix prompts' in the context of a systematic review, which would more generally encompass various modalities and tasks."
-interactive and visual prompt engineering for ad-hoc task adaptation with large language models,gpt-4-1106-preview,9,"The abstract provided outlines a study that is highly relevant to prompt engineering. It describes the development of PromptIDE, a tool that facilitates the experimentation and optimization of prompts for neural language models. The workflow mentioned is designed to enhance prompt creation and performance evaluation before deployment, which is central to the field of prompt engineering. Although it doesn't explicitly mention 'hard prefix prompts,' the focus on prompt variations and performance signifies a close connection to the concept of prompt design and engineering. Thus, the relevance to prompt engineering is very high, but not a perfect 10 due to the missing specific mention of 'hard prefix prompts'."
-promptmaker: prompt-based prototyping with large language models,gpt-4-1106-preview,8,"The content of the article appears to be highly relevant to prompt engineering as it discusses prototyping ML-powered features using natural language prompts, which is a core component of prompt engineering. The emphasis on the experiences of industry professionals indicates insights into practical applications and challenges of prompt-based approaches. The article's focus on broadening access, speeding up prototyping, and improving collaboration directly relates to the evolution of prompt engineering techniques. However, the specific term 'hard prefix prompts' is not mentioned, which might suggest that the study doesn't exclusively focus on that subtype of prompts within prompt engineering. Therefore, the rating is an 8 instead of a perfect 10."
-dynamic prompt learning via policy gradient for semi-structured mathematical reasoning,gpt-4-1106-preview,7,"The abstract describes a study that focuses on enhancing the performance of pre-trained language models like GPT-3 on mathematical reasoning tasks by using a novel approach called PromptPG. This approach uses policy gradient to optimize the selection of in-context examples for prompt construction, which is a core aspect of prompt engineering. While the study is not directly about 'hard prefix prompts', it addresses the broader concept of prompt optimization for improving model performance. Therefore, it is relevant to prompt engineering but not specifically focused on a comprehensive systematic review on hard prefix prompts."
-conversing with copilot: exploring prompt engineering for solving cs1 problems using natural language,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering as it investigates the use of natural language interactions to guide GitHub Copilot, an AI code generation tool, in solving programming problems. It focuses on how changes to the wording of a problem can impact the AI's ability to generate correct code, which is at the core of prompt engineering techniques. The fact that the study includes an empirical evaluation of Copilot's performance across a dataset of programming problems and discusses the potential of prompt engineering as a learning tool underscores its relevance to the field. The rating is not a perfect 10 because the study is specific to the domain of programming problem solving and the tool GitHub Copilot, and while it is a significant component of prompt engineering, there may be additional facets of prompt engineering in broader contexts that are not covered by this study."
-"zeroprompt: scaling prompt-based pretraining to 1,000 tasks improves zero-shot generalization",gpt-4-1106-preview,8,"The abstract discusses a multitask pretraining approach named ZeroPrompt which is highly relevant to prompt engineering as it directly relates to enhancing the performance of zero-shot learning using prompts. It also mentions the introduction of a new prompting method that utilizes a genetic algorithm to discover the best prompts for unseen tasks. This is a significant contribution to the field of prompt engineering. Despite not mentioning 'hard prefix prompts,' the focus on task scaling and prompting methods in zero-shot scenarios is pertinent to prompt engineering study. The relevance rating is not a full 10 because the abstract does not explicitly discuss the comprehensive systematic review or focus exclusively on 'hard prefix prompts,' which are specified in the prompt."
-fantastically ordered prompts and where to find them: overcoming few-shot prompt order sensitivity,gpt-4-1106-preview,9,"The study directly investigates the effect of prompt order sensitivity and devises a method to overcome it in few-shot settings, which is highly relevant to prompt engineering. It leverages the generative capabilities of language models to improve the performance of GPT-family models without the need for additional data, indicating a significant contribution to the field of prompt engineering. The deduction of one point is due to the fact that it focuses specifically on order sensitivity and not on the entire scope of hard prefix prompts, but it is still highly pertinent."
-ptr: prompt tuning with rules for text classification,gpt-4-1106-preview,8,"The document presents research on 'prompt tuning with rules' (PTR), which directly relates to the field of prompt engineering study. It involves constructing prompts with sub-prompts and integrating logic rules, which is a form of hard prefix prompt design in the establishment of many-class text classification tasks. The concept of using human prior knowledge and pre-trained language models (PLMs) in prompt construction is relevant to the study of how prompts can guide or improve the performance of machine learning models. However, the rating is not a perfect 10 because the abstract is missing (listed as 'nan'), which suggests that there may be additional context to the relevance that is not provided in the TLDR summary."
-iteratively prompt pre-trained language models for chain of thought,gpt-4-1106-preview,9,"The abstract describes an innovative approach to improving the capability of Pre-trained Language Models (PLMs) for tasks that require multi-step reasoning, an aspect that is central to prompt engineering. This iterative prompting framework that progressively elicits relevant knowledge and dynamically synthesizes prompts based on contexts directly pertains to the field of prompt engineering, as it looks at refining the prompts that are given to language models in order to achieve better performance on complex tasks. While it does not specifically mention 'hard prefix prompts', which is part of the original query, the idea of creating dynamic and context-aware prompts is highly relevant to the study of prompt design and engineering."
-black-box prompt learning for pre-trained language models,gpt-4-1106-preview,9,"The paper presents a method for adapting pre-trained language models (PLMs) through black-box discrete prompt learning without needing access to the model's parameters or gradients, which is highly relevant to the field of prompt engineering. The study focuses on efficient optimization of discrete prompts and even though it does not specifically mention 'hard prefix prompts', the concept of discrete prompts is within the scope of prompt engineering. The proposed black-box setting for secure interaction between cloud and edge devices is innovative and directly linked to the adaptability of PLMs for various tasks using prompts. The paper's significant improvements across benchmarks and in-depth case studies on prompt characteristics are valuable contributions to the study of prompt engineering."
-visual prompt tuning for test-time domain adaptation,gpt-4-1106-preview,8,"The presented work is highly relevant to prompt engineering study as it introduces a method named 'Data-efficient Prompt Tuning' (DePT), which is a direct application of prompt engineering to adapt models during test-time. It focuses on tuning prompts as a parameter-efficient way to adjust model representation to new data domains. Although the term 'prompt' in the context of this paper refers to visual prompts in a vision Transformer, which differs from textual prompts commonly discussed in NLP prompt engineering, the concept of adjusting a small set of parameters for domain adaptation is aligned with the principles of prompt engineering. The reason for not being a 10 is that the term 'hard prefix prompts' was not mentioned, which suggests that the exact topic of the prompt may not be covered in its entirety."
-repository-level prompt generation for large language models of code,gpt-4-1106-preview,9,"The paper presents a framework that directly contributes to the field of prompt engineering by generating example-specific prompts for large language models of code. The fact that this system uses the context of the entire repository and does not rely on the internal weights of the models aligns well with the principles of prompt engineering, where context and relevance are crucial for effective prompt design. The relevance to prompt engineering study is slightly less than perfect only because it is specific to code generation and not the broader application of prompts in general large language models."
-gppt: graph pre-training and prompt tuning to generalize graph neural networks,gpt-4-1106-preview,9,"The paper's abstract describes a novel transfer learning framework, which includes the concept of prompt tuning to generalize Graph Neural Networks (GNNs) for downstream tasks. It is highly relevant to prompt engineering study as it involves modifying prompts (by creating token pairs) to influence the behavior of the pre-trained GNNs without extensive fine-tuning. This approach aligns with the practice of designing prompts to effectively elicit desired responses from pre-trained models, which is central to prompt engineering. The only reason it doesn't receive a full 10 is because the paper is specifically about the domain of graph data and might not cover other aspects or generalities of prompt engineering."
-visual prompt tuning for generative transfer learning,gpt-4-1106-preview,7,"The provided abstract discusses the topic of prompt tuning which is relevant to prompt engineering, a field that deals with optimizing the input given to AI models to elicit better performance. Although the context of the abstract is specific to the domain of generative image models and visual prompts, which is slightly different from hard prefix prompts in the textual domain, the general principles and techniques of prompt tuning can be considered applicable across multiple domains. Hence, the content is substantially relevant to prompt engineering, especially in demonstrating knowledge transfer and domain adaptation which are significant challenges in the field. The lower rating reflects the domain-specific focus on visual transformers rather than a general treatment of all forms of prompt engineering."
-prompt vision transformer for domain generalization,gpt-4-1106-preview,8,"The abstract describes a study that involves prompt learning with vision transformers for the purpose of domain generalization. Although the study does not specifically mention 'hard prefix prompts', it does focus on a prompt-based method (DoPrompt) for improving the performance of ViTs in unseen domains. This is relevant to prompt engineering because it is a direct application of using prompts to enhance model generalization. The relevance rating is not a full 10 because the study does not directly address 'hard prefix prompts' as specified in the initial prompt, but it is closely related and contributes to the field of prompt engineering."
-prompt tuning for discriminative pre-trained language models,gpt-4-1106-preview,8,"The paper presents DPT, a novel framework for prompt tuning in the context of discriminative pre-trained language models, which is highly relevant to the field of prompt engineering as it explores how to adapt PLMs to different tasks. While it does not directly address 'hard prefix prompts', the concept of prompt tuning is central to prompt engineering. The study's systematic approach to reformulating NLP tasks to suit discriminative PLMs and its comprehensive experiments align closely with prompt engineering methodologies. Thus, the paper contributes valuable insights to the broader field of prompt engineering, even if it is not specialized in hard prefix prompts specifically. The rating is not a full 10 due to the abstract's lack of direct reference to hard prefix prompts."
-incremental prompting: episodic memory prompt for lifelong event detection,gpt-4-1106-preview,7,"The presented abstract is relevant to prompt engineering study to a considerable extent because it introduces 'Episodic Memory Prompts (EMP)', which is a technique relevant to prompt engineering. It contributes to the field by addressing the issue of catastrophic forgetting and suggesting a prompt-based method to retain task-specific knowledge in a model that is being continually updated. This is pertinent as it deals with prompt optimization and its role in lifelong learning, both of which fall under the broad umbrella of prompt engineering. However, it is not a 'systematic review on hard prefix prompts' specifically; rather, it is an empirical study about a novel approach to prompting. Hence, the rating is not a full 10, as it does not exactly match the premise of a 'comprehensive systematic review on hard prefix prompts.'"
-prompt-matched semantic segmentation,gpt-4-1106-preview,7,"While the abstract discusses 'prompt learning' in the context of visual foundation models and semantic segmentation, which is somewhat related to the concept of 'prompt engineering,' it refers to a different domain (visual tasks rather than text-based tasks). The relevance to prompt engineering studies is indirect, as the principles of learning prompts for tasks could potentially be analogous across domains. However, the term 'prompt' in this context does not directly correspond to 'hard prefix prompts' typically discussed in language models and prompt engineering. The methodology and application are related in a broader sense to the concept of optimizing pre-trained models using prompts, so it receives a medium-high relevance rating."
-multitask vision-language prompt tuning,gpt-4-1106-preview,9,"The abstract provides a detailed insight into an advanced application of prompt engineering—specifically in the area of multitask vision-language prompt tuning. It is highly relevant to the study of prompt engineering because it discusses a method for improving the performance of vision-language models through task-specific learned prompt vectors and shares empirical evidence of cross-task benefits. Furthermore, the concept of transferable prompts and their effect on model generalization is directly pertinent to the prompt engineering domain. The only reason the rating isn't a full 10 is because the prompt engineering here is specialized for vision-language tasks, which might be slightly narrower in focus than the broader concept of 'hard prefix prompts' mentioned in the initial prompt."
-memory-assisted prompt editing to improve gpt-3 after deployment,gpt-4-1106-preview,9,"The relevance to prompt engineering is very high, as this study focuses on refining the interaction between users and GPT-3 through prompt modification using memory-assisted techniques. The study addresses improving the accuracy of responses from GPT-3 by using recorded instances of misunderstandings and user feedback to inform better prompt construction. This falls directly within the realm of prompt engineering, which is the practice of designing prompts to elicit better performance from language models."
-openprompt: an open-source framework for prompt-learning,gpt-4-1106-preview,9,"The given abstract reviews a toolkit called OpenPrompt designed for prompt-learning in natural language processing, which is highly relevant to the study of prompt engineering. Prompt engineering deals with how to best structure and adapt prompts to get effective responses from language models. While it does not specifically mention 'hard prefix prompts', it offers a framework that likely supports experimenting with various prompt strategies, including hard prefixes. Therefore, the relevance to prompt engineering is high, but not maximum as it does not directly address 'hard prefix prompts'."
-cpt: colorful prompt tuning for pre-trained vision-language models,gpt-4-1106-preview,8,"The abstract describes an innovative approach called Cross-modal Prompt Tuning (CPT) for pre-trained vision-language models (VL-PTMs), which involves a form of prompt engineering by utilizing color-based co-referential markers in image and text to reformulate visual grounding. This is highly relevant to the study of prompt engineering as it presents a specific instance where prompts are engineered to bridge the gap between pre-training and fine-tuning, enhancing the model's performance on downstream tasks with few-shot or zero-shot learning. Although the study focuses specifically on vision-language models and doesn't address hard prefix prompts in general, the concept of tailoring prompts for better performance is directly applicable to the field of prompt engineering. Thus, the rating reflects its high relevance due to its innovative approach to prompt design, with some points deducted for not directly addressing the broader topic of hard prefix prompts."
-differentiable prompt makes pre-trained language models better few-shot learners,gpt-4-1106-preview,8,"The paper presents a method (DART) for enhancing the few-shot learning capabilities of small language models without traditional prompt engineering. Although it claims to bypass 'any prompt engineering,' the method still inherently deals with prompts by differentially optimizing prompt templates. Therefore, it is relevant to the study of prompt engineering since it explores an alternative avenue for prompt manipulation. The rating is not a full 10 because the study appears to focus more on the model's few-shot learning improvement rather than prompting techniques themselves."
-adapting language models for zero-shot learning by meta-tuning on dataset and prompt collections,gpt-4-1106-preview,8,"The study describes a process of 'meta-tuning' pre-trained language models on a variety of datasets and unifying label descriptions in a QA format to optimize them for zero-shot learning. While not specifically addressing 'hard prefix prompts,' it heavily involves the concept of using prompts to improve the performance of language models in tasks they were not explicitly trained for. This is highly relevant to the field of prompt engineering, as it explores how different methods of providing input to models (in this case, through meta-tuning) can result in better alignment with desired outcomes. The TLDR further confirms the study's relevance to prompt engineering by emphasizing the improved performance on answering prompts. However, given that it does not directly study hard prefix prompts, the rating is not a full 10."
-prompt-learning for fine-grained entity typing,gpt-4-1106-preview,9,"The abstract describes a study focused on prompt-learning, which is directly related to the field of prompt engineering. It highlights the use of language prompts to tune pre-trained language models for specific tasks, which is an essential component of research within prompt engineering. The relevance is high because the work specifically investigates prompt-learning methodologies and their applications, including a new self-supervised strategy for zero-shot scenarios, directly contributing to the understanding and advancement of how prompts can improve model performance on a granular level. The only detail preventing a perfect score is the lack of explicit mention of 'hard prefix prompts,' but the described study is likely to have significant implications for prompt engineering in general."
-a good prompt is worth millions of parameters: low-resource prompt-based learning for vision-language models,gpt-4-1106-preview,9,"The abstract clearly pertains to the study of prompt engineering, as it discusses the utilization and effects of prompts in few-shot learning tasks for vision-language models. The research focuses on how different types of prompts (noisy versus hand-crafted) influence the learning process and performance of the model. The mention of 'prefix language modeling' also directly relates to the prompt engineering study, specifically regarding hard prefix prompts. The high score reflects the direct relevance to the study of how prompts can improve or affect the learning capabilities of AI models, despite not exclusively being about 'hard prefix prompts', hence not a perfect score."
-on transferability of prompt tuning for natural language processing,gpt-4-1106-preview,9,"The abstract is highly relevant to prompt engineering as it discusses prompt tuning (PT), which is an efficient method in natural language processing to utilize pre-trained language models with adjustable soft prompts. The study's focus on the transferability of these soft prompts and the implications for efficiency and performance improvements directly relates to the core concepts of prompt engineering. They explore how different prompts affect various models and how that can be harnessed to enhance the PT process. Although the study is not strictly about 'hard prefix prompts' as originally sought, the relevance to prompt engineering is significant, thus the high rating. The explicit mention of 'trained soft prompts' and 'prompt transfer' indicates a direct relationship to engineering the inputs to the language models."
-pada: example-based prompt learning for on-the-fly adaptation to unseen domains,gpt-4-1106-preview,9,"The paper detailed in the prompt directly pertains to prompt engineering, specifically in the application of 'example-based autoregressive Prompt learning for on-the-fly Any-Domain Adaptation'. It focuses on augmenting the ability of the T5 language model to generate prompts that effectively adapt to unseen domains without the need for prior examples or knowledge about the target domain, which is a crucial aspect of prompt engineering. The relevance rating is high because it directly addresses the generation and utilization of prompts to enhance the adaptability and performance of NLP systems in novel contexts, which is central to the study of prompt engineering."
-nsp-bert: a prompt-based few-shot learner through an original pre-training task —— next sentence prediction,gpt-4-1106-preview,9,"The paper described pertains directly to prompt engineering, as it deals with a prompt-based few-shot learner and demonstrates how prompts can be used in conjunction with the BERT model's original pre-training task of Next Sentence Prediction (NSP). The relevance to prompt engineering is clear since it discusses an innovative approach to prompts at the sentence level, contrasting with the common token-level prompts. Furthermore, the paper's focus on how prompt-based learning can be effective in different NLP tasks, and its exploration of factors like the pre-training corpus on the few-shot learning capabilities of the model, are pertinent issues within the study of prompt engineering."
-lightner: a lightweight generative framework with prompt-guided attention for low-resource ner,gpt-4-1106-preview,8,"The paper discusses the use of 'prompt-guided attention' within a generative framework for Named Entity Recognition (NER) in low-resource settings. This approach is quite relevant to prompt engineering, as it involves the manipulation of continuous prompts to improve the performance of a pre-trained language model on a specific task, without the need for extensive re-training or large datasets. Although the paper is specifically about NER and not about the broader topic of 'hard prefix prompts', the concept of integrating prompts into the attention mechanism is very much related to the study of how prompts can be effectively used to direct the focus of language models. The rating is not a full 10 because it concentrates on a specific application (NER) and does not cover the entire breadth of prompt engineering, which could also include other tasks and models."
-pada: a prompt-based autoregressive approach for adaptation to unseen domains,gpt-4-1106-preview,8,"The abstract describes PADA, a prompt-based approach, which is directly related to prompt engineering as it involves the generation of unique prompts to adapt to unseen domains in NLP tasks. The approach's autoregressive nature and its reliance on Domain Related Features (DRFs) suggest a nuanced and advanced application of prompt engineering. While the study seems to focus more on domain adaptation rather than hard prefix prompts specifically, the technique's success in outperforming other approaches highlights its relevance to the broader field of prompt engineering and its potential contributions to the prompt engineering literature. The paper could provide valuable insights into designing effective prompts for domain adaptation, which is a subset of the overall prompt engineering research area."
-why johnny can’t prompt: how non-ai experts try (and fail) to design llm prompts,gpt-4-1106-preview,9,"The study described in the title and abstract addresses a core aspect of prompt engineering by investigating whether non-AI experts are capable of designing effective prompts for large language models (LLMs). It directly focuses on the challenges and learnability of prompt design, which is highly relevant to the field of prompt engineering. The rating is not a perfect 10 because the study appears to focus on the end-user experience and may not delve into technical aspects or methodologies of prompt crafting, such as hard prefix prompts, as much as a more narrowly focused technical paper would."
-the power of prompt tuning for low-resource semantic parsing,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering as it specifically investigates 'prompt tuning', which is a technique within the domain of prompt engineering. The focus on how prompt tuning can enhance the performance of language models for the semantic parsing task suggests that this paper contributes to the understanding and application of prompt engineering. However, it may not cover all aspects of prompt engineering, such as the creation or manipulation of hard prompts, therefore the rating is not a full 10."
-the biases of pre-trained language models: an empirical study on prompt-based sentiment analysis and emotion detection,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering as it focuses on the biases of PLMs when used in prompt-based tasks such as sentiment analysis and emotion detection. These findings are directly applicable to prompt engineering since the biases in label-word mappings, prompt templates, formation of prompts, and others impact how prompts are engineered for effective interaction with PLMs. The high rating is due to the direct investigation and empirical study of issues that would be fundamental to anyone engaged in engineering prompts for PLMs."
-adaprompt: adaptive prompt-based finetuning for relation extraction,gpt-4-1106-preview,8,"The paper presents an approach that is highly relevant to prompt engineering as it involves the novel use of adaptive prompts in the context of fine-tuning language models for relation extraction, a specific NLP task. The adaptive label words selection mechanism directly relates to how prompts are engineered to handle complex label spaces, and the auxiliary entity discriminator may be considered a form of prompt that encourages the model to concentrate on certain aspects of input data. Thus, the relevance to prompt engineering studies is significant, though not perfect, as the paper might not cover the entire breadth of prompt engineering topics."
-prompt waywardness: the curious case of discretized interpretation of continuous prompts,gpt-4-1106-preview,9,"The study addresses a central issue in prompt engineering by exploring the relationship between continuous and discrete prompt formats and their effectiveness in solving language tasks. The investigation into the 'waywardness' of prompt behavior is highly relevant to developing more robust and interpretable prompting methods, which aligns closely with the field of prompt engineering. The only reason the rating is not a full 10 is because the study does not specifically mention 'hard prefix prompts' but rather deals with continuous prompts more broadly."
-sentiprompt: sentiment knowledge enhanced prompt-tuning for aspect-based sentiment analysis,gpt-4-1106-preview,8,"The study presents a method of enhancing language model performance for aspect-based sentiment analysis through the use of customized prompts that incorporate sentiment knowledge. This directly relates to the engineering of prompts, as it involves designing and applying specialized prompt structures (consistency and polarity judgment templates) to improve task-specific model outputs. While the study is not just about 'hard prefix prompts', it still involves the systematic design of prompts to encode task-specific knowledge, which is a significant component of prompt engineering. Therefore, it gets a high relevance score but is not a perfect match due to the specificity of 'hard prefix prompts' not being the central focus."
-automated cross-prompt scoring of essay traits,gpt-4-1106-preview,7,"The abstract describes a study on cross-prompt automated essay scoring, which is not directly related to 'hard prefix prompts' or prompt engineering. However, the methodology involves training models to understand and score various traits of essay text, likely making use of several prompt design considerations to generalize across different essay prompts. While not explicitly focused on prompt engineering, the research indirectly involves the creation of prompts that can elicit features used for trait-focused scoring. Thus, the relevance to prompt engineering is moderate due to its indirect but significant implications for designing prompts that can be effectively utilized by AES systems in various contexts."
-masterkey: automated jailbreak across multiple large language model chatbots,gpt-4-1106-preview,8,"The abstract discusses a study related to 'jailbreak' attacks on Large Language Models (LLMs), which directly involve the manipulation of prompts to achieve unintended outcomes. This is highly relevant to the field of prompt engineering because it pertains to understanding how prompts can be engineered to exploit or circumvent the intended use of LLMs. Although the specific term 'hard prefix prompts' is not mentioned, the concept of automated jailbreak prompt generation suggests a close relationship with prompt engineering techniques. The research's emphasis on reverse-engineering defensive strategies and developing countermeasures is also pertinent to the design and analysis of prompts in LLMs. The rating is not a full 10 as the abstract doesn't directly address 'hard prefix prompts' specifically, but rather the broader issue of jailbreak prompts."
-gptfuzzer: red teaming large language models with auto-generated jailbreak prompts,gpt-4-1106-preview,8,"The 'gptfuzzer: red teaming large language models with auto-generated jailbreak prompts' study is highly relevant to prompt engineering, but with a specific focus on security and adversarial testing. The research presented automates the generation of jailbreak prompts, which are a subset of prompts aimed at testing the robustness and safety of LLMs. This aspect makes it relevant as it deals with the automated creation and effectiveness of hard prefix prompts, tasks that closely relate to prompt engineering. Nonetheless, it does not cover the broader aspects of prompt engineering, such as optimizing prompts for constructive tasks, rephrasing for better understanding, or improving human-AI interaction, hence the rating is not a full 10."
-autodan: generating stealthy jailbreak prompts on aligned large language models,gpt-4-1106-preview,8,"The paper directly deals with the issue of creating prompts that can influence the behavior of Large Language Models (LLMs), which is a subset of prompt engineering. Although it focuses on generating adversarial or 'jailbreak' prompts, rather than constructive hard prefix prompts, the techniques and insights from such a study could be highly relevant to prompt engineering, particularly in understanding and preventing unintended responses from LLMs. However, the relevance is not a perfect 10 as the study's primary goal is to address security concerns rather than the broader scope of prompt engineering for beneficial use cases."
-developing an accuracy-prompt toolkit to reduce covid-19 misinformation online,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it explores various accuracy prompts that could be used to encourage the sharing of accurate information online, particularly in the context of COVID-19. The effectiveness of different prompts and their impact on behavior is central to the field of prompt engineering. However, the specificity of the prompts to the domain of misinformation may not encompass the full breadth of prompt engineering, which can also include prompts for eliciting information, generating text, or other interactions in user interfaces beyond accuracy checking."
-not what you've signed up for: compromising real-world llm-integrated applications with indirect prompt injection,gpt-4-1106-preview,8,"The abstract presents an in-depth look at how natural language prompts can be used maliciously to exploit LLM-integrated applications, which is closely relevant to the field of prompt engineering. It reveals new attack vectors in the form of Indirect Prompt Injection and stresses the importance of understanding prompts from a security perspective. While it does not focus solely on 'hard prefix prompts', the study of adversarial prompting is critical to the broader domain of prompt engineering where designing robust and secure prompts is key. Hence, the information is highly relevant, though not exclusively centered on hard prefix prompting methodologies."
-hard prompts made easy: gradient-based discrete optimization for prompt tuning and discovery,gpt-4-1106-preview,9,"The abstract details a study focused on prompt engineering, specifically regarding the optimization of 'hard' prompts, which are highly relevant to the field of prompt engineering. It introduces a method for automatically generating and optimizing these prompts, which aligns closely with the study of engineering prompts that are interpretable and easily manipulated. Furthermore, it has applications in both text-to-image and text-to-text models, indicating a broad relevance to different aspects of prompt engineering. The only reason for not giving a full score of 10 is that the abstract does not explicitly mention a 'systematic review', which suggests that the work may be more focused on original research or methodology rather than reviewing existing literature on hard prefix prompts."
-more than you've asked for: a comprehensive analysis of novel prompt injection threats to application-integrated large language models,gpt-4-1106-preview,8,"The paper discusses 'prompt injection threats' in Large Language Models (LLMs) which are closely related to prompt engineering as it concerns how prompts are constructed and how they can be manipulated. Prompt engineering involves the strategic creation of prompts to guide the behavior of LLMs, and understanding prompt injection threats is crucial for developing robust and secure prompt engineering methods. Although the paper focuses more on security threats than on prompt engineering in general, the systematic analysis and discussion of these threats are highly relevant for developing better prompt engineering practices."
-catastrophic jailbreak of open-source llms via exploiting generation,gpt-4-1106-preview,7,"The abstract details research on exploiting large language models (LLMs) through what is termed 'generation exploitation attack', by altering decoding methods. This is relevant to prompt engineering since understanding how different decoding methods and adversarial prompts affect the model's outputs can inform the development of better prompts. Moreover, the work's exploration of alignment methods to counteract the attack implies the significance of structured prompts to maintain LLMs' alignment with human values. While the study is not focused on 'hard prefix prompts' explicitly, it deals with model manipulations related to input prompts, hence the rating of 7 for its partial but significant relevance to prompt engineering study."
-jailbreak and guard aligned language models with only few in-context demonstrations,gpt-4-1106-preview,9,"The abstract details an investigation into the application of In-Context Learning (ICL) for manipulating language models, which falls under the domain of prompt engineering. The study assesses the ability to guide language models towards either harmful or safe responses by providing specific examples or 'prompts'. Although the main focus is on the security aspect of language models, the techniques mentioned—In-Context Attack (ICA) and In-Context Defense (ICD)—are directly relevant to prompt engineering as they involve crafting prompts that significantly alter a model's outputs. Hence, the relevance to prompt engineering is high, but since the study seems to be more targeted at security (alignment and guarding against jailbreaking) rather than on prompt engineering in general, the rating is not a perfect 10."
-prompt as triggers for backdoor attack: examining the vulnerability in language models,gpt-4-1106-preview,7,"The paper is relevant to prompt engineering as it discusses utilizing the prompt itself as a potential vector for backdoor attacks in language models, which falls under prompt manipulation and its potential risks. This indicates a direct relationship to the design and usage of prompts within AI models, showing the consequences that can arise from prompt engineering. However, it may not address the broader scope of prompt engineering techniques and their applications directly, focusing instead on the security aspect and vulnerability of the models to prompt-based attacks."
-notable: transferable backdoor attacks against prompt-based nlp models,gpt-4-1106-preview,8,"The abstract describes a study that is highly relevant to prompt engineering as it specifically addresses vulnerabilities in prompt-based learning models. The focus on backdoor attacks that are independent of downstream tasks and prompting strategies indicates a notable concern for the prompt engineering domain, considering the increasing utilization of such models in various NLP tasks. The high relevance score is due to the direct relation to prompt-based models' security, an aspect that is crucial for understanding and improving prompt engineering techniques. However, the score is not a full 10, as the primary focus is on security, and while related, it does not exclusively cover the broader range of prompt engineering topics such as prompt design or optimization."
-prompts should not be seen as secrets: systematically measuring prompt extraction attack success,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering studies as it addresses the security aspect of prompt-based control of large language models. It directly explores how the prompts, which are integral to shaping model outputs, can be uncovered through extraction attacks. This is crucial for understanding the integrity and confidentiality of proprietary prompting methods, although it is a specialized focus on prompt security rather than the broader field of designing or optimizing prompts for general use."
-sam on medical images: a comprehensive study on three prompt modes,gpt-4-1106-preview,7,"The study described in the title and abstract does revolve around the use of prompts—in this case, to guide a machine learning model (SAM) for the task of image segmentation. The research explores different prompt modes, specifically in relation to the performance of zero-shot generalization on medical images, which is a form of prompt engineering in the context of 'foundation models'. The relevance is not at the maximum because the prompt engineering mentioned here mostly refers to the application of prompt types like bounding boxes, rather than the systematic study of 'hard prefix prompts' that might be involved in other areas like NLP or more complex interactions. Nevertheless, the research still contributes to the field of prompt engineering by investigating how different prompts affect model performance, thus the rating is above average."
-tdnn: a two-stage deep neural network for prompt-independent automated essay scoring,gpt-4-1106-preview,7,"The abstract pertains to the development of a deep neural network for automated essay scoring that is designed to work under a prompt-independent setting. This is somewhat relevant to prompt engineering as it relates to the broader field of natural language processing and the automated response to prompts (essays). However, the system is not centered around the creation or manipulation of prompts itself (prompt engineering), but rather on evaluating responses to prompts, which is indirectly related to understanding the prompts' influence on the response. Therefore, the relevance is notable but not direct."
-an llm can fool itself: a prompt-based adversarial attack,gpt-4-1106-preview,9,"The study directly addresses prompt engineering by proposing PromptAttack, a method that uses a prompt-based approach to generate adversarial attacks against large language models (LLMs). The study's focus on how prompts can be engineered to manipulate LLM outputs is highly relevant to the field of prompt engineering. The only reason it does not receive a full score is that the study is focused specifically on adversarial attacks rather than a broader range of prompt engineering applications."
-llm-grounded diffusion: enhancing prompt understanding of text-to-image diffusion models with large language models,gpt-4-1106-preview,7,"The study focuses on enhancing the understanding of complex prompts in text-to-image diffusion models by incorporating a large language model, which relates to prompt engineering as it involves interpreting and acting upon language input. While the study is not explicitly about 'hard prefix prompts' in the context of comprehensive systematic reviews, the improvement of prompt understanding and the interaction between language models and diffusion models is relevant to the broader field of prompt engineering. Therefore, the relevance rating is relatively high, but not maximum due to the lack of direct focus on 'hard prefix prompts' specifically."
-prompt distillation for efficient llm-based recommendation,gpt-4-1106-preview,7,"The provided abstract directly relates to prompt engineering in the context of improving the efficiency of large language models (LLMs) for recommendation systems. Prompt distillation, as discussed in the abstract, is a technique aimed at refining the use of prompts in LLMs, which falls within the scope of prompt engineering. Although the term 'hard prefix prompts' is not explicitly mentioned, the concept of distilling discrete prompts to continuous vectors is relevant to the broader study of how prompts are structured and optimized for LLMs. Therefore, the relevance is high but not maximal due to the lack of specificity regarding 'hard prefix prompts'."
-"compress, then prompt: improving accuracy-efficiency trade-off of llm inference with transferable prompt",gpt-4-1106-preview,9,"The study highly relates to prompt engineering since it focuses on improving the performance of compressed Large Language Models (LLMs) by means of prompt engineering (i.e., the use of 'hard prompts'). The research suggests a method for enhancing prompt efficacy via a 'soft prompt learning method,' which is specifically tailored to work with compressed models. Although the primary focus of the paper is on model compression and its impact on efficiency and accuracy, the core of the study involves refining the prompt engineering process to ensure high-quality performance from these compressed models. The fact that the study explores the transferability of learned prompts to different tasks and models also demonstrates depth in research pertaining to prompt design and optimization, which is a fundamental aspect of prompt engineering."
-prompt sapper: llm-empowered software engineering infrastructure for ai-native services,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering study as it directly discusses the role of prompts in AI-native services and how natural language prompts can be used as executable code, which aligns with the subject of hard prefix prompts in the context of natural language processing and command execution. Although the paper does not specifically mention 'hard prefix prompts', the focus on prompt-based interaction systems and infrastructure indicates a clear relationship with the broader topic of prompt engineering, warranting a high relevance rating. The deduction in the score accounts for the lack of explicit mention of 'hard prefix prompts', which may be a key term if the research sought to target that specific sub-domain within prompt engineering."
-prompt sapper: a llm-empowered production tool for building ai chains,gpt-4-1106-preview,8,"The paper introduces 'Prompt Sapper', a tool designed to help build AI services using foundation models like GPT-4. This is highly relevant to prompt engineering because the tool is meant to streamline the process of creating prompt-based AI services. It focuses on incorporating software engineering principles into AI chain engineering, which includes prompt engineering as a subset. The tool aims to make this process more accessible, efficient, and correct, which directly impacts the field of prompt engineering. The rating is not a full 10 because the abstract does not detail the specifics of 'hard prefix prompts' or focus solely on prompt engineering; it discusses AI chain engineering more broadly, of which prompt engineering is a part."
-artificial intelligence for health message generation: an empirical study using a large language model (llm) and prompt engineering,gpt-4-1106-preview,9,"The given abstract directly pertains to the use of prompt engineering within the context of generating health awareness messages using a large language model. The study focuses on the method of using AI-generated prompts to compare message quality, clarity, and semantic content with human-generated content. The high relevance comes from the practical application of prompt engineering in creating AI-generated messages and the systematic evaluation of their effectiveness against a human-generated benchmark. It is slightly less than a perfect score because the study is specific to health messages and does not cover all aspects of prompt engineering, such as 'hard prefix prompts' which the original prompt suggests may be of particular interest."
-"exploring the relationship between llm hallucinations and prompt linguistic nuances: readability, formality, and concreteness",gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it investigates how various linguistic aspects of prompts affect the behavior of Large Language Models (LLMs), particularly in the context of hallucination, which is a significant issue related to the performance and reliability of LLMs. Understanding the relationship between prompt nuances and LLM output is central to prompt engineering. The only reason for not giving a full score is that the abstract specifies an exploratory investigation, indicating that the findings might not be comprehensive or definitive, which would be necessary for a perfect relevance rating."
-promptcrafter: crafting text-to-image prompt through mixed-initiative dialogue with llm,gpt-4-1106-preview,8,"The presented paper focuses on a mixed-initiative system called PromptCrafter that aids in the crafting of text-to-image prompts using a step-by-step process facilitated by a Large Language Model. While it does not explicitly address 'hard prefix prompts', it is substantially related to the field of prompt engineering. It deals with the refinement of prompts and user interaction with language models to produce specific outputs, which are central issues in prompt engineering studies. Therefore, it is highly relevant in terms of offering practical solutions and methodologies for improving prompt design, even if it does not directly tackle the concept of hard prefixes."
-two-stage llm fine-tuning with less specialization and more generalization,gpt-4-1106-preview,9,"The abstract describes a method (ProMoT) directly addressing the issues related to prompt engineering by proposing a two-stage fine-tuning framework that reduces format specialization and improves generalization, which is highly relevant to engineering more adaptable and effective prompts for large language models (LLMs). The fact that it seeks to enhance in-context learning through prompt tuning suggests a close connection to the field of prompt engineering, making the study's relevance to prompt engineering very high. The only reason it does not get a 10 is because it doesn't focus exclusively on 'hard prefix prompts' as the original query specifies, but rather on prompt tuning in a broader sense."
-llm-adapters: an adapter family for parameter-efficient fine-tuning of large language models,gpt-4-1106-preview,7,"The paper's focus on parameter-efficient fine-tuning (PEFT) of large language models (LLMs) through the use of adapters is relevant to prompt engineering, as it deals with the modification and adaptation of LLMs for specific tasks, which is intrinsic to prompt engineering. However, the study does not directly address 'hard prefix prompts,' which is the specific topic of interest. Although the techniques described could potentially be applied to improve the efficiency of prompt-based learning methods, the abstract does not explicitly mention the application to prompt engineering. Nevertheless, the relevance lies in the broader context of adapting and improving the performance of LLMs in different tasks, which is tangential to the field of prompt engineering."
-llm-eval: unified multi-dimensional automatic evaluation for open-domain conversations with large language models,gpt-4-1106-preview,7,"The abstract describes 'LLM-eval,' an evaluation method for open-domain conversations with large language models, focusing on using single prompt-based approaches for comprehensive assessment. While it does not explicitly address 'hard prefix prompts' or prompt engineering studies, the methodology is relevant for understanding how prompt-based systems can be evaluated. Since prompt engineering is a key element in defining how language models interpret and respond to prompts, this study could indirectly contribute to the field by providing a framework for evaluating the effectiveness of different prompt strategies, albeit without directly targeting hard prefix prompts."
-a first look at llm-powered generative news recommendation,gpt-4-1106-preview,8,"The abstract describes using a language model for personalized news recommendation, which implies that the system employs some form of prompt engineering to generate or summarize news according to a user's interests. The concept of moving from model design to prompt design suggests that prompt engineering is a significant component of the research. However, the study focuses more on the application of LLMs for recommendation systems rather than on the study of hard prefix prompts in isolation or comprehensive systematic reviews on prompt engineering. Therefore, the relevance is high but not entirely focused on prompt engineering study as it relates to the broader application within recommendation systems."
-llm-assisted generation of hardware assertions,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it involves utilizing natural language prompts to generate code assertions, a clear instance of applying language model prompting to a specialized domain. The use of prompts in this case directly pertains to the concept of 'prompt engineering,' which is about optimizing inputs for language models to achieve desired outputs. However, since the focus is specifically on code generation for security assertions within hardware and not on hard prefix prompts in a broader context, it might not cover all aspects of prompt engineering study. This results in a slightly lower rating."
-certifying llm safety against adversarial prompting,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering as it directly addresses the challenge of adversarial prompting and the need for developing techniques to ensure safe outputs from large language models (LLMs). Since prompt engineering involves crafting inputs that can influence or guide a model's behavior, the presented 'erase-and-check' framework is a significant contribution to understanding and mitigating the risks posed by adversarial prompts. The study’s focus on certifying the safety of prompts against adversarial attacks is essential for advancing the field of prompt engineering while ensuring responsible use of LLMs. It only slightly misses a perfect score because it does not directly cover 'hard prefix prompts,' but it extensively pertains to the broader domain of prompt safety and adversarial resistance."
-graph-toolformer: to empower llms with graph reasoning ability via prompt augmented by chatgpt,gpt-4-1106-preview,7,"The abstract discusses a method to enhance large language models (LLMs) by teaching them graph reasoning abilities through the use of prompts augmented by ChatGPT. This is related to prompt engineering since it involves developing ways to optimize prompts to extend the capabilities of LLMs into new domains, like graph reasoning. However, the core focus of the study is on integrating external API tools with LLMs rather than the actual crafting or systematic review of 'hard prefix prompts' specifically.
Therefore, while relevant due to the utilization of prompts, it doesn't directly address a comprehensive review of prompt engineering methodologies or specifics of 'hard prefix prompts,' leading to a score that indicates moderate relevance rather than being fully on-topic." -velma: verbalization embodiment of llm agents for vision and language navigation in street view,gpt-4-1106-preview,7,"The abstract describes VELMA as an embodied LLM agent that uses verbalization for navigation, which implies a form of prompt engineering is used to translate visual information into text prompts for the LLM to make decisions. Although not specifically about 'hard prefix prompts,' it does involve constructing and using prompts in a multimodal context (vision and language). Therefore, it is relevant to the field of prompt engineering, but slightly indirectly as the main focus seems to be on navigation and embodiment rather than prompt engineering itself." -llm-empowered chatbots for psychiatrist and patient simulation: application and evaluation,gpt-4-1106-preview,8,"The abstract describes research that is highly relevant to prompt engineering as it specifically addresses the impact of prompt designs on chatbot behavior and user experience. While it doesn't directly mention 'hard prefix prompts,' the study of prompt designs in the context of chatbot performance is directly related to the field of prompt engineering. Therefore, the findings could contribute valuable insights into the subtleties of prompt crafting and optimization, particularly in mental health applications." -chain-of-thought prompting for responding to in-depth dialogue questions with llm,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it investigates an approach (chain-of-thought prompting) to enhance the interaction between users and large language models (LLMs) by focusing on personalizing responses based on user status (personality, emotion, psychology). While it does not directly address 'hard prefix prompts,' it contributes to the field of prompt engineering by exploring advanced prompting techniques aimed at improving the efficacy and personalization of LLM responses. The relevance would be higher if the study specifically addressed hard prefix prompts, but it is still significant due to its focus on improving the quality of prompts and user-model interactions." -trapping llm hallucinations using tagged context prompts,gpt-4-1106-preview,8,"The study addresses the issue of 'hallucinations' in large language models (LLMs) and proposes a methodology that includes the use of context and embedded tags to mitigate this problem. Since prompt engineering involves crafting inputs to effectively interact with LLMs and obtain desired outputs, the technique described in the paper to minimize hallucinations is quite relevant to prompt engineering. It is likely to contribute to designing better prompts that can control or guide model behavior, ensuring more accurate responses. However, the study's specific focus is on combating hallucinations rather than on prompt engineering in its entirety, which explains why the rating is not a perfect 10." -mental-llm: leveraging large language models for mental health prediction via online text data,gpt-4-1106-preview,7,"The study involves the evaluation of large language models (LLMs) with a focus on prompt designs such as zero-shot and few-shot prompting, which are directly related to the field of prompt engineering. 
Moreover, it discusses instruction fine-tuning, which is a more advanced form of prompt engineering that tailors the model to specific tasks. Although the main application discussed in the study is mental health prediction, which is not directly related to 'hard prefix prompts,' the methodology and findings could have implications for prompt engineering in general, making it moderately relevant to the field." -free-bloom: zero-shot text-to-video generator with llm director and ldm animator,gpt-4-1106-preview,8,"The abstract describes using large language models (LLMs) to generate a 'semantic-coherent prompt sequence', which is directly relevant to prompt engineering, particularly in the niche area of text-to-video generation. While the study focuses more on the application of these prompts to generate video rather than the systematic review of hard prefix prompts themselves, the creation and optimization of prompts remains a central component of the research, justifying a high relevance rating." -benchmarking a foundation llm on its ability to re-label structure names in accordance with the aapm tg-263 report,gpt-4-1106-preview,7,"The study described in the title and abstract is relevant to prompt engineering to a significant extent because it involves using a large language model (GPT-4) with specifically tuned prompts to perform a complex, domain-specific task. However, while the focus of the study is on the application of an LLM to re-label structure names in medical imaging in accordance with a specific standard, it also implicitly involves designing and refining prompts to obtain this accurate outcome. This prompt engineering aspect is an essential part of the study as it directly affects the performance of the LLM, but the study is not explicitly about prompt engineering methodologies or their systematic review. Therefore, the rating is not a perfect 10, but still notably high due to the implicit involvement of prompt fine-tuning and the potential insights it might offer for prompt engineering best practices." -promptly: using prompt problems to teach learners how to effectively utilize ai code generators,gpt-4-1106-preview,9,"The paper directly addresses prompt engineering by introducing the concept of 'Prompt Problems', which are designed to teach students how to effectively craft prompts for large language models that generate code. This is highly relevant to the study of prompt engineering as it focuses on improving the interaction between humans and AI through the construction of effective prompts. Although the paper doesn't specifically mention 'hard prefix prompts', it addresses the broader concept of prompts in the context of educational settings, which is why the rating is not a perfect 10." -promptbreeder: self-referential self-improvement via prompt evolution,gpt-4-1106-preview,9,"The abstract describes a system that revolves around the core idea of evolving and improving prompts, which is directly relevant to the study of prompt engineering. Since the system, Promptbreeder, is designed to enhance the ability of Large Language Models through prompt adaptation and is being compared to other prompt strategies, it holds significant relevance to the field. The only reason it does not receive a full score is that it may not relate exclusively to 'hard prefix prompts' as specified in the initial inquiry but addresses a broader scope of prompt engineering." 
-backdooring instruction-tuned large language models with virtual prompt injection,gpt-4-1106-preview,9,"The paper discusses the concept of Virtual Prompt Injection (VPI), which directly relates to manipulating the behavior of Large Language Models (LLMs) through the use of hidden or embedded prompts. This is a specific, albeit adversarial, example of prompt engineering. It demonstrates how the model's response can be engineered to follow certain instructions without visible modification to the prompt input. Since prompt engineering is about designing prompts to achieve desired outputs from a model, this study is highly relevant as it explores the consequences and defensive strategies related to prompt manipulation. Although the focus is on a security vulnerability, understanding such backdoor methods contributes to a broader comprehension of how prompt mechanisms work in LLMs and the importance of data integrity in instruction tuning."
-on the role of attention in prompt-tuning,gpt-4-1106-preview,9,"The provided abstract discusses the use of prompt-tuning within the context of attention mechanisms in language models, which is directly relevant to studies on prompt engineering. It provides insights into how prompts can be used to direct attention to relevant tokens within a given input, which is a crucial aspect of how prompts function in large language models. The abstract also mentions contextual data models and the expressiveness of prompt-tuning, indicating a deep exploration into prompt mechanics. The only reason it doesn't receive a perfect score is the absence of specific mention of 'hard prefix prompts', but otherwise, it has a high relevance to the field of prompt engineering."
-a prompt log analysis of text-to-image generation systems,gpt-4-1106-preview,7,"The study is relevant to prompt engineering to a large extent, as it delves into the analysis of prompt logs from text-to-image generation systems, which is a direct application of understanding user interaction with prompts and could inform better prompt design. However, it focuses more on the analysis of user prompts and behavior rather than the construction of hard prefix prompts, which would be more closely aligned with 'prompt engineering' as it pertains to the design, syntax, and semantics of prompts themselves."
-privacy-preserving prompt tuning for large language model services,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering as it addresses prompt tuning, which is a method of customizing LLMs for specific tasks or applications. The concept of privacy-preserving mechanisms within the realm of prompt tuning is pertinent to prompt engineering study because it expands the scope of how prompts can be engineered, taking into account the crucial aspect of user privacy. The fact that this paper also introduces a novel approach to improve LLMs' learning with privatized data indicates a significant contribution to the field of prompt engineering. The reason the relevance rating is not a full 10 is because it focuses more on the privacy aspect than on the general techniques or effectiveness of prompt engineering."
-deep language networks: joint prompt training of stacked llms using variational inference,gpt-4-1106-preview,9,"The abstract discusses the optimization of natural language prompts in stacked large language models (LLMs), which is directly relevant to the field of prompt engineering. The focus on learning and training prompts within a deep learning architecture (DLN) highlights crucial aspects of prompt design and efficacy. This paper would be quite significant for someone studying prompt engineering, as it provides insight into how prompts can be optimized to improve the performance of language models."
-are chatbots ready for privacy-sensitive applications? an investigation into input regurgitation and prompt-induced sanitization,gpt-4-1106-preview,7,"The study investigates how LLM-powered chatbots handle sensitive information when provided in prompts and how instructions could influence the chatbot's ability to sanitize outputs to comply with privacy regulations. While this does not specifically address 'hard prefix prompts,' it is closely related to prompt engineering because it examines how specific instructions in prompts can affect the information handling of chatbots. The research could inform the development and refinement of prompts that elicit desired privacy-compliant behaviors from the models, which is a critical aspect of prompt engineering in privacy-sensitive applications."
-extracting accurate materials data from research papers with conversational language models and prompt engineering - example of chatgpt,gpt-4-1106-preview,9,"The discussed paper is highly relevant to the field of prompt engineering study because it proposes a new method, 'ChatExtract', which utilizes engineered prompts in a conversational language model to automate data extraction from research papers. These prompts are specifically designed to identify pertinent data and ensure its accuracy, addressing a key challenge in prompt engineering. Although the paper does not specifically mention 'hard prefix prompts', it is an application of prompt engineering for a specific and practical task, thus meriting a high relevance score. Prompt engineering is central to the performance of the ChatExtract method, as it hinges on the quality of the prompts to retrieve and validate information from the language model."
-discrete prompt optimization via constrained generation for zero-shot re-ranker,gpt-4-1106-preview,9,"The abstract describes a study focused specifically on the optimization of prompts for a zero-shot re-ranker, which is directly connected to prompt engineering. The proposed discrete prompt optimization method, Co-Prompt, is highly relevant to the field since it addresses the creation and refinement of prompts to improve the performance of pre-trained language models on specific tasks without additional parameter updates. This approach is an important aspect of prompt engineering, hence the high relevance rating. The study appears to contribute valuable insights into prompt effectiveness and optimization, which are key areas of interest in prompt engineering. The reason for not giving a perfect score is that it does not explicitly mention 'hard prefix prompts' as referred to in the original query, but its connection to prompt optimization is clear and significant."
-sweeping heterogeneity with smart mops: mixture of prompts for llm task adaptation,gpt-4-1106-preview,9,"The abstract presents research on using a 'Mixture of Prompts' with 'smart gating functionality' to enhance the performance of Large Language Models (LLMs) on heterogeneous tasks. This is highly relevant to prompt engineering as it directly addresses the optimization of prompts for task adaptation in LLMs. It investigates a method of improving prompt tuning for diverse tasks, which is a core issue in prompt engineering. The paper aims to reduce training interference and improve efficiency, areas of significant interest in the prompt engineering field. The reasoning behind not giving a full 10 rating is that the abstract does not explicitly mention 'hard prefix prompts,' the specific focus of the prompt engineering study indicated in the query."
-cataloging prompt patterns to enhance the discipline of prompt engineering,gpt-4-1106-preview,9,"The paper is highly relevant to the field of prompt engineering as it directly addresses the conceptualization and codification of prompt patterns to enhance interactions with Large Language Models (LLMs) such as ChatGPT. It underscores the significance of establishing more systematic and repeatable approaches within prompt engineering to improve the performance and evaluation of LLMs across various domains. The only reason for not giving a full 10 is because the abstract does not explicitly mention 'hard prefix prompts', which is the specialized topic of the study in question (assuming that 'hard prefix prompts' refer to a specific subset or technique within prompt engineering)."
-promptcare: prompt copyright protection by watermark injection and verification,gpt-4-1106-preview,7,"While the article 'promptcare: prompt copyright protection by watermark injection and verification' addresses prompts in the context of Large Language Models and is relevant to the field of prompt engineering, it focuses more specifically on the protection of intellectual property associated with prompts rather than on the techniques for engineering prompts to improve model performance (which would be directly related to 'hard prefix prompts'). However, the study does contribute to the broader ecosystem of prompt engineering by ensuring the safe and authorized use of prompts, which can be considered an aspect of the prompt engineering life cycle. Therefore, it receives a mid-high relevance rating."
-batch calibration: rethinking calibration for in-context learning and prompt engineering,gpt-4-1106-preview,9,"The abstract describes a comprehensive analysis of calibration methods to reduce prompt brittleness and biases in large language models, which is directly related to prompt engineering. The study seems to offer a novel contribution with the Batch Calibration method, aiming to improve the effectiveness of prompts and in-context learning. Although it does not explicitly mention 'hard prefix prompts', the content is highly relevant to the broader field of prompt engineering, hence the high relevance rating."
-survival of the most influential prompts: efficient black-box prompt search via clustering and pruning,gpt-4-1106-preview,9,"The paper directly addresses the process of optimizing prompt-based learning for large language models by introducing an efficient black-box prompt search method. The inclusion of clustering and pruning to focus on influential prompt tokens is highly relevant for the field of prompt engineering, as it seeks to refine the approach by which prompts are selected and used to drive LLM predictions. The presented Clustering and Pruning for Efficient Black-box Prompt Search (ClaPS) technique is pertinent to the challenge of search space design in prompt engineering. The study's focus on enhancing the efficiency of the prompt search process validates its high relevance to the topic, although it may not cover the full breadth of 'hard prefix prompts' and could be missing some other aspects of prompt engineering not detailed in the abstract."
-prompt engineering or fine tuning: an empirical assessment of large language models in automated software engineering tasks,gpt-4-1106-preview,9,"The study directly explores multiple prompt engineering techniques applied to GPT-4 for ASE tasks. The empirical assessment compares the efficacy of prompt engineering against fine-tuned models, providing valuable insights into the current capabilities and limitations of prompt engineering. The high relevance score reflects the detailed analysis of specific prompting strategies, such as task-specific prompting and conversational prompts, which contributes significantly to the body of knowledge on prompt engineering."
-poisonprompt: backdoor attack on prompt-based large language models,gpt-4-1106-preview,9,"The study titled 'poisonprompt: backdoor attack on prompt-based large language models' is highly relevant to prompt engineering as it directly deals with the security vulnerabilities associated with the use of prompts in Large Language Models, which can be either hard (fixed) or soft (more flexible). Although the study's primary focus is on the backdoor attack mechanism (POISONPROMPT), it inherently contributes to the understanding and advancement of prompt engineering by identifying potential threats and exploring the robustness of different prompting methods. This information is crucial for researchers and practitioners working on prompt engineering to create more secure and reliable systems. The rating is not a full 10, as the paper focuses more on the security aspect rather than core prompt engineering techniques or their optimization for better performance on tasks."
-selfzcot: a self-prompt zero-shot cot from semantic-level to code-level for a better utilization of llms,gpt-4-1106-preview,9,"The relevance of the study to prompt engineering is high because it focuses on the utilization of a self-prompt mechanism (SelfzCoT) that enhances zero-shot learning capabilities in large language models (LLMs). It directly pertains to the field of prompt engineering as it deals with improving the performance of LLMs on arithmetic reasoning tasks through the use of specialized prompts, which are an essential component of prompt engineering. The systematic improvement across different datasets indicates that the researchers are effectively engineering prompts to better utilize the models' existing knowledge without additional training, which is a core aspect of prompt engineering studies."
-prompttts 2: describing and generating voices with text prompt,gpt-4-1106-preview,7,"The abstract indicates that the study is concerned with the use of text prompts in the context of text-to-speech (TTS) and addresses issues surrounding voice variability and the generation of text prompts, which are relevant to prompt engineering. Prompt engineering is often associated with designing and refining inputs to affect the outputs of AI models, which is closely related to what PromptTTS 2 aims to achieve in the TTS domain. However, the study's relevance to prompt engineering may not be a perfect fit as it is specialized towards TTS systems and does not broadly tackle hard prefixed prompts in various AI contexts, which a 'comprehensive systematic review on hard prefix prompts' would imply."
-conversation regression testing: a design technique for prototyping generalizable prompt strategies for pre-trained language models,gpt-4-1106-preview,9,"The described study directly pertains to prompt engineering as it focuses on improving pre-trained language model outputs using prompt strategies and assessing the effects of these strategies through Conversation Regression Testing. Although it doesn't specifically mention 'hard prefix prompts,' the broad field of prompt engineering, including the design and systematic review of prompt effects, is central to the study. Thus, the relevance to prompt engineering is high."
-prompts matter: insights and strategies for prompt engineering in automated software traceability,gpt-4-1106-preview,9,"The title and abstract indicate that the paper focuses on prompt engineering within the context of using Large Language Models for automated software traceability, which is a specific application of prompt engineering. The paper discusses the construction of effective prompts and proposes strategies for utilizing LLMs. This is highly relevant to the study of prompt engineering, particularly in a specialized domain. However, it is not directly related to 'hard prefix prompts' as the prompt specifies, suggesting there is room for more targeted relevance, hence not a perfect score."
-graph-toolformer: to empower llms with graph reasoning ability via prompt dataset augmented by chatgpt,gpt-4-1106-preview,8,"The paper is highly relevant to the field of prompt engineering as it specifically looks into the development of a framework that leverages prompts augmented by ChatGPT to improve the performance of large language models when tasked with graph reasoning. While it does not focus on the 'hard prefix prompts' mentioned in the initial prompt, it explores the prompt-based teaching approach and the construction of prompt datasets for specialized applications, which is a component of prompt engineering. The systematic review aspect isn't directly addressed, but the paper proposes a practical application of prompts in the context of LLMs, indicating significant relevance to the study of prompt engineering."
-model tuning or prompt tuning? a study of large language models for clinical concept and relation extraction,gpt-4-1106-preview,7,"The study explores different training strategies for large language models (LLMs), including hard prompts and soft prompts, focusing on clinical concept and relation extraction. It directly investigates prompt engineering by comparing the effectiveness of hard and soft prompts within different LLM training conditions. The relevance to prompt engineering study is high, although the primary focus is on soft prompts in a specific domain (clinical), rather than solely on hard prefix prompts as suggested by the original query. Consequently, the rating reflects substantial but not exclusive relevance."
-evoke: evoking critical thinking abilities in llms via reviewer-author prompt editing,gpt-4-1106-preview,9,"The provided abstract directly pertains to prompt engineering, as it discusses the development of a framework called Evoke that refines prompts for Large Language Models (LLMs) to enhance their performance. The inclusion of an automatic feedback loop, which considers 'hard' samples implying a form of 'hard prefix prompts', suggests it is highly relevant to the study of refining and improving prompts to elicit better performance from AI models. The main reason the rating is not a perfect 10 is that while Evoke's approach includes working with challenging prompts, it may not strictly constitute a 'systematic review' of hard prefix prompts but appears to be an application or development of that concept."
-decoding prompt syntax: analysing its impact on knowledge retrieval in large language models,gpt-4-1106-preview,9,"The provided abstract focuses on the evaluation of prompt syntax and its impact on knowledge retrieval in Large Language Models (LLMs), which is a significant aspect of prompt engineering. The systematic approach to paraphrase prompts and analyze their structures provides valuable insights into how different types of prompts affect the performance of LLMs. This research can inform the design of more effective prompts (including hard prefix prompts), making it highly relevant to the field of study. The reason for not giving a full score of 10 is the absence of a specific mention of 'hard prefix prompts' in the context of the abstract, but it is still generally relevant to prompt engineering."
-progprompt: generating situated robot task plans using large language models,gpt-4-1106-preview,9,"The paper is highly relevant to prompt engineering as it deals with designing structured prompts for large language models to generate robot task plans, demonstrating an understanding of the importance of prompt design in achieving functional outputs from LLMs. Its focus on programmatic prompts, ablation experiments for prompt structures, and ultimately demonstrating success in a practical domain like robotics shows a clear overlap with the field of prompt engineering. The rating is not a full 10 because the study focuses specifically on robotics and situated task plans, which is just one application of prompt engineering, rather than a broad investigation of hard prefix prompts across various domains."
-universal and transferable adversarial attacks on aligned language models,gpt-4-1106-preview,7,"The paper's focus on developing adversarial attacks against aligned language models is tangentially related to prompt engineering, as it concerns the specific construction of inputs (suffixes) designed to elicit certain responses from language models. While the study does not directly address 'hard prefix prompts', it does deal with the broader theme of how prompts can be engineered (in this case, to be adversarial) to manipulate model output. Therefore, the relevance to prompt engineering is significant, but it is not a direct match to the concept of 'hard prefix prompts' in a systematic review context."
-principle-driven self-alignment of language models from scratch with minimal human supervision,gpt-4-1106-preview,8,"The abstract describes a novel approach to aligning language models with human intentions using minimal human supervision and includes stages relevant to prompt engineering, such as generating synthetic prompts to augment diversity. Although the study seems more focused on self-alignment principles and minimization of supervision rather than hard prefix prompts specifically, the method includes aspects like in-context learning from generated prompts, which is a key part of prompt engineering. Therefore, the relevance to prompt engineering is high but not entirely focused on the 'hard prefix prompts' aspect mentioned in the original prompt."
-language models don't always say what they think: unfaithful explanations in chain-of-thought prompting,gpt-4-1106-preview,7,"The study investigates the reliability of chain-of-thought (CoT) prompting in the context of Large Language Models (LLMs). While this is highly relevant to the field of prompt engineering as it relates to the integrity and trustworthiness of prompts (especially CoT prompts), it does not specifically address 'hard prefix prompts'. Since the study has implications for how prompts can be engineered to elicit accurate explanations from models and discusses the manipulation of model inputs, which is a core concern in prompt engineering, it has relevance. However, the study's focus on the fidelity of CoT rather than on systematic reviews of 'hard prefix prompts' means it is only partially aligned with the prompt engineering study. Therefore, it receives a moderate to high relevance score."
-ask me anything: a simple strategy for prompting language models,gpt-4-1106-preview,9,"The content of the abstract indicates a high relevance to prompt engineering as it discusses the development of prompt strategies to improve the performance of large language models (LLMs) and attempts to reduce the brittleness of prompting by aggregating multiple prompts. The concept of 'ASK ME ANYTHING' (AMA) is directly related to engineering effective prompts and influences how prompts are generated and utilized. The study also evaluates the performance across different models and benchmarks, which is essential for understanding the implications of prompt design strategies. While it may not explicitly focus on 'hard prefix prompts' as mentioned in the original request, the general exploration of prompt formats and strategies makes this abstract highly relevant to the field of prompt engineering."
-progressive-hint prompting improves reasoning in large language models,gpt-4-1106-preview,9,"The provided abstract details a study on a novel prompting method, Progressive-Hint Prompting (PHP), designed to improve the reasoning capabilities of Large Language Models (LLMs) by leveraging previously generated responses. This relates directly to the field of 'prompt engineering' as it explores the structure and strategy behind prompts to enhance the performance of LLMs. The fact that it introduces a new methodology and reports on experimentation and results aligns closely with advancements and research in prompt engineering, justifying the high relevance rating. The only reason it is not a perfect 10 is that the abstract does not explicitly mention the 'hard prefix prompts' specified in the original query, otherwise, it charts the advancement in the field of prompt engineering which includes improvements over conventional methods like CoT and self-consistency."
-frugalgpt: how to use large language models while reducing cost and improving performance,gpt-4-1106-preview,8,"The paper titled 'frugalgpt: how to use large language models while reducing cost and improving performance' is quite relevant to prompt engineering. One of the strategies mentioned for reducing inference cost is 'prompt adaptation,' which directly pertains to the field of prompt engineering. This strategy likely involves creating and refining prompts to produce more accurate or useful outputs from LLMs, thereby also reducing repetitive or unnecessary queries that could increase costs. Although the study's primary focus is on cost-reduction and performance improvement rather than the specifics of crafting hard-prefix prompts, the concept of prompt adaptation is a core part of prompt engineering. Therefore, it holds substantial relevance to someone interested in the efficient and effective use of prompts in LLMs."
-conversational automated program repair,gpt-4-1106-preview,7,"While the abstract primarily outlines a study on conversational automated program repair, which is a different domain from prompt engineering, it does mention the use of constructed input/prompt and iteratively building the input to a large language model. The relevance to prompt engineering lies in the iterative process, engaging with the LLM in a conversational way, and adjusting the prompts based on feedback to avoid generating previously incorrect patches. This indicates that the study touches upon aspects of prompt engineering by refining the prompts to improve output, which is a key technique in prompt engineering. However, it does not directly focus on 'hard prefix prompts' or a comprehensive study of them. Therefore, the relevance is moderate, warranting a rating of 7 out of 10."
-annollm: making large language models to be better crowdsourced annotators,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering as it explores a method to enhance the effectiveness of large language models for the purpose of data annotation, which is a significant aspect of prompt engineering. This paper suggests an innovative way of creating prompts by including explanations along with annotated examples (the 'explain-then-annotate' methodology). This strategy could be beneficial in refining the way prompts are designed to solicit more accurate responses from language models, thus contributing valuable insights to the field of prompt engineering."
-keep the conversation going: fixing 162 out of 337 bugs for $0.42 each using chatgpt,gpt-4-1106-preview,8,"The provided abstract describes ChatRepair, a novel approach that leverages a conversational Large Language Model (LLM), specifically ChatGPT, for Automated Program Repair (APR). It uses a unique prompt engineering strategy by incorporating a feedback loop into the generation of input prompts. The methodology involves enhancing the prompts with relevant test failure information and learning from past patch attempts to refine the generation process. This study is highly relevant to prompt engineering as it applies advanced techniques to craft prompts that effectively utilize LLM capabilities to diagnose and fix software bugs. The relevance to prompt engineering is not absolute, as the main focus seems to be on the application of these prompts for APR rather than on the study or analysis of the prompt engineering itself, but it is still highly pertinent due to the innovative use of prompts for iterative and conversational task completion."
-marked personas: using natural language prompts to measure stereotypes in language models,gpt-4-1106-preview,7,"The study focuses on using language prompts to measure and understand biases in language models, which is closely related to the field of prompt engineering. While it does not deal directly with 'hard prefix prompts,' it uses a prompt-based method for a specific and important application within the larger scope of prompt engineering—detecting and analyzing stereotypes. Thus, it contributes to the understanding of how prompts can elicit certain types of responses from language models, which is a relevant aspect of prompt engineering studies. The rating is not a perfect 10 because the study is not about prompt engineering techniques or optimizations but rather an application of prompts to understand model biases."
-supporting qualitative analysis with large language models: combining codebook with gpt-3 for deductive coding,gpt-4-1106-preview,8,"The study mentioned in the abstract directly explores the use of large language models (LLMs) like GPT-3 for coding tasks in qualitative analysis without the need for task-specific model training or fine-tuning. It specifically illustrates an application of LLMs using prompt learning, which falls under the broader category of prompt engineering. While it is not centered on 'hard prefix prompts,' it does delve into the realm of using prompts effectively to interact with language models. Therefore, the relevance to prompt engineering is high, but not at the maximum because it does not focus exclusively on 'hard prefix prompts' as per the initial prompt."
-assessment of chemistry knowledge in large language models that generate code,gpt-4-1106-preview,8,"The study specifically mentions the impact of prompt engineering strategies on the performance of Large Language Models (LLMs) in executing chemistry-related coding tasks. The fact that adding copyright notices at the tops of files leads to a 30-percentage point increase in accuracy directly relates to the field of prompt engineering. The study examines and validates the effectiveness of prompt engineering in enhancing the capabilities of LLMs within a specific domain (chemistry). However, it does not focus exclusively on 'hard prefix prompts' but on prompt engineering in a broader sense, hence the rating does not reach the maximum."
-in-context impersonation reveals large language models' strengths and biases,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it explores the use of hard-coded persona prompts to elicit specific behaviors and capabilities from large language models (LLMs). By analyzing the LLMs' performance across various tasks in the context of the prompted persona, the study contributes insights into how the design of prompts can influence the output of LLMs, an essential aspect of prompt engineering. It directly addresses the impact of prompt construction on the quality and characteristics of the model's responses. However, it doesn't explicitly cover 'hard prefix prompts' in the more general sense, as it's focussed on role-impersonation, which is a subset of prompt engineering."
-knn prompting: beyond-context learning with calibration-free nearest neighbor inference,gpt-4-1106-preview,8,"The presented abstract discusses advancements in 'kNN Prompting' which are relevant to the broader realm of prompt engineering in that it explores alternative ways to utilize language model prompts for task completion. kNN Prompting can be seen as an extension or improvement within the field of prompt engineering, particularly since it addresses limitations of typical in-context learning and provides a way to scale with additional training data without a context length restriction. This is highly relevant for studies looking to overcome the current constraints of hard prefix prompts in LLMs. However, the abstract does not address hard prefix prompts specifically, thereby making the relevance less than perfect for a systematic review focused solely on hard prefix prompt engineering."
-"on second thought, let’s not think step by step! bias and toxicity in zero-shot reasoning",gpt-4-1106-preview,7,"The given abstract discusses the implications of using zero-shot Chain of Thought reasoning in large language models, which is relevant to prompt engineering studies in that it examines the effect of a specific prompting technique in the context of AI behavior. However, the focus on biases and toxicity rather than hard prefix prompts specifically somewhat limits its direct relevance to a systematic review on hard prefix prompts in prompt engineering."
-evaluation of chatgpt for nlp-based mental health applications,gpt-4-1106-preview,7,"The abstract discusses the use of a specific input prompt for classification with ChatGPT in mental health applications, which aligns with the concept of prompt engineering. Even though the study's application is in mental health, the methodology involves designing and utilizing prompts to elicit accurate responses from a language model, which is a core aspect of prompt engineering. Though the focus is not on 'hard prefix prompts,' the relevance lies in how prompts are integral to the performance of LLMs in NLP tasks, which could translate to insights in prompt engineering studies generally. Hence, a rating of 7 suggests that the study is quite relevant but not directly focused on hard prefix prompt engineering."
-exploiting asymmetry for synthetic training data generation: synthie and the case of information extraction,gpt-4-1106-preview,7,"The paper is partially relevant to the study of prompt engineering as it discusses the generation of synthetic data by prompting a large language model in reverse to create input text for a target output structure. Although it primarily focuses on synthetic data generation and its application to information extraction, the underlying methodology incorporates elements of prompt engineering by exploiting asymmetry in task difficulty to effectively communicate with the model. It doesn't directly address 'hard prefix prompts,' but the concept of utilizing prompts creatively to generate data is within the domain of prompt engineering research. Therefore, the relevance is significant, but not perfect, as the main focus is not directly on prompt engineering techniques or systematic reviews of said techniques."
-guiding large language models via directional stimulus prompting,gpt-4-1106-preview,9,"The provided abstract describes a research approach that is highly relevant to the field of prompt engineering, particularly in the way it deals with the customization and optimization of prompts to guide the behavior of large language models. The concept of using a tunable policy model to generate instance-specific 'directional stimulus prompts' falls directly under the umbrella of prompt engineering techniques. The high relevance score reflects the paper's focus on creating prompts that steer the output of LLMs, which is a central concern in prompt engineering studies. Although the term 'hard prefix prompts' is not explicitly mentioned, the methodology proposed is very much related to the underlying principles of prompting language models."
-motiongpt: finetuned llms are general-purpose motion generators,gpt-4-1106-preview,7,"The paper 'motiongpt: finetuned llms are general-purpose motion generators' seems to utilize prompt engineering by treating multimodal signals as special input tokens in large language models (LLMs) to generate human motion. However, it is not focused on 'hard prefix prompts' specifically, but rather on applying prompt engineering principles to multimodal inputs and finetuning LLMs for a specialized task. The concept of formulating signals into a unified prompt instruction is relevant to prompt engineering, but the study is more about motion generation rather than the systematic review of prompt engineering techniques."
-up5: unbiased foundation model for fairness-aware recommendation,gpt-4-1106-preview,7,"The given abstract is relevant to prompt engineering study to a good extent because it covers the use of 'personalized prefix prompts' as part of the Counterfactually-Fair-Prompting (CFP) techniques. These techniques contribute to the broader field of prompt engineering by exploring how prompts can be designed or modified to address fairness and bias in recommendations. While the focus is not solely on hard prefix prompts, it does pertain to the sub-domain of prompt engineering for ethical and fairness considerations, which is an important aspect of the field. However, since the primary focus is on fairness-aware recommendation rather than prompt engineering itself, the rating is not a full 10."
-fill in the blank: context-aware automated text input generation for mobile gui testing,gpt-4-1106-preview,7,"The paper introduces QTypist, a method which utilizes Large Language Models (LLMs) for the automated generation of semantic text inputs in mobile GUI testing. The relevance to prompt engineering study lies in the fact that the approach involves a 'prompt-based data construction and tuning method' which entails extracting prompts and answers for model tuning. This means the study directly involves designing and utilizing prompts to improve performance of AI models, which is closely related to prompt engineering. However, the study's primary focus is on the application of this technique for improving GUI testing rather than on the theory or principles behind prompt engineering itself. Hence, it's not entirely centered on prompt engineering but is highly related, warranting a 7 out of 10 for relevance."
-explaining patterns in data with language models via interpretable autoprompting,gpt-4-1106-preview,9,"The abstract describes a study where a method called interpretable autoprompting (iPrompt) is used to generate and evaluate prompts for large language models, which is directly related to prompt engineering. The systematic review of 'hard prefix prompts' would likely cover different techniques and contributions in the area of prompt engineering, and iPrompt appears to be a notable example of innovation in this field. Therefore, the relevance to prompt engineering is high, although the study might not directly focus on hard prefix prompts but more generally on explanatory prompts and their iterative improvement."
-instructzero: efficient instruction optimization for black-box large language models,gpt-4-1106-preview,9,"The abstract details a study that focuses on the optimization of instructional prompts for large language models (LLMs), particularly in the scenario where direct optimization of the instructions isn't possible, such as with black-box LLMs. The study introduces 'InstructZero', a method which indirectly optimizes instructions through the use of 'soft prompts' via Bayesian optimization, which is highly relevant to the field of prompt engineering. This systematic approach to improving efficiency and effectiveness of LLM instructions directly relates to studies of how prompts can be engineered to yield better performance from LLMs. The only reason the rating isn't a perfect 10 is that the abstract doesn't mention 'hard prefix prompts', the specific topic of interest, and focuses instead on 'soft prompts'."
-language models enable simple systems for generating structured views of heterogeneous data lakes,gpt-4-1106-preview,8,"The abstract describes a system that leverages large language models (LLMs) for the purpose of generating queryable tables from semi-structured documents. Prompt engineering is an implicit but significant aspect of this work; the LLMs are used to either directly extract values or to generate code based on the natural language prompts given to them. The success of EVAPORATE and EVAPORATE-CODE+ hinges on effective prompt engineering to guide the LLMs. While the study does not seem to be explicitly focused on 'hard prefix prompts,' the underlying principle of using prompts to control LLM output aligns with studies in prompt engineering. Hence, the relevance is rated highly but not maximally due to the lack of specificity regarding 'hard prefix prompts.'"
-recurrentgpt: interactive generation of (arbitrarily) long text,gpt-4-1106-preview,8,"The paper presents a novel approach for prompting language models to generate long text sequences by incorporating an LSTM-like recurrence mechanism into GPT, termed RecurrentGPT. Despite not addressing 'hard prefix prompts' directly, the study is relevant to prompt engineering as it explores strategies for enhancing the capabilities of language models through sophisticated prompting techniques by simulating external memory mechanisms. This has implications for how prompts can be engineered to handle more complex tasks like generating long-form content, which can be an aspect of prompt engineering studies. However, the focus on 'hard prefix prompts' is not explicit, thus the rating does not receive a full score."
-prd: peer rank and discussion improve large language model based evaluations,gpt-4-1106-preview,7,"The abstract discusses methodologies for improving the evaluation of large language model responses, including a peer rank algorithm and peer discussion system which both can be considered forms of prompt engineering, as they involve crafting prompts to facilitate a discussion between LLMs for better assessment. These processes are relevant to prompt engineering studies because they deal with how input prompts affect LLMs' output and evaluation. Although the study's primary focus is not on the hard prefix prompts but rather on the evaluation techniques for model outputs, it indirectly contributes to the field of prompt engineering by exploring methods to refine the interaction and ranking processes between different models which is a subset of prompting strategies."
-open sesame! universal black box jailbreaking of large language models,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it explores a method (using a genetic algorithm) for exploiting and manipulating large language models (LLMs) through prompts. While it specifically deals with adversarial attacks and alignment issues, understanding these vulnerabilities is crucial for developing robust prompt engineering techniques. It contributes to the field by highlighting the importance of security measures in prompt design to prevent unintended model behavior. However, the paper's primary focus is on the security and manipulation aspect rather than the constructive development or direct study of prompt engineering techniques, hence the rating is not a full 10."
-what language reveals about perception: distilling psychophysical knowledge from large language models,gpt-4-1106-preview,8,"Although the study does not specifically focus on 'prompt engineering' or 'hard prefix prompts,' it is highly relevant because it involves the use of prompt auto-completion features of a large language model (GPT-3) for psychophysical research. The method of eliciting similarity scores through prompt responses is a form of prompt engineering where the design of the prompts is critical for the success of the study. However, it did not directly address hard prefix prompts, which would be specific sequences of words or phrases designed to elicit particular behaviors from language models, leading to a rating slightly lower than the maximum."
-boosting language models reasoning with chain-of-knowledge prompting,gpt-4-1106-preview,9,"The abstract describes a novel approach in prompt engineering, specifically focusing on enhancing reasoning capabilities in Large Language Models through Chain-of-Knowledge prompting. It directly relates to the field of prompt engineering by proposing a methodology for improving the quality of generated outputs by incorporating structured knowledge evidence. This is highly relevant to prompt engineering studies, especially those concerning the improvement of model reasoning and reliability. The reason for not giving a full score of 10 is that it does not directly mention 'hard prefix prompts,' but the approach is undoubtedly within the scope of advanced prompt engineering techniques."
-lion: adversarial distillation of proprietary large language models,gpt-4-1106-preview,8,"The abstract describes a method of adversarial distillation where a 'teacher' large language model generates 'hard' instructions to enhance the training of a 'student' model. This falls under the umbrella of prompt engineering, as it includes the design of specific prompts to identify and produce instructional data that challenges the student model, thereby improving its performance. The innovative use of 'hard' instructions to drive the adversarial loop is particularly relevant to prompt engineering studies, as it directly relates to the crafting of prompts aimed at maximizing learning potential. However, it does not directly address a comprehensive systematic review on the subject, hence the deduction of two points."
-towards parameter-efficient automation of data wrangling tasks with prefix-tuning,gpt-4-1106-preview,9,"The title 'towards parameter-efficient automation of data wrangling tasks with prefix-tuning' is highly relevant to prompt engineering study because it directly addresses the development of a method ('prefix-tuning') to optimize the way prompts are used with Large Language Models to perform data wrangling tasks, which is an example of a practical application of prompt engineering. Furthermore, the abstract details the benefits of using prefix-tuning over full fine-tuning, which is central to the efficiency and effectiveness of using language models in various tasks. The mention of learning continuous prompts automatically and the assessment of prefix-tuning on specific tasks provide concrete evidence of the method's applicability and performance, underscoring its relevance to the field of prompt engineering."
-herding ai cats: lessons from designing a chatbot by prompting gpt-3,gpt-4-1106-preview,9,"The given abstract is highly relevant to prompt engineering as it specifically addresses challenges and insights gained from attempting to design a chatbot using prompts with GPT-3/4. It highlights difficulties in achieving a fully positive user experience through prompting alone, discusses the limitations of prompt control in practical applications, and considers the broader implications for design methods using Large Language Models. The focus on UX design and interaction with chatbots powered by LLMs correlates directly with studies on prompt engineering, as it deals with crafting prompts to elicit desired behavior from the model. Although it does not explicitly mention 'hard prefix prompts', the study of prompting effectiveness in this context is still pertinent to the broader field of prompt engineering."
-exploring large language model for graph data understanding in online job recommendations,gpt-4-1106-preview,7,"The paper's relevance to prompt engineering is significant but not direct. The notion of using a 'meta-path prompt constructor' suggests a novel approach to prompt development, focusing on behavior graphs rather than text generation or parsing. While this represents an innovative application of LLMs in the context of recommendation systems, it is not a 'comprehensive systematic review on hard prefix prompts' as outlined in the initial prompt for engineering study. Yet, the paper does delve into prompt optimization relevant to a specific application (job recommendations), which is a pertinent aspect of prompt engineering. Thus, the relevance is high due to the contribution to the field of prompt construction and bias mitigation in LLMs, but not a perfect match since it doesn't directly address hard prefix prompts or provide a systematic review of the topic."
-ten quick tips for harnessing the power of chatgpt/gpt-4 in computational biology,gpt-4-1106-preview,7,"The article provides practical advice for incorporating ChatGPT into computational biology workflows, which includes a component of 'prompt engineering'. Even though the title suggests a broader usage within computational biology, the mention of 'prompt engineering' in the context of using ChatGPT implies that the article will address how to effectively design prompts to interact with the chatbot for various tasks. This makes it relevant to the study of prompt engineering. However, it is not entirely focused on 'hard prefix prompts' specifically, as indicated by the initial prompt request for a 'comprehensive systematic review on hard prefix prompts'. Therefore, it doesn’t fully match the specificity requested in terms of prompt engineering study, but it is still relevant due to the inclusive nature of the tips and discussion on the best use of prompts."
-prompting is not a substitute for probability measurements in large language models,gpt-4-1106-preview,7,"The study addresses an aspect of prompt engineering by comparing metalinguistic prompting with direct probability measurements in large language models. Although the study does not specifically discuss 'hard prefix prompts,' it does examine prompting techniques and their effectiveness in understanding linguistic knowledge, which is relevant to the field of prompt engineering. However, since the study is more focused on the comparison with direct probability methods and on metalinguistic judgment rather than on prompt engineering techniques, the rating is not a perfect 10."
-automated annotation with generative ai requires validation,gpt-4-1106-preview,7,"While the abstract does not mention 'prompt engineering' or 'hard prefix prompts' directly, it does discuss the quality of prompts as a factor that affects the performance of LLMs in text annotation tasks. The study highlights the importance of validation against human-generated labels, which indirectly ties into the importance of designing effective prompts to get the desired output from an LLM. Therefore, the relevance to prompt engineering is substantial but not explicit, hence the rating of 7 out of 10."
-studenteval: a benchmark of student-written prompts for large language models of code,gpt-4-1106-preview,7,"The paper introduces StudentEval, a benchmark for evaluating the efficacy of prompts written by non-expert users (beginning programmers) when interacting with code-based Large Language Models (LLMs). This is relevant to the study of prompt engineering as it provides insight into how well different models respond to prompts that vary in quality and are created by non-experts. It highlights the importance of prompt variability in assessing model performance, which directly relates to the broader inquiry of prompt engineering. Additionally, it contributes to understanding the challenges faced by new programmers in effectively leveraging LLMs for coding tasks, which could inform the development of improved prompt engineering practices. However, the paper might be more narrowly focused on the code LLMs and the non-expert population, rather than a broad, comprehensive systematic review on hard prefix prompts in general prompt engineering."
-mindmap: knowledge graph prompting sparks graph of thoughts in large language models,gpt-4-1106-preview,8,"The study described in the abstract appears to be highly relevant to the field of prompt engineering. It focuses on a specific technique of prompting large language models (LLMs) using knowledge graphs (KGs) to address common issues such as knowledge incorporation, hallucinations, and transparency. While the study does not specifically mention 'hard prefix prompts,' which may have been the focus of the requested 'comprehensive systematic review,' it does discuss the broader topic of enhancing the interaction between LLMs and external structured knowledge sources. The concept of 'MindMap' prompting could be considered as a type of advanced prompt engineering that aims to deepen the language model's understanding and reasoning capabilities. Hence, the relevance is rated at 8, acknowledging its importance to the field of prompt engineering but also noting that it does not directly address the specific aspect of 'hard prefix prompts.'"
-progprompt: program generation for situated robot task planning using large language models,gpt-4-1106-preview,8,"This publication appears to be highly relevant to prompt engineering as it discusses a structured approach to creating prompts for large language models (LLMs), specifically in the context of generating plans for situated robot tasks. It also mentions the use of ablation experiments to make concrete recommendations about prompt structure, which is an essential part of studying how different prompts affect the performance of LLMs. Although the study's primary focus is on prompts for programmatic tasks within robotics, the methodologies and findings could likely be generalized or applied to other areas of prompt engineering. The rating is not a perfect 10 since the review does not specify that it is a 'systematic review' or that it focuses on 'hard prefix prompts,' but it is still highly applicable to the field."
-clusterllm: large language models as a guide for text clustering,gpt-4-1106-preview,7,"The text describes a study on a text clustering framework called ClusterLLM that uses a large language model, ChatGPT, for gaining insights and for tuning clustering granularity based on text prompts. While the study is not specifically about 'prompt engineering', the use of 'hard triplet questions' and 'carefully designed pairwise questions' indicates a deliberate and strategic approach to crafting prompts to achieve specific outcomes from the language model. This shows relevance to the study of prompt engineering, as the effectiveness of ClusterLLM relies on the proper construction of these prompts to guide the clustering process. However, the application is specific to text clustering rather than prompt engineering in general, which is why the rating is not closer to 10."
-how to unleash the power of large language models for few-shot relation extraction?,gpt-4-1106-preview,7,"The abstract indicates a study focused on few-shot relation extraction using large language models like GPT-3.5. It discusses in-context learning and data generation, which are both relevant to prompt engineering, as they deal with how to effectively use prompts to leverage the capabilities of language models for specific tasks. The mention of 'task-related instructions' is directly aligned with prompt engineering, as it involves designing prompts to guide the model's responses. However, the study appears to be more broadly focused on the applications of these methods in relation extraction rather than solely on prompt engineering techniques. Therefore, while there is clear relevance, it is not exclusively centered on prompt engineering, meriting a 7 out of 10."
-knowledge refinement via interaction between search engines and large language models,gpt-4-1106-preview,7,"The described study 'knowledge refinement via interaction between search engines and large language models' is relevant to the concept of prompt engineering to a considerable extent. The 'InteR' framework focuses on refining the search and query processes by integrating search engines and LLMs, which directly relates to the creation and optimization of prompts to facilitate these interactions. The study touches upon enhancing prompt formulation using search engine-retrieved documents. Even though it does not focus exclusively on hard prefix prompts or a systematic review of such, it presents relevant research on improving input (which includes prompts) to LLMs to achieve better results in information retrieval tasks. Hence, it contributes to the broader field of prompt engineering by proposing practical ways to optimize the interaction between users, LLMs, and search engines."
-introspective tips: large language model for in-context decision making,gpt-4-1106-preview,7,"The abstract describes a study focusing on improving the decision-making capabilities of large language models (LLMs) by generating 'Introspective Tips' which are likely a form of advanced prompts. This approach is related to prompt engineering in that it involves enhancing the prompt (a hard prefix, in this case) to improve the model's performance without altering the underlying model parameters. This relates to how prompting can be used to guide an LLM's output. However, it's not a perfect match, as it doesn't focus specifically on a 'systematic review on hard prefix prompts' but rather on a practical application of prompts for LLM decision-making enhancement. Therefore, it doesn't completely align with prompt engineering studies, but it has substantial relevance due to its focus on the optimization and application of prompts."
-augmenting greybox fuzzing with generative ai,gpt-4-1106-preview,8,"The abstract describes ChatFuzz, a system that integrates generative AI (such as ChatGPT) with greybox fuzzing to enhance the generation of format-conforming inputs. The use of ChatGPT to transform initial seed inputs into variations through prompting is directly related to prompt engineering, as this process necessitates designing effective prompts to guide the generative model to produce useful outputs for fuzzing tasks. The paper outlines an application of prompt engineering in a cybersecurity context. The reason for not giving a full 10 is because it focuses specifically on the application of generative AI for fuzzing and not on the broader study of prompt engineering across various domains or on the details of how the prompts are constructed and optimized, which would be of direct interest in a systematic review on hard prefix prompts."
-taming ai bots: controllability of neural states in large language models,gpt-4-1106-preview,8,"The abstract describes a study that is highly relevant to prompt engineering, as it addresses the ability to control AI bot behavior through prompts, which is a core aspect of prompt engineering. This study's focus on the formal definition of 'meaning' and the conditions under which an AI bot can be directed to reach any given 'meaning' is directly related to how prompts are engineered to achieve desired outcomes in language models. The exploration of controllability in the context of large language models (LLMs) also contributes to understanding how different prompts can influence the state of AI, which is a fundamental concern for prompt engineering. The reason for not giving a perfect score is that the abstract does not mention 'hard prefix prompts' specifically, which was the focus indicated in the prompt engineering study query."
-spellburst: a node-based interface for exploratory creative coding with natural language prompts,gpt-4-1106-preview,7,"The described study 'Spellburst' is relevant to prompt engineering as it incorporates the use of natural language prompts to facilitate creative coding, an application of prompt engineering. It indicates the development of a system that allows users to interact using high-level semantic constructs ('expressive prompt-based interactions') for creative tasks, which is a part of prompt engineering. However, the focus on a node-based interface for artists suggests that prompt engineering is only a portion of the study's objectives, hence the study may not be exclusively dedicated to hard prefix prompts or the fundamental principles of prompt engineering."
-smoothllm: defending large language models against jailbreaking attacks,gpt-4-1106-preview,7,"The study deals with defence mechanisms against adversarial attacks on large language models, specifically addressing the vulnerability at the level of input prompts. Although it is not directly related to 'hard prefix prompts,' it is highly relevant to the broader field of prompt engineering as it tackles the manipulation of prompts to secure desired or undisturbed outputs from language models. The relevance is particularly notable in the context of creating robust prompting strategies that could prevent adversarial attacks and thus maintain the integrity of the interaction with the models. However, the research does not specifically focus on the systematic review of hard prefix prompts, which would be the core topic for direct relevance."
-fully autonomous programming with large language models,gpt-4-1106-preview,7,"The title and abstract indicate that this study deals with program synthesis using Large Language Models (LLMs) and explores different strategies for improving the code generation process, which includes evaluating various prompt-based instructions for the LLM. Although the study does not directly mention 'hard prefix prompts,' it implies a close examination of how to effectively prompt LLMs (like OpenAI Codex) to generate, repair, and debug programs. Given that the study involves exploring and comparing different prompt-generation techniques for improving the performance of LLMs in a programming context, it is relevant to prompt engineering to a significant extent. Thus, the rating recognizes the relevance of exploring effective instructions for LLMs, but it is not a perfect match since the study does not explicitly focus on 'hard prefix prompts' but rather on a broader set of prompt-generating techniques and program synthesis strategies."
-large language models and (non-)linguistic recursion,gpt-4-1106-preview,7,"The abstract indicates that the study involves designing prompts to elicit certain behaviors from a large language model (LLM), specifically with respect to recursive structures in language. Since prompt engineering is about how to effectively design prompts to achieve desired outputs from LLMs, this study's focus on prompt design for testing meta-linguistic awareness of recursion is relevant to prompt engineering. Although it does not directly address 'hard prefix prompts', it does touch on a related aspect of prompt design. The relevance is not maximal as it doesn't seem to focus on different categories or types of prompts, such as 'hard prefixes', but rather on a specific feature of language (recursion) and how well it can be elicited and analyzed in LLMs."
-domain knowledge distillation from large language model: an empirical study in the autonomous driving domain,gpt-4-1106-preview,8,"The paper's abstract discusses the use of prompt engineering with the LLM ChatGPT for the semi-automation of domain knowledge distillation in the engineering process, which is relevant to the subject of 'prompt engineering study'. It explores the practical application of prompts in creating knowledge-based systems, which aligns with the idea of 'hard prefix prompts' in that it examines structured interactions with an LLM. The paper presents empirical findings on the efficacy of prompt engineering in a specific domain, which is valuable for the broader study of prompt engineering techniques. The rating is not a full 10 since the 'hard prefix prompts' might refer to a more specific subset of prompts or methodologies within the field of prompt engineering, which the paper's abstract does not explicitly address."
-reducing retraining by recycling parameter-efficient prompts,gpt-4-1106-preview,9,"The provided abstract is highly relevant to prompt engineering study as it addresses the issue of retraining prompts when an underlying language model is updated. The concept of 'Prompt Recycling' directly pertains to prompt engineering, by aiming to adapt prompts to new versions of a model without the need for extensive retraining. This research could significantly contribute to the efficiency and practicality of using prompts in various applications, hence the high relevance rating."
-selecting better samples from pre-trained llms: a case study on question generation,gpt-4-1106-preview,8,"The paper presents a study on selecting the best outputs from samples generated by Large Language Models (LLMs) using prompt-based approaches, which is highly relevant to the field of prompt engineering. Although the study focuses specifically on the task of question generation, the research on improving the diversity and quality of LLM outputs through prompt manipulation is a direct application of prompt engineering principles. The rating is not a full 10 because the paper is a case study limited to question generation and does not cover the broader spectrum of hard prefix prompts or systematic reviews of prompt engineering."
-validating large language models with relm,gpt-4-1106-preview,8,"The abstract mentions the validation and evaluation of language model concerns including bias and inappropriate language, which are topics relevant to prompt engineering because they address the model's outputs in response to prompts. Furthermore, ReLM's increased prompt-tuning coverage directly pertains to prompt engineering as it suggests an improved method for evaluating and refining how prompts are designed and how models respond to them. The connection to 'hard prefix prompts' is not explicit, leading to a rating lower than 10, but the general subject matter is pertinent to studies in prompt engineering."
-preserving in-context learning ability in large language model fine-tuning,gpt-4-1106-preview,9,"The discussed paper addresses a crucial aspect of prompt engineering, which is preventing the loss of a large language model's innate in-context learning abilities during the fine-tuning process. The proposed two-stage fine-tuning framework, ProMoT, is highly relevant as it involves prompt tuning, a method directly connected to prompt engineering. The study's findings on how to maintain a model's performance across various tasks and its ability to work with different formats add valuable insights to the field. The research is relevant to prompt engineering as it provides a potential solution to a common problem faced when fine-tuning models with hard prompts, although it does not directly discuss 'hard prefix prompts'. Nonetheless, the principles could be applicable to the systematic review on hard prefix prompts."
-improving knowledge extraction from llms for robotic task learning through agent analysis,gpt-4-1106-preview,8,"The abstract outlines a study that, while not focusing exclusively on hard prefix prompts, does address the broader concept of prompt engineering within the context of LLMs and robotic task learning. It directly engages with how prompt engineering can be improved and augmented through a cognitive-agent approach, making it relevant to those interested in the intricacies and optimizations of prompting large language models. This is highly pertinent to the field of prompt engineering, although the text does not specifically mention 'hard prefix prompts.'"
-large language models as superpositions of cultural perspectives,gpt-4-1106-preview,7,"The abstract discusses the concept of 'perspective controllability' within Large Language Models (LLMs), which is relevant to prompt engineering. It highlights how LLMs can exhibit context-dependent values and personality traits, a concept critical to understanding how different prompts can influence the output of such models. Despite not directly addressing 'hard prefix prompts', the study does engage with the underlying mechanics that would be essential for designing effective prompts to guide LLM responses, which is a fundamental aspect of prompt engineering. Therefore, while not focused on hard prefix prompts specifically, the research contributes to the broader understanding of prompt design and LLM interaction methods, which could impact the study of prompt engineering."
-robot task planning based on large language model representing knowledge with directed graph structures,gpt-4-1106-preview,7,"The given title and abstract involve the development of an LLM prompt template, which indicates a study related to prompt engineering as it aims to create a prompt structure with strong expressive power. This is directly relevant to the exploration of how prompts are structured and their relation to large language models (LLMs) in the context of task planning for robots. The systematic review of 'hard prefix prompts' could likely benefit from insights derived from this proposed method and its structured template. However, the study might be more focused on the application side of prompt engineering in robot task planning, rather than a broad and comprehensive review of prompt engineering techniques and theories. Therefore, it is not entirely focused on 'hard prefix prompts' but is relevant to the broader field of prompt engineering."
-using large language models to generate engaging captions for data visualizations,gpt-4-1106-preview,8,"The abstract discusses the application of large language models to generate captions for data visualizations, with a focus on the process of 'prompt engineering'. Although it does not mention a 'hard prefix prompt' specifically, the study is centered around the broader concept of prompt engineering, which is designing the most effective prompts to elicit desired responses from a language model like GPT-3. This falls under the umbrella of prompt engineering and is therefore highly relevant to the study of how prompts can affect the output of language models. The rating is not a full 10 because the study abstract does not specifically address a 'systematic review' on 'hard prefix prompts' but seems more focused on practical experimentation and application."
-spec: a soft prompt-based calibration on performance variability of large language model in clinical notes summarization,gpt-4-1106-preview,7,"The relevance of the provided title and abstract to prompt engineering is quite significant, given that the study centers on the application of prompts, specifically 'soft prompts,' to refine the performance of large language models in the context of summarizing clinical notes. Prompt engineering fundamentally involves the strategic use of prompts to effectively steer language models towards desired outputs. The research introduces a Soft Prompt-Based Calibration (SPeC) pipeline, which pertains to optimizing the use of prompts to achieve more consistent and accurate results. Although the study is situated in a specific application area—healthcare—and focuses on 'soft prompts' rather than 'hard prefixes,' it contributes to the broader understanding of how prompt design can affect language model behavior and performance. Nonetheless, it does not directly address the systematic review of hard prefix prompts, which would be the core of a prompt engineering study, hence the rating is not a perfect 10."
-using a large language model to control speaking style for expressive tts,gpt-4-1106-preview,7,"While the study primarily focuses on the use of a language model for controlling prosody in text-to-speech (TTS) systems, it is relevant to prompt engineering due to the use of prompts to control language model outputs. Specifically, the study involves engineering prompts that guide the language model to produce suggestions on pitch, energy, and duration for expressive TTS, which is an application of prompt engineering. Though the study’s main goal is not about prompt engineering itself, the methodology of designing prompts to achieve desired outcomes in model behavior is an essential aspect of prompt engineering. Therefore, this study would provide useful information for those interested in the intersection of prompt engineering and TTS technology."
-gpt4tools: teaching large language model to use tools via self-instruction,gpt-4-1106-preview,8,"The paper is relevant to prompt engineering study because it discusses an advanced method of enabling Large Language Models (LLMs) to use tools through the generation of an instruction-following dataset using a form of prompt engineering. It specifically mentions 'sophisticated prompt engineering' as a crucial component for LLMs tool usage capabilities. Although the focus is more on self-instruction and tool usage within multimodal contexts, prompt engineering is a significant part of the methodology used in teaching the LLMs. However, it does not focus exclusively on 'hard prefix prompts,' which would be central to a study specifically addressing prompt engineering, hence the rating is not a full 10."
-simulating h.p. lovecraft horror literature with the chatgpt large language model,gpt-4-1106-preview,9,"The study directly investigates the application and effectiveness of prompt engineering techniques to guide a language model's output to emulate H.P. Lovecraft's horror literature style. Given that the focus is on both the generation of text in a specific literary style and the examination of prompt engineering methods, this is highly relevant to the field of prompt engineering. The rating is not a perfect 10 because the study also delves into the model's architecture and comparative analysis, which, while related, are not exclusively focused on prompt engineering."
-s3: social-network simulation system with large language model-empowered agents,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering, as it explicitly mentions the use of prompt engineering and prompt tuning techniques to shape the behavior of agents within the social network simulation system. It indicates that these techniques are critical for the agents' performance in emulating human-like sensing, reasoning, and behavior, which are key in the context of the study. The rating is not a full 10 because the abstract does not provide detailed insight into the nature of the prompt engineering study or its findings specific to the 'hard prefix prompts', which is the specific focus of the prompt engineering study in question."
-hierarchical prompting assists large language model on web navigation,gpt-4-1106-preview,8,"The abstract discusses a hierarchical prompting approach specifically designed to improve the performance of large language models on tasks involving complex observations, such as web navigation. While this is not directly related to 'hard prefix prompts', it falls under the broader category of prompt engineering which aims to enhance how models interpret and react to prompts. The hierarchical structure mentioned involves creating more efficient prompts that enable better decision making. Therefore, the study is highly relevant to the field of prompt engineering, albeit with a specific focus on a hierarchical strategy rather than hard prefix prompting techniques."
-co-training improves prompt-based learning for large language models,gpt-4-1106-preview,9,"The abstract describes research on enhancing prompt-based learning with co-training, which is directly relevant to the field of prompt engineering. It explores methods to improve and iterate on prompt models, which are integral to the efficiency and effectiveness of large language models like GPT-3. Although the title and abstract do not specifically mention 'hard prefix prompts,' the systematic review of improving prompt-based learning in LLMs is encompassed within the broader scope of prompt engineering. A small deduction is made because the exact term 'hard prefix prompts' was not discussed, but the overall content is highly pertinent."
-prompt text classifications with transformer models! an exemplary introduction to prompt-based learning with large language models,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it investigates prompt-based learning, a key concept within this field, especially as it pertains to the use of transformer models and large language models for classification tasks. Although it does not specifically mention engineering 'hard prefix prompts', it still examines the broader subject of using prompts in machine learning. The emphasis on the practical application of prompt-based learning and comparison with human ratings also adds value to the context of prompt engineering."
-on robustness of prompt-based semantic parsing with large pre-trained language model: an empirical study on codex,gpt-4-1106-preview,7,"The study is relevant to prompt engineering to a significant extent as it investigates the robustness of prompt-based semantic parsing with a large pre-trained language model such as CODEX, which is a practical aspect of prompt engineering. However, the focus is more on adversarial robustness and less on hard prefix prompts specifically. As the study involves understanding how prompts work with a language model trained on code, it has implications for the design of prompts (engineering) for better robustness, which is a critical aspect of prompt design. Nevertheless, the absence of a direct investigation into 'hard prefix prompts' as suggested by the original prompt limits the full relevance of this study to the prompt engineering field described in the initial question."
-investigating the translation performance of a large multilingual language model: the case of bloom,gpt-4-1106-preview,7,"The relevance of this study to prompt engineering is fairly high as it touches upon prompt design within the context of evaluating a multilingual language model's performance in machine translation tasks. While the study is not exclusively focused on 'hard prefix prompts,' it does examine how variations in prompts (0-shot vs. few-shot settings) influence the language model's output. Therefore, the investigation of prompt design as a factor in model performance is pertinent to the broader field of prompt engineering, particularly as it relates to enhancing the model's understanding and generating the correct language output. However, the rating is not a full 10 since the primary focus is on the translation performance rather than on prompt engineering methodologies or prompt optimization techniques exclusively."
-cold-start data selection for few-shot language model fine-tuning: a prompt-based uncertainty propagation approach,gpt-4-1106-preview,8,"The title and abstract are highly relevant to prompt engineering as they discuss 'PATRON', a method utilizing prompt-based approaches for improving the data selection process in few-shot learning scenarios for language models. This method directly relates to engineering prompts to handle uncertainty, which is a subset of the broader field of prompt engineering. However, the study does not seem to concentrate on 'hard prefix prompts', which is the specific type mentioned in the prompt. Hence, it may not cover the full scope of the systematic review on hard prefix prompts if that is the sole focus, but still remains very relevant to the broader category of prompt engineering studies."
-can large language models reason about medical questions?,gpt-4-1106-preview,8,"This abstract is highly relevant to prompt engineering as it discusses the effectiveness of different prompting scenarios such as Chain-of-Thought, zero-shot, few-shot, and retrieval augmentation in eliciting accurate responses from large language models for medical questions. The study focuses on the LLM's reasoning capabilities in the context of complex real-world questions, which is a critical component of prompt engineering, especially when evaluating the utility and reliability of prompts in domain-specific knowledge tasks."
-tabllm: few-shot classification of tabular data with large language models,gpt-4-1106-preview,7,"The study addresses the conversion of tabular data to a natural-language string for classification tasks, which can be considered a specific form of prompt engineering. The relevance lies in the fact that the process involves crafting prompts that enable a language model to interpret and classify non-textual data efficiently. However, the study's primary focus is on tabular data and classification tasks, rather than the broader topic of hard prefix prompts used across various types of data and tasks in prompt engineering. Therefore, the rating is a 7, indicating that the study is relevant but not fully aligned with a systematic review of hard prefix prompts in prompt engineering."
-prompting is programming: a query language for large language models,gpt-4-1106-preview,9,"The abstract provided discusses Language Model Programming (LMP) and Language Model Query Language (LMQL), which are novel approaches to prompt engineering. The focus on an efficient inference procedure and the ability to impose constraints on language model outputs is highly relevant to the field of prompt engineering, as it aims to optimize the way we interact with language models. The relevance is not rated a full 10 only because prompt engineering can encompass a broader range of techniques and considerations beyond the specific innovations of LMP and LMQL, such as different prompting strategies, the study of few-shot learning, etc. However, the presented work is undeniably pertinent and likely to contribute significantly to the advancement of prompt engineering methodologies."
-large language models are reasoning teachers,gpt-4-1106-preview,8,"The paper is highly relevant to the study of prompt engineering as it discusses an advanced technique, Fine-tune-CoT, which generates reasoning samples from large models to improve the prompt-based capabilities of smaller models. Although the technique focuses on fine-tuning smaller models rather than the creation of prompts per se, the central idea of using larger models as a 'reasoning teacher' is deeply intertwined with generating more effective prompts that leverage the large model's understanding to enhance reasoning in smaller models. This contributes to the field of prompt engineering by optimizing the efficiency and capability of prompts in eliciting desired responses, particularly for complex reasoning tasks."
-class-aware visual prompt tuning for vision-language pre-trained model,gpt-4-1106-preview,9,"The title and abstract of the paper indicate a high relevance to prompt engineering as the study focuses on tuning prompts for a vision-language pre-trained model, which involves modifying and optimizing the input prompts to elicit desired responses from the model. Although the paper does not explicitly mention 'hard prefix prompts', it falls within the broader category of prompt engineering by exploring 'visual prompts' and 'text prompts'. This makes it significantly relevant to the topic of prompt engineering study as it contributes to the understanding of how to efficiently tune and adapt pre-trained models to specific tasks through prompt modifications."
-analogy generation by prompting large language models: a case study of instructgpt,gpt-4-1106-preview,9,"The study's focus on prompt design and its effectiveness in generating analogies is highly relevant to prompt engineering. It explores how different prompts affect InstructGPT's output, which is a core aspect of the field. The sensitivity analysis of prompt structure and variations is also pertinent to understanding how to engineer prompts for better performance. The rating is not a full 10 because the study is specifically about analogy generation, so it might not cover other aspects of prompt engineering comprehensively."
-using large language models to simulate multiple humans,gpt-4-1106-preview,8,"The presented abstract is highly relevant to prompt engineering as it discusses the use of prompt templates to generate varied responses from a language model in the context of behavioral experiments. The methodology relies heavily on designing effective prompts to ensure the simulation accuracy of human responses. This is directly related to prompt engineering, as it requires an understanding of how to tailor prompts to elicit specific reactions from the model. The study's validation and exploration of model responses to different scenarios are a core part of prompt engineering research. However, the study does not explicitly focus on 'hard prefix prompts', thus the rating is not a full 10."
-large language models in the workplace: a case study on prompt engineering for job type classification,gpt-4-1106-preview,9,"The abstract provided discusses a case study that centers on the use of prompt engineering for the specific task of job classification. It details the comparative performance analysis of various models including state-of-the-art GPT-3.5-based language models. Considering that prompt engineering is both a focus of the study and is used as a tool to direct the language models toward the desired classification task, the relevance to prompt engineering is very high. A point is subtracted because the details on 'hard prefix prompts' specifically are not mentioned, which could be an aspect of prompt engineering but is not explicitly covered in the abstract provided."
-soft-prompt tuning for large language models to evaluate bias,gpt-4-1106-preview,7,"The abstract discusses 'soft-prompt tuning' for evaluating biases in large language models, which is related to prompt engineering as it involves the refinement of prompts to achieve specific outcomes from language models. However, the study focuses specifically on sentiment classification tasks and the evaluation of bias, not on 'hard prefix prompts' as specified in the original query for a comprehensive systematic review. Therefore, the relevance to the precise subject of 'hard prefix prompts' is indirect, hence the rating of 7, indicating moderate relevance to prompt engineering but not closely aligned with the original request for information on hard prefix prompts."
-promptify: text-to-image generation through interactive prompt exploration with large language models,gpt-4-1106-preview,8,"The paper describes 'Promptify', a system designed to aid in prompt engineering for text-to-image generation by making the process interactive, which is highly relevant to the study of prompt engineering. While it doesn't specifically address 'hard prefix prompts,' the general field of designing and refining prompts to achieve better alignment with user intent is central to prompt engineering. The suggestion engine's utilization of large language models to aid in crafting prompts further aligns this work with the broader domain of prompt engineering. However, the paper's focus on text-to-image and not purely text outputs means it's not a complete overlap with prompt engineering studies that may deal with a variety of output modalities (e.g., text-to-text, text-to-speech), hence the rating is not a full 10."
-you only prompt once: on the capabilities of prompt learning on large language models to tackle toxic content,gpt-4-1106-preview,8,"The study directly investigates the use of prompt learning with large language models, which is a clear application of prompt engineering. It focuses on how prompting these models can be used to address toxicity, a significant part of language model applications. The relevance is high because it involves creating prompts for classification, detection, and detoxification tasks. However, the study is specific to toxic content moderation, which is a subset of prompt engineering, hence not a full 10."
-controlling the extraction of memorized data from large language models via prompt-tuning,gpt-4-1106-preview,8,"The abstract details a study that is highly relevant to prompt engineering, as it directly involves the technique of prompt-tuning to manipulate the behavior of Large Language Models. It is relevant to the study of controlling the output of such models, particularly concerning data extraction and privacy issues, which are key considerations in prompt engineering. The deduction of two points reflects that the abstract specifically focuses on the memorization aspect and the privacy concerns rather than the broader field of prompt engineering or hard prefix prompts in general."
-sensitivity and robustness of large language models to prompt template in japanese text classification tasks,gpt-4-1106-preview,8,"The given abstract is highly relevant to prompt engineering as it investigates the effects of prompt template modifications on the performance of Large Language Models (LLMs), specifically in the context of Japanese text classification tasks. It addresses critical aspects of prompt engineering, such as sensitivity and robustness of language models to changes in prompt templates. The study's focus on how simple changes can lead to significant discrepancies in model performance is directly linked to prompt engineering. The rating is not a full 10 because the abstract mentions a specific application (Japanese text classification) rather than providing a broader analysis across various applications and languages, which could impact the generalizability of the findings to all areas of prompt engineering."
-sensitivity and robustness of large language models to prompt in japanese,gpt-4-1106-preview,8,"The paper focuses on the sensitivity and robustness of Large Language Models to prompt changes, which is a core aspect of prompt engineering. It is highly relevant as it evaluates how minor alterations in prompts can impact model performance, directly relating to the study of prompt engineering. The slight deduction in rating is because it does not address 'hard prefix prompts,' the specific type of prompt mentioned in the original query, but rather the broader concept of prompt sensitivity and robustness in the context of Japanese language prompts."
-bounding the capabilities of large language models in open text generation with prompt constraints,gpt-4-1106-preview,9,"The abstract presents a relevant study in the area of prompt engineering as it focuses on analyzing and bounding abilities of generative models with a prompt-centric approach. The researchers' use of structural and stylistic constraints directly pertains to prompt engineering, given that they are well-defined constraints that can affect how prompts guide model generation. The relevance is further supported by the use of a major model like GPT-3 as a case study and the consideration of generalizability to other large models. The deduction of one point is due to the absence of specific details about 'hard prefix prompts' from the given abstract, though the content is strongly related to prompt engineering overall."
-linguist: language model instruction tuning to generate annotated utterances for intent classification and slot tagging,gpt-4-1106-preview,9,"The abstract describes a method called LINGUIST which involves fine-tuning a large language model using a flexible instruction prompt to improve the generation of annotated data for Intent Classification and Slot Tagging. This process is closely related to prompt engineering, as it involves the specific design of prompts to achieve desired outcomes in the model's performance. Although it is not exclusively focused on 'hard prefix prompts,' the practice of instruction tuning and prompt design to guide the model's output makes this study highly relevant to the field of prompt engineering. The fine-tuning on instruction prompts is a subset of prompt engineering that has a broad impact on the data generation process for natural language understanding tasks."
-conal: anticipating outliers with large language models,gpt-4-1106-preview,8,"The abstract describes a methodology for improving text classification models' handling of out-of-distribution (OOD) examples by generating these examples via prompts to a large language model. The relevance to prompt engineering lies in the fact that it utilizes prompt-based techniques to generate new datasets that represent novel classes, which is a part of the broader field of prompt engineering. While the study does not focus on 'hard prefix prompts' specifically, the process of generating prompts to create OOD examples is an integral part of prompt engineering. Therefore, the relevance is rated as high but not maximal due to the specific approach not being the central topic of prompt engineering studies."
-variational prompt tuning improves generalization of vision-language models,gpt-4-1106-preview,9,"The presented study is highly relevant to prompt engineering as it explores an innovative approach to prompt tuning for vision-language models, which is a key area in the field. It proposes a method that enhances the generalization capabilities of foundational models by using a probabilistic model to generate prompts. This addresses a common issue with prompt tuning where prompts may be too narrow or specific, thus hindering the ability of the model to generalize. The mention of integration with standard and conditional prompt learning frameworks suggests that this study is specifically tailored towards improving the efficacy of prompt engineering in practical applications. The only reason it doesn't receive a perfect score is because the study focuses on vision-language models, and while it is highly relevant, it may not encompass all aspects of prompt engineering that might be applicable in purely language-based models."
-prompt-and-rerank: a method for zero-shot and few-shot arbitrary textual style transfer with small language models,gpt-4-1106-preview,8,"The abstract describes a method that directly involves prompt engineering through the use of zero-shot or few-shot prompting as part of a 'Prompt-and-Rerank' process for textual style transfer. Deliberate prompt design choices are discussed as affecting the quality of style transfer, including the use of prompt paraphrasing and delimiter-pair choice. This directly ties to the area of prompt engineering as it is about optimizing the prompts given to language models to achieve a certain task. However, the relevance is not a full 10 as the primary focus is on textual style transfer rather than the structure and formulation of the prompts themselves which would constitute a comprehensive systematic review on hard prefix prompts."
-visual-language navigation pretraining via prompt-based environmental self-exploration,gpt-4-1106-preview,8,"The abstract presents a study on improving Vision-Language Navigation (VLN) by utilizing a method called Prompt-based Environmental Self-exploration (ProbES). This involves prompt tuning for language embeddings to adapt a pretrained model like CLIP to new environments without human supervision. Although not directly concerned with 'hard prefix prompts', it relates to prompt engineering significantly as it deals with the adaptation and tuning of prompts to enhance learning efficiency in AI models. The focus is more on vision-language applications and self-exploration but it still falls under the broad umbrella of prompt engineering."
-prcbert: prompt learning for requirement classification using bert-based pretrained language models,gpt-4-1106-preview,9,"The relevance of the given paper to the study of prompt engineering is high. The paper discusses the application of prompt learning, a technique within prompt engineering, to the domain of software requirement classification using BERT-based pre-trained language models. Since it explicitly deals with the use of prompt templates to improve classification performance, it is highly relevant to the prompt engineering field, particularly in the context of applying these techniques to domain-specific tasks. However, the focus appears to be more on the classification performance rather than the prompt engineering methodology itself, which is why the rating is not a full 10."
-fundamental limitations of alignment in large language models,gpt-4-1106-preview,8,"The abstract discusses the concept of 'alignment' in language models and the theoretical approach to understand the limitations of alignment, which is highly relevant to prompt engineering. The Behavior Expectation Bounds (BEB) framework mentioned in the abstract directly relates to how prompts can influence a model's behavior, which is a core component of prompt engineering. The paper addresses the ability to trigger particular behaviors in large language models through the use of prompts, making it pertinent to the study of hard prefix prompts and how they can be engineered. Although the focus seems to be on the alignment aspect rather than the specific structure and content of prompts (i.e., 'hard prefixes'), the findings about adversarial prompting and the length of the prompt influencing behavior is crucial for the domain of prompt engineering. Therefore, I've rated it an 8 as it is quite pertinent but not exclusively centered on hard prefix prompts."
-synthetic prompting: generating chain-of-thought demonstrations for large language models,gpt-4-1106-preview,9,"The relevance of the given article to prompt engineering is very high. Synthetic prompting, as described, directly addresses the creation and refinement of prompts for large language models, aiming to improve their reasoning capabilities. The systematic approach to generating chain-of-thought demonstrations ties closely to the study and evolution of prompt engineering techniques. It demonstrates the iterative process of generating questions and enhancing reasoning chains, which is at the heart of prompt engineering. The only reason it doesn't receive a perfect score is because the content might not be exclusively focused on 'hard prefix prompts' as mentioned in the original request, but rather on the broader concept of prompt generation and optimization."
-prompting large language models with answer heuristics for knowledge-based visual question answering,gpt-4-1106-preview,8,"The relevance to prompt engineering is high, as the study directly addresses the utilization of prompts in improving the performance of a large language model (GPT-3) for the specific task of knowledge-based visual question answering (VQA). The approach involves training a model to generate 'answer heuristics' which are then used as part of the prompts to refine GPT-3's understanding of the questions, thereby enhancing its ability to produce accurate answers. This method represents a novel application of prompt engineering, highlighting its effectiveness in extracting and utilizing implicit knowledge for complex tasks. However, the focus is particularly on incorporating answer heuristics into prompts for a VQA task rather than on hard prefix prompts in general, so the rating is not a perfect 10."
-large language models are effective text rankers with pairwise ranking prompting,gpt-4-1106-preview,9,"The paper addresses a technique called Pairwise Ranking Prompting (PRP) which is highly relevant to the field of prompt engineering for large language models (LLMs). It contributes to the understanding of how different prompting methods can affect the capabilities of LLMs in the context of ranking tasks. As prompt engineering is largely about optimizing the interaction between users and LLMs for specific tasks, a study that advances the state-of-the-art in this manner is closely related to prompt engineering studies."
-exploring the mit mathematics and eecs curriculum using large language models,gpt-4-1106-preview,7,"The abstract describes a study where large language models are evaluated and fine-tuned for solving mathematics and EECS problems, which relates to prompt engineering in terms of optimizing inputs to enhance model performance. GPT-4's 'perfect solve rate' with prompt engineering indicates a direct application of prompt engineering techniques. However, the study focuses more broadly on the model's capabilities in academic problem-solving rather than strictly on prompt engineering methodologies and their systematic review, which would be the core interest of a 'hard prefix prompts' study. Hence, the relevance is strong but not complete."
-sequential monte carlo steering of large language models using probabilistic programs,gpt-4-1106-preview,8,"The paper presents a method for controlling the outputs of large language models using sequential Monte Carlo steering, which is highly relevant to prompt engineering as it deals with influencing and guiding the performance of these models at inference time. This approach could be viewed as an advanced form of prompt engineering where the prompts are not fixed but are instead dynamic and take into account syntactic and semantic constraints. Although it does not explicitly tackle 'hard prefix prompts', it proposes a method that is applicable to prompt engineering in a broader sense. Hence, the relevance is high but not absolute, as it is not directly focusing on a 'systematic review' or explicitly on 'hard prefix prompts'."
-fineval: a chinese financial domain knowledge evaluation benchmark for large language models,gpt-4-1106-preview,7,"While the title 'fineval: a chinese financial domain knowledge evaluation benchmark for large language models' and abstract presented do not directly deal with 'prompt engineering' in the context of designing or studying hard prefix prompts, the mention of employing various prompt types (zero-shot, few-shot, answer-only, and chain-of-thought) within the evaluation benchmark touches on the principles of prompt engineering. Assessing different prompting strategies is essential to understanding how LLMs like GPT-4 respond in domain-specific tasks. The study's focus on measuring the performance of these LLMs using a set of prompts tailored for the financial domain implies a level of relevance to prompt engineering, as it would provide insights into the effectiveness of prompt design in eliciting the desired response from the models. However, the absence of a specific focus on the systematic review of hard prefix prompts limits the rating from being higher."
-analyzing chain-of-thought prompting in large language models via gradient-based feature attributions,gpt-4-1106-preview,9,"The provided abstract is highly relevant to the field of prompt engineering, as it focuses on the Chain-of-thought (CoT) prompting method, which is an advanced tactic in prompting for large language models. The study investigates the impact CoT has on the models' interpretation and weighting of input tokens, which is a fundamental aspect of prompt engineering. Although the paper does not specifically address 'hard prefix prompts,' the examination of CoT prompting mechanisms contributes valuable insights into the broader topic of prompt design effectiveness in LLMs, making it pertinent to the prompt engineering study. The reduction in relevancy score from a perfect 10 to a 9 is due to the specified focus on CoT rather than hard prefix prompts specifically."
-"utilizing large language models to simplify radiology reports: a comparative analysis of chatgpt-3.5, chatgpt-4.0, google bard, and microsoft bing",gpt-4-1106-preview,8,"The presented study, while not focusing on 'hard prefix prompts' specifically, addresses the broader field of prompt engineering by evaluating the effectiveness of different prompts in guiding LLMs to simplify radiology reports. Since the performance variation based on the type of prompt used is central to the paper, it contributes relevant insights into how prompts can be engineered for specific applications in medical communication. Thus, the relevance is high, but not a perfect score due to it not focusing exclusively on 'hard prefix prompts'."
-understanding the effectiveness of very large language models on dialog evaluation,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it investigates the structure of prompts and their impact on the performance of various large language models in dialog evaluation tasks. While it does not specifically address 'hard prefix prompts,' it does concern the broader category of prompting and example selection, which are integral components of prompt engineering. The systematic review of how the datasets influence prompt construction and the exploration of example quantity and selection type are directly related to understanding and optimizing prompt efficacy."
-identifying and extracting rare disease phenotypes with large language models,gpt-4-1106-preview,9,"The abstract describes a study focused on the development and evaluation of novel prompts for named entity recognition (NER) in the context of extracting rare disease (RD) phenotypes using large language models such as ChatGPT. This work is highly relevant to the field of prompt engineering as it directly involves designing and testing prompts to improve NER performance in zero-shot and few-shot settings, as well as comparing these results to traditional fine-tuning methods. This investigation contributes to understanding the potential and limitations of prompt engineering in practical applications, although it is specific to a particular domain of rare diseases."
-prompt tuning or fine-tuning - investigating relational knowledge in pre-trained language models,gpt-4-1106-preview,8,"The relevance of the study to prompt engineering is high since it directly deals with the optimization of query prompts for relational knowledge extraction from pre-trained language models. The study compares prompt tuning techniques against adaptive fine-tuning, which is an essential contrast in the field of prompt engineering, as it investigates how pre-trained models can be made more efficient in understanding and responding to prompts without extensive additional training. While the paper does not focus solely on 'hard prefix prompts', it addresses the broader topic of optimizing prompts for better model performance which is integral to prompt engineering studies."
-on transferability of prompt tuning for natural language understanding,gpt-4-1106-preview,9,"The provided abstract is highly relevant to prompt engineering study, specifically within the domain of natural language understanding. It discusses prompt tuning, an essential aspect of prompt engineering, where the reusability and transferability of prompts across different tasks and models are investigated. The exploration of knowledge transfer for improving prompt tuning efficiency is directly applicable to strategies in prompt engineering for large pre-trained language models. The reason for not giving a perfect score is the absence of a direct mention of 'hard prompts,' but the study's content is still very pertinent to the broader field of prompt engineering."
-knowledge-augmented language model prompting for zero-shot knowledge graph question answering,gpt-4-1106-preview,8,"The relevance of this study to prompt engineering is significant, as it involves the augmentation of input prompts with factual information retrieved from a knowledge graph to improve the performance of Large Language Models (LLMs) in answering questions. This approach directly pertains to prompt engineering by structuring the input to LLMs in a way that aids in zero-shot knowledge graph question answering. Although the focus is not specifically on 'hard prefix prompts,' the method does relate to constructing effective prompts that align with the principles of prompt engineering. The high rating reflects the close relation of knowledge augmentation in prompting to enhance model performance without additional training, which is a core aspect of prompt engineering. The rating is not a perfect 10 because the study specifies a specialized application in knowledge graphs and does not broadly survey prompt engineering techniques or include a systematic review of hard prefix prompts generally."
-purr: efficiently editing language model hallucinations by denoising language model corruptions,gpt-4-1106-preview,7,"The study discusses improving the editing and attribution of language model outputs through prompt-based editing methods, which is closely related to prompt engineering. However, the focus is specifically on reducing hallucinations and improving efficiency, rather than on hard prefix prompts. While it does pertain to the broader category of prompt engineering, it does not address the systematic review of hard prefix prompts directly, hence the relevance rating is above average but not maximum."
-revealing the unwritten: visual investigation of beam search trees to address language model prompting challenges,gpt-4-1106-preview,8,"The study is highly relevant to the field of prompt engineering as it explores prompt refinement and the intricacies of guiding outputs of generative language models. By introducing a method to investigate the beam search tree visually, it aids in understanding how prompts affect generation, which is a key area in prompt engineering. The paper focuses on improving human understanding of the model decision-making process, which is crucial for effective prompt engineering. Although it does not directly address 'hard prefix prompts,' the broader topic of prompt refinement and model output guidance is closely related to prompt engineering. The rating is not a full 10 because it is not specific to 'hard prefix prompts,' but it is still highly relevant to the general area of study."
-training language models to follow instructions with human feedback,gpt-4-1106-preview,7,"The abstract describes a study where language models are fine-tuned with human feedback to improve their alignment with user intent, which is a form of prompt engineering. The process of creating 'InstructGPT' involves using prompts and enhancing the model's response to them; thus, it's relevant to the study of how prompts can be engineered to elicit better responses from language models. However, the study focuses more broadly on model alignment rather than specifically on 'hard prefix prompts', which might be a more technical aspect of prompt engineering. Therefore, it does not entirely focus on hard prefix prompts but is still significantly related to the general field of prompt engineering."
-"translating radiology reports into plain language using chatgpt and gpt-4 with prompt learning: results, limitations, and potential",gpt-4-1106-preview,7,"The relevance to prompt engineering is significant, given that the title suggests the study involves using GPT models to translate radiology reports and this would likely involve devising specific prompts to generate plain language explanations. This indicates the research is about the application of prompt engineering to improve language model outputs in a clinical education context. However, the absence of detailed information in the abstract limits the ability to fully assess the degree to which prompt engineering is the focus of the study, so the rating is not a full 10."
-a systematic survey of prompt engineering on vision-language foundation models,gpt-4-1106-preview,9,"The abstract provided is highly relevant to prompt engineering, as it specifically addresses the application of prompt engineering techniques to vision-language foundation models. These are a subset of tasks within the broader field of prompt engineering. The abstract indicates a systematic review of how prompts are used in this context, discusses different types of models and how they are prompted, and outlines research directions in prompt engineering. While it does not exclusively focus on 'hard prefix prompts', which would be the only aspect potentially limiting a perfect score, the content is indeed directly related to studies on prompt engineering, hence the high relevance rating." -pouf: prompt-oriented unsupervised fine-tuning for large pre-trained models,gpt-4-1106-preview,8,"The abstract describes a study focused on prompt-oriented unsupervised fine-tuning for pre-trained models, which is highly relevant to the field of prompt engineering. Although it does not specifically mention 'hard prefix prompts,' the concept of aligning discrete distributions from prompts and target data, as well as the application to various tasks, indicates a strong connection to the techniques and objectives in prompt engineering. The fact that it involves unsupervised learning approaches to enhance the performance of the models on unlabeled data by using prompts makes it valuable to the prompt engineering study despite it not being a systematic review or explicitly focused on 'hard prefix prompts'." -model ensemble instead of prompt fusion: a sample-specific knowledge transfer method for few-shot prompt tuning,gpt-4-1106-preview,9,"The abstract discusses the approach of improving few-shot performance of prompt tuning through knowledge transfer and model ensembles, directly targeting the optimization of prompt engineering. Although it does not specifically mention 'hard prefix prompts', it is highly relevant to the broader area of prompt engineering which involves techniques to better adapt large language models to specific tasks with minimal examples. The proposed SESoM focuses on sample-specific adaptation that is a key aspect of prompt engineering, thus justifying the high relevance rating." -attentional mixtures of soft prompt tuning for parameter-efficient multi-task knowledge sharing,gpt-4-1106-preview,9,"The abstract describes a study on a new method for parameter-efficient language model tuning called ATTEMPT, which utilizes a novel approach of soft prompt tuning for multi-task knowledge sharing. This is highly relevant to prompt engineering as it directly involves the development and optimization of prompts that influence the behavior of language models. The introduction of a light-weight sub-network for computing instance-wise attention for prompt interpolation is a significant contribution to the field. The fact that this approach contributes to multi-task learning, parameter efficiency, and interpretability in prompt tuning makes it extremely pertinent. The reason why the rating is not a perfect 10 is that the abstract does not mention 'hard prefix prompts' specifically, which was the exact interest stated in the initial 'prompt engineering study' query." 
-prompting large pre-trained vision-language models for compositional concept learning,gpt-4-1106-preview,8,"The abstract describes research on the use of prompt-based learning within vision-language models, focusing on compositional learning. While the study emphasizes the use of 'soft-prompting' as opposed to 'hard-prompting', it still falls under the broader category of prompt engineering. The work is highly relevant to the field as it explores how prompts can be engineered to enhance the performance of machine learning models, which is a core part of prompt engineering studies. The rating is not a perfect 10 because the study does not exclusively deal with 'hard prefix prompts' as specified in the initial request but instead focuses on an alternative method within the same field." -proqa: structural prompt-based pre-training for unified question answering,gpt-4-1106-preview,9,"The abstract of 'proqa: structural prompt-based pre-training for unified question answering' is highly relevant to the study of prompt engineering. It details the use of structural prompts as a method to train a QA system, thus highlighting an approach to prompt engineering. The paper not only presents a model that is pre-trained with structural prompt-formatted data but also emphasizes the model's performance on benchmarks and its abilities in various learning scenarios. Although it doesn't specifically mention 'hard prefix prompts', the focus on structural prompt-based pre-training indicates a strong connection to prompt engineering studies." -novelty controlled paraphrase generation with retrieval augmented conditional prompt tuning,gpt-4-1106-preview,8,"The abstract describes research related to adapting pre-trained language models using a method called Retrieval Augmented Prompt Tuning and a variation for controlling lexical novelty in paraphrases. Although the study does not directly address 'hard prefix prompts', it is closely related to prompt engineering because it involves the use of specialized prompt tokens and is model-agnostic, which contributes to prompt engineering literature. This relevance is bolstered by the fact that altering prompts to control generation outcomes is a key area within prompt engineering. The study's focus on parameter-efficiency and controlled generation is not the primary focus of hard prefix prompts, hence the rating is not a full 10 but is still relatively high due to the overlapping interests." -discup: discriminator cooperative unlikelihood prompt-tuning for controllable text generation,gpt-4-1106-preview,9,"The paper describes an advanced technique for prompt learning with Causal Language Models, focusing on attribute-controllable text generation, which is a core aspect of prompt engineering. The method of utilizing a discriminator to refine the generation process is directly relevant to the study of hard prefix prompts and their optimization in prompt engineering. The relevance is not a perfect 10 since the abstract does not specifically mention 'hard prefix prompts,' yet the overall topic is highly pertinent to the field." -deep continuous prompt for contrastive learning of sentence embeddings,gpt-4-1106-preview,8,"The title and abstract describe a study that is highly relevant to prompt engineering, particularly with regard to optimizing and innovating within the framework of contrastive learning and sentence embeddings. 
The proposed method involves 'prefix deep continuous prompts,' which aligns with prompt engineering, though it does not explicitly mention 'hard prefix prompts.' Nonetheless, the focus on efficiently prompting a language model without full fine-tuning is a significant contribution to the field of prompt engineering. The emphasis on performance improvement with minimal parameter tuning and the avoidance of handcrafted prompt search provides valuable insights for prompt engineering studies. Thus, the relevance is rated high, but not full, due to the lack of direct reference to 'hard prefix prompts.'" -improving the sample efficiency of prompt tuning with domain adaptation,gpt-4-1106-preview,9,"The given abstract describes research focused on improving the efficiency of prompt tuning for pretrained language models through domain adaptation methods. Although it does not directly mention the term 'hard prefix prompts', the study investigates 'soft prompts' and is highly relevant to the broader field of prompt engineering. It addresses a key challenge in the area, which is enhancing performance in data-scarce situations—a topic of interest for prompt engineering. The proposed OPTIMA method and its potential to improve the transferability and sample efficiency of prompt tuning are of significant value to prompt engineering studies. The rating is not a full 10 as the study might not be exclusively focused on hard prefix prompts, but it remains extremely relevant to the subject matter." -prompt-augmented linear probing: scaling beyond the limit of few-shot in-context learners,gpt-4-1106-preview,8,"The paper addresses an advanced technique in prompt engineering by combining linear probing with in-context learning, which directly pertains to how language models are prompted to enhance their understanding and usage of data. The concept of 'prompt-augmented linear probing' (PALP) is relevant to the field of prompt engineering as it seeks to improve the model's performance by carefully designing prompts that fit within the input constraints of language models and make the input more understandable for the model. This is central to the study of prompt engineering. However, it does not specifically address 'hard prefix prompts', though the technique may still be applicable to that subset of prompt engineering. The TLDR section does not provide information in this context, hence the rating is not a full 10." -reduce communication costs and preserve privacy: prompt tuning method in federated learning,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it discusses 'prompt tuning,' which is a method within the field of natural language processing that directly relates to how prompts are engineered and optimized. While the primary focus of the study appears to be on the application of prompt tuning in federated learning, which entails privacy-preserving and communication-efficient aspects, it still contributes to the broader understanding of prompt engineering by showcasing its efficiency and robustness in different data distribution scenarios. The presence of a 'backdoor threat' evaluation further adds to its relevance as it touches on the security aspect of prompt engineering." 
-doubly right object recognition: a why prompt for visual rationales,gpt-4-1106-preview,7,"The abstract discusses the development of a 'why prompt' for visual recognition models, which is relevant to the study of prompt engineering as it involves creating prompts that guide models to give not only correct classifications but also the underlying rationales. Although the study is focused more on visual rationales and the intersection of language models with visual models, it still pertains to the broader category of prompt engineering. However, it is not directly related to 'hard prefix prompts' specifically, as it doesn't mention them explicitly, leading to a slightly lower relevance rating." -xprompt: exploring the extreme of prompt tuning,gpt-4-1106-preview,9,"The paper directly relates to the domain of prompt engineering, as it explores prompt tuning techniques and their impact on performance with Pre-trained Language Models (PLMs). The research addresses a specific issue in prompt engineering—negative impact of trained prompt tokens—and introduces a novel solution (XPrompt) to mitigate this issue. Therefore, it is highly relevant to studies focused on refining the application of prompts in PLMs. The only reason it does not receive a full score of 10 is that the prompt does not specifically mention 'hard prefix prompts,' so it may slightly deviate from that narrow aspect of prompt engineering study if the method described does not strictly apply to hard prompts." -automatic prompt augmentation and selection with chain-of-thought from labeled data,gpt-4-1106-preview,9,"The content of the presented paper is highly relevant to prompt engineering study due to its focus on Chain-of-thought prompting (CoT), which is a technique used in prompt engineering. Automate-CoT, the proposed strategy in the paper, directly addresses the process of generating and selecting prompts in an automated fashion, which aligns with the core components of prompt engineering. This technique also impacts how language models can be efficiently used in various reasoning tasks, which are central to the application of prompt engineering. The reason the rating is not a perfect 10 is because the abstract does not specifically mention 'hard prefix prompts' that the user inquiry is about; instead, it refers to CoT in a general sense." -multitask prompt tuning enables parameter-efficient transfer learning,gpt-4-1106-preview,8,"The provided abstract describes a method for prompt tuning in the context of adapting large language models to various tasks, which is highly relevant to the field of prompt engineering. Multitask prompt tuning (MPT) is a technique that is specifically designed to create versatile prompts that are applicable across multiple tasks, indicating a direct application to prompt engineering. The abstract focuses on the efficient use of prompts and parameter tuning, which are central themes in prompt engineering studies. However, the abstract does not directly mention 'hard prefix prompts' but rather it discusses soft prompts and their adaptation for multitask learning, so it may not be fully comprehensive in the context of a systematic review on hard prefix prompts. This is why the rating is not a full 10." -declaration-based prompt tuning for visual question answering,gpt-4-1106-preview,8,"The paper presents a method for fine-tuning visual-language models for VQA tasks (Declaration-based Prompt Tuning, DPT), which involves aligning downstream task objectives with pre-training objectives. 
While the paper focuses on an application within cross-modal tasks (visual question answering), the method of 'prompt tuning' is central to 'prompt engineering,' which involves designing inputs that efficiently guide models to perform specific tasks. Therefore, the concept of reformulating questions into declarative sentence form for prompt tuning is highly relevant to the study of prompt engineering, albeit in a more specialized context." -prompt generation networks for efficient adaptation of frozen vision transformers,gpt-4-1106-preview,9,"The abstract describes a new method in prompt engineering, the Prompt Generation Network (PGN), which is highly relevant to the study of how to efficiently adapt frozen vision transformers for various tasks without fine-tuning. The fact that PGN pertains to learning input-dependent prompts places it within the domain of prompt engineering. The reason it is not a full 10 is that it might not cover 'hard prefix prompts' specifically as the systematic review requires, but rather discusses a more generalized approach to prompt engineering." -spt: semi-parametric prompt tuning for multitask prompted learning,gpt-4-1106-preview,9,"The study titled 'spt: semi-parametric prompt tuning for multitask prompted learning' is highly relevant to prompt engineering since it directly deals with an innovative method for prompt tuning which is a central theme in prompt-based learning and modeling. The semi-parametric approach, utilizing a memory bank to retrieve memory prompts based on discrete prompts, is a novel contribution to the field of prompt engineering, and the extensive experiments conducted across various tasks and domains underscore its potential impact on the efficiency and generalization of large language models. The reason why the rating is not a full 10 is that the prompt engineering relevance is specific to semi-parametric methods, and it does not address the entire spectrum of prompt engineering techniques, such as hard prefix prompts." -cup: curriculum learning based prompt tuning for implicit event argument extraction,gpt-4-1106-preview,7,"The abstract describes a method for enhancing a machine learning model's ability to perform implicit event argument extraction—'Curriculum learning based Prompt tuning (CUP).' This approach is relevant to prompt engineering because it involves adapting prompt templates over different stages of learning to better utilize pre-trained language models. Although the paper does not exclusively focus on 'hard prefix prompts,' which the prompt engineering study may specifically be interested in, it talks about prompt-based models and their tuning, which is closely related to the domain of prompt engineering. Therefore, the relevance to prompt engineering is significant, although not perfectly aligned with the prompt engineering area targeting hard prefixes." -zero-label prompt selection,gpt-4-1106-preview,9,"The abstract describes a method named Zero-Label Prompt Selection (ZPS) that evidently pertains to the field of prompt engineering as it directly involves the selection and use of prompts for natural language models without the need for labeled data. Despite not explicitly mentioning 'hard prefix prompts', it addresses a critical component of prompt engineering, which is prompt performance in zero or few-shot settings. The relevance to prompt engineering is high because it contributes to the understanding of how to effectively utilize prompts to improve model performance under constrained conditions." 
-clip-tuning: towards derivative-free prompt learning with a mixture of rewards,gpt-4-1106-preview,8,"The paper describes an innovative approach to prompt learning that is highly relevant to the field of prompt engineering. Derivative-free prompt learning is a part of prompt engineering, and the technique of using 'thinned' networks to create a mixture of rewards is a novel contribution to optimizing prompts. While the paper focuses specifically on Clip-Tuning and derivative-free methods as opposed to a broader systematic review of hard prefix prompts, it still provides valuable insights and advancements in the area of prompt engineering. Therefore, the rating is high for relevance but not the maximum score since it doesn't cover the entire scope of 'hard prefix prompts'." -uom&mmu at tsar-2022 shared task: prompt learning for lexical simplification,gpt-4-1106-preview,8,"The paper describes an approach for using prompts in a language model to achieve lexical simplification. It directly relates to prompt engineering since it involves fine-tuning language models with a specifically designed prompt template. The method described is an example of how prompt engineering can be used to improve the performance of language tasks in different settings (zero-shot, fine-tuned, and multilingual). This is closely aligned with the study of prompt engineering, although it is focused on one particular application (lexical simplification) rather than hard prefix prompts in a broader sense." -bidirectional language models are also few-shot learners,gpt-4-1106-preview,8,"The abstract discusses the concept of prompt-based learning in the realm of bidirectional language models, which is a central component of prompt engineering. It presents a novel technique (SAP) for prompting bidirectional models, which is highly relevant to the study of how to effectively design and use prompts to elicit desired responses from such models. While it doesn't directly address 'hard prefix prompts,' the subject of designing prompts and demonstrating their utility across different models (bidirectional and unidirectional) is pertinent to the broader field of studies into prompt engineering. The work's implications for the adaptability and performance of language models when prompted make it significantly relevant, though not perfectly aligned since the prompt primarily focuses on 'hard prefix prompts.'" -language models in the loop: incorporating prompting into weak supervision,gpt-4-1106-preview,9,"The document describes a methodology deeply tied to the application of prompt engineering, where large language models are prompted with multiple queries to generate labeled data for a classifier in a weak supervision context. This is highly relevant to prompt engineering studies as it directly involves developing and refining methods for eliciting structured responses from language models through prompts. The only reason why the rating is not a perfect 10 is the study's specific focus on weak supervision, which might not cover all aspects of prompt engineering, such as constructing prompts for different kinds of language tasks beyond weak supervision." -prompting as probing: using language models for knowledge base construction,gpt-4-1106-preview,8,"The study described in the abstract details the use of various prompting techniques with GPT-3 to perform Knowledge Base Construction, an advanced application of prompt engineering. 
The multi-step approach to optimizing prompts, including manual prompt curation and the use of true/false questions, directly relates to the field of prompt engineering. Although it does not specifically mention 'hard prefix prompts,' the overarching use of prompts to elicit specific information from a language model is highly relevant. Therefore, the paper is quite pertinent to the study of prompt engineering, but since 'hard prefix prompts' are not exclusively the focus, the rating is not a perfect 10." -what does clip know about a red circle? visual prompt engineering for vlms,gpt-4-1106-preview,9,"The abstract describes a study on prompt engineering within the domain of Vision-Language Models, such as CLIP, specifically focusing on the use of visual cues (a red circle) to direct the model's attention. Although the study is about visual prompt engineering rather than traditional text-based prompts ('hard prefix prompts'), it is still highly relevant to the broader field of prompt engineering as it explores how different types of prompts can influence model behavior and performance on various tasks. The rating is not a perfect 10 because it does not directly address 'hard prefix prompts' in text but instead a visual method, which may not be precisely what is meant by 'prompt engineering' in the original query context." -healthprompt: a zero-shot learning paradigm for clinical natural language processing,gpt-4-1106-preview,8,"The abstract outlines a research study that is highly relevant to prompt engineering study. It describes the development of a new prompt-based learning framework specifically for clinical NLP tasks, which is an example of applying prompt engineering to a specialized domain (healthcare). The fact that this framework operates in a zero-shot learning context enhances its relevance, as it illustrates the potential of prompt engineering in scenarios where annotated datasets are scarce or non-existent. However, while the study does focus on prompt-based learning, which is a subset of prompt engineering, it does not explicitly mention 'hard prefix prompts' as the prompt type being investigated. Consequently, the rating is not a full 10, as it might not cover the comprehensive systematic review aspect explicitly focused on hard prefixes." -an automatically discovered chain-of-thought prompt generalizes to novel models and datasets,gpt-4-1106-preview,9,"The abstract discusses a study focused on the effectiveness of chain-of-thought (CoT) reasoning prompts across different language models and datasets, which is highly relevant to prompt engineering. The exploration of how previously devised prompts can be applied and generalized to new model generations provides valuable insights for prompt engineering research. The study investigates the impact of prompts on the performance of language models, which is central to the field of prompt engineering. However, the abstract doesn't specifically mention 'hard prefix prompts,' which might slightly reduce the relevance considering the precise topic in the initial prompt." -pbnr: prompt-based news recommender system,gpt-4-1106-preview,8,"The paper describes the 'prompt-based news recommendation' (PBNR) system which closely relates to prompt engineering as it involves designing personalized prompts to interact with a pre-trained language model (T5) for the specific task of news recommendation. This system is an example of applying prompt engineering to adapt language models for a specific application. 
However, the relevance is not a full 10 because the paper seems more focused on the application of prompt engineering in the context of news recommendation, rather than on the study of the hard prefix prompts or the systematic review of the methodology itself." -promptcap: prompt-guided image captioning for vqa with gpt-3,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering as it introduces 'PromptCap', a model that utilizes natural-language prompts to guide the image captioning process which in turn enhances the performance of visual question answering (VQA) with a language model like GPT-3. The method directly involves engineering prompts to control the content of image captions, ensuring they contain the necessary details for LMs to answer questions. This is a specific application of prompt engineering in the context of integrating textual prompts with image understanding for improved knowledge-based task performance. The paper's focus on synthesizing prompts for effective LM use aligns closely with the study of prompt engineering." -visual clues: bridging vision and language foundations for image paragraph captioning,gpt-4-1106-preview,7,"The study relates to prompt engineering in that it discusses the creation of structured textual prompts, termed 'visual clues,' from an image using a vision model, and then using these prompts to generate image captions with a language model. Although the research does not focus on 'hard prefix prompts' per se, it is relevant to the broader field of prompt engineering, considering it involves the construction and utilization of prompts to facilitate communication between vision and language models. Therefore, it offers insights into one aspect of the prompt engineering area - namely, how to effectively generate prompts for a specific cross-modal task." -response generation with context-aware prompt learning,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering as it focuses on a novel approach that treats dialogue generation as a prompt-learning task. The methodology of learning continuous prompt embeddings customized for dialogue contexts aligns closely with prompt engineering, as it involves designing prompts that can effectively interact with pre-trained language models to produce desired responses. Despite the paper not explicitly mentioning the term 'hard prefix prompts', it is implicit in the context of prompt embeddings. The reduction of two points is because it doesn't directly address the systematic review aspect of hard prefix prompts but is still very much within the realm of prompt engineering for dialogue systems." -few-shot self-rationalization with natural language prompts,gpt-4-1106-preview,8,"The presented study explores natural language prompts extensively in the context of self-rationalization models, which is a form of prompt engineering where the model is prompted to not only provide a decision but also to generate explanations for its decisions. Even though the study does not exclusively focus on 'hard prefix prompts', it is relevant to the broader topic of engineering prompts in such a way that enables models to perform complex tasks with minimal training data. The focus on few-shot learning and the use of prompts to improve plausibility ratings also contribute to the field of prompt engineering. However, the rating is not a full 10 as the specific term 'hard prefix prompts' is not directly addressed." 
-controllable generation from pre-trained language models via inverse prompting,gpt-4-1106-preview,9,"The abstract presents a direct application of prompt engineering by proposing a novel technique called inverse prompting to improve controllability in text generation from pre-trained language models. The concept of predicting the prompt from generated text during beam search for better alignment between the two is a clear attempt at enhancing the prompt engineering field. The study seems highly relevant to prompt engineering, especially in creating more efficient and controlled generation of texts. The rating is not a full 10 simply because the abstract does not mention 'hard prefix prompts' specifically, which was outlined in the original inquiry regarding a 'systematic review on hard prefix prompts'. However, inverse prompting is still clearly within the domain of prompt engineering." -progressive prompts: continual learning for language models,gpt-4-1106-preview,9,"The provided abstract directly addresses the development of a new method within the field of prompt engineering referred to as 'Progressive Prompts.' This approach is relevant because it is a specific technique aimed at improving the capabilities of language models by facilitating continual learning. Since prompt engineering involves the design and utilization of prompts to effectively interact with language models, a study on Progressive Prompts is highly pertinent to the field. The relevance is not rated as a full 10 only because the prompt specifically asks about 'hard prefix prompts,' while this method pertains to soft prompts learned for each task, and it's not clear whether hard prompts are considered or compared in the approach." -boosting natural language generation from instructions with meta-learning,gpt-4-1106-preview,7,"The abstract describes a study focused on improving natural language generation using meta-learning strategies, specifically in a multi-task instructional learning setting. While the study does not directly address 'hard prefix prompts,' it does explore how language models can better extract and utilize information from instructions, which is a critical aspect of prompt engineering. Enhancing the generalization of language models to perform unseen tasks based on instructions is relevant to prompt engineering as it addresses the challenge of designing prompts that can guide models to perform specific NLP tasks effectively. The application of meta-learning to MTIL is an innovative approach within the broader field of prompt engineering, thus earning a relevance rating of 7 out of 10." -strategic reasoning with language models,gpt-4-1106-preview,9,"The abstract highlights the use of 'systematically generated prompts' in conjunction with large language models to facilitate strategic reasoning, which is highly relevant to prompt engineering. The study's exploration of how prompts can guide AI to generalize to new tasks with little or no additional training intersects with the core concepts of creating effective prompts that drive AI performance. The slight deduction from a perfect score is due to the specific context of strategic games, which may not cover all aspects of prompt engineering, but the principles discussed are broadly applicable." -respectful or toxic? 
using zero-shot learning with language models to detect hate speech,gpt-4-1106-preview,8,"The paper focuses on prompt-based methods for hate speech detection, which falls under the broader category of prompt engineering within the field of natural language processing. Prompting is a core technique used in this study and is relevant to the understanding and development of effective prompt strategies in the context of language model applications. Although the paper's primary concern isn't about 'hard prefix prompts' specifically, it still contributes to the knowledge base regarding how prompts can be engineered to enhance zero-shot learning capabilities in AI models, which is pertinent to the study of prompt engineering." -meta-tuning language models to answer prompts better,gpt-4-1106-preview,9,"The abstract discusses a method called 'meta-tuning' for improving the ability of large pretrained language models to answer prompts, which is directly related to prompt engineering. The relevance is high because the study aims to specialize and generalize language models to better understand and respond to prompts, which is a core aspect of prompt engineering. The only reason it doesn't score a perfect 10 is because the abstract doesn't directly address 'hard prefix prompts', but the concept can likely be applied to various types of prompts including hard prefixes." -"a sign language recognition system with pepper, lightweight-transformer, and llm",gpt-4-1106-preview,7,"The abstract indicates that prompt engineering was used as part of the process to enable the Pepper Robot to generate natural Co-Speech Gesture responses. While the focus of the study is on sign language recognition and robot interaction, the mention of tailoring interactions through prompt engineering shows relevance to the prompt engineering field. However, the study does not appear to be a comprehensive systematic review on hard prefix prompts specifically but instead applies prompt engineering within the scope of robot interaction and sign language processing. Therefore, the rating is a 7 out of 10, acknowledging the connection without it being the central theme of the research." -question decomposition improves the faithfulness of model-generated reasoning,gpt-4-1106-preview,7,"The study discusses a method of improving the quality and faithfulness of responses from large language models by decomposing questions into subquestions, which is related to prompt engineering. The utilization of specific prompting strategies to elicit more explainable and verifiable outputs from the models is a part of prompt engineering. Although the focus is more on question decomposition and the faithfulness of the reasoning process rather than on 'hard prefix prompts' specifically, the principles and findings can still have implications for prompt engineering practices in general, hence the relatively high relevance score." -improving gender fairness of pre-trained language models without catastrophic forgetting,gpt-4-1106-preview,8,"The study described in the abstract is highly relevant to prompt engineering because it develops a method called GEEP (GEnder Equality Prompt) to improve the performance of pre-trained language models. GEEP specifically involves learning gender-related prompts, which makes it a direct application of prompt engineering in addressing the issue of gender bias in AI models. 
Although the study is not a comprehensive systematic review on hard prefix prompts and is more focused on gender fairness, the concept of 'hard prefix prompts' as a key component of 'prompt engineering' makes this study quite relevant to the broader field of prompt engineering." -few-shot instruction prompts for pretrained language models to detect social biases,gpt-4-1106-preview,8,"The study involves the construction of few-shot instruction-based prompts for pretrained language models, which is highly relevant to the field of prompt engineering. It examines how effectively these prompts can guide language models in detecting social biases in text, which is a specific application of prompt engineering. Although it does not directly mention 'hard prefix prompts,' the methodology of using instructional prompts to achieve a task with a language model fits under the broader umbrella of prompt engineering. The relevance is rated an 8 instead of a 10 because the focus is more on detecting social biases rather than on the systematic review of prompting techniques themselves." -evaluating the instruction-following robustness of large language models to prompt injection,gpt-4-1106-preview,9,"The study directly examines the interaction between large language models and prompts, specifically investigating the challenge of adversarial instruction injection. This is highly relevant to prompt engineering as it deals with understanding and improving the robustness of LLMs in discerning and responding to prompts. The focus on how models discern and follow instructions is a critical aspect of prompt engineering, especially when considering the creation of prompts that intend to guide the model towards producing specific outcomes or behaviors without succumbing to manipulation." -(ab)using images and sounds for indirect instruction injection in multi-modal llms,gpt-4-1106-preview,8,"The provided title and abstract are relevant to prompt engineering study as they discuss a method of manipulating the output of multi-modal LLMs (Large Language Models) through indirect prompt and instruction injection via images and sounds, which can be considered a form of prompt engineering. Although the focus is on adversarial perturbations and security, understanding this process is crucial for developing effective prompts, especially in the context of preventing misuse. It highlights the importance of prompt design in multi-modal systems and contributes to the broader field of prompt engineering by exploring potential vulnerabilities and manipulative techniques." -chat-rec: towards interactive and explainable llms-augmented recommender system,gpt-4-1106-preview,7,"The relevance of the provided study to prompt engineering is moderately high, with a rating of 7 out of 10. The study focuses on a method for augmenting recommender systems with large language models by converting user data into prompts, which falls within the scope of prompt engineering. Prompt design plays a crucial role in enabling the Chat-Rec system to function by guiding the language model to generate relevant and personalized recommendations. While the study does not specifically target 'hard prefix prompts,' it does explore a practical application of prompts within an interactive system and contributes to the body of knowledge on how to effectively leverage LLMs through prompt engineering. 
However, if the focus were specifically on a 'systematic review on hard prefix prompts,' the rating might be lower as this study presents an application rather than a review on hard prefix prompts." -dialogue for prompting: a policy-gradient-based discrete prompt optimization for few-shot learning,gpt-4-1106-preview,9,"The study described focuses on prompt-based optimization for few-shot learning in the context of pre-trained language models, which is directly relevant to prompt engineering. The novel Dialogue-comprised Policy-gradient-based Discrete Prompt Optimization (DP2O) method aims to improve the efficiency, quality, and applicability of prompt-based methods in NLP tasks. The use of a reinforcement learning framework to optimize discrete prompts signifies a technical advancement in the field. The only reason it doesn't score a perfect 10 is that it doesn't address 'hard prefix prompts' specifically but discusses discrete prompt optimization in a broader sense." -promptagent: strategic planning with language models enables expert-level prompt optimization,gpt-4-1106-preview,9,"The article is highly relevant to prompt engineering as it discusses 'PromptAgent', an optimization method aimed at automating the generation of expert-level prompts, which is directly aligned with prompt engineering studies. It addresses the strategic planning problem within prompt optimization and demonstrates the system's effectiveness across various domains and tasks. The only reason it does not receive a 10 is that the specific focus on 'hard prefix prompts' is not explicitly stated, but the scope still remains within the general field of prompt engineering." -emotion-conditioned text generation through automatic prompt optimization,gpt-4-1106-preview,9,"The title and abstract discuss an automatic prompt optimization approach specifically for emotion-conditioned text generation, which is clearly within the domain of prompt engineering. The study focuses on refining prompts to improve the performance of instruction-fine-tuned models, which is at the core of prompt engineering studies. The relevance is not rated a perfect 10 as the study is narrowly focused on emotion-conditioned text generation and not prompt engineering in general. Overall, however, the relevance to prompt engineering is very high." -multiprompter: cooperative prompt optimization with multi-agent reinforcement learning,gpt-4-1106-preview,9,"The paper presents a new framework, MultiPrompter, that directly addresses the issue of prompt optimization, which is a core aspect of prompt engineering. It introduces a novel concept of using multi-agent reinforcement learning for cooperative prompt optimization. Such a technique is highly relevant for studies in prompt engineering, as it could lead to improvements in the generation of interpretable prompts and better interaction with foundation models. Although the paper is applied to the text-to-image task, the concepts and methodologies presented could be generalizable and thus highly relevant to the broader field of prompt engineering." -robust prompt optimization for large language models against distribution shifts,gpt-4-1106-preview,9,"The presented paper directly addresses a key issue in prompt engineering, namely the optimization of prompts for large language models, especially in the context of distribution shifts, which is a crucial aspect in the robustness of language models. 
Although the abstract does not specify the use of 'hard prefix prompts,' the focus on prompt optimization and generalization across different distributions indicates a close relevance to the broader field of prompt engineering. The proposed Generalized Prompt Optimization framework, which utilizes unlabeled data in optimization, is highly pertinent to advancing the study and application of prompt engineering." -query-dependent prompt evaluation and optimization with offline inverse rl,gpt-4-1106-preview,8,"The abstract indicates a study focused on enhancing arithmetic reasoning of LLMs (Large Language Models) specifically through prompt optimization, which is directly related to prompt engineering. The introduction of Prompt-OIRL as a method to evaluate query-prompt pairs and recommend optimal prompts without requiring live interaction with LLMs is notable for prompt engineering efficiency and effectiveness. It suggests a more nuanced approach to evaluating and optimizing prompts based on query dependency, which is an important aspect of prompt engineering. However, the study is not centered on 'hard prefix prompts' specifically but rather on a broader prompt optimization problem, which includes but is not limited to hard prefix prompts. Therefore, the rating is not a perfect 10." -visual-language prompt tuning with knowledge-guided context optimization,gpt-4-1106-preview,8,"The presented abstract directly addresses an aspect of prompt engineering, focusing on improving the generalization ability of learnable prompts in the context of a visual-language model. The introduction of Knowledge-guided Context Optimization (KgCoOp) pertains to the optimization of prompts, which is a fundamental component of prompt engineering. The relevance rating is not a full 10 because the study specifically targets visual-language models and may not cover other prompt engineering contexts, such as text-based models or hard prefix prompts more broadly." -copner: contrastive learning with prompt guiding for few-shot named entity recognition,gpt-4-1106-preview,9,"The study introduces the use of class-specific prompts for few-shot NER, employing these prompts as supervision signals and metric referents, which is highly relevant to prompt engineering. The methodology specifically addresses the optimization of token representations and inferencing strategies, which are central concerns in prompt engineering. The relevance score is not a full 10 because the study focuses on one specific application (NER) and it is not a systematic review on hard prefix prompts in general." -cpl: counterfactual prompt learning for vision and language models,gpt-4-1106-preview,8,"The paper discusses 'Counterfactual Prompt Learning (CPL)' for vision and language models, which is directly related to prompt tuning, a subset of prompt engineering. It introduces an innovative approach to optimize prompt learning and aims to improve generalization of learned representations for few-shot learning tasks. Although it does not specifically mention 'hard prefix prompts', it still contributes to the broader field of prompt engineering by advancing techniques for efficient and non-spurious prompt learning. This is highly relevant for the study of prompt engineering as it explores new methods and their impact on model performance. Therefore, the rating is high but not maximum, as the exact focus on 'hard prefix prompts' is not clear from the abstract." 
-prompt engineering for zero‐shot and few‐shot defect detection and classification using a visual‐language pretrained model,gpt-4-1106-preview,9,"The abstract indicates that the study focuses on the optimization of prompts, which is intrinsic to prompt engineering. It investigates how different types of prompts affect the performance of a VLP model, particularly for the task of defect detection and classification. The findings on domain-specific definitions, sentence structure, and modality of information are directly relevant to understanding how prompts can be engineered for better performance in zero-shot and few-shot learning tasks, which is a key component of prompt engineering. The only reason the rating is not a full 10 is that it doesn't discuss 'hard prefix prompts' specifically but prompt optimization in a broader sense within the context of VLP models." -understanding and mitigating overfitting in prompt tuning for vision-language models,gpt-4-1106-preview,8,"The abstract discusses the mitigation of overfitting in prompt tuning for vision-language models, which is highly relevant to prompt engineering studies. The focus on understanding and addressing overfitting issues during prompt tuning is pertinent as prompt engineering encompasses the design, optimization, and evaluation of prompts used to guide machine learning models. The abstract presents a direct application and improvement in the field of prompt engineering by proposing a new method (Subspace Prompt Tuning) to enhance the training process of models, making the study very relevant. However, it does not explicitly cover 'hard prefix prompts' which is specifically mentioned in the query, thus the rating is slightly reduced." -bbtv2: pure black-box optimization can be comparable to gradient descent for few-shot learning,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering as it presents an advanced technique (BBTv2) for optimizing the prompts used in language models, seeking to improve performance in few-shot learning tasks without relying on gradient-based methods. This research is directly related to how prompts can influence model performance and efficiency, which is a core aspect of prompt engineering. Although it does not specifically address 'hard prefix prompts' as mentioned in the initial study prompt, it deals with the continuous prompt tokens and optimizing them, which falls under the broader umbrella of prompt engineering. Therefore, the rating is not a full 10 but remains high due to the close relevance." -connecting large language models with evolutionary algorithms yields powerful prompt optimizers,gpt-4-1106-preview,9,"The paper directly relates to prompt engineering by introducing a framework (EvoPrompt) for optimizing prompts using evolutionary algorithms, which is a novel approach within the field of prompt engineering study. The use of both large language models and evolutionary algorithms specifically to improve the efficiency and effectiveness of prompt generation is extremely relevant to those researching how to develop better prompts for LLMs. The only reason it does not receive a full 10 is that, without access to the full text, it's not clear how much the paper focuses on 'hard prefix prompts' specifically, if at all, since it doesn't mention this specific term in the provided abstract or TLDR content." 
-pfedprompt: learning personalized prompt for vision-language models in federated learning,gpt-4-1106-preview,8,"The abstract describes a study on a method called pFedPrompt that focuses on personalizing prompts for pre-trained vision-language models in a federated learning context. It directly engages in prompt engineering by refining how the prompts adapt to user characteristics, attempting to improve performance and relevance of the model outputs. While it doesn’t address 'hard prefix prompts' directly, the study is highly relevant to prompt engineering as it talks about optimizing prompts, which is a core area of interest in prompt engineering studies. The methodological focus on personalization in a federated learning framework is an innovative contribution to the field."
-iterative prompt learning for unsupervised backlit image enhancement,gpt-4-1106-preview,8,"The abstract describes a study that focuses on the development of an unsupervised image enhancement method by using prompt learning within the CLIP framework. Although the primary application is not textual prompt engineering but rather improving visual quality in backlit images, the concept of iterative prompt learning is highly relevant to prompt engineering. The lifecycle of prompts, their optimization, and their iterative improvement are at the core of prompt engineering studies. This work can contribute to the understanding of prompt-based models and how they can be fine-tuned for specific tasks, which is valuable knowledge for the field of prompt engineering. Hence, the relevance rating is 8, acknowledging the connection to prompts and learning frameworks but also recognizing that the study doesn't focus on textual prompts or their direct use in text-based models."
-meta learning for domain agnostic soft prompt,gpt-4-1106-preview,8,"The abstract discusses a new approach to prompt-based learning, which is highly relevant to the field of prompt engineering as it focuses on optimizing soft prompts for domain-agnostic applications. The method aims to improve the generalizability of prompts which is a critical aspect in the study of prompt engineering. The relevance is not a full 10 because it specifically addresses soft prompts and unsupervised domain adaptation rather than hard prefixes or a comprehensive review of prompt engineering techniques."
-temporally-extended prompts optimization for sam in interactive medical image segmentation,gpt-4-1106-preview,7,"The study described in the abstract is somewhat relevant to prompt engineering as it involves optimizing the interaction between human experts and a machine learning model through the form of prompts (e.g., points, bounding boxes). However, the primary focus seems to be on the application of this technique to the medical image segmentation field rather than the theory or methodology of prompt engineering itself. The relevance is thus rated a 7, recognizing the contribution to the prompt engineering field in the specific context of medical image segmentation but also noting that it does not address broader prompt engineering topics."
-styleclip: text-driven manipulation of stylegan imagery,gpt-4-1106-preview,7,"The relevance to prompt engineering is substantial, as the study addresses a text-based interface which involves users providing text prompts that manipulate images generated by StyleGAN. This process inherently relies on prompt engineering to achieve meaningful image manipulations, effectively turning textual descriptions into stylistic changes in images. The use of CLIP models to understand and execute these prompt-induced manipulations highlights an important application of prompt engineering in the field of AI and image processing. However, the primary focus of the study is on the interface and leveraging CLIP for image manipulation rather than the detailed study of the prompt engineering itself, which slightly reduces the rating."
-null-text inversion for editing real images using guided diffusion models,gpt-4-1106-preview,7,"The paper presents an inversion technique and a method for text-based image editing using diffusion models, which involves prompt engineering concepts such as working with textual embeddings and guiding diffusion models using text. While the focus is on image editing rather than constructing or evaluating hard prefix prompts explicitly, the techniques developed could be relevant to prompt engineering by enabling more sophisticated control and manipulation of generated content based on text prompts. However, the study does not directly address hard prefix prompts in systematic review, thus the relevance is significant but not complete."
-clip-mesh: generating textured meshes from text using pretrained image-text models,gpt-4-1106-preview,8,"The given abstract presents a technique that utilizes a pre-trained CLIP model for the zero-shot generation of textured 3D models from text prompts, which aligns well with the field of 'prompt engineering' as it demonstrates a practical application of generating content from textual descriptions. The relevance is marked as an 8 because while it heavily leverages the engineering of prompts to create 3D models, the focus is on the product of the prompt (a 3D model) rather than on the study of prompt engineering itself. It does not address the systematic review aspect of hard prefix prompts, but it is related to the domain of how text prompts can guide AI to produce desired outputs."
-what changes can large-scale language models bring? intensive study on hyperclova: billions-scale korean generative pretrained transformers,gpt-4-1106-preview,8,"The abstract indicates extensive exploration of prompt-based learning within the context of a non-English large-scale language model, HyperCLOVA, and discusses the integration of prompt optimization into the prompt engineering pipeline. This is highly relevant to prompt engineering, but not specifically centered on 'hard prefix prompts'. However, it does address prompt engineering more broadly and introduces an interactive prompt engineering interface, suggesting considerable coverage of the topic. Some points were deducted as the abstract does not focus precisely on 'hard prefix prompts', but instead on a wider range of prompt engineering aspects."
-directed diffusion: direct control of object placement through attention guidance,gpt-4-1106-preview,7,"The study described in the abstract engages with the concept of hard prompt engineering by introducing methods for providing 'direction' to the model's output, specifically in terms of spatial object placement. This work falls under the study of prompt engineering to the extent that it addresses a fine-grained aspect of the control mechanism one might use in a prompt to guide the output of a generative model. However, the focus is somewhat tangential to hard prefix prompts specifically, as the emphasis seems to be on the manipulation of cross-attention maps rather than the construction of text prompt prefixes. The rating is not a perfect 10 because the abstract does not directly reference hard prefix prompts or their systematic review; rather, it offers a novel contribution that could be considered in the broader field of prompt engineering within generative AI."
-clip-actor: text-driven recommendation and stylization for animating human meshes,gpt-4-1106-preview,7,"The relevance of the described paper 'clip-actor: text-driven recommendation and stylization for animating human meshes' to prompt engineering study is moderately high. While the main focus is on animating 3D human meshes using text prompts, the fact that it leverages natural language prompts to drive the animation process indicates an overlap with prompt engineering research. The system's ability to interpret and respond to natural language inputs demonstrates a practical application of prompt engineering in the field of computer graphics and animation. However, the study is not explicitly centered on the systematic review or theoretical examination of hard prefix prompts in the broader context of prompt engineering, which slightly limits its full relevance to the specific subject of a comprehensive systematic review on hard prefix prompts."
-promptboosting: black-box text classification with ten forward passes,gpt-4-1106-preview,9,"The abstract discusses PromptBoosting, an approach to text classification that effectively uses prompts to train a classifier without needing access to the underlying language model's internal workings, which is highly relevant to prompt engineering. The method involves creating a set of prompts and using an ensemble learning algorithm to improve classification performance. This process aligns closely with prompt engineering by proposing a novel way to interface with and manipulate language models using prompts, thereby making it highly pertinent to studies in prompt engineering. The paper does not specifically focus on 'hard prefix prompts' as stated in the potentially narrower research interest of the initial inquiry but still provides significant insights into the general area of prompt-based methods."
-reward collapse in aligning large language models,gpt-4-1106-preview,8,"The paper discusses an important aspect of prompt-based training in large language models, specifically how prompt-related information is incorporated into the training process. This is highly relevant to prompt engineering because it deals with the effectiveness of prompts and the responses generated by language models. The concept of 'reward collapse' is directly related to the outcomes of different prompts, and thus to the study of prompt engineering. The paper proposes a solution to make rewards prompt-dependent, which is a significant concern in prompt engineering. While it does not directly address 'hard prefix prompts', the study's implications for the design of prompts and training methods are closely related to prompt engineering."
-speechprompt: an exploration of prompt tuning on generative spoken language model for speech processing tasks,gpt-4-1106-preview,8,"The provided document is highly relevant to prompt engineering as it discusses prompt tuning, which is a key aspect of prompt engineering. Although the focus is on speech processing tasks rather than hard prefix prompts in textual contexts, the principles of prompt tuning and leveraging pre-trained models with minimal additional parameter training are central to the concept of prompting in both speech and text applications. The exploration of this technique's effects on efficiency and performance in speech models contributes useful insights to the broader field of prompt engineering. The rating is not a full 10 as the study specifics are tailored towards speech models, thereby making it somewhat less directly applicable to prompt engineering studies focused exclusively on text-based models."
-late prompt tuning: a late prompt could be better than many prompts,gpt-4-1106-preview,9,"The provided abstract is highly relevant to prompt engineering study as it discusses prompt tuning—a specific area within prompt engineering. It introduces 'Late Prompt Tuning' as a method to improve efficiency and performance of prompt tuning, which is directly related to the concerns of prompt engineering. The only reason why it is not rated a perfect 10 is that the abstract does not explicitly mention 'hard prefix prompts,' but rather focuses on an improved methodology of soft prompt tuning. Nevertheless, understanding the prompt tuning aspect, even if it is soft prompt related, is essential for comprehensive knowledge in the overall field of prompt engineering."
-kipt: knowledge-injected prompt tuning for event detection,gpt-4-1106-preview,9,"The described study directly relates to prompt engineering by discussing Knowledge-injected Prompt Tuning (KiPT) for event detection, which is a technique to enhance the performance of prompt-based models by injecting external knowledge. It is highly relevant to the field of prompt engineering, as it proposes a specific way to refine prompts (a core component of prompt engineering) to increase precision. This is applicable to the broader study of prompt engineering, particularly in the context of few-shot learning tasks and the integration of external knowledge bases into the prompting process."
-making pre-trained language models end-to-end few-shot learners with contrastive prompt tuning,gpt-4-1106-preview,9,"The paper presents a framework related to improving the efficiency of PLMs in low-resource scenarios through a method known as Contrastive Prompt Tuning. It tackles the challenge of creating task-specific prompts and verbalizers without manual engineering, which is highly relevant to the field of prompt engineering. The mention of 'task-invariant continuous prompt encoding' and 'fully trainable prompt parameters' directly relates to engineering prompts to improve few-shot learning capabilities of language models. Therefore, the study is highly pertinent to prompt engineering, especially considering its focus on end-to-end and contrastive learning approaches for enhancing language model performance. The only reason it is not rated a full 10 is that it doesn't explicitly mention 'hard prefix prompts,' which the original study inquiry specified, but it covers the overarching theme of prompt engineering sufficiently."
-lpt: long-tailed prompt tuning for image classification,gpt-4-1106-preview,7,"The paper introduces an approach for adapting pretrained models to long-tailed classification problems using prompts. This is relevant to prompt engineering since LPT (Long-tailed Prompt Tuning) involves creating and tuning prompts as a method of model adaptation, which falls under the broader category of prompt engineering strategies. The systematic review sought is broader and looks for hard prefix prompts, which might imply a specific subset of prompt engineering. Nonetheless, as LPT involves modifying prompt mechanisms for a specific end, it shares concepts with the overall field of prompt engineering. The rating is not a full 10 because the described method does not directly focus on the general study of prompt engineering or the particular 'hard prefix prompts' but rather a specialized application of prompt tuning in image classification."
-multi-prompt alignment for multi-source unsupervised domain adaptation,gpt-4-1106-preview,8,"The abstract describes the use of prompts in the context of unsupervised domain adaptation, introducing a new framework called Multi-Prompt Alignment (MPA). This is directly related to prompt engineering as it involves training and aligning prompts to minimize domain gaps. Although the focus here is more on domain adaptation rather than the study of 'hard prefix prompts' in isolation, the application of prompt learning techniques makes it relevant to the field of prompt engineering. The rating is not a full 10 because the abstract does not directly address a comprehensive systematic review on hard prefix prompts per se, but rather introduces a novel application of prompt engineering in UDA."
-eliciting knowledge from pretrained language models for prototypical prompt verbalizer,gpt-4-1106-preview,9,"The paper describes an approach that directly pertains to prompt engineering by discussing the elicitation of knowledge from pretrained models and the optimization of said models for prompt-tuning. The concept of a prototypical prompt verbalizer and the use of contrastive learning are specific methodologies within the broader field of prompt engineering, thus highly relevant. The rating isn't a perfect 10 as the abstract provided is missing, and therefore the review may not cover all aspects of 'hard prefix prompts' specifically mentioned in the initial term."
-fine-grained retrieval prompt tuning,gpt-4-1106-preview,7,"The paper titled 'Fine-grained Retrieval Prompt Tuning' is relevant to prompt engineering as it introduces a method (FRPT) involving prompts to steer a pre-trained model's behavior without fine-tuning the entire model. This is in line with the concept of prompt engineering wherein strategic prompts are used to harness a model's capabilities for specific tasks. Although the paper deals with a specialized domain of fine-grained object retrieval and is more focused on the retrieval aspect rather than prompt engineering in a broad sense, the principles and methods it introduces are applicable to the study of prompt engineering, especially in how prompts can be used to adapt a model's output without extensive retraining. The rating is not a full 10 because the paper appears to be narrowly focused on a specific instance of prompt use, rather than a comprehensive systematic review on hard prefix prompts as potentially indicated by the phrase 'prompt engineering study.'"
-exploring low-dimensional intrinsic task subspace via prompt tuning,gpt-4-1106-preview,8,"The abstract and TLDR provided pertain to the study of prompt tuning within pre-trained language models (PLMs), and they discuss how adjustments to these models for various tasks can be achieved by optimizing a small set of parameters within a low-dimensional subspace. This suggests a strong relevance to prompt engineering, as it directly explores methodologies for tuning prompts to improve task adaptability of language models. The only reason the rating is not a full 10 is that, while highly relevant, the study seems to focus on a specific aspect of prompt engineering rather than a comprehensive review of hard prefix prompts in general."
-exploring universal intrinsic task subspace via prompt tuning,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering as it investigates the adaptability of pre-trained language models to different NLP tasks by optimizing a small number of parameters. It directly examines prompt tuning, which is a crucial aspect of prompt engineering, and explores the concept of an intrinsic task subspace that could significantly impact how PLMs are fine-tuned for various tasks. Although the focus is on intrinsic prompt tuning (IPT) rather than hard prefix prompts specifically, the findings are broadly applicable to the field of prompt engineering."
-how to design the perfect prompt: a linguistic approach to prompt design in automotive voice assistants – an exploratory study,gpt-4-1106-preview,8,"The provided title and abstract are highly relevant to the broad field of prompt engineering, especially in the context of voice user interfaces (VUIs). The exploratory study focuses on the linguistic aspects of prompt design, which covers syntactical, lexical, and grammatical elements that are fundamental to the construction of effective prompts within the automotive industry's voice assistants. Although the study is specific to a particular application (automotive VUIs) and language (German), the methodology and findings regarding the impact of language parameters on user perception can offer significant insights for prompt engineering in general. The rating falls short of a perfect score because the study's scope is restricted to a single language and use case, which may or may not be directly applicable to hard prefix prompts specifically mentioned in the original query."
-improving chatgpt prompt for code generation,gpt-4-1106-preview,9,"The abstract provided details an empirical study on how prompt design, particularly in the use of ChatGPT for code generation tasks, affects performance. This is highly relevant to prompt engineering, as it outlines a method of prompt optimization (leveraging the chain-of-thought strategy) and discusses the impact of different prompts on the efficacy of an AI model. It does not focus specifically on 'hard prefix prompts,' as might be suggested by the original query on 'prompt engineering study,' but it does deal with the broader area of prompt engineering, warranting a high relevance rating."
-dynamic prompting: a unified framework for prompt tuning,gpt-4-1106-preview,9,"The paper in the title focuses on the topic of prompt tuning, specifically the effectiveness of dynamic prompts versus fixed soft prompts. It directly addresses optimizing prompt position and how it affects performance in extracting knowledge from various pretrained models. The 'hard prefix prompts' mentioned in the request for a systematic review relates to the broader field of prompt engineering and tuning, and while the paper appears to discuss a more advanced approach (dynamic prompts), it is highly relevant to the study of prompts in general, including hard prefixes. The abstract provided offers insights and tangible outcomes of prompt tuning research, thus the relevance rating is high. However, it is not exclusively focused on 'hard prefix prompts' but considers prompt tuning more broadly, hence the rating is not a perfect 10."
-exploring sparse visual prompt for domain adaptive dense prediction,gpt-4-1106-preview,8,"The provided abstract is highly relevant to prompt engineering study because it discusses an advanced application of prompts—Sparse Visual Domain Prompts (SVDP)—in the context of Test-Time Adaptation (TTA) for domain adaptive dense prediction tasks. It examines the role of prompts in addressing domain shift challenges and introduces methods for optimal prompt placement and updating on a per-sample basis. Although the abstract focuses specifically on visual domain prompts, which may be a more specialized area within the broader field of prompt engineering, the concepts of domain-specific knowledge extraction and efficient adaptation to target domains through prompts are essential to the study of prompt engineering. Therefore, the relevance is rated highly but not at the maximum because it is specific to the visual domain and dense prediction tasks rather than general prompt engineering."
-stylediffusion: prompt-embedding inversion for text-based editing,gpt-4-1106-preview,7,"The given abstract is moderately relevant to prompt engineering study. It discusses a method for text-based editing of images using pretrained diffusion models, which involves prompt-editing. The relevance is substantial because working with prompts is integral to guiding AI models in generating or editing content. The paper proposes improvements for image editing using text prompts, which is related to prompt engineering in the way that it attempts to refine how prompts influence the AI's output. However, the focus seems to be more on image editing and attention regularization rather than hard prefix prompts, which would be the core topic in a prompt engineering study. Hence, the relevance is not complete, but the approach to handle and edit prompts for better results is pertinent to the field."
-efficient transfer learning for visual tasks via continuous optimization of prompts,gpt-4-1106-preview,8,"The title suggests that the study involves optimizing prompts for transfer learning in visual tasks, indicating a focus on prompt engineering as it applies to machine learning and possibly to neural networks that process visual data. Although details are lacking in the abstract and TLDR, the title implies relevance to prompt engineering, particularly in the context of improving the efficiency of transfer learning through some form of prompt optimization. The rating is not a full 10 due to the lack of information provided in the other fields, which could have either strengthened or weakened the relevance."
-a simple zero-shot prompt weighting technique to improve prompt ensembling in text-image models,gpt-4-1106-preview,8,"The abstract presents a study that is directly related to prompt engineering, focusing on automated scoring and ensembling of prompts to improve the accuracy of zero-shot text-image models. Although the study does not specifically mention 'hard prefix prompts', it does address the broader topic of prompt engineering and optimization, which is highly relevant. The only reason it does not receive a full 10 is the absence of a direct discussion about 'hard prefix prompts', which might be considered more specialized within the domain of prompt engineering."
-drpt: disentangled and recurrent prompt tuning for compositional zero-shot learning,gpt-4-1106-preview,8,"The provided abstract describes research on prompt tuning, specifically a novel framework called DRPT, in the context of Compositional Zero-shot Learning (CZSL). Its relevance to prompt engineering is high, given that it addresses the optimization of prompts through the use of disentangled and recurrent tuning strategies. While the study might not focus exclusively on 'hard prefix prompts' as mentioned in the initial prompt, the described techniques are directly related to enhancing the efficacy of prompts in interacting with vision-language models (VLMs). Therefore, the content is substantially pertinent to the broader field of prompt engineering."
-reprompt: automatic prompt editing to refine ai-generative art towards precise expressions,gpt-4-1106-preview,9,"The abstract pertains directly to the field of prompt engineering, specifically concerning the refinement of AI-generated images based on textual prompts. The introduction of RePrompt, an automatic method for editing prompts to achieve precise emotional expressiveness in AI-generated images, represents a focused study within prompt engineering. This is highly relevant since it deals with optimizing text prompts, albeit in the context of generative art rather than 'hard prefix prompts' used for textual outputs or structured data queries. The reason it's not a 10 is the study's specific angle on emotional expressiveness, which may not encompass the entirety of prompt engineering studies, such as technical or informational aspects."
-prompt engineering for text-based generative art,gpt-4-1106-preview,8,"The paper is significantly relevant to prompt engineering study as it explores prompt modifiers in the context of text-based generative art, which is a direct application of prompt engineering techniques. The identification of a taxonomy of prompt modifiers aids in understanding how prompts can be engineered or modified for specific outcomes in creative AI applications. Although the study is not exclusively on 'hard prefix prompts', it does provide valuable insights into the broader field of prompt engineering, which is inclusive of various types of prompts including hard prefixes. The conclusion mentioning further research opportunities suggests its utility in expanding the knowledge base of prompt engineering. The rating is not a full 10 because the study is specific to the domain of text-based generative art and does not focus solely on hard prefix prompts, which may be a subset of the broader topic of prompt modifiers."
-prompting ai art: an investigation into the creative skill of prompt engineering,gpt-4-1106-preview,9,"The provided abstract directly pertains to the study of prompt engineering, focusing on understanding the skillset necessary for effective text-to-image generation, which is indeed a form of prompt engineering. The research explores participants' abilities to assess, write, and improve prompts, which is highly relevant to the study of prompt engineering as a creative process. The conclusion that prompt engineering requires expertise and practice is a significant insight into the field. The only reason the full score is not given is that the abstract does not specifically address 'hard prefix prompts' which was mentioned in the initial query, indicating it may not cover all possible facets of prompt engineering."
-grimm in wonderland: prompt engineering with midjourney to illustrate fairytales,gpt-4-1106-preview,8,"The given abstract describes a study that is highly relevant to prompt engineering, as it focuses on refining text inputs to achieve better outcomes in text-to-image generation, specifically for the purpose of illustrating popular fairytales. The investigation into a methodical process for converting pre-existing text into image prompts aligns with the essence of prompt engineering. However, the study's relevance is slightly limited as it emphasizes action research within the context of fairytales' illustration rather than a broad analysis of the hard prefix prompts aspect in the general field of prompt engineering."
-prompt engineering in medical education,gpt-4-1106-preview,8,"The abstract discusses the importance of prompt engineering within the context of medical education using generative language models (GLMs). It highlights the necessity of properly formulated instructions (or prompts) to maximize the utility of GLMs like ChatGPT, Perplexity AI, and Google Bard. The relevance is high because it directly addresses how prompt crafting affects the performance of GLMs in delivering personalized learning and feedback, which is core to prompt engineering studies. However, it is not a perfect 10 as it does not focus solely on the systematic review of 'hard prefix prompts' but rather on prompt engineering in a broader sense within the specific domain of medical education."
-"multi-party goal tracking with llms: comparing pre-training, fine-tuning, and prompt engineering",gpt-4-1106-preview,9,"The study involves a direct comparison of different adaptation methods for language models, including prompt engineering, to handle a complex task such as multi-party goal-tracking and intent-slot recognition in conversations. The relevance to prompt engineering is high as the paper specifically evaluates and discusses the efficacy of prompt engineering techniques and compares it to other methodologies such as fine-tuning and pre-training in the context of understanding user goals in multi-party conversations. The high performance of prompt engineering in the few-shot setting demonstrates its significance in the study of language model capabilities and applications."
-real estate insights unleashing the potential of chatgpt in property valuation reports: the “red book” compliance chain-of-thought (cot) prompt engineering,gpt-4-1106-preview,9,"The article specifically addresses prompt engineering within the context of property valuation and compliance with industry standards, namely the 'Red Book'. It discusses the direct application and importance of crafted prompts for instructing large language models to generate specific, accurate results that comply with professional property valuation standards. Even though it does not focus on 'hard prefix prompts' in a general sense, its contribution to prompt engineering for practical, domain-specific use cases is highly relevant. The deduction of one point is due to the lack of a TLDR and no explicit mention of 'hard prefix prompts', which would have given a precise summary and tied the relevance more directly to the topic."
-improving formality-sensitive machine translation using data-centric approaches and prompt engineering,gpt-4-1106-preview,8,"The paper appears to be highly relevant to prompt engineering as it explicitly mentions the use of 'empirically-grounded prompt engineering' as a part of its methodology to improve machine translation relative to a baseline. Prompt engineering is used here in conjunction with a data-centric approach to specifically address the challenge of formal language variations in translation, indicating a direct application of prompt engineering for enhancing model performance. The rating is not a full 10 since the focus is not solely on prompt engineering, but also includes language-specific data-driven approaches."
-artificial intelligence prompt engineering as a new digital competence: analysis of generative ai technologies such as chatgpt,gpt-4-1106-preview,9,"The provided abstract for the article, 'artificial intelligence prompt engineering as a new digital competence: analysis of generative ai technologies such as chatgpt,' is highly relevant to the field of prompt engineering. It discusses creating a theoretical framework for AI prompt engineering, analyzing best practices through extensive literature review, and introducing the AI PROMPT framework, which is directly related to the study of prompt engineering. It only falls short of a perfect score because the abstract does not mention 'hard prefix prompts' specifically, which was the core subject of the initial statement. However, the general discussion on AI prompt engineering strategies and their implications in various sectors makes it significantly relevant to the topic at hand."
-cases of efl secondary students' prompt engineering pathways to complete a writing task with chatgpt,gpt-4-1106-preview,9,"The paper presents an empirical study about how EFL secondary students engineer prompts for a chatbot, specifically ChatGPT, in the context of completing a writing task. It explores the strategies students use and the trial-and-error process they undergo, which is central to understanding the practical applications and educational needs for prompt engineering. The study is highly relevant to the subject of prompt engineering as it shows the significance of this skill in educational settings and provides direct insight into the ways in which non-technical users interact with language models. The reason for not giving a full score of 10 is that it does not cover the theoretical or systematic review aspect of prompt engineering, but focuses specifically on the practical application and user experience."
-enhancing automated program repair through fine-tuning and prompt engineering,gpt-4-1106-preview,8,"This abstract discusses a study where language models such as PLBART and CodeT5 are fine-tuned with datasets that contain code review and code changes to improve automated program repair. The relevance to prompt engineering comes from the part of the study that focused on utilizing zero-shot and few-shot learning-based prompt engineering with advanced code generative models like Codex and GPT-3.5-Turbo to assess their performance. Although the primary focus of the study appears to be automated program repair through fine-tuning of language models with specific datasets, the inclusion of prompt engineering as a method to enhance model performance gives it substantial relevance to the topic of prompt engineering. It does not directly address 'hard prefix prompts' as specified in the original inquiry, but it does deal with the employment of prompts in the context of language models, which is why the relevance is rated slightly lower."
-"supporting self-directed learning and self-assessment using teachergaia, a generative ai chatbot application: learning approaches and prompt engineering",gpt-4-1106-preview,8,"The abstract indicates that the study involves leveraging prompt engineering to guide the interactions of an AI chatbot, named TeacherGAIA, to support self-directed learning and self-assessment. It specifically contrasts the engineered prompts with the default behavior of a chatbot like ChatGPT, suggesting a focus on how prompts can be tailored to achieve specific educational objectives. While the study is not exclusively focused on 'hard prefix prompts', it clearly involves a significant component of prompt engineering. The rating is not a full 10 because the abstract does not explicitly mention a 'systematic review' or a focus on 'hard prefix prompts', which are key aspects of the complete prompt stated in the requirement."
-ncu-iisr: prompt engineering on gpt-4 to stove biological problems in bioasq 11b phase b,gpt-4-1106-preview,9,"The abstract indicates a high relevance to prompt engineering study as it describes a system that focuses on the application of prompt engineering strategies using GPT-4. The system's design for addressing biomedical questions implies substantial engagement with the crafting of prompts to interact with a language model effectively. The paper details experimental steps on prompt engineering, compares methodologies, and notes performance improvements due to optimized prompts. This offers considerable insight into how prompt engineering can be applied to enhance the utility of language models in a specific domain. The point deduction from a perfect score is due to the absence of details about 'hard prefix prompts', which may or may not have been a part of their strategies, as it is not explicitly stated."
-"optimizing mobile-edge ai-generated everything (aigx) services by prompt engineering: fundamental, framework, and case study",gpt-4-1106-preview,9,"The title and abstract indicate that the study is highly relevant to prompt engineering as it directly discusses optimizing services through prompt engineering methods. The study reviews the evolution from AI-Generated Content (AIGC) to AI-Generated Everything (AIGX), and presents a framework that uses prompt engineering to enhance the performance of AI services on edge devices. It also includes a case study on training a prompt optimizer, which is directly related to employing prompt engineering techniques. The only reason the rating is not a full 10 is that the study focuses on a specific application (mobile-edge services) rather than prompt engineering in the broadest sense, which could include other domains and use-cases."
-prompt engineering as an important emerging skill for medical professionals: tutorial,gpt-4-1106-preview,8,"The title and abstract provided describe a paper that is significantly relevant to the field of prompt engineering. It specifically discusses the application of prompt engineering in the context of medical professionals, thereby addressing a niche yet important aspect of prompt engineering. The relevance is not a full 10 because the focus is narrowed to the medical field, and the study is a tutorial rather than a comprehensive systematic review on 'hard prefix prompts'. Therefore, while it is highly relevant to prompt engineering, it does not fully address the broader aspect of the engineering study as requested in the initial prompt."
-exploring the intersection of large language models and agent-based modeling via prompt engineering,gpt-4-1106-preview,9,"The title and abstract are highly relevant to prompt engineering as they describe research that directly utilizes large language models through prompt engineering to simulate human behavior. By exploring two specific simulations (a negotiation and a murder mystery game), the study emphasizes the application of prompt engineering in creating believable scenarios, which aligns closely with the prompt engineering discipline. One point is deducted because the abstract does not explicitly mention 'hard prefix prompts,' which was specified in your original request; however, it does focus on the broader context of prompt engineering within large language models."
-the prompt engineering librarian,gpt-4-1106-preview,7,"The abstract discusses the role of librarians in the emerging field of prompt engineering, which is directly related to the study of prompt engineering as a discipline. It also covers the concept of optimizing prompts for artificial intelligence models, which is a fundamental aspect of prompt engineering. However, it focuses more on the potential professional development for librarians rather than a systematic review of hard prefix prompts specifically, which is why the rating is not a full 10."
-contextual stance classification using prompt engineering,gpt-4-1106-preview,9,"The paper is highly relevant to prompt engineering as it directly addresses the use of natural language prompts in the domain of few-shot learning. Furthermore, it relates to the creation of prompts based on existing conversation threads, which is a specific application of prompt engineering. The focus on how these prompts can potentially replace supervised methods while maintaining accuracy and reducing development costs further emphasizes the practical significance of prompt engineering in machine learning tasks such as contextual stance classification. The rating is not a full 10 because the abstract does not explicitly mention 'hard prefix prompts' which was a specific aspect mentioned in the initial query."
-promptmagician: interactive prompt engineering for text-to-image creation,gpt-4-1106-preview,8,"The described research directly addresses prompt engineering within the context of text-to-image generation. It focuses on helping users effectively generate prompts that produce the desired image outcomes, which is a core aspect of prompt engineering. The relevance rating is not a full 10 because the study does not specifically discuss 'hard prefix prompts' as mentioned in your query; rather, it deals with prompt engineering in a broader sense. However, the system it introduces, PromptMagician, is very relevant as it is a direct application of prompt engineering principles to improve user interaction with generative models."
-logprompt: prompt engineering towards zero-shot and interpretable log analysis,gpt-4-1106-preview,8,"The abstract describes a novel approach to log analysis using zero-shot learning through the employment of large language models (LLMs) with advanced prompt strategies, which is highly relevant to the field of prompt engineering. The significant performance improvements and the use of no training data underscore the utility of prompt engineering techniques in practical applications. However, the paper seems to be focused more on the application of prompt engineering within the specific domain of log analysis rather than a broad study of hard prefix prompts or a general evaluation of various prompt engineering strategies across different domains."
-"the artificially intelligent entrepreneur: chatgpt, prompt engineering, and entrepreneurial rhetoric creation",gpt-4-1106-preview,8,"The title suggests that the study focuses on the use of chatbot technology, specifically ChatGPT, in the context of prompt engineering. It implies an analysis of how entrepreneurial rhetoric can be generated through prompt engineering techniques, which is closely related to the study of how prompts are used to steer the performance of AI models like ChatGPT. Although the 'hard prefix prompts' are not explicitly mentioned, the title indicates a strong relevance to the field of prompt engineering in general."
-a survey on segment anything model (sam): vision foundation model meets prompt engineering,gpt-4-1106-preview,7,"While the title suggests the primary focus of the study is on the Segment Anything Model (SAM), the abstract indicates a secondary aspect that touches upon the versatility of SAM when combined with various models, including some that involve prompt engineering (e.g., ChatGPT). Although prompt engineering is not the central theme of the study, the impact of the work on prompt engineering is tangential and relevant as it involves the integration of SAM with models that may require or benefit from prompt engineering techniques. Therefore, the relevance to prompt engineering is moderate to high."
-plain template insertion: korean-prompt-based engineering for few-shot learners,gpt-4-1106-preview,8,"The abstract indicates that the study is highly relevant to prompt engineering as it focuses on the application of prompt-based few-shot learning to Korean-language datasets, and it specifically mentions the introduction of a plain template insertion method. The fact that it addresses few-shot learning, data scarcity, and the adaptability of prompts to language-specific contexts means that it offers valuable insights into the field of prompt engineering. However, it does not explicitly address 'hard prefix prompts' as mentioned in the original query, which is why the rating is not a full 10."
-polyglot prompt: multilingual multitask prompt training,gpt-4-1106-preview,9,"The paper is highly relevant to prompt engineering as it explores the concept of 'Polyglot Prompting', a framework specifically designed for prompt-based learning across multiple languages and tasks. Prompt engineering is central to the approach of creating a unified semantic space within a multilingual context. Additionally, the paper's comprehensive evaluation and the development of an interpretable multilingual evaluation methodology further contribute to the field of prompt engineering by providing insights and tools that can be used to gauge the effectiveness of different prompting methods in a multilingual setting."
-"chatgpt prompt patterns for improving code quality, refactoring, requirements elicitation, and software design",gpt-4-1106-preview,9,"The paper outlines a set of patterns for prompt designing, explicitly targeting the automation of software engineering tasks through large language models (LLMs) like ChatGPT. The relevance to prompt engineering is high because it directly discusses prompt design techniques for specific professional tasks and contributes a catalog of patterns that can enhance the effectiveness of LLMs in software engineering contexts. The reason for not giving a full 10 is that the paper does not solely focus on the general concept of 'hard prefix prompts' but rather on broader prompt patterns for software engineering activities."
-retrieval-based prompt selection for code-related few-shot learning,gpt-4-1106-preview,8,"The provided abstract is highly relevant to prompt engineering as it discusses a technique centered around the creation of effective prompts, specifically for code-related few-shot learning tasks. The approach, Cedar, leverages retrieval-based methods to choose appropriate code demonstrations to accompany the task prompt, which is a direct application of prompt engineering principles. The results indicating the technique's effectiveness and its comparison with state-of-the-art models further underscore its relevance to the field. The deduction of two points is due to the lack of direct mention of 'hard prefix prompts', as the abstract focuses more broadly on prompt creation rather than the specific systematic review mentioned in the initial prompt."
-"a study on prompt design, advantages and limitations of chatgpt for deep learning program repair",gpt-4-1106-preview,8,"The study directly relates to prompt engineering by investigating how ChatGPT's performance in deep learning program repair can be enhanced through tailored prompts. It explores ChatGPT's debugging capabilities and proposes prompt templates, which are central to prompt engineering. Additionally, the study addresses the effectiveness of dialogue in facilitating program repair, which is a novel aspect of prompt design. The rating is not a perfect 10 because the focus is more on program repair rather than exclusively on prompt engineering. However, prompt design is a significant component of this research, making it highly relevant to the field of prompt engineering."
-ip-adapter: text compatible image prompt adapter for text-to-image diffusion models,gpt-4-1106-preview,7,"The paper describes IP-Adapter, an adapter for text-to-image diffusion models to incorporate image prompts along with text prompts. Although not focused on 'hard prefix prompts' specifically within text prompt engineering, it tackles the broader area of prompt engineering by enhancing the interface between human input and AI models to improve the generation of images. It is relevant to the field as it addresses the complexity of prompt engineering and offers a solution that enhances multimodal interactions, thus providing insights into how prompt systems could be improved. However, the paper's main focus is on the technical implementation of the adapter and the decoupled cross-attention mechanism for image prompts, so it is not entirely centered on the systematic review or standard text-based prompt engineering."
-prompt space optimizing few-shot reasoning success with large language models,gpt-4-1106-preview,9,"The title and abstract indicate that the study is highly relevant to prompt engineering with a particular focus on optimizing prompt strategies for large language models in few-shot reasoning contexts. The introduction of 'Prompt Space' and its theoretical foundation based on text embeddings and matrix decomposition aligns closely with the field of prompt engineering. The claimed improvements over state-of-the-art methods further validate the study's pertinence to the topic. The only reason it is not a perfect 10 is that the study does not appear to narrowly focus on 'hard prefix prompts', but rather on prompt engineering as a whole, which may include a broader range of techniques beyond just hard prefix prompts."
-bim-gpt: a prompt-based virtual assistant framework for bim information retrieval,gpt-4-1106-preview,7,"The abstract presents a study focused on utilizing prompt-based virtual assistant technologies for information retrieval in the construction industry, which is tangentially relevant to prompt engineering. While the primary application is specific to building information models (BIM), the fact that it involves engineering prompt systems (in this case, for integration with GPT models) to interpret natural language makes it partially relevant to the study of prompt engineering. The rating is not higher because the study is not solely focused on the systematic review of hard prefix prompts or prompt engineering specifically but rather on an application of those principles within a specific domain."
-api entity and relation joint extraction from text via dynamic prompt-tuned language model,gpt-4-1106-preview,7,"The paper discusses the use of a dynamic prompt-tuned language model for the task of API entity and relation extraction, which is a form of prompt engineering applied to software engineering tasks. Although the main focus is on API extraction rather than the prompt engineering itself, the use of dynamic prompts is a relevant application of prompt engineering techniques. Hence, the relevance to prompt engineering study is significant, but not entirely central to the work, as prompt engineering seems to be a part of the method rather than the sole focus."
-performance of chatgpt on the us fundamentals of engineering exam: comprehensive assessment of proficiency and potential implications for professional environmental engineering practice,gpt-4-1106-preview,7,"The study focuses on the use of ChatGPT in the context of an engineering certification exam, which is highly relevant to the engineering field. It examines the role of AI in educational settings, specifically related to professional environmental engineering practice. However, the study is narrowly tailored to the Environmental sector of the FE exam and does not directly address 'prompt engineering' as a systematic study across various disciplines or in a broad context. Prompt engineering usually refers to how prompts are structured to elicit the best response from an AI model, and while the abstract mentions 'noninvasive prompt modifications', it does not seem to be the central focus of the study. Therefore, the rating is a 7, indicating substantial but not complete relevance to prompt engineering study."
-exploring the effects of the design prompt on students’ design cognition,gpt-4-1106-preview,8,"The abstract discusses the influence of design prompts on students' design cognition, which is highly relevant to prompt engineering in the context of educational research. It examines the hypothesis that the task provided (the design prompt) impacts the student's design process and experience. While the concept of 'hard prefix prompts' is not specifically mentioned, the study of how prompts affect design cognition is closely related to exploring how different types of prompts (potentially including hard prefixes) can shape the design process. Therefore, the relevance to prompt engineering study is high, but not maximal due to the absence of a specific focus on 'hard prefix prompts'."
-symbolic knowledge distillation: from general language models to commonsense models,gpt-4-1106-preview,9,"The abstract provided discusses the use of prompt engineering as a central technique in the process of Symbolic Knowledge Distillation. The careful construction of prompts and the use of a critic model to refine the results from a general language model like GPT-3 directly relate to the field of prompt engineering. It demonstrates the effectiveness of well-engineered prompts in training more specialized commonsense models. Although the abstract does not focus exclusively on 'hard prefix prompts,' the relevance of the work to the broader field of prompt engineering is substantial, meriting a high rating."
-"chat2vis: generating data visualizations via natural language using chatgpt, codex and gpt-3 large language models",gpt-4-1106-preview,9,"The paper discusses a novel system, Chat2VIS, which relies heavily on effective prompt engineering to guide large language models (LLMs) like ChatGPT and GPT-3 to generate data visualizations from natural language text. Although the focus is more on the application side of using LLMs for data visualization, the process inevitably involves the study and construction of prompts that can accurately convey user queries to these models, despite potential misspecification or under-specification. This reliance on specialized prompt design for improving the reliability and accuracy of LLM outputs suggests a significant overlap with the topic of prompt engineering. The rating is not a full 10 because the abstract does not indicate if the study explicitly covers theoretical aspects of hard prefix prompts or a systematic review of such."
-"chatgpt evaluation on sentence level relations: a focus on temporal, causal, and discourse relations",gpt-4-1106-preview,7,"The abstract provided is relevant to prompt engineering to a significant extent as it describes the evaluation of an AI language model, specifically ChatGPT, using different prompt templates such as zero-shot, zero-shot PE (prompt engineering), and ICL (in-context learning). These templates are inherently connected to the study of prompt engineering as they directly impact the performance and accuracy of the model on various tasks related to inter-sentential relations. Although the abstract does not directly address 'hard prefix prompts', the use of different prompt templates including the PE template aligns with the broader field of prompt engineering. The systematic approach taken in evaluating these templates relates to the systematic review aspect of a 'comprehensive systematic review on hard prefix prompts.' However, given that the focus is on sentence-level relations rather than hard prefix prompts explicitly, it does not fully align with the prompt, hence the rating is not a full 10."
-cutting down on prompts and parameters: simple few-shot learning with language models,gpt-4-1106-preview,8,"The abstract discusses how fine-tuning language models in a few-shot setting can reduce the need for prompt engineering, indirectly addressing the challenges associated with hard prefix prompts by proposing an alternative solution. Although the study targets the broader concept of prompt engineering, its findings offer valuable insights into the specific area of hard prompting, demonstrating ways to optimize the process. The lower rating reflects that while the study is relevant, it is not exclusively focused on hard prefix prompts."
-fake it till you make it: learning transferable representations from synthetic imagenet clones,gpt-4-1106-preview,7,"The abstract describes a study where the researchers explore using class-agnostic prompt engineering to generate ImageNet clones with Stable Diffusion, suggesting a focus on prompt engineering to enhance synthetic image training for image classification models. While the focus on 'hard prefix prompts' isn't explicitly mentioned, the paper still significantly revolves around the concept of prompt engineering and its effects on machine learning model outcomes. Thus, the study is quite relevant to the broader field of prompt engineering, albeit in the context of image generation, rather than text-based applications."
-text-guided synthesis of artistic images with retrieval-augmented diffusion models,gpt-4-1106-preview,7,"The abstract describes a method where 'prompt-engineering' is used to achieve a certain visual style in synthesized images, which is relevant to the study of how prompts are engineered to guide AI models. However, the focus on 'retrieval-augmented diffusion models' which use external databases for conditioning, offers an alternative to crafting hard prefix prompts. The relevance is rated a 7 as it deals with prompt engineering indirectly by presenting an alternative method to achieve specific outcomes in generative tasks. The study emphasizes the conditioning of models post training rather than the design of the prompts themselves."
-bigbio: a framework for data-centric biomedical natural language processing,gpt-4-1106-preview,8,"The text discusses the creation of BigBIO, a library that contains numerous biomedical NLP datasets, supporting meta-dataset curation. Its compatibility with current platforms for prompt engineering makes it highly relevant for studies focused on prompting, though the abstract does not specifically address 'hard prefix prompts'. Therefore, its relevance to the broader subject of prompt engineering is high, but it may not directly address the specificity of hard prefix prompts, thus the rating is not a full 10."
-repair is nearly generation: multilingual program repair with llms,gpt-4-1106-preview,7,"The abstract describes a research study on RING, a multilingual repair engine that uses a large language model for code repair tasks, which relies on prompts to guide the repair process. Although the study focuses on automated program repair, the use of a prompt-based strategy to assist in the repairing process is aligned with prompt engineering concepts. This suggests that the study contributes to the understanding of how prompts can be engineered to interact with AI models, specifically in the context of code repair. However, it doesn't specifically target 'hard prefix prompts' in prompt engineering, nor does it seem to focus on the systematic review of such prompts. Therefore, the relevance rating is not a perfect 10, but still substantial given the use of prompt-based strategies in the context of AI-powered code repair."
-prompting is all your need: automated android bug replay with large language models,gpt-4-1106-preview,9,"The abstract describes the use of prompt engineering to automatically reproduce bugs from bug reports using a methodology called AdbGPT. This directly involves prompt engineering as a crucial component for leveraging Large Language Models (LLMs) to understand and process bug reports, enabling automated bug replay. The relevance to prompt engineering is high, as it is a key part of the proposed system for understanding and acting on natural language inputs, which demonstrates an advanced application of prompt engineering in software maintenance. The reason the rating is not a perfect 10 is because the focus is on the application of prompt engineering in a specific context (automated android bug replay) rather than a general study or comprehensive review of hard prefix prompts within the broader scope of engineering studies."
-qaner: prompting question answering models for few-shot named entity recognition,gpt-4-1106-preview,9,"The abstract discusses the development of a new method for prompt-based learning in the context of Named Entity Recognition (NER), which is directly related to the field of prompt engineering. The research is aimed at refining prompt strategies, generating prompts, and tuning QA models with prompts, addressing various challenges in prompt-based methods. This is highly relevant to the study of prompt engineering, especially in its application to NER tasks. The reason for not giving a full 10 is because the abstract does not explicitly mention 'hard prefix prompts,' suggesting that the study might not cover that specific aspect of prompt engineering."
-prompting the hidden talent of web-scale speech models for zero-shot task generalization,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering as it focuses on adapting a web-scale speech model, Whisper, to perform zero-shot tasks by using specialized prompt engineering techniques. The paper demonstrates significant performance improvements on new tasks by designing task-specific prompts, which directly pertains to the field and thereby scores a high relevance rating. It only falls short of a perfect score because it is not a comprehensive systematic review, but rather an experimental study illustrating practical applications of prompt engineering."
-the creativity of text-based generative art,gpt-4-1106-preview,8,"The abstract indicates that the paper focuses on 'text-based generative art' and discusses the role of human creativity in the context of prompt engineering, which is directly related to prompt engineering study. It references Rhodes’s conceptual model of creativity, which could provide insight into the design and evaluation of prompts. The critique of product-centered creativity views hints at a theoretical exploration relevant to understanding how prompts are engineered and used in practice. Although the paper does not seem to be exclusively about 'hard prefix prompts' in prompt engineering, it appears to address the broader context and implications of prompt use and creativity in text-based generative systems. Thus, the relevance to prompt engineering study is high, but it is not a perfect match since it does not focus solely on 'hard prefix prompts'."
-no token left behind: explainability-aided image classification and generation,gpt-4-1106-preview,8,"The paper abstract indicates that the research addresses issues related to the instability in zero-shot learning when using models like CLIP, which is related to how input prompts are constructed and used (prompt engineering). The study proposes an explainability-based approach to ensure that the model considers all relevant semantic parts of the input, likely including how the prompts are designed and their tokens. This is highly relevant to prompt engineering, although the study focuses more broadly on zero-shot learning and explainability, not solely on prompt engineering. Thus, the relevance rating is high, but not maximum."
-automatically generating cs learning materials with large language models,gpt-4-1106-preview,7,"The content of the provided abstract is relevant to prompt engineering in that it discusses the application of Large Language Models (LLMs) in generating code and educational content based on natural language prompts. Although it does not specifically mention 'hard prefix prompts', it is related to the broader subject of how prompts can be utilized to facilitate computer science learning and to the design of prompts for effective interaction with models like GPT-3 and Codex. The abstract also touches upon the implications of LLM integration in pedagogy, which could include discussions on the crafting of prompts for educational purposes. Therefore, while it is not a direct study on prompt engineering, it is certainly relevant to the field, especially in the context of their application in education."
-language-aware soft prompting for vision & language foundation models,gpt-4-1106-preview,8,"The shared abstract and summary are highly relevant to prompt engineering, specifically in the context of Vision & Language (V&L) models, indicating a study of prompt design and their application to model training. Although the study focuses on 'soft' prompts and not 'hard' prompts as mentioned in the initial query, it significantly engages with prompt engineering concepts by discussing the creation and adjustment of prompts. It researches how prompts can be optimized and regularized to improve model performance and addresses an important aspect of prompt engineering: the resistance to overfitting and the ability to generalize to unseen classes. Therefore, it contributes to the overall understanding and methodology of prompt engineering even if it does not directly address 'hard prefix prompts'."
-chatgpt4pcg competition: character-like level generation for science birds,gpt-4-1106-preview,8,"The paper's focus on a competition that centers on creating prompts for ChatGPT to generate specific game levels is highly relevant to the field of prompt engineering. Although it doesn't address 'hard prefix prompts' specifically, it contributes to the understanding and application of prompt engineering in procedural content generation. This relevance is somewhat niche as it applies to a gaming context, yet the principles and methods used can offer valuable insights into prompt engineering best practices and strategies."
-will it blend? mixing training paradigms & prompting for argument quality prediction,gpt-4-1106-preview,9,"The paper is highly relevant to prompt engineering as it specifically describes the use of prompt engineering with GPT-3 for the task of Argument Quality Prediction. The focus on mixing training paradigms and the experimentation to determine the best setup for predicting different aspects of argument quality are central to the study of how different prompts can influence the output of large language models. The relevance is not a full 10 only because the paper also delves into training paradigms along with prompt engineering, which implies it does not solely concentrate on prompt engineering but rather on a combination of techniques."
-the infinite index: information retrieval on generative text-to-image models,gpt-4-1106-preview,9,"The abstract discusses the concept of 'prompt engineering' directly in the context of generative models like DALL-E and Stable Diffusion, which is highly relevant to the field of prompt engineering study. It addresses a unique challenge within prompt engineering—information retrieval based on prompts given to generative models, which is an advanced aspect of prompt engineering. The introduction of the 'infinite index' concept and the exploration of active learning for image retrieval are pertinent to the engineering of prompts and the optimization of results from generative models. The deduction of one point is due to the lack of explicit mention of 'hard prefix prompts,' which may or may not be part of the 'interactive text-based retrieval' system referenced. However, the content is still highly relevant for researchers and practitioners interested in the intricacies of prompt engineering for generative text-to-image models."
-exploring the benefits of visual prompting in differential privacy,gpt-4-1106-preview,7,"The relevance to prompt engineering is significant due to the mention of Visual Prompting (VP), which constitutes a form of prompt engineering applied to visual tasks. This technique aligns with the concept of prompt engineering in the machine learning context, which involves designing inputs that guide the model to perform specific tasks or improve its performance. Even though 'hard prefix prompts' are not explicitly mentioned, the study still falls within the broader scope of prompt engineering by exploring the modification and utilization of input prompts to enhance the performance of machine learning models with differential privacy. The incorporation of VP into DP training methods like PATE and the exploration of its benefits in neural network classifiers make it relevant to the study of prompt engineering. However, the specific exploration of 'hard prefix prompts' is not addressed, which led to a rating of 7 instead of 10."
-textgraphs-16 natural language premise selection task: zero-shot premise selection with prompting generative language models,gpt-4-1106-preview,9,"The paper seems to directly address the use of prompt engineering in the context of a natural language premise selection task, which is relevant to the study of prompt engineering effects on AI models' capabilities. It specifically assesses the performance of prompt engineering with GPT-3 in comparison to semantic similarity ranking with SBERT, and although it doesn't outperform SBERT when used alone, the combined approach yields better results. This indicates the paper significantly contributes to the understanding of prompt engineering's influence and utility in complex NLP tasks such as automated theorem proving, making it highly relevant to prompt engineering study."
-generating requirements elicitation interview scripts with large language models,gpt-4-1106-preview,9,"The referenced study focuses on the application of prompt engineering to the generation of requirements elicitation interview scripts using large language models.
It specifically discusses the use of prompt engineering techniques to generate various structured outputs, and even touches on refining prompts for better performance. This directly correlates with the study of prompt engineering as it involves optimizing and fine-tuning prompts to achieve specific outcomes with AI models. The reason for not giving a full 10 is that it's not exclusively about 'hard prefix prompts', but more broadly about prompt engineering applied within a specific context. However, it still holds high relevance to the overall field of prompt engineering." -an empirical evaluation of prompting strategies for large language models in zero-shot clinical natural language processing,gpt-4-1106-preview,9,"The described paper is highly relevant to prompt engineering as it conducts an empirical evaluation of prompting strategies for large language models specifically within the clinical NLP context. It assesses several prompt types like simple prefix, chain of thought, and introduces new types such as heuristic prompting and ensemble prompting, which are directly related to the study of prompt engineering. The only reason it doesn't receive a perfect score is that it is focused on the clinical domain and the prompt types are not limited to 'hard prefix prompts' as inquired in the original query." -an experimental investigation of analogy formation using the engineering-to-biology thesaurus,gpt-4-1106-preview,7,"The study focuses on the use of an Engineering-to-Biology thesaurus to facilitate analogy formation, which is a cognitive strategy closely related to the concept of 'hard prefix prompts'. Although it does not explicitly mention 'hard prefix prompts', the experimentation with keywords to generate ideas is akin to the process of using specific prompts to guide thought processes. However, its relevance is not a perfect match as it does not directly deal with the systematic review of hard prefix prompts or their use in studies; instead, it focuses on the application of a thesaurus in bioinspired design, which is just one aspect of prompt engineering." -an empirical study on few-shot knowledge probing for pretrained language models,gpt-4-1106-preview,8,"The study presents an empirical analysis of prompt-based knowledge probing with a focus on few-shot settings, which is highly relevant to the field of prompt engineering as it explores how models can be effectively used with limited data. Although it does not directly analyze 'hard prefix prompts,' the mention of optimizing prompts and a comparison of various approaches is pertinent to prompt engineering techniques and strategies. The findings related to finetuning bias vectors could contribute to the prompt engineering literature, especially since they claim to outperform existing methods." -generating disentangled arguments with prompts: a simple event extraction framework that works,gpt-4-1106-preview,9,"The presented study is highly relevant to prompt engineering as it introduces a prompt-based learning strategy to the domain of Event Extraction. The use of prompts to automate the exploitation of label semantics indicates a direct application of prompt engineering. The fact that this work sets new records for Argument and Trigger Extractions suggests that it advances the field significantly. While the paper does not focus on 'hard prefix prompts' specifically, its contribution to prompt-based methods in Event Extraction demonstrates its relevance to studies on prompt engineering." -how to prompt? 
opportunities and challenges of zero- and few-shot learning for human-ai interaction in creative applications of generative models,gpt-4-1106-preview,9,"The abstract provided outlines a study that delves into the usage, challenges, and potential advancements in the field of prompt engineering, specifically in the context of zero-shot and few-shot learning for creative applications with generative models. The focus on how end-users interact with AI through prompts and the subsequent proposal of design goals for user interfaces that support prompt-based interactions is highly relevant to prompt engineering. The study appears to be concerned with improving the effectiveness and intuitiveness of prompts, which is crucial to the field. Therefore, the relevance rating is high, albeit not maximum, as it might not cover the 'hard prefix prompts' as specified in the original prompt, but it still relates significantly to the broader subject of prompting in AI." -few-shot learning with multilingual generative language models,gpt-4-1106-preview,8,"The study appears to be highly relevant to prompt engineering as it includes an in-depth analysis of different multilingual prompting approaches and demonstrates the utility of templates and example demonstrations in achieving strong few-shot learning performance across languages. Although the abstract does not explicitly mention 'hard prefix prompts', the principle of engineering effective prompts to enhance model performance in few-shot learning scenarios is fundamentally related to prompt engineering. The rating is not a full 10 because the abstract does not directly address 'hard prefix prompts', but it ishigh due to the clear relevance of the study's focus on prompting techniques and few-shot learning." -tuning language models as training data generators for augmentation-enhanced few-shot learning,gpt-4-1106-preview,8,"The study deals with few-shot learning in pretrained language models (PLMs) leveraging prompts which is highly relevant to prompt engineering. It explores how to effectively utilize a limited amount of data to tune PLMs and then generate additional data to enhance performance on various language tasks. Even though the study does not specifically mention 'hard prefix prompts', it discusses training methodology that involves prompt formulation for modeling, which is a significant aspect of prompt engineering. For this reason, the work is very much related to prompt engineering but does not directly address the systematic review of 'hard prefix prompts', hence the rating of 8 instead of 10." -true few-shot learning with prompts—a real-world perspective,gpt-4-1106-preview,8,"This abstract describes an extensive study on Pet (Pattern-exploiting Training), which is a method that leverages prompt-based few-shot learning without relying on a development set for tuning. This research is highly relevant to prompt engineering because it evaluates the effectiveness of prompt-based approaches in few-shot learning scenarios. This can help understand how different prompting strategies can be designed and employed effectively in real-world settings. However, the study seems to focus specifically on Pet rather than a broader range of hard prefix prompts, hence the rating is not a full 10." 
-cins: comprehensive instruction for few-shot learning in task-oriented dialog systems,gpt-4-1106-preview,7,"The study is highly relevant to prompt engineering as it details an approach for leveraging pre-trained language models (PLMs) using task-specific instructions, which is a core aspect of prompt engineering. The 'CINS' system's specific focus on utilising instructions for few-shot learning in task-oriented dialog systems indicates relevance to the field. However, the paper might not center exclusively on hard prefix prompts or a systematic review of such prompts, thus not fully aligning with the potential scope implied by the term 'comprehensive systematic review on hard prefix prompts'. The rating reflects the significance of instructional design in prompting while acknowledging the potential mismatch in the specificity of the topic."
-story centaur: large language model few shot learning as a creative writing tool,gpt-4-1106-preview,7,"The study is relevant to prompt engineering to some extent, as it deals with the application of few shot learning with large language models, which is an aspect of prompt engineering. The design of the Story Centaur interface can imply the use of prompts to guide the language model in generating text based on the writer's input. However, the relevance is not full (i.e., not a 10) because the abstract does not specifically mention 'hard prefix prompts' or a systematic review of prompt engineering techniques. It is more focused on the end-user experience and tool creation for creative writing rather than the detailed study of prompt engineering methods."
-few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning,gpt-4-1106-preview,7,"The abstract discusses Parameter-efficient fine-tuning (PEFT), which includes prompt tuning, a technique directly relevant to prompt engineering. Prompt tuning is a method of adjusting a pre-trained model to understand and perform new tasks using prompt-based instructions. The PEFT, and specifically the novel (IA)$^3$ method mentioned, likely relate to how prompts can be engineered or optimized for better performance with fewer resources, making it relevant to the study of prompt engineering. However, the focus on the comparative benefits over in-context learning and the overarching goal to improve model efficiency and performance, while related, do not strictly fall within the typical exploration of hard prefix prompts, and therefore do not warrant a maximum relevance rating."
-exploring effectiveness of gpt-3 in grammatical error correction: a study on performance and controllability in prompt-based methods,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering as it investigates how prompt-based methods, a key aspect of prompt engineering, impact GPT-3's performance in Grammatical Error Correction tasks. It examines the effects of varying task instructions and examples, which are central to designing effective prompts. The focus on the controllability aspect of GPT-3 with different instructional prompts makes this study pertinent to understanding and enhancing the use of language models in prompt engineering."
-knowledge injected prompt based fine-tuning for multi-label few-shot icd coding,gpt-4-1106-preview,7,"The abstract presents a study that involves using prompt-based fine-tuning for a multi-label classification task, which is a relevant aspect of prompt engineering. However, the focus is more on the injection of domain-specific knowledge into the model and its application to ICD coding rather than a broad analysis of hard prefix prompts across various domains or a generalizable framework. The relevance is therefore significant but not entirely central to prompt engineering and lacks discussion on hard prefix prompts specifically."
-improved universal sentence embeddings with prompt-based contrastive learning and energy-based learning,gpt-4-1106-preview,8,"The abstract discusses 'PromCSE', a method which focuses on using a 'Soft Prompt', that is, a set of trainable vectors, in a prompt-based contrastive learning setting for sentence embeddings. This is related to prompt engineering, a domain that comprises methods to better integrate and tune prompts for effective use with pre-trained language models. Although the abstract does not explicitly mention 'hard prefix prompts', it addresses the topic of prompt-based learning and even touches on energy-based learning mechanisms. For these reasons, the abstract is highly relevant to the study of prompt engineering, but slightly less so specifically to 'hard prefix prompts'. Hence, the rating is 8 instead of 10."
-do we still need human assessors? prompt-based gpt-3 user simulation in conversational ai,gpt-4-1106-preview,8,"The study directly addresses a critical aspect of prompt engineering by exploring the generation of synthetic data through prompting a language model, which is a subset of the broader field. It assesses the viability of using prompted synthetic responses as a replacement for human-generated data, an inquiry that overlaps with prompt engineering since it evaluates the quality and utility of the prompts and the resulting data. The relevance to prompt engineering is high, although not perfect, because it does not focus on 'hard prefix prompts' specifically but rather on the general application of prompts for data generation in AI conversational models."
-towards open-vocabulary scene graph generation with prompt-based finetuning,gpt-4-1106-preview,8,"The abstract indicates the use of 'prompt-based techniques' for fine-tuning a pre-trained model in the context of scene graph generation (SGG). Although it does not explicitly mention 'hard prefix prompts,' it does involve the concept of prompt engineering as it leverages prompts to adapt the model to new tasks without updating parameters. This is directly related to studying different prompt engineering strategies, particularly in the open-vocabulary setting. Thus, the relevance to prompt engineering is high but not focused solely on the aspect of hard prefix prompts, hence the rating is not a full 10."
-zero-shot cross-lingual transfer of prompt-based tuning with a unified multilingual prompt,gpt-4-1106-preview,9,"The abstract describes research on prompt-based tuning for multilingual pretrained language models with a focus on a unified, language-agnostic prompt, which is highly relevant to the field of prompt engineering. It addresses the challenge of creating prompts that work across multiple languages and demonstrates significant performance improvements, which is a core aspect of engineering effective prompts. The only reason it does not receive a full score is because it does not address 'hard prefix prompts' specifically, but it is still very relevant to the broader topic of prompt engineering."
-promptcast: a new prompt-based learning paradigm for time series forecasting,gpt-4-1106-preview,8,"The paper's focus on 'prompt-based time series forecasting (PromptCast)' is highly relevant to the study of prompt engineering as it explores transforming numerical inputs and outputs into prompts, thus framing the forecasting task as a language model problem. This suggests innovative applications of prompt engineering techniques outside of traditional language tasks. The relevance is not a perfect 10 because the paper may not deal specifically with 'hard prefix prompts' and there is no explicit mention of a 'systematic review'. However, it still represents a significant piece of research within the broader field of prompt engineering."
-lego-absa: a prompt-based task assemblable unified generative framework for multi-task aspect-based sentiment analysis,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering as it discusses a generative framework that uses task prompts, which are akin to hard-coded prompts, to control the generation of outputs for different tasks in ABSA. The methodology directly relates to how prompts are engineered to produce specific responses from a generative model. Its approach to assemblable task prompts is a novel application within the area of prompt engineering, even if the focus is more on sentiment analysis rather than on hard prefix prompts specifically."
-prompt-based connective prediction method for fine-grained implicit discourse relation recognition,gpt-4-1106-preview,8,"The study introduces a Prompt-based Connective Prediction (PCP) method that is relevant to prompt engineering since it discusses instructing pre-trained models to utilize prompts for tasks in natural language processing. This is directly involved with prompt design and its implications on model performance. Although the main focus is on discourse analysis, the core concept of using prompts to guide model understanding and predictions is inherent to prompt engineering studies. Therefore, the relevance rating is high but not perfect due to the niche application within discourse relation recognition, rather than a broad study of prompt engineering techniques."
-prompt-based distribution alignment for domain generalization in text classification,gpt-4-1106-preview,8,"The abstract mentions 'prompt-based learning' or 'prompting' as a key method for improving text classification across different domains. Although the study focuses on domain generalization and distribution alignment, the technique of prompting described is indeed crucial within the understanding of prompt engineering. It speaks to the customization of prompts to align data distributions across domains which could be understood as an advanced topic in prompt engineering. The study, however, does not directly address 'hard prefix prompts' but explores the broader concept of prompting and its application for domain generalization in natural language processing tasks. The rating is therefore not a full 10, as it does not specifically focus on hard prefix prompts but is still highly relevant due to its broader application in task alignment which is a subset of prompt engineering."
-context variance evaluation of pretrained language models for prompt-based biomedical knowledge probing,gpt-4-1106-preview,9,"The abstract discusses advanced methods in prompt engineering, particularly in the context of biomedical knowledge probing. It details creating 'context variance' prompts, which directly relates to the development of prompt engineering techniques and introduces a new evaluation metric (UCM) for this purpose. These aspects are highly relevant to the study of prompt engineering as they contribute to the understanding and improvement of prompting methods for evaluating language models, though it doesn't explicitly mention 'hard prefix prompts,' hence the rating is not a perfect 10."
-zero-shot event detection based on ordered contrastive learning and prompt-based prediction,gpt-4-1106-preview,7,"The relevance to prompt engineering is significant since the abstract mentions the use of prompt-based prediction in a zero-shot natural language processing model. The study's methods directly involve prompt engineering by utilizing prompts to identify trigger words. However, prompt engineering is not the sole focus of the study, as it also involves ordered contrastive learning techniques. Therefore, while prompt engineering is relevant, it may not be the central theme of the research."
-prompt-based time series forecasting: a new task and dataset,gpt-4-1106-preview,7,"The paper introduces a novel approach to time series forecasting by leveraging prompt-based methods, which is within the realm of prompt engineering. This is relevant as it explores the adaptation of language models to tasks outside their initial scope (i.e., forecasting) using prompts. However, the study does not focus specifically on 'hard prefix prompts' but on transforming numerical time series forecasting problems into a language model-friendly format. Therefore, it is a contribution to the broader context of prompt engineering rather than a targeted study on the more specific 'hard prefix prompts'."
-prompt-based meta-learning for few-shot text classification,gpt-4-1106-preview,9,"The abstract discusses the application of prompt-tuning within a meta-learning framework for few-shot text classification, which is directly related to prompt engineering. As prompt-based systems are a critical study area within the broader scope of prompt engineering, this work's focus on a Prompt-Based Meta-Learning (PBML) model is highly relevant. It contributes to understanding how prompts can be effectively used in conjunction with meta-learning to enhance performance in low-data regimes. The paper offers insights into the practical application and theoretical underpinning of using prompts in machine learning, which is at the core of prompt engineering studies."
-ai illustrator: translating raw descriptions into images by prompt-based cross-modal generation,gpt-4-1106-preview,7,"The study explores a Prompt-based Cross-Modal Generation Framework (PCM-Frame), which is relevant to prompt engineering as it involves using prompts to bridge the semantic gap between text descriptions and image generation. While the field of prompt engineering often refers to optimizing input language for language models, the abstract suggests a broader scope where prompts assist in mapping text to image embeddings. This makes it pertinent to the study of how prompts can be engineered to improve cross-modal generation tasks. However, the paper's focus seems more on the application of prompt engineering in the context of AI illustration and image generation, rather than a comprehensive review of prompt engineering techniques or hard prefix prompts specifically. Hence, the rating is not a full 10."
-parabart: a prompt-based method with parabiotic decoder for few-shot named entity recognition,gpt-4-1106-preview,7,"The abstract describes a novel method, ParaBART, for improving few-shot named entity recognition (NER) by enhancing entity boundary detection with a specialized decoder. While it does not directly address 'hard prefix prompts' in the context of prompt engineering, the research does involve 'prompt-based methods' (as mentioned in line 001) in the application of NER. Prompt engineering is a broader field that includes the design and use of prompts to improve model performance in language tasks. Therefore, the relevance to prompt engineering study is significant, but not directly focused on addressing hard prefix prompts specifically, warranting a rating of 7."
-clamp: prompt-based contrastive learning for connecting language and animal pose,gpt-4-1106-preview,7,"The abstract discusses the use of prompt-based methods (in the context of CLAMP) to connect language models with animal pose estimation tasks, which is highly relevant to prompt engineering as it involves crafting prompts to facilitate an application of language understanding. The relevance is not a perfect 10 because the study focuses specifically on contrastive learning for animal pose estimation, rather than a broad systematic review of hard prefix prompts in general. Nevertheless, the adaptation and engineering of prompts for a specific task like this contributes to the understanding of how prompts can be effectively utilized in various domains, which is a pertinent aspect of prompt engineering research."
-promptattack: prompt-based attack for language models via gradient search,gpt-4-1106-preview,8,"The paper discusses 'Prompt Learning', a method directly related to prompt engineering, and addresses security vulnerabilities within this approach, a relevant aspect not often considered in standard prompt engineering studies. The focus is on constructing malicious prompts to reveal security issues, which is a valuable angle in prompt engineering research. Although the paper does not specifically mention a 'hard prefix prompt', it does delve into prompt-based methods and their implications, thus warranting a high relevance rating. However, the rating is not a full 10 because the paper's core topic is security rather than the effectiveness or optimization of prompt engineering itself."
-pts: a prompt-based teacher-student network for weakly supervised aspect detection,gpt-4-1106-preview,8,"The paper describes a method that utilizes prompts to enhance the performance of weakly supervised aspect detection by using a teacher-student network structure. This is directly relevant to the field of prompt engineering as it involves constructing and utilizing prompts to train language models more effectively, especially with limited labeled data. The use of hand-crafted and auto-generated prompts also indicates a deeper exploration into prompt methodologies, which is significant for prompt engineering studies. The primary reason why the rating is not a 10 is due to the specificity of the application to aspect detection and the paper's focus on a novel network architecture, which may slightly deviate from a 'comprehensive systematic review' of hard prefix prompts, thus not completely aligning with the broader aspect of the prompt engineering study."
-prompt-based metric learning for few-shot ner,gpt-4-1106-preview,8,"The abstract describes a method that uses multiple prompt schemas to enhance label semantics in the context of few-shot named entity recognition, which is relevant to prompt engineering as it involves the design of prompts to influence the model's performance. The proposed method indicates an improvement in metric learning for NER by incorporating prompt-based representations, aligning with the study of how different prompting techniques can affect machine learning tasks. However, it does not explicitly address 'hard prefix prompts,' which may be a more specialized area within the broader field of prompt engineering, hence the rating is not a full 10."
-prompt-based zero-shot video moment retrieval,gpt-4-1106-preview,8,"The abstract is highly relevant to prompt engineering as it directly involves the design and usage of prompts ('Proposal Prompt' and 'Verb Prompt') for a zero-shot learning task in video moment retrieval. Although the focus is on video and text, the principles of prompt learning and their application to a zero-shot context align well with studies in prompt engineering, particularly in the innovative use of 'hard prefixes' or structured prompts in neural language models. However, the rating is not a full 10 because it may not directly tackle the methodological aspects of prompt engineering or address a 'hard prefix prompt' in a broader sense but rather applies prompt concepts to a specialized domain."
-on the robustness of dialogue history representation in conversational question answering: a comprehensive study and a new prompt-based method,gpt-4-1106-preview,7,"The title and abstract suggest a study that investigates the robustness of dialogue history representation in Conversational Question Answering (CQA), which does not directly deal with 'prompt engineering' per se. However, the introduction of a 'prompt-based history modeling approach' signifies the study's partial relevance to prompt engineering, as it involves the strategic integration of prompts into the passage text to enhance model performance. The mention of 'textual prompts' indicates that part of the study is concerned with understanding how prompts can affect the outcome of a CQA task. Even though the study is not solely dedicated to 'hard prefix prompts' or prompt engineering in general, the development of a new prompt-based method implies that it could offer insightful data and practices relevant to prompt engineering research. The rating is not higher because the primary focus still seems to be on robustness and not explicitly on the engineering of prompts."
-promptda: label-guided data augmentation for prompt-based few shot learners,gpt-4-1106-preview,9,"The abstract describes a study on the use of a novel framework called PromptDA that focuses on data augmentation in the context of prompt-based few-shot learning for natural language understanding tasks. The study appears highly relevant to prompt engineering as it directly addresses the development of prompts for few-shot learners and investigates ways to improve their performance through specialized data augmentation that leverages label semantic information. This relates closely to the study of 'hard prefix prompts' as it pertains to the design and enhancement of prompt-based methods. The only reason the rating is not a 10 is it doesn't specify if 'hard prefix prompts' are specifically addressed, but it's clear that the work is valuable to the field of prompt engineering."
-nsp-bert: a prompt-based zero-shot learner through an original pre-training task-next sentence prediction,gpt-4-1106-preview,7,"The abstract indicates that the study introduces a novel method for utilizing BERT's Next Sentence Prediction (NSP) in zero-shot scenarios, which contrasts with the token-level methods most prompt-based learning approaches use. Seeing as prompt engineering is fundamentally about designing inputs and templates that effectively harness the capabilities of language models, the methods proposed in the paper for various NLP tasks and prompt construction templates contribute to the field of prompt engineering. Additionally, the abstraction of token-level constraints aligns with the goal of refining prompt engineering to achieve better performance with language models. However, the paper appears to focus more on the pre-training task and zero-shot learning rather than the detailed intricacies of prompt engineering, which is why the relevance is scored as a 7 rather than higher."
-unified multimodal pre-training and prompt-based tuning for vision-language understanding and generation,gpt-4-1106-preview,7,"The abstract discusses the use of prompt-based methods for fine-tuning models on different downstream tasks. This is directly related to prompt engineering as it involves designing and choosing the right prompts for effective model performance. The information provided is relevant, as the study deals with how prompts can be used in model tuning, particularly in few-shot scenarios, although it does not specifically discuss 'hard prefix prompts'. This might slightly reduce the relevance as the prompt seems to inquire about a systematic review on a specific type of prompts known as 'hard prefix prompts', which is not mentioned in the abstract. Nevertheless, the general relevance to prompt engineering is still significant."
-adversarial robustness of prompt-based few-shot learning for natural language understanding,gpt-4-1106-preview,7,"The study focuses on prompt-based few-shot learning (FSL) methods within natural language understanding, which is a subset of prompt engineering as it investigates the utilization of prompts for model fine-tuning. Evaluating the adversarial robustness of prompt-based FSL is relevant as it considers the stability and reliability of these prompts under adversarial conditions, a crucial aspect for prompt engineering. However, the study is more focused on the robustness to adversarial attacks rather than on the broader aspects of prompt engineering such as prompt design, optimization, or the systematic review of 'hard prefix' prompts. Therefore, while the study is highly relevant to a specialized area of prompt engineering, it does not cover the full scope of a 'comprehensive systematic review on hard prefix prompts,' so it gets a rating of 7."
-decorate the newcomers: visual domain prompt for continual test time adaptation,gpt-4-1106-preview,8,"The paper described involves the concept of 'prompt learning' from NLP but applies it to the visual domain, suggesting a novel crossover of prompt engineering techniques to continual test-time adaptation for images. While the research isn't about textual 'hard prefix prompts' in NLP, the principles of designing prompts for domain adaptation and mitigating issues like catastrophic forgetting are closely related to prompt engineering in how they shape model inputs for better performance. Thus, it is relevant but not directly focused on the prompt engineering study in the text domain."
-"toward human readable prompt tuning: kubrick's the shining is a good movie, and a good prompt too?",gpt-4-1106-preview,9,"The paper discussed is highly relevant to prompt engineering as it addresses the direct issue of how to create effective and fluent prompts through a novel tuning method. It contributes to the understanding of what makes a prompt effective, ensuring topical relevance and adjusting prior probabilities. The only reason it is not rated a perfect 10 is that the prompt engineering study specifically asked for 'hard prefix prompts,' which this summary does not explicitly state that the paper addresses. However, the general principles and methodology presented are very likely applicable to prompt engineering as a whole." -parameter-efficient prompt tuning makes generalized and calibrated neural text retrievers,gpt-4-1106-preview,9,"The abstract discusses prompt tuning, a form of prompt engineering, in the context of neural text retrievers. It emphasizes parameter efficiency, which is a crucial factor in the design and use of prompts for AI models. Moreover, the study explores prompt tuning's impact on generalizability across various domains, directly relating to advancements in prompt engineering methodologies. Hence, it is highly relevant to the prompt engineering study, although it focuses on a specific application rather than a broad range of use cases." -relation extraction as open-book examination: retrieval-enhanced prompt tuning,gpt-4-1106-preview,8,"The abstract discusses a novel application of prompt tuning in the context of relation extraction, by utilizing a retrieval-enhanced prompt tuning approach. While it does not directly address 'hard prefix prompts' or a 'comprehensive systematic review', it certainly falls within the broader category of prompt engineering studies. The focus on improving performance on hard or rare patterns, and the method of combining parametric and non-parametric techniques, relate closely to the challenges prompt engineering aims to address, especially in the context of improving prompt-based models' generalization capabilities. Thus, the relevance is high, although not perfect, due to the absence of a specific focus on 'hard prefix prompts' or a 'systematic review' aspect." -rethinking reinforcement learning for recommendation: a prompt perspective,gpt-4-1106-preview,7,"The relevance to prompt engineering in this study lies in the proposed Prompt-Based Reinforcement Learning (PRL) framework for recommendations, which intersects with the field of prompt engineering by leveraging state-reward inputs as prompts during the decision-making process. The study doesn't center on prompt engineering as it typically applies to language models or processes of tuning textual inputs, but it does conceptualize a similar method within the RL context, angling prompts as essential elements in training RL models for improved recommendation systems. Therefore, its relevance is notable but not directly central to the typical application of prompt engineering, which more commonly refers to optimizing inputs for generative language models." -point prompt tuning for temporally language grounding,gpt-4-1106-preview,7,"The abstract discusses 'Point Prompt Tuning (PPT)' as a novel approach that integrates prompt-based strategies within a multi-modal learning framework, specifically applied to the task of temporally language grounding (TLG) in videos. 
Since the methodology involves formulating a query rewriting strategy as prompts and integrating it with a multi-modal transformer, it directly relates to the concept of prompt engineering. The relevance to prompt engineering is quite high since it involves designing and using prompts to improve task performance. However, it is not a comprehensive systematic review on hard prefix prompts, as the initial prompt suggested, but rather an application of prompt tuning strategies in a specific domain. Therefore, the rating is not a perfect 10, but still significant due to the use of prompt engineering techniques." -generative prompt tuning for relation classification,gpt-4-1106-preview,9,"The abstract presents a study that is highly relevant to the field of prompt engineering. It addresses the limitations of the existing prompt tuning methods when dealing with complex label spaces for relation classification tasks. By introducing a generative prompt tuning approach that reformulates the problem into an infilling task, the study directly applies to developing new techniques within prompt engineering. The relevance is therefore rated a 9 out of 10 because it contributes significantly to the understanding and development of prompt-based methods, although it focuses specifically on relation classification rather than prompt engineering in general." -prompt learning for few-shot dialogue state tracking,gpt-4-1106-preview,8,"The paper described is relevant to prompt engineering as it discusses a prompt learning framework for few-shot dialogue state tracking (DST), which is inherently related to the utilization of prompts to improve model performance with limited labeled data. The use of value-based prompts and an inverse prompt mechanism connects directly to the design and implementation of prompts in the context of leveraging pre-trained language models (PLM). While the study is not specifically about 'hard prefix prompts' and does not perform a systematic review, it is still highly relevant to the broader field of prompt engineering due to its focus on improving the efficiency of knowledge probing from PLMs using specially designed prompts, which is an essential aspect of prompt engineering. Therefore, the paper receives a high relevance score." -"prompt, generate, then cache: cascade of foundation models makes strong few-shot learners",gpt-4-1106-preview,7,"The abstract discusses the use of GPT-3 to 'Prompt, Generate, then Cache', indicating an application of language generation for creating prompts, which is relevant to prompt engineering. Additionally, the integration of multi-modal models such as CLIP and DALL-E implies the use of prompts to facilitate communication across language and image domains, which is an advanced form of prompt engineering. However, the primary focus of the paper appears to be on few-shot learning and integrating diverse pre-training knowledge, rather than on systematic review of hard prefix prompts specifically. Therefore, while it is related to prompt engineering, it is not directly focused on a comprehensive review of that domain, hence the rating is not a full 10." -instructionner: a multi-task instruction-based generative framework for few-shot ner,gpt-4-1106-preview,7,"The relevance of the provided abstract to prompt engineering is quite significant, as it discusses the usage of prompt-based methods in few-shot learning and the refinement of those prompts for a specific downstream task, which is named entity recognition (NER). 
While the focus of the study is on the development of a framework for NER, the essence of reformulating tasks as generation problems and enriching source sentences with task-specific instructions is closely related to prompt engineering. This process involves creating prompts that effectively guide the language model to perform a desired task. However, because the abstract does not explicitly mention 'hard prefix prompts' or conduct a systematic review on prompt engineering, the rating is not a full 10." -finding skill neurons in pre-trained transformer-based language models,gpt-4-1106-preview,7,"The paper is moderately relevant to prompt engineering study. It doesn't directly focus on generating or optimizing prompts -- which would be the core subject of a prompt engineering study. However, the identification of 'skill neurons' within transformers after prompt tuning relates to understanding how prompts can affect neural language models and how specific neurons contribute to processing tasks after prompt-based training. This has implications for prompt engineering, as insight into which neurons are 'skill neurons' might inform how to structure or alter prompts to target these neurons and improve task performance." -good examples make a faster learner: simple demonstration-based learning for low-resource ner,gpt-4-1106-preview,8,"The abstract details a study on demonstration-based learning, which is a part of prompt-based learning methodologies. Although it focuses specifically on named entity recognition (NER), the principles of designing demonstrations and templates are directly related to the broader field of prompt engineering. The study's emphasis on the effect of different demonstration strategies on performance and the exploration of in-context learning provide insights that are applicable to prompt engineering. The relevance to prompt engineering is notable due to the systematic study of these strategies, which is a component of the hard prefix prompts mentioned in the initial query. However, the rating is not a full 10, as the abstract suggests a specific application (NER) rather than a focus on prompt engineering in general." -promptbert: improving bert sentence embeddings with prompts,gpt-4-1106-preview,9,"The paper describes a method that directly pertains to prompt engineering, specifically within the context of improving sentence embeddings using a novel contrastive learning method named PromptBERT. The emphasis on overcoming the limitations of BERT by integrating prompts into the sentence embedding process is highly relevant to the study of prompts in engineering. The research not only introduces a new prompt-based embedding method but also explores prompt representation and searching methods, which are central themes in prompt engineering. The proposed unsupervised training objective with template denoising is similarly a significant contribution to this field. The only reason the score is not a full 10 is that it doesn't mention 'hard prefix prompts' explicitly, but the overall context is very much applicable to the subject of prompt engineering." -how can we know what language models know?,gpt-4-1106-preview,9,"The paper directly addresses a core aspect of prompt engineering by focusing on the automatic generation of high-quality and diverse prompts to elicit more accurate information from language models. 
Improving prompt quality is a fundamental part of prompt engineering, and the paper's experimental results on the enhancement of LM accuracy are highly relevant to studies of prompt effectiveness. The slight deduction from a perfect score is due to the abstract not specifying 'hard prefix prompts', indicating the review might not focus exclusively on that particular subset of prompt engineering." -realfusion 360° reconstruction of any object from a single image,gpt-4-1106-preview,8,"The abstract describes the use of a conditional image generator and the engineering of a prompt to improve the neural network's ability to 'dream up' or synthesize novel views of an object from a single image. This directly relates to the field of prompt engineering, as the research involves designing and refining a prompt to guide an AI model to perform a specific task more effectively. The relevance to prompt engineering study is high because it involves a practical application of prompt design to achieve better results in an AI-driven task. The score is not a full 10 because the abstract focuses on the application of this prompt engineering in the context of 3D reconstruction rather than the study of prompt engineering itself." -active prompting with chain-of-thought for large language models,gpt-4-1106-preview,9,"The paper addresses an advanced technique within prompt engineering, specifically for large language models (LLMs), by introducing active prompting with example-based CoT reasoning. This is highly relevant to the field of prompt engineering as it involves creating task-specific example prompts and evaluating their effectiveness for LLMs' performance on complex reasoning tasks. The mention of uncertainty metrics and the adaptation of the active learning framework to prompt design underscore the paper's direct and substantial contribution to developing and improving prompting strategies. The reason it's not a 10 is that it doesn't cover 'hard prefix prompts' which may suggest a more specific subset of prompt engineering techniques not explicitly mentioned in the abstract." -gpt3mix: leveraging large-scale language models for text augmentation,gpt-4-1106-preview,7,"The paper is highly relevant to the study of prompt engineering as it discusses a method to leverage large-scale language models, like GPT-3, using prompts to generate text for data augmentation. This is intrinsically linked to the concept of prompt engineering, which involves designing prompts to elicit desired responses from language models. However, the focus of the paper is more on the application of these prompts for data augmentation rather than a systematic review on hard prefix prompts specifically. The relevance is high because the technique proposed is a practical application of prompt engineering, but it is not a comprehensive review on the topic." -warp: word-level adversarial reprogramming,gpt-4-1106-preview,8,"The abstract presents research that extends earlier work on automatic prompt generation, which is highly relevant to the prompt engineering field. Adversarial reprogramming, as discussed in the paper, is a method for learning task-specific prompts that improve the performance of language models on various tasks. The focus on prompt generation suggests a strong relevance to studies on 'hard prefix prompts' or engineered prompts intended to direct model behavior. However, as the abstract does not explicitly mention 'hard prefix prompts', the rating is not a full 10." 
-prompting for multimodal hateful meme classification,gpt-4-1106-preview,8,"The study appears to be highly relevant to prompt engineering as it involves the creation of a prompt-based model (PromptHate) that specifically addresses the task of hateful meme classification by leveraging the capabilities of pre-trained language models through the use of prompts. The use of 'simple prompts' alongside 'in-context examples' indicates a direct application of prompt engineering techniques to extract and utilize implicit knowledge from the models. However, the study seems to focus on a specific application of prompts in the context of multimodal tasks (hateful meme classification), which may slightly limit its generalizability to prompt engineering as a whole. Despite this, the study's effort in optimizing prompts for a complex task adds valuable insights to the field of prompt engineering."
-badprompt: backdoor attacks on continuous prompts,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it focuses on the security aspect of continuous prompt-learning algorithms, which are a core component of prompt-based learning paradigms. Although the study is not directly analyzing 'hard prefix prompts' but is rather investigating 'backdoor attacks' on continuous prompts, understanding such vulnerabilities is crucial for the overall field of prompt engineering, particularly for ensuring the robustness and reliability of models using prompts. However, it may be slightly less relevant if the specific focus of the inquiry is on 'hard prefix prompts,' as this paper investigates continuous prompts, which could be conceptually distinct."
-multilingual relation classification via efficient and effective prompting,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it focuses on the application of prompting methods in multilingual relation classification, a specific area within NLP tasks that can benefit from engineered prompts. The research introduces a method for constructing prompts efficiently and evaluates its effectiveness in various scenarios, including low-resource languages. The relevance to hard prefix prompts is indirect since the focus is more on the application and efficacy of prompt methods rather than on the systematic analysis of different prompt structures, but it still contributes valuable insights to the field of prompt engineering."
-knowledge prompting in pre-trained language model for natural language understanding,gpt-4-1106-preview,9,"The abstract describes a method for incorporating factual knowledge into Pre-trained Language Models (PLMs) via a 'knowledge prompting' technique, which is highly relevant to prompt engineering. The study not only discusses the integration of knowledge prompts with PLMs but also introduces novel knowledge-aware tasks. This indicates a direct application and exploration of prompting mechanisms within language models, thereby warranting a high relevance rating. A point is withheld because the abstract does not explicitly mention 'hard prefix prompts,' suggesting that while the paper is relevant to prompt engineering, it may not specifically cover the systematic review aspect of hard prefix prompts."
-multi-stage pre-training for automated chinese essay scoring,gpt-4-1106-preview,7,"The relevance to prompt engineering is significant given that the paper outlines a method that requires fine-tuning an essay scoring model on different types of prompts. This aligns with prompt engineering since the quality and characteristics of these prompts directly influence the training and performance of the AI model. Furthermore, including weakly supervised and cross-prompt fine-tuning stages implies a deep understanding of how prompts interact with the model. However, the focus appears to be more on automated essay scoring than on hard prefix prompts specifically, which is why the score is not higher."
-punifiedner: a prompting-based unified ner system for diverse datasets,gpt-4-1106-preview,8,"The paper presents PUnifiedNER, a model leveraging prompt learning, which is a subfield of prompt engineering. This NER system's ability to train across multiple datasets and efficiently recognize a wide range of entity types by using prompts directly relates to the study of prompt design and utilization within models, a key aspect of prompt engineering. The relevance is not maximal since the abstract does not specifically discuss the nature of the 'hard prefix prompts' mentioned in the initial query, but it does focus on prompt-based learning which is closely related to the field of study in question."
-prompting through prototype: a prototype-based prompt learning on pretrained vision-language models,gpt-4-1106-preview,8,"The abstract describes a relevant method in the field of prompt engineering, specifically focusing on a prototype-based prompting approach for few-shot image recognition tasks using pretrained vision-language models (PVLMs). Although the study presented is not directly examining 'hard prefix prompts', it is relevant to the broader topic of prompt engineering as it explores how prompts can be optimized and tailored for specific instances or tasks. The prototype-based approach is an innovative instance-level technique that directly contributes to the understanding and development of prompt-based methods in machine learning. The high rating reflects the study's potential contributions to the field of prompt engineering, despite not addressing hard prefix prompts explicitly."
-self-prompting large language models for open-domain qa,gpt-4-1106-preview,8,"The abstract describes a research study focusing on the use of Large Language Models (LLMs) for Open-Domain Question Answering (ODQA) by introducing a Self-Prompting framework that relies on in-context learning through prompts generated by the LLMs themselves. This approach directly involves the concept of 'prompt engineering,' as it requires the design and structuring of prompts to effectively guide LLMs to produce useful pseudo QA pairs for learning. It is highly relevant to prompt engineering because it explores an innovative way of using prompts to leverage the internal knowledge of LLMs, thereby eliminating the dependency on external datasets. Although the study does not focus specifically on 'hard prefix prompts', it does tackle the broader area of how prompts can be used to enhance the performance of LLMs in a specific task, which makes it quite relevant to the field of prompt engineering."
-dialogue state tracking with a language model using schema-driven prompting,gpt-4-1106-preview,8,"The abstract discusses a novel approach that employs 'schema-driven prompting' for dialogue state tracking, which is relevant to prompt engineering as it involves designing prompts that guide a language model's behavior. The use of prompts for task-aware history encoding aligns with the subject of prompt engineering. Although it does not directly reference 'hard prefix prompts', the concept of schema-driven prompts is closely related to the topic of how prompts affect the performance of language models. The high rating reflects the relevance of schema-driven prompting in the broader field of prompt engineering study, despite it not being an exact match for 'hard prefix prompts'."
-mapl: parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting,gpt-4-1106-preview,7,"The abstract describes a method (MAPL) for adapting pre-trained unimodal models for few-shot learning in multimodal vision-language settings, which is relevant to prompt engineering as it involves leveraging existing models to perform new, related tasks with minimal training data. However, the focus is on a parameter-efficient adaptation technique rather than the systematic study of prompt design or hard prefix prompts specifically, hence the rating of 7 reflecting substantial relevance but not a direct focus on the prompt engineering methodology."
-transprompt: towards an automatic transferable prompting framework for few-shot text classification,gpt-4-1106-preview,8,"The mentioned study focuses on a prompting framework aimed at few-shot text classification tasks, which is highly relevant to prompt engineering. The transferability aspect of the prompts across similar NLP tasks suggests novel techniques in prompt design and application, contributing to the field of prompt engineering. The use of cross-task transferable knowledge is especially pertinent, although the provided abstract does not specifically mention 'hard prefix prompts,' which was the topic requested. Therefore, while the study is much related to prompt engineering, it may not entirely focus on the subset of 'hard prefix prompts,' leading to a slightly lower rating."
-context-faithful prompting for large language models,gpt-4-1106-preview,9,"The paper presents methods for improving the performance of Large Language Models (LLMs) on context-sensitive tasks using advanced prompt engineering techniques. Although it does not explicitly mention 'hard prefix prompts,' the focus on 'carefully designed prompting strategies' is highly relevant to the broader field of prompt engineering. Opinion-based prompts and counterfactual demonstrations are specific types of prompts that could fall under the category of systematic review on hard prefix prompts. Therefore, the paper is likely to contribute valuable insights to the study of prompt engineering."
-prompting technologies: a comparison of time-based and context-aware transition-based prompting.,gpt-4-1106-preview,7,"The study presented in the abstract is relevant to prompt engineering as it investigates the timing and context of delivering prompts, which can be crucial for the effectiveness of interventions in cognitive tasks. Although the study does not directly address 'hard prefix prompts,' which are specifically designed prompts in language models or AI environments, the underlying principles of effective prompting are closely related to prompt engineering. The comparison between time-based and context-aware prompting can inform how to design better prompts by understanding user interaction and response patterns. Therefore, this study holds relevance for the broader field of prompt engineering, especially in user-centric applications where user experience and interaction timing are important, even though it doesn't directly deal with hard prefix prompts."
-"self-contradictory hallucinations of large language models: evaluation, detection and mitigation",gpt-4-1106-preview,8,"The provided abstract is highly relevant to prompt engineering as it discusses a prompting-based framework to address self-contradictions in large language models. Self-contradiction is a critical issue that can affect the effectiveness of prompts, and the study's focus on evaluation, detection, and mitigation is directly related to improving the performance of prompts in generating consistent and reliable output from LMs. The high relevance rating is justified because the paper tackles the challenge of crafting prompts that can lead to better-managed discourse by the LM, which is a core aspect of prompt engineering. While the study does not specifically mention 'hard prefix prompts,' it is closely allied with prompt engineering principles and practices." -a prompting-based approach for adversarial example generation and robustness enhancement,gpt-4-1106-preview,9,"The paper is highly relevant to prompt engineering as it focuses on the development of prompt-based adversarial attacks and a robustness enhancement technique that uses prompts to improve model resistance to attacks. It indicates the potential of prompting paradigms in identifying and mitigating the vulnerabilities of pre-trained language models, which are at the core of prompt engineering. The only reason it is not rated a full 10 is that it is more focused on the application of prompts for adversarial purposes rather than a comprehensive study of hard prefix prompts in general." -dictionary-based phrase-level prompting of large language models for machine translation,gpt-4-1106-preview,8,"The article titled 'dictionary-based phrase-level prompting of large language models for machine translation' is highly relevant to prompt engineering as it describes a novel method for improving machine translation through the use of prompt engineering techniques. Specifically, it explores the use of large language models for MT and addresses the challenge of rare words by incorporating bilingual dictionaries into prompts, which directly falls within prompt engineering. The rating is not a full 10 because the study focuses on machine translation and the use of dictionaries for assisting translation of rare words which is a specific application of prompt engineering rather than a comprehensive review of hard prefix prompts in general." -fine-grained controllable text generation using non-residual prompting,gpt-4-1106-preview,8,"The abstract presents an approach to improve fine-grained control of text generation in Causal Language Models (CLMs) using an encoder-decoder architecture and intermediate text prompts. While the study is focused on text generation control rather than prompt engineering directly, it is highly relevant to the field of prompt engineering as it proposes a method for enhancing the control and versatility of prompts within these models. The introduction of intermediate prompts as a mechanism for controlling text generation could be applicable to 'hard prefix prompts' research, hence the high relevance score. However, it does not address 'hard prefix prompts' specifically, which prevents a full score." -understanding and improving visual prompting: a label-mapping perspective,gpt-4-1106-preview,7,"The study deals with visual prompting (VP), which is closely related to the concept of 'prompt engineering' in the sense that both involve techniques for effectively leveraging pre-trained models for new tasks. 
However, the focus on 'label-mapping' and visual tasks diverges from the typical context of 'hard prefix prompts,' which often relates to text prompts in natural language processing. Still, the principles investigated can be relevant to prompt engineering in a broader sense as it explores the relationship between prompting and label mapping to improve task accuracy." -automatic multi-label prompting: simple and interpretable few-shot classification,gpt-4-1106-preview,9,"The study presents a new method within the field of prompt engineering, directly aiming to improve the efficiency and efficacy of prompt-based few-shot text classification. As prompt engineering is a critical aspect of utilizing pretrained language models, and the paper offers a systematic approach to select label mappings for prompts, it is highly relevant to the field of prompt engineering. The only reason it does not receive a 10 is because it does not specifically address 'hard prefix prompts,' but rather prompt-based learning in a broader sense." -fs-detr: few-shot detection transformer with prompting and without re-training,gpt-4-1106-preview,7,"The paper discusses a new few-shot detection transformer (FS-DETR) that uses visual prompting, which is a form of prompt engineering. Visual prompts are used to provide the model with additional context without re-training. While the study does not specifically focus on 'hard prefix prompts', it does explore the concept of using prompts in a transformer-based model, which is a relevant aspect of prompt engineering. Therefore, the relevance to prompt engineering is significant but not directly focused on 'hard prefix prompts' which may suggest a slightly lower rating." -prompting contrastive explanations for commonsense reasoning tasks,gpt-4-1106-preview,9,"The study directly involves the use of language models to generate explanations for commonsense reasoning tasks by contrasting alternatives, which is a form of prompt engineering. This approach modifies how prompts are presented to the language model to elicit more informative and justifiable outputs, closely aligning with the concept of 'hard prefix prompts' where the prompt structure is critical to guide the language model's generation process. The relevance is high because the research focuses on improving the interpretability and effectiveness of prompts given to PLMs." -enhancing cross-lingual prompting with mask token augmentation,gpt-4-1106-preview,8,"The title 'Enhancing Cross-Lingual Prompting with Mask Token Augmentation' suggests a focus on improving the effectiveness of prompts within the context of multilingual language models. The abstract confirms that the paper investigates prompt-based approaches, particularly in cross-lingual scenarios, and proposes a method to optimize this process. Although the study deals with 'prompting' in the broader sense of language model applications and doesn't specify 'hard prefix prompts', it is still highly relevant to the field of prompt engineering. It presents empirical analysis and a novel framework for prompt enhancement. However, without explicit mention of 'hard prefix prompts', the rating is not a full 10." -generated knowledge prompting for commonsense reasoning,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering since it discusses 'generated knowledge prompting,' which is a method of using generated knowledge as a prompt to enhance performance in commonsense reasoning tasks. 
This falls within the purview of prompt engineering as it involves the strategic manipulation of inputs to a language model to garner better performance. Although it does not specifically mention 'hard prefix prompts,' it does approach the broader topic of how prompts can be used to integrate external knowledge into a language model's reasoning process, which may be beneficial to those studying ways to optimize prompting techniques." -dynamic prefix-tuning for generative template-based event extraction,gpt-4-1106-preview,7,"The abstract discusses a generative template-based event extraction method that utilizes dynamic prefix (GTEE-DynPref), which is highly relevant to prompt engineering as it involves type-specific prefixes that are adaptively integrated with context information. This suggests an innovation in how prompts are engineered to be context-specific rather than static, contributing to the study of prompts in NLP tasks. However, the focus on event extraction as a specific application may slightly limit the relevance to the broader field of prompt engineering since it doesn't address prompt engineering in a variety of other AI model contexts." -an empirical study of gpt-3 for few-shot knowledge-based vqa,gpt-4-1106-preview,7,"The paper describes a novel approach to using GPT-3 with prompts, specifically tailored for knowledge-based visual question answering (VQA). Although the primary focus is on VQA and not on 'hard prefix prompts' in general, the method of incorporating prompts using image captions is indeed relevant to the broader field of prompt engineering. The study explores how prompts can effectively guide a language model to utilize its latent knowledge base for a specific task. The systematic investigation into what text formats best describe image content and how to select in-context examples could provide valuable insights for prompt engineering studies." -interactive-chain-prompting: ambiguity resolution for crosslingual conditional generation with interaction,gpt-4-1106-preview,7,"The study's focus on 'interactive-chain prompting' as a mechanism to resolve ambiguities in crosslingual conditional generation suggests a significant overlap with prompt engineering techniques, especially within the context of natural language processing and machine learning. Even though the paper does not explicitly study 'hard prefix prompts,' the proposed method represents a form of advanced prompting strategy that can be valuable in the broader field of prompt engineering. The paper could hence provide insights into the design and effectiveness of complex prompting mechanisms, which is relevant for the study of prompt engineering. However, since the paper's primary focus is not on prompt engineering but on improving translation quality through interaction, the rating is not a full 10." -fast and constrained absent keyphrase generation by prompt-based learning,gpt-4-1106-preview,7,"The prompt engineering relevance of the study is substantial, considering it details a novel approach for keyphrase generation using prompt-based learning, which falls under the domain of controlled natural language generation—a key aspect of prompt engineering. The proposed method's constrained generation technique, which uses prompts derived from keywords to guide the production of absent keyphrases, is closely related to the concept of 'hard prefix prompts' where prompts direct the generative process. 
Although the main focus of the study is on efficient and consistent keyphrase generation rather than prompt engineering per se, the techniques employed for creating and utilizing prompts in the learning process have significant implications for the field of prompt engineering. It demonstrates a method to control and speed up the language generation process, which are key challenges in the development of efficient prompt engineering strategies. Nonetheless, the relevance is not given a full score as the primary focus seems to be on absent keyphrase generation rather than on the prompt engineering itself." -prompt-based re-ranking language model for asr,gpt-4-1106-preview,8,"The abstract discusses the application of a prompt-based method in the context of re-ranking for Automatic Speech Recognition, which is a form of prompt engineering. Although it does not directly address 'hard prefix prompts' in the systematic review sense, it describes a practical application of prompts in a machine learning model, BERT, indicating an overlap with prompt engineering studies. Therefore, the relevance is significant but not complete, as the focus is on a specific use-case rather than a broad analysis of prompt engineering techniques." -lfpt5: a unified framework for lifelong few-shot language learning based on prompt tuning of t5,gpt-4-1106-preview,7,"The paper presents a framework for lifelong few-shot language learning based on prompt tuning of T5, which is relevant to the concept of prompt engineering. Although the main focus is on lifelong learning and few-shot learning capabilities, the utilization of prompt tuning indicates that the work contributes to the understanding of how prompts can be engineered and optimized for specific language tasks. Additionally, the generation of pseudo samples for preventing forgetting involves creating prompts that are conducive to the model's learning process. Therefore, the paper has significant relevance to prompt engineering, despite not focusing exclusively on 'hard prefix prompts.'" -cold-start data selection for better few-shot language model fine-tuning: a prompt-based uncertainty propagation approach,gpt-4-1106-preview,9,"The abstract describes a study focusing on a prompt-based data selection method (PATRON) for fine-tuning pre-trained language models, which is highly relevant to prompt engineering. The mention of designing a prompt-based uncertainty propagation approach directly involves the development and refinement of prompts, and thus it directly contributes to the study of prompt engineering. The 'partition-then-rewrite (PTR) strategy' is slightly less relevant to the core concept of 'hard prefix prompts' but still within the domain of prompt engineering. The only reason the rating is not a full 10 is that the detailed application to 'hard prefix prompts' is not specified, as this technique seems broader than just hard prefix prompts, covering aspects such as data selection and sample diversity." -visual prompt based personalized federated learning,gpt-4-1106-preview,7,"The paper presents a novel personalized federated learning framework that uses visual prompts to capture the data distribution information of clients. This approach is relevant to the study of prompt engineering because it involves the use of prompts (visual in this case) as a mechanism to improve the performance of machine learning models. 
While the term 'hard prefix prompts' typically refers to textual prompts, the use of visual prompts in this context is an extension of the idea to the visual domain. Hence, the relevance is substantial due to the innovation in prompt utilization, although it may not directly address 'hard prefix prompts' as understood in natural language processing." -memobert: pre-training model with prompt-based learning for multimodal emotion recognition,gpt-4-1106-preview,8,"The paper's abstract discusses the use of a prompt-based method in the context of multimodal emotion recognition, which is highly relevant to prompt engineering. The relevance is underscored by the fact that the prompt-based learning is used to redefine a downstream task, which is a core area of interest in prompt engineering studies. However, the focus on emotion recognition rather than hard prefix prompts specifically means it is not entirely focused on prompt engineering, hence the rating is not a perfect 10." -prompt-based text entailment for low-resource named entity recognition,gpt-4-1106-preview,7,"The abstract discusses a methodology for adapting pre-trained language models to named entity recognition tasks by changing the task to text entailment with entity type-specific prompts. This is related to prompt engineering as it involves crafting prompts to interact with language models and manipulate their behavior to improve performance on specific tasks without extensive labeled data. However, the term 'hard prefix prompt' is not explicitly mentioned, indicating that the study might not be focused on hard prefix prompts specifically but rather on prompt-based methods in a broader sense. The relevance is significant due to the use of prompts in adjusting language model behavior but is not fully aligned with a study specifically on hard prefix prompts." -consprompt: easily exploiting contrastive samples for few-shot prompt learning,gpt-4-1106-preview,9,"The title and abstract indicate the study is highly relevant to prompt engineering. It discusses the development of a model (ConsPrompt) that leverages contrastive samples to enhance the fine-tuning process in prompt learning, particularly in few-shot settings. The paper's focus on finding strategies for more effective prompt initialization and improving the robustness of prompt learning aligns well with the topic of prompt engineering. It offers a novel approach, aligns with current challenges in the field, and claims to set a new standard for performance and robustness in few-shot learning tasks." -towards informative few-shot prompt with maximum information gain for in-context learning,gpt-4-1106-preview,9,"The study addresses a fundamental aspect of prompt engineering by exploring the effect of data example selection on the stability and performance of LLMs in few-shot scenarios. The introduction of a method to quantify Information Gain from data examples and the proposal to choose examples with maximum IG are directly relevant to enhancing prompt design. Additionally, the identification and mitigation of template bias in assessing IG can improve the quality of prompt engineering. While not exclusively focused on 'hard prefix prompts', this work contributes to the broader field of prompt engineering, thus receiving a high relevance rating." 
-few-shot multi-modal sentiment analysis with prompt-based vision-aware language modeling,gpt-4-1106-preview,7,"The described study focuses on multi-modal sentiment analysis (MSA) using a few-shot learning approach and a prompt-based vision-aware language modeling (PVLM) method. The relevance to prompt engineering lies in the paper's emphasis on 'prompt tuning' as a method to incorporate multimodal information into a pre-trained language model for sentiment analysis tasks. This suggests that the study addresses the use of prompts within a deep learning model, specifically to bridge the gap between pre-training and specific NLP tasks. However, it does not primarily focus on 'hard prefix prompts', as mentioned in the prompt engineering study interest. Instead, it appears to be utilizing prompts as part of a broader framework for multi-modal learning. Therefore, the relevance is significant but not entirely on-topic with respect to studies centered specifically on 'hard prefix prompts'."
-unified multi-modal pre-training for few-shot sentiment analysis with prompt-based learning,gpt-4-1106-preview,7,"The abstract presents work related to 'prompt-based fine-tuning (PF)' for 'few-shot multi-modal sentiment analysis (MSA)', which suggests relevance to prompt engineering particularly in the context of model fine-tuning. The concept of using prompts to bridge modalities and improve few-shot learning is applicable to the study of prompt engineering, especially considering the innovative approach of a multi-modal prompt-based system. However, the focus is specifically on sentiment analysis and not on hard prefix prompts or a comprehensive systematic review of them. Therefore, while the study is related to prompt engineering, it is not a direct match for a comprehensive systematic review on hard prefix prompts, which affects the rating."
-commonsense knowledge-aware prompt tuning for few-shot nota relation classification,gpt-4-1106-preview,9,"The paper presents a study on commonsense knowledge-aware prompt tuning, which is directly related to prompt engineering as it discusses constructing relation-oriented templates and incorporating external knowledge for improving pre-trained language model performance in few-shot tasks. This is highly relevant to the field of prompt engineering, as it deals with optimizing prompts to effectively utilize the knowledge within language models. The only reason it doesn't receive a full 10 is that the focus is specifically on NOTA relation classification, which is a subset of the broader field of prompt engineering."
-dual context-guided continuous prompt tuning for few-shot learning,gpt-4-1106-preview,9,"The abstract describes a research work that is highly relevant to prompt engineering, specifically in the niche of continuous prompt tuning methods. The paper introduces a novel method to improve the efficiency of prompts in few-shot learning scenarios, which is a direct contribution to the field of prompt engineering. The proposal of dual context-guided continuous prompts (DCCP) and the discussion of its advantages over existing methods highlight its significance for studies on how prompts influence the performance of NLP models. The reason for not giving a full score of 10 is that while the paper is highly relevant, it may not cover the 'hard prefix prompts' aspect mentioned in the original prompt but focuses more broadly on continuous prompt tuning."
-a dual prompt learning framework for few-shot dialogue state tracking,gpt-4-1106-preview,8,"The paper describes the application of prompt learning in the context of Dialogue State Tracking (DST), which is a highly relevant area within natural language processing for task-oriented dialogue systems. The use of dual prompts and the idea of formulating the DST task as a language modeling task under few-shot settings directly concerns the engineering of prompts for effective model learning with limited data. The relevance to prompt engineering is high because it explores how to use prompts to assist pre-trained language models in understanding and generating dialogue states, which is an innovative approach to embed task-specific knowledge into the language model's processes. The paper's focus on incorporating task-related knowledge into prompts for language models aligns with prompt engineering objectives, such as improving model performance on targeted tasks with minimal examples. However, it does not cover all aspects of prompt engineering, such as the systematic study of different types of prompts (e.g., hard prefixes), hence the rating is not a full 10."
-multi-task pre-training of modular prompt for few-shot learning,gpt-4-1106-preview,9,"The abstract pertains directly to the field of prompt engineering, discussing an approach to improving few-shot learning in language models through pre-trained modular prompts (MP2). This is highly relevant to prompt engineering as it addresses enhancing the adaptability and efficiency of prompt tuning, which is a core aspect of the application of language models to downstream tasks. It presents empirical results showing the method's superiority over traditional prompt tuning and full model tuning in few-shot settings. The relevance is not rated a full 10 because the abstract mentions the specific application to Chinese tasks, which might not cover the full breadth of the general field of prompt engineering, but it is otherwise highly pertinent."
-idiapers @ causal news corpus 2022: efficient causal relation identification through a prompt-based few-shot approach,gpt-4-1106-preview,8,"The paper's methodology is highly relevant to prompt engineering as it specifically deals with fine-tuning language models using prompts in a few-shot learning configuration. The approach treats a specialized task, Causal Relation Identification, as a masked language modeling problem, which aligns with the concept of utilizing prompts to steer LMs towards desired outputs without extensive training data. This suggests relevance to prompt-engineering techniques, although it is not a direct study on 'hard prefix prompts,' which might be a specific subset of prompt engineering."
-p4e: few-shot event detection as prompt-guided identification and localization,gpt-4-1106-preview,8,"The provided abstract describes P4E, a framework for event detection that utilizes prompting (cloze-based prompting) as part of its methodology. The usage of prompts in the identification task is directly relevant to the field of prompt engineering. The study shows how prompts can be effectively integrated into the pre-training of language models for specific tasks like event detection, which falls within the scope of prompt engineering studies. However, the abstract also covers broader aspects of event detection, such as structured prediction and not exclusively prompts, so the rating is not a full 10."
-few-shot natural language inference generation with pdd: prompt and dynamic demonstration,gpt-4-1106-preview,7,"The study introduces a framework to improve performance in few-shot natural language inference generation tasks by incorporating prompts and dynamic demonstrations within a language model. Although it does not directly study 'hard prefix prompts', it is relevant to prompt engineering because it involves the development of prompts and their application to enhance model performance in natural language processing tasks. The improvements on benchmark datasets and the claim of good generalizability suggest that the techniques used could potentially inform prompt engineering strategies, particularly in few-shot learning contexts."
-discriminative language model as semantic consistency scorer for prompt-based few-shot text classification,gpt-4-1106-preview,9,"The paper introduces a finetuning method for text classification using prompts, which is highly relevant to prompt engineering. ELECTRA, being a language model used to distinguish between genuine and artificially generated text, contributes to the creation and evaluation of prompts, indicating a direct application to prompt engineering. This method is focused on improving the performance of language models in few-shot learning scenarios, which is a subset of prompt engineering. The rating is not a perfect 10 because the paper appears to be more focused on the application of a discriminative language model rather than on the prompt engineering process itself."
-dfs-ner: description enhanced few-shot ner via prompt learning and meta-learning,gpt-4-1106-preview,7,"The paper's abstract indicates that it involves 'prompt learning' as a part of the proposed DFS-NER model. The focus on using prompts to guide a masked-language model learning objective for semantic information absorption is relevant to prompt engineering, as it implies constructing and employing prompts for improving model performance. However, the paper is more specifically about Named Entity Recognition and how prompt learning can be integrated with meta-learning for this task, rather than a broad study of prompt engineering itself. Thus, it is only moderately relevant to the prompt about 'hard prefix prompts,' as the paper might not be directly focused on studying prompts in a comprehensive systematic manner but rather using them as a tool for a specific application in NER."
-a prompt-based few-shot machine reading comprehension model for intelligent bridge management,gpt-4-1106-preview,8,"The abstract describes a machine reading comprehension model that utilizes prompt-based techniques, which are relevant to the field of prompt engineering. The model's use of domain-specific heuristic rules to design prompt templates indicates a direct application and study of prompt engineering principles. However, the focus appears to be more on the model's application to bridge management rather than a comprehensive systematic review of prompt engineering, which might be expected from a study explicitly titled 'hard prefix prompts.' Therefore, the rating reflects its high relevance but not a perfect match due to the specific application context."
-a study on prompt-based few-shot learning methods for belief state tracking in task-oriented dialog systems,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it explores prompt-based few-shot learning, which directly relates to the development and use of prompts in the training of language models. The formulation of DST as a prompt-based task indicates a significant engagement with prompt design and optimization, which is a core aspect of prompt engineering. The empirical analysis of the performance of these prompt-based methods contributes to the understanding of their effectiveness, which is crucial for prompt engineering research. The study might not be focused exclusively on 'hard prefix prompts' as mentioned in the systematic review title, but it addresses a related and important aspect of the field."
-prompt and contrastive learning for few-shot sentiment classification,gpt-4-1106-preview,7,"The abstract you've provided describes a paper which is relevant to prompt engineering as it addresses a method for few-shot sentiment classification that uses prompts as part of the strategy. The proposed Prompt and Contrastive Learning (PCL) is directly related to the field of prompt engineering because it deals with bridging the gap between pre-training and fine-tuning of language models, a central issue in the utilization of prompts in NLP tasks. However, it does not specifically address 'hard prefix prompts' as mentioned in the prompt engineering study, therefore the rating is not a full 10. It is relevant due to its focus on the application of prompts to improve language model performance but does not directly address the systematic review aspect of 'hard prefix prompts'."
-ti-prompt: towards a prompt tuning method for few-shot threat intelligence twitter classification*,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering as it details a prompt-based method specifically designed for a few-shot classification task which is a key area of interest in prompt engineering studies. The approach of leveraging prompt tuning and refining verbalizer techniques directly pertains to the domain of prompt engineering, as it involves crafting and optimizing prompts to interface with language models effectively. Although the study is focused on a niche application of threat intelligence classification on Twitter, the methodologies and insights could be broadly applicable to other prompt engineering contexts."
-adaptive prompt learning-based few-shot sentiment analysis,gpt-4-1106-preview,9,"The paper appears highly relevant to prompt engineering as it proposes an adaptive prompt learning method for few-shot sentiment analysis, directly addressing the construction of prompts. The specific focus on adaptive prompts demonstrates an advanced application of prompt engineering aimed at improving the effectiveness of language models in few-shot learning scenarios. The only reason it is not rated a full 10 is due to the lack of information on 'hard prefix prompts', which may be a specific subset of the broader prompt engineering field."
-prompt-based few-shot learning for table-based fact verification,gpt-4-1106-preview,8,"The abstract discusses the use of the prompt method in the context of few-shot learning for table-based fact verification, which is directly relevant to prompt engineering because it explores how to design and utilize prompts to improve the performance of a pre-trained model on a specific NLP task with limited data samples. Although the main focus is on structured information in tables, the application of prompt-based approaches is a key part of prompt engineering. The rating is not a full 10 because the study seems to be more focused on a particular application of prompt engineering (table-based fact verification) rather than a broad systematic review of hard prefix prompts."
-investigating prompt learning for chinese few-shot text classification with pre-trained language models,gpt-4-1106-preview,8,"The abstract describes a study on a prompt-based framework for Chinese text classification, especially in few-shot learning scenarios, which is highly relevant to prompt engineering. However, it specifically addresses the adaptation of prompt-based methods for Chinese language tasks, which may not be directly applicable to the concept of 'hard prefix prompts' as it is not clear if the techniques are universally applicable to other languages or specific to Chinese. Therefore, while the study is related to prompt engineering, the rating is not a full 10 due to potential limitations in generalizability."
-"few-shot information extraction is here: pre-train, prompt and entail",gpt-4-1106-preview,8,"The abstract discusses an approach that employs prompting and fine-tuning pre-trained language models (PLMs) for achieving state-of-the-art results in Information Extraction with minimal annotations. Although it does not specifically mention 'hard prefix prompts', it centrally addresses prompt engineering by explaining how natural language prompts are used to harness PLMs and enhance their inference abilities for specific tasks. This work is highly relevant to prompt engineering studies, as it showcases the effectiveness of prompts in the context of improving PLM performance. The reason for not giving a full score is that the exact term 'hard prefix prompts' is not referenced, which may indicate this study focuses on a broader range of prompting methodologies."
-towards unified prompt tuning for few-shot learning,gpt-4-1106-preview,9,"The abstract discusses the concept of prompt-based fine-tuning and introduces a novel framework, Unified Prompt Tuning (UPT), designed for improving few-shot learning in BERT-style pre-trained language models by capturing prompt semantics. This is highly relevant to the field of prompt engineering as it directly addresses the enhancement of model performance through better understanding and utilization of prompts. It may not receive a perfect score as the abstract does not specifically mention 'hard prefix prompts' which could infer a nuanced subset within prompt engineering."
-cqare: contrastive question-answering for few-shot relation extraction with prompt tuning,gpt-4-1106-preview,9,"The abstract discusses prompt tuning, a relevant aspect of prompt engineering, specifically in the context of relation extraction (RE). The entire concept of 'prompt tuning' is central to the field of prompt engineering as it involves the refinement and manipulation of prompts to improve performance with pre-trained language models (PLMs). While the abstract does not discuss 'hard prefix prompts' directly, it does mention the challenges of prompt engineering for label mapping and the attempt to improve prompt tuning with Contrastive Question-Answering method (CQARE). Considering the abstract's focus on developing improved methods for prompt tuning which is a vital part of prompt engineering, the relevance rating is high."
-prompt-guided few-shot event detection,gpt-4-1106-preview,8,"The abstract describes the use of cloze prompts to assist in few-shot event detection by eliciting knowledge from pretrained language models. Although the main focus is on event detection, the study's reliance on prompt engineering is clear as it uses specifically crafted prompts to enhance the capabilities of machine learning models in a limited data scenario. The term 'hard prefix prompts' isn't mentioned directly, but the concept of designing efficient prompts is crucial to their methodology. This makes the study relevant to the field of prompt engineering, justifying the high rating."
-psp: pre-trained soft prompts for few-shot abstractive summarization,gpt-4-1106-preview,9,"The abstract provided discusses a novel methodology for few-shot abstractive summarization that relates closely to prompt engineering. It introduces a new concept of soft prompts along with a training paradigm focussed on these prompts. Although the study introduces 'soft prompts' rather than 'hard prefix prompts', it is still highly relevant due to its focus on the broader area of prompt tuning and engineering for model performance improvement. This contribution to prompt architecture and training directly informs how prompts can be effectively implemented and optimized in various machine learning scenarios. The difference in the type of prompts (soft vs. hard prefix) results in a rating of 9 instead of a perfect 10."
-few-shot text-to-sql translation using structure and content prompt learning,gpt-4-1106-preview,9,"The abstract presents a novel approach to prompt engineering within the specific domain of Text-to-SQL translation. It discusses the design of a hybrid prompt strategy that is particularly relevant for enhancing the performance of pre-trained language models on few-shot learning tasks. This directly ties into the study of prompt engineering by exploring how prompts can be optimized to guide language models more effectively. Although the application is specialized in Text-to-SQL, the concepts of structure stage and content stage prompting are highly relevant to the field of prompt engineering. The high rating reflects the paper's substantive contribution to the methodology of crafting and utilizing prompts to improve the performance of AI models."
-vppt: visual pre-trained prompt tuning framework for few-shot image classification,gpt-4-1106-preview,8,"The abstract describes a method for prompt tuning in the context of few-shot image classification with pre-trained transformers, which is closely related to prompt engineering. Although the subject is applied to computer vision rather than language models (which are more commonly associated with prompts), the principles of tuning prompts to adapt to downstream tasks are highly relevant. The approach discussed involves specific challenges and solutions in initializing and fine-tuning prompt modules in a parameter-efficient way, which is a key area of prompt engineering. The reason why the rating is not a full 10 is that the prompt engineering discussed is specific to visual tasks and may not directly translate to linguistic prompt engineering studies."
-decomposed two-stage prompt learning for few-shot named entity recognition,gpt-4-1106-preview,8,"The study presents a novel approach to prompt learning within the task of Named Entity Recognition (NER) in a few-shot setting, which is directly related to prompt engineering as it contributes to advancements in precision and efficiency of using prompts in machine learning models. The relevance to prompt engineering is high because it involves creating and using prompts specifically designed to improve the performance of NER tasks. The deduction of points is due to the specificity of the application to NER rather than a broader exploration of prompt engineering in general."
-few-shot table-to-text generation with prompt planning and knowledge memorization,gpt-4-1106-preview,8,"The study presents a framework called PromptMize, intended for table-to-text generation within few-shot learning scenarios, which focuses on the design of prompts to guide pre-trained language models. While it does not specifically address 'hard prefix prompts', it is highly relevant to the field of prompt engineering due to its emphasis on designing prompts (prompt planner) to bridge the gap between different data modalities (tabular data and text). This is a direct application of prompt engineering techniques in the context of natural language generation from structured data, and it advances the domain by integrating domain-specific knowledge into the prompting process. Therefore, this study should be of significant interest for those researching or studying prompt engineering, albeit not directly focused on hard prefix prompts."
-locoop: few-shot out-of-distribution detection via prompt learning,gpt-4-1106-preview,8,"The abstract describes an advancement in prompt learning specifically applied to few-shot out-of-distribution detection in the context of a vision-language model, which is relevant to the field of prompt engineering. However, the study focuses more on the application of prompt learning for improving OOD detection rather than the structure, phrasing, or systematic review of 'hard prefix' prompts. Despite this, the introduction of a local regularization technique called LoCoOp that enhances performance in prompt-based models indicates a significant contribution to the prompt engineering domain, particularly in algorithmic improvement for better model generalization. Therefore, it is not a perfect match to the study of 'hard prefix prompts,' but it is closely related due to its focus on enhancing prompt learning methods."
-few-shot joint multimodal aspect-sentiment analysis based on generative multimodal prompt,gpt-4-1106-preview,8,"The study introduces a Generative Multimodal Prompt model within the context of Multimodal Aspect-Based Sentiment Analysis, a subfield of prompt engineering related to few-shot learning. Prompt engineering typically involves crafting inputs that guide machine learning models, especially in few-shot or zero-shot settings. The relevance to prompt engineering is substantiated by the creation and use of prompts to handle multimodal data when labeled instances are sparse. This implies a strong connection to the strategies involved in prompt engineering. However, the study is specifically targeted at multimodal data and aspect-sentiment analysis, and it doesn't cover the entire breadth of prompt engineering, which may also include text-only or other single-modality frameworks. Thus, the relevance is rated as high but not absolute."
-partseg: few-shot part segmentation via part-aware prompt learning,gpt-4-1106-preview,9,"The paper presents a method for few-shot part segmentation by leveraging a part-aware prompt learning technique, which directly relates to the process of prompt engineering. The relevance is high because prompt engineering involves generating inputs that help models like CLIP better interpret and process information, which is what the paper appears to be achieving with its part-specific prompts. It's not a perfect 10 because the paper is application-specific (focused on few-shot part segmentation), whereas prompt engineering can also encompass broader methodologies and applications beyond this context."
-evolutionary verbalizer search for prompt-based few shot text classification,gpt-4-1106-preview,9,"The given abstract describes research focused on improving prompt-based tuning, specifically within the realm of few-shot text classification by developing a novel evolutionary verbalizer search (EVS) algorithm. Since prompt-based tuning is a direct application of prompt engineering, and this paper deals with the construction of optimal verbalizers, which are integral to the functioning of prompt-based models, its relevance to prompt engineering is high. However, it doesn't cover every aspect of prompt engineering, such as hard prefix prompts specifically, thus warranting a slightly less than perfect score."
-a chinese few-shot text classification method utilizing improved prompt learning and unlabeled data,gpt-4-1106-preview,8,"The abstract discusses a method for Chinese few-shot text classification (FSTC) that employs an improved prompt learning technique, indicating a close relevance to prompt engineering. It details an approach for creating and optimizing prompt prefixes specifically designed for Chinese, which falls directly within the study of prompt engineering. The method's use of multiple masks in prompt learning and its application in a semi-supervised context with unlabeled data enhance the relevance. The reason for not giving a full 10 is because the focus seems heavily on the application to Chinese text and the improvement of performance in FSTC; the abstract does not broadly address various aspects of prompt engineering beyond its specific use case."
-unified prompt learning makes pre-trained language models better few-shot learners,gpt-4-1106-preview,8,"The paper described is highly relevant to prompt engineering because it discusses a novel approach to prompt-based learning, which is an essential aspect of prompt engineering. It specifically addresses the challenge of balancing task-specific and instance-dependent information in prompts to enhance few-shot learning in language models. While it may not focus exclusively on 'hard prefix prompts,' which would be directly related to a systematic review on such prompts, it deals with the broader question of how to design and utilize prompts effectively, crucial for the field of prompt engineering."
-boosting prompt-based few-shot learners through out-of-domain knowledge distillation,gpt-4-1106-preview,7,"The abstract describes a method to improve prompt-based learning in the context of few-shot learning and knowledge distillation (KD), which is relevant to prompt engineering as it deals with enhancing the efficiency and performance of prompt-tuned Pre-trained Language Models (PLMs). Although the study focuses on knowledge distillation and model compression rather than the direct creation or manipulation of prompts, the optimization of models for prompt-based few-shot learning is a significant aspect of prompt engineering. Therefore, the relevance is fairly high, but not maximal due to the indirect focus on the engineering of prompts themselves."
-prompt-based zero- and few-shot node classification: a multimodal approach,gpt-4-1106-preview,7,"The study mentioned in the abstract focuses on the use of prompts in a multimodal approach for node classification, which is relevant to the field of prompt engineering in the context of machine learning. The 'prompt- and graph-based module' specifically indicates that prompts are engineered as part of the model to handle zero-shot scenarios, which is an application of prompt engineering. However, the primary focus seems to be on integrating text and graph modalities rather than on the systematic review of hard prefix prompts, which would more directly address the prompt engineering study. Thus, while the study is relevant due to the inclusion of prompts in the machine learning model, it may not fully represent a comprehensive review strictly on prompt engineering with 'hard prefix prompts'."
-prompt-distiller: few-shot knowledge distillation for prompt-based language learners with dual contrastive learning,gpt-4-1106-preview,8,"The article is highly relevant to prompt engineering as it directly addresses an aspect of prompt-based learning, which is a key area in prompt engineering. It offers innovative solutions for the deployment of prompt-tuned Pre-trained Language Models in few-shot learning scenarios through Knowledge Distillation. The focus on the few-shot KD algorithm designed for prompt-tuned PLMs ('Prompt-Distiller') aligns with the broader topic of engineering effective prompts for language models to enhance learning performance. While it may not specifically cover 'hard prefix prompts,' the overall context of prompt-based learning and improving the efficiencies of such systems makes it pertinent to the field of prompt engineering. A full 10 is not awarded as the abstract does not directly mention 'hard prefix prompts,' which was the specific subject of the systematic review requested."
-few-shot text-to-sql translation using structure and content prompt learning,gpt-4-1106-preview,9,"The paper describes a hybrid prompt strategy that leverages learnable and fixed vectors to guide Pre-trained Language Models (PLMs) for few-shot Text-to-SQL translation tasks. This is highly relevant to prompt engineering as it relates directly to the development of prompts that assist in task-specific predictions and facilitate model understanding. Although 'hard prefix prompts' are not mentioned explicitly, the approach is fundamentally connected to creating effective prompts for language models, thus making it pertinent to studies in prompt engineering."
-dreamartist: towards controllable one-shot text-to-image generation via positive-negative prompt-tuning,gpt-4-1106-preview,8,"The abstract discusses the use of prompt-tuning strategies, specifically introducing a 'positive-negative prompt-tuning learning strategy' in the context of text-to-image generation, which falls within the realm of prompt engineering. Prompt engineering is about finding effective ways to interface with language models or other AI systems using written prompts; the mention of positive and negative prompt tuning is a concrete example of this, tailored for a specific application. Therefore, this study is relevant to the broader field of prompt engineering as it explores a novel method to enhance the controllability and quality of outputs from AI systems. However, it does not specifically address 'hard prefix prompts,' which would be even more directly related to the prompt engineering study mentioned in the request. Thus, the rating is not a full 10."
-dreamartist: towards controllable one-shot text-to-image generation via contrastive prompt-tuning,gpt-4-1106-preview,7,"The paper discusses 'contrastive prompt-tuning,' which is a technique relevant to prompt engineering. Since prompt engineering involves methods to efficiently communicate with AI models, and in this case, to control text-to-image generation, the paper's subject is pertinent to the field. However, it doesn't focus on the 'hard prefix prompts,' which the initial request emphasizes. Therefore, the relevance is substantial but not entirely on point with the specific systematic review criteria stated."
-one-shot and partially-supervised cell image segmentation using small visual prompt,gpt-4-1106-preview,7,"The abstract describes a framework for cell image segmentation that uses concepts from prompt learning, which is related to the field of prompt engineering. While the main focus is on the application of these concepts to one-shot and partially-supervised learning for cell image segmentation, the utilization of 'small prompt images' and the attention given to prompt learning techniques in the study suggest relevance to prompt engineering. However, it does not appear to closely study hard prefix prompts as applied in NLP or broader prompt engineering, hence it is not a perfect match for the prompt engineering study."
-pøda: prompt-driven zero-shot domain adaptation,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering because it introduces a novel methodology that utilizes natural language prompts to drive the process of zero-shot domain adaptation. Though it does not focus specifically on 'hard prefix prompts', it does explore the role of prompts in guiding the adaptation of models to new domains, which is an essential aspect of prompt engineering in the broader sense. The use of CLIP and the approach to optimize feature transformations based on target text embeddings are elements that connect closely to the principles of prompt engineering, which includes crafting prompts to guide model behavior."
-cohoz: contrastive multimodal prompt tuning for hierarchical open-set zero-shot recognition,gpt-4-1106-preview,7,"The abstract describes CoHOZ, an approach for open-set recognition and zero-shot learning by leveraging hierarchical label trees and contrastive continuous prompt tuning. While it does not directly mention 'hard prefix prompts', it does engage with 'prompt tuning', which is a relevant aspect of prompt engineering. The relevance is marked as a 7 because the techniques and experiments could potentially contribute to the broader understanding of prompt engineering without being specifically focused on 'hard prefix prompts'. The concept of prompt tuning, particularly in a contrastive and multimodal setting, is pertinent to the study of how prompts are constructed and used, especially in zero-shot learning scenarios."
-proze: explainable and prompt-guided zero-shot text classification,gpt-4-1106-preview,7,"The abstract discusses 'ProZe,' a text classification approach that utilizes prompting pretrained language models, which is directly relevant to prompt engineering as it involves the method of using prompts to guide language models. However, the abstract also includes mention of querying ConceptNet for adding explainability, which is somewhat peripheral to the core concept of prompt engineering. Moreover, the study focuses on zero-shot text classification, which is only one aspect of the broader field of prompt engineering. Therefore, while prominently featuring elements of prompt engineering, the paper's focus on the combination of prompts with an external knowledge base and its aim for explainability dilutes the pure relevance to hard prefix prompts, hence the rating of 7."
-prompt-based extraction of social determinants of health using few-shot learning,gpt-4-1106-preview,7,"The study described in the abstract involves the use of one-shot prompting with GPT-4 to extract social determinants of health from unstructured text. This is relevant to prompt engineering because it focuses on the methodology of leveraging language models through prompts to achieve a specific task. While it does not directly study 'hard prefix prompts', which suggests a more specific kind of prompt engineering, the exploration of one-shot prompts and their comparison to traditional supervised approaches is within the broader domain of prompt engineering. Therefore, its relevance is high but not entirely focused on hard prefix prompts, warranting a rating of 7."
-augmenters at semeval-2023 task 1: enhancing clip in handling compositionality and ambiguity for zero-shot visual wsd through prompt augmentation and text-to-image diffusion,gpt-4-1106-preview,7,"The paper focuses on enhancing the performance of the CLIP model by addressing issues related to prompt engineering, such as the compositionality and ambiguity in natural language and generating more contextual prompts using large language models. While it is not specifically about 'hard prefix prompts', it does involve an in-depth look at modifying and improving prompts for better results, which is relevant to the broader field of prompt engineering study."
-self-supervised meta-prompt learning with meta-gradient regularization for few-shot generalization,gpt-4-1106-preview,9,"The abstract describes an approach to prompt tuning, particularly focusing on few-shot generalization, which is highly relevant to the field of prompt engineering. The method outlined involves learning soft prompts and touches on the challenges of generalization and overfitting, key issues in prompt engineering. The proposed framework, SUPMER, addresses these problems by creating a universal initialization for prompts, which contributes significantly to the study and advancement of prompt engineering methods. The reason the rating is not a perfect 10 is that the abstract does not explicitly discuss 'hard prefix prompts,' which was mentioned in the user's request for a 'comprehensive systematic review on hard prefix prompts.'"
-enhancing black-box few-shot text classification with prompt-based data augmentation,gpt-4-1106-preview,7,"The provided abstract focuses on the use of large-scale language models (LLMs) like GPT-3 for few-shot text classification and explores a method of interacting with them purely through their inference APIs, without requiring access to the gradients. The relevance to prompt engineering is found in the application of prompt-based data augmentation, which is a technique integral to the practice of prompt engineering. Although the primary focus seems to be on the black-box modeling approach and parameter-efficient adaptation, the utilization of prompts to augment data for better performance in few-shot scenarios suggests that the research contributes to the prompt engineering field. It does not, however, directly address a systematic review on hard prefix prompts, which would be the core topic of a prompt engineering study. Hence, the relevance is significant but not complete, leading to a rating of 7."
-"cocoopter: pre-train, prompt, and fine-tune the vision-language model for few-shot image classification",gpt-4-1106-preview,7,"The document's title suggests the use of a process that includes 'prompt' as part of the procedure for improving few-shot image classification. This indicates that the study involves some level of modification or creation of prompts to enhance model performance, which is relevant to prompt engineering. However, without further details on the nature of these prompts, particularly whether they pertain to language prompts typically used in prompt engineering, or are more broadly related to model conditioning, it's difficult to assess the full relevance. The mention of 'hard prefix prompts' in the initial query was not directly addressed, resulting in a rating that acknowledges relevance but cannot confirm an exact match."
-few-shot fake news detection via prompt-based tuning,gpt-4-1106-preview,8,"The abstract presents a study on a Fake News Detection model that utilizes prompt-based tuning, which is directly relevant to prompt engineering. The model's design incorporates contextual prompts to enhance the detection capabilities of pre-trained language models in few-shot scenarios. While the study is not a comprehensive systematic review on hard prefix prompts, it does focus on the application of prompts in a specific important area, hence the relatively high relevance score."
-lm-cppf: paraphrasing-guided data augmentation for contrastive prompt-based few-shot fine-tuning,gpt-4-1106-preview,8,"The paper 'lm-cppf: paraphrasing-guided data augmentation for contrastive prompt-based few-shot fine-tuning' directly relates to prompt engineering as it discusses the use of prompt-based tuning in the context of language model fine-tuning. Since prompt engineering fundamentally involves crafting input prompts to elicit the desired output from a language model, this paper's focus on leveraging paraphrasing-guided augmentation within the prompt-based few-shot fine-tuning framework demonstrates an application of prompt engineering. The relevance is not rated as a perfect 10 because the study seems to emphasize data augmentation and contrastive learning in addition to prompt-based methods rather than focusing solely on prompt engineering."
-syntax-aware hybrid prompt model for few-shot multi-modal sentiment analysis,gpt-4-1106-preview,9,"The paper describes a novel approach to prompt engineering by integrating hand-crafted and learnable prompts within a hybrid model for few-shot multi-modal sentiment analysis. Since prompt engineering involves crafting input prompts to guide models, especially in few-shot learning scenarios, this paper is highly relevant to prompt engineering studies. It also touches upon optimizing prompt encoders using attention mechanisms, which is a sophisticated technique within this field. The only reason it doesn't receive a full 10 is that it is specific to sentiment analysis and may not cover the entire breadth of prompt engineering applications."
-enhancing few-shot ner with prompt ordering based data augmentation,gpt-4-1106-preview,7,"The relevance is fairly high because the paper discusses a Prompt Ordering based Data Augmentation (PODA) method, which is related to prompt engineering in that it involves manipulating data to improve the performance of language models in a low-resource setting. Prompt engineering typically involves crafting prompts that guide the model's predictions or generative capabilities, and while this method is specifically targeting a data augmentation approach for named entity recognition, it is relevant insofar as it involves ordered prompts and their effect on the training process. However, it does not directly address 'hard prefix prompts' or a broader range of prompt engineering outside the context of few-shot NER, hence the rating is not a full 10." -image-object-specific prompt learning for few-shot class-incremental learning,gpt-4-1106-preview,8,"The study presents a novel training framework in the context of few-shot class-incremental learning (FSCIL), incorporating the use of specialized prompts, which are biased towards specific attributes of class objects to guide the learning process. This biasing through prompts is relevant to prompt engineering as it involves the strategic use of prompts to direct the model's attention to specific features, which is an integral concept in prompt engineering. The use of key-prompt pairs is directly associated with designing effective prompts. While the study does not explicitly state 'hard prefix prompts' or a comprehensive review on it, it does demonstrate practical application and manipulation of prompts in a machine learning context, which is relevant to the broader field of prompt engineering." -overcoming catastrophic forgetting in zero-shot cross-lingual generation,gpt-4-1106-preview,9,"The abstract discusses the use of prompt tuning, a parameter-efficient adaptation technique, to overcome challenges in zero-shot cross-lingual generation, which is directly relevant to prompt engineering. The study focuses on how prompts can be engineered and factored to enable a generative multilingual model to perform tasks in languages it wasn't explicitly trained on, without catastrophic forgetting. Although it does not specifically mention 'hard prefix prompts,' the concept of prompt tuning is a crucial part of prompt engineering studies, so the relevance to the broader field of prompt engineering is high." -nearest neighbor zero-shot inference,gpt-4-1106-preview,7,"The abstract presents kNN-Prompt, a k-nearest neighbor Language Model with expanded verbalizers, which is relevant to the study of prompt engineering because it involves the automatic expansion of prompts for improved zero-shot learning. While the study emphasizes retrieval-augmented language models and zero-shot inference rather than directly focusing on 'hard prefix prompts,' the concept of expanding verbalizers to include synonyms directly pertains to the engineering of prompts to enhance model performance. Thus, the relevance to prompt engineering study is significant, though not entirely focused on 'hard prefix' prompts specifically." -kecp: knowledge enhanced contrastive prompting for few-shot extractive question answering,gpt-4-1106-preview,7,"The abstract describes an approach involving a novel method of prompt-tuning, which is highly relevant to prompt engineering studies. 
The focus on Knowledge Enhanced Contrastive Prompt-tuning (KECP) is especially pertinent to the field as it introduces a non-conventional method of leveraging prompts through external knowledge bases and contrastive learning objectives. Nevertheless, since the study doesn't specifically address 'hard prefix prompts' but rather a broader prompt-tuning strategy for EQA, the rating is not a full 10." -cross-lingual retrieval augmented prompt for low-resource languages,gpt-4-1106-preview,7,"The study described in the abstract is relevant to prompt engineering because it discusses the creation and use of a pipeline (PARC) that augments prompts to enhance the performance of Multilingual Pretrained Language Models (MPLMs) in zero-shot learning scenarios for low-resource languages. This directly relates to the field of prompt engineering, as it involves designing and manipulating prompts to improve language model performance. However, it may not be directly related to 'hard prefix prompts,' as it does not specify the nature of the prompts used (whether hard-coded, soft, or another type). The focus is more on cross-lingual retrieval and augmentation rather than the systematic review of the prompt types or their design characteristics, hence the rating is not a full 10." -indirect: language-guided zero-shot deep metric learning for images,gpt-4-1106-preview,7,"The abstract introduces Language-Guided Zero-Shot Deep Metric Learning (LanZ-DML) which emphasizes the use of natural language text prompts to control image representation without the need for training data. The model InDiReCT mentioned utilizes CLIP to transfer the variation in text prompt embeddings to the image embedding space. Although the study focuses on the metric learning aspect and the application in image retrieval systems, it is highly relevant to prompt engineering because it involves the use of text prompts to guide a zero-shot learning process. This showcases an intricate way that prompts can interact with deep learning models to influence their behavior. However, it does not directly address hard prefix prompts or a systematic review of such, which limits the rating to a 7." -list: lite self-training makes efficient few-shot learners,gpt-4-1106-preview,7,"The abstract discusses a method related to fine-tuning pre-trained language models with the use of prompts, which is relevant to prompt engineering. LiST improves prompt-tuning with techniques like self-training and lightweight fine-tuning, which fall within the realm of prompt optimization strategies. However, the abstract does not specifically mention 'hard prefix prompts' as in the initial prompt, so it may not address the complete systemic review aspect of hard prefix prompts in prompt engineering. Thus, the relevance to prompt engineering study is significant but not fully aligned with the specificity of 'hard prefix prompts'." -prompt-based multi-modal image segmentation,gpt-4-1106-preview,8,"The study presents a system that utilizes prompts in the form of text or images to generate image segmentation, indicating a strong relevance to 'prompt engineering.' Although the primary focus is image segmentation and not prompt engineering itself, the system's capability to interpret and process arbitrary prompts at test time is indicative of a significant application of prompt engineering principles. This demonstrates the integration of prompt-based methods into AI tasks, which is a key aspect of prompt engineering research. 
The rating is not a full 10 because the study's primary aim is not the investigation of the prompts themselves or their optimization, but rather their application to a particular AI task." -jurassic is (almost) all you need: few-shot meaning-to-text generation for open-domain dialogue,gpt-4-1106-preview,8,"The given title and TLDR indicate research related to few-shot meaning-to-text generation using semantic prompts. This is relevant to prompt engineering as it specifically pertains to the utilization of prompts to guide natural language generation (NLG) systems to produce text in a conversational context. Despite not explicitly mentioning 'hard prefix prompts', the study appears to contribute to the broader field of prompt-based learning and NLG. Hence, the rating is high but not maximum, due to the lack of direct reference to 'hard prefix prompts'." -prompt scoring system for dialogue summarization using gpt-3,gpt-4-1106-preview,8,"The abstract provided discusses the development of a scoring system specifically designed for improving few-shot training performances in the context of dialogue summarization with GPT-3, which involves an aspect of prompt engineering. Prompt engineering is integral to optimizing few-shot learning techniques by crafting effective prompts that guide language models like GPT-3 to perform specific tasks. The research focuses on the structure of dialogues and how tuned prompts can enhance the summarization task, which is highly relevant to the study of prompt engineering. Although the paper does not explicitly mention 'hard prefix prompts', it addresses the broader subject of prompt design and effectiveness, thus earning a high relevance rating. The 2-point deduction from a perfect score is due to the lack of specificity regarding 'hard prefix prompts', which may be a more narrow area within prompt engineering." -inverse is better! fast and accurate prompt for slot tagging,gpt-4-1106-preview,8,"The abstract describes an innovative method in prompt engineering, specifically for the task of slot tagging in few-shot learning scenarios. While it doesn't discuss 'hard prefix prompts' directly, it presents the concept of 'inverse prompting', which is a technique within the broader domain of prompt engineering. The improvement in efficiency and accuracy mentioned in the abstract is highly relevant to studies in prompt engineering, especially when considering the impact on state-of-the-art performance. The score is not a full 10 because it is not explicitly tied to 'hard prefix prompts' but does address closely related concepts within prompt engineering." -is a prompt and a few samples all you need? using gpt-4 for data augmentation in low-resource classification tasks,gpt-4-1106-preview,8,"The described study is highly relevant to prompt engineering as it directly involves using prompts to leverage GPT-4 and ChatGPT for the purpose of data augmentation in classification tasks. Prompt engineering is a core component of this because the quality of the generated synthetic data heavily depends on the design and effectiveness of the prompts used. Although the study does not exclusively focus on 'hard prefix prompts,' it covers an application of prompts that is central to understanding and improving the use of language models in low-resource situations. 
The only reason the rating is not a 10 is that it does not specifically mention 'hard prefix prompts' or explore a comprehensive systematic review of such prompts; rather, it looks at practical applications of prompt-related techniques for data augmentation." -residual prompt tuning: improving prompt tuning with residual reparameterization,gpt-4-1106-preview,9,"The abstract presents a study that directly addresses improvements in prompt tuning, which is an essential aspect of prompt engineering. The introduction of Residual Prompt Tuning as a method that advances the performance and stability of prompt tuning is highly relevant to engineers and researchers working with language models. The fact that it outperforms standard prompt tuning and shows robustness against various hyper-parameters and initializations makes it a significant contribution to the study of prompt engineering. The reason the rating is not a perfect 10 is that the abstract doesn't directly address 'hard prefix prompts', but it is relevant to the broader field of prompt engineering." -ds4dh at mediqa-chat 2023: leveraging svm and gpt-3 prompt engineering for medical dialogue classification and summarization,gpt-4-1106-preview,8,"The study described in the title uses prompt engineering as a part of its methodology to generate summaries for medical dialogues using GPT-3.5. Even though the study focuses on a specific application of prompt engineering within the medical field and combines it with Support Vector Machines (SVM) for classification tasks, the use of one-shot prompts with GPT-3.5 involves elements of prompt engineering that are relevant to this domain. The relevance is not rated a full 10 due to the specificity of the application (medical dialogues), as opposed to a broader coverage of hard prefix prompts in prompt engineering." -soft prompt tuning for augmenting dense retrieval with large language models,gpt-4-1106-preview,8,"The article presents a novel approach for enhancing dense retrieval through the use of soft prompt tuning with large language models, which is a technique within the scope of prompt engineering. This is closely relevant to the study of prompt engineering since it involves the optimization of prompts to improve the performance of language model tasks. Although the study focuses specifically on 'soft' prompt tuning rather than 'hard' prefix prompts, the methods and insights from soft prompt tuning contribute to the broader understanding of how prompts can influence language model behavior and performance. Therefore, the relevance is high but not absolute, hence the rating of 8." -self-prompting large vision models for few-shot medical image segmentation,gpt-4-1106-preview,8,"The abstract discusses the application of a segmentation model (SAM) in medical image analysis and introduces a novel technique for self-prompting in the context of few-shot learning. This is highly relevant to prompt engineering as it deals directly with how to leverage and optimize prompts for a model to improve its performance, especially in a domain like medical imaging where data can be scarce. The self-prompting approach relies on prompt tuning strategies which are an integral part of prompt engineering. The rating is not a full 10 because the abstract does not specifically mention 'hard prefix prompts' or the systematic review aspect of prompt engineering, which would cover a broader scope including various strategies beyond the one mentioned in the paper." 
-cipta: contrastive-based iterative prompt-tuning using text annotation from large language models,gpt-4-1106-preview,8,"The study described in the title and abstract is highly relevant to prompt engineering as it focuses on 'prompt tuning,' which is a method used to enable models to quickly adapt to new tasks or domains using a limited amount of data or examples. The innovation in prompt tuning that the study proposes, CIPTA, particularly targets low-resource scenarios, which is a critical area of research in prompt engineering for improving the efficiency and applicability of large language models. The study's use of contrastive embedding training as part of the prompt-tuning process also contributes to the field. Therefore, it scores high in relevance. It doesn’t get a full score because it is specifically angled towards public opinion analysis rather than covering prompt engineering in broader scenarios." -unleashing the potential of prompt engineering in large language models: a comprehensive review,gpt-4-1106-preview,9,"The abstract provided is highly relevant to the field of prompt engineering as it covers a breadth of topics within the discipline, including foundational principles, advanced methodologies, assistance tools, prospective research directions, and applications in various fields. The rating is not a perfect 10 as there is some information missing, such as empirical data or case studies that would make it an exhaustive review. Nevertheless, the paper appears to be a comprehensive resource that would substantially benefit those interested in the workings and advancements of prompt engineering for Large Language Models." -multi-mask label mapping for prompt-based learning,gpt-4-1106-preview,8,"The abstract discusses a novel prompt-based learning method called Multi-Mask Label Mapping (MMLM) that is designed to address the issues of misleading lexical cues in few-shot learning. Although the study does not specifically mention 'hard prefix prompts', its focus on improving prompt-based learning through strategic label mapping and instance augmentation is very relevant to the field of prompt engineering. Given that prompt engineering involves crafting prompts to effectively communicate with a model, the methodology proposed in this study could potentially be applied to the study of hard prefix prompts, thereby enhancing the state of prompt engineering. The deducted points are due to the lack of direct reference to 'hard prefix prompts', which was the specific focus of the prompt engineering study mentioned." -prompts can play lottery tickets well: achieving lifelong information extraction via lottery prompt tuning,gpt-4-1106-preview,8,"The relevance to prompt engineering is high, given that the abstract discusses a novel prompt tuning method called Lottery Prompt Tuning (LPT) which directly pertains to modifying prompts in the context of a universal information extraction system trained for lifelong learning. Prompt engineering broadly encompasses the tweaking and optimization of prompts to improve the performance of language models, and the LPT method falls within this field. Although it is not explicitly focused on 'hard prefix prompts', the study of prompt tuning methods is a significant aspect of prompt engineering. Therefore, the relevance is rated as an 8, with some points deducted because the description might not target 'hard prefix prompts' specifically but rather a related area within prompt engineering." 
-llm comparative assessment: zero-shot nlg evaluation through pairwise comparisons using large language models,gpt-4-1106-preview,7,"The paper focuses on zero-shot NLG evaluation using large language models (LLMs) and specifically addresses new methods for assessment, which closely relates to the field of prompt engineering as it pertains to the performance assessment of language model outputs. While it does not directly study 'hard prefix prompts' or design prompts for LLMs, the study of assessment methods is relevant for fine-tuning and validating prompts during the engineering process. The inclusion of discussion on prompt positional biases and debiasing methods is particularly relevant, as these considerations can impact the effectiveness of engineered prompts." -tuning multi-mode token-level prompt alignment across modalities,gpt-4-1106-preview,9,"The presented abstract discusses a novel approach to prompt tuning that emphasizes token-level prompt alignment across different modalities, which is a specific aspect of prompt engineering. Although it does not explicitly address 'hard prefix prompts,' it concentrates on the generalizable and nuanced aspects of prompt tuning in the context of vision-language models, which is highly relevant to the field of prompt engineering. The focus on multi-mode prompts and token-level alignment is crucial for fine-tuning prompt-based models, which is why it receives a high relevance rating." -metricprompt: prompting model as a relevance metric for few-shot text classification,gpt-4-1106-preview,8,"The paper described is highly relevant to the field of prompt engineering as it discusses MetricPrompt, a method that directly addresses the optimization of prompt design for text classification tasks. It specifically tackles the challenge of designing verbalizers and leverages the power of prompting models as relevance metrics, which falls within the domain of prompt engineering. The relevance rating is not a perfect 10 because, while the study is related to prompt engineering, the term 'hard prefix prompts' is not explicitly mentioned, and it is unclear how closely the proposed MetricPrompt methodology aligns with 'hard prefix prompts' specifically." -unsupervised dual modality prompt learning for facial expression recognition,gpt-4-1106-preview,9,"The abstract describes a study that is highly relevant to prompt engineering, as it proposes an 'Unsupervised Dual Modality Prompt Learning framework' which is directly related to adapting and tuning prompts for better performance in facial expression recognition tasks. This study focuses on optimizing the prompts used in vision-language models, which is a core area of interest in prompt engineering. The only reason it does not receive a perfect score is that it is specialized in facial expression recognition rather than covering prompt engineering in a broader sense across various applications." -label-aware automatic verbalizer for few-shot text classification,gpt-4-1106-preview,8,"The study focuses on the verbalizer component within prompt-based learning, a crucial element of prompt engineering, especially in the context of few-shot text classification. The relevance to prompt engineering is strong as it addresses the optimization of prompt output translation into class predictions, which is directly related to how prompts are engineered to interact with language models. Although the study does not explicitly mention 'hard prefix prompts,' it aligns with the broader field of prompt engineering. 
The rating is not a perfect 10 because it does not directly address a comprehensive systematic review of hard prefix prompts, which the initial query specifies." -speak foreign languages with your own voice: cross-lingual neural codec language modeling,gpt-4-1106-preview,7,"The abstract describes the use of speech in the source language as a 'prompt' to generate speech in another language, preserving the voice, emotion, and acoustic environment of the original speaker. Although the term 'prompt' in this context does not directly refer to 'hard prefix prompts' as used in prompt engineering for text-based language models, it is relevant as it shows an application of prompts in a different but related domain of language processing and AI, i.e., speech synthesis and cross-lingual translation. The technology leverages in-context learning similar to how prompts are used in text-based models to guide the generation of synthetic speech, suggesting a form of prompt engineering in a speech synthesis model. Therefore, the rating is moderately high for relevance to the broader field of prompt engineering but is not a direct match since it pertains to speech rather than text-based prompting." -pointclip v2: adapting clip for powerful 3d open-world learning,gpt-4-1106-preview,7,"The abstract discusses leveraging large-scale language models to automatically design a more descriptive 3D-semantic prompt for CLIP’s textual encoder, indicating a study or application of prompt engineering to improve performance in 3D classification tasks. While it does not explicitly focus on 'hard prefix prompts,' it does deal with the broader topic of prompt engineering in the context of a real-world application—enhancing the compatibility of language-image pre-training models with 3D point cloud data. Therefore, the study is relevant to the subject of prompt engineering but perhaps less so to the specific aspect of 'hard prefix prompts.'" -image segmentation using text and image prompts,gpt-4-1106-preview,8,"The study presents a system for generating image segmentations based on arbitrary prompts, which directly involves prompt engineering as it requires understanding and designing prompts that the model can interpret accurately. The use of text and image prompts to dictate model behavior demonstrates a practical application of prompt engineering. However, the specifics of 'hard prefix prompts' mentioned in the study inquiry are not directly addressed, so it may not fully cover the systematic review aspect of the inquiry but is still highly relevant to the field of prompt engineering." -sega: instructing diffusion using semantic dimensions,gpt-4-1106-preview,7,"The studied paper 'sega: instructing diffusion using semantic dimensions' discusses a method for providing semantic control over text-to-image diffusion models through something called SEGA. Although it doesn't directly address 'hard prefix prompts,' it is highly relevant to the field of prompt engineering because it focuses on improving the interaction between user inputs and the model's output. Such research contributes to the broader understanding of how to engineer prompts to achieve desired results, which is a crucial aspect of prompt engineering. The relevance to 'hard prefix prompts' itself is indirect but still significant due to the overlap in goals of increasing control over generative models' responses to textual prompts." 
-learnable ophthalmology sam,gpt-4-1106-preview,8,"The provided abstract and TLDR indicate a study that involves a form of prompt engineering, as it discusses a 'learnable prompt layer' in the context of a deep learning model for ophthalmology image analysis. This is pertinent to prompt engineering study, specifically within the domain of medical image analysis, as it involves the tailoring of prompts (inputs to the model which guide its responses) to improve performance on specialized tasks. The connection to 'hard prefix prompts' is not directly stated, but the concept of learnable prompts closely relates to the broader field of prompting techniques in machine learning, hence the relevance to prompt engineering studies." -prompting multilingual large language models to generate code-mixed texts: the case of south east asian languages,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it investigates how different prompt templates and language pairings affect the capability of multilingual large language models to generate code-mixed texts, which is a key aspect of designing effective prompts. While the study does not focus exclusively on 'hard prefix prompts', it does explore the broader topic of how prompts can influence the output of language models. This falls within the range of studies related to prompt engineering. The findings have implications for how one should engineer prompts for multilingual contexts, particularly in the domain of code-mixing." -the stable artist: steering semantics in diffusion latent space,gpt-4-1106-preview,7,"The abstract describes an approach that improves the precision of text-conditioned generative diffusion models, which is relevant to prompt engineering because it addresses the challenge of achieving fine-grained control over generated content through text input modifications. While the study's focus is on image generation and editing, the semantic guidance technique is applicable to the broader concept of steering output in response to precise prompt adjustments. The relevance rating is not higher because the study is not specifically about hard prefix prompts or their systematic review but instead about a related yet distinct area of prompt-based control in generative models." -internet-augmented language models through few-shot prompting for open-domain question answering,gpt-4-1106-preview,8,"The study focuses on the utilization of few-shot prompting to enhance language models' ability to answer open-domain questions by conditioning the responses on web-searched information. Although it does not specifically mention 'hard prefix prompts,' it is highly relevant to the field of prompt engineering since it explores methodologies for effective prompting to improve information retrieval and question-answering capabilities of language models. This closely aligns with the goal of prompt engineering, which is to design prompts that enable language models to perform specific tasks more accurately. Therefore, the relevance rating is high." -the unreliability of explanations in few-shot prompting for textual reasoning,gpt-4-1106-preview,8,"The described study directly investigates the role of explanations within the context of few-shot prompting, which is a pertinent area of research within prompt engineering. Although it does not explicitly mention 'hard prefix prompts', it explores the impact of the style and quality of prompts (including explanations) on the performance of large language models in textual reasoning tasks. 
This is relevant to prompt engineering as it informs on how different prompt constructs can affect model outputs, especially in tasks requiring understanding and explanations. The relevance is not rated as a perfect 10 since there's no detailed focus on 'hard prefix prompts' specifically, but the research closely aligns with investigating prompt effects in LLMs." -few-shot prompting towards controllable response generation,gpt-4-1106-preview,8,"The paper discusses an advanced application of prompt-based learning in the context of chatbot response generation, showing relevance to the field of prompt engineering. The use of prompting combined with reinforcement learning to direct model output without parameter access aligns with the concept of hard prefix prompts because it explores methods of prompt manipulation for controllable outcomes. The emphasis on few-shot learning for task generalization is also pertinent to prompt engineering as it demonstrates efficiency in prompt application. Though the study doesn't solely focus on hard prefix prompts, its methods and objectives are closely related to the core ideas of engineering prompts for language models." -zero- and few-shot prompting with llms: a comparative study with fine-tuned models for bangla sentiment analysis,gpt-4-1106-preview,7,"The study's focus on zero- and few-shot prompting with language models is closely related to prompt engineering, as it deals with the efficacy of prompts when minimal examples are provided to the model. While the study is not specifically about hard prefix prompts, it explores in-context learning with language models, which is an essential aspect of prompt engineering. The investigation into the effectiveness of different prompting strategies for a low-resource language like Bangla is relevant because it contributes to the understanding of how various models respond to prompts in different scenarios, which is a critical component of prompt engineering research. However, the title and abstract do not mention 'hard prefix prompts' specifically, which would have made it a perfect match for the topic of comprehensive systematic review on hard prefix prompts. Thus, the rating is above average for relevance but not a perfect score." -multilingual social media text generation and evaluation with few-shot prompting,gpt-4-1106-preview,8,"The abstract describes a research study on adapting large language models to generate multilingual social media text with specific objectives, mentioning the use of prompts to achieve these goals. Since prompt engineering is the process of designing and formulating prompts to effectively interact with language models, and this work includes developing generalizable prompt formation techniques, it is highly relevant. The relevance is not rated as a perfect 10 due to the lack of explicit discussion of 'hard prefix prompts' which would be required for a comprehensive systematic review specific to that sub-topic of prompt engineering." -sparsefit: few-shot prompting with sparse fine-tuning for jointly generating predictions and natural language explanations,gpt-4-1106-preview,8,"The study described focuses on a fine-tuning strategy for Pre-trained Language Models (PLMs) that utilizes 'discrete prompts' to generate predictions and Natural Language Explanations (NLEs), which is highly relevant to prompt engineering. 
While it does not directly study 'hard prefix prompts,' the use of prompts in the context of few-shot learning to enhance the model's performance and explanations is closely related to prompt engineering and how it can be optimized in practice. The relevance is not maximum because the abstract does not detail the nature of the prompts used (e.g., hard prefix prompts specifically), but the methodology is still pertinent to the field of prompt engineering." -prompting electra: few-shot learning with discriminative pre-trained models,gpt-4-1106-preview,7,"The provided abstract details an approach to adapting ELECTRA, a discriminative pre-trained model, to prompt-based few-shot learning. Although the focus is primarily on the model's learning capabilities and performance rather than 'hard prefix prompts' specifically, the relevance lay in the use of prompts to facilitate model understanding and few-shot learning, which is a component of prompt engineering. The study explores how the model interacts with prompts, an essential aspect of prompt engineering, hence the relatively high relevance rating. However, it does not directly address a 'comprehensive systematic review on hard prefix prompts', so it cannot receive a perfect score." -knowledge prompting for few-shot action recognition,gpt-4-1106-preview,7,"The study described in the abstract addresses the use of structured external knowledge (knowledge prompting) to enhance the performance of a pre-trained vision-language model for few-shot classification. Although it does not specifically mention 'hard prefix prompts,' it does involve the engineering of prompts (text proposals) to improve machine learning performance. This indicates a moderate level of relevance to the broader topic of prompt engineering study, especially considering the systematic approach taken to generate and utilize these prompts. However, without a specific focus on the concept of 'hard prefix prompts' as described in the original query, the relevance is not complete." -promptner: a prompting method for few-shot named entity recognition via k nearest neighbor search,gpt-4-1106-preview,7,"The paper discusses PromptNER, a method that incorporates prompting, which relates to prompt engineering by using prompts to construct label prototypes for few-shot Named Entity Recognition. While the primary focus is on NER and not on prompt engineering as a general concept, the use of prompts as a way to improve machine learning models' performance through fine-tuning with limited data is pertinent to the study of prompt engineering. However, as the paper does not seem to conduct a comprehensive systematic review specifically on hard prefix prompts and does not address prompt engineering in the broader sense, the relevance is not maximal." -prompting large language models with chain-of-thought for few-shot knowledge base question generation,gpt-4-1106-preview,9,"The abstract discusses an advanced application of prompt engineering where Chain-of-Thought (CoT) prompting is used to enhance few-shot question generation over Knowledge Bases (KBQG). It is highly relevant to prompt engineering because it directly involves the process of designing prompts to improve the performance of Large Language Models. The research proposes a novel methodology (KQG-CoT) which leverages the CoT prompting technique, and the paper claims significant improvement over state-of-the-art results. 
The only reason it doesn't score a perfect 10 is because it doesn't explicitly mention 'hard prefix prompts', which is the specific focus of prompt engineering study mentioned in the initial query." -investigating prompting techniques for zero- and few-shot visual question answering,gpt-4-1106-preview,8,"The described study is highly relevant to the field of prompt engineering, as it directly investigates how different prompting strategies can influence the performance of a visual question answering (VQA) system in zero- and few-shot scenarios. The systematic examination of various question templates and the use of few-shot exemplars are core aspects of prompt engineering. The exploration of chain-of-thought reasoning and the integration of additional visual cues also fall within the scope of prompting techniques. Although the study specifically targets the VQA domain and does not mention 'hard prefix prompts', the general principles and findings are pertinent to the prompt engineering literature. The rating is not a full 10 because the paper focuses more broadly on VQA performance via prompting rather than the specific 'hard prefix prompts' indicated by the original prompt." -prod: prompting-to-disentangle domain knowledge for cross-domain few-shot image classification,gpt-4-1106-preview,8,"The paper presents a method named prompting-to-disentangle (ProD) that utilizes prompts to improve the performance of image classification in cross-domain few-shot learning scenarios. This approach is directly related to prompt engineering as it involves designing prompts to manipulate the behavior of a model (in this case, a transformer) for better performance. The technique specifically leverages prompts to separate domain-general and domain-specific knowledge, which demonstrates an application of prompt engineering in the context of machine learning and image classification. However, it does not address 'hard prefix prompts' as mentioned in the original study prompt, which suggests a more specific focus within the broader area of prompt engineering. The rating is not a full 10 due to the absence of a direct alignment with 'hard prefix prompts,' but it remains high because the paper still significantly contributes to the overarching field of prompt engineering." -template-free prompting for few-shot named entity recognition via semantic-enhanced contrastive learning.,gpt-4-1106-preview,9,"The paper presents a novel technique for named entity recognition (NER) using prompt-based contrastive learning that does not require prompt templates or label word mappings, which is highly relevant to prompt engineering. It focuses on token-level classification tasks and introduces a new way to apply prompts in few-shot learning scenarios, which is a key area of interest in prompt engineering studies. The only reason it does not receive a full score is that it does not specifically address 'hard prefix prompts,' which was the indicated topic of interest, but it is still very pertinent to the broader field of prompt engineering." -lmcap: few-shot multilingual image captioning by retrieval augmented language model prompting,gpt-4-1106-preview,7,"The study involves prompting a language model with retrieved captions, which is a form of prompt engineering. However, the focus is on multilingual image captioning rather than hard prefix prompts specifically. 
While it does not address hard prefix prompts in its methodology, the concept of using prompts to generate language model outputs is relevant to the broader field of prompt engineering. Therefore, the relevance is moderate to high." -hiprompt: few-shot biomedical knowledge fusion via hierarchy-oriented prompting,gpt-4-1106-preview,9,"The study introduces HiPrompt, a framework that leverages hierarchy-oriented prompts to improve few-shot biomedical knowledge fusion tasks by utilizing large language models. This is highly relevant to prompt engineering because it directly involves designing and employing prompts that are specifically structured to leverage and extract hierarchical relationships within large language models. The fact that it deals with prompting techniques to enhance the model's reasoning capabilities makes it pertinent to the field. The only reason it does not receive a perfect score is that the information provided centers more on biomedical knowledge fusion rather than a generalized application in prompt engineering." -few-shot learning with prompting methods,gpt-4-1106-preview,9,"The abstract describes research focused on prompting methods in the context of few-shot and zero-shot learning within the field of natural language processing. It specifically addresses the use of hard prefixes in prompting by mentioning pattern-exploiting methodologies such as PET and iPET. These methodologies are a form of prompt engineering that modify the input to language models in a structured way to improve performance with limited data. Given that the paper reviews studies on prompt-based learning and relates to hard prefix prompts through the use of structured input, it is highly relevant to prompt engineering studies. The rating is not a full 10 because the abstract does not exclusively focus on hard prefix prompts but also discusses prompt-based learning more broadly." -adversarial knowledge stimulated contrastive prompting for few-shot language learners,gpt-4-1106-preview,9,"The abstract describes a method for improving the efficiency of pre-trained language models for few-shot learning tasks by introducing a novel prompting framework, which is highly relevant to prompt engineering studies. The AKSCP framework leverages Cloze-driven prompts for prompt-based learning and joint prompt tuning, which directly relates to the development and optimization of prompts for language models. Additionally, the use of adversarial contrastive learning to enhance generalization further aligns with advanced prompt engineering techniques. The only reason it does not receive a full 10 is that it does not specifically mention 'hard prefix prompts' which the original prompt inquires about, however, the general relevance to prompt engineering is very high." -multi-step prompting for few-shot emotion-grounded conversations,gpt-4-1106-preview,7,"The paper presented is relevant to prompt engineering as it discusses the design of a prompting approach, which is a core concept within prompt engineering. By identifying emotions and using them to inform subsequent prompts, the study contributes to the field by showing how prompts can be adapted based on contextual information (emotional content in this case). However, the paper focuses specifically on a two-step prompting method for conversational AI and emotion recognition rather than on 'hard prefix prompts' in a broad sense. 
Therefore, while the paper is relevant to prompt engineering, it does not directly address the topic of hard prefix prompts, hence the rating is not a full 10." -leveraging few-shot data augmentation and waterfall prompting for response generation,gpt-4-1106-preview,8,"The abstract mentions the development of methodologies and strategies for response generation in task-oriented conversational modeling, including the use of a 'waterfall prompting technique'. This indicates an exploration into how prompts are structured and how they can be optimized for better performance in conversation engines using AI like GPT-3 and ChatGPT. Although 'hard prefix prompts' are not explicitly mentioned, the study is still highly relevant to prompt engineering as it focuses on improving and understanding how prompts can be leveraged along with few-shot learning for effective response generation. The lower rating is due to the lack of specific mention of 'hard prefix prompts', suggesting that while the study is relevant, it may not directly tackle the named concept." -self-convinced prompting: few-shot question answering with repeated introspection,gpt-4-1106-preview,8,"The provided abstract outlines a study involving 'few-shot question answering with repeated introspection' which is closely related to the field of prompt engineering, particularly in refining prompts to improve the performance of large language models (LLMs). Although the study does not specifically mention 'hard prefix prompts', it does deal with the broader category of prompts and their optimization through an iterative process. This makes the work relevant to prompt engineering but not exclusively focused on the hard prefix aspect. Therefore, the relevance to 'prompt engineering' is high, but it might be less directly related to a 'systematic review on hard prefix prompts'." -continued pretraining for better zero- and few-shot promptability,gpt-4-1106-preview,9,"The provided abstract discusses continued pretraining with an emphasis on enhancing the effectiveness of natural language prompts in zero-shot and few-shot learning contexts, which is highly relevant to prompt engineering. The systematic examination of pretraining methods, identification of gaps, and concrete recommendations based on experimental results are directly related to the advancements in the field of prompt engineering. Although it does not directly mention 'hard prefix prompts', the focus on trainable prompts during multi-task learning and prompt tuning is integral to the broader field of prompt engineering. A point is deducted because the relevance to 'hard prefix prompts' specifically is not clear, but otherwise, it is highly pertinent to the study of how prompts can be engineered and optimized for better performance in machine learning models." -what makes pre-trained language models better zero/few-shot learners?,gpt-4-1106-preview,9,"The paper directly addresses prompt learning, which is a critical aspect of prompt engineering. It presents both a theoretical framework to understand the efficiency of prompts and a practical approach to select prompts without relying on development sets. The focus on zero/few-shot scenarios is particularly relevant to the current challenges faced in prompt engineering where labeled data is scarce. Although the paper does not address 'hard prefix prompts' specifically, it does contribute to the broader field of prompt engineering which encompasses the study of prompts and their optimization. 
Therefore, it receives a high relevance score." -plan-and-solve prompting: improving zero-shot chain-of-thought reasoning by large language models,gpt-4-1106-preview,9,"The abstract discusses a novel approach to prompt engineering for large language models, focusing on improving chain-of-thought reasoning in a zero-shot context. It addresses key issues such as calculation errors, missing-step errors, and semantic misunderstandings by introducing the Plan-and-Solve (PS) Prompting technique. As prompt engineering is central to optimizing the performance of LLMs in multi-step reasoning tasks, this study is highly relevant to the field. The high rating is due to the direct application of prompt engineering strategies to enhance the capabilities of these models without relying on multiple examples for training, which is an innovative contribution to the prompt engineering literature. However, it does not explicitly mention 'hard prefix prompts', which the original prompt might specifically refer to, hence not a perfect 10." -better zero-shot reasoning with self-adaptive prompting,gpt-4-1106-preview,8,"The provided abstract and TLDR relate closely to prompt engineering, as they describe the development and application of a novel prompt design method intended to enhance the zero-shot reasoning capabilities of large language models (LLMs) without relying on handcrafted responses or ground-truth labels. The method, Consistency-based Self-adaptive Prompting (COSP), addresses a core aspect of prompt engineering by strategically selecting and constructing prompts to improve LLM performance. While the abstract doesn't mention 'hard prefix prompts' explicitly and instead focuses on the broader field of prompt design and optimization, the relevance is high due to the overall focus on improving prompt-based LLM interactions." -gpt-3 for few-shot dialogue state tracking,gpt-4-1106-preview,9,"The abstract details a study focused on few-shot Dialogue State Tracking (DST) using GPT-3 and the influence of prompt crafting on performance. It explores methodologies around prompt engineering, such as different completion strategies, and the effects of fine-tuning, ensembling, and context example selection. This information is highly relevant to prompt engineering, as it contributes to the understanding of how prompts can be optimized for certain tasks. However, the study doesn't strictly focus on 'hard prefix prompts', which might be a specific subset of prompt engineering, hence the rating is not a perfect 10." -multi-modal prompting for low-shot temporal action localization,gpt-4-1106-preview,8,"The paper is highly relevant to the study of prompt engineering as it involves the design and utilization of prompts to guide a pre-trained text encoder (CLIP) to perform open-vocabulary classification in the context of temporal action localization. The experimentation with both detailed action descriptions and visually-conditioned instance-specific prompt vectors directly ties into the methodologies of prompt engineering, aiming to improve the model performance on low-shot learning tasks. However, the primary focus on temporal action localization slightly reduces its direct relevance to general prompt engineering studies that are not focused on the specific application of action localization." 
-program of thoughts prompting: disentangling computation from reasoning for numerical reasoning tasks,gpt-4-1106-preview,8,"The provided abstract details a study relevant to prompt engineering by introducing a 'Program of Thoughts' (PoT) method which separates computation from reasoning in numerical reasoning tasks. This separation directly impacts how prompts are designed for language models, as it leads to a fundamental change in the expected output (programs vs. solutions). The study's relevance is high because it exemplifies an advanced application of prompt engineering to improve performance on language models for specific tasks. The reason the rating is not a full 10 is because the study focuses specifically on numerical reasoning tasks and might not be directly applicable to other prompt engineering domains." -generative zero-shot prompt learning for cross-domain slot filling with inverse prompting,gpt-4-1106-preview,8,"The paper described is highly relevant to the field of prompt engineering as it discusses a novel methodology for zero-shot prompt learning in the context of cross-domain slot filling, which is a specific application within the broader domain of prompt engineering. It focuses on using prompts to transfer knowledge between domains without additional labeled data, which is a core aspect of prompt engineering. The proposed inverse prompting strategy is particularly pertinent for creating effective prompts that can distinguish between different types of data. Although the paper does not directly address 'hard prefix prompts', the concepts and strategies discussed are likely to contribute valuable insights to the prompt engineering literature and thus receive a high relevance rating." -language-aware soft prompting: text-to-text optimization for few- and zero-shot adaptation of v&l models,gpt-4-1106-preview,8,"The given title discusses 'Language-Aware Soft Prompting (LASP)' which is directly related to prompt engineering, especially in the context of optimizing text-to-text models for few- and zero-shot tasks. This indicates a high level of relevance as prompt engineering is about devising and employing prompts to guide or improve the performance of language and vision-and-language (V&L) models. The proposed method seems to enhance the interaction between hand-crafted textual prompts and model-generated outputs. Although the study doesn't explicitly mention 'hard prefix' prompts, the focus on soft prompting suggests it is in the broader area of prompt engineering, thus earning a high relevance rating." -"large language model is not a good few-shot information extractor, but a good reranker for hard samples!",gpt-4-1106-preview,8,"The abstract discusses the effectiveness of LLMs relative to SLMs in few-shot information extraction tasks and introduces a paradigm that involves prompting strategies. The relevance to prompt engineering is significant because it examines the role of prompts in improving performance of LLMs when combined with SLMs. Although the primary focus is on LLMs as rerankers for hard samples rather than on constructing or studying 'hard prefix prompts' specifically, the concept of using adaptive prompting to achieve better results is closely related to the field of prompt engineering. This suggests that the paper could offer valuable insights into prompt strategies that may be beneficial for designing or evaluating hard prefix prompts." 
-towards few-shot identification of morality frames using in-context learning,gpt-4-1106-preview,8,"The study discusses using pre-trained Large Language Models for few-shot in-context learning, which is directly related to prompt engineering as it involves designing prompts for these models to handle specific tasks, in this case, identifying morality frames. However, it doesn't focus specifically on 'hard prefix prompts,' which the original request mentions, but rather on prompting methodologies in a broader sense. Therefore, the rating isn't a perfect 10 but still high due to the relevance of few-shot learning and in-context learning methodologies, which are integral to prompt engineering."
-enhancing few-shot text-to-sql capabilities of large language models: a study on prompt design strategies,gpt-4-1106-preview,9,"The paper's focus on exploring various prompt design strategies and its systematic investigation into demonstration selection methods and optimal instruction formats for prompting LLMs in the Text-to-SQL task is highly relevant to the field of prompt engineering. The study specifically addresses how to effectively use prompts to improve the performance of LLMs on a specialized task, which is a core aspect of prompt engineering. The relevance rating is not a full 10 because the paper is specialized in the Text-to-SQL context and prompt engineering can be applied to a broader range of tasks beyond this specific application. Nonetheless, the findings and methodology could be valuable for prompt engineering studies in general."
-few-shot and prompt training for text classification in german doctor's letters,gpt-4-1106-preview,8,"The given abstract describes the use of prompt-based methods, specifically pattern-exploiting training, for text classification in a few-shot learning context, which is highly relevant to the field of prompt engineering. Although the focus is on a specific application within the medical domain for German doctor's letters, the core concept of using prompts to effectively guide a language model and improve performance with limited data is central to the study of prompt engineering. The improvement in accuracy and efficiency mentioned aligns with the goals of prompt engineering to enhance model performance. The rating is not a full 10 as the study seems to be applied and specific rather than a comprehensive and systematic review on hard prefix prompts in general."
-exploring zero and few-shot techniques for intent classification,gpt-4-1106-preview,8,"This study is highly relevant to prompt engineering as it explores zero and few-shot learning techniques, which are integral to the development of efficient prompting methods. The use of zero-shot intent classification with descriptions and parameter-efficient fine-tuning indicates a direct application of prompt engineering principles. The fact that they are testing these methods on large language models, which are often used in conjunction with prompts, further adds to the relevance. While the study does not focus exclusively on 'hard prefix prompts,' its implications for prompt engineering strategies are significant, particularly for intent classification in low-resource settings."
-knowledge-guided prompt learning for few-shot text classification,gpt-4-1106-preview,9,"The abstract discusses a study that is highly relevant to prompt engineering, specifically within the context of leveraging implicit knowledge in pre-trained language models for few-shot text classification. The introduction of a knowledge-guided prompt learning method directly relates to prompt engineering, as it addresses how prompts can be optimized to improve model performance. The slight deduction from a perfect score is due to the lack of explicit mention of 'hard prefix prompts', which may or may not be a part of their 'knowledge prompting template'. Despite this, the study's focus on improving and understanding prompt-based learning is closely aligned with the field of prompt engineering."
-a smashed glass cannot be full: generation of commonsense explanations through prompt-based few-shot learning,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering due to its focus on generating commonsense explanations through the use of prompts on pre-trained language models. Although it does not specifically mention 'hard prefix prompts', the methodology involving prompting and few-shot learning is a core technique within the field of prompt engineering. The ability to generate explanations from semantically related sentences is an important aspect of prompt engineering, which contributes to the relevance of this study to the field. However, full relevance to 'hard prefix prompts' specifically would require a more direct investigation into that subset of prompt engineering techniques."
-successive prompting for decomposing complex questions,gpt-4-1106-preview,8,"The abstract discusses 'Successive Prompting', a methodology directly related to prompt engineering, involving the iterative process of breaking down complex questions for large language models. This is highly relevant to prompt engineering studies as it provides insights into the structuring of prompts for complex problem-solving. The approach could lead to more effective design of prompts, which is a core element of prompt engineering, thereby improving the performance of LMs in complex question-answering tasks. The rating is not a full 10 because it is more focused on the iterative prompting process rather than a broad application of prompt engineering techniques across different domains."
-"structured prompting: scaling in-context learning to 1,000 examples",gpt-4-1106-preview,9,"The abstract presents a study directly related to prompt engineering, with a focus on structured prompting to overcome length constraints in in-context learning for large language models. This is highly relevant as it addresses a limitation often encountered in the field of prompt engineering, where the length of input can restrict the number of examples a language model can learn from. The improvement of end-task performance and reduction of variance in results mentioned in the abstract suggest significant empirical findings for prompt engineering applications. Although the study does not specifically mention 'hard prefix prompts,' its relevance lies in advancing the methodologies used in prompt engineering, which could be applicable or foundational to hard prefix prompts as well."
-zero-shot prompting for implicit intent prediction and recommendation with commonsense reasoning,gpt-4-1106-preview,7,"The paper abstract discusses a framework for multi-domain dialogue systems that can understand implicit user intents and appropriately trigger task-oriented bots using zero-shot prompting. While this is not specifically about 'hard prefix prompts' as might be investigated in a prompt engineering study, the relevance is reasonably high because zero-shot prompting is a closely related concept where the effectiveness of the prompt in eliciting the correct response from a language model without prior examples is crucial. The system's dependence on 'commonsense knowledge' and inference of 'implicit intents' also implies that there is prompt engineering occurring to facilitate these operations. However, the abstract does not directly mention the study or optimization of prompts, which would be the primary focus of a prompt engineering study, hence the relevance is not rated higher."
-naturalspeech 2: latent diffusion models are natural and zero-shot speech and singing synthesizers,gpt-4-1106-preview,7,"The abstract describes a text-to-speech system, NaturalSpeech 2, which includes a speech prompting mechanism as a means to facilitate in-context learning. Although the system is not primarily focused on 'hard prefix prompts' for text input, the speech prompting mechanism can be seen as related to prompt engineering, particularly for speech synthesis. The relevance is significant because the paper addresses how prompting can be utilized in TTS systems to improve performance. However, it is not an exact match because the study does not focus solely on the prompt engineering aspect but rather on the overall TTS system that includes prompting as one of its components."
-udapdr: unsupervised domain adaptation via llm prompting and distillation of rerankers,gpt-4-1106-preview,7,"The abstract discusses the use of large language models (LLMs) to generate synthetic queries, which relates to prompt engineering as it may involve crafting prompts to elicit these queries from the LLMs. The focus on domain adaptation and efficient information retrieval can be seen as an application of prompt engineering, particularly in the context of generating useful data for model fine-tuning. However, the abstract doesn't specifically mention 'hard prefix prompts' or detail the prompt engineering process, hence the rating is not a full 10."
-probing power by prompting: harnessing pre-trained language models for power connotation framing,gpt-4-1106-preview,8,"The abstract describes a study on probing pre-trained language models (PLMs) by using prompts to understand and predict power connotations in language, which is relevant to prompt engineering. The research focuses on how prompts can elicit different connotations about power from language models and the impact of fine-tuning on the models' accuracy in this task. Although the study primarily explores connotation framing rather than hard prefixes specifically, the methodology closely relates to prompt engineering as it involves designing prompts to harness the capabilities of language models. This indicates a high level of relevance, but not the maximum score as it does not directly focus on 'hard prefix prompts'."
-what do language models know about word senses? zero-shot wsd with language models and domain inventories,gpt-4-1106-preview,7,"The paper discusses an innovative use of language models for Word Sense Disambiguation (WSD) by casting the problem as one of textual entailment, which inherently involves crafting prompts that effectively convey the different domain-relevant hypotheses that are matched against the given word senses. This is related to prompt engineering as it shows a specific application where the design of the prompts (i.e., the relation between word senses and domains phrased as hypotheses) is crucial for the successful application of language models to this task. Although not directly addressing 'hard prefixes', which are a specific type of prompt, the study does engage with the broader notion of how to construct prompts to extract desired outputs from language models. Therefore, the relevance is quite high, albeit not perfectly aligned with the specific topic of hard prefix prompts."
-compresso: structured pruning with collaborative prompting learns compact large language models,gpt-4-1106-preview,7,"The abstract discusses 'Compresso,' a new paradigm for structurally pruning Large Language Models, which includes a 'collaborative prompt' to foster collaboration between the LLM and the pruning algorithm. While the main focus is on model compression, the use of collaborative prompts for enhancing the pruning process does touch upon the broader field of prompt engineering. Prompt engineering generally refers to the design and optimization of prompts to elicit desired responses from language models, and the collaborative prompt in this context serves to improve the interaction between model components during compression. However, it is not directly focused on prompt engineering in the conventional sense, which typically deals with how different prompts affect the output of LLMs in natural language tasks rather than model pruning. Therefore, the relevance is moderate but not entirely central to traditional prompt engineering studies."
-you can generate it again: data-to-text generation with verification and correction prompting,gpt-4-1106-preview,8,"The paper discusses an advanced methodology in the field of text generation which involves a multi-step process including generation, verification, and correction stages. This is directly relevant to the practice of prompt engineering, as the proposed VCP method deals with iteratively refining the prompts based on feedback, which is a key aspect of designing effective prompts that can lead to high-quality outputs. The relevance is not a perfect score because the study does not focus exclusively on hard prefix prompts or prompt engineering in general, but rather on a multi-step generation process with verification and correction, which is just one aspect of prompt engineering."
-transprompt v2: a transferable prompting framework for cross-task text classification,gpt-4-1106-preview,8,"The abstract discusses the development of TransPrompt v2, which is a prompting framework specifically designed for improving performance in few-shot text classification tasks across various NLP applications. By focusing on prompt-based fine-tuning and transferring prompting knowledge across tasks, it is highly relevant to studies on prompt engineering, especially in the context of how prompts can be optimized and utilized to enhance the capabilities of pre-trained language models with limited data. Though the abstract does not mention 'hard prefix prompts' specifically, the overall framework is pertinent to the field of prompt engineering. The significant increase in performance compared to other baselines, as evidenced in the text, further solidifies its relevance to the study of efficient prompting methods."
-dynamic strategy chain: dynamic zero-shot cot for long mental health support generation,gpt-4-1106-preview,8,"The abstract presents a novel methodology involving prompting Large Language Models with chain-of-thought techniques, specifically tailored for generating long counseling texts for mental health support. The development of the zero-shot Dynamic Strategy Chain (DSC) prompting method is a direct application of prompt engineering, as it focuses on improving the performance of the LLM by designing specialized prompts based on dynamic mental health counseling strategies. This is highly relevant to the study of prompt engineering because it demonstrates an advanced use-case of prompt design to produce more effective and personalized responses from language models. The use of GPT2 and the claim of state-of-the-art performance further indicates an engagement with prompt engineering techniques. However, it does not fully match the requirement for a 'systematic review on hard prefix prompts' as it seems to introduce a new prompting strategy rather than review existing strategies."
-adapt and decompose: efficient generalization of text-to-sql via domain adapted least-to-most prompting,gpt-4-1106-preview,8,"The paper describes a method for improving generalization in Text-to-SQL tasks by preparing and adapting prompts for specific domains and compositions. This research directly involves creating efficient prompts for large language models, which is an important aspect of prompt engineering. The relevance is high because it devises strategies for prompt construction and adaptation, which is a part of prompt engineering studies. It gets an 8 instead of a perfect 10 because it is focused on a specific application (Text-to-SQL) rather than prompt engineering in general."
-leveraging large language models for multiple choice question answering,gpt-4-1106-preview,7,"The abstract focuses on improving the effectiveness of large language models (LLMs) such as GPT-3 in multiple choice question answering (MCQA) tasks. It highlights an approach where the LLM is presented with both the question and the answer options and outputs a symbol representing its chosen answer. This method is related to prompt engineering because it involves structuring the input to the LLM in a way that helps it utilize its capabilities more efficiently (known as natural prompting). The concept of multiple choice symbol binding (MCSB) reflects a specialized form of prompt engineering that is highly relevant to developing efficient prompting strategies for MCQA. Although the text does not explicitly use the term 'prompt engineering' or focus broadly on various types of prompts (e.g., hard prefix prompts), it is relevant as it tackles a specific challenge within the field of prompting LLMs to optimize performance on MCQA tasks."
-data augmentation for intent classification with off-the-shelf large language models,gpt-4-1106-preview,8,"The study described in the title and abstract is highly relevant to prompt engineering as it deals with the generation of training data for intent classification using prompts with large language models like GPT-3. Although it does not address 'hard prefix prompts' specifically, the research is indeed focused on utilizing prompting techniques to improve the data generation process for machine learning tasks, which is a core concept in prompt engineering. The relevance is not maximum because the study concentrates more on the application of prompt-generated data for classification and its quality rather than on the systematic study of the prompts themselves."
-unraveling chatgpt: a critical analysis of ai-generated goal-oriented dialogues and annotations,gpt-4-1106-preview,7,"The paper titled 'unraveling chatgpt: a critical analysis of ai-generated goal-oriented dialogues and annotations' addresses the use of large pre-trained language models like ChatGPT for generating high-quality text, which is tangentially related to the study of prompt engineering. Although the specific focus on 'hard prefix prompts' is not mentioned, the exploration of 'prompting techniques' for data generation and annotation in AI models directly influences studies related to crafting prompts to achieve desired outputs. Thus, the relevance is quite high as it may provide insights into prompt efficiency and effectiveness, crucial for prompt engineering. However, the rating is not a full 10 because it does not directly discuss hard prefix prompts, which is the central theme of the prompt engineering study."
-improving patient pre-screening for clinical trials: assisting physicians with large language models,gpt-4-1106-preview,8,"The paper discusses the use of InstructGPT and prompt-engineering techniques, such as chaining one-shot, selection-inference and chain-of-thought prompts, to improve the process of determining eligibility for clinical trials. Although the study is not directly focused on hard prefix prompts, it is within the domain of prompt engineering and examines how tailored prompts can enhance a language model's performance in a specific, practical application. Thus, the relevance rating is high due to the examination of prompts' design and efficacy in a real-world task, which is a central aspect of prompt engineering studies."
-sinc: spatial composition of 3d human motions for simultaneous action generation,gpt-4-1106-preview,7,"The abstract discusses the use of large language models, particularly GPT-3, to understand the relationship between actions and body parts through prompt engineering ('what are the body parts involved in the action?'). This implies a relevance to prompt engineering study as it involves designing prompts to extract specific knowledge from the language model that can be used for another application, which in this case is 3D human motion synthesis. However, the main focus of the study is on the spatial composition of 3D human motions rather than prompt engineering itself, and thus the rating is not a perfect 10."
-the potential and pitfalls of using a large language model such as chatgpt or gpt-4 as a clinical assistant,gpt-4-1106-preview,8,"The provided abstract describes studies assessing the performance of GPT-4 and ChatGPT in the medical field, specifically with tasks such as identifying patients with specific diagnoses and providing diagnostic assistance. The relevance to prompt engineering is high because the study involves the use of 'chain of thought and few-shot prompting', indicating that prompt engineering techniques were indeed utilized and studied in the context of their effectiveness in a real-world application. The rating is not a full 10 because the study does not solely focus on prompt engineering but also on the broader application and implications of using language models in clinical settings."
-little giants: exploring the potential of small llms as evaluation metrics in summarization in the eval4nlp 2023 shared task,gpt-4-1106-preview,9,"The paper's focus on assessing the effectiveness of prompt-based techniques directly addresses prompt engineering, which is the practice of formulating prompts to elicit specific responses from language models. The use of various prompting techniques and the integration with zero-shot and one-shot learning methods are key components of prompt engineering studies. Although the paper's primary domain is quality estimation for summaries and machine translations, the core of the research involving systematic experiments with prompts is highly relevant to prompt engineering. The only reason the rating is not a perfect 10 is that it might be more narrowly focused on evaluation metrics rather than the broader context of prompt engineering."
-hitachi at semeval-2023 task 4: exploring various task formulations reveals the importance of description texts on human values,gpt-4-1106-preview,7,"While the paper primarily focuses on the task of human value detection behind arguments, it is relevant to prompt engineering because it also explores various task formulations, including question answering with chain-of-thought prompting. The exploration of different task approaches, the effectiveness of including description texts, and the evaluation of model performance directly relate to how prompts are engineered and optimized for specific NLP tasks. Additionally, the insights on zero-shot learning and the importance of task formulation could inform prompt design strategies. However, since the primary focus isn't solely on prompt engineering but a broader scope of task formulation, the relevance is not at its maximum."
-"investigating the perception of the future in gpt-3, -3.5 and gpt-4",gpt-4-1106-preview,7,"The given study indirectly relates to prompt engineering by exploring how models like GPT-3, GPT-3.5, and GPT-4 process and generate concepts of the future through different prompting techniques such as fine-tuning, prompt-tuning, and few-shot prompting. These methods fall under the broader category of prompt engineering. Although the study's primary focus is on the models' perception of time, rather than exclusively on prompt engineering efficiency or methodology, understanding the nuances of how different models perform with various prompt designs is relevant to prompt engineering practices. The detailed investigation into the efficacy of these prompting methods can provide insights into how to craft better prompts to achieve specific outcomes, which is a critical aspect of prompt engineering."
-tree of clarifications: answering ambiguous questions with retrieval-augmented large language models,gpt-4-1106-preview,8,"The study introduces a novel framework, Tree of Clarifications (ToC), which is directly related to prompt engineering as it involves few-shot prompting to disambiguate open-domain questions. The method of recursively constructing a tree of disambiguations and leveraging external knowledge for generating long-form answers shows an application of designing and engineering prompts to improve question-answering systems. While it doesn't specifically mention 'hard prefix prompts', the concept is within the realm of prompt engineering, hence the high relevance rating. However, it doesn't fully match the exact concept of 'hard prefix prompts', as it does not address a systematic review of them directly."
-evaluation of prompts to simplify cardiovascular disease information using a large language model,gpt-4-1106-preview,9,"The described study directly relates to prompt engineering by proposing and evaluating a 'rubric prompting' strategy to optimize the simplification of complex medical information using large language models. The focus on evaluating different prompting techniques, particularly the comparison with zero-shot or one-shot prompting methods, indicates a high relevance to the field of prompt engineering. The systematic approach to developing prompts that yield complete, readable, and syntactically simple outputs, especially in a critical domain like healthcare, illustrates the application of prompt engineering principles. Although not specifically using 'hard prefix prompts,' the study is highly pertinent as it discusses the design and impact of prompt structures on the quality of AI-generated text, reflecting on a major aspect of prompt engineering."
-category-specific prompts for animal action recognition with pretrained vision-language models,gpt-4-1106-preview,7,"The study described in the abstract appears to be relevant to prompt engineering because it involves the development of a 'category-specific prompting module' which generates adaptive prompts for text and video inputs based on detected animal categories. This is a form of prompt engineering where prompts are crafted to improve the performance of a vision-language model on the task of animal action recognition. Although the focus is not on 'hard prefix prompts' specifically, the creation and utilization of tailored prompts is a pertinent aspect of prompt engineering. The relevance is not rated higher because the abstract does not provide details on how the prompts are engineered or whether hard prefix prompts are a part of the study, which would be critical for a 'comprehensive systematic review on hard prefix prompts.'"
-deeplyrics: gpt2 for lyrics generation with finetuning and prompting techniques,gpt-4-1106-preview,7,"The study outlined in the abstract describes the use of 'tuning-free prompting' as a method to assist lyric generation with AI, indicating that it does involve prompt engineering. However, the specifics of 'hard prefix prompts', which are the main focus of the implied systematic review, are not explicitly mentioned. It implies work on prompting techniques without giving details on whether these are 'hard prefix prompts' or another form of prompting. Therefore, the relevance is significant but not fully aligned due to the lack of explicit mention of 'hard prefix prompts'."
-"a survey of graph prompting methods: techniques, applications, and challenges",gpt-4-1106-preview,7,"The survey is highly relevant to prompt engineering as it discusses the 'pre-train, prompt, predict training' paradigm, which is at the core of how prompts are used within modern machine learning frameworks to make models generalize better with less labeled data. The focus on graph prompting methods indicates a novel approach to designing prompts using structured graph knowledge, which is a specific aspect within the broader field of prompt engineering. The relevance is not a full 10 because the survey is specialized in graph-based prompting rather than covering all aspects of prompt engineering, including 'hard prefix' prompts or other prompting techniques not related to graphs."
-help me think: a simple prompting strategy for non-experts to create customized content with models,gpt-4-1106-preview,9,"The abstract describes a novel approach to prompting language models. It is highly relevant to the study of prompt engineering as it directly addresses the problem of how non-expert users can effectively interact with such models. The HELP ME THINK strategy is a form of prompt engineering designed to aid users in generating customized content, an area of growing interest in the field. It also touches on the challenge of control within language model outputs, a central issue in prompt engineering. The slightly less than perfect score is due to the paper potentially not addressing a 'systematic review on hard prefix prompts' specifically, which would be necessary for a 10 rating."
-neuro-symbolic causal language planning with commonsense prompting,gpt-4-1106-preview,9,"The paper presents a method called Neuro-Symbolic Causal Language Planner (CLAP) that directly addresses the challenge of eliciting procedural knowledge from large language models (LLMs) through advanced prompting techniques that involve commonsense knowledge. Given that prompt engineering involves the strategic construction of prompts to extract or generate specific responses from LLMs, this paper's focus on using prompts as causal interventions to improve language planning capabilities in AI systems is highly relevant to the field of prompt engineering. The fact that it also employs a Structural Causal Model (SCM) to construct structured prompts makes it even more pertinent, as it represents a sophisticated approach to prompt design. However, it does not focus exclusively on 'hard prefix prompts', thus the rating is not a full 10."
-generative speech recognition error correction with large language models and task-activating prompting,gpt-4-1106-preview,9,"The study addresses the use of large language models (LLMs) for speech recognition error correction and investigates various prompting schemes, which directly relates to prompt engineering. The focus on in-context learning, task activation prompting, and the combination of causal instructions with demonstrations are key elements of prompt engineering, showing how different prompts can improve the performance of LLMs in specific tasks without fine-tuning. Although the study does not explicitly mention 'hard prefix prompts', it explores related methods of instruction prompting, making it highly relevant to prompt engineering studies."
-llm-rec: personalized recommendation via prompting large language models,gpt-4-1106-preview,8,"The given abstract directly relates to prompt engineering, as it investigates various prompting strategies to improve the performance of large language models, particularly for personalized recommendations. The relevance to prompt engineering is high because the study specifically examines how different types of prompts can enhance LLM's capabilities. This is pertinent to prompt engineering as it contributes to understanding how LLMs can be tuned for better performance on specific tasks by using tailored prompts. 'Hard prefix prompts' are not explicitly mentioned; however, the exploration of prompting strategies such as 'recommendation-driven' and 'engagement-guided' prompting falls within the broader scope of prompt engineering studies."
-a general language assistant as a laboratory for alignment,gpt-4-1106-preview,8,"The abstract describes a study that investigates various techniques and evaluations, including prompting, to align large language models with human values. While it does not specifically mention 'hard prefix prompts,' prompting in general is a significant aspect of prompt engineering. The investigation into baseline techniques for alignment has relevance to the field of prompt engineering, as it can inform the development of more sophisticated prompts that are better aligned with human intentions. The study also examines the scalability of different training objectives relevant to alignment, which is pertinent to advancing the effectiveness of prompt engineering in large language models. However, without a focus on 'hard prefix prompts' specifically, the relevance is not absolute, hence the rating is not a perfect score."
-enabling conversational interaction with mobile ui using large language models,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering as it explores the design of prompting techniques to adapt large language models (LLMs) for conversational interactions with mobile UIs. This indicates a direct engagement with the process of developing and refining prompts to elicit desired responses from LLMs, which is the essence of prompt engineering. However, it is not exclusively focused on 'hard prefix prompts' as might be suggested by a comprehensive systematic review on such prompts. Its focus on mobile UIs also suggests a specific application area rather than a broad study of prompting techniques. Nevertheless, the work contributes significantly to the field of prompt engineering by demonstrating the practical application of LLMs in a relevant domain without the need for task-dedicated resources."
-recent advances in natural language processing via large pre-trained language models: a survey,gpt-4-1106-preview,7,"The title and abstract indicate that the survey covers pre-trained language models and their applications in various NLP tasks, including 'prompting.' Since prompt engineering is a subset of techniques applied to language models to improve performance on various tasks, this survey's content is relevant to the study of prompt engineering, particularly concerning the 'prompting' methods mentioned in the abstract. However, it does not appear to focus exclusively on 'hard prefix prompts' or prompt engineering, hence the rating is not a full 10."
-are large language models ready for healthcare? a comparative study on clinical language understanding,gpt-4-1106-preview,8,"The provided abstract discusses the evaluation of large language models (LLMs) for clinical language understanding tasks in healthcare, which is indirectly related to prompt engineering, as it involves creating effective prompts for complex tasks. More specifically, it introduces a new prompting strategy known as self-questioning prompting (SQP), which is a direct application of prompt engineering aimed at improving the performance of LLMs on healthcare-related tasks. Although the main focus is not on 'hard prefix prompts', SQP likely employs principles of prompt engineering to elicit better responses from the models. This justifies the high relevance rating, although it is not a perfect match since it doesn't focus solely on prompt engineering but includes broader topics of LLM application in healthcare."
-voyager: an open-ended embodied agent with large language models,gpt-4-1106-preview,7,"The abstract describes an AI agent (Voyager) that uses a new iterative prompting mechanism, which is relevant to prompt engineering studies. This mechanism involves environment feedback and self-verification processes, which are significant topics within prompt engineering research. However, the focus is on an embodied agent in a gaming environment, rather than on hard prefix prompts. While there is significant overlap with interests in prompt engineering, the specific focus on 'hard prefix prompts' in a comprehensive systematic review is not directly addressed, thus the relevance is rated as a 7 instead of a higher score."
-the flan collection: designing data and methods for effective instruction tuning,gpt-4-1106-preview,9,"The given abstract is highly relevant to prompt engineering study as it specifically discusses design decisions, task balancing, enrichment techniques, and mixed prompt settings, which are central concepts in the development and improvement of instruction tuning for language models. Despite not using the term 'hard prefix prompts', it directly addresses the broader domain of prompt optimization and the impact on model performance, therefore meriting a high relevance rating."
-learning to compose soft prompts for compositional zero-shot learning,gpt-4-1106-preview,8,"The abstract discusses the development of Compositional Soft Prompting (CSP), which is directly relevant to prompt engineering, as CSP is a form of prompt-related technique designed to improve the interaction between users (or systems) and AI models, specifically pretrained vision-language models. While the reference to 'soft prompts' and not 'hard prefix prompts' might suggest a slight deviation, the overall study is still highly pertinent to the field of prompt engineering, especially given its focus on parameter efficiency, zero-shot learning, and the manipulation of prompt structures (attributes and objects) to optimize model performance. Hence, the rating of 8 acknowledges its strong relevance with a minor deduction for the difference in prompt type (soft versus hard)."
-generating training data with language models: towards zero-shot language understanding,gpt-4-1106-preview,8,"The study's focus on using prompts to generate class-conditioned texts with a unidirectional PLM directly pertains to the field of prompt engineering. The prompts guide the text generation process, which is a practical application of hard prefix prompts within the context of zero-shot learning. Although the study isn't exclusively a systematic review on hard prefix prompts, it demonstrates a relevant application of prompts in the engineering process to improve NLU tasks, making it highly relevant to the subject."
-factual probing is [mask]: learning vs. learning to recall,gpt-4-1106-preview,8,"The abstract discusses the use of cloze-style prompts to retrieve factual information from a pre-trained language model, which is highly relevant to the field of prompt engineering. The introduction of OptiPrompt, which optimizes prompts in continuous embedding space, is a direct contribution to the development of prompt engineering techniques. The paper's investigation into the distinction between 'learning' and 'learning to recall' is also pertinent to understanding how models respond to prompts. However, the paper does not specifically address 'hard prefix prompts,' hence the rating is not a full 10."
-"using chatgpt standard prompt engineering techniques in lesson preparation: role, instructions and seed-word prompts",gpt-4-1106-preview,8,"The abstract provided discusses the use of standard prompt engineering techniques (which could potentially include hard prefix prompts as a subset) in the context of lesson preparation for an AI tool, specifically ChatGPT. It emphasizes the effectiveness of structuring prompts with additional defined roles and seed words. Although it does not explicitly mention 'hard prefix prompts,' it is closely related to the broader topic of prompt engineering. The study's findings could contribute valuable insights into the usage of specific prompting methods, which may include but are not explicitly limited to hard prefix prompts. Therefore, it is relevant to the study of prompt engineering, but it would have a higher relevance rating if it directly addressed hard prefix prompts." -generative ai tools in art education: exploring prompt engineering and iterative processes for enhanced creativity,gpt-4-1106-preview,8,"The study directly addresses prompt engineering within the context of generative AI tools in art education, which involves teaching students how to craft and refine prompts for creative purposes. Although the focus is tailored to art and design, the principles of prompt engineering discussed are relevant to the broader field of study. The emphasis on iterative processes and the detail-oriented approach required for effective prompt engineering are particularly pertinent. The study's lower relevance in terms of being a 'comprehensive systematic review on hard prefix prompts' specifically, is acknowledged, hence the rating is not a full 10." -how does prompt engineering affect chatgpt performance on unsupervised entity resolution?,gpt-4-1106-preview,9,"The study directly investigates the impact of prompt engineering on the performance of ChatGPT in the context of unsupervised entity resolution, which is a relevant topic in natural language processing and artificial intelligence. The systematic experimental approach to understanding how different prompts can influence the results of entity resolution tasks using a language model like ChatGPT is highly pertinent to studies in prompt engineering. The deduction of one point is due to the preliminary nature of the results mentioned in the abstract, which suggests that there could be further work required to fully understand the relationship and generalize the findings." -multimodal propaganda detection via anti-persuasion prompt enhanced contrastive learning,gpt-4-1106-preview,8,"The relevance to prompt engineering is substantial, given that the study introduces a novel model (APCL) that utilizes prompt engineering as a core component for detecting propaganda in memes. The model specifically incorporates category words from propaganda techniques in its prompt engineering strategy, using these prompts to enhance contrastive learning in a multi-label classification task. Though the focus is on propaganda detection rather than prompt engineering itself, the use of 'persuasion' and 'anti-persuasion' prompts directly relates to the study of how prompts can be engineered to improve machine learning tasks. Therefore, the rating is high but not maximum because prompt engineering is a means to an end in this study, rather than the primary focus." 
-user-friendly image editing with minimal text input: leveraging captioning and injection techniques,gpt-4-1106-preview,8,"The study focuses on making prompt engineering more user-friendly by categorizing prompts by semantic details and proposing methods to simplify the text prompt process for image editing, which is relevant to prompt engineering. The relevance is marked down slightly because the abstract suggests a specific application to image editing rather than a comprehensive systematic review on hard prefix prompts, but it still contributes to the broader topic of prompt optimization and efficiency."
-ascm: an answer space clustered prompting method without answer engineering,gpt-4-1106-preview,8,"This paper is highly relevant to prompt engineering as it proposes an innovative approach to prompt-based learning, addressing limitations in answer mapping by using semantic clustering and synonym initialization. Although not explicitly focused on 'hard prefix prompts,' the concept of improved answer-category mapping in prompt-based learning and the influence on model performance is central to the study of efficient and effective prompt designs. The model's approach of clustering answers to manage diverse linguistic expressions without manual or automatic answer constraints is integral to the broader conversation of how prompts interact with pre-trained language models in tasks like classification and NLI. The semi-supervised stair learning method could also contribute to a better understanding of knowledge distillation in the context of prompt engineering."
-do llms possess a personality? making the mbti test an amazing evaluation for large language models,gpt-4-1106-preview,7,"The paper addresses the feasibility of using the Myers-Briggs Type Indicator (MBTI) to evaluate large language models (LLMs), which involves investigating if the personality types of LLMs can be influenced by prompt engineering. This suggests that the study explores, to some extent, how prompts can be used to shape the output of LLMs, aligning with the broader topic of prompt engineering. However, the focus on MBTI and personality assessment is somewhat tangential to the core aspects of prompt engineering, such as prompt formats or effectiveness, and does not directly address the concept of hard prefix prompts. Therefore, while the study is related to prompt engineering, it is not entirely centered on it, leading to the rating of 7 for relevance."
-the application of chatgpt in healthcare progress notes: a commentary from a clinical and research perspective,gpt-4-1106-preview,7,"The text discusses the use of ChatGPT, an AI-driven language model, in the context of healthcare progress notes, emphasizing the relevance of 'prompt engineering techniques' for effective integration into clinical practice. While the text does not focus specifically on a 'comprehensive systematic review on hard prefix prompts,' it does reference the application of prompt engineering in a practical setting, demonstrating its significance in real-world applications and hence has relevance to the field of prompt engineering. That said, the focus on healthcare rather than the technical aspects of prompt engineering itself means the relevance is substantial but not complete."
-copilot for xcode: exploring ai-assisted programming by prompting cloud-based large language models,gpt-4-1106-preview,8,"The paper's relevance to prompt engineering is significant as it describes how an AI-assisted tool, Copilot for Xcode, utilizes prompt engineering through a chat interface to enable features such as code generation, autocompletion, documentation, and error detection. The integration of Large Language Models with a development environment and the tool's ability to process 'small' decisions for program composition signifies the application of prompt engineering techniques, making it highly relevant to the study of prompt engineering, especially within the domain of software development and AI-assisted programming tools."
-towards equitable representation in text-to-image synthesis models with the cross-cultural understanding benchmark (ccub) dataset,gpt-4-1106-preview,7,"The abstract discusses a 'culturally-aware priming approach' and mentions the use of automated prompt engineering with GPT-3, which is relevant to the topic of prompt engineering. However, the main focus seems to be on text-to-image synthesis and fighting bias through data curation, rather than on the details of prompt engineering itself. Therefore, while prompt engineering is a component of the study, it is not the central topic, hence the rating of 7 for relevance."
-omniscientdb: a large language model-augmented dbms that knows what other dbmss do not know,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering study as it discusses automatic prompt engineering within the context of a database management system (DBMS). It specifically addresses the issue of constructing appropriate prompts to a large language model in response to SQL queries for the purpose of data augmentation. The paper's focus on exploring different prompting techniques and their application in a DBMS setting makes it pertinent to the field of prompt engineering. However, it does not cover the topic of hard prefix prompts exclusively or systematically, as the abstract suggests a broader application, hence the rating is not a full 10."
-how understanding large language models can inform their use in physics education,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering study as it specifically discusses the impact of prompt-engineering techniques on an LLM's (ChatGPT) performance in physics education. It includes practical illustrations of how prompt engineering can be used to aid understanding and problem-solving in physics, which is a direct application of prompt engineering. The only reason the rating isn't a 10 is that the paper's focus is somewhat narrow (specific to physics education) and does not address the broader spectrum of hard prefix prompts across various domains."
-automatic bug fixing via deliberate problem solving with large language models,gpt-4-1106-preview,9,"The abstract discusses leveraging a large language model to improve automated program repair, specifically by using an interactive prompting technique called Tree of Thoughts (ToT). Since this technique is directly related to the use and innovation of prompt engineering to enhance the model's ability to solve complex tasks such as bug fixing, the relevance to prompt engineering study is very high. The only reason it's not a perfect 10 is that the description doesn't solely focus on the prompt engineering aspect but also on the overall capability of large language models in automated program repair."
-data-driven approach for formality-sensitive machine translation: language-specific handling and synthetic data generation,gpt-4-1106-preview,8,"The paper presents an empirical prompt engineering strategy as part of its data-driven approach to FSMT. Although it does not focus solely on hard prefix prompts, the mention of prompt engineering indicates that this aspect was a significant component of their research methodology. The study's focus on artificial data generation and tailoring the model performance using prompt engineering suggests that the paper would be relevant to someone interested in prompt engineering, even if the main context is machine translation rather than a 'comprehensive systematic review' of hard prefix prompts specifically."
-noisy exemplars make large language models more robust: a domain-agnostic behavioral analysis,gpt-4-1106-preview,8,"The abstract discusses the use of systematic approaches in prompt engineering to assess the robustness of large language models (LLMs) in multi-hop reasoning tasks. While it doesn't specifically mention 'hard prefix prompts', it does cover the broader topic of prompt engineering and the perturbations used to test and potentially improve the models' responses. The research's emphasis on few-shot prompting and robustness is highly relevant to the field of prompt engineering, thus warranting a high rating."
-original research,gpt-4-1106-preview,7,"The abstract describes a study that focuses on the use of prompt engineering techniques within the context of art and design education, specifically using OpenAI's DALL-E2. The research explores how students were taught to refine their ideas and prompts iteratively to produce better visual outcomes with generative AI tools. This emphasis on the iterative refinement of prompts is directly relevant to the field of prompt engineering, as it pertains to understanding and improving the way prompts are constructed and their effects on the outputs generated by AI models. However, the study also touches on ethical considerations, the replacement of artists, and the integration of AI tools into the art and design curriculum, which, while important, are somewhat tangential to the technical and methodological aspects of prompt engineering. Therefore, the rating reflects the study's substantial relevance to prompt engineering due to the focus on teaching and practicing prompt refinement, but it is not exclusively centered on the systematic study of hard prefix prompts."
-chatgpt-based debate game application utilizing prompt engineering,gpt-4-1106-preview,9,"The abstract describes a study focused on the application of prompt engineering within an educational debate game, utilizing ChatGPT. It illustrates the use of prompt engineering to control and refine the outputs of the language model for a specific domain, which is directly related to prompt engineering study. The research aims to improve ChatGPT's responses by providing specific instructions and case-based prompts, aligning perfectly with the concept of hard prefix prompts in prompt engineering. One point is docked because the abstract does not explicitly mention 'hard prefix prompts' as a focus; however, the content is highly relevant to the overall field of prompt engineering."
-exploring the impact of prompt engineering on chatgpt 3.5 text summarization: a bert score evaluation,gpt-4-1106-preview,9,"The described study directly investigates the impact of prompt engineering on ChatGPT 3.5, with a particular emphasis on text summarization tasks. It measures performance using BERT score evaluation, which is highly relevant to understanding how different prompts can affect the output of AI in NLP applications. Thus, the relevance to prompt engineering studies is high. The reason for not giving a perfect score is the absence of a 'TL;DR', which could provide a concise summary of the results, an element that could further solidify its relevance by directly showcasing how prompts influence the summarization process."
-promptor: a conversational and autonomous prompt generation agent for intelligent text entry techniques,gpt-4-1106-preview,9,"The abstract discusses the creation and impact of an agent called Promptor, which directly relates to the field of prompt engineering, as it generates prompts for language models. This is highly relevant because it addresses the challenge of creating effective prompts, a core issue in prompt engineering. Moreover, it involves actual user studies to compare prompts generated by Promptor against those created by human designers. The slight deduction in rating is due to the abstract not focusing exclusively on 'hard prefix prompts,' which was specified in the original prompt, but the overall study still contributes significantly to the domain of prompt engineering."
-simple llm prompting is state-of-the-art for robust and multilingual dialogue evaluation,gpt-4-1106-preview,8,"The abstract discusses the use of a novel framework that incorporates prompting Large Language Models (LLMs) for improving dialogue evaluation, which is relevant to prompt engineering. Prompt engineering involves designing inputs that help LLMs produce more effective and relevant outputs, and the context here is applying such techniques to evaluate dialogues in multiple languages and ensuring robustness. The relevance might not be a perfect 10 because it is specific to dialogue evaluation rather than prompt engineering in general, but the principles and implications of this research can contribute significantly to the field of prompt engineering as it applies to dialogue systems."
-towards understanding chain-of-thought prompting: an empirical study of what matters,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering as it delves into the specifics of how Chain-of-Thought prompting impacts the performance of language models. Understanding the effectiveness of CoT, even with invalid demonstrations, offers significant insights into prompt design and how language models can generate coherent reasoning steps. This may directly influence future prompt engineering strategies."
-improving language model prompting in support of semi-autonomous task learning,gpt-4-1106-preview,9,"The abstract provided discusses the development of an agent capability for constructing effective prompts that elicit useful responses from language models for the purpose of learning new tasks. This is highly relevant to the field of prompt engineering, as it directly involves optimizing interaction strategies (prompts) to improve the utility of language model outputs in specific contexts. Although the term 'hard prefix prompts' from the initial prompt is not explicitly mentioned, the essence of the study is deeply intertwined with the principles of prompt engineering, hence the high relevance rating."
-boosting theory-of-mind performance in large language models via prompting,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering as it investigates the effectiveness of in-context learning prompts in improving the theory-of-mind performance of large language models (LLMs). It directly addresses how tailored prompts can enhance the reasoning capabilities of AI systems, which is a core aspect of prompt engineering. Although the focus is specifically on theory-of-mind tasks, the findings have broader implications for the field of prompt engineering, especially concerning the design of prompts that can guide LLMs towards better understanding and interpreting human mental states and intentions."
-"see, think, confirm: interactive prompting between vision and language models for knowledge-based visual reasoning",gpt-4-1106-preview,7,"The paper introduces a framework, IPVR, which integrates interactive prompting mechanisms within a vision-language reasoning context. While the study primarily focuses on knowledge-based visual reasoning tasks, the use of prompting in the 'think stage' directly relates to prompt engineering as it involves designing prompts to steer a large language model's (LLM) output. This is relevant to the concept of 'hard prefix prompts', which consist of prefixed instructions that guide the model's generation process. Thus, the relevance to prompt engineering is significant, but not exclusive since the paper also emphasizes few-shot learning, transparency, and trustworthiness in reasoning, deviating from prompt engineering as the sole focus."
-"prompting and evaluating large language models for proactive dialogues: clarification, target-guided, and non-collaboration",gpt-4-1106-preview,8,"The abstract points to a study focused on the evaluation and enhancement of conversational systems using Large Language Models through prompt engineering techniques such as the 'Proactive Chain-of-Thought'. Although the main emphasis does not appear to be on 'hard prefix prompts' specifically, the relevance to prompt engineering is clear as it discusses prompting strategies and schemes to handle complex dialogue scenarios. This aligns with the study of how different prompts can influence the behavior and responses of language models. However, because it does not explicitly mention 'hard prefix prompts', it cannot receive a perfect score for relevance."
-pive: prompting with iterative verification improving graph-based generative capability of llms,gpt-4-1106-preview,8,"The study involves a specific application of prompt engineering in the context of generating structured data from large language models (LLMs). The introduction of a framework (PiVe) that uses fine-grained prompts through iterative verification to enhance an LLM's output is directly related to the mechanics of designing effective prompts, which is a key aspect of prompt engineering. While the focus is on improving graph-based generation, which is a specialized subfield, the core concept of using prompts iteratively to refine outcomes is highly relevant to prompt engineering studies. The rating is not a perfect 10 as the extract does not mention 'hard prefix prompts' directly, but the methodology is clearly within the realm of prompt engineering."
-enhancing small medical learners with privacy-preserving contextual prompting,gpt-4-1106-preview,8,"The abstract describes a study focused on enhancing the capabilities of small language models (SLMs) in the medical field through advanced prompting techniques that involve large language models (LLMs) without compromising patient privacy. The core of the study revolves around prompt engineering by designing a system that uses LLMs to generate contextual prompts, which then assist SLMs in performing medical tasks more effectively. This falls under the realm of prompt engineering as it pertains to the creation and optimization of prompts to elicit desired responses from language models. Although it is specific to the medical domain and privacy preservation, the principles and methodologies employed are relevant to the broader study of prompt engineering, especially in how it can be tailored to enhance the performance of smaller models within confidential constraints."
-grammar prompting for domain-specific language generation with large language models,gpt-4-1106-preview,8,"The abstract describes an approach to improve the performance of large language models on domain-specific language generation tasks by using grammar prompting. Although the term 'hard prefix prompts' is not explicitly mentioned, grammar prompting can be considered a form of structured prompt engineering, and the systematic review would likely be interested in various methods of prompt engineering, including grammar prompting. This would make the study significantly relevant to those looking to understand different prompting techniques, especially in the context of generating highly structured languages. The relevance is not rated as a full 10 because the abstract does not directly address a review on 'hard prefix prompts' but rather discusses a related concept in prompt engineering."
-allies: prompting large language model with beam search,gpt-4-1106-preview,7,"The described study, 'allies: prompting large language model with beam search', presents a method that iteratively refines and expands on initial queries. This iterative process of generating new queries can be seen as a form of prompt engineering, where the goal is to improve the performance of a large language model for complex tasks. Although the study does not directly focus on 'hard prefix prompts' as specified in the prompt engineering study request, the concept of refining and modifying prompts to leverage hidden knowledge aligns with the broader field of prompt engineering. Therefore, the relevance to prompt engineering is significant but not entirely focused on the specific aspect of 'hard prefix prompts'."
-prompting language-informed distribution for compositional zero-shot learning,gpt-4-1106-preview,7,"The abstract indicates that the paper introduces a model called PLID that uses prompting strategies with pre-trained large language models for compositional zero-shot learning, which aligns with the field of prompt engineering. While the term 'hard prefix prompts' is not directly mentioned, prompting language-informed distributions could potentially involve relevant concepts. The relevance is rated as 7 because prompt engineering constitutes a significant aspect of the research, but it's not clear if it specifically and directly addresses 'hard prefix prompts' as the primary focus."
-retrieval-augmented gpt-3.5-based text-to-sql framework with sample-aware prompting and dynamic revision chain,gpt-4-1106-preview,9,"The paper is highly relevant to prompt engineering as it delves into the utilization of large language models (LLMs) and the design of efficient prompts to generate SQL queries from natural language questions. It directly addresses the challenges of prompt learning in contexts that require precise syntax, such as SQL, and proposes innovative solutions to improve the process. The concepts of retrieval-augmented prompting, sample-aware prompting, and a dynamic revision chain are advanced techniques within the scope of prompt engineering, showing how refined prompting strategies can lead to better model performance on specialized tasks." -towards better chain-of-thought prompting strategies: a survey,gpt-4-1106-preview,9,"The abstract indicates that the study is a systematic survey of the Chain-of-Thought (CoT) prompting technique, a relevant aspect of prompt engineering for large language models (LLMs). CoT is directly tied to the strategies used to elicit better performance from LLMs, which is a central concern of prompt engineering. The survey’s aim of providing a comprehensive analysis and guide on the factors influencing CoT prompting makes it highly relevant. However, since it does not cover 'hard prefix prompts' explicitly, but rather prompting strategies as a whole, one point is deducted, so the rating is not a perfect 10." -"reinforcement learning in the era of llms: what is essential? what is needed? an rl perspective on rlhf, prompting, and beyond",gpt-4-1106-preview,7,"The paper in question discusses Reinforcement Learning from Human Feedback (RLHF) and its applications to Large Language Models (LLMs). Prompt engineering is relevant to the use of LLMs, as it encompasses the techniques and strategies used to effectively instruct LLMs to produce desired outputs. While the paper does not focus narrowly on 'hard prefix prompts' specifically, the discussion around RLHF and prompting evaluation is pertinent to prompt engineering as a whole. Understanding RLHF and its implications can contribute to more advanced prompt engineering methods, particularly in evaluating and optimizing prompts for better performance in various tasks assigned to LLMs. Thus, the relevance to the study of prompt engineering is significant, though not exclusively focused on hard prefix prompts." -can instruction fine-tuned language models identify social bias through prompting?,gpt-4-1106-preview,8,"The study is relevant to prompt engineering as it specifically investigates the use of zero-shot prompting, including Chain-of-Thought (CoT) prompts, to evaluate the capability of language models at bias identification tasks. Since prompt engineering encompasses designing and refining the prompts given to language models to elicit specific types of responses or measure certain capabilities, the study’s focus on how these prompts can be used to detect social biases is pertinent to the field. However, the study does not appear to specifically address 'hard prefix prompts', which would be necessary for a 10 rating since the initial query asked for relevance to prompt engineering studies focused on hard prefix prompts."
-approximating online human evaluation of social chatbots with prompting,gpt-4-1106-preview,9,"The study introduces a Dialog system Evaluation framework based on Prompting (DEP), which directly relates to prompt engineering as it involves using large language models conditioned with specific prompts to evaluate conversational agents. This is highly relevant to the study of prompts and their impact on the performance of language models. The relevance is not a full 10 because the study seems to be more focused on evaluating chatbots than on 'hard prefix prompts', and the methodology might be broader than hard prefix prompts alone." -march in chat: interactive prompting for remote embodied referring expression,gpt-4-1106-preview,7,"The provided title and abstract describe a study that engages with large language models (LLMs) for the task of Vision-and-Language Navigation (VLN), particularly focusing on generating navigation plans from high-level instructions, a form of interactive prompting. Although it doesn't directly address the concept of 'hard prefix prompts' in the described system, the use of prompts to communicate with LLMs is relevant to the field of prompt engineering. The March-in-Chat (MiC) model's interactive prompting mechanism that adapts to visual observations could lend insights into how prompt engineering can be applied in dynamic, real-world environments. While the study emphasizes action planning over strict prompting techniques, the interaction between the LLM and the environment via prompts and the adaptability of these prompts is related to the broader topic of engineering prompts for specific tasks. Hence, the rating reflects that the paper has relevance but is not entirely focused on 'hard prefix prompts' specifically within the study of prompt engineering." -prompting a large language model to generate diverse motivational messages: a comparison with human-written messages,gpt-4-1106-preview,9,"The study directly investigates prompt engineering by comparing the effectiveness of different prompt structures on the output diversity of a large language model (GPT-4). The use of a crowdsourcing pipeline as a model for constructing LLM prompts, and then measuring the impact on message diversity, provides empirical insights into the principles of prompt engineering. It explores the nuances of constructing prompts based on successful human instruction strategies and their potential utility in eliciting quality and diverse outputs from AI systems. This is highly relevant to the field of prompt engineering; although not focused on 'hard prefix prompts' specifically, it evaluates the broader concept of structured prompting." -prompting gpt-3.5 for text-to-sql with de-semanticization and skeleton retrieval,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering as it discusses a framework for improving the performance of large language models (LLMs) on the Text-to-SQL task, which is inherently based on the concept of providing effective prompts to the model. The de-semanticization process and skeleton retrieval align with hard prefix prompts since they involve manipulating the input to the LLM to enhance its understanding and output. This systematic approach to tailoring demonstrations and prompts to an LLM's requirements is a direct application of prompt engineering strategies. The reason why it's not a full 10 is that it focuses specifically on Text-to-SQL tasks, which is just a subset of all possible applications of prompt engineering."
-large language models can self-improve,gpt-4-1106-preview,8,"The abstract outlines a method for self-improvement of large language models using 'Chain-of-Thought' prompting and self-consistency without requiring ground truth labels. This is highly relevant to the field of prompt engineering, as it deals with the creation and use of specific prompts ('high-confidence' rationale-augmented answers) to enhance a model's performance. The study's relevance is not a full 10 because the prompt engineering is focusing specifically on 'hard prefix prompts,' and it is not clear from the abstract if the 'high-confidence' prompts used exactly fit under this category. However, the techniques are closely related to prompt engineering and have implications for the development of prompts used in training LLMs." -multimodal chain-of-thought reasoning in language models,gpt-4-1106-preview,7,"The abstract pertains to the field of language models and their ability to perform complex reasoning, a topic which is inherently connected to prompt engineering as it explores how prompts can be structured to improve performance. While the study focuses on CoT (chain-of-thought) prompting, which is a specific technique within prompt engineering, it also introduces a multimodal approach by incorporating both text and images. The relevance to prompt engineering is significant, as the multimodal CoT could be a novel prompt engineering strategy, but it does not directly address hard prefix prompts, which would have been the direct subject of a prompt engineering study according to the initial prompt inquiry. Therefore, the rating is not a perfect score." -towards expert-level medical question answering with large language models,gpt-4-1106-preview,8,"The abstract provided discusses the use of Large Language Models (LLMs) and their application in medical question answering. It emphasizes the role of prompting strategies, including a 'novel ensemble refinement approach', which are essential components of prompt engineering. This indicates that the study involves research into optimizing prompts for LLMs to improve their performance in a specific domain, which is highly relevant to the broader field of prompt engineering. The rating is not a full 10 because the abstract focuses on medical question answering and LLM improvements in a specific domain rather than a general examination of hard prefix prompts or prompt engineering as a standalone subject." -language models can solve computer tasks,gpt-4-1106-preview,8,"The abstract describes a study related to the use of a prompting scheme called RCI in improving the performance of a pre-trained large language model (LLM) for computer tasks and natural language reasoning. While the study does not specifically mention 'hard prefix prompts', it directly involves the broader field of prompt engineering by showcasing how an LLM can be prompted to enhance its ability to interpret and execute tasks based on natural language commands. The emphasis on the efficacy of specialized prompting schemes (including the comparison with 'chain of thought' prompting) indicates that this research is highly relevant to the study and development of prompt engineering methods. The rating is not a full 10 as it does not explicitly focus on hard prefixes but prompt engineering in general." 
-is chatgpt the ultimate programming assistant - how far is it?,gpt-4-1106-preview,8,"The title and abstract provided describe an empirical study of ChatGPT's capabilities as a programming assistant and, importantly, they highlight the significance of prompt engineering in its effectiveness. Although the study itself is not about 'hard prefix prompts' specifically, the ramifications of the research touch upon the broader theme of how to interact effectively with LLMs (like ChatGPT) to solve programming tasks. The mention of 'demonstrating the importance of prompt engineering' illustrates a direct relevance to the field of study; however, since it's not strictly about 'hard prefix prompts' but more broadly covers ChatGPT's functionality, the rating is slightly reduced." -art: automatic multi-step reasoning and tool-use for large language models,gpt-4-1106-preview,8,"The provided abstract describes a framework (ART) that enhances the capabilities of Large Language Models by enabling them to automatically generate intermediate reasoning steps and integrate tool use. This is related to prompt engineering because it explores advanced techniques to optimize how prompts are given to large language models to evoke sophisticated reasoning and external information integration. Although it does not specifically mention 'hard prefix prompts,' the research is highly relevant to the field of prompt engineering as it advances how models are prompted to solve tasks. It falls slightly short of a perfect relevance score because it does not directly address 'hard prefix prompts' but rather focuses on the broader context of generating reasoning steps and tool integration, which can be considered a part of prompt engineering." -graph of thoughts: solving elaborate problems with large language models,gpt-4-1106-preview,9,"The provided abstract relates closely to the study of prompt engineering, as it introduces a new framework for advancing prompting capabilities in LLMs, which is directly relevant to the field. The introduction of 'Graph of Thoughts' as a means to improve LLM reasoning and the possibility of it being used to develop new prompting schemes suggest a high relevance to the study and practice of prompt engineering. The abstract claims an enhancement over existing prompting paradigms, pointing to a significant contribution to the field. However, the exact term 'hard prefix prompts' is not mentioned, which prevents a full rating of 10." -task and motion planning with large language models for object rearrangement,gpt-4-1106-preview,8,"The abstract describes 'LLM-GROP,' a system that leverages large language models (LLMs) through prompting to understand commonsense knowledge about object arrangements. Prompt engineering is directly used to retrieve information about object configurations, which is relevant to studies of prompt engineering. The paper seems to explore the efficacy of different prompts to enable a robot to understand and execute tasks involving physical objects, thus demonstrating a practical application of prompts in AI/robotic systems. While the main focus appears to be on task and motion planning, the use of prompt engineering is a significant aspect of the study, hence the high relevance rating."
-interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions,gpt-4-1106-preview,7,"The study presents an approach, IRCoT, that combines retrieval techniques with chain-of-thought reasoning for enhancing multi-step question-answering in large language models. While it doesn't specifically talk about hard prefix prompts, it is indirectly relevant to prompt engineering as it deals with improving the quality and relevance of the responses generated by the AI. Considering that prompt engineering is all about optimizing how we interact with AI models to improve their output, the study's focus on utilizing a CoT to guide retrieval and improve the AI's reasoning steps is a valuable contribution to the field. It could inform investigations into how prompts can be optimized to generate more accurate and contextually relevant retrieval queries, which is a crucial aspect of prompt engineering. However, it does not address hard prefix prompts directly; hence the rating is not a full 10." -unleashing cognitive synergy in large language models: a task-solving agent through multi-persona self-collaboration,gpt-4-1106-preview,8,"The described study is quite relevant to prompt engineering as it explores the concept of Solo Performance Prompting (SPP), which is a method of engaging a Large Language Model (LLM) in multi-turn self-collaboration with multiple personas. This relates to prompt engineering because it involves designing prompts that can elicit certain behaviors or responses from the model, akin to engaging with different facets or 'personas' of the AI. Crafting these nuanced prompts that can stimulate cognitive synergy is a direct example of prompt engineering. The paper does not specifically address 'hard prefix prompts', but the concept of using predetermined prompts to instigate particular responses or modes of operation in the LLM is within the scope of prompt engineering studies. Thus, the study is highly relevant to the development of sophisticated prompt engineering techniques." -safety assessment of chinese large language models,gpt-4-1106-preview,8,"The abstract describes a study focused on the development of a benchmark for the safety assessment of Chinese large language models (LLMs) using a method that involves providing test prompts and evaluating the safety of the model's responses. Since this method relies heavily on 'prompt engineering' (the strategy of crafting prompts to elicit specific responses or behaviors from AI models), there is a high relevance to prompt engineering studies. Specifically, the benchmark involves prompting as a core part of the assessment process. However, it does not directly focus on improving or innovating prompt engineering techniques; therefore, the rating is not a perfect 10." -can llms express their uncertainty? an empirical evaluation of confidence elicitation in llms,gpt-4-1106-preview,8,"The presented study is highly relevant to prompt engineering as it explores confidence elicitation in large language models (LLMs) without the need for fine-tuning or access to proprietary information. Prompt engineering is a subset of AI research focused on finding ways to improve the performance of AI models by crafting effective prompts. The methods investigated, which include verbalize-based, consistency-based, and hybrid methods, are directly related to how prompts can be designed to elicit more accurate confidence levels from LLMs.
This is a key aspect of prompt engineering because it relates to improving the interaction with and the outputs of LLMs, which is a central goal of the field. However, it doesn’t focus specifically on 'hard prefix' prompts, which slightly reduces its relevance from a perfect score." -when to make exceptions: exploring language models as accounts of human moral judgment,gpt-4-1106-preview,8,"The paper addresses the development and application of a novel prompting strategy (MORALCOT) with the goal of improving the performance of Large Language Models (LLMs) on rule-breaking question-answering tasks that relate to human moral judgments. Since prompt engineering involves crafting inputs that guide AI models to produce the desired outputs, and the MORALCOT strategy is essentially a method of prompt engineering tailored for moral reasoning contexts, this study is quite relevant to prompt engineering. Although it focuses specifically on moral judgments rather than the broader range of prompt engineering applications, the insights gleaned from creating effective prompts in this challenging area are valuable for the field. The rating is not a full 10 as the content of the paper is narrowly focused on moral reasoning, which is just one of many domains where prompt engineering can be applied." -expertprompting: instructing large language models to be distinguished experts,gpt-4-1106-preview,9,"The paper is highly relevant to prompt engineering as it discusses a novel strategy, 'ExpertPrompting', to improve the performance of large language models by crafting detailed prompts that contextualize the model as an expert. This approach is directly aligned with the study and applications of prompt engineering, aiming to enhance the quality of outputs generated by LLMs. While the paper may not specifically mention 'hard prefix prompts', the concept of customizing prompts to induce expert-level answers fits well into the broader category of prompt engineering techniques, making the paper's content significantly pertinent to the field." -automatic evaluation of attribution by large language models,gpt-4-1106-preview,7,"The relevance to the study of prompt engineering is significant because the abstract describes research on prompting Large Language Models (LLMs) as one of the approaches for automatic evaluation. Although the main focus is on evaluating attribution, the fact that prompting is used as a method indicates that the results and methodologies could be applicable and informative for prompt engineering studies. However, the primary emphasis seems to be on attribution evaluation rather than prompt construction or optimization itself, which prevents a full relevance score." -logic-lm: empowering large language models with symbolic solvers for faithful logical reasoning,gpt-4-1106-preview,7,"The study presents a framework that improves the logical reasoning capabilities of Large Language Models (LLMs) through integration with symbolic solvers. While the topic is not directly related to 'hard prefix prompts' or prompt engineering, the methodology described includes a step to translate natural language problems into symbolic formulation, which could be considered a form of complex prompt engineering. The method's aim to enhance LLMs' problem-solving skills with better input translation is relevant to the wider field of prompt engineering, especially in terms of constructing prompts that require logical reasoning.
Therefore, the relevance is somewhat high, but not directly focused on the core concept of 'hard prefix prompts'." -red-teaming large language models using chain of utterances for safety-alignment,gpt-4-1106-preview,8,"The study presents information relevant to prompt engineering by discussing the effects of 'Chain of Utterances-based (CoU) prompting,' which directly relates to how prompts are structured and used to interact with large language models. Additionally, the work on the safety evaluation benchmark RED-EVAL and the proposed RED-INSTRUCT for the safety alignment of LLMs contributes to understanding and improving prompt-based interactions with these models. This has direct implications for prompt engineering as it informs the construction of prompts that can be used to evaluate and align LLMs for safety. However, the paper primarily focuses on the safety and ethical implications of prompting rather than on prompt engineering for improving the general performance or functionality, which is why the rating is not a full 10." -can large language models write good property-based tests?,gpt-4-1106-preview,7,"The abstract describes research into leveraging large language models (LLMs) to synthesize property-based tests, which is a subset of prompt engineering because it specifically looks at how to prompt LLMs to perform a particular software engineering task. The relevance to the study of prompt engineering is significant, as it involves the design of prompts to effectively communicate with LLMs and generate meaningful output. However, it is not directly focused on hard prefix prompts or a comprehensive systematic review of such prompts, which would be the central concern in prompt engineering studies. Therefore, the rating is not a full 10 but still high due to the close connection with the practice of prompt engineering in the context of LLMs." -i spy a metaphor: large language models and diffusion models co-create visual metaphors,gpt-4-1106-preview,8,"The described study involves a sophisticated form of prompt engineering where the Large Language Models (LLMs) are specifically instructed to generate textual content that then serves as a prompt for diffusion-based text-to-image models. Although the study focuses on the creation of visual metaphors, the process requires careful engineering of text-based prompts to elicit the desired visual outputs from the AI. Therefore, while the research does not directly study 'hard prefix prompts,' it contributes to the broader understanding of how different prompting strategies can guide AI behavior, which is highly relevant to the field of prompt engineering." -"despite ""super-human"" performance, current llms are unsuited for decisions about ethics and safety",gpt-4-1106-preview,8,"The abstract discusses the development and evaluation of a new prompting strategy for Large Language Models (LLMs), and specifically mentions how this strategy outperforms humans at ethical reasoning tasks. Since prompt engineering involves crafting inputs that can significantly affect the performance of LLMs, and this abstract describes a prompting strategy that notably changes the model's output, the content is highly relevant to the study of prompt engineering. The reduction of two points is due to the focus also being on ethical reasoning and model limitations rather than purely on prompt engineering techniques."
-human-in-the-loop through chain-of-thought,gpt-4-1106-preview,7,"The abstract presents a study that is related to improving the performance of language models through human intervention, specifically in the context of Chain-of-Thought prompting. While not directly addressing 'hard prefix prompts,' it discusses the broader topic of prompt engineering and the optimization of human-in-the-loop systems. This is relevant to the field of prompt engineering as it explores enhancing reasoning by correcting intermediate steps, which could be considered a form of prompt optimization. However, since it does not specifically mention 'hard prefix prompts,' the rating is not a full 10." -an evaluation of log parsing with chatgpt,gpt-4-1106-preview,8,"The evaluation study focuses on the performance of ChatGPT in log parsing tasks and how different prompting methods affect this performance. While it does not specifically mention 'hard prefix prompts', it does address the broader concept of 'prompting methods', which is directly relevant to prompt engineering. The focus on few-shot prompting and the exploration of effective prompts for log parsing imply that understanding prompt engineering is a significant component of the research. The study's relevance to prompt engineering is therefore high, but it is not a perfect match since it is not a 'comprehensive systematic review on hard prefix prompts' specifically." -evaluating gpt-3 generated explanations for hateful content moderation,gpt-4-1106-preview,7,"The abstract is relevant to the study of prompt engineering to a considerable extent, as it discusses the utilization of GPT-3's language model for generating explanations, which requires careful design of prompts to tailor the model's outputs for hate speech moderation. The study's focus on evaluating the effectiveness and limitations of explanations prompted from a language model directly ties in with the principles of prompt engineering, which seeks to understand how best to interface with language models to achieve desired outcomes. However, it does not specifically discuss 'hard prefix prompts' but rather general prompting strategies, so the relevance is not absolute." -large language models are strong zero-shot retriever,gpt-4-1106-preview,8,"The relevance to the study of prompt engineering is high, since the abstract describes the use of a large language model (LLM) to improve the efficiency and effectiveness of information retrieval through a prompt-based approach. Specifically, it mentions augmenting a query with potential answers and using prompts to make the LLM generate more precise answers, which aligns with understanding and improving the interaction with language models via prompts. However, it does not focus exclusively on 'hard prefix prompts', which might have been a part of a more targeted study of prompt engineering." -careful data curation stabilizes in-context learning,gpt-4-1106-preview,7,"The abstract discusses in-context learning (ICL) and the impact of data selection on the performance of large language models (LLMs), which is pertinent to the study of prompt engineering, as it relates to the optimization of input data to improve model response. While the focus appears to be on data curation rather than prompt formulation (i.e., hard prefix prompts), the principles of selecting high-quality examples and understanding their influence on model performance are relevant.
The methods described, such as CONDACC and DATAMODELS, could potentially be applied to or inform approaches in prompt engineering, making the study somewhat relevant, although not exclusively focused on prompt design." -forward-backward reasoning in large language models for verification,gpt-4-1106-preview,8,"The paper discusses a method related to prompt engineering, specifically 'Chain-of-Thought (CoT) prompting', which is a form of structuring prompts to guide large language models (LLMs) in reasoning tasks. The introduction of 'forward-backward reasoning,' as a means to enhance the verification of candidate answers generated by LLMs, represents a novel approach within the domain of prompt engineering. Although the paper does not directly mention 'hard prefix prompts', the relevance is high due to the focus on developing novel prompting methodologies to improve the performance and reliability of LLMs in complex reasoning tasks, which falls under the broader umbrella of prompt engineering studies." -how to catch an ai liar: lie detection in black-box llms by asking unrelated questions,gpt-4-1106-preview,7,"The study presents an approach for detecting lies from LLMs that involves crafting and using follow-up prompts or questions, which is related to the concept of prompting in language models. Lie detection in this context can be considered a fringe or specialized aspect of prompt engineering aimed at improving the reliability and truthfulness of LLM responses. While not directly focused on 'hard prefix prompts', the research highlights the impact of prompt design on the behavior of LLMs, which falls within the broader scope of prompt engineering. Hence, the rating reflects that the paper is relevant but not central to a comprehensive systematic review on prompt engineering, specifically with a focus on 'hard prefix prompts'." -self-checker: plug-and-play modules for fact-checking with large language models,gpt-4-1106-preview,8,"The abstract describes the 'Self-Checker' framework, which is relevant to prompt engineering, as it involves constructing prompts for large language models to perform fact-checking tasks in a zero-shot or few-shot setting. While the main focus of the paper is on the application of fact-checking, it directly involves prompt engineering to enable the large language models to understand and execute the task without extensive training or fine-tuning. Therefore, the paper is highly relevant to prompt engineering, especially in the context of using prompts to elicit specific functionalities from pre-trained models. However, it does not exclusively focus on 'hard prefix prompts' as indicated in the prompt engineering study, which might slightly limit its relevance in terms of specificity to that particular type of prompting." -llms to the moon? reddit market sentiment analysis with large language models,gpt-4-1106-preview,7,"The relevance to prompt engineering is significant since the abstract describes a semi-supervised learning approach that utilizes a large language model (LLM) and involves prompting the LLM to generate Chain-of-Thought summaries to improve the quality of sentiment analysis on social media. This indicates the study focuses on how to engineer prompts to obtain more accurate outputs from the LLM, which is a key aspect of prompt engineering. However, the study does not specifically mention 'hard prefix prompts', which suggests that while it is related to prompt engineering, it does not directly address the comprehensive systematic review of such prompts.
Therefore, the rating is not a full 10." -leveraging commonsense knowledge from large language models for task and motion planning,gpt-4-1106-preview,8,"The abstract describes the use of prompting techniques within Large Language Models (LLMs) to extract commonsense knowledge for task and motion planning, which is highly relevant to the field of prompt engineering. Specifically, the LLM-GROP system leverages prompts to guide the LLM in generating information about plausible physical arrangements, a task that aligns closely with the development of hard prefix prompts for specific applications. Although the study focuses on a practical application for service robots rather than a broad systematic review of prompt engineering, the underlying methodology and use of prompts to gain desired outputs from an LLM provide valuable insights into the prompt engineering process. The rating is not a full 10 as the paper does not explicitly focus on a systematic review of prompt engineering techniques, which appears to be the central requirement of the 'prompt engineering study' in question." -what do llms know about financial markets? a case study on reddit market sentiment analysis,gpt-4-1106-preview,8,"The study's focus on using large language models for sentiment analysis is highly relevant to prompt engineering, as it explores the effect of different prompting strategies on the performance of the model. The mention of Chain-of-Thought summaries and forcing the LLM through several reasoning paths is particularly pertinent to how prompts can be designed to elicit better responses from language models. Although the primary application is market sentiment analysis, the techniques used for prompting can be generalized and applied to other domains, making this research relevant to the study of prompt engineering. The rating is not a full 10 because the paper's primary goal is not the study of prompt engineering itself, but rather the application of prompting techniques to a specific problem, i.e., financial sentiment analysis." -enhancing in-context learning with answer feedback for multi-span question answering,gpt-4-1106-preview,8,"The paper describes a methodology for improving the performance of large language models in specific tasks through in-context learning and a novel prompting approach which involves providing feedback on model outputs. This is highly relevant to prompt engineering as it directly pertains to techniques for constructing prompts that can better guide models like ChatGPT. The focus on multi-span question answering does not explicitly pertain to 'hard prefix prompts' as indicated in the original query, but it does explore the broader field of prompt design and optimization, which is why the relevance is rated an 8 instead of a perfect 10." -retrieving texts based on abstract descriptions,gpt-4-1106-preview,8,"The abstract describes research on using Large Language Models (LLMs) to generate training data for a new model focused on semantic retrieval, which pertains to prompt engineering in that the data sourcing process involves prompting an LLM effectively. The relevance lies in addressing the use of LLMs to formulate prompts that yield useful data for specific tasks, which is a key part of prompt engineering. However, the text does not explicitly address 'hard prefix prompts', a more specialized topic within prompt engineering, hence the rating is not a full 10."
-queer people are people first: deconstructing sexual identity stereotypes in large language models,gpt-4-1106-preview,8,"The study is highly relevant to the field of prompt engineering because it discusses a post-hoc method to alter the prompts (chain-of-thought prompting) in order to influence the output of large language models. It addresses the issue of bias in LLMs, particularly against marginalized groups, an essential consideration within prompt engineering to ensure responsible AI practices. Recovering fair and unbiased responses from LLMs is a key application of prompt engineering, even though the study does not focus solely on 'hard prefix prompts' but rather on a broader set of prompt modification strategies." -knowing what llms do not know: a simple yet effective self-detection method,gpt-4-1106-preview,8,"The paper proposes a method that relies on prompt engineering to elicit different responses from LLMs to the same question, which directly involves the study of how prompts can be constructed and used to understand and evaluate the model's knowledge boundaries. Although it does not focus on 'hard prefix prompts' explicitly, the concept of diversifying textual expressions as prompts is closely related to the field of prompt engineering. The systematic approach to identify nonfactual responses through the analysis of divergences in LLM outputs is pertinent to the broader study of prompt engineering strategies, hence the high relevance rating." -retrieving supporting evidence for llms generated answers,gpt-4-1106-preview,8,"The described paper focuses on an experiment which involves prompting a Large Language Model (LLM) with a combination of a question and a retrieved answer to check for support of the LLM's generated answer. While it's not directly studying 'hard prefix prompts', it tackles a closely related topic in the field of prompt engineering: the verification and reliability of responses from LLMs, which could involve a form of prompt crafting. The relevance is high because understanding how prompts can be engineered to elicit verification behavior from an LLM is within the scope of prompt engineering studies. However, because it does not directly address the systematic review or exploration of 'hard prefix prompts', it does not get a full 10." -knowledge sanitization of large language models,gpt-4-1106-preview,7,"The abstract describes an approach for modifying the behavior of large language models using a specific fine-tuning technique to avoid disclosing sensitive information, which is relevant to the field of prompt engineering. This study is indirectly related to prompt engineering as it involves the engineering of prompts to ensure that the model's responses meet certain security and privacy requirements. This demonstrates the use of prompts to control and influence the output of language models. However, it does not specifically address 'hard prefix prompts,' which was the original topic, hence it doesn't receive a full relevance score." -reasoning in large language models through symbolic math word problems,gpt-4-1106-preview,8,"The study's focus on improving the alignment between the symbolic reasoning and the numeric answers of LLMs using a self-prompting approach is closely related to prompt engineering. It hints at the optimization of prompts to yield better performance from large language models in the context of solving symbolic math word problems, which is an exercise in prompting strategies. 
This aligns with the notion of hard prefix prompts that guide the LLMs towards a specific mode of reasoning. However, the study is not exclusively centered on prompt engineering but also explores the model's reasoning capabilities; hence the rating is not a full 10." -alphazero-like tree-search can guide large language model decoding and training,gpt-4-1106-preview,7,"The abstract discusses an approach to enhance the decoding and reasoning of LLMs by incorporating an AlphaZero-like tree-search framework. This is indirectly relevant to prompt engineering, as the paper seems to focus on improving LLMs' performance on tasks through tree-search algorithms rather than prompting techniques. However, the fact that it references the use of prompts in traditional methods, such as CoT, and seeks to provide a method that reduces reliance on human-designed prompts, makes it relevant to the study of prompt engineering. It addresses a limitation of current prompt-based techniques and offers an alternative that could influence future prompt design and utilization." -exploring human-like translation strategy with large language models,gpt-4-1106-preview,7,"The study focuses on the MAPS framework, which involves Multi-Aspect Prompting and Selection, a system that seemingly pertains to 'prompt engineering' as it includes the design of prompts that enable LLMs to extract and utilize translation-related knowledge. While the study does not directly address 'hard prefix prompts', it is implicitly relevant because it involves the engineering of prompts to improve the translation process of LLMs. Therefore, it has relevance to the subject of prompt engineering, albeit not strictly focused on hard prefix prompts." -"mmhqa-icl: multimodal in-context learning for hybrid question answering over text, tables and images",gpt-4-1106-preview,7,"The paper describes a novel method for improving question answering across multiple modalities using in-context learning strategies with large language models, which is relevant to prompt engineering. The technique enhances LLM prompting strategies for the task, which is a core aspect of prompt engineering. However, it does not focus directly on hard prefix prompts but on a broader application of prompts within multimodal question answering. Therefore, the relevance is significant but not entirely focused on the specific topic of hard prefix prompts." -gear: augmenting language models with generalizable and efficient tool resolution,gpt-4-1106-preview,7,"The title and abstract provided discuss an algorithm named GEAR that is relevant to the domain of prompt engineering, as it deals with enhancing the efficiency and effectiveness of large language models (LLMs) by using smaller models for tool grounding. Prompt engineering is a process that's closely related to how a language model interacts with external tools and uses prompts to perform tasks. While the study does not directly address 'hard prefix prompts', which may be a specific kind of prompt engineering technique, it does engage with the overall theme of improving the interaction between language models and tool utilization. Thus, its relevance is considerable but not entirely specific to 'hard prefix prompts' as suggested by the initial inquiry."
-constitutionmaker: interactively critiquing large language models by converting feedback into principles,gpt-4-1106-preview,8,"The abstract provided discusses an interactive tool called ConstitutionMaker that is directly involved in prompt engineering by allowing users to refine large language model outputs and steer chatbot behavior through feedback. While the study does not cover 'hard prefix prompts' specifically, it engages with the broader field of prompt engineering through user feedback and principles, which are fundamental to prompt engineering methodology. Thus, the relevance is high but not maximal since the specific focus on 'hard prefix prompts' is not mentioned." -theory of mind in large language models: examining performance of 11 state-of-the-art models vs. children aged 7-10 on advanced tests,gpt-4-1106-preview,7,"The study is relevant to prompt engineering to a significant degree, as it includes examining and scoring the performance of LLMs on complex cognitive tasks using various types of prompts, potentially revealing how different prompts can elicit sophisticated language understanding and reasoning. While the primary focus of the study seems to be on the cognitive abilities of LLMs, particularly Theory of Mind, the aspect of using prompts and evaluating different kinds of prompts (open versus closed questions) is a substantial component of prompt engineering. However, the study doesn't seem to be centered exclusively on 'hard prefix prompts' or the mechanics of prompt design; thus it's not fully aligned with a 'systematic review on hard prefix prompts'. Therefore, the rating isn't a perfect 10." -retrieving supporting evidence for generative question answering,gpt-4-1106-preview,8,"The abstract provided discusses experiments on the validation of generated answers by large language models (LLMs) using a combination of questions and answers as prompts for retrieval processes. This work is directly connected to the concept of prompt engineering, as it involves designing and refining prompts (in this case, combining questions with generated answers) to improve the performance of LLMs. The relevance is not a perfect 10 because the study focuses specifically on verification of LLM-generated content against a corpus, and not broadly on 'hard prefix prompts' or a systematic review of prompt engineering techniques. However, it addresses a key aspect of prompt construction and interaction with language models, which is essential to the field of prompt engineering." -synergistic integration of large language models and cognitive architectures for robust ai: an exploratory analysis,gpt-4-1106-preview,7,"The abstract describes the integration of Large Language Models (LLMs) with Cognitive Architectures (CAs), which is relevant to prompt engineering to the extent that it deals with utilizing prompts for directing LLM behavior. The mention of 'chain-of-thought prompting' indicates a direct relevance to prompt engineering techniques. However, the primary focus seems to be on the broader framework of integrating LLMs and CAs, rather than specifically on the development or study of hard prefix prompts within prompt engineering. Therefore, the relevance is substantial but not complete."
-empirical study of zero-shot ner with chatgpt,gpt-4-1106-preview,7,"The abstract describes research focused on improving the performance of language models on the zero-shot named entity recognition task, which involves strategies related to prompt engineering such as 'syntactic prompting' and 'tool augmentation'. This indicates relevance to prompt engineering as it involves designing inputs to elicit better performance from the model. However, the focus is more on the specific application of NER and the methodology to enhance LLMs like ChatGPT, rather than on prompt engineering in general or 'hard prefix prompts' specifically. This constitutes partial but significant relevance to the broader field of prompt engineering studies." -large language models can learn rules,gpt-4-1106-preview,9,"The provided abstract is highly relevant to the study of prompt engineering, as it discusses a method for improving the performance of large language models (LLMs) in reasoning tasks through a novel prompting framework, Hypotheses-to-Theories (HtT). This framework directly relates to the development and refinement of prompts to enhance the reasoning capabilities of LLMs, which is at the core of prompt engineering. The systematic approach of generating, verifying, and using rules represents the kind of systematic methodology that could be applied in research on hard prefix prompts. The only reason it doesn't receive full marks is that it does not specifically mention 'hard prefix prompts', but it addresses the broader field of prompting methods." -less is more for long document summary evaluation by llms,gpt-4-1106-preview,7,"The abstract describes a novel approach to evaluating summaries of long documents by LLMs that involves a key step of prompting the models after key sentence extraction, which is closely related to the concept of 'prompt engineering.' While the study is not directly focused on 'hard prefix prompts,' its relevance lies in the method of using prompts to efficiently guide LLMs towards desired tasks, which is an essential component of prompt engineering. Additionally, the results and practical recommendations could indirectly contribute to the understanding of how prompts affect the performance of language models in processing long documents. However, it is not a direct study of 'hard prefix prompts' in the sense of a comprehensive systematic review or an exploration of prompt structures and their effects; hence the rating does not reach the top of the scale." -developing a scalable benchmark for assessing large language models in knowledge graph engineering,gpt-4-1106-preview,8,"The described benchmarking framework for assessing Large Language Models in knowledge graph engineering seems to be highly relevant to prompt engineering as it deals with automatic evaluation and storage of LLM responses. This indicates that prompt engineering plays a crucial role in how well these models perform on the specified tasks of syntax and error correction, fact extraction, and dataset generation. The relevance is not a full 10 because the abstract does not specifically focus on 'hard prefix prompts', but rather on prompt engineering in a more general context within knowledge graph generation."
-corrpus: codex-leveraged structured representations for neurosymbolic story understanding,gpt-4-1106-preview,7,"The abstract discusses the enhancement of neurosymbolic work in natural language generation and understanding tasks through the use of structured prompts (referred to as 'abstracted prompting procedures'). Although the study primarily focuses on story understanding and generation, the mention of 'abstracted prompting procedures', which can be considered a technique within prompt engineering, signifies a relevance to the broader field of prompt engineering studies. However, the context is specific to story understanding tasks rather than a 'comprehensive systematic review on hard prefix prompts'; hence the rating is not a full 10." -s3-dst: structured open-domain dialogue segmentation and state tracking in the era of llms,gpt-4-1106-preview,7,"The study presents a structured prompting technique, which is relevant to prompt engineering as it involves mechanisms used to improve interfacing with language models. The concept of 'Pre-Analytical Recollection' could offer insights into designing prompts that facilitate better state tracking and context understanding in conversations with language models. However, the focus seems to be more on dialogue state tracking and segmentation in the context of LLM-based systems, rather than directly on engineering prompts using hard prefixes. The relevance is therefore not maximal, as it does not directly address hard prefix prompts; however, the structured prompting approach is a component of prompt engineering within the larger scope of utilizing language models for complex tasks." -corrpus: detecting story inconsistencies via codex-bootstrapped neurosymbolic reasoning,gpt-4-1106-preview,8,"The provided abstract discusses the use of abstracted prompting procedures alongside neurosymbolic approaches for story understanding tasks. Although it does not specifically mention 'hard prefix prompts,' the subject of prompt engineering is still highly relevant. The abstract explicitly refers to the design of specialized prompts to guide large language models, which aligns with the broader field of prompt engineering studies. The creation and optimization of prompts to improve the performance of language models on specific tasks is a direct example of prompt engineering work. Therefore, the study appears to be very relevant to those interested in how tailored prompting can enhance model performance, even if it doesn't directly address hard prefix prompts." -automatic chain of thought prompting in large language models,gpt-4-1106-preview,9,"The abstract presents a direct study on improving the effectiveness of large language models using a specific type of prompt engineering strategy known as Chain-of-Thought (CoT) prompting. This is highly relevant to prompt engineering as it addresses the optimization of the prompting process to enhance the performance of language models. The approach of automatically generating CoT prompts (Auto-CoT) to replace manual effort is a significant contribution to the field of prompt engineering. The only reason this is not rated a 10 is that the study does not specifically address 'hard prefix prompts' but rather CoT prompting in general, which is a subset of prompt engineering."
-neuro-symbolic procedural planning with commonsense prompting,gpt-4-1106-preview,8,"The given abstract discusses the use of commonsense-infused prompting to improve procedural planning in large language models, which aligns with prompt engineering concepts. The study presents a neuro-symbolic approach that incorporates commonsense knowledge into prompts to form a causal structure, reflecting an advanced and targeted application of prompts to enhance model performance. Although the focus is more on procedural planning and less on the structure of prompts themselves, the use of prompts generated from knowledge bases and their optimization for better outcomes in language models is fundamentally connected to prompt engineering." -analyzing bert’s knowledge of hypernymy via prompting,gpt-4-1106-preview,9,"The study on BERT's knowledge of hypernymy through prompting directly relates to prompt engineering because it investigates the effectiveness of using prompts to elicit specific linguistic knowledge from a language model. The paper analyzes how well BERT responds to direct prompts about lexical semantic relations, which is a key aspect of prompt engineering. The relevance is rated at 9 instead of a perfect 10 because the focus is specifically on hypernymy recognition, not on the broader range of prompt engineering strategies or types of prompts (like hard prefix prompts mentioned in the original topic), which could have an impact on how language models generate more diverse responses." -prompter: utilizing large language model prompting for a data efficient embodied instruction following,gpt-4-1106-preview,8,"The abstract discusses 'Prompter,' an approach that involves replacing a semantic search module with language model prompting, which is highly relevant to prompt engineering. The utilization of language model prompting to control robots based on natural language instructions is a practical application of prompt engineering, demonstrating how well-crafted prompts can improve performance in embodied instruction following tasks. The work implies a novel use of prompts and their significance in improving data efficiency, which are key topics in prompt engineering research. The rating is not a full 10 because while the paper is related to the use of prompts, it does not explicitly focus on 'hard prefix prompts' per se, but broadly on the application of language model prompts in a different context." -chain of thought prompting elicits reasoning in large language models,gpt-4-1106-preview,9,"The abstract directly discusses the impact of 'chain of thought prompting' on the performance of large language models. Given that 'chain of thought prompting' is a technique used in prompt engineering to elicit detailed reasoning from language models, and the abstract indicates significant performance improvements on complex tasks, it is highly relevant to the study of prompt engineering. It may not score a perfect 10 as it is not exclusively focused on 'hard prefix prompts' which might be a more specialized subset of prompt engineering." -rethinking with retrieval: faithful large language model inference,gpt-4-1106-preview,7,"The paper described involves using 'chain-of-thought (CoT) prompting' which falls under the broader category of prompt engineering in the context of large language models. 
Although the main focus appears to be on improving the model's ability to integrate external knowledge and thus enhance inference, it is still relevant because it discusses a method that modifies how prompts are used to obtain explanations from a model. However, the paper doesn't exclusively focus on the design or study of 'hard prefix prompts', so it may not completely align with studies exclusive to prompt engineering techniques. Therefore, the rating indicates moderate relevance, with points deducted for not being directly focused on hard prefix prompts, yet still relating to prompt engineering methodology." -pop quiz! can a large language model help with reverse engineering?,gpt-4-1106-preview,8,"The abstract discusses the use of prompting techniques with Codex, a large language model, to investigate its utility in reverse engineering tasks. This falls under the broader category of 'prompt engineering' as it involves the strategic formulation of prompts to elicit specific information from a language model regarding code comprehension. The study's focus on the model's response to these prompts and the development of a structured quiz to measure its performance is highly relevant to understanding how different prompt strategies might affect the outcome of interactions with AI. However, it is not precisely about 'hard prefix prompts', which suggests a more specialized aspect of prompt engineering, hence the deduction of 2 points." -least-to-most prompting enables complex reasoning in large language models,gpt-4-1106-preview,9,"The described research directly investigates a novel prompting strategy for language models, which is highly relevant to the field of prompt engineering. The 'least-to-most prompting' method addresses a common limitation in generalizing from easy to hard problems. Given that the strategy involves designing prompts to guide the model through incrementally challenging subproblems, this study contributes significantly to the understanding and development of advanced prompt engineering techniques. Therefore, it scores a 9, as it may not solely focus on 'hard prefix' prompts, but covers a broader approach to prompting that includes handling complex problems." -thoughtsource: a central hub for large language model reasoning data,gpt-4-1106-preview,7,"While the provided title and abstract do not specifically mention hard prefix prompts, the mention of 'large language model reasoning data' implies that the study could include research into various prompt engineering techniques, which may encompass hard prefix prompts. The 'ThoughtSource' project aims to facilitate a qualitative understanding of chain-of-thoughts (CoTs), which is a technique often used in prompt engineering to improve language models' performance. Furthermore, the focus on 'empirical evaluations' and 'providing training data' could be relevant to optimizing hard prefix prompts for better language model outputs. Thus, the study might contribute valuable insights to prompt engineering, albeit not exclusively to hard prefix prompts." -large language model prompt chaining for long legal document classification,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering as it focuses on the technique of 'prompt chaining' to improve the classification of lengthy and complex legal documents. The method specifically involves breaking down the task into parts and using successive prompts, which is at the core of advanced prompt engineering strategies. 
The successful performance improvement over the zero-shot method and the comparison with larger models like ChatGPT demonstrate a direct application and advancement in the field of prompt engineering for complex tasks such as legal document classification." -generate rather than retrieve: large language models are strong context generators,gpt-4-1106-preview,8,"The abstract describes a novel prompting method within the context of large language models, specifically applied to knowledge-intensive tasks. It details a process where the model generates contextual documents from a given question, which aligns with the concept of 'hard prefix prompts' in that it involves crafting inputs to elicit specific types of outputs from a model. Despite not using the exact term 'hard prefix prompt,' the essence of designing prompts to guide the generation of content is central to prompt engineering. The rating is an 8 rather than a 10 because the abstract doesn't explicitly discuss hard prefix prompts or broader prompt engineering strategies beyond its specific 'generate-then-read' method." -lessons learned from gpt-sw3: building the first large-scale generative language model for swedish,gpt-4-1106-preview,7,"While the primary focus of the paper seems to be on the development and evaluation of a Swedish language model (GPT-SW3), it is mentioned that an 'extensive prompting study' was part of the research. Although the details of the prompting study are not provided, it suggests that there was an investigation into how the model responds to different prompts, which is relevant to prompt engineering. The rating isn't higher because the prompting study is not the central focus of the paper, and without more information on the 'hard prefix prompts' aspect, the overall relevance to the specific area of prompt engineering study mentioned cannot be fully assessed." -a recipe for arbitrary text style transfer with large language models,gpt-4-1106-preview,8,"The paper focuses on a prompting method named 'augmented zero-shot learning' for text style transfer using large language models (LLMs). While it does not directly address 'hard prefix prompts,' it is significantly relevant to the broader field of prompt engineering. The concept of instructing LLMs to perform specific style transformations through natural language prompts aligns with the principles of prompt engineering, which involves crafting input prompts to guide the behavior of AI models. Although the study's primary application is text style transfer, the prompting techniques developed could have implications for the design and effectiveness of hard prefix prompts." -dehallucinating large language models using formal methods guided iterative prompting,gpt-4-1106-preview,8,"The abstract describes a study focused on refining the prompting process to reduce 'hallucinations' in large language models, such as ChatGPT, especially for safety-critical applications. Although it doesn't specifically mention 'hard prefix prompts,' the study's aim to create an architecture for iterative prompting and self-monitoring to ensure the accuracy of the models' responses is relevant to prompt engineering. Prompt engineering involves crafting prompts to obtain better performance from language models, and the research on reducing hallucinations can be seen as an advanced form of prompt engineering. The paper's relevance is not a perfect 10, as it doesn't directly address hard prefix prompts but instead looks at a broader issue within prompt engineering itself."
-towards a mathematics formalisation assistant using large language models,gpt-4-1106-preview,8,"The study discusses the efficacy of large language models in formalizing mathematical statements, emphasizing the importance of 'careful input-dependent prompt selection and postprocessing.' This relates closely to prompt engineering as it highlights the critical role of prompt design in achieving higher performance with language models. Though it doesn't focus on 'hard prefix prompts' specifically, the overall concept of optimizing prompts to improve a model's ability to understand and generate specific outcomes is central to prompt engineering studies." -tree of thoughts: deliberate problem solving with large language models,gpt-4-1106-preview,9,"The title 'Tree of Thoughts: Deliberate Problem Solving with Large Language Models' directly refers to an advanced method of prompt engineering for language models. It describes a new framework, Tree of Thoughts (ToT), which improves upon the existing 'Chain of Thought' approach. The abstract explains how this method allows language models to explore different reasoning paths and make more informed decisions. The fact that it facilitates exploration over coherent units of text is highly relevant to the study of hard prefix prompts, as it implies a structured and systematic way to lead and evaluate the language model's output. The significant improvement in problem-solving tasks like Game of 24, Creative Writing, and Mini Crosswords demonstrates the practical impact of this approach on prompt engineering. Despite not using the term 'hard prefix prompts' specifically, the concept and results are very pertinent to the field." -htlm: hyper-text pre-training and prompting of language models,gpt-4-1106-preview,8,"The abstract describes the development and advantages of the HTLM model, which is relevant to prompt engineering insofar as it discusses the model's improved efficiency with hyper-text prompts over plain-text prompts. This indicates a focus on how different formats of prompts influence the performance of language models. It also touches on 'structured prompting,' which is a key aspect of prompt engineering. The relevance is not a perfect 10 since the study is about hyper-text-specific prompting rather than 'hard prefix prompts' in general, but the study is still highly pertinent to the field of prompt engineering." -have llms advanced enough? a challenging problem solving benchmark for large language models,gpt-4-1106-preview,7,"While the abstract discusses a comprehensive benchmark for evaluating large language models on complex problem-solving tasks, involving hard problems from the IIT JEE-Advanced exam, it indirectly relates to prompt engineering. The techniques mentioned, like self-consistency, self-refinement, and chain-of-thought prompting, are part of prompt engineering strategies. These strategies contribute to shaping the input provided to the models in order to improve their output. However, the focus of the study is more on the assessment of the models' abilities and the development of a confidence-thresholding method, rather than on the design or study of prompts (hard prefix prompts) specifically. Thus, the relevance to prompt engineering is significant but not the central theme of the paper."
-radadapt: radiology report summarization via lightweight domain adaptation of large language models,gpt-4-1106-preview,7,"The study discusses adaptation strategies for large language models, including 'discrete prompting', which is relevant to prompt engineering as it involves designing specific prompts to guide the model's performance on a task. While the main focus is on domain adaptation through pretraining and fine-tuning, the mention of discrete prompting shows that the methodology studied does intersect with prompt engineering, especially in how the prompts can affect RRS model effectiveness. Thus, the relevance is significant but not central to prompt engineering studies, which might have a broader scope beyond domain adaptation and parameter tuning." -evaluating factual consistency of summaries with large language models,gpt-4-1106-preview,9,"The abstract addresses the evaluation of factual consistency in summaries using large language models and places a significant focus on the role of prompting methods. The relevance to prompt engineering is high, given that it explores various prompting methods including vanilla, chain-of-thought, and sentence-by-sentence, which are integral to the way LLMs are leveraged to perform tasks. This empirical study contributes to the understanding of how different prompts affect the performance of LLMs, which is a core aspect of prompt engineering. The rating is not a perfect 10 as the study is not exclusively on 'hard prefix prompts' (which was specified in the original prompt engineering study question), but the subject matter is very closely related." -large language models are diverse role-players for summarization evaluation,gpt-4-1106-preview,8,"The provided abstract outlines a study focused on leveraging large language models (LLMs) for the evaluation of text summarization, which is relevant to the domain of prompt engineering. Although the study does not solely concentrate on 'hard prefix prompts', it does propose a framework that involves 'roleplayers prompting mechanism' and 'context-based prompting,' which are examples of prompt engineering techniques used to guide LLMs towards a specific task. The 'multi-roleplayer prompting technology' and 'integrating multiple outputs into the final evaluation results' are indicative of advanced prompt engineering methods to evaluate LLMs' performance on text summarization tasks. The study's high relevance comes from its methodological innovation in prompt engineering for LLM evaluation, but it falls slightly short of perfect relevance due to the absence of a direct focus on 'hard prefix prompts.'" -can chatgpt detect intent? evaluating large language models for spoken language understanding,gpt-4-1106-preview,8,"The paper in question focuses on the ability of language models like ChatGPT to understand and classify intent in spoken language, which is closely related to prompt engineering. In-context learning and prompting are integral parts of language model interactions in natural language understanding tasks. Even though the study does not directly address 'hard prefix prompts,' it discusses the broader context of using prompts to elicit specific model behaviors and understandings, such as intent classification, which is a fundamental part of prompt engineering. The rating is not a full 10 because the study does not specifically focus on 'hard prefix prompts,' but it is highly relevant for anyone studying how prompting affects large language models' abilities." 
-complexity-based prompting for multi-step reasoning,gpt-4-1106-preview,9,"The given abstract discusses the concept of complexity-based prompting as a method for improving the multi-step reasoning capabilities of large-scale language models. This is highly relevant to prompt engineering because it explores how the complexity of prompts affects the performance of models like GPT-3 and Codex on reasoning tasks. The study directly relates to the process of crafting prompts that elicit better responses from language models, thus contributing to the field of prompt engineering. The systematic assessment of how prompt complexity influences the quality of model-generated reasoning chains is a specific aspect of prompt engineering, making the study pertinent though it doesn't focus on 'hard prefix prompts' as a specific type of prompt construction method." -"""according to ..."" prompting language models improves quoting from pre-training data",gpt-4-1106-preview,9,This study is highly relevant to prompt engineering as it explores a specific technique (according to prompting) aimed at improving the accuracy and reliability of Large Language Models by directing them to reference their pre-training data. The introduction of a novel evaluation metric (QUIP-Score) to measure grounding in underlying text corpora is also a significant contribution to the field. The focus on grounding responses and the empirical evidence showing the impact of different prompts on model output are central to the discipline of prompt engineering. -prompting for a conversation: how to control a dialog model?,gpt-4-1106-preview,9,"The paper directly addresses the challenge of prompt engineering by discussing a method to condition prompts on specific queries, which is a key issue in the field of dialog model control. Exploring alternatives to fine-tuning with this form of prompt engineering has direct implications on how to effectively influence the behavior of language models without compromising their diversity and expressiveness. The relevance to prompt engineering is very high because it contributes to the understanding and application of prompting techniques to guide dialog models. The paper's findings on improved BLEU scores and response diversity are valuable metrics when evaluating the performance of prompt-based methods. The only aspect keeping this from a perfect score may be the specificity of the application in dialogue systems, which, while still under the umbrella of prompt engineering, could be seen as a subset of larger prompt engineering challenges." -scaling instruction-finetuned language models,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it focuses on the effects of finetuning language models with instructions, which is a key method for improving the performance of language models on task-specific prompts. However, the study does not directly address 'hard prefix prompts', which may suggest specific, fixed prompts that are difficult for models to interpret, rather than the general approach of instruction finetuning. While the study has a strong connection to the field of prompt engineering by demonstrating the benefits of instruction-based finetuning on various models and benchmarks, the absence of a direct focus on 'hard prefixes' warrants a slightly lower rating." 
-multi-stage prompting for knowledgeable dialogue generation,gpt-4-1106-preview,8,"The paper presents a relevance to prompt engineering study as it focuses on improving dialogue generation by proposing a multi-stage prompting approach with a pretrained language model. This methodology directly relates to the design and refinement of prompts to enhance the model's performance in generating knowledgeable responses. Although the title suggests a dialogue system rather than an explicit 'hard prefix prompt' structure, the concepts of controlling and structuring prompts to improve output are central to prompt engineering. The high relevance score reflects the significance of multi-stage prompting within the broader scope of prompt engineering techniques." -unnatural instructions: tuning language models with (almost) no human labor,gpt-4-1106-preview,9,"The described study is highly relevant to prompt engineering as it involves the creation of a large dataset of instructions for fine-tuning language models, which is a core facet of prompt engineering. The method of using language models to generate additional prompts and then employing these prompts for subsequent model training directly pertains to techniques in prompt engineering. The effectiveness of using generated prompts to achieve comparable or superior performance to human-curated datasets provides valuable insights into prompt engineering methodologies and their potential efficiencies. The point deduction is due to the abstract not addressing 'hard prefix prompts' directly, which may indicate the study doesn't focus specifically on that aspect of prompt engineering." -language models are multilingual chain-of-thought reasoners,gpt-4-1106-preview,7,"The content is relevant to prompt engineering because it discusses the use of prompts (chain-of-thought prompting) to evaluate the reasoning abilities of language models in a multilingual context. Although the focus is on the reasoning abilities and multilingual capabilities of the models rather than on the engineering of prompts per se, the effectiveness of different types of prompts, especially those encouraging a chain of thought, is an essential aspect of prompt engineering. Hence, the study indirectly contributes valuable insights to the field of prompt engineering by showcasing the impact of prompt types on the performance of language models across various languages." -teaching small language models to reason,gpt-4-1106-preview,7,"The abstract is highly relevant to the field of prompt engineering as it discusses the teaching of reasoning capabilities to smaller language models via knowledge distillation from larger models. Even though it does not specifically mention 'hard prefix prompts', it is related to the concept of improving model performance through advanced prompting strategies like chaining thoughts. The study's outcome indicates that refined prompting techniques can transfer complex reasoning skills to smaller models, which is a significant aspect of prompt engineering." -instruction induction: from few examples to natural language task descriptions,gpt-4-1106-preview,9,"The provided title and abstract are highly relevant to prompt engineering study as they explicitly discuss the ability of large language models to generate natural language instructions from a few examples. This ability is directly related to the engineering of prompts, as it involves designing prompts that help the model infer the desired task. 
The systematic exploration and evaluation of this ability are fundamental to understanding and improving prompt engineering strategies. The mention of a novel evaluation metric and differentiation between models based on their alignment with instructions also suggests a nuanced approach to prompt engineering that may yield insights for the systematic review on hard prefix prompts." -weakly supervised data augmentation through prompting for dialogue understanding,gpt-4-1106-preview,8,"The study presented in the prompt directly engages with prompt engineering as it discusses the use of 'prompting' with large pre-trained language models for data augmentation in dialogue understanding tasks, which is a subset of prompt engineering. The relevance is high because it examines the iterative improvement of prompts through weakly-supervised techniques, although it may not focus exclusively on 'hard prefix prompts' but rather on the broader context of prompts for few-shot learning and augmentation. Given that it deals with prompts and language models and their application in a practical task, the study is substantially related to the field of prompt engineering." -pointclip v2: prompting clip and gpt for powerful 3d open-world learning,gpt-4-1106-preview,7,"The study discusses utilizing both CLIP and GPT models in unison to enhance 3D open-world learning, with specific emphasis on zero-shot learning capabilities in classification, segmentation, and detection tasks. The relevance to prompt engineering is evident in the methodology where the authors design prompts for both the visual (CLIP) and textual (GPT) components to align 3D data with the pre-trained language knowledge. This indicates an element of prompt engineering to facilitate the interface between visual and language models for processing 3D point cloud data. Nevertheless, the study appears to be more focused on the application of these models in the 3D domain rather than specifically on the engineering of prompts. Hence, while prompt engineering is a component of the paper, it is not the core focus, which is why the rating is not higher." -errors are useful prompts: instruction guided task programming with verifier-assisted iterative prompting,gpt-4-1106-preview,7,"The relevance of the provided abstract to prompt engineering is fairly high, as the paper focuses on a method, CLAIRIFY, that uses iterative prompting combined with program verification. These techniques are critical for refining the interaction between humans and AI to generate accurate outputs, which is a central theme in prompt engineering. While the study is not about 'hard prefix prompts' specifically, it contributes to prompt engineering by exploring error utilization and iterative prompting to improve task programming, which could be applied in the broader context of prompt engineering studies. Therefore, a rating of 7 seems appropriate, given it may indirectly inform methodologies within prompt engineering but is not wholly centered on the specific concept of hard prefix prompts." -language is not all you need: aligning perception with language models,gpt-4-1106-preview,7,"While the provided abstract does not directly discuss 'hard prefix prompts' or 'prompt engineering,' it details the capabilities of Kosmos-1, a Multimodal Large Language Model (MLLM), which is relevant to the field of prompt engineering. 
The ability of Kosmos-1 to learn in context and follow instructions, including zero-shot and few-shot settings, as well as its evaluation in multimodal chain-of-thought prompting, relates closely to how prompts can be engineered and optimized to interact with language models. Moreover, the cross-modal knowledge transfer mentioned is a component of understanding how prompts can be designed to leverage language in multimodal environments. However, since the focus is primarily on the model's capabilities rather than on the study of prompts themselves, the relevance rating is not a maximal score." -improving factuality and reasoning in language models through multiagent debate,gpt-4-1106-preview,8,"The paper described is highly relevant to prompt engineering as it discusses a novel method for improving language model responses through a multiagent debate system. Although it does not specifically mention a 'hard prefix prompt', the techniques involved in creating prompts that facilitate a debate among language models are closely linked to advanced prompt engineering strategies. The 'society of minds' approach likely involves intricate prompting mechanisms to orchestrate the debate process. This has a direct bearing on the study and advancement of prompting methods, making the paper's content pertinent to the field. However, the rating is not a full 10 due to the lack of explicit mention of 'hard prefix prompts', which are the specific focus of the prompt engineering study mentioned." -orca: interpreting prompted language models via locating supporting data evidence in the ocean of pretraining data,gpt-4-1106-preview,8,"The abstract discusses a novel method named ORCA for interpreting how prompted language models such as BERT perform tasks by locating supporting data from pretraining, which is highly relevant to studies on 'prompt engineering.' Understanding how models relate to pretraining data when generating responses to prompts is a crucial aspect of prompt engineering. It informs how models process prompts and can lead to designing better prompts that leverage the model's knowledge effectively. However, the focus on 'hard prefix prompts' hasn't been explicitly mentioned, which might slightly reduce its relevance to that specific field of study." -prefix-tuning: optimizing continuous prompts for generation,gpt-4-1106-preview,8,"The paper discusses 'prefix-tuning,' which is highly relevant to the field of prompt engineering as it involves optimizing task-specific vectors (prefixes) to improve performance on natural language generation tasks without the need to fine-tune all parameters of a language model. While the term 'hard prefix prompts' isn't explicitly used, the concept of prefix-tuning relies on a similar principle of using prompts (in this case, a trainable prefix) to guide the behavior of a language model. This is pertinent to the study of how prompts affect model performance and behavior, thus earning a high relevance rating. However, it's not a perfect match because the prompt specified a 'hard prefix prompts' review, and this paper focuses on a subset of prompt engineering that is not strictly the 'hard prefix.'" -segment everything everywhere all at once,gpt-4-1106-preview,8,"The abstract provided describes the creation of an interactive and promptable model (SEEM) for image segmentation tasks that is inspired by the mechanism of large language models (LLMs). 
Since prompt engineering refers to the design and refinement of prompts to effectively interact with models, such as LLMs, the study of SEEM's novel decoding mechanism that allows for diverse prompting is relevant to the field of prompt engineering. SEEM's ability to handle different types of dynamic prompts and its focus on a joint visual-semantic space are aspects that can provide valuable insights into how prompts can be optimized for better interaction with models across various domains. The work also touches on compositionality and semantic-awareness, both of which are key concepts in prompt engineering. While the focus is on image segmentation, the principles of designing prompts for interactive and semantic tasks align closely with prompt engineering methodologies. Therefore, the relevance rating is high but not maximum because the primary application is in the domain of image segmentation rather than text-based models, which are more commonly associated with prompt engineering." -verify-and-edit: a knowledge-enhanced chain-of-thought framework,gpt-4-1106-preview,8,"The abstract describes a method for improving the performance of large language models by addressing the factuality of generated content through a Verify-and-Edit framework in the context of Chain-of-Thought prompting. This is highly relevant to prompt engineering as it presents a new technique for refining prompts to enhance model factuality and trustworthiness. Although it does not directly address 'hard prefix prompts,' it contributes to the broader field of prompt engineering by presenting a strategy to improve output quality, which is a crucial aspect of the study of prompts and their optimizations. Therefore, it scores high on relevance, but not the maximum due to its specific focus on factuality rather than prompt types." -graphprompt: unifying pre-training and downstream tasks for graph neural networks,gpt-4-1106-preview,8,"The paper discusses a novel framework called GraphPrompt, which is directly related to prompt engineering in the context of graph neural networks (GNNs). While the study's focus is on the application of prompts to GNNs rather than text-based models traditionally associated with prompt engineering, it still contributes to the overall field of prompt engineering by extending its principles to another domain of artificial intelligence. The relevance to prompt engineering is high as it involves the development of a learnable prompt to bridge the gap between pre-training and downstream tasks, which is a core concept in prompt engineering studies." -symbolic chain-of-thought distillation: small models can also “think” step-by-step,gpt-4-1106-preview,9,"The abstract describes a method called Symbolic Chain-of-Thought Distillation (SCoTD) that directly relates to prompt engineering, as it involves training smaller language models on the rationalizations produced by larger models. This process is a form of prompt engineering since it deals with enhancing the ability of smaller models to sequentially reason through problems, akin to crafting effective prompts that guide model reasoning. The high relevance rating is due to the focus on improving model performance through engineered prompts (chain-of-thought prompting), which is central to prompt engineering studies. However, the rating is not a full 10 because the abstract does not explicitly mention 'hard prefix prompts' or a systematic review, which is specifically noted in the prompt." 
-towards revealing the mystery behind chain of thought: a theoretical perspective,gpt-4-1106-preview,8,"The provided title and abstract discuss the effectiveness of Chain-of-Thought (CoT) prompting in improving the performance of Large Language Models (LLMs), particularly for complex tasks. While the study does not explicitly mention 'hard prefix prompts,' it is closely related to prompt engineering, as CoT is a form of prompting strategy used to enhance the problem-solving capabilities of LLMs. The relevance to prompt engineering is high because the theoretical perspective on the mechanism of CoT can contribute significantly to the understanding and development of advanced prompt engineering techniques. However, the rating is not a full 10 because the explicit focus is not on hard prefix prompts but rather on a broader category of CoT prompting strategies." -zeroshotdataaug: generating and augmenting training data with chatgpt,gpt-4-1106-preview,8,"This paper is highly relevant to prompt engineering study as it directly explores the generation of synthetic data using task-specific prompts with ChatGPT. The study delves into the principles of prompt engineering by designing appropriate prompts that lead to superior performance in data augmentation for low resource scenarios. While the paper does not specifically mention 'hard prefix prompts' and the focus is more on data augmentation rather than the core concept of prompt engineering, the underlying premise involves crafting effective prompts to elicit desired outputs from a language model, which is a central aspect of prompt engineering." -meet your favorite character: open-domain chatbot mimicking fictional characters with only a few utterances,gpt-4-1106-preview,8,"The paper presents a method, Pseudo Dialog Prompting (PDP), which is highly relevant to prompt engineering study as it directly involves designing prompts to induce specific behaviors from a language model (mimicking fictional characters). This directly contributes to the broader field of prompt engineering by exploring how to effectively use limited data (a few utterances) to shape the output of a language model. It might not cover 'hard prefix prompts' in the systematic review sense but provides practical insights into the application of prompt engineering for conversational AI." -towards facet-driven generation of clarifying questions for conversational search,gpt-4-1106-preview,8,"The study described in the provided title and abstract demonstrates relevance to prompt engineering as it involves generating clarifying questions in response to user queries using a fine-tuned GPT-2 language model. This is closely related to prompt engineering as it requires careful design of prompts, or inputs, to the language model to ensure that the generated questions are coherent, relevant, and useful in the context of conversational search. While the main focus of the paper seems to be on the generation of clarifying questions rather than on hard prefix prompts specifically, the techniques and findings are likely applicable to prompt engineering studies, especially those concerned with improving interaction patterns with AI systems through conversational interfaces. The only reason the rating isn't higher is because 'hard prefix prompts' isn't explicitly mentioned, but the methodology and goals are nevertheless aligned with the principles of prompt engineering." 
-promptchainer: chaining large language model prompts through visual programming,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it involves creating complex tasks by sequencing multiple prompt-driven interactions with a Large Language Model (LLM). While it doesn't specifically mention 'hard prefix prompts,' it approaches the broader topic of prompt design and chaining, which is a subset of prompt engineering. It also focuses on the user-interface side of prompt engineering through the PromptChainer tool, making it relevant for researchers and practitioners interested in optimizing the human-model interaction process. However, the rating is not a full 10 because the study does not directly focus on 'hard prefix prompts' specifically, which is the exact topic of interest." -"grips: gradient-free, edit-based instruction search for prompting large language models",gpt-4-1106-preview,9,"The article describes an innovative approach to prompt engineering specifically designed for large language models, which is directly relevant to the prompt engineering study. The 'Gradient-free Instructional Prompt Search (GrIPS)' is highly relevant as it directly addresses the challenge of improving language model performance through prompt optimization without the need for computationally expensive gradient-based methods. The relevance is slightly below 10 because the systematic review is not solely focused on hard prefix prompts, but on a broader method of prompt improvement. Nevertheless, the study's contributions to the field of prompt engineering are substantial and directly applicable to the systematic review topic." -ai chains: transparent and controllable human-ai interaction by chaining large language model prompts,gpt-4-1106-preview,8,"The study addresses a novel approach to interacting with large language models through 'Chaining LLM steps', indicating a clear relevance to the field of prompt engineering. Chaining can be viewed as an advanced form of prompt engineering where prompts are not static but follow a dynamic, modular process. Although the study does not directly discuss 'hard prefix prompts,' it explores the controllability and transparency of LLMs, which are crucial aspects in designing effective prompts. The relevance rating is not a full 10 because the study's focus is on chaining mechanisms rather than the specific concept of 'hard prefix prompts.'" -craft an iron sword: dynamically generating interactive game characters by prompting large language models tuned on code,gpt-4-1106-preview,7,"The abstract indicates a study that involves using example conversational prompts with a language model to enhance NPC interactions in games. While the main focus seems to be on generating natural language and code for game development purposes, the underlying premise is that these prompts are essential in directing the behavior of the language model. This relates to the subject of prompt engineering, as the quality and design of the prompts directly affect the output and capabilities of the conversational agent. However, the study does not appear to focus primarily on the systematic review of 'hard prefix prompts' specifically, hence the rating is not a perfect 10. The findings could still contribute valuable insights into prompt engineering as it relates to practical applications in game design and NPC character development." 
-learning to prompt clip for monocular depth estimation: exploring the limits of human language,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering as it explores the efficiency of CLIP—a model trained on language and vision inputs—when prompted for a specialized task like Monocular Depth Estimation. The research discusses replacing human-language prompts with continuous learnable tokens, which directly pertains to prompt engineering by investigating alternative ways to communicate with AI models. It demonstrates how prompt design can influence performance and understanding of AI models, which is a central concern of prompt engineering studies. The fact that it also touches upon the limitations of human language in prompts and investigates non-linguistic tokens is a novel contribution to the field." -zero-shot rumor detection with propagation structure via prompt learning,gpt-4-1106-preview,8,"The abstract discusses a new approach to rumor detection using a prompt learning framework which is directly relevant to the field of prompt engineering. The study addresses the design of prompts and their integration with data representations and structural features, which are core considerations for prompt engineering. However, the study is more focused on the application of prompt learning for rumor detection rather than the general study of 'hard prefix prompts', so it may not fully cover the systematic review aspect that the hypothetical study on hard prefix prompts suggests." -efficiently enhancing zero-shot performance of instruction following model via retrieval of soft prompt,gpt-4-1106-preview,8,"The described study focuses on the use of soft prompts to improve the zero-shot performance of instruction-following models, specifically mentioning the assistance of these soft prompts to hard prompts. This is relevant to prompt engineering as the research is exploring an innovative approach to optimize how prompts are used, which lies at the core of prompt engineering. The relevance is not maximized (10 out of 10) because the study does not directly focus on 'hard prefix prompts' as specified in the original query but is sufficiently related as it investigates the conjunction of soft and hard prompts in the context of model tuning and performance enhancement. Therefore, it contributes valuable insights to the broader field of prompt engineering studies." -enhancing class understanding via prompt-tuning for zero-shot text classification,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering as it proposes a method that explicitly uses prompts to enhance semantic understanding in zero-shot text classification tasks. This approach falls within the scope of prompt engineering as it involves the generation of discriminative words (presumably prompts) and a matching model conditioned on prompts. The study focuses on enhancing class understanding which is a key aspect of prompt-based models, although it does not specifically mention 'hard prefix prompts', which was the focus of the original prompt." -prompt-based zero-shot relation classification with semantic knowledge augmentation,gpt-4-1106-preview,9,"The abstract describes a study focused on leveraging prompt-based approaches along with semantic knowledge to address the challenge in relation classification, especially for unseen relations under a zero-shot setting. 
The methodology described involves creating prompts that incorporate semantic knowledge from an external knowledge graph and using these to train a model. This aligns closely with the field of prompt engineering as it specifically addresses the development and use of prompts to guide model performance in a challenging AI task. The reason for not giving a full 10 is due to the absence of specific mention of 'hard prefix prompts,' which may indicate this study does not focus exclusively on that aspect of prompt engineering." -matching exemplar as next sentence prediction (mensp): zero-shot prompt learning for automatic scoring in science education,gpt-4-1106-preview,8,"The abstract describes a study that investigates the use of a zero-shot approach to automatically score student responses in science education using a novel method called Matching Exemplars as Next Sentence Prediction (MeNSP). This approach is highly relevant to the field of prompt engineering, as it involves the use of prompts to align with a scoring procedure without the need for fine-tuning. While the abstract does not explicitly mention 'hard prefix prompts', it does discuss prompt-based techniques for language model adaptation, which falls under the broader umbrella of prompt engineering. Therefore, the rating is an 8, indicating high relevance due to the innovative application of prompt-related methods in an educational context, but not a perfect score as the specific term 'hard prefix prompts' was not discussed." -controlling personality style in dialogue with zero-shot prompt-based learning,gpt-4-1106-preview,9,"The abstract describes a study focused on 'prompt-based learning' for controlling both personality and semantic accuracy in natural language generation, which is highly relevant to the field of prompt engineering. The experimentation with different classes of prompts and their effects on the NLG performance directly pertains to how prompts can be engineered to achieve specific outcomes. The high rating acknowledges the direct relevance to prompt engineering studies, especially within the context of controlling specific attributes in generated text, which is a crucial aspect of prompt engineering. The only reason it does not receive a full score might be because it does not explicitly address 'hard prefix prompts' but rather prompt-based learning in general." -structured prompt interrogation and recursive extraction of semantics (spires): a method for populating knowledge bases using zero-shot learning,gpt-4-1106-preview,8,"The given abstract describes a method, SPIRES, for populating knowledge bases using Large Language Models (LLMs) through zero-shot learning and prompt interrogation. As prompt engineering involves the design and refinement of prompts to effectively communicate with AI models, this abstract is highly relevant, as it suggests a structured way to use prompts to extract information and populate databases, a task that directly pertains to how prompts are constructed and their effectiveness. The rating is not a perfect 10 as the abstract specifically focuses on knowledge extraction and ontologies, which is a subset of prompt engineering." -bayesian sharpness-aware prompt tuning for cross-domain few-shot learning,gpt-4-1106-preview,8,"The paper presents a novel approach to prompt tuning, specifically Bayesian Sharpness-Aware Prompt Tuning (BSAPT), within the context of few-shot learning and domain adaptation. 
This is highly relevant to prompt engineering as it directly focuses on enhancing the method through which prompts are constructed and tuned, a core aspect of prompt engineering studies. The application to cross-domain few-shot learning demonstrates an advanced utilization of prompt engineering techniques. The rating is not a full 10 because the abstract suggests a specific application of prompt engineering rather than a comprehensive study of hard prefix prompts in general." -zero-shot generative model adaptation via image-specific prompt learning,gpt-4-1106-preview,7,"The provided abstract discusses Image-specific Prompt Learning (IPL), a methodology related to adapting generative models using text-based prompts, which is highly relevant to the field of prompt engineering. Although the text does not directly address 'hard prefix prompts', it does tackle the use of text prompts in controlling and improving the output of generative models, thus making significant contributions to the broader topic of prompt engineering. The connection to prompt engineering is substantial as IPL is an innovative way of providing domain-specific textual directions to a generative model, which aligns with the disciplines involved in studying how prompts affect the behavior of AI models. However, it does not fully align with a 'comprehensive systematic review on hard prefix prompts' as the abstract seems to focus on a specific application rather than a broad review. Hence, the rating is not a perfect score." -relationprompt: leveraging prompts to generate synthetic data for zero-shot relation triplet extraction,gpt-4-1106-preview,9,"The study directly addresses prompt engineering by exploring how prompts can be used to generate synthetic data for a Zero-Shot Relation Triplet Extraction task. It presents a novel method of leveraging language model prompts in conjunction with structured text approaches to create relation samples, which is a significant contribution to prompt engineering literature. The fact that they also designed a novel decoding method to work with their prompting strategy further emphasizes its high relevance to the field of prompt engineering." -decoupling knowledge from memorization: retrieval-augmented prompt learning,gpt-4-1106-preview,9,"The presented abstract is highly relevant to prompt engineering study as it directly addresses the concept of prompt learning, which is a cornerstone of prompt engineering. It proposes a novel method, RetroPrompt, which aims to enhance the general learning capabilities of language models by decoupling knowledge from memorization. This pertains to an advanced area within prompt engineering that targets improvements in model generalization and few-shot learning abilities, both of which are critical metrics in evaluating the effectiveness of prompts. Although it does not explicitly mention 'hard prefix prompts,' the subject matter is closely related to the broader field of prompt design and optimization." -zero-shot video captioning with evolving pseudo-tokens,gpt-4-1106-preview,7,"The abstract describes a method for zero-shot video captioning that involves a form of prompt engineering by optimizing part of the prompt during the generation process. This relates to the prompt engineering study as it includes the manipulation of prompts to improve language model outputs. Although it does not specifically mention 'hard prefix prompts,' the concept of evolving pseudo-tokens could potentially fall under a broader interpretation of prompt engineering. 
Therefore, the relevance is fairly high but not completely aligned, as the central focus is on video captioning rather than prompt engineering in isolation." -improving few-shot performance of language models via nearest neighbor calibration,gpt-4-1106-preview,7,"The study targets the optimization of in-context learning for pre-trained language models (PLMs), which is closely related to prompt engineering, as it deals with the arrangement and selection of prompts to enhance few-shot learning performance. The introduction of a nearest-neighbor calibration framework addresses the effectiveness of prompts. Even though the study does not explicitly mention 'hard prefix prompts', the principles and methodologies used for calibration and enhancement of few-shot learning may be applicable to the systematic review and improvement of hard prefix prompts. Hence, the study is relevant but not fully focused on hard prefix prompts, leading to a rating of 7." -few-shot fine-grained entity typing with automatic label interpretation and instance generation,gpt-4-1106-preview,7,"The abstract discusses a novel framework for few-shot Fine-grained Entity Typing (FET) that utilizes prompt-based tuning, which is directly related to the concept of prompt engineering. It addresses the challenge of how to effectively design prompts (verbalizers) automatically, considering the target corpus and label hierarchy, which is a core problem in prompt engineering studies. Moreover, it introduces a generation aspect to create new instances, hinting at iterative prompt improvement or instance augmentation, which could be relevant for generating more effective prompts. However, the study seems to focus more on entity typing within a few-shot learning framework rather than on hard prefix prompts specifically or prompt engineering more broadly, which may include a variety of other techniques and applications. Therefore, the rating is not a full 10 but still significant due to its partial relevance." -natural language inference prompts for zero-shot emotion classification in text across corpora,gpt-4-1106-preview,9,"The paper is highly relevant to prompt engineering as it examines the effects of different prompt formulations on the performance of a natural language inference-based zero-shot-learning classifier. This is directly related to the field of prompt engineering, which involves studying how the design of prompts influences the behavior and output of language models. The study's focus on tailoring prompt selection to fit specific language corpora aligns well with prompt engineering objectives, which seek to optimize interactions with language models for various tasks, including emotion classification mentioned in the abstract." -clinical prompt learning with frozen language models,gpt-4-1106-preview,8,"The abstract discusses the application of prompt learning within the specialized domain of clinical texts, comparing its effectiveness to traditional fine-tuning methods. While it doesn't focus exclusively on 'hard prefix prompts', prompt learning is a closely related aspect of prompt engineering. It's highly relevant to a study on prompt engineering, particularly due to the exploration of efficiency and domain-specific challenges, which are key considerations in the field. However, the absence of a specific mention of 'hard prefix prompts' precludes a perfect score."
-language models as zero-shot planners: extracting actionable knowledge for embodied agents,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering as it explores the use of language models to interpret and execute high-level tasks by breaking them down into actionable steps. This indicates a level of prompt engineering where the model is not only responding to prompts but is being evaluated on its ability to translate prompts into a sequence of actions in a simulated environment. Although the title does not explicitly mention 'hard prefix prompts', the concept of prompt engineering is central to the study as it requires effective prompts to guide the model in generating plans that can map to executable actions. The study's focus on grounding tasks and improving the executability of plans derived from language models is at the core of advanced prompt engineering techniques." -few-shot table-to-text generation with prefix-controlled generator,gpt-4-1106-preview,8,"The study presents a prompt-based approach, specifically the Prefix-Controlled Generator, which is highly relevant to the field of prompt engineering. It addresses the challenge of few-shot table-to-text generation by pre-pending task-specific prompts to improve the ability of Pre-trained Language Models to handle structured data like tables. The focus on controlling the output through hard prefixes is directly applicable to prompt engineering. The two-point deduction from a perfect score acknowledges that the paper might be tangentially related to a 'systematic review on hard prefix prompts' since it appears to be a novel methodology rather than a review. However, the proposed method's successful application in a few-shot learning context and control over PLM outputs keeps it highly relevant to the study of engineering prompts for language models." -p3 ranker: mitigating the gaps between pre-training and ranking fine-tuning with prompt-based learning and pre-finetuning,gpt-4-1106-preview,8,"The abstract provided discusses the utilization of prompt-based learning in the context of adapting pre-trained language models for search ranking tasks. This approach aligns closely with prompt engineering, which focuses on designing prompts that effectively guide models to perform specific tasks or understand particular contexts. The P3 Ranker's emphasis on converting the ranking task to fit a pre-training schema using prompts directly relates to the study of prompt engineering, justifying a high relevance rating. Although the paper specifically targets the search ranking domain and may not address hard prefix prompts directly, the principles of prompt-based learning discussed are central to prompt engineering studies." -prompt tuning with soft context sharing for vision-language models,gpt-4-1106-preview,9,The paper presents research directly relevant to prompt engineering by discussing a novel methodology for prompt tuning in vision-language models. The primary focus on fine-tuning models for few-shot tasks using a shared meta network for prompt generation aligns closely with advanced techniques in prompt engineering. The relevance is only slightly less than maximum because it is specifically about vision-language models and may not cover the broader aspects or methods used in all types of models related to 'prompt engineering.' 
-prompt-tuning can be much better than fine-tuning on cross-lingual understanding with multilingual language models,gpt-4-1106-preview,8,"The abstract discusses the effectiveness of prompt-tuning compared to fine-tuning in multilingual language models for natural language understanding tasks. The relevance to prompt engineering is significant, as prompt-tuning is a method of prompt engineering that modifies the input prompt to improve model performance, without extensive retraining. This is particularly applicable to the engineering study of 'hard prefix prompts' as it provides empirical evidence of how different prompting strategies can impact cross-lingual understanding and transferability of language models. The reason why it is not a full 10 is that it does not specifically discuss 'hard prefix prompts,' but rather prompt tuning in a general sense, and thus, it is not exclusively focused on the prompt engineering aspect described in the original query."
-exploiting domain-slot related keywords description for few-shot cross-domain dialogue state tracking,gpt-4-1106-preview,7,"The paper describes an approach to enhancing dialogue state tracking by using domain-slot related descriptions which act as prompts to identify slot information. This is relevant to prompt engineering because the paper discusses a method of designing and utilizing prompts (in the form of domain-slot descriptions) to improve the performance of an NLP model. Furthermore, the results indicate that these engineered prompts (domain-slot descriptions) help the model to outperform other methods. While the focus is on dialogue state tracking rather than on prompt engineering directly, the usage of customized descriptions to improve model performance does partially fall under the broader umbrella of prompt engineering."
-decorate the examples: a simple method of prompt design for biomedical relation extraction,gpt-4-1106-preview,9,"The title and abstract indicate that the paper directly addresses prompt design, an essential aspect of prompt engineering, specifically for the task of biomedical relation extraction. The use of a systematic method to generate prompts and the evaluation of their effectiveness in the context of fine-tuning and few-shot learning are highly relevant to studying prompt engineering. Furthermore, the concrete results showing improved performance by using prompts suggest practical significance in the field. The only reason for not giving a full score of 10 is that the paper focuses on a specific domain (biomedical), which may slightly limit the breadth of its relevance to prompt engineering in general, even though the methodology may be applicable across different domains."
-slot dependency modeling for zero-shot cross-domain dialogue state tracking,gpt-4-1106-preview,8,"The study's focus on utilizing slot prompts combination in dialogue state tracking is highly relevant to prompt engineering due to its emphasis on prompt construction for capturing dependencies and domain knowledge in natural language processing tasks. Although it is not directly focused on 'hard prefix prompts', the principles of designing and utilizing prompts for zero-shot learning are closely related to prompt engineering, hence the high relevance rating."
-pre-trained language models can be fully zero-shot learners,gpt-4-1106-preview,8,"The abstract is highly relevant to prompt engineering as it discusses a method (NPPrompt) for zero-shot language understanding that relies on pre-trained language models without the need for labeled data, fine-tuning, or human-constructed prompts. This directly pertains to the study of prompting since it tackles the challenge of leveraging the underlying knowledge of PLMs for various NLP tasks using a novel prompting technique. While it doesn't specifically mention 'hard prefix prompts,' it is within the domain of research and advancing the understanding of how to use prompts effectively with PLMs. The rating is not a full 10 because the direct relevance to 'hard prefix prompts' is not explicit, which might be specifically addressed in a comprehensive systematic review on that sub-topic."
-multitask prompted training enables zero-shot task generalization,gpt-4-1106-preview,9,"The provided abstract discusses the development of a system for mapping natural language tasks into a prompted form and explicitly training a model on a diverse set of prompts. This is highly relevant to prompt engineering as it explores the creation and use of different prompts to achieve zero-shot task generalization. The focus on prompted datasets is directly tied to the study of how prompts affect language model behavior, a core aspect of prompt engineering. The relevance is not a full 10 because the abstract does not specifically mention 'hard prefix prompts', which could be a more narrow subtopic within prompt engineering."
-tess: zero-shot classification via textual similarity comparison with prompting using sentence encoder,gpt-4-1106-preview,8,"The mentioned study on the TeSS (Text Similarity Comparison using Sentence Encoder) framework is highly relevant to prompt engineering because it focuses on a method where label assignment in zero-shot classification is achieved through the comparison of embeddings from text input and label prompts. This process is integral to prompt engineering as it relies on the design and utilization of prompts that can effectively represent the semantic space for classification tasks. The use of external corpora to enhance the descriptive power of label prompts (TeSS-R) is particularly pertinent to prompt engineering research. However, the study did not explicitly focus on 'hard prefix prompts,' which would encompass a specific subset of prompting techniques and strategies, hence the rating of 8 rather than a perfect 10."
-zero-shot program representation learning,gpt-4-1106-preview,7,"The abstract discusses 'Zecoler', which utilizes the concept of inserting trainable prompts into code to elicit knowledge from pre-trained models in the context of code representation learning tasks. This approach is relevant to prompt engineering study because it involves optimizing the input to a pre-trained model through trainable prompts, which is akin to hard prompting strategies. The concept of transforming downstream tasks into the form of pre-training tasks using prompts is central to prompt engineering. However, the focus on code intelligence tasks and domain-specific applications like Solidity reduces the relevance slightly, as a comprehensive systematic review on hard prefix prompts may encompass a broader range of tasks and domains beyond code representation learning."
-queryform: a simple zero-shot form entity query framework,gpt-4-1106-preview,7,"The study presents a zero-shot transfer learning framework called QueryForm, which includes a 'dual prompting mechanism.' Although the paper does not focus specifically on 'hard prefix prompts' as a separate study area, the concept of using prompts to extract information from a model without task-specific training data is a form of prompt engineering. The relevance to prompt engineering lies in the framework's ability to influence a model's behavior with carefully constructed queries (prompts). However, the paper discusses prompting within the context of a specific document understanding task rather than a wider exploration of various prompt engineering techniques. The rating reflects relevance in terms of prompting mechanisms and their application, but it is not a direct study of hard prefix prompts in a comprehensive manner."
-generating variable explanations via zero-shot prompt learning,gpt-4-1106-preview,8,"The abstract addresses the use of 'zero-shot prompt learning' as a central method in generating explanations for variables in programming, which is relevant to the field of prompt engineering. Prompt engineering typically involves designing and refining prompts to improve interaction with AI models, and the study’s focus on leveraging prompts in a zero-shot context to enhance program comprehension is closely related. However, it does not specifically address 'hard prefix prompts' which would be more directly related to the exact terminology in the prompt engineering study. Hence, a couple of points are deducted for the specialized focus on variable explanations rather than the actual construction or analysis of prompt formats or their impacts in broader applications."
-prompt gating: a parameter efficient tuning method for zero-shot multi-source translation,gpt-4-1106-preview,8,"The paper introduces 'Prompt Gating', a method that appends prompts to model inputs, which is directly related to prompt engineering as it involves manipulating prompts to achieve better performance in a machine learning task. The study's relevance to prompt engineering is high because it deals with the integration of prompts into translation models and discusses their impact. The fact that it is applied to machine translation, however, makes it slightly less relevant than if it would have been a study solely focused on prompt engineering for a broader range of applications."
-peinet: joint prompt and evidence inference network via language family policy for zero-shot multilingual fact checking,gpt-4-1106-preview,8,"Although the title and abstract do not specifically mention 'hard prefix prompts', they discuss the concept of using joint prompt and evidence inference for zero-shot multilingual fact-checking. This is relevant to prompt engineering as it involves the design of prompts (in this case, for understanding and verifying multilingual claims) and how these prompts interact with an AI model to achieve better performance in a specific task. The novel approach of combining prompts with a mechanism for evidence aggregation aligns with prompt-based methodologies. Hence, the paper is quite relevant to the study of prompt engineering, although it is not directly focused on 'hard prefix prompts,' which might be a specific subset of prompt engineering."
-an exploration of prompt-based zero-shot relation extraction method,gpt-4-1106-preview,8,"The relevance to prompt engineering is high because the work involves prompt-tuning, a technique directly related to prompt engineering. It suggests optimizing a model for zero-shot relation extraction by utilizing prompts which influence the model's predictions. Although it's not specifically about 'hard prefix prompts' as the original prompt indicates, prompt-tuning is a subset of prompt engineering and thus highly relevant to studies of prompts and their impact on model performance. The rating is not a full 10 due to the abstract being unavailable ('nan'), which limits the ability to fully assess the relevance, and the absence of direct mention of 'hard prefix prompts', which the original study prompt seems to specify."
-prompt-guided scene generation for 3d zero-shot learning,gpt-4-1106-preview,7,"The paper presents an application of prompt engineering in the context of 3D zero-shot learning, where prompts are used to guide scene generation and are integral to the architecture of the learning model. Although prompt engineering is usually discussed in relation to natural language processing, this study adapts the concept for a novel application in 3D data augmentation and model training. It is relevant to the broader field of prompt engineering in that it showcases its adaptability and potential in different areas of AI. However, it might not be considered a pure study of prompt engineering in the textual or linguistic sense, hence the rating is not a full 10."
-prompt-based zero-shot relation extraction with semantic knowledge augmentation,gpt-4-1106-preview,8,"The paper discusses a prompt-based model, which is highly relevant to the field of prompt engineering, particularly in the context of zero-shot learning. The focus on generating prompts with semantic knowledge integration touches on a core area of how prompts can be engineered to improve task performance in natural language processing. The relevance score is not a full 10 because the study seems to emphasize the zero-shot relation extraction aspect alongside prompt engineering, rather than being exclusively focused on the methodologies for creating and optimizing prompts (i.e., hard prefix prompts). Nevertheless, the paper still offers substantial insight into the application of prompt engineering concepts."
-from visual prompt learning to zero-shot transfer: mapping is all you need,gpt-4-1106-preview,8,"The article discusses a novel approach to adapting large-scale pre-trained models to new tasks using a technique called SeMap, which aligns semantic knowledge for visual prompt learning. The relevance to prompt engineering is high because the research deals with the optimization and creation of prompts that facilitate the use of pre-trained models in new tasks without fine-tuning (zero-shot transfer). This is closely related to the concept of hard prefix prompts in prompt engineering, where the goal is to improve the interaction with a model to produce better performance on target tasks. However, since the main focus is on visual prompt learning rather than hard prefix prompts specifically, the rating is not a full 10."
-layout and task aware instruction prompt for zero-shot document image question answering,gpt-4-1106-preview,7,"The relevance to prompt engineering is moderately high because the paper discusses the use of instruction-tuning language models and emphasizes the understanding of layout via spaces and line breaks, which relates to generating prompts that are layout-aware. The proposed LATIN-Prompt and LATIN-Tuning are direct applications of modifying prompts to include layout information and improve task performance, which is a form of prompt engineering. However, the paper is more focused on the interaction between layout awareness and zero-shot learning, rather than on hard prefix prompts specifically. Therefore, while the study is relevant to prompting techniques and their optimizations in the context of language models, it does not directly address the systematic review of hard prefix prompts."
-navigating prompt complexity for zero-shot classification: a study of large language models in computational social science,gpt-4-1106-preview,9,"The study directly addresses the role of different prompting strategies in the performance of large language models on classification tasks, which is a core component of prompt engineering. The exploration of how prompt complexity and modifications affect model performance is highly relevant to understanding the mechanisms by which prompts can be engineered for better outcomes in natural language processing tasks. Although the study does not specifically mention 'hard prefix prompts,' it does analyze the influence of variations in prompts, which is closely related to the concept of prompt engineering."
-injecting commonsense knowledge into prompt learning for zero-shot text classification,gpt-4-1106-preview,8,"The provided abstract is relevant to prompt engineering to a significant extent. The research discusses enhancing prompt learning for NLP tasks in scenarios with limited data by injecting commonsense knowledge from a Knowledge Graph (KG) into Pre-trained Language Models (PLMs). While this does not directly reference 'hard prefix prompts', it does focus on the improvement of prompts (referred to as verbalizer) used in NLP models. Since prompt engineering generally deals with methods for designing and improving prompts to make them more efficient for language models, this research contributes to the wider field of study by proposing a method to enrich prompts with commonsense knowledge for better performance in zero-shot text classification."
-knowledge-embedded prompt learning for zero-shot social media text classification,gpt-4-1106-preview,7,"The title and abstract detail a study that focuses on prompt learning which is an aspect of prompt engineering, specifically within the context of zero-shot text classification for social media. While it does not explicitly mention 'hard prefix prompts', it does discuss embedding knowledge within the prompts, which suggests a degree of specificity and deliberation in prompt design that is relevant to the field of prompt engineering. The method seems to enhance the model's performance without large datasets by using prompts effectively, which is a core concern in prompt engineering studies. Therefore, the relevance to prompt engineering is fairly high, but it might be less relevant to a systematic review specifically focused on 'hard prefix prompts'."
-zero-shot continuous prompt transfer: generalizing task semantics across language models,gpt-4-1106-preview,9,"The presented study is highly relevant to prompt engineering as it directly addresses an advanced application of prompt tuning—namely, the transferability of continuous prompts between different language models. The zero-shot learning aspect and the focus on preserving 'task semantics' when transferring prompts make the research important for the broader understanding of how prompt engineering can be applied across various models. It does not, however, directly address 'hard prefix prompts,' but is still substantially connected to the field of prompt engineering."
-spteae: a soft prompt transfer model for zero-shot cross-lingual event argument extraction,gpt-4-1106-preview,8,"The abstract discusses 'SPTEAE', a model which utilizes tunable vectors as prompts, indicating a level of relevancy to prompt engineering. The focus on soft prompts and the mechanism of transferring knowledge from a source language to a target language via prompts are of particular interest to prompt engineering studies, especially in the context of zero-shot cross-lingual tasks. Although the study does not deal with hard prefix prompts directly, the concept of prompt transfer and the use of event type prompts are relevant to the broader field of prompt engineering. The rating is not a full 10 as the specific emphasis of the study is on zero-shot cross-lingual event argument extraction rather than a general exploration of prompt engineering or hard prefix prompts."
-prompt-based zero-shot text classification with conceptual knowledge,gpt-4-1106-preview,8,"The paper described seems highly relevant to prompt engineering as it directly discusses the use of prompts for text classification in a zero-shot learning context. The incorporation of conceptual knowledge into prompt-based systems is closely aligned with the study of how different prompt formulations can impact AI performance. While the study's focus on zero-shot learning is slightly broader than prompt engineering alone, its relevance is still significant since prompt engineering is a major component of zero-shot learning approaches."
-prompt-ner: zero-shot named entity recognition in astronomy literature via large language models,gpt-4-1106-preview,8,"The study described in the title and abstract is highly relevant to prompt engineering as it proposes and evaluates a prompt-based strategy (Prompt-NER) for enhancing zero-shot Named Entity Recognition (NER) using Large Language Models (LLMs). Although the application is specific to astronomy literature, the methodology and findings can contribute valuable insights to the broader field of prompt engineering, especially in the development and application of prompts for domain-specific zero-shot learning tasks."
-"synthesize, prompt and transfer: zero-shot conversational question generation with pre-trained language model",gpt-4-1106-preview,7,"The paper presents a multi-stage knowledge transfer framework (SPARTA) that involves a prompt-based approach for conversational question generation in a zero-shot setting. While it is not explicitly focused on 'hard prefix prompts' in prompt engineering study, the utilization of prompts in the training process to facilitate knowledge transfer from single-turn instances to conversational question generation does relate to prompt engineering. Therefore, it holds relevance for those studying the broader field of prompt engineering, though the exact technique may differ from hard prefix prompting."
-weakly supervised few-shot and zero-shot semantic segmentation with mean instance aware prompt learning,gpt-4-1106-preview,8,"The abstract describes a novel approach in semantic segmentation that leverages language-guided segmentation techniques, which is directly related to prompt engineering as it involves learning from class prompts. However, the focus seems to be more on the application of prompt learning for weakly supervised few-shot and zero-shot semantic segmentation rather than a comprehensive study of hard prefix prompts. The relevance is high as prompt engineering is essential to the proposed MIAPNet system, but it is not a systematic review of hard prefix prompts."
-"entities, dates, and languages: zero-shot on historical texts with t0",gpt-4-1106-preview,8,"This abstract is highly relevant to prompt engineering as it directly discusses using prompts to achieve zero-shot Named Entity Recognition with the T0 model on historical texts in various languages. It indicates an exploration of prompt-based methods and their efficacy in a challenging domain, which is central to prompt engineering studies. However, the paper does not focus solely on 'hard prefix prompts' but also addresses broader topics such as zero-shot learning and Named Entity Recognition, hence the rating of 8 instead of a perfect 10."
-pesco: prompt-enhanced self contrastive learning for zero-shot text classification,gpt-4-1106-preview,8,"The abstract describes PESCO, a framework that uses prompts as part of its contrastive learning approach for zero-shot text classification, which is relevant to the field of prompt engineering. Although it does not focus exclusively on 'hard prefix prompts,' the use of prompts to enhance label retrieval is a direct application of prompt engineering techniques. Therefore, the relevance is high, but not perfect since the abstract does not specify 'hard prefix prompts' as its primary subject."
-prompt to be consistent is better than self-consistent? few-shot and zero-shot fact verification with pre-trained language models,gpt-4-1106-preview,7,"The paper's focus on a novel method called ProToCo, which stands for 'Pro' to 'Co'nsistent, involves prompt engineering as it seeks to improve the accuracy of pre-trained language models (PLMs) for fact verification by generating multiple prompt variants and using consistency as a constraint. This method is directly related to prompt engineering as it involves crafting prompts that can effectively query PLMs. However, the paper does not seem to concentrate specifically on 'hard prefix prompts' but on prompting techniques in general to enforce consistency in predictions. Therefore, while it is relevant, it might not directly address the specifics of hard prefix prompt engineering as indicated by your query but still offers significant insights into the broader field of prompt engineering for PLMs."
-hierarchical prompt learning for compositional zero-shot recognition,gpt-4-1106-preview,7,"The paper appears to address the concept of prompt engineering by exploring hierarchical prompt learning within the context of Compositional Zero-Shot Learning (CZSL). While it is not a comprehensive systematic review of hard prefix prompts as such, it does contribute to the field of prompt engineering by proposing a novel approach to learning prompts hierarchically, and is thus relevant. The use of prefixed prompts to improve the performance of a vision-language model like CLIP could be considered a form of prompt engineering. However, the rating is not a full 10 because the study is not specifically a systematic review of hard prefix prompts, which was the exact topic requested."
-anomalyclip: object-agnostic prompt learning for zero-shot anomaly detection,gpt-4-1106-preview,7,"The abstract describes AnomalyCLIP, a novel approach to adapting the CLIP model for zero-shot anomaly detection by learning object-agnostic text prompts. Although the main focus is on improving anomaly detection, the method involves prompt engineering specifically designed to capture generic concepts of normality and abnormality in images, which is relevant to the study of prompt design and effectiveness. The rating is not a full 10 because the primary application is anomaly detection rather than prompt engineering itself, but the method provides valuable insights into prompt engineering within the context of zero-shot learning."
-enhancing zero-shot crypto sentiment with fine-tuned language model and prompt engineering,gpt-4-1106-preview,8,"The abstract provided focuses on the enhancement of sentiment analysis for cryptocurrencies using fine-tuned language models and an investigation into the efficacy of different instruction-based fine-tuning methods. The relevance to prompt engineering lies in the part of the study that examines instruction tuning, which is a form of prompt engineering, as it entails optimizing the instructions given to the model to improve its performance on unseen tasks. Also, it discusses the impact of short and simple versus long and complex instructions on the performance of language models. However, it doesn't explicitly mention the term 'hard prefix prompts,' which suggests that the paper might not delve into that specific area of prompt engineering, instead covering a broader range of instruction-based fine-tuning strategies. Therefore, the relevance is high but not complete, as the connection to 'hard prefix prompts' is not clearly established."
-zero-shot domain adaptation for neural machine translation with retrieved phrase-level prompts,gpt-4-1106-preview,9,"The paper is highly relevant to prompt engineering as it investigates a prompt-based method for domain adaptation in neural machine translation, which is a novel approach within the field of machine learning and specifically relates to the engineering of prompts. It does not focus on 'hard prefix prompts' specifically, but the usage of bilingual phrase-level prompts for domain adaptation suggests a strong connection to the concept of engineering prompts to improve the performance of a language model. The improvement in BLEU scores and translation accuracy further attests to the effectiveness of the prompt-based method, highlighting its potential relevance in the study of prompt engineering."
-"electra is a zero-shot learner, too",gpt-4-1106-preview,8,"The provided abstract primarily relates to prompt engineering as it discusses a novel prompt-based learning method using ELECTRA for zero-shot learning tasks. Prompt engineering is explicitly mentioned as part of the new 'pre-train, prompt, and predict' paradigm. Even though it does not specifically discuss 'hard prefix prompts,' the focus on prompt-based approaches and their effectiveness in improving model performance is highly relevant to studies of prompt design and implementation in NLP models."
-empowering sentence encoders with prompting and label retrieval for zero-shot text classification,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering as it addresses the enhancement of sentence encoders using prompted label candidates. Additionally, the incorporation of retrieval-based methods to refine the label prompts directly relates to the concept of hard prompts in prompt engineering. Although the study does not exclusively focus on 'hard prefix prompts', the general exploration of leveraging prompts in the context of zero-shot text classification closely aligns with the topic of prompt engineering. The retrieval-augmented approach (RaLP) presented in the study exemplifies a practical application of prompt engineering in improving model performance without the need for fine-tuning on specific tasks. The only reason it does not receive a full score is that it doesn't focus solely on 'hard prefix prompts', but instead encompasses a broader range of prompting techniques."
-evaluating prompts across multiple choice tasks in a zero-shot setting,gpt-4-1106-preview,8,"This abstract describes a study focused on the evaluation of natural language prompts across multiple choice tasks in a zero-shot setting, which is highly relevant to the field of prompt engineering. It seeks to understand the impact of prompt qualities on model performance, aligning well with the interests of prompt engineering research. The study’s goal to standardize prompts for tasks they were not initially designed for and the quantitative analysis of prompt attributes is significant for the design of effective prompts. Although the study does not explicitly mention 'hard prefix prompts', it contributes to the broader context of prompt engineering, thus the rating of 8 rather than a perfect 10."
-zerotop: zero-shot task-oriented semantic parsing using large language models,gpt-4-1106-preview,8,"The paper presents a novel application of large language models (LLMs) for zero-shot semantic parsing, which is indirectly related to prompt engineering. Prompt engineering involves crafting inputs to LLMs in a way that optimizes their performance on a given task, and the study's focus on decomposing the semantic parsing problem into a series of QA problems is a form of prompt engineering. They are effectively engineering prompts to elicit specific types of information from an LLM in a structured format. However, the paper is more about the application of LLMs in a zero-shot learning setting than about the systematic study of prompt engineering techniques. Therefore, the relevance is rated high but not perfect."
-"how to prompt llms for text-to-sql: a study in zero-shot, single-domain, and cross-domain settings",gpt-4-1106-preview,9,"The abstract describes a study focused on the effectiveness of different prompt constructions in the context of using large language models for the text-to-SQL task. This directly relates to prompt engineering as it explores how varying prompts influence the performance of language models in specific language processing tasks. The study's investigation into the impact of different prompts and its goal to provide insights for future work is highly relevant to the field of prompt engineering, although it is more specialized towards text-to-SQL rather than hard prefix prompts specifically."
-dialogue state tracking with zero-shot and few-shot learning for generalization: a review,gpt-4-1106-preview,7,"The paper's abstract suggests that one of the categories reviewed in the study is 'DST using a prompt,' which directly relates to prompt engineering as it likely involves the use of prompts to improve the performance of dialogue state tracking models. The relevance to prompt engineering is significant since the study appears to include a systematic review of this method among others. However, the abstract does not focus solely on 'hard prefix prompts' as specified in the initial query, indicating that while relevant, it may not cover the full scope of 'hard prefix prompts.' Therefore, the rating is not a full 10."
-malm: mixing augmented language modeling for zero-shot machine translation,gpt-4-1106-preview,7,"The abstract discusses the usage of large pre-trained language models and their effectiveness in avoiding off-target language errors for zero-shot machine translation when conditioned with prompts. This suggests that the study delves into prompt engineering to some extent, particularly with regard to its influence on language model behavior in translation tasks. However, the core focus seems to be on zero-shot translation and multilingual model performance rather than exclusively on prompt engineering, so the relevance is significant but not complete."
-zero-shot domain-sensitive speech recognition with prompt-conditioning fine-tuning,gpt-4-1106-preview,8,"The study described is highly relevant to prompt engineering as it involves fine-tuning a pre-trained model using text prompts to achieve domain sensitivity and adaptation in speech recognition tasks. Such conditioning on prompts is a direct application of prompt engineering principles to improve model performance on specific domains, showcased by the significant Word Error Rate reductions. However, it is focused specifically on speech recognition and does not cover a broader spectrum of 'hard prefix prompts', which might include other areas beyond speech recognition, hence the rating is not a full 10."
-kbpt: knowledge-based prompt tuning for zero-shot relation triplet extraction,gpt-4-1106-preview,7,"Despite the absence of an abstract or TLDR, the title indicates the study is related to 'knowledge-based prompt tuning,' which falls under the broader scope of prompt engineering. The application of prompt tuning for zero-shot relation triplet extraction suggests an advanced use of prompts to improve model performance without extra training data which is relevant to prompt engineering. However, without additional information on the study's methodology or results, a full assessment of relevance cannot be completed, thus the rating cannot be maximized."
-zero-shot learning by generating task-specific adapters,gpt-4-1106-preview,7,"The relevance to prompt engineering is fairly high as the abstract describes a novel approach to zero-shot learning that includes utilizing task descriptions as prompts, which could be seen as related to 'hard prefix prompts' in the context of designing inputs that guide the model's predictions. The study focuses on improving the model's ability to generalize to new tasks through a meta-learning framework, which aligns with the concept of improving the effectiveness of prompts in a zero-shot learning setting. However, it does not explicitly address 'hard prefix prompts' in any systematic review manner, which would be necessary for a 10 rating. Nonetheless, the connection to prompt engineering is clear enough to warrant a relatively high rating."
-domain-aware continual zero-shot learning,gpt-4-1106-preview,7,"The abstract indicates that the study involves a 'class-wise learnable prompt' which is relevant to prompt engineering as it relates to the generation of text representations for facilitating zero-shot learning. However, the focus of the study seems to be more on addressing challenges of domain awareness and continual learning in the context of zero-shot learning, rather than on hard prefix prompts specifically. Therefore, while it is relevant due to its inclusion of a learnable prompt component for class representation, it does not appear to be a comprehensive systematic review or focus directly on hard prefix prompts in prompt engineering, hence the rating of 7 instead of a full 10."
-zero-shot clinical entity recognition using chatgpt,gpt-4-1106-preview,8,"The abstract indicates that the study investigates the use of different prompt strategies for enhancing the performance of ChatGPT in a zero-shot clinical entity recognition task. It directly tackles prompt engineering by comparing the effectiveness of prompts in a specialised application (clinical NER), which is highly relevant to the study of how prompts affect AI behavior. However, it doesn't specify that it focuses on 'hard prefix prompts,' which would be essential for a 'comprehensive systematic review on hard prefix prompts,' hence not a perfect score."
-a preliminary evaluation of chatgpt for zero-shot dialogue understanding,gpt-4-1106-preview,7,"The paper's relevance to prompt engineering is notable due to the exploration of ChatGPT's capabilities in zero-shot dialogue understanding tasks, which inherently involves crafting prompts that can elicit the desired outcomes without task-specific training. The mention of 'multi-turn interactive prompt' within the dialogue state tracking (DST) task highlights an aspect of prompt engineering. Understanding how ChatGPT responds to different kinds of prompts, especially in zero-shot scenarios, is crucial for developing better prompt-engineering strategies. However, the study does not focus primarily on the 'hard prefix prompts' which is specific to the systematic review in question, hence the rating is not a full 10."
-"clip for all things zero-shot sketch-based image retrieval, fine-grained or not",gpt-4-1106-preview,7,"The abstract discusses the application of prompt learning specifically tailored to the sketch community and its impact on zero-shot sketch-based image retrieval. While it does not explicitly focus on 'hard prefix prompts,' it does mention the implementation of a prompt learning setup, and designing sketch-specific prompts which are relevant to prompt engineering. The substantial performance gains reported indicate the relevance and effectiveness of prompt tuning in this domain. However, the focus seems to be more on the application of prompts in conjunction with the CLIP model rather than a comprehensive study of prompts engineering itself, hence the rating is not a perfect 10."
-rapgen: an approach for fixing code inefficiencies in zero-shot,gpt-4-1106-preview,8,"The abstract describes a method called Retrieval-Augmented Prompt Generation (RAPGen) that involves the construction and utilization of prompts to fix performance issues in code. Although it specifically targets performance bugs and uses a pre-constructed knowledge-base intended for this purpose, the basic principles of constructing and using prompts for a language model are at the core of both tasks. Therefore, this paper is highly relevant to the study of prompt engineering because it explores a novel, prompt-based method to interact with a language model to solve a specific problem."
-zero-shot cross-lingual summarization via large language models,gpt-4-1106-preview,7,"The reported study is directly related to prompt engineering as it involves using prompts to guide Large Language Models in the task of zero-shot cross-lingual summarization. The relevance is high because it assesses how well prompts can improve the performance of LLMs in a complex task that combines translation and summarization. Nonetheless, the study's primary focus is on cross-lingual summarization rather than on the depth of prompt engineering mechanisms like hard prefix prompts, which reduces the relevance rating slightly."
-clipn for zero-shot ood detection: teaching clip to say no,gpt-4-1106-preview,8,"The abstract reveals that the study involves designing a 'learnable no prompt' and a 'no text encoder' to capture negation semantics within images, which is directly related to prompt engineering as it focuses on developing prompts that enable a language-image model to understand and respond with negation, a nuanced language feature. This development aligns with engineering prompts that can enhance model performance in specific tasks, such as OOD detection in this case. Although the emphasis is on OOD detection rather than on prompt engineering itself, the methodology is highly relevant to the study of prompt engineering techniques."
-zero-shot information extraction for clinical meta-analysis using large language models,gpt-4-1106-preview,8,"The abstract describes a study that employs large language models for zero-shot prompt-based information extraction in the medical field, which is directly related to the concept of prompt engineering. The investigation of zero-shot performance implicates the design and structuring of prompts to elicit accurate information from language models without any training examples, which is a subset of prompt engineering. While the study focuses on a specialized application in clinical meta-analysis rather than a broad systematic review of hard prefix prompts, it does contribute to the overall knowledge of prompt engineering effectiveness and challenges. Therefore, the relevance is high, but not absolute given the specialized context."
-align your prompts: test-time prompting with distribution alignment for zero-shot generalization,gpt-4-1106-preview,9,"The provided abstract is highly relevant to prompt engineering study, especially in the context of zero-shot generalization and prompt tuning to align feature distributions between source and test data, which are key components of prompt engineering. The paper discusses a specific method of prompt tuning that takes distribution shift into account, a topic that is directly related to the engineering and optimization of prompts for better performance in unseen domains. The only reason it doesn't receive a full 10 is that it doesn't specifically mention 'hard prefix prompts', which was the specific focus mentioned in the initial prompt, but it still seems to represent a significant contribution to the field of prompt engineering broadly."
-towards realistic zero-shot classification via self structural semantic alignment,gpt-4-1106-preview,7,"The relevance of the text to prompt engineering is moderate to high. The paper discusses a Self Structural Semantic Alignment (S^3A) framework that involves generating discriminative prompts using large language models, which is directly related to the field of prompt engineering. The fact that the S^3A framework includes a component where prompts are generated to discern confusing candidates demonstrates the application of prompt engineering in the paper. However, the overarching goal of the paper is zero-shot classification using Vision Language Models, and prompt engineering is only one aspect of the complex methodology being proposed. The rating is not higher because the main focus is not solely on prompt engineering; instead, it's a part of a larger framework designed for a specific application in machine learning."
-instruction distillation makes large language models efficient zero-shot rankers,gpt-4-1106-preview,8,"The abstract discusses the instruction distillation method as a means of improving efficiency and performance in zero-shot relevance ranking by LLMs, which is directly related to prompt engineering. This research tackles the issues of complexity and inefficiency in typical prompt-based ranking methods by simplifying instructions. However, it does not focus solely on 'hard prefix prompts,' but rather on instruction distillation for overall efficiency and performance enhancement in a broader context. Thus, the relevance is high but not entirely focused on the specific subtopic of hard prefix prompts."
-locally differentially private document generation using zero shot prompting,gpt-4-1106-preview,8,"The abstract discusses the use of 'zero-shot prompting' with pretrained language models to address privacy concerns, which is relevant to prompt engineering. The introduction of DP-Prompt as a mechanism relies on the strategic use of prompts to enhance privacy while maintaining utility. Although the focus is more on privacy preservation than on prompt engineering in itself, the application of zero-shot prompting techniques is at the core of the study, earning a high relevance rating. However, it isn't exclusively focused on 'hard prefix prompts' or a comprehensive systematic review of such prompts, therefore the rating is not a full 10."
-zero-shot relation triple extraction with prompts for low-resource languages,gpt-4-1106-preview,8,"The study directly deals with prompt engineering as it involves creating and using prompts to guide a language model for relation extraction. The work focuses on zero-shot learning for low-resource languages, specifically using prompts to generate structured texts that facilitate the extraction of relation triplets. The structured relation prompt template mentioned also indicates a direct manipulation of prompts to improve model performance. However, the use of the term 'hard prefix prompts' is not specifically mentioned, so the study may not align perfectly with a systematic review on hard prefix prompts but still is highly relevant to the field of prompt engineering."
-instance needs more care: rewriting prompts for instances yields better zero-shot performance,gpt-4-1106-preview,9,"The abstract describes a study that directly involves prompt engineering, focusing on improving large language model (LLM) performance in zero-shot tasks by customizing prompts for individual test instances. The approach aligns closely with prompt engineering as it involves the strategic rewriting of prompts to enhance model understanding and performance, which is central to the study of prompt engineering. The high relevance is due to the proposed method's focus on the construction and optimization of prompts for better task execution by LLMs, although the study seems to be more practical and application-oriented rather than theoretical, as implied by the term 'systematic review' in the original query."
-zyn: zero-shot reward models with yes-no questions,gpt-4-1106-preview,8,"The abstract describes a method of using yes-no questions as prompts to guide the behavior of a language model without additional labeled data, which is highly relevant to prompt engineering. It addresses the use of prompts to achieve zero-shot learning and align a model's output with user preferences, which are core areas of interest in the study of prompts. However, it is not focused specifically on 'hard prefixes,' but on a broader application of prompts, so the rating is not a full 10."
-random word data augmentation with clip for zero-shot anomaly detection,gpt-4-1106-preview,8,"The paper presents a method that uses CLIP, a visual-language model, and involves prompt-guided classification which is clearly related to prompt engineering. Although the focus is on zero-shot anomaly detection and data augmentation, the use of prompts to guide the CLIP model's text encoder for generating data brings it within the domain of prompt engineering studies. The prompts are crucial for the generation of text embeddings which are subsequently used to train the anomaly detection model, significantly impacting the performance of the system. The paper does not focus on 'hard prefix prompts' specifically, so it may not align completely with a comprehensive review of that exact topic, but it certainly provides relevant information about prompt usage in the context of AI-powered anomaly detection."
-supplementary - i2mvformer: large language model generated multi-view document supervision for zero-shot image classification,gpt-4-1106-preview,7,"The abstract discusses the use of a large language model (LLM) for prompting strategy in the context of zero-shot image classification. Although it does not directly reference 'hard prefix prompts' or a 'systematic review', the mention of LLM prompting strategies and the analysis of their robustness is relevant to the broader field of prompt engineering. The abstract suggests an investigation into the effectiveness of different prompts, which is a central concern of prompt engineering studies. Therefore, the relevance rating is moderately high, as the content could provide valuable insights for those studying how prompts can affect the performance of AI models, even though it is not a direct match for a study focused specifically on 'hard prefix prompts'."
-a setwise approach for effective and highly efficient zero-shot ranking with large language models,gpt-4-1106-preview,8,"The abstract details a study on zero-shot ranking with Large Language Models (LLMs) through the use of different prompting approaches (Pointwise, Pairwise, Listwise, and a novel Setwise approach). Although the study does not specifically mention 'hard prefix prompts,' it does deeply engage with prompt engineering for zero-shot tasks in LLMs. Since prompt engineering is essential in operationalizing these models for specific tasks, and the study clearly contributes to understanding and innovating in this field, it has high relevance to prompt engineering study. However, it does not directly address 'hard prefix prompts,' hence the rating is not a perfect 10."
-reducing negative effects of the biases of language models in zero-shot setting,gpt-4-1106-preview,7,"The paper is relevant to prompt engineering as it addresses the issue of biases in language models, particularly GPTs, which is a key concern when engineering prompts for zero-shot settings. By proposing a method to reduce bias through the use of probing samples and a Calibration Adapter, the study is relevant to the prompt engineering field as it contributes to the development of more fair and balanced prompting strategies. However, the primary focus seems to be on model calibration rather than on designing or structuring prompts, hence the rating is not a perfect 10."
-model-generated pretraining signals improves zero-shot generalization of text-to-text transformers,gpt-4-1106-preview,7,"The paper is relevant to prompt engineering, particularly in the exploration of training strategies that could impact how effectively models respond to prompts. Although the main focus is on zero-shot generalization of text-to-text Transformers and pretraining strategies (e.g., using model-generated signals), the fact that it includes prompt-finetuning on a mixture of NLP tasks indicates relevance. The creation of METRO-T0, which competes with state-of-the-art models on prompted NLP benchmarks, underscores the potential impact of pretraining on prompt-based tasks. However, the paper does not seem to focus specifically on 'hard prefix prompts' but rather on a broader approach to pretraining and finetuning."
-beyond yes and no: improving zero-shot llm rankers via scoring fine-grained relevance labels,gpt-4-1106-preview,9,"The paper discusses improving zero-shot text rankers by refining the prompting mechanism used in large language models (LLMs), specifically by introducing fine-grained relevance labels instead of binary ones. This is highly relevant to prompt engineering as it directly involves optimizing the way prompts are structured to achieve better performance in text ranking tasks. The incorporation of more nuanced labels is a method of prompt engineering aimed at enhancing the model's capability to assess relevance. The study's focus on prompting strategies and its impact on the model's output makes it pertinent to the field of prompt engineering study, hence the high score."
-exploring grounding potential of vqa-oriented gpt-4v for zero-shot anomaly detection,gpt-4-1106-preview,7,"The abstract details a study focused on the application of a Large Multimodal Model (GPT-4V) for anomaly detection using the Visual Question Answering paradigm, which includes an aspect of 'Prompt Designing' as one component of the proposed framework. This directly relates to prompt engineering as it involves designing prompts to effectively interact with AI models. However, the study's primary focus seems to be on the application of the model to anomaly detection rather than the intricacies or methodologies behind prompt engineering. Therefore, while prompt engineering is a component of the study, it is not the central theme, which is why the relevance is rated as a 7 rather than a full 10."
-zero-shot learning for named entity recognition in software specification documents,gpt-4-1106-preview,8,"The abstract discusses the application of zero-shot learning to Named Entity Recognition (NER) in the context of software specification documents. One of the two zero-shot approaches mentioned employs prompt engineering, achieving a high accuracy of 93%. The relevance to prompt engineering is high because the study specifically involves the use of prompt engineering techniques in an NER task, which is a significant part of language model application. However, the relevance is not rated as a full 10 because the abstract also describes a second approach that diverts from prompt engineering and is based on transforming the problem into a question-answering task. Therefore, while prompt engineering is a central theme, it is not the exclusive focus of the study."
-global constraints with prompting for zero-shot event argument classification,gpt-4-1106-preview,9,"The abstract describes a novel approach that leverages prompting techniques, specifically prefix prompts, in the context of event argument classification which is highly relevant to prompt engineering. The study's focus on how prompts can be used to improve performance in a zero-shot learning scenario indicates a significant contribution to the area of natural language processing related to prompt engineering. Although the work is not solely about hard prefix prompts in general, the application and development of new prompt templates for a specific task align closely with prompt engineering studies. The only reason it does not receive a full 10 is that it does not address a 'comprehensive systematic review' on prompts but rather presents a specific applied use-case of prompt engineering."
-large language models are frame-level directors for zero-shot text-to-video generation,gpt-4-1106-preview,7,"The provided abstract discusses the use of large language models (LLMs) to generate frame-by-frame descriptions for text-to-video generation, which is relevant to prompt engineering. While the primary focus seems to be on video generation, the role of LLMs in interpreting and directing user prompts aligns with the study of designing and improving prompts to achieve specific outcomes. The framework's ability to translate user prompts into separate and temporally consistent frame prompts demonstrates an application of prompt engineering techniques. Therefore, the approach of dissecting abstract prompts into frame-level instructions can be viewed as a form of prompt engineering. The rating is not a full 10 because the abstract does not explicitly focus on the study of prompt engineering in general but rather its application within a specific context of video generation."
-sc vall-e: style-controllable zero-shot text to speech synthesizer,gpt-4-1106-preview,7,"The title of the study 'SC VALL-E: Style-Controllable Zero-Shot Text to Speech Synthesizer' indicates a research focus on text to speech (TTS) synthesis with style control, which is tangentially relevant to prompt engineering. Although prompt engineering typically involves refining input prompts to achieve better performance in language models, the abstract describes a system that takes text and prompt audio as input to control speech attributes like emotion and pitch. This relates to a form of prompt engineering where the prompt is not just textual but also auditory. The mention of 'tokens in the style embedding matrix' also suggests a relationship with prompt engineering as it implies the manipulation of specific elements to guide the model's output. However, the primary focus on TTS synthesis and lack of explicit discussion on prompt engineering in language models warrants a rating that isn't at the highest relevance."
-applenet: visual attention parameterized prompt learning for few-shot remote sensing image generalization using clip,gpt-4-1106-preview,7,"The provided abstract demonstrates relevance to prompt engineering as it discusses the development of a novel approach to prompt learning, which is central to adapting language models to specific tasks. The Visual Attention Parameterized Prompts Learning Network (APPLeNet) incorporates visual tokens combined with textual tokens, indicating that it deals with the intersection of language (through prompts) and vision, which is a component of prompt engineering. Additionally, the TLDR section reinforces the focus on prompt learning strategies. However, the application is specifically for remote sensing image generalization, which is a niche area within the broader scope of prompt engineering studies. Hence, the rating is not a full 10, because while it does contribute to the field, it does so in a specific context rather than addressing hard prefix prompts in a broad sense."
-schema-aware reference as prompt improves data-efficient relational triple and event extraction,gpt-4-1106-preview,9,"The abstract presents research on a novel approach for prompt-based information extraction using pre-trained language models, which directly relates to the study of engineering prompts for better performance in language understanding tasks. As the study introduces a schema-aware mechanism to improve the efficiency of prompts by leveraging global training data and knowledge, it is highly relevant to the concept of 'hard prefix prompts' in the prompt engineering field. The approach is designed to overcome the semantic gap and representation learning limitations, which are critical considerations in prompt engineering. The only reason it does not receive a 10 is because the abstract does not explicitly mention 'hard prefix prompts', but the content is otherwise highly relevant."
-prompt combines paraphrase: teaching pre-trained models to understand rare biomedical words,gpt-4-1106-preview,8,"The abstract describes an approach to prompt-based fine-tuning tailored towards the biomedical domain, which is relevant to the field of prompt engineering. It focuses on helping models learn and understand rare biomedical terminology, a challenge unique to this specialized area. The approach is directly related to improving the capabilities of pre-trained models with prompt engineering in a specific and practical instance, which can be beneficial for the broader study of prompts in different contexts. However, the abstract does not discuss 'hard prefix prompts' specifically, which may slightly reduce its relevance to the precise topic of a systematic review on such prompts. Therefore, while it is highly relevant to prompt engineering overall, it is not a perfect match for the subject of 'hard prefix prompts.', which is why the rating is not a perfect 10."
-domain prompt learning for efficiently adapting clip to unseen domains,gpt-4-1106-preview,9,The abstract describes Domain Prompt Learning (DPL) as a novel approach for domain inference through the generation of conditional prompts. This is highly relevant to prompt engineering as it explicitly deals with the creation of prompts to improve the performance of a foundation model in domain generalization. The approach's focus on prompt generation and its impact on model accuracy makes it a significant contribution to the field of prompt engineering.
-amortized prompt: lightweight fine-tuning for clip in domain generalization,gpt-4-1106-preview,7,"The abstract discusses the use of prompt generation as a novel approach for domain inference with an emphasis on improving domain generalization in image classification using the CLIP model. This is relevant to prompt engineering, as it describes developing a method (Amortized Prompt) related to creating and utilizing prompts to enhance model performance without fine-tuning. Although the study appears to focus more broadly on domain generalization and does not specifically address 'hard prefix prompts,' the concept of prompt generation within this context is still within the domain of prompt engineering, hence the rating of 7. The absence of a direct mention of 'hard prefix prompts' means it is not entirely focused on that specific aspect of prompt engineering, thus not receiving a full score."
-feature normalization and cartography-based demonstrations for prompt-based fine-tuning on emotion-related tasks,gpt-4-1106-preview,8,"The relevance to prompt engineering is high because the paper discusses a novel approach to prompt-based fine-tuning, which is a method within prompt engineering. It focuses on improving the performance of language models on NLP tasks through feature normalization and the introduction of training dynamics to select informative samples for prompts. The paper's central theme revolves around optimizing the input context for prompt-based models, which is directly relevant to prompt engineering. However, it does not specifically address 'hard prefix prompts,' but rather the broader concept of prompt-based fine-tuning. Hence the reasoning for not giving a full score of 10."
-understanding prompt engineering may not require rethinking generalization,gpt-4-1106-preview,8,"The provided abstract directly involves the study of prompt engineering within the context of zero-shot learning and vision-language models. It discusses the impact of manual prompt crafting on generalization performance and how classical PAC-Bayes bounds can explain the success of such methods. Although the specific term 'hard prefix prompts' is not mentioned, the abstract's focus on the structural aspects of prompt design and their implications for model performance is highly relevant to the field of prompt engineering. The TLDR further emphasizes the significance of the discrete nature of prompts and language model priors in maintaining tight generalization bounds, which are central considerations in prompt engineering studies."
-few shot learning approaches to essay scoring,gpt-4-1106-preview,8,"The abstract provided discusses few-shot learning methods, specifically the use of a prompt-based few-shot learning method (PET) in the context of automated essay scoring. Although the primary focus is on AES, the implementation of prompt-based learning is highly relevant to the study of prompt engineering, as PET is a methodology that relies on engineering prompts to improve model performance with limited training data. Therefore, the study is substantially relevant to prompt engineering, specifically within the field of NLP and machine learning. The deduction in the rating arises because the prompt engineering for AES may not cover the entire scope of 'hard prefix prompts' but is nevertheless significant in demonstrating the application and impact of prompt engineering techniques."
-byoc: personalized few-shot classification with co-authored class descriptions,gpt-4-1106-preview,8,"The study presents a novel approach to few-shot text classification with the involvement of an LLM and interaction with users to generate class descriptions. This is highly relevant to prompt engineering, as the method relies on creating effective prompts that enable the LLM to categorize texts with minimal training data. Although the research focuses specifically on text classification and user interaction for class description generation, rather than hard prefix prompts exclusively, the process of prompt construction and its role in model performance is central to the field of prompt engineering. Therefore, the study contributes valuable insights to prompt engineering by exploring interactive ways to enhance LLM understanding and classification accuracy."
-prompt sketching for large language models,gpt-4-1106-preview,9,"The provided abstract for 'prompt sketching for large language models' discusses an innovative prompting strategy that involves generating a template with variables that the LLM predicts values for, which directly relates to engineering better prompts for LLMs. The approach aims to address issues with current prompting strategies that result in disconnected and verbose responses by proposing a more structured interaction with the model via templated prompts. The abstract mentions the improvement in performance on various benchmarking tasks, indicating a substantial contribution to the study of prompt engineering. The paper's focus on optimizing the generation process and providing control over the model's output through a novel prompting paradigm makes it highly relevant to the field. It is rated slightly less than 10 because the prompt specifically asks for a review on 'hard prefix prompts', and it is not explicitly clear from this abstract whether prompt sketching falls into that category. However, the general relevance to prompt engineering study is evident."
-strength in numbers: estimating confidence of large language models by prompt agreement,gpt-4-1106-preview,9,"The paper discusses a method to improve confidence estimates for language model predictions by using a variety of prompts, which is highly relevant to the field of prompt engineering. The study focuses on the generation of multiple prompts to enhance the reliability of large language model outputs, which directly pertains to the design and usage of prompt strategies to elicit more accurate responses from these models. The relevance is not a full 10 only because it does not specifically mention 'hard prefix prompts' but rather the broader concept of improving confidence estimation through the use of diverse prompts."
-the language of prompting: what linguistic properties make a prompt successful?,gpt-4-1106-preview,9,"The described study directly relates to prompt engineering because it investigates how linguistic properties of prompts affect the performance of language model tasks. It focuses on the nuances of prompt design, which is a core aspect of prompt engineering, aiming to understand what makes a prompt effective. This is highly relevant as it contributes to the development of guidelines and standards for prompt creation, essential for refining the prompt engineering process. The only reason it does not receive a perfect score is that it does not specify 'hard prefix prompts' but prompts in general, which could include a variety of types beyond the hard prefix category."
-hard sample aware prompt-tuning,gpt-4-1106-preview,9,"The provided abstract describes research directly related to prompt-tuning, specifically addressing challenges in differentiating between informative hard samples and misleading samples during few-shot learning for NLP tasks. The relevance to prompt engineering is high, considering that the study introduces a 'Hard Sample Aware Prompt-Tuning framework (HardPT)' to improve the effectiveness of prompts in machine learning models by using advanced techniques such as reinforcement learning and contrastive learning. These methodologies directly contribute to the field of prompt engineering by enhancing the model's ability to learn from limited data. The only reason for not giving a perfect score is the focus on 'hard sample' differentiation may be considered a specific subset within the broader domain of prompt engineering."
-voucher abuse detection with prompt-based fine-tuning on graph neural networks,gpt-4-1106-preview,8,"The study presents a novel application of prompt-based fine-tuning, albeit in the domain of graph neural networks for voucher abuse detection rather than natural language processing. The focus on designing a prompting function to better align the pre-training and fine-tuning tasks shows relevance to prompt engineering, as it involves creating effective prompts to improve machine learning models’ performance. The improvement in performance with this method demonstrates the potential effectiveness of prompt engineering strategies in various domains, which is relevant for the broader field of study. However, the specificity to graph neural networks slightly reduces its direct applicability to studies focused exclusively on text-based prompt engineering."
-modal interaction-enhanced prompt learning by transformer decoder for vision-language models,gpt-4-1106-preview,9,"The title suggests that the study introduces a prompt tuning method specifically designed for improving the performance of transformer decoders in vision-language models. This is highly relevant to prompt engineering as it deals with enhancing model interaction with prompts. Although the term 'hard prefix prompts' from the original query is not explicitly mentioned, the nature of the study seems to be closely related to developing and enhancing prompting strategies. Hence, the relevance rating is high. The abstract being 'nan' does not provide additional information, but the TLDR suggests that the method being proposed has shown improved performance over a baseline model, indicating that this research contributes valuable insights to the field of prompt engineering."
-stabilized in-context learning with pre-trained language models for few shot dialogue state tracking,gpt-4-1106-preview,8,"The study addresses designing prompts for complex tasks like dialogue state tracking (DST) and discusses techniques to stabilize in-context learning performance with pre-trained language models. As prompt engineering involves both the creation of effective prompts and the stability of model performance when using those prompts, this study is highly relevant to the field. However, it specifically focuses on few-shot learning techniques and dialogue tasks, which may not fully cover the broad spectrum of prompt engineering topics such as hard prefix prompts. Thus, it does not merit a perfect score, but it is still significantly pertinent."
-does gpt-3 generate empathetic dialogues? a novel in-context example selection method and automatic evaluation metric for empathetic dialogue generation,gpt-4-1106-preview,8,"The provided abstract directly relates to prompt engineering as it discusses the exploration of GPT-3's ability to generate empathetic dialogues through prompt-based in-context learning, which is a part of the field of prompt engineering. The study's investigation of novel in-context example selection methods and the introduction of a new automatic evaluation metric are also relevant to the development and optimization of prompts, which are essential for fine-tuning the performance of language models in specific tasks. Although it doesn't mention hard prefix prompts specifically, the focus on in-context learning and prompt-based methods makes it highly relevant to the broader field of prompt engineering in the context of empathetic dialogue generation."
-emotionprompt: leveraging psychology for large language models enhancement via emotional stimulus,gpt-4-1106-preview,8,"The presented abstract is highly relevant to prompt engineering, as it specifically addresses the enhancement of large language models (LLMs) through 'EmotionPrompt', which is essentially an innovative technique in prompt engineering involving emotional stimuli. Although the focus on 'hard prefix prompts' is not directly mentioned, the research could be considered adjacent or complementary due to its emphasis on improving the interaction between humans and LLMs by refining the way prompts are engineered. Hence, the relevance to prompt engineering is significant, warranting a high rating. Nonetheless, the specificity to 'hard prefix prompts' is not clearly stated, which is why the rating is not a full 10."
-scone: benchmarking negation reasoning in language models with fine-tuning and in-context learning,gpt-4-1106-preview,7,"The abstract describes a study focusing on negation reasoning in language models, particularly in the context of NLI (Natural Language Inference) and sentence completion tasks. Although the study is not directly about 'hard prefix prompts', prompt engineering is inherent in the design of tasks for language models to assess their abilities. The construction of the ScoNe-NLG and the insights from testing different prompt strategies with InstructGPT are relevant to prompt engineering, as they can inform how prompts can be optimized for better model performance, especially in handling negations. Therefore, the study is moderately relevant to prompt engineering, even if the primary focus is not on prompt construction itself."
-enabling classifiers to make judgements explicitly aligned with human values,gpt-4-1106-preview,9,"The abstract describes a study that is highly relevant to prompt engineering. It discusses how prompt-based few-shot learning is used to generate training data from large-scale language models, which is a key aspect of prompt engineering. The focus on value alignment and the construction of classifiers based on explicit human input also reflects on the prompt's ability to direct model behavior in a specific way, showcasing an advanced application of prompt engineering.
The only reason it doesn't receive a perfect score is that it does not exclusively deal with 'hard prefix prompts', which the study request specifically asks for, but addresses a broader topic of prompt-based few-shot learning and classifier fine-tuning." -bits of grass: does gpt already know how to write like whitman?,gpt-4-1106-preview,7,"The study is relevant to prompt engineering insofar as it examines how generative language models like GPT-3.5 and GPT-4 respond to zero-shot and many-shot prompts without fine-tuning. It evaluates the model's ability to generate poetry in a specific style, which is closely related to the effectiveness of the prompts used. It does not, however, specifically address 'hard prefix prompts,' but rather the broader concept of prompt effectiveness in generating author-specific language patterns. Therefore, the relevance is high but not entirely focused on the specific aspect of 'hard prefix prompts'." -do prompts solve nlp tasks using natural language?,gpt-4-1106-preview,9,"The given title and abstract are highly relevant to prompt engineering as they discuss the effectiveness of different types of prompts in NLP tasks, a core issue in the study of prompt engineering. The research specifically evaluates human-designed prompts, schema prompts, and null prompts, which are directly related to the process of engineering and optimizing prompts for language models. However, it might not be a 'comprehensive systematic review' as the prompt specifies, which is why it doesn't receive a full 10 rating." -cliptexture: text-driven texture synthesis,gpt-4-1106-preview,8,"The abstract discusses a texture synthesis framework that utilizes language-based controls to guide the synthesis process, which is relevant to prompt engineering. The use of text prompts to influence the output of an AI model aligns closely with prompt engineering principles, where the goal is to effectively communicate an intended outcome to the model through language. However, this paper specifically focuses on texture synthesis in images rather than prompt engineering as a broader field of study, hence the rating is not a perfect 10." -bertnet: harvesting knowledge graphs with arbitrary relations from pretrained language models,gpt-4-1106-preview,7,"The research is highly relevant to prompt engineering as it involves using prompts to interrogate pretrained language models for extracting knowledge graph relationships. While the study does not focus on 'hard prefix prompts' specifically, the concept of designing prompts to elicit specific types of knowledge from language models is central to prompt engineering. Therefore, the use of prompts to define relations and the subsequent extraction process aligns with studying the effectiveness and methodology of prompt engineering, despite not directly addressing the systematic review topic on 'hard prefix prompts'." -learning disentangled prompts for compositional image synthesis,gpt-4-1106-preview,9,"The abstract describes a study highly relevant to prompt engineering, focusing on a specific application in image synthesis. The research introduces a framework for learning disentangled prompts that separate semantic and domain information, which is a concept closely associated with constructing effective prompts in generative models. The ability to control these aspects and the application to zero-shot domain adaptation show a direct relevance to the field of prompt engineering. 
However, the focus is specific to image synthesis rather than a broad range of applications or a purely theoretical exploration, hence the rating is not a full 10."
-language models as black-box optimizers for vision-language models,gpt-4-1106-preview,9,"The provided abstract describes research into a novel fine-tuning approach for vision-language models (VLMs) using natural language prompts, which is highly relevant to prompt engineering. The study's focus on refining prompts using large language models and without requiring white-box access aligns with the core principles of prompt engineering. The research advances the understanding of how effective prompts can be generated and optimized, which is a fundamental aspect of prompt engineering. The deduction of one point is due to the specificity of the application to vision-language models and not to the broader spectrum of prompt engineering, but it still remains a significant contribution to the field."
-effects of target words and their locations in prompts,gpt-4-1106-preview,9,"The reviewed document is highly relevant to the field of prompt engineering as it directly investigates the effects of target words and their placement within prompts, which are critical components in constructing effective prompts for language models. The study's examination of different prompt structures and their outcomes on model performance, as well as comparisons between models that are instruction tuned (T0) and those that are not (ALBERT), provides valuable insights into prompt design strategies. The focus on varying difficulties and tasks, including NLI, coreference resolution, sentence completion, and multiple choice Q&A, further underscores the study's comprehensive approach to understanding prompt engineering. Although the title does not specifically mention 'hard prefix prompts,' the abstract indicates a thorough examination of prompt-related factors which are indeed pertinent to the study of prompt engineering. The only reason it's not a full 10 is that the thesis does not seem to exclusively focus on 'hard prefix prompts,' which could be construed as a specific type of prompt from the title of the systematic review."
-weak supervision for question type detection with large language models,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it investigates the use of rules as an alternative to manual prompts for leveraging large pre-trained language models in a specific NLP task, which is question type detection in dialogue. This aligns with prompt engineering by exploring how to effectively communicate with LLMs to produce desired outputs. The systematic review aspect is not directly mentioned, but given that the work compares different models and addresses the design of prompts versus rules, it reflects an understanding of the prompt engineering landscape, which is essential for a systematic review."
-automatic data transformation using large language model: an experimental study on building energy data,gpt-4-1106-preview,8,"The study presents a framework that includes a prompt generator for large language models, which is highly relevant to the field of prompt engineering. The iterative prompt optimization mechanism for flaw detection aligns well with advanced prompt engineering techniques. Although the focus is on building energy data and SQL code transformation, the core concept of utilizing LLMs with a prompt-based interface has broad implications for prompt engineering.
The study emphasizes the integration of domain knowledge and adaptive learning, which are crucial components of prompt engineering. The reason for not rating it a full 10 is that the primary application is data transformation rather than a broad analysis of 'hard prefix prompts' in general."
-leveraging vision-language foundation models for fine-grained downstream tasks,gpt-4-1106-preview,7,"The abstract mentions developing a multitask fine-tuning strategy based on a positive/negative prompt formulation to improve the performance of vision-language foundation models on fine-grained attribute detection and localization tasks. This indicates a utilization of prompt engineering for improving model accuracy on specific tasks. While it is not specifically about 'hard prefix prompts' which could be more related to text-based tasks, the concept of using prompt strategies to finetune models, even in the vision-language domain, is related to the broader field of prompt engineering. Hence, the relevance is moderately high but not entirely direct with respect to the specific topic of hard prefix prompts."
-towards expert systems for improved customer services using chatgpt as an inference engine,gpt-4-1106-preview,8,"The abstract indicates that the paper discusses an iterative procedure that involves prompt engineering as part of the process to develop ChatGPT-powered expert systems for customer services. Since it addresses the design of descriptive knowledge and few-shot prompts, which are key components of prompt engineering for AI models, it is relevant to the study of prompt engineering. The relevance is not at the maximum since the abstract suggests that the paper covers a broader range of topics within the AI application in customer service, and prompt engineering is only one part of the study."
-generating domain-specific programs for diagram authoring with large language models,gpt-4-1106-preview,8,"The study addresses the concept of engineering prompts specifically for one-shot learning with Large Language Models (LLMs) to generate domain-specific language (DSL) programs, which is relevant to prompt engineering. Developing structured prompts that can effectively guide LLMs, like the study's use of LLMs for Penrose diagram creation from prose, illustrates a practical application of prompt engineering. This process is central to optimizing LLM performance in specific tasks, thus the high relevance. However, the provided title and abstract do not mention 'hard prefix prompts' as a focused subject within the realm of prompt engineering, which would align directly with the systematic review of hard prefix prompts. Instead, it discusses prompt structures for DSL program creation in general, which may not comprehensively cover all aspects of prompt engineering or the specific topic of hard prefix prompts, leading to a rating slightly less than perfect."
-rewriting math word problems with large language models,gpt-4-1106-preview,9,"The abstract provided talks about a study where Large Language Models, specifically GPT-4, were used to rewrite math word problems, following the same guidelines as human authors. It directly relates to prompt engineering as it involves developing and comparing different prompting strategies like zero-shot, few-shot, and chain-of-thought. Furthermore, it discusses the process of encoding mathematical components using GPT's capacity to write Python code, which is an essential aspect of prompt engineering when dealing with specialized tasks such as math word problems.
Although the primary focus is on improving learning outcomes rather than prompt optimization, the process of refining the prompts to achieve high-quality rewrites is squarely within prompt engineering methodology. The reason for not rating it a full 10 is that the primary outcome seems to be focused on educational efficacy rather than the refinement of prompt engineering itself."
-eliciting knowledge from language models for event extraction,gpt-4-1106-preview,8,"The paper is clearly relevant to prompt engineering as it discusses the use of prompt-based learning to elicit knowledge from language models for a complex NLP task like event extraction. Designing such prompts is closely related to the concept of prompt engineering, which involves crafting inputs that help elicit desired responses from the model. Although the paper might not focus solely on 'hard prefix prompts' as per the original systematic review topic, it pertains to the general field of study of how prompts can be engineered to improve the extraction of information from language models. The deduction of two points in rating reflects that while it is highly relevant, it might not cover 'hard prefix prompts' specifically if that were the exclusive focus of the review."
-ccprompt: counterfactual contrastive prompt-tuning for many-class classification,gpt-4-1106-preview,9,"The provided abstract relates to the development and analysis of a specific type of prompt-tuning approach named 'Counterfactual Contrastive Prompt-Tuning (CCPrompt)' which is highly relevant to the field of prompt engineering. Prompt engineering involves the design and optimization of prompts to improve the performance of neural language models on various tasks. The described CCPrompt method focuses on enhancing many-class classification by identifying contrastive attributes and using them to construct elaborate prompts, which is a direct application of prompt engineering techniques. The high relevance rating is supported by the abstract's discussion on the method's effectiveness for different NLP tasks and the use of prompts as a core element of the model. The rating is not a perfect 10 primarily because it does not cover a 'systematic review' of hard prefix prompts but instead introduces a novel approach within prompt engineering."
-what does a platypus look like? generating customized prompts for zero-shot image classification,gpt-4-1106-preview,8,"The abstract describes research on generating prompts to improve the performance of open-vocabulary image classification models, which is a significant contribution to the field of prompt engineering, particularly in the realm of zero-shot learning. While the study focuses on image classification and doesn't specifically mention 'hard prefix prompts', it does address the creation and optimization of prompts to improve task performance, which is relevant to the general area of prompt engineering."
-autographex: zero-shot biomedical definition generation with automatic prompting,gpt-4-1106-preview,8,"The abstract discusses a zero-shot definition generation model that leverages prompting with pre-trained language models, specifically in the context of biomedical terminology. While it does not explicitly mention 'hard prefix prompts', it does relate to prompt engineering as it involves automatically generating prompts to facilitate knowledge elicitation from language models.
This is highly relevant to studies exploring various aspects of prompt engineering, although it may not address the 'hard prefix prompts' directly. The high relevance is due to the focus on automatic prompting which is a subset of prompt engineering. The rating is not a full 10 as the abstract does not cover the full breadth of prompt engineering, specifically not mentioning the term 'hard prefix prompts'."
-better zero-shot reasoning with role-play prompting,gpt-4-1106-preview,9,"The study's theme is highly relevant to prompt engineering as it focuses on advanced techniques of prompting, specifically role-play prompting, and its impact on the performance of large language models (LLMs). Prompt engineering is crucial for the effective utilization of LLMs, and this research delves into the significant aspect of how different prompting methods, like role-play, can enhance a model's reasoning abilities in zero-shot scenarios across a variety of benchmarks. Although the study is not specifically about 'hard prefix prompts,' the broader category of prompt engineering still applies, thus the high relevance rating."
-zero-shot slot filling with slot-prefix prompting and attention relationship descriptor,gpt-4-1106-preview,8,"The described paper introduces a novel prompting scheme specifically designed for zero-shot slot filling, which is directly related to prompt engineering. Prompt engineering involves creating effective prompts to guide models' behavior without extensive training, and this paper's approach to including learnable tokens and slot names fits within that scope. The use of attention values to enhance the prompts further ties it to advancements in the methodology of how prompts are constructed and their relationship to the model's attention mechanisms. The rating is not a perfect 10 because the paper is more focused on slot filling and attention features rather than a broad study on prompt engineering, but it still offers significant insights into the field."
-relational representation learning for zero-shot relation extraction with instance prompting and prototype rectification,gpt-4-1106-preview,7,"The paper's focus on Instance Prompting as a method to bridge the gap between pre-training and fine-tuning for relation extraction aligns with techniques used in prompt engineering, particularly in the context of tailoring model outputs to specific tasks without extensive additional training data (zero-shot learning scenarios). Additionally, the mechanism of guiding pre-trained models to generate more task-specific representations is akin to the notion of constructing prompts to elicit desired responses from a model. However, the paper does not explicitly address 'hard prefix prompts' or the systematic review of prompt engineering as a broader field, thereby receiving a moderate score instead of a higher one for full relevance."
-distilling hypernymy relations from language models: on the effectiveness of zero-shot taxonomy induction,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it discusses the extraction of structured knowledge from language models via prompting techniques, which is a core aspect of prompt engineering. Although it specifically focuses on taxonomy learning, prompt engineering is central to the methodology, making the paper relevant to the field. However, the exact match for 'hard prefix prompts' is not indicated, so the paper might not address that specific aspect of prompting, hence the rating is not a full 10."
-prompting scientific names for zero-shot species recognition,gpt-4-1106-preview,7,"The study is relevant to prompt engineering because it explores how different forms of prompts (using scientific names vs. common English names) can affect the performance of Vision-Language Models like CLIP in zero-shot species recognition tasks. Although it doesn't focus specifically on 'hard prefix prompts,' it directly examines the impact of prompt design on model accuracy, which is a significant aspect of prompt engineering. The study’s findings that common names yield better results than scientific names for prompts provide insight into effective strategies for prompt creation, thus contributing to the field of prompt engineering." -zero-shot next-item recommendation using large pretrained language models,gpt-4-1106-preview,8,"The abstract describes the process of using prompting strategies for LLMs to conduct next-item recommendations, which is directly related to prompt engineering. The study details a prompting approach specific for improving the performance of LLMs in a zero-shot recommendation task. While the focus is on the application of prompts in recommender systems, rather than on the study of 'hard prefix prompts' more generally, it contributes valuable insights into how prompts can be engineered and utilized to enhance the capabilities of LLMs in a practical scenario. This aligns with the broader field of prompt engineering, hence the high relevance rating." -selfcheck: using llms to zero-shot check their own step-by-step reasoning,gpt-4-1106-preview,7,"While the study described in the abstract is not directly related to prompt engineering in terms of developing or enhancing hard prefix prompts, it does address an important aspect of how LLMs (Large Language Models) can be improved in processing and verifying their reasoning, which can indirectly benefit prompt engineering. The ability of an LLM to self-check its reasoning is valuable for prompt engineering as it can lead to more effective prompting strategies that rely on the model's self-assessment of its reasoning process. Specifically, if an LLM can recognize errors in its own reasoning and adjust accordingly, this can inform the development of more advanced prompting techniques. The study is relevant to the field of prompt engineering, but it's not a direct study on prompt engineering itself, hence the rating of 7." -c3: zero-shot text-to-sql with chatgpt,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering because it focuses on a method that involves 'Clear Prompting' which is essentially a form of prompt engineering. It must strategically craft inputs to guide the ChatGPT model to generate correct SQL queries without previous training (zero-shot capability). Although the main focus is on Text-to-SQL, the principles and methods applied are directly related to prompt engineering as they deal with how to effectively prompt a language model to achieve a specific task." -tab-cot: zero-shot tabular chain of thought,gpt-4-1106-preview,8,"The abstract describes Tab-CoT, a novel prompting method that enhances the structure and explicit detailing of the reasoning process for complex tasks in a tabular format. This is highly relevant to prompt engineering, particularly as it relates to refining the interventions used to elicit specific and structured responses from AI systems. 
However, it is specifically tailored for tabular data and reasoning tasks, so it might not cover all aspects of prompt engineering study which can include other types of data and tasks. Hence the rating is not a perfect 10." -the benefits of label-description training for zero-shot text classification,gpt-4-1106-preview,7,"The abstract describes a method to improve zero-shot text classification accuracies by using data that describes labels, which aligns with prompt engineering efforts that involve describing tasks or labels to better inform the model's predictions. Although it doesn't explicitly address 'hard prefix prompts', the concept of using label descriptions can be relevant to designing more effective prompts. Thus, the relevance to prompt engineering is substantial but not direct, hence the rating of 7." -self-icl: zero-shot in-context learning with self-generated demonstrations,gpt-4-1106-preview,7,"The abstract describes a novel approach to in-context learning (ICL) with language models, which is indeed relevant to the study of prompt engineering as it focuses on generating and utilizing prompts to improve the performance of models without the need for additional demonstrations. The concept of Self-ICL generates pseudo-inputs and pseudo-labels as part of the prompting process, which aligns with the techniques used in prompt engineering. The relevance is not a perfect 10 because the study doesn't specifically address 'hard prefix prompts' as mentioned in the original query, but it is still highly relevant to the broader field of prompt engineering and the design of prompting strategies to improve language model outcomes in a zero-shot setting." -jack-ryder at semeval-2023 task 5: zero-shot clickbait spoiling by rephrasing titles as questions,gpt-4-1106-preview,7,"The paper addresses the use of pre-trained models to manipulate and interact with prompts by rephrasing clickbait titles into questions to optimize the models' response towards the task of clickbait spoiling. Although not directly focusing on 'hard prefix prompts', this study is relevant to the broader field of prompt engineering, as it involves the strategic alteration of prompts to suit the capabilities of pre-trained QA models and to achieve specific outcomes without task-specific training. The rephrasing technique and optimization strategy for better alignment with pre-trained models' strengths are of interest in prompt engineering research." -anovl: adapting vision-language models for unified zero-shot anomaly localization,gpt-4-1106-preview,7,"The abstract discusses the adaptation of CLIP models for zero-shot anomaly localization which involves designing specialized prompts for text supervision, a key aspect of prompt engineering. The introduction of a unified domain-aware contrastive state prompting template is directly related to the study of how prompts influence model performance, which is a subset of prompt engineering. The focus on aligning text with specific visual representations indicates relevance as it showcases a practical application of prompt engineering in the field of computer vision and anomaly detection. However, the paper's primary focus is on anomaly localization rather than prompt engineering itself, which is why the rating is not closer to 10." -instruction tuning with lexicons for zero-shot style classification,gpt-4-1106-preview,7,"The abstract discusses the use of lexicons for instructing language models in style classification without the need for fine-tuning. 
This study is relevant to prompt engineering, as it explores how specific language structures (style lexicons) can be used to guide pre-trained language models to perform new tasks without additional training. The concept of using lexical cues fits within the larger framework of prompt engineering, which seeks to optimize prompts to elicit desired outputs from language models. However, the focus on 'style classification' and 'zero-shot performance' is slightly tangential to prompt engineering's central theme of crafting and testing various prompts, hence the rating is not a full 10." -zero-shot faithfulness evaluation for text summarization with foundation language model,gpt-4-1106-preview,8,"The paper's relevance to prompt engineering study is high, since it investigates the use of a new metric FFLM, which involves prefixing text to evaluate faithfulness in text summarization. This approach is directly related to how prompts, including hard-coded prefixes, can be engineered to improve the predictions of a language model. Although the main focus is on faithfulness evaluation rather than the study of prompts in general, the use of prefixes is a significant component of prompt engineering techniques." -the art of socratic questioning: zero-shot multimodal reasoning with recursive thinking and self-questioning,gpt-4-1106-preview,7,"The study introduces Socratic Questioning as a method to improve problem-solving in large-scale language models, which is closely related to prompt engineering as it informs how prompts can be structured to facilitate more complex reasoning in AI. The emphasis on recursive thinking and self-questioning aligns with designing prompts that elicit more detailed and nuanced responses. However, it slightly diverges from the specific topic of 'hard prefix prompts' as it discusses a broader technique rather than focusing solely on the effects of hard prefixes in prompts." -zero-shot refinement of buildings' segmentation models using sam,gpt-4-1106-preview,8,"The abstract discusses the adaptation of foundation models using prompting strategies, which is relevant to prompt engineering. Specifically, it mentions the use of prompts to augment a Segment Anything Model (SAM) with recognition abilities. This is a direct application of prompt engineering to improve the performance of AI models. The focus is not on a 'hard prefix prompt' as outlined in the initial request, which would fit the definition of prompt engineering more closely, but the use of prompts to refine the SAM model's capabilities suggests a strong relevance to the field." -mm-react: prompting chatgpt for multimodal reasoning and action,gpt-4-1106-preview,7,"The title and abstract of the study discuss 'MM-REACT,' a system designed to enhance the capabilities of language models like ChatGPT by integrating them with vision experts for multimodal reasoning and action. The relevance to prompt engineering study is significant given that MM-REACT involves designing textual prompts that can facilitate multimodal information processing. Although the study does not exclusively focus on 'hard prefix prompts,' the concept of textual prompt design lies at the core of prompt engineering, hence the relevance. This system demonstrates an application of prompt engineering principles in the context of multimodal reasoning, which is a subset of the broader field of prompt engineering." -can an embodied agent find your “cat-shaped mug”? 
llm-guided exploration for zero-shot object navigation,gpt-4-1106-preview,7,"The abstract describes 'Language-guided Exploration' (LGX), which is a novel algorithm that uses Large Language Models (LLMs) to assist an embodied agent in zero-shot object goal navigation. The relevance to prompt engineering is significant in that it involves leveraging LLMs and employing various prompting strategies to improve sequential navigational decisions. The study of different prompting strategies directly pertains to prompt engineering, as it impacts how the language model guides the agent. While the primary focus of the study seems to be on robot navigation and object detection, the aspects where LLMs are being utilized and prompting strategies are analyzed contributes to the field of prompt engineering studies, hence the rating of 7. However, it's not exclusively focused on hard prefix prompts or a comprehensive systematic review of such prompts in prompt engineering, which would have resulted in a higher rating." -the art of prompting: event detection based on type specific prompts,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering as it explores the effectiveness of type-specific prompts for event detection in various scenarios, including few-shot and zero-shot learning. It directly addresses how the construction and application of prompts can affect model performance, a crucial aspect of prompt engineering." -is evalita done? on the impact of prompting on the italian nlp evaluation campaign,gpt-4-1106-preview,8,"The provided title and abstract directly relate to prompt-based learning, a key component of prompt engineering. The study assesses the efficacy of these prompts in Italian NLP tasks, which contributes to the understanding of prompt-based learning within a specific linguistic context. Although the study is more focused on the applications and implications for evaluation campaigns, rather than the methodological exploration of 'hard prefix prompts', it remains significantly relevant to the field of prompt engineering, especially in demonstrating the practical implications and current challenges in the field." -clip also understands text: prompting clip for phrase understanding,gpt-4-1106-preview,8,"The paper explores the use of the text encoder of CLIP for phrase understanding, which relates directly to prompt engineering as it involves designing effective prompts to leverage the model's capabilities. The comparison with other language models like BERT underlines the importance of how prompts are formulated in model performance. This research contributes to the understanding of how different prompting strategies can impact the outcome of language understanding tasks. Although it doesn't focus on 'hard prefix prompts' as specified, the study is highly relevant to the broader field of prompt engineering and how prompts can be optimized for model understanding." -arggen: prompting text generation models for document-level event-argument aggregation,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering since it discusses the use of prompt-based methods for text generation in Information Extraction tasks, specifically for document-level event-argument aggregation. This demonstrates a practical application of prompt engineering in natural language understanding and reasoning, which aligns with the broader topic of prompt engineering study. 
However, it may not directly address the systematic review of 'hard prefix prompts,' hence the rating is not a full 10." -on the evaluations of chatgpt and emotion-enhanced prompting for mental health analysis,gpt-4-1106-preview,8,"The study evaluates how different prompting strategies, specifically those with emotional cues, affect the performance of a large language model like ChatGPT in the context of mental health analysis. Since prompt engineering involves the design of inputs that can effectively guide AI models to produce desired outputs, the research's focus on the impact of prompts enhanced with emotional information is highly relevant to the field of prompt engineering. The study's analysis of the efficacy of these prompts directly contributes to understanding and optimizing prompt design, which is a central concern in prompt engineering. However, the score is not a perfect 10 because the study is not exclusively dedicated to prompt engineering — it also delves into the broader scope of mental health analysis performance of language models." -reasoning implicit sentiment with chain-of-thought prompting,gpt-4-1106-preview,8,"The study addresses advanced prompt engineering techniques for implicit sentiment analysis (ISA) using chain-of-thought prompting, introducing a Three-hop Reasoning (THOR) framework. This is highly relevant to the field as it demonstrates prompt engineering's applicability in complex reasoning tasks and shows how to structure prompts to induce reasoning steps. The relevance is not rated a perfect 10 since the study focuses more on the reasoning aspect than on prompt engineering itself, but it is nonetheless a significant contribution to the area of prompt construction and optimization." -pearl: prompting large language models to plan and execute actions over long documents,gpt-4-1106-preview,9,"The study introduces PEARL, a framework specifically designed for prompting large language models (LLMs) that enhances their capability to process and reason over lengthy texts. This is highly relevant to prompt engineering as it directly tackles challenges in designing prompts that assist LLMs in managing complex tasks such as decomposing questions, planning, and executing a sequence of actions to generate accurate responses. The successful application of PEARL over challenging datasets and its comparison with other prompting methods like zero-shot and chain-of-thought demonstrates a significant advancement in the field of prompt engineering, particularly for tasks involving extensive reasoning. It only falls short of a perfect rating because it addresses a specific subset of prompt engineering focused on long documents rather than the entire breadth of prompt engineering." -multimodal procedural planning via dual text-image prompting,gpt-4-1106-preview,9,"The provided abstract discusses a dual-modality prompting method involving text and image prompts to guide procedural planning, which is highly relevant to the field of prompt engineering since it directly deals with how prompts can be engineered and optimized for multi-modal tasks. The method described leverages the capabilities of large language models and text-to-image generation, which are both core technologies relevant to prompt engineering. 
The relevance isn't a perfect 10 due to the specific focus on the generation of text-image pairs for task completion, rather than on the hard prefix prompts mentioned in the initial query, but the study still contributes significantly to the broader topic of how prompts can be structured and used effectively."
-federated prompting and chain-of-thought reasoning for improving llms answering,gpt-4-1106-preview,7,"The study appears to address question handling and improving response accuracy in Large Language Models through techniques that could be considered part of prompt engineering, namely the Self-Consistency (SC) and Chain-of-Thought (CoT) techniques. Prompt engineering often involves strategies to enhance the model's understanding and output, and these techniques align with such goals. While the study does not directly mention 'hard prefix prompts', it engages with the broader area of prompts and their optimization, therefore the relevance is moderate to high."
-code prompting: a neural symbolic method for complex reasoning in large language models,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it explores advanced prompting methods (code prompting) in the context of improving the performance of large language models in complex reasoning tasks. This directly pertains to the development and evaluation of new prompting techniques, which is a core aspect of prompt engineering. The abstract indicates significant experimental work and analysis that can contribute to the field, such as comparing code prompting with the existing chain-of-thought (CoT) prompting. However, the study seems to focus on a specific type of prompting (neural symbolic prompting with code), rather than a comprehensive systematic review. Hence, the rating is not a full 10, but it's still high because of the clear relevance and potential impact on the study of prompting methods."
-set-of-mark prompting unleashes extraordinary visual grounding in gpt-4v,gpt-4-1106-preview,7,"The relevance of this study to prompt engineering lies in its methodology of using a visual prompting method (Set-of-Mark or SoM) to improve the performance of a language model with visual capabilities (GPT-4V). Although the study is focused on enhancing the visual grounding aspects of multimodal models, it does indirectly relate to the broader concept of prompt engineering by demonstrating a specific way to structure input (in this case, visual input) to achieve better performance on tasks that require understanding and interpreting visual information. Thus, the study is somewhat relevant as it expands the scope of prompt engineering into the multimodal domain, demonstrating that the structuring of prompts is important not just in text but also in how models interact with and interpret visual data."
-legal syllogism prompting: teaching large language models for legal judgment prediction,gpt-4-1106-preview,8,"The paper focuses on a specific application of prompt engineering in the context of legal judgment prediction using a technique named 'legal syllogism prompting'. Although it is not about 'hard prefix prompts' per se, it explores a similar area by using prompts to direct the response of large language models. This is relevant to prompt engineering as it demonstrates the application of custom prompts to structure logical reasoning in AI, which is in line with the broader study of how prompts can be designed to elicit specific types of responses from language models.
The systematic review on hard prefix prompts would likely cover various approaches in prompt engineering including such domain-specific applications; hence, the paper could offer valuable insights into this niche but relevant application within the field." -ramp: retrieval and attribute-marking enhanced prompting for attribute-controlled translation,gpt-4-1106-preview,9,"The study presents 'Retrieval and Attribute-Marking enhanced Prompting (RAMP)', a method that modifies and enhances the standard prompting approach in the context of machine translation, specifically for attribute-controlled translation. The inclusion of attribute annotations and the use of a semantic retrieval component are innovative strategies within prompt engineering. This approach is relevant to prompt engineering as it directly involves manipulating and engineering prompts to improve performance on a language task. It is particularly focused on prompting in the context of large language models, which is a core area of interest in prompt engineering studies. Although the paper is focused on translation tasks, the techniques and concepts discussed may be applicable to prompt engineering in broader contexts as well." -pieclass: weakly-supervised text classification with prompting and noise-robust iterative ensemble training,gpt-4-1106-preview,8,"The paper discusses PIEClass, which includes a pseudo label acquisition module utilizing zero-shot prompting of pre-trained language models (PLMs). This is relevant to prompt engineering because it involves using prompts to facilitate text classification in the absence of extensive datasets. It shows an application of prompt engineering in enhancing understanding beyond static keyword matching, which is a core challenge in the field. The iterative ensemble training module, while interesting as an approach to classifier training, is less directly related to prompt engineering. Hence the score is an 8 instead of a perfect 10, as the relevance is strong but not exclusively focused on prompt engineering." -self-explanation prompting improves dialogue understanding in large language models,gpt-4-1106-preview,8,"The study focuses on a novel 'Self-Explanation' prompting strategy specifically designed to improve Large Language Models' (LLMs) understanding in task-oriented dialogues, which falls under the broader category of prompt engineering. Although it does not deal with 'hard prefix prompts' per se, the research is highly relevant to the field of prompt engineering because it explores new methods for improving the performance of LLMs in processing complex dialogue contexts. The relevance rating is not a full 10 because the study is not directly about 'hard prefix prompts,' but it is significant due to its contribution to the overarching goal of optimizing prompts to enhance model comprehension." -fixed input parameterization for efficient prompting,gpt-4-1106-preview,10,"The abstract provided discusses the Fixed Input Parameterization (FIP) problem in the context of prompt engineering and how it aims to make the use of fixed prompts more efficient by integrating them into the parameters of a Language Model (LM). This is highly relevant to prompt engineering study as it tackles the optimization of prompt usage, which is a core aspect of prompt engineering in language models. 
The efficiency improvements and the exploration of methodologies for FIP in specific tasks such as persona-dependent conversation, semantic parsing, and zero-shot learning with task instructions offer direct insights into prompt engineering. Therefore, the content of this abstract is directly related to the field of prompt engineering, addressing both the technical and application aspects of the topic."
-map: low-data regime multimodal learning with adapter-based pre-training and prompting,gpt-4-1106-preview,7,"The study discusses the use of prompting in the context of vision-language multimodal learning, which is pertinent to prompt engineering. The focus on a moderate-size model (MAP) that leverages adapter-based pretraining and prompting for efficient transfer learning in a low-data regime demonstrates the application of prompting strategies. While the specifics of 'hard prefix prompts' are not mentioned, the concept of prompting is central to the paper, thereby making it relevant to the broader field of prompt engineering studies. However, the relevance is not maximal since the primary focus seems to be on the application of prompting within multimodal learning and not on the systematic review of prompt engineering itself."
-p5: plug-and-play persona prompting for personalized response selection,gpt-4-1106-preview,8,"The presented paper is highly relevant to prompt engineering due to its focus on using prompt sequences for personalized response selection in chatbots, which is a specific application of prompt engineering. The proposed method integrates the use of prompts to manage conversation flow based on persona, and it directly pertains to the engineering of prompts that help personalize chatbot responses. However, the paper is not exclusively about 'hard prefix prompts' (a term often related to the fixed instruction or text added to input data in language models to steer the response), which might have been implied in the phrase 'comprehensive systematic review on hard prefix prompts' in the original prompt. The paper focuses on persona prompting, which is a subset of prompt engineering but does not represent a broad overview or systematic review of hard prefix prompts in general. Therefore, while very relevant, the rating is not a full 10."
-cof-cot: enhancing large language models with coarse-to-fine chain-of-thought prompting for multi-domain nlu tasks,gpt-4-1106-preview,8,"The presented work introduces the Coarse-to-Fine Chain-of-Thought (CoF-CoT) approach as a form of prompt engineering which is highly relevant to the field. It focuses on enhancing the reasoning capabilities of Large Language Models in Natural Language Understanding tasks. While the study might not directly address 'hard prefix prompts,' it proposes a novel way of structuring prompts that allows for a breakdown of tasks into multiple reasoning steps. This is inherently connected to the concept of prompt engineering, as it involves designing prompts that guide the model through a reasoning process, thus fitting well within the scope of prompt engineering studies. The reason for not rating it a 10 is that it doesn't explicitly state a focus on 'hard prefix prompts,' which the original query specified, but it is nonetheless substantially relevant."
-a communication theory perspective on prompting engineering methods for large language models,gpt-4-1106-preview,9,"The provided title and abstract offer a high level of relevance to the field of prompt engineering study as it directly discusses prompting methods for large language models, an essential component of prompt engineering. It suggests a novel perspective by framing the review within communication theory, which is crucial for understanding the interactions between humans and AI in the PE context. Additionally, the abstract references practical use-cases in the form of typical tasks and discusses the future developments in PE methodologies, all of which are core to the study of prompt engineering. The only reason it doesn't receive a full score is due to the lack of specific detail on 'hard prefix prompts', which is mentioned in the prompt. However, the general connection to PE is strong, justifying the high rating." -2nd place winning solution for the cvpr2023 visual anomaly and novelty detection challenge: multimodal prompting for data-centric anomaly detection,gpt-4-1106-preview,7,"The technical report describes a methodology for zero-shot anomaly segmentation using multi-modal prompts, which falls under the broader category of prompt engineering. Multimodal prompting constitutes a form of prompt engineering as it involves designing and utilizing prompts that can effectively guide machine learning models, specifically foundation models, for particular tasks such as anomaly detection. This is relevant to prompt engineering study as it includes the formulation and application of prompts; however, the focus on 'hard prefix prompts' is not explicitly stated. Therefore, the relevance is significant but not complete in the context of a systematic review on hard prefix prompts in prompt engineering." -prompting segmentation with sound is generalizable audio-visual source localizer,gpt-4-1106-preview,8,"The abstract describes the use of a novel 'encoder-prompt-decoder' paradigm which directly relates to prompt engineering, as it involves constructing Semantic-aware Audio Prompts (SAPs) to improve model performance. This approach aims to enable pre-trained models to focus on sounding objects and deal with data scarcity and varying distributions, both of which are significant concerns in prompt engineering. Although the study focuses specifically on the audio-visual domain and not directly on general prompt engineering methodologies, its innovative use of prompts to bridge the semantic gap between modalities indicates its relevance to the field of prompt engineering. Therefore, it receives a high relevance rating." -aspiro: any-shot structured parsing-error-induced reprompting for consistent data-to-text generation,gpt-4-1106-preview,7,"The presented abstract details a novel approach (ASPIRO) for structured data verbalization which utilizes prompt engineering techniques such as re-prompting LLMs based on parsing checks. However, the focus appears to be more on reducing parsing errors and improving data-to-text generation consistency than on the study of hard prefix prompts specifically. Therefore, it is moderately relevant to the broader topic of prompt engineering but does not focus on a 'comprehensive systematic review on hard prefix prompts.'" -prompting strategies for citation classification,gpt-4-1106-preview,8,"The paper directly addresses prompt engineering by investigating the effectiveness of various prompting strategies for a specific NLP task – citation classification. 
This is highly relevant to the study of prompt engineering as it explores how different prompting methods can influence the performance of language models. Although it doesn't specifically mention 'hard prefix prompts', the mention of 'Fixed-prompt LM tuning' suggests it touches on the subject of static prompts, which could be related. The research's systematic approach to comparing these strategies and the inclusion of newly proposed methods indicate a substantial contribution to the understanding of how prompting affects language model performance, making it fairly relevant to the field of prompt engineering." -can large language models transform computational social science?,gpt-4-1106-preview,8,"The research discussed in the title clearly has implications for prompt engineering, as it talks about using Large Language Models (LLMs) for Computational Social Science (CSS) tasks. The abstract mentions 'prompting best practices,' indicating that the study likely delves into how to formulate prompts to optimize LLM performance in CSS applications. While the study might not focus exclusively on 'hard prefix prompts' but rather on a broader range of prompting techniques, the findings would still be highly relevant to the field of prompt engineering since they contribute to understanding how to effectively employ prompts in complex analysis tasks, such as CSS. The relevance is not rated as a full 10 because the study’s primary focus seems to be on broad LLM application in CSS rather than focused on prompt engineering alone." -solving challenging math word problems using gpt-4 code interpreter with code-based self-verification,gpt-4-1106-preview,8,"The abstract describes a study focusing on the development of a prompting strategy (explicit code-based self-verification) to enhance the performance of the GPT-4 Code Interpreter in solving math problems. Although this study is centered on prompting methods, it is specifically tailored to mathematical reasoning and involves verification of the model's output. It is highly relevant to the field of prompt engineering in that it presents a novel approach to using prompts to improve the accuracy of a language model's responses. The reason for not giving a full score of 10 is that the study is particularly focused on math word problems, which is just one aspect of prompt engineering." -learning to decompose visual features with latent textual prompts,gpt-4-1106-preview,8,"The abstract provided discusses an innovation in prompt engineering, specifically within the domain of vision-language models. The study introduces Decomposed Feature Prompting (DeFo), which utilizes textual prompts as part of the learning process, aligning with the concept of prompt engineering. The relevance to prompt engineering is high because it directly involves the use of textual inputs to improve the feature extraction in a dual-model architecture. However, it does not address 'hard prefix prompts' specifically, which suggests that the content is more general in the realm of prompt engineering rather than focused on a comprehensive systematic review of hard prefix prompts." -xricl: cross-lingual retrieval-augmented in-context learning for cross-lingual text-to-sql semantic parsing,gpt-4-1106-preview,7,"The abstract describes a system (XRICL) that involves constructing prompts to improve cross-lingual Text-to-SQL semantic parsing, which is relevant to the field of prompt engineering as it deals with the creation and optimization of prompts for language models. 
However, the focus on retrieval-augmented in-context learning and the cross-lingual aspect means it is not entirely centered on 'hard prefix prompts,' a term which denotes a subset of prompt engineering focused on rigid or inflexible prompts. The study still contributes valuable insights to the broader domain of prompt engineering, hence the moderately high relevance rating."
-multidimensional evaluation for text style transfer using chatgpt,gpt-4-1106-preview,7,"The paper's relevance to prompt engineering study is moderate to high because it investigates the use of ChatGPT as an evaluator for text style transfer, which involves prompt engineering to some extent. Getting ChatGPT to perform a zero-shot evaluation entails designing prompts that effectively convey the evaluation task to the model. Therefore, the study indirectly contributes to understanding how different prompts affect the performance of large language models in generating or evaluating stylized text. However, the paper primarily focuses on the application of ChatGPT as an evaluator and correlates its performance with human judgments, rather than explicitly studying hard prefix prompts or the mechanics of prompt construction, hence the rating is not a full 10."
-yes but.. can chatgpt identify entities in historical documents?,gpt-4-1106-preview,7,"The abstract indicates that the study explores ChatGPT's ability to recognize entities within historical documents, specifically addressing the specificity of prompting, which is an integral aspect of prompt engineering. Although the core focus seems to be on entity recognition and classification, the mention of 'the specificity of prompting' suggests that the study does delve into how different prompts affect ChatGPT's performance in a task relevant to natural language processing. Therefore, while it is not entirely focused on 'prompt engineering' as a primary subject area, it is relevant due to its examination of prompts' effectiveness, which is a significant component of prompt engineering studies."
-chain of thought prompt tuning in vision language models,gpt-4-1106-preview,7,"The document discusses 'chain of thought prompt tuning in vision language models,' which is a specific method within prompt engineering that aims at improving the reasoning process of AI models in image-related tasks. While the topic is closely related to the concept of prompt engineering, it is more narrowly focused on vision-language models and does not directly touch on 'hard prefix prompts', which seems to be the focus of the initial inquiry. The relevance is rated as 7 since the technique of chain of thought prompting falls under the wider umbrella of prompt engineering strategies and contributes to the field, even if it is not a direct study on hard prefix prompts."
-is chatgpt a good personality recognizer? a preliminary study,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it involves evaluating ChatGPT's abilities in a specific natural language processing task using various prompting strategies, including the 'level-oriented' strategy, which is a type of hard prompt engineering tailored to guide the AI's reasoning. Although the primary focus is on personality recognition, the methodology and implications of different prompting strategies, including zero-shot chain-of-thought, directly contribute to the knowledge and optimization of prompt engineering.
Hence, the relevance rating is high but not maximum, as the study does not exclusively concentrate on prompt engineering but also includes the application of the derived prompts in various downstream tasks." -let's do a thought experiment: using counterfactuals to improve moral reasoning,gpt-4-1106-preview,8,"The provided abstract discusses a new prompting framework, 'Thought Experiments,' which involves the engineering of prompts to teach language models improved moral reasoning using counterfactuals. While the study itself is not directly focused on 'hard prefix prompts,' it is highly relevant to the field of prompt engineering, as it explores the design of specialized prompts to enhance the performance of language models in a specific type of reasoning task. Therefore, the relevance is quite high for those interested in the broader topic of how different prompting approaches can impact model performance. However, it doesn't address 'hard prefix prompts' explicitly, hence the rating is not a perfect 10." -symbolic math reasoning with language models,gpt-4-1106-preview,7,"The abstract provided discusses the use of large language models (LLMs) such as OpenAI's GPT-3 for solving math word problems and explores their reasoning capabilities. Although the primary focus is on these models' ability to solve mathematical problems symbolically and numerically, it does mention the role of specific prompting techniques and their influence on the model's problem-solving process. Therefore, while the abstract is not directly focused on a review of 'hard prefix prompts,' it does pertain to prompt engineering in the broader context of eliciting reasoning and explanations from a language model. This justifies a moderate-to-high relevance rating, as the paper could potentially contribute valuable insights into the efficacy of prompting strategies in complex problem-solving tasks with language models." -instructexcel: a benchmark for natural language instruction in excel,gpt-4-1106-preview,7,"The provided abstract describes a study involving the creation of a benchmark for assessing Large Language Models' (LLMs) capability to interpret natural language instructions and generate Excel-related code. This directly relates to the field of prompt engineering, as it concerns the design and testing of prompts that efficiently guide a language model to perform domain-specific tasks. However, the study does not explicitly mention 'hard prefix prompts' or a 'systematic review' of such prompts, but rather it is an example of applied prompt engineering in a practical, task-oriented context. Therefore, the relevance is high but not absolute, hence a rating of 7." -improving zero-shot generalization and robustness of multi-modal models,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it explicitly addresses the issue of improving the performance of multi-modal models by refining how text prompts are used. The research investigates how ambiguity in text prompts can lead to a performance gap in zero-shot tasks and proposes a methodology to enhance the accuracy by leveraging semantic label hierarchies in prompts. While the study does not focus on 'hard prefix prompts' per se, it does contribute to the overall understanding of how prompt design influences model predictions, making it relevant to the field of prompt engineering." 
-enhancing cross-lingual natural language inference by soft prompting with language-independent knowledge,gpt-4-1106-preview,7,"The abstract discusses 'Soft prompt learning framework' and its application in cross-lingual natural language inference, which is relevant to prompt engineering as it deals with a form of prompts—soft prompts. Although it does not specifically address 'hard prefix prompts,' which the original prompt inquires about, the study of soft prompts is related and contributes to the broader field of prompt engineering. It would be more relevant if the specifics of 'hard prefix prompts' were examined, therefore it doesn't receive a full score." -precise zero-shot dense retrieval without relevance labels,gpt-4-1106-preview,7,"The relevance to prompt engineering is fairly high, as the abstract describes a process where a language model is prompted to generate a hypothetical document in a zero-shot context, which is clearly a form of prompt engineering. However, the focus of the study seems to be more on dense retrieval and encoding relevance rather than on the detailed study of prompt engineering or the effects of different prompting techniques. Thus, while relevant, the study may not be addressing prompt engineering in a direct or comprehensive manner as a primary focus." -seqzero: few-shot compositional semantic parsing with sequential prompts and zero-shot models,gpt-4-1106-preview,7,"The paper presents a novel approach in few-shot learning and semantic parsing, which directly relates to improving the performance of language models with limited data. Prompt engineering is an aspect of tuning language models to better interpret and respond to prompts. Since SeqZero involves creating sequential prompts that aid in generating outputs for sub-problems in semantic parsing, this study is relevant to prompt engineering as it pertains to the construction and optimization of prompts for improved model performance. However, the study's primary focus is not on the prompt engineering process itself, but rather on how prompts are utilized within a specific application of semantic parsing to achieve state-of-the-art results. Therefore, it is relevant, but not exclusively focused on the prompt engineering aspect." -from images to textual prompts: zero-shot visual question answering with frozen large language models,gpt-4-1106-preview,9,"The abstract describes a method (Img2LLM) involving the generation of prompts that effectively allow large language models (LLMs) to perform zero-shot visual question-answering (VQA) tasks. This is highly relevant to prompt engineering because Img2LLM essentially acts as a prompt engineering tool, transforming image content into textual prompts that enable LLMs to understand and respond to visual data without the need for end-to-end training. It directly involves the design and application of effective prompts to improve the utility of LLMs in a cross-modality context. The only reason it does not receive a full 10 rating is because it specifically pertains to visual data and VQA, whereas prompt engineering can also encompass other forms of data and tasks." -rethinking the role of demonstrations: what makes in-context learning work?,gpt-4-1106-preview,8,"The presented paper is highly relevant to prompt engineering as it delves into the mechanics of in-context learning, which is a core aspect of prompt engineering for large language models. 
Understanding the role of demonstrations and the impact of various aspects of those demonstrations informs how prompts should be designed. While the paper does not directly address 'hard prefix prompts,' it does explore the components of demonstrations that influence a model's performance, which can be directly applied to the design and optimization of prompts (including hard prefixes) to improve model behavior. Therefore, the findings of this study are important for advancing the science of prompt engineering, though not exclusively focused on 'hard prefix prompts.'" -a survey for in-context learning,gpt-4-1106-preview,7,"The survey deals with in-context learning (ICL), which is closely related to prompt engineering, as ICL often involves using prompts to deliver the training examples to language models. Although hard prefix prompts, which are more specific in their constructions, are not mentioned explicitly, prompting strategies in general are an integral part of ICL. The survey's focus on the broader aspects of prompting strategies makes it relevant to the field of prompt engineering. However, a more direct discussion on hard prefix prompts would be required to make the paper fully applicable to a comprehensive systematic review on that specific topic." -what can transformers learn in-context? a case study of simple function classes,gpt-4-1106-preview,7,"The abstract discusses 'in-context learning' which is a key aspect of prompt engineering as it deals with the ability of models to learn from the information provided in a prompt. The study's focus on how transformers can learn from in-context examples to perform tasks is relevant to understanding and improving prompt-based learning mechanisms, albeit it focuses more specifically on function classes rather than hard prefix prompts. It does not directly address prompt engineering as a systematic review but is certainly related to the broader category of how models respond to prompts. Therefore, it receives a high but not maximum relevance score." -what makes good in-context examples for gpt-3?,gpt-4-1106-preview,9,"The abstract describes a study focused on optimizing the selection of in-context examples for GPT-3's prompt generation, which is highly relevant to the field of prompt engineering. The research aims to improve GPT-3's performance by retrieving semantically-similar examples to the test query, which directly involves engineering better prompts for the model. The significant improvements reported in the benchmarks further underscore the relevance of this study to prompt engineering. The only reason it does not receive a perfect score is that it is focused on GPT-3, and prompt engineering can also involve other models or broader methodologies." -developing prompts from large language model for extracting clinical information from pathology and ultrasound reports in breast cancer,gpt-4-1106-preview,9,"The abstract presents a focused application of prompt engineering to improve data extraction from medical records using a large language model, which is highly relevant to prompt engineering studies. The study evaluates the effectiveness of specialized prompts for the task and discusses their development cost and accuracy, providing concrete data about prompt engineering in a real-world context. It doesn't directly address 'hard prefix prompts', but it's substantially related to engineering prompts for specific purposes." 
-swectrl-mini: a data-transparent transformer-based large language model for controllable text generation in swedish,gpt-4-1106-preview,8,"The relevance to prompt engineering study is high because the abstract describes the 'SweCTRL-Mini' model, which utilizes special tokens in generation prompts to control the genre of the generated text. This capability is directly related to prompt engineering, where prefixes or special tokens are crafted to steer the output of language models. While the abstract does not specifically focus on a 'systematic review on hard prefix prompts,' it does highlight the use of controlled prompts which is a significant aspect of prompt engineering. Therefore, the rating is slightly lowered because the paper does not explicitly cover a systematic review but is substantially related to the concept of hard prompts in controlling text generation." -optimizing continuous prompts for visual relationship detection by affix-tuning,gpt-4-1106-preview,7,"This abstract details a novel method involving affix-tuning transformers for optimizing visual relationship detection. While it does not explicitly use the term 'hard prefix prompts,' it does discuss the concept of 'affix-tuning,' which could be seen as a form of prompt engineering where a 'continuous task-specific vector' is optimized. This is somewhat relevant to prompt engineering as it relates to the training and utilization of model parameters in a task-specific manner. The approach of using 'prompt template' also indicates work in the direction of designing inputs that can influence model behavior, which is central to prompt engineering. However, the main focus appears to be on visual relationship detection rather than on the study or characterization of prompts (textual) in NLP tasks, hence not a perfect fit, but still relevant." -contextual transformer for offline meta reinforcement learning,gpt-4-1106-preview,8,"The presented abstract is relevant to prompt engineering as it discusses the use of prompts to improve sequence modeling-based offline reinforcement learning algorithms. The concept of prompt tuning is central to the study, and the introduction of the Contextual Meta Transformer (CMT) shows an innovative application of prompts in guiding the model towards desired outcomes and improving generalization on unseen tasks. The relevance is high since prompt engineering is explicitly mentioned and is a key part of the methodology. However, it focuses specifically on RL contexts and may not cover other aspects or domains of prompt engineering, hence the rating is not a full 10." -chinese text paraphrase recognition based on openprompt introducing hybrid prompts,gpt-4-1106-preview,9,"The abstract discusses the use of hybrid prompts, which are directly related to prompt engineering, offering a method to enhance the knowledge extraction from pretrained language models for paraphrase recognition tasks. It demonstrates a practical application of prompt engineering in the form of OpenPrompt and hybrid prompts, providing relevant outcomes like the improvement in F1 score and accuracy when using such prompts. This study helps in understanding prompt-based methods, hence the high relevance rating to prompt engineering. Only a full read-through could confirm if it tackles 'hard prefix prompts' specifically, but the mention of hybrid prompts with [mask] slots strongly suggests relevance to the field of prompt engineering." 
-learning to compress prompts with gist tokens,gpt-4-1106-preview,9,"The abstract describes a method directly related to prompt engineering, focusing on the efficiency of using prompts with language models. The introduction of 'gisting' to compress prompts into 'gist' tokens falls within the field of prompt engineering as it aims to optimize the use of prompts in terms of computational resources. The mentioned benefits, such as compute efficiency, compression ratios, and minimal loss in output quality, are highly relevant to the study of prompt engineering. The relevance is not rated as a perfect 10 because the specific context of 'hard prefix prompts' is not directly addressed, but the overall subject is still substantially pertinent to the field." -zero-shot entity and tweet characterization with designed conditional prompts and contexts,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it involves the use of 'hard prefix prompts' which are a form of prompt construction. It explores the capabilities of GPT-2 in zero-shot settings, which is an important aspect of prompt engineering, particularly when it comes to designing prompts that guide the model to perform specific tasks without prior task-specific training. The focus on human psychology-inspired and logical conditional prefixes is directly related to engineering prompts to produce desired outputs. However, the research is not exclusively focused on the systematic review of hard prefix prompts but rather on the application of these prompts for a specific task, which is why it does not receive a full score." -instruction-vit: multi-modal prompts for instruction learning in vit,gpt-4-1106-preview,8,"The paper presents an application of prompt engineering in the context of visual transformers, focusing on multi-modal prompts for instruction learning, which is highly relevant to prompt engineering. Although it primarily discusses visual transformer models and their application to image classification tasks, the concept of using text or image prompts to improve model performance is directly connected to the field of prompt engineering. The review on 'hard prefix prompts' might have a different focus compared to multi-modal prompts in visual transformers, but both share the overarching theme of enhancing model capabilities through prompts. Hence, the relevance is high, although not exact, hence not a perfect score of 10." -clinical decision transformer: intended treatment recommendation through goal prompting,gpt-4-1106-preview,7,"The relevance of the study titled 'clinical decision transformer: intended treatment recommendation through goal prompting' to prompt engineering is moderately high. The concept of 'goal prompting' directly connects to the practice of designing prompts to achieve specific outputs in a natural language processing context. Although this paper is primarily focused on a medical application, the technique of formulating prompts to guide the decision-making output of an AI model is a key aspect of prompt engineering. The concept could potentially be applied to other areas in AI where prompt design is crucial. However, the specificity to clinical recommendations and the absence of a direct focus on hard prefix prompts or a broad range of prompt engineering applications slightly reduce its overall relevance." 
-adversarial transformer language models for contextual commonsense inference,gpt-4-1106-preview,8,"The paper discusses the use of both hard prompts (specific words) and soft prompts (virtual learnable templates) in the context of language model prompting to control the generation of commonsense assertions, which is directly related to prompt engineering. Although the paper's primary focus is on commonsense inference, the technique of 'hinting' as described involves engineering prompts to guide the language model, which is relevant to the study of prompt engineering." -vima: robot manipulation with multimodal prompts,gpt-4-1106-preview,8,"The study described in the abstract illustrates a novel application of prompt-based learning in the domain of robotics rather than just natural language processing. The use of 'multimodal prompts' that includes both textual and visual tokens is directly related to the concept of prompt engineering, as it involves crafting prompts that a machine learning model interprets to perform various tasks. Although it does not explicitly address the engineering of 'hard prefix' prompts, the systematic development of multimodal prompts for robot manipulation is a significant contribution to prompt engineering research. The study's relevance is slightly lessened only due to the lack of a specific focus on 'hard prefix' prompts, which the original query stipulates." -prompt-based tuning of transformer models for multi-center medical image segmentation of head and neck cancer,gpt-4-1106-preview,7,"The paper describes the use of prompts in the form of 'learnable parameters' for fine-tuning pre-trained vision transformer models in medical image segmentation tasks, which is relevant to the concept of prompt engineering. This kind of study could potentially contribute to the field of prompt engineering as it explores how altering input prompts (in this case, learnable parameters) can adapt a model to new data. However, the focus here is on medical image segmentation and not on textual data or NLP models which are more common areas for prompt engineering. Thus, the relevance is significant but not entirely direct to studies narrowly focused on hard prefix prompts for NLP applications." -prompt-engineering and transformer-based question generation and evaluation,gpt-4-1106-preview,8,"The study presented involves the application of prompt engineering to improve the performance of a transformer-based question generation model. Since prompt engineering is integral to this research, with the effectiveness of various prompts being directly assessed and compared, it shows high relevance to the field of prompt engineering. However, it does not focus solely on 'hard prefix prompts' specifically, which may be a more nuanced subtopic within prompt engineering. Therefore, the relevance rating is not a full 10." -prompt guided transformer for multi-task dense prediction,gpt-4-1106-preview,7,"The presented abstract describes a research paper regarding a model called Prompt Guided Transformer (PGT), which explicitly utilizes task-specific prompts within its architecture. The use of prompts is integral to the model's operation, making it highly relevant to studies on prompt engineering. However, it seems to focus more on parameter efficiency and architecture design for multi-task learning rather than the systematic review of 'hard prefix prompts' or broad prompt engineering strategies, hence the rating does not reach the maximum." 
-efficient model personalization in federated learning via client-specific prompt generation,gpt-4-1106-preview,8,"The abstract describes a methodology for personalizing machine learning models in a federated learning context using client-specific prompt generation. Although it does not explicitly mention 'hard prefix prompts', it is highly relevant to prompt engineering as it discusses the generation and adaptation of prompts to improve model performance on distributed client-specific data. This is a crucial aspect of prompt engineering, which typically involves optimizing inputs to pre-trained models to achieve better customization and efficiency. Therefore, the relevance of the paper to prompt engineering is high, although it may not directly focus on the specific subset of 'hard prefix prompts'." -kosmos-2.5: a multimodal literate model,gpt-4-1106-preview,8,"The abstract describes a model that uses task-specific prompts to achieve its multimodal literate capabilities, which is highly relevant to the study of prompt engineering. The ability to adapt the model for various text-intensive image understanding tasks with different prompts through supervised fine-tuning underscores the relevance of prompt engineering to the model's functionality. Although the main focus of Kosmos-2.5 is on machine reading of text-intensive images, the mention of flexible text representations and task-specific prompts indicates that prompt engineering is a significant component of the research. The rating is not a full 10 because the primary focus seems to be on the model's multimodal capabilities rather than exclusively on prompt engineering." -automated reading passage generation with openai's large language model,gpt-4-1106-preview,7,"The study is relevant to prompt engineering as it involves using 'carefully engineered prompts' to guide GPT-3 in generating reading passages that are appropriate for a specific educational level and style. The engineering aspect of the prompts plays a crucial role in the automated item generation process mentioned in the abstract, ensuring that the AI-generated text conforms to certain standards and matches original content in terms of structure and difficulty. While the focus is on AIG and not specifically on the study of 'hard prefix prompts,' the research contributes valuable insights into how tailored prompts can be used to guide the output of a language model to meet predefined criteria. Therefore, it has a significant relevance to the field of prompt engineering, even though it might not directly address the concept of hard prefix prompts in systematic review terms." -prompt tuning gpt-2 language model for parameter-efficient domain adaptation of asr systems,gpt-4-1106-preview,8,"The abstract discusses the use of 'domain-prompts,' which seems to be a technique closely related to prompt engineering, as it involves training domain-specific embeddings to adapt a language model to new domains. This method resembles hard prompt tuning where prompts are fixed and designed to prime the model for a specific task or domain. The study's relevance is high for prompt engineering research, particularly within the context of ASR systems and parameter-efficient adaptations. However, it doesn't discuss 'hard prefix prompts' specifically; it mentions 'domain-prompts' which may or may not be exactly the same concept. Hence, the rating is not a full 10, reflecting this small uncertainty." 
-llama-adapter: efficient fine-tuning of language models with zero-init attention,gpt-4-1106-preview,7,"The abstract describes the development of a method for fine-tuning language models using a set of learnable adaption prompts, which is relevant to prompt engineering, particularly in the context of instruction-following models. The integration of these prompts into the higher transformer layers is a technique related to prompt engineering as it involves modifying the input sequence to achieve a desired behavior from the model. However, the study seems to be more focused on an efficient fine-tuning mechanism rather than on the specifics of designing prompts (hard prefixes), so it is not a perfect match to prompt engineering studies that focus exclusively on hard prefix prompts. Therefore, the rating acknowledges the relevance of the learnable adaption prompts but is not a full 10 due to the broader scope of the study." -in-context learning of large language models explained as kernel regression,gpt-4-1106-preview,7,"The study presents an analysis of in-context learning in large language models (LLMs), a concept closely related to prompt engineering since in-context learning involves providing LLMs with carefully crafted prompts (examples) to shape their output without updating the models' parameters. Understanding the mechanism behind LLMs' in-context learning capabilities could contribute valuable insights into the design of effective prompts, potentially improving prompt engineering strategies. However, the study does not directly focus on 'hard prefix prompts,' which are specific types of prompts, or on a systematic review of prompt engineering studies, so the relevance is substantial but not complete." -prompt tuning of deep neural networks for speaker-adaptive visual speech recognition,gpt-4-1106-preview,8,"The study presents prompt tuning methods for speaker-adaptive Visual Speech Recognition (VSR), which parallels prompt tuning in Natural Language Processing (NLP). Though the context is VSR rather than text-based models, the principles of prompt engineering (e.g., fine-tuning prompts for adaptation without changing the entire pre-trained model) are highly relevant to the prompt engineering study. As such, the techniques and results from this study could inform prompt engineering practices, especially those that deal with adaptation to new data or domains using small amounts of adaptation data. This makes it significantly relevant, though slightly less if the focus of the prompt engineering study is strictly on text-based NLP models." -à-la-carte prompt tuning (apt): combining distinct data via composable prompting,gpt-4-1106-preview,9,"The abstract discusses 'À-la-carte Prompt Tuning (APT)' which is directly related to prompt engineering as it deals with the methodology of tuning and composing prompts for transformer-based models. The approach to train individual prompts and compose them based on user-defined criteria is highly relevant to the study of prompt engineering. This could offer insights into the mechanics of prompt tuning and its practical applications in customizing machine learning models to specific data sets or user preferences. The only reason it doesn't score a perfect 10 is that the description does not explicitly mention 'hard prefix prompts', thus it may not cover the entire scope of the prompt engineering study mentioned in the prompt." 
-proof of concept: using chatgpt to teach emergency physicians how to break bad news,gpt-4-1106-preview,7,"The abstract highlights the use of detailed prompts to create realistic clinical scenarios and provide feedback, which directly relates to the concept of prompt engineering. The study illustrates the impact of carefully designed prompts on the AI's performance in a specific application (medical training), which is relevant to the field of prompt engineering. However, the focus is not solely on the theoretical or systematic aspects of prompt engineering but rather its practical implementation in a medical training context, which may not cover the depth or breadth of a 'comprehensive systematic review on hard prefix prompts' as the original query suggests." -promptonomyvit: multi-task prompt learning improves video transformers using synthetic scene data,gpt-4-1106-preview,7,"The relevance of this study to prompt engineering is moderate to high because it introduces the concept of 'task prompts' within video transformers, which are specialized parameters used for enhancing performance on different video understanding tasks. 'Promptonomy' is essentially an application of prompt engineering in the context of video transformers, where prompts are designed to model task-specific structure and improve machine learning model aptitude. While the study does not explicitly cover 'hard prefix prompts' or their systematic review, it does involve the creation and utilization of prompts in a learning context, thus contributing to the broader field of prompt engineering. However, the main focus is on the usage of synthetic scene data and improving video transformers, so it is not entirely centered on the theory or methodology of prompt engineering itself." -language prompt for autonomous driving,gpt-4-1106-preview,8,"The abstract describes a study focused on the intersection of natural language prompts and autonomous driving technology, which involves prompt engineering to some extent. Although the primary application is within the domain of computer vision and autonomous driving, the creation of the object-centric language prompt set and the formulation of a new prompt-based driving task indicates a substantial involvement of prompt engineering. The study's goal to predict object trajectories based on language descriptions necessitates understanding and engineering of prompts to be suitable for machine comprehension within a driving context. This is highly relevant to prompt engineering as it deals with generating and utilizing prompts to guide AI models. However, the rating is not a perfect 10 as the core application differs from general prompt engineering studies and focuses specifically on driving scenarios." -clinical prompt learning with frozen language models.,gpt-4-1106-preview,8,"The abstract discusses the application of prompt learning in a clinical context, which is a subset of prompt engineering. It highlights the advantages of prompt learning over traditional fine-tuning, such as fewer trainable parameters, less training time, and lower computational resources, all of which are key considerations in prompt engineering. Although it does not explicitly mention 'hard prefix prompts,' the focus on prompt learning's efficiency and effectiveness is highly relevant to the overarching field of prompt engineering. The reason for not giving a full score of 10 is because the study is specific to clinical applications rather than a broad systematic review of hard prefix prompts in general." 
-fedyolo: augmenting federated learning with pretrained transformers,gpt-4-1106-preview,7,"The abstract discusses modularity in the context of using modules such as prompts for adapting large pretrained transformer models in federated learning setups. While it does not specifically focus on 'hard prefix prompts,' it does touch on the general relevance of prompts (or similar kinds of modules) for model adaptation. This relevance is given a rating of 7 because the study could provide useful insights into the applications of prompt engineering within federated learning, even though it does not directly focus on a comprehensive systematic review of hard prefix prompts." -chinese asr and ner improvement based on whisper fine-tuning,gpt-4-1106-preview,7,"The abstract indicates that the paper explores how to fine-tune Chinese ASR and NER tasks using Whisper, touching on the aspect of designing different prompts for various generative tasks, which is closely related to prompt engineering. While the main focus seems to be on improving ASR and NER performance, the inclusion of prompt design as a part of the fine-tuning process makes it relevant to the study of prompt engineering. However, the mention of prompts is not the central focus of the paper, which suggests that although prompt engineering is covered, it is not the primary subject matter, hence the rating of 7." -prores: exploring degradation-aware visual prompt for universal image restoration,gpt-4-1106-preview,8,"The abstract discusses the use of degradation-aware visual prompts within a universal image restoration model, which is a form of prompt engineering applied to visual tasks rather than language tasks. It touches on the principle of encoding information (degradation types) into prompts to guide the behavior of a model (Vision Transformer), a concept parallel to hard prefix prompts in NLP. While the paper does not deal directly with linguistic prompt engineering, the underlying ideas of customizing prompts to steer model behavior are highly relevant to the study of prompt engineering as a broader concept. Hence, a lower rating would be given if the question strictly asked for relevance to text-based prompts, but since it outlines the foundation of 'prompt engineering' which can extend beyond just language models, a higher rating is appropriate." -making humanoid robots teaching assistants by using natural language processing (nlp) cloud-based services,gpt-4-1106-preview,7,"The study involves using NLP and GPT language models, which are relevant to prompt engineering. The research is focused on fine-tuning GPT models with prompts derived from environmental context and robot actions, directly linking to the construction of prompts for language models. The rating is not a full 10 because the main application is on human-robot interaction and the deployment of these models, rather than on the systematic review of 'hard prefix prompts' or the discipline of prompt engineering itself." -prompt generation networks for input-based adaptation of frozen vision transformers,gpt-4-1106-preview,9,"The abstract describes a novel approach to adapt frozen vision transformers via visual prompt learning, which is highly relevant to prompt engineering as it deals with generating and optimizing prompts that can be input-dependent. 
Although the study focuses on the visual domain, the techniques and concepts of prompt generation, learning, and the mentioned 'prompt inversion' trick are applicable and insightful for prompt engineering for different modalities. It achieves adaptation without modifying the model and is part of the broader discussion on how to efficiently use large-scale models, a significant aspect of prompt engineering. The relevance is slightly less than perfect because the specific focus on vision transformers and input-dependent prompts may not cover the entire scope of hard prefix prompts directly, but the principles are closely related." -optimizing language models for argumentative reasoning,gpt-4-1106-preview,8,"The provided abstract details an investigation into optimizing a language model for argumentative reasoning tasks, which includes an evaluation of different optimization strategies such as prompt programming. Prompt engineering, which refers to the design and usage of prompts to guide language models, is closely related to the study's focus on prompt programming as one of the optimization strategies. Although the term 'hard prefix prompts' is not explicitly mentioned, prompt programming is a technique that often involves the use of hardcoded prompts (which could be considered 'hard prefix prompts') to direct a model's output. Therefore, this study is highly relevant to the broader field of prompt engineering; however, the relevance is slightly lower as the study does not solely concentrate on hard prefix prompts but also considers other optimization strategies." -prompt enhanced generative mrc framework for pancreatic cancer ner,gpt-4-1106-preview,7,"The paper directly engages with prompt engineering through its introduction of continuous prompts to improve the performance of a generative NER task within the context of medical document analysis. The use of prompts in the self-attention mechanism of the Transformer model is relevant to the study of how prompts can be optimized to facilitate better understanding and generation of responses by the model. While the focus is not exclusively on 'hard prefix prompts' and it is more application specific (medical NER), it does contribute to the broader understanding of prompt engineering in NER tasks." -bootstrapping vision-language learning with decoupled language pre-training,gpt-4-1106-preview,8,"The paper describes the use of a model (P-Former) to predict ideal prompts within the context of vision-language learning, by focusing on language component optimization. This relates closely to prompt engineering, as the research aims to determine how best to elicit desired responses from language models, which is a fundamental aspect of prompt engineering. The methodology of prompt prediction is directly relevant to the art of crafting effective prompts. However, the specific application to vision-language learning might be slightly tangential to more general prompt engineering studies that might not focus on multimodal contexts. Despite that, the principles discussed could nonetheless provide valuable insights into prompt engineering for LLMs in general." -prompt-based ingredient-oriented all-in-one image restoration,gpt-4-1106-preview,7,"The abstract describes a novel technique for image restoration that uses 'prompt-based learning' as part of its methodology. This indicates some relevance to prompt engineering as it pertains to the use of prompts to guide the decoder in image processing tasks. 
However, the term 'prompt-based learning' in this context is more related to the domain of image restoration rather than to the development and study of textual or linguistic prompts in AI and machine learning. Even though the technique involves 'prompts' in some form, it may not specifically address the systematic review of 'hard prefix prompts' as one might expect in the study of AI or natural language processing. Therefore, the relevance is moderate since it's within the area of prompts as a concept but not directly focused on the linguistic aspect of prompt engineering." -hierarchical prompt tuning for few-shot multi-task learning,gpt-4-1106-preview,9,"The paper is highly relevant to prompt engineering as it discusses a novel approach to prompt tuning, which is a key aspect of prompt engineering. The hierarchical prompt tuning model addresses the need for effective prompts in multi-task learning, especially in few-shot scenarios. The introduction of shared prompts, auto-adaptive prompts, and task-specific prompts directly pertains to the methodology of engineering prompts to enhance performance. Although the study is not specifically about 'hard prefix prompts', the relevance to prompt engineering is strong because the paper contributes to the broader understanding of how to construct and implement prompts in complex, multi-layer neural networks such as Transformers." -pm-detr: domain adaptive prompt memory for object detection with transformers,gpt-4-1106-preview,8,"The document describes the use of prompts (though in a different context from language models) to improve the domain adaptability of object detection models. It focuses on prompt-based strategies to bridge the gap between different data distributions. The concept of 'prompt memory' is relevant to prompt engineering, as it involves using prompts to encode domain-specific knowledge which can then influence the behavior of a model. However, the application of prompts here differs from their use in language models, where the term 'prompt engineering' is often used to describe the process of crafting inputs that elicit desired outputs. In this context, prompts are aiding domain adaptation of object detection systems rather than natural language processing tasks. Nonetheless, the use of prompts as a technique to improve machine learning models is relevant to the broader field of prompt engineering study." -visual prompt flexible-modal face anti-spoofing,gpt-4-1106-preview,7,"The abstract discusses the development of a visual prompt-based approach for improving the robustness of face anti-spoofing systems, which is indirectly related to prompt engineering. Although prompt engineering is primarily associated with natural language processing and the use of textual prompts in language models, the abstract suggests an adaptation of prompt learning principles to the domain of computer vision and multimodal learning. The concept of 'visual prompts' and their application in a flexible-modal face anti-spoofing task is relevant to the study of how prompts can be engineered and utilized in AI models, extending beyond textual inputs to visual and multimodal contexts. The relevance is not a direct match to 'hard prefix prompts,' indicating that the context of prompts is being extended to a different domain, thus the rating does not reach the maximum." 
-harnessing gpt-3.5-turbo for rhetorical role prediction in legal cases,gpt-4-1106-preview,9,"The provided abstract discusses the implementation of prompting strategies in GPT-3.5-turbo for a specialized task within the legal domain. The focus on one-stage elicitation techniques, the influence of different prompting strategies such as zero-shot learning, task specification, and the exploration of hard prefix prompts (detailed in the mention of the textual context, number of examples, and label definitions) are highly relevant to prompt engineering. Although it doesn't exclusively concentrate on 'hard prefix prompts,' the exploration and systematic review of prompting strategies contributing to performance improvement are central to prompt engineering. The slight deduction in the rating acknowledges that the study is about prompt engineering as a whole rather than solely on 'hard prefix prompts.'" -on the relationship between skill neurons and robustness in prompt tuning,gpt-4-1106-preview,9,"The paper discusses Prompt Tuning, which is highly relevant for prompt engineering as it studies how prompt tuning affects the robustness and transferability of pre-trained language models for specific tasks. Although it does not directly address 'hard prefix prompts', the concept of 'skill neurons' and their role in prompt tuning is crucial for understanding and engineering effective prompts. It hints at an underlying mechanism that could influence the construction and refinement of prompts, potentially making this area of study valuable for those engaged in prompt engineering." -efficient domain adaptation of language models in asr systems using prompt-tuning,gpt-4-1106-preview,8,"The abstract presents research on using prompt-tuning, a form of prompt engineering, for domain adaptation in ASR systems. Although the focus is on ASR systems and not specifically on 'hard prefix prompts', prompt-tuning is related to prompt engineering studies. The research seems to involve adapting language models to specific domains using prompts, which is a core aspect of prompt engineering. The methodology could be highly relevant to those interested in tailoring LMs for specific applications without the costs associated with maintaining multiple domain-specific models. However, it falls short of a perfect score because it does not address hard prefix prompts specifically, but rather the broader application of prompt-tuning for domain adaptation." -all birds with one stone: multi-task learning for inference with one forward pass,gpt-4-1106-preview,8,"The focus on utilizing a prompt-sharing module to enable a model to handle multiple tasks with a single forward pass is highly relevant to prompt engineering, as it directly pertains to the design and efficiency of prompts in multi-task learning. Although the abstract does not specifically mention 'hard prefix prompts,' the concept of prompt design for task efficiency and model performance improvement is central to the topic of prompt engineering. Therefore, the relevance rating is relatively high, with a couple points deducted for not mentioning the specific aspect of 'hard prefix prompts.'" -medical intervention duration estimation using language-enhanced transformer encoder with medical prompts,gpt-4-1106-preview,7,"The study describes a framework that integrates medical prompts within a transformer encoder to improve the estimation of medical intervention durations. 
While this approach does utilize 'prompts' in the form of medical queries to improve the model's understanding of free-text EHR data, these prompts do not appear to be 'hard prefix prompts' in the context of prompting techniques typically discussed in natural language processing (NLP). The focus of the study is not on exploring the design or effectiveness of various prompts but rather on the application of medical prompts to harmonize different data modalities for medical predictions. Therefore, while prompts are relevant to the system being developed, the study does not seem to primarily address 'prompt engineering' as it would pertain to the generation or optimization of prompts themselves. This results in a moderate rating of relevance." -ctrl: a conditional transformer language model for controllable generation,gpt-4-1106-preview,8,"The referenced paper describes a language model (CTRL) designed to incorporate control codes that can direct the generation of text according to specified attributes, which is highly relevant to the field of prompt engineering. Although the paper does not directly discuss 'hard prefix prompts,' it is nonetheless pertinent because control codes essentially function as a form of prompts to guide the model output. The ability to use these codes aligns with the broader goal of prompt engineering, which is to control and guide the behavior of language models. Therefore, the paper is quite relevant to the study of prompting methods in AI, even if it doesn't address 'hard prefix prompts' specifically." -exploring visual prompts for adapting large-scale models,gpt-4-1106-preview,8,"The abstract indicates the study focuses on 'visual prompting' to adapt large-scale models in vision, which is a form of prompt engineering. While 'hard prefix prompts' are not directly mentioned, the concept of adapting models by using prompts (here, visual) is central to the discussed approach, thus making it relevant to the field of prompt engineering. The study’s relevance could be even higher if it specifically related to textual prompts and hard prefixes, but its focus on a related concept in the visual domain still provides valuable insights that could be transferable to other forms of prompt engineering." -domain prompts: towards memory and compute efficient domain adaptation of asr systems,gpt-4-1106-preview,8,"The abstract is highly relevant to prompt engineering as it discusses domain-prompts, which is a form of prompt engineering for adapting transformer-based language models to specific domains with minimal additional parameters. While it focuses specifically on ASR systems, the concept of domain adaptation through prompts is applicable to wider studies of prompt engineering. The rating is not a full 10 because the paper does not address 'hard prefix prompts' specifically, but rather uses the concept of domain-specific prompts generally." -planning with learned entity prompts for abstractive summarization,gpt-4-1106-preview,8,"The study discusses the use of entity chains as prompts to improve the quality of abstractive summarization, which is a form of prompt engineering. The research directly involves engineering prompts (entity chains) to guide a model's generation process, making it highly relevant to the subject of prompt engineering. However, it is not solely focused on 'hard prefix prompts', as it encompasses a broader scope of learned entity prompts for content planning in summarization tasks." 
-a survey of controllable text generation using transformer-based pre-trained language models,gpt-4-1106-preview,7,"The provided abstract discusses the controllability of text generation using Transformer-based pre-trained language models, which is relevant to the field of prompt engineering since it deals with methods to direct language models in generating text that fulfills certain constraints. However, the abstract does not specifically mention 'hard prefix prompts' or delve into the topic of prompt engineering within controllable text generation. Therefore, while the survey has relevance due to its focus on control mechanisms, which could encompass prompt engineering techniques, it is not a perfect match for a study specifically on 'hard prefix prompts.' The rating reflects the general relevance but indicates that the document is not exclusively focused on the narrower subject of prompt engineering, especially centered around hard prefix prompts." -promptcal: contrastive affinity learning via auxiliary prompts for generalized novel category discovery,gpt-4-1106-preview,8,"The abstract discusses advancements in semi-supervised learning through the use of auxiliary visual prompts and contrastive learning methods. Though not explicitly centered on 'hard prefix prompts,' the research explores the usage of prompts (in the form of visual cues) to improve semantic clustering and discover novel classes. This is closely related to 'prompt engineering,' as it deals with the optimization of prompts to enhance model performance. Therefore, it is quite relevant to the field of prompt engineering, though it may not directly address the systematic review aspect of hard prefix prompts mentioned in the initial study description." -text style transfer between classical and modern chinese through prompt-based reinforcement learning,gpt-4-1106-preview,8,"The text discusses the use of an unsupervised prompt-based reinforcement learning (PBRL) framework for style transfer in text, which is highly relevant to prompt engineering as it involves the use of prompts to guide the learning process. While the application is specific to style transfer between classical and modern Chinese, the underlying technique is applicable to prompt engineering broadly. It does not directly study 'hard prefix prompts' as the original study query suggests, but it does contribute to the overall field of prompt engineering." -matchprompt: prompt-based open relation extraction with semantic consistency guided clustering,gpt-4-1106-preview,8,"The text describes a novel approach to open relation extraction using a prompt-based framework, which directly relates to the field of prompt engineering as it entails designing and utilizing prompts to train models with a small amount of pre-defined relational instances. This suggests innovation in the area of using prompts for machine learning tasks, which is relevant to the prompt engineering study. It is not a 'comprehensive systematic review on hard prefix prompts', but it is a practical application of prompt engineering principles, thus the relevance rating is 8 rather than 10." -dynamic visual prompt tuning for parameter efficient transfer learning,gpt-4-1106-preview,8,"The paper describes a method of parameter efficient transfer learning through the creation of dynamic, instance-wise tokens or 'prompts' for each image in visual tasks. While it is not directly related to 'hard prefix prompts', it discusses 'prompt tuning', which falls under the broader category of prompt engineering. 
The proposed method aims to adapt pre-trained models to new tasks more efficiently, which is relevant to the study of how prompts can be engineered to improve model performance. The high relevance score is given because the paper's core focus on dynamic visual prompts is closely aligned with the principles of prompt design and optimization, which are essential concepts in prompt engineering studies."
-efficiently aligned cross-lingual transfer learning for conversational tasks using prompt-tuning,gpt-4-1106-preview,9,"The abstract discusses the use of a 'prompt-tuning-based method for learning alignment prompts', which is directly related to prompt engineering. Specifically, it addresses the development of prompts that facilitate cross-lingual transfer learning, a key component of prompt engineering in the context of creating efficient language models for conversational tasks. The systematic review might explore various prompt techniques, including this efficient prompt-tuning method, making it highly relevant to the study. The reason it's not a perfect 10 is that the focus is also on the creation of a multilingual dataset and cross-lingual transfer learning, which, while related, are broader topics than prompt engineering alone."
-adpl: adversarial prompt-based domain adaptation for dialogue summarization with knowledge disentanglement,gpt-4-1106-preview,9,"The paper presents an Adversarial Disentangled Prompt Learning (ADPL) model, which is relevant to the study of prompt engineering as it involves the creation and utilization of prompts (domain-invariant, domain-specific, and task-oriented) to improve domain adaptation in dialogue summarization. The focus on prompt-based methods for zero-shot learning in this context is highly pertinent to understanding how prompts can be engineered to enhance the performance of language models on specific tasks. Despite not focusing exclusively on 'hard prefix prompts', which the original query asks about, its contribution to prompt engineering methods warrants a high relevance score."
-clinical concept and relation extraction using prompt-based machine reading comprehension,gpt-4-1106-preview,7,"The described study makes significant use of a prompt-based machine reading comprehension (MRC) architecture in the context of natural language processing for clinical data, which is directly related to the use of prompts in AI systems. Prompt engineering is central to designing the MRC architecture that can comprehend and extract relevant information from clinical texts. The fact that different prompting strategies were examined for their effects on MRC model performance bolsters its relevance to prompt engineering. However, the focus on clinical concept and relation extraction may mean that the specific prompt engineering details relevant to other domains or applications of prompt engineering are not explored in the abstract provided. Thus, the content is relevant due to its reliance on prompts and their optimization in an MRC system, but it is not exclusively focused on the concept of 'hard prefix prompts' as might be expected in a systematic review specifically dedicated to that subject."
-is prompt-based finetuning always better than vanilla finetuning? insights from cross-lingual language understanding,gpt-4-1106-preview,8,"The abstract provided discusses the comparison of prompt-based fine-tuning versus vanilla fine-tuning in the context of cross-lingual language understanding tasks.
This is highly relevant to the field of prompt engineering, as it studies the effectiveness of prompt-based approaches in model training. It may not be a perfect match to 'hard prefix prompts' specifically, but the exploration of prompt-based fine-tuning methods, such as the proposed ProFiT pipeline, contributes to the broader understanding of prompt efficacy in different scenarios, including multilingual tasks, and hence holds substantial relevance to studies in prompt engineering." -knowledge transfer with visual prompt in multi-modal dialogue understanding and generation,gpt-4-1106-preview,7,"The study described involves the use of prompts in the context of multi-modal data fusion and dialogue generation, which is relevant to prompt engineering in terms of developing methods to maximize the efficacy of prompts. However, the term 'hard prefix prompts' is not mentioned, suggesting that while the study is within the domain of prompting (visual prompts in this case), it may not directly address the particular area of 'hard prefix prompts'. Therefore, the relevance is notable but not complete, hence a rating of 7." -motif-based prompt learning for universal cross-domain recommendation,gpt-4-1106-preview,7,"The abstract describes a motif-based prompt learning framework aimed at enhancing cross-domain recommendation systems. Although the study focuses primarily on recommendations, the use of 'motif-based prompt learning' relates closely to prompt engineering, especially in the context of adapting machine learning models to respond to different kinds of data inputs or prompts. Prompt engineering is about designing prompts that help models perform better on specific tasks. The paper's mention of 'adaptable prompt parameters' and the integration of these into pre-training and fine-tuning paradigms indicates that it deals with adjusting how models interact with prompts. However, it does not strictly focus on 'hard prefix prompts' as the study prompt requests, thus the relevance rating is not a full 10." -pro-cs : an instance-based prompt composition technique for code-switched tasks,gpt-4-1106-preview,8,"The abstract discusses a prompt composition technique for code-switched tasks, which is highly relevant to prompt engineering, as it directly pertains to designing prompts that effectively interact with language models on code-switched data. The fact that it compares its approach to both prompt-tuning and fine-tuning indicates an in-depth analysis of prompts in the context of significant efficiency in parameter use. The relevance is not rated a full 10 because the abstract does not explicitly mention 'hard prefix prompts,' which could be a more specific aspect of prompt engineering, but the overall content is very relevant to the broader field of prompt engineering study." -"continually detection, rapidly react: unseen rumors detection based on continual prompt-tuning",gpt-4-1106-preview,8,"The paper is highly relevant to the field of prompt engineering due to its focus on 'Continual Prompt-Tuning RD (CPT-RD) framework' which relates directly to the engineering and optimization of prompts in the context of rumor detection. The study addresses challenges such as catastrophic forgetting and knowledge transfer in prompt-tuning, which are central to improving the utility of prompts in continual learning scenarios. 
The deduction of two points is due to the prompt not directly addressing 'hard prefix prompts' specifically, but the broader context of prompt-tuning is still substantially relevant to the study of prompt engineering." -"continuous detection, rapidly react: unseen rumors detection based on continual prompt-tuning",gpt-4-1106-preview,9,"This paper is highly relevant to prompt engineering as it presents a framework for 'Continual Prompt-Tuning' specifically designed to tackle rumor detection. It directly deals with the optimization and storage of task-specific soft-prompts, which are central to the concept of prompt engineering within the context of language models. It also introduces strategies for knowledge transfer and a hypernetwork approach, both of which could influence future work in prompt engineering for continual learning scenarios. The only reason it is not a 10 is that it is specific to the context of rumor detection and the systematic review aspect might not be covered comprehensively." -soft prompt guided joint learning for cross-domain sentiment analysis,gpt-4-1106-preview,8,"The abstract discusses a 'soft prompt-based joint learning method' which is highly relevant to the topic of prompt engineering, particularly in the context of transfer learning and aspect term extraction. It explores how learnable vectors, as soft prompts, can be used to bridge domain differences and enhance model performance. While not focused exclusively on hard prefix prompts, the concept of soft prompts is intrinsically linked to prompt engineering, thus the study can contribute valuable insights to the broader field of prompt engineering research." -adaptive prompt learning with distilled connective knowledge for implicit discourse relation recognition,gpt-4-1106-preview,9,"The abstract describes a novel approach in the area of prompt engineering, focusing on the development of an advanced prompt learning framework called AdaptPrompt, which uses continuous prompts and connective knowledge distillation. This is highly relevant to the field of prompt engineering because it addresses a common challenge in the manual design of prompts and offers a solution that could be broadly applicable to other prompt engineering tasks. Although the study is specifically applied to implicit discourse relation recognition, the methods and findings are likely to have implications for prompt engineering in general, making it a valuable study within this domain. The only reason the rating is not a perfect 10 is that it focuses on a specific usage of prompt engineering within the context of discourse relation recognition, which may not cover all aspects of prompt engineering studies, such as hard prefix prompts explicitly." -prompt learning with knowledge memorizing prototypes for generalized few-shot intent detection,gpt-4-1106-preview,7,"The abstract mentions the use of 'prompt learning' as a technique within a two-stage learning framework for the purpose of Few-Shot Intent Detection. Prompt learning is relevant to prompt engineering as it involves designing and utilizing prompts to teach models specific tasks. However, the focus on 'knowledge memorizing prototypes' and issues specifically connected with intent detection makes it less directly relevant to the broader field of prompt engineering study. The use of prompts is a significant aspect of the research, but the particulars seem more narrowly focused on a specific application (intent detection) rather than on hard prefix prompts in general." 
-visual-attribute prompt learning for progressive mild cognitive impairment prediction,gpt-4-1106-preview,7,"The title suggests the study involves a machine learning model using prompts to predict progressive mild cognitive impairment (pMCI), indicating that prompt engineering is a fundamental part of the research. Specifically, the mention of a 'prompt learning model' and 'global prompt token' implies an exploration into how prompts interact with the model to improve performance. This is relevant to prompt engineering as it relates to designing and utilizing prompts to guide machine learning models effectively. However, it does not explicitly mention 'hard prefix prompts' and seems to focus on a specific application rather than a broad systematic review, so it may not be entirely comprehensive in the context of prompt engineering studies." -can unsupervised knowledge transfer from social discussions help argument mining?,gpt-4-1106-preview,7,"The abstract describes a study focused on argument mining that utilizes a novel prompt-based strategy for inter-component relation prediction, which is relevant to the concept of prompt engineering. The use of finetuned language models in conjunction with prompt-based techniques to leverage discourse context indicates a level of innovation and practical application in the realm of prompt engineering, warranting a rating of 7. The relevance is not at the maximum because the study is not exclusively concentrated on hard prefix prompts or comprehensive systematic review, but it does provide insights into the domain of prompt engineering within the context of argument mining." -hetgpt: harnessing the power of prompt tuning in pre-trained heterogeneous graph neural networks,gpt-4-1106-preview,7,"While the title and abstract describe a study related to prompt engineering, the context differs from what's typically associated with 'hard prefix prompts' in prompt engineering study, which is usually referenced in the field of Natural Language Processing (NLP). Here, the concept of 'prompting' is being applied to the domain of heterogeneous graph neural networks (HGNNs) and their pre-training routines. Although it does deal with prompts in an abstract sense, and may be relevant to the broader discussion on the utility of prompt-like methods in AI model training, it is not specifically about prompt engineering in the context of language models or text-based neural networks. Therefore, it is tangentially relevant, hence the rating of 7." -approximated prompt tuning for vision-language pre-trained models,gpt-4-1106-preview,8,"The abstract provided discusses prompt tuning, which is a technique relevant to prompt engineering studies. The focus on approximating the impact of soft prompt tokens and proposing a method for reducing computational complexity directly impacts the efficiency of prompt engineering for vision-language pre-trained (VLP) models. The fact that it explores a novel Approximated Prompt Tuning (APT) approach and demonstrates the performance and efficiency improvements through experiments makes it quite relevant to the field. However, it does not specifically mention 'hard prefix prompts,' which was the focus of the initial request. Therefore, the rating is not a perfect 10." 
-p3o: transferring visual representations for reinforcement learning via prompting,gpt-4-1106-preview,7,"The study focuses on the transfer of learned policies in deep reinforcement learning using a process called 'prompting', which aligns with the concept of 'prompt engineering'. While the prompting here is specific to visual representation and policy optimization in DRL, it shows an application of prompts to modify behavior of a model without full retraining. This is relevant to prompt engineering as it demonstrates how prompts can be employed to adapt models to new situations. However, the study does not discuss 'hard prefix prompts' or explore the general space of natural language processing, which are commonly associated with prompt engineering, hence the relevance is not maximum." -icpc: instance-conditioned prompting with contrastive learning for semantic segmentation,gpt-4-1106-preview,8,"The paper is high in relevance to prompt engineering for a couple of reasons. Firstly, it deals directly with designing prompts for semantic segmentation, which is part of the broader spectrum of prompt engineering studies. The study focuses on dynamic prompting as opposed to static prompts, which is a notable aspect of prompt design. Secondly, the paper proposes an align-guided contrastive loss to refine the vision and text embeddings' alignment, which is an advanced technique in prompt tuning for multimodal models. The only reason it does not score a perfect 10 is that it is applied to semantic segmentation specifically, rather than prompt engineering in general. Nevertheless, the methods developed could potentially influence or be part of prompt engineering techniques in a broader context." -gradient-based automated iterative recovery for parameter-efficient tuning,gpt-4-1106-preview,8,"The paper discusses the use of gradient-based explainability methods like TracIn for improving model performance specifically mentioning 'prompt-tuning' which is a form of prompt engineering. It shows the process of recovering performance in the context of parameter-efficient tuning (PET), a concept closely related to optimizing prompts for language models. While the paper does not focus exclusively on prompt engineering, the application of TracIn in the PET context suggests significant relevance to the study of how prompts can be engineered and debugged effectively." -extracting latent steering vectors from pretrained language models,gpt-4-1106-preview,8,"The work discussed in the abstract is highly relevant to prompt engineering since it deals with controlling language models to produce desired outputs, which is a core aspect of prompt engineering. The idea of extracting latent steering vectors aligns with engineering prompts to manipulate model behavior. However, it's not centered on hard prefix prompts specifically but rather on a broader control mechanism within the language model, thus not warranting a full 10 rating." -rethinking efficient tuning methods from a unified perspective,gpt-4-1106-preview,7,"The abstract discusses Parameter-efficient transfer learning (PETL) where tuning methods such as prompt, prefix, and adapter are briefly mentioned. Although the focus is on the development of a unified framework called U-Tuning, it is relevant to prompt engineering study as it involves task-specific lightweight adjustments and potentially new approaches for parameter-efficient transfer learning which could include improvements in prompt engineering techniques. 
However, the abstract does not solely concentrate on 'hard prefix prompts' but rather a broader range of PETL methods, hence the 7 out of 10 rating for relevance." -retrieval-augmented generative question answering for event argument extraction,gpt-4-1106-preview,7,"The relevance of the study to prompt engineering is significant as it discusses the augmentation of prompts with retrieved QA pairs to improve event argument extraction. Such a retrieval-augmented approach is directly related to prompt engineering because it involves the strategic manipulation of prompts to enhance model performance. While the primary focus of the study appears to be on augmenting prompts for a specific task of argument extraction, the underlying principles and methods could be widely applicable to other areas of prompt engineering. Therefore, the study could contribute valuable insights into the prompt engineering domain, even though it may not address hard prefix prompts specifically." -integrated parameter-efficient tuning for general-purpose audio models,gpt-4-1106-preview,7,"The abstract of the study discusses the use of a 'prompt-based learning approach' as part of the proposed Integrated Parameter-Efficient Tuning (IPET) framework, indicating that prompt engineering is relevant to the framework's methodology. The embedding prompt as one of its components suggests that the study investigates a form of prompt engineering within the context of audio model adaptation. Although the study is specific to the audio domain and does not directly address the broader concept of hard prefix prompts in general, the inclusion of a prompt-based learning approach within the IPET framework and its application to pre-trained models is indeed relevant to prompt engineering techniques. Therefore, the study would likely be of interest to those researching prompt engineering in specific applications, albeit with a specific focus on audio tasks rather than a comprehensive systematic review on hard prefix prompts." -virtual node tuning for few-shot node classification,gpt-4-1106-preview,7,"The abstract discusses 'Virtual Node Tuning (VNT),' which involves injecting virtual nodes as 'soft prompts' in the embedding space that can be optimized for few-shot node classification tasks. While this does not directly address 'hard prefix prompts,' it does pertain to the usage of prompts (in this case, soft ones) in the context of machine learning. The technique is a form of prompt engineering but applied within a graph representation learning task rather than natural language processing. This alternative application of prompts in a learning framework is relevant to the broader field of prompt engineering as it provides insight into how prompts can be used to improve performance in tasks with limited labeled data. However, its relevance is somewhat indirect since it does not address hard prefix prompts explicitly or delve into systematic reviews of prompt engineering, thus the rating of 7." -alexander knox at semeval-2023 task 5: the comparison of prompting and standard fine-tuning techniques for selecting the type of spoiler needed to neutralize a clickbait,gpt-4-1106-preview,8,"The study directly compares prompt engineering with standard fine-tuning techniques, which is highly relevant to prompt engineering research. 
Its focus on the application of prompt engineering for a specific NLP problem—clickbait neutralization—demonstrates the practical implications of prompt-based approaches and allows for insights into their effectiveness when contrasted with traditional fine-tuning. While the study is not exclusively about prompt engineering and also encompasses fine-tuning methods, its comparative analysis of the two techniques makes it significant for researchers interested in the area of prompt engineering." -pcbert: parent and child bert for chinese few-shot ner,gpt-4-1106-preview,8,"The abstract talks about 'prompt-tuning', which is a method within prompt engineering, that is being applied for Chinese few-shot Named Entity Recognition (NER). While the specific term 'hard prefix prompts' is not mentioned, the concept of prompt-based techniques, which are at the heart of prompt engineering, is central to the study described in the paper. This suggests that the paper's focus on using prompt-based methods for improving model performance in low-resource settings makes it highly relevant to the field of prompt engineering." -auto-prompting sam for mobile friendly 3d medical image segmentation,gpt-4-1106-preview,8,"The abstract discusses the development of an 'AutoSAM Adapter' that automatically generates prompts for 3D medical image segmentation, which is a specific application of prompt engineering. While it does not generalize to all forms of prompt engineering, this study focuses on automatic prompt generation to improve the performance of a segmentation model. Therefore, it is highly relevant to the study of prompt engineering, particularly in the field of medical image analysis using machine learning models. The deduction of two points is due to the specialized application rather than a broad, comprehensive review of techniques across different domains." -transferring pre-trained multimodal representations with cross-modal similarity matching,gpt-4-1106-preview,7,"The abstract and the TLDR mention designing context-based prompt augmentation (CPA), which indicates a direct relevance to prompt engineering as it pertains to refining the text prompts for improved performance in multimodal models. Although the main focus is on representation transfer and not on prompt engineering per se, the use of prompts to achieve cross-modal similarity matching shows that prompts are a noteworthy aspect of the proposed method's overall framework and application, thus suggesting moderate relevance to prompt engineering studies." -srcb at the ntcir-16 real-mednlp task,gpt-4-1106-preview,8,"The abstract indicates the use of prompt learning as part of the approach for tackling Named Entity Recognition and Adverse Drug Event detection tasks, which are directly related to natural language processing challenges in computational linguistics. The involvement in prompt learning suggests that the paper includes discussion or experimentation with the implementation or optimization of prompts, which is relevant to the study of prompt engineering. However, the abstract does not provide details specifically about 'hard prefix prompts' which might be one of the variations or specific interest within prompt engineering. Therefore, the relevance is high but not complete with respect to the specified topic of 'hard prefix prompts'." 
-controllable generation of dialogue acts for dialogue systems via few-shot response generation and ranking,gpt-4-1106-preview,9,"The article presents a novel approach for controllable generation of dialogue acts (DAs) in dialogue systems through a few-shot learning and ranking method, which is highly relevant to prompt engineering. The use of few-shot prompts and the creation of methods for ranking generated responses based on their semantic accuracy and adherence to specific DAs are directly related to improving and refining the efficacy of prompts in generation tasks. The research aims to control the output of language models using prompt-based learning, a core aspect of prompt engineering." -adapting pre-trained language models to vision-language tasks via dynamic visual prompting,gpt-4-1106-preview,8,"The abstract discusses 'Dynamic Visual Prompting (DVP)', which is a novel approach to adapt pre-trained language models to vision-language tasks. While the focus is on bridging the gap between single- and multi-modal learning, the relevance to prompt engineering study lies in the exploration and implementation of prompts as a transfer learning approach. DVP as a means to reduce redundancy and optimize the placement of prompt tokens in the context of visual features directly pertains to prompt engineering, particularly in the way it demonstrates prompt effectiveness and modification techniques. Although the study is not exclusively about 'hard prefix prompts', it contributes to the broader field of prompt engineering by showing how prompts can be dynamically integrated with pre-trained models for enhanced performance in multi-modal tasks. The rating is given an 8 instead of a 10 because the study's primary focus is not on the comprehensive systematic review of hard prefix prompts, but rather on a particular application of prompts in vision-language tasks." -eco: ensembling context optimization for vision-language models,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering, as it discusses improving image classification in vision-language models by engineering or learning textual prompts to optimize performance. The ensemble of prompts strategy directly ties to the manipulation and optimization of prompts, which is the essence of prompt engineering. Although the prompt engineering in question is utilized for vision-language scenarios rather than the 'hard prefix prompts' mentioned, the principles and goals appear to be closely aligned. Hence, the paper is not entirely focused on 'hard prefix prompts' but is still within the broader domain of prompt engineering." -generalizing few-shot named entity recognizers to unseen domains with type-related features,gpt-4-1106-preview,8,"The paper presents a framework (PLTR) that involves a form of prompt engineering by generating unique prompts for unseen examples using type-related features. This is highly relevant to prompt engineering as it directly involves the creation and optimization of prompts for improving the model's performance on few-shot named entity recognition tasks. The reason the rating is not a full 10 is that the study focuses specifically on the NER task and the use of type-related features, which may not cover the broader concept of hard prefix prompts in the context of prompt engineering more generally." 
-enhancing cross-lingual natural language inference by prompt-learning from cross-lingual templates,gpt-4-1106-preview,8,"The abstract is highly relevant to prompt engineering as it discusses a prompt-learning based framework to enhance cross-lingual natural language inference (XNLI), which is a direct application of prompt engineering techniques. The use of cloze-style questions constructed from cross-lingual templates is an example of hard prefix prompts, which fits within the broader category of prompt engineering studies. The significance of the research is supported by experimental results on benchmark datasets, although it focuses specifically on the XNLI task rather than prompt engineering in general, which prevents it from receiving a full 10." -nlpbench: evaluating large language models on solving nlp problems,gpt-4-1106-preview,8,"The abstract and TLDR describe a study focused on evaluating the performance of large language models on NLP problems using a new benchmarking dataset. Prompting strategies like chain-of-thought (CoT) and tree-of-thought (ToT) are an integral part of this performance evaluation. These strategies are directly related to prompt engineering as they involve devising ways to present problems to LLMs in a manner that leverages their strengths. Although the abstract does not specifically mention 'hard prefix prompts,' the discussion of prompting strategies is closely related to the field of prompt engineering and the study appears to contribute to our understanding of how LLMs can be more effectively prompted. The rating is not a full 10 because the provided abstract doesn't focus exclusively on prompt engineering but rather on a wider scope of NLP problem-solving capabilities." -retuyt-inco at bea 2023 shared task: tuning open-source llms for generating teacher responses,gpt-4-1106-preview,8,"This paper is highly relevant to prompt engineering as it discusses the fine-tuning of Open-Source Large Language Models (LLMs) for a specific application, which is the generation of teacher responses in educational dialogues. The exploration of different prompting strategies, such as Few-Shot and Chain-of-Thought, directly pertains to the field of prompt engineering. While the paper does not focus solely on 'hard prefix prompts,' which the original question inquires about, it examines relevant techniques that would influence the design and implementation of effective prompts for LLMs. The deduction of two points accounts for the absence of a direct focus on 'hard prefix prompts,' but overall, the study presents material that would be of significant interest to anyone researching prompting methods." -aligning large language models for clinical tasks,gpt-4-1106-preview,7,"The abstract discusses the alignment of Large Language Models (LLMs) for clinical tasks, focusing on strategies such as 'expand-guess-refine' for question-answering applications. Although it does not directly mention 'hard prefix prompts' or conduct a comprehensive systematic review on them, the alignment strategy includes in-prompt strategies like few-shot and chain-of-thought prompting which are related to prompt engineering. Therefore, while it is not wholly focused on prompt engineering, it is still relevant due to the discussion of prompt-based techniques for improving LLM performance in a specific domain." 
-naisteacher: a prompt and rerank approach to generating teacher utterances in educational dialogues,gpt-4-1106-preview,9,"The paper is highly relevant to prompt engineering as it specifically deals with the generation of teacher responses using a prompt-based approach with GPT-3.5-turbo and involves reranking, which is an advanced form of prompt engineering. The only reason it does not receive a full score is that it may not directly address 'hard prefix prompts,' assuming 'hard prefix prompts' refers to a specific sub-category or method within prompt engineering." -visual prompting via image inpainting,gpt-4-1106-preview,8,"The abstract presents a study relevant to prompt engineering in the context of visual models rather than textual ones. It discusses a method analogous to prompting in NLP but applied to image processing tasks using image inpainting. Even though it doesn't involve 'hard prefix prompts' directly and focuses on the visual domain, the concept of adapting pre-trained models to new tasks with example-based prompts is closely related to the principles of prompt engineering. Therefore, the relevance is high, but not absolute, as this study does not directly discuss textual prompt engineering or hard prefix prompts specifically." -can adaptive pedagogical agents' prompting strategies improve students' learning and self-regulation?,gpt-4-1106-preview,7,"The study addresses prompting strategies in the context of adaptive pedagogical agents, which can be considered a form of prompt engineering as it relates to optimizing the prompts for better learning and self-regulation outcomes. Although it does not directly address 'hard prefix prompts' in a systematic review manner, the concept of a 'fading prompting strategy' is related to how prompts are engineered for effectiveness over time, which could be relevant in the broader scope of prompt engineering study." -low-resource ner by data augmentation with prompting,gpt-4-1106-preview,8,"The mentioned paper is highly relevant to prompt engineering study, especially considering its use of prompting strategies to elicit knowledge from a language model (BERT) for named entity recognition (NER) in a low-resource setting. The relevance score is not a perfect 10 because the focus is on data augmentation for NER and not solely on hard prefix prompts, which are a subset of prompt engineering techniques. Furthermore, the emphasis on label-conditioned word replacement and generation of new training data via QA prompting demonstrates a practical application of prompt engineering within a specific NLP task, underscoring its importance and relevance to the field." -this joke is [mask]: recognizing humor and offense with prompting,gpt-4-1106-preview,8,"The study described in the title and abstract focuses on the effectiveness of prompting, which is a technique used in NLP and directly relevant to prompt engineering. The investigation of humor recognition through prompts falls within the scope of prompt engineering studies, as it explores how prompts can be designed and utilized to achieve a specific task (humor recognition in this case). The fact that the paper compares prompting to fine-tuning and looks at low-resource scenarios also adds to its relevance. However, the specificity to humor and offense slightly limits the rating as prompt engineering can encompass a broader range of tasks beyond these topics." 
-demonstrate-search-predict: composing retrieval and language models for knowledge-intensive nlp,gpt-4-1106-preview,7,"The abstract provided discusses an advanced technique in the domain of natural language processing that could clearly relate to prompt engineering. The Demonstrate-Search-Predict (DSP) framework integrates language models (LM) and retrieval models (RM) in a complex pipeline to improve performance on knowledge-intensive tasks. While this does not directly reference 'hard prefix prompts', it aligns with the broader field of prompt engineering due to its focus on improving the interaction between models for better information retrieval and processing. Prompt engineering is crucial in designing the inputs to such systems to ensure the most relevant and accurate outputs. However, without explicit mention of 'hard prefix prompts', the relevance is not a perfect fit; hence, a rating of 7 is assigned to indicate its substantial relevance but not a direct match to the specific topic of prompt engineering study." -error analysis prompting enables human-like translation evaluation in large language models: a case study on chatgpt,gpt-4-1106-preview,9,"The study specifically focuses on the development and refinement of a prompting method, namely Error Analysis Prompting (EAPrompt), which is a direct application of prompt engineering. The use of prompts in this context is to enhance the capability of generative LLMs, such as ChatGPT, to evaluate machine translation quality more effectively. This falls within the domain of prompt engineering, as it involves designing prompts to elicit desired behaviors from a language model. However, it does not directly address 'hard prefix prompts' as mentioned in the initial request, but it is highly relevant to the overall field of prompt engineering." -explicit visual prompting for low-level structure segmentations,gpt-4-1106-preview,8,"The relevance to prompt engineering is significant as the study adapts the concept of prompt tuning from natural language processing (NLP) to the visual domain, which is a novel application of prompt engineering principles. Prompt tuning is a core area of study within prompt engineering, and the paper's proposition of a new visual prompting model called 'Explicit Visual Prompting (EVP)' shows direct influence from NLP prompt tuning methods, indicating that the findings could be beneficial to the field. Although EVP is tailored for image-based tasks and not textual prompt engineering, the conceptual crossover and potential implications for the development of similar strategies in NLP make this study relevant. The rating is not a perfect 10 because the study does not directly address textual prompt engineering but rather adapts its concepts to a different domain." -pushing the limits of chatgpt on nlp tasks,gpt-4-1106-preview,9,"The abstract presents research that directly involves the optimization of prompts and input strategies for improving ChatGPT's performance on a variety of NLP tasks. Techniques such as 'one-input-multiple-prompts' and the development of modules to address specific issues inherent in language model tasks are inextricably linked to prompt engineering. Although the study's title does not explicitly mention 'hard prefix prompts,' the body of work encompasses strategies that likely include or are related to prompt engineering concepts. Therefore, the study is highly relevant to prompt engineering, meriting a rating of 9 out of 10. 
It loses one point because it does not specifically mention the systematic review on 'hard prefix prompts,' which might be considered a subset or particular aspect of prompt engineering the inquiry could be asking about." -all in one: multi-task prompting for graph neural networks,gpt-4-1106-preview,8,"The paper focuses on the adaptation of prompt learning from NLP to graph tasks, seeking to bridge the gap between pre-trained models and diverse graph tasks by proposing a novel multi-task prompting method. This is highly relevant to prompt engineering as it explores the concept of prompts, albeit in the domain of graph models. The integration of NLP prompting techniques into a different domain suggests a broader potential application of prompt engineering principles. The rating is not a full 10 due to the specific focus on graph models rather than a general prompt engineering approach." -diffusion-nat: self-prompting discrete diffusion for non-autoregressive text generation,gpt-4-1106-preview,7,"The abstract discusses the integration of discrete diffusion models with non-autoregressive text generation and the improvement of this integration via a novel strategy called 'iterative self-prompting.' While it does not directly mention 'hard prefix prompts,' the concept of self-prompting is related to prompt engineering because it involves the manipulation of prompts to improve the text generation process. This means that the study contributes to the field of prompt engineering, even if it doesn't directly address the specific topic of hard prefix prompts. Therefore, it has relevance to the broader field of prompt engineering but is not a perfect match for a systematic review focused exclusively on hard prefix prompts." -parafuzz: an interpretability-driven technique for detecting poisoned samples in nlp,gpt-4-1106-preview,8,"The relevance to prompt engineering is quite high in this study. The abstract mentions the formulation of the trigger-removal task as a prompt engineering problem, indicating a direct engagement with prompt engineering techniques. Furthermore, the application of 'fuzzing' to discover optimal paraphrase prompts for the purpose of maintaining input semantics while eliminating backdoor triggers in NLP models is aligned with innovative practices within prompt engineering. Although the primary focus is on the detection of poisoned samples and ensuring interpretability, the use of prompt engineering as a method to achieve these aims supports the rating of 8 out of 10." -self-diagnosis and self-debiasing: a proposal for reducing corpus-based bias in nlp,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering as it addresses the critical aspect of bias mitigation in NLP models, which is an essential consideration when designing prompts. The concept of 'self-diagnosis' is particularly pertinent, as it implies that models can detect undesirable biases in response to prompts. Similarly, 'self-debiasing', where the model actively avoids generating problematic outputs based on the prompt description, is a direct application of prompt engineering principles. The techniques discussed could be employed in designing prompts that encourage models to produce less biased content. Although the paper does not directly elaborate on 'hard prefix prompts,' it does contribute to the overarching field of prompt engineering by exploring decoding algorithms and model behavior in response to prompts and bias management." 
-large language models (llms) for natural language processing (nlp) of oil and gas drilling data,gpt-4-1106-preview,7,"The abstract mentions the use of various prompt engineering strategies as part of the methodology to handle text downstream tasks in oil and gas drilling data using large language models. Although the study primarily focuses on the application of LLMs in a specific domain (oil and gas), the inclusion of prompt engineering in the process indicates a significant relevance to the field of prompt engineering study. However, a perfect relevance score is not given because the primary focus of the study is not purely on prompt engineering, but rather on the domain-specific application of large language models which includes prompt engineering as a part of the process." -automatically correcting large language models: surveying the landscape of diverse self-correction strategies,gpt-4-1106-preview,8,"The paper's focus on 'self-correction strategies' for large language models (LLMs) is highly relevant to prompt engineering study, as prompt engineering often involves designing prompts to elicit the desired behavior or correct the output of an LLM. The detailed review of automated feedback methods can be crucial for advancing the prompt engineering field, especially in the context of minimizing the necessity for human intervention in generating effective prompts. However, the paper may not be specifically centered on 'hard prefix prompts,' which the original prompt suggested, hence not a perfect 10." -adversarial attacks on large language model-based system and mitigating strategies: a case study on chatgpt,gpt-4-1106-preview,9,"The abstract details a study that is highly relevant to prompt engineering as it focuses on using prefix prompts as a mitigating strategy against adversarial attacks on language models, directly impacting how prompts are engineered for safety and robustness. Evaluating and enhancing the security of language models like ChatGPT with prefix prompts falls within the scope of prompt engineering research. Although the study may not solely concentrate on the engineering of hard prompts, the development of a 'training-free prefix prompt mechanism' indicates a significant contribution to the field of prompt design and mitigation strategies, which is a crucial aspect of prompt engineering." -evaluating tuning strategies for sequence generation with protein language models,gpt-4-1106-preview,8,"The response evaluates a study that involves adapting NLP models for use in generating artificial protein sequences, with a focus on prompt tuning as an alternative to fine-tuning. Although the study is not directly examining 'hard prefix prompts,' it is investigating the efficiency and effectiveness of tuning strategies, particularly prompt tuning, within the context of a language model adapted for a specialized domain. This makes the study highly relevant to prompt engineering as it explores adaptable methodologies for model tuning, which can include prompt engineering strategies. 
The study's results and the discussion of the quality assessment tools also contribute valuable insights for future developments in prompt engineering, despite not specifically addressing 'hard prefix prompts.'" -from humans to machines: can chatgpt-like llms effectively replace human annotators in nlp tasks?,gpt-4-1106-preview,7,"The abstract discusses the potential use of large language models (LLMs) like ChatGPT for NLP tasks, which is relevant to prompt engineering in the sense that prompt engineering could be vital for directing such models to perform annotation tasks. The ability of LLMs to understand and respond to prompts effectively would be central to their use as annotators. Although the focus here is more on annotation than prompt engineering directly, the quality and nature of prompts would inherently affect the success of such an application. Therefore, the study indirectly addresses issues that are significant to the field of prompt engineering." -iie-nlp-nut at semeval-2020 task 4: guiding plm with prompt template reconstruction strategy for comve,gpt-4-1106-preview,9,"The paper is highly relevant to prompt engineering because it discusses a prompt template reconstruction strategy within the context of a natural language processing task (i.e., SemEval Task4). The use of prompt templates to guide pre-trained language models (PLMs) for specific tasks like commonsense validation and explanation is a direct application of prompt engineering. Even though the study does not seem to be a systematic review on 'hard prefix prompts', the introduction of input reconstruction strategy with prompt templates is closely related to the engineering and structuring of prompts to improve the performance of language models, which is a key aspect of prompt engineering. Therefore, the paper's content aligns well with the field of study." -news summarization and evaluation in the era of gpt-3,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering as it directly involves prompting a large language model (GPT-3) and studying its performance in a specific NLP task - news summarization. Although it does not focus exclusively on 'hard prefix prompts', the mentioned concept of 'task description' prompting is a critical element of prompt engineering. The examination of how effectively GPT-3 can generate summaries with only a task description highlights the importance of designing prompts to elicit desired responses from AI models. The relevance to prompt engineering study is not rated a perfect 10 because the paper seems to cover broader aspects of model evaluation and summarization tasks rather than focusing solely on the detailed structure and impact of prompts." -a progressive prompting approach to conducting context-aware learning activities for natural science courses,gpt-4-1106-preview,7,"The relevance to prompt engineering lies in the exploration of a progressive prompt-based approach to enhance learning outcomes, which is conceptually similar to designing prompts to improve interaction with AI or learning systems. However, the study is situated in the context of mobile learning in natural science courses, not specifically within prompt engineering for AI or computational systems. Nevertheless, the methodologies and findings could have implications for the practice of prompt engineering, particularly in creating adaptive and context-aware prompts for various applications." 
-opt-iml: scaling language model instruction meta learning through the lens of generalization,gpt-4-1106-preview,8,"The study pertains to the broader field of instruction-tuning, which is closely related to prompt engineering, as it involves optimizing language models to understand and execute instructions from prompts more effectively. Although the specific term 'hard prefix prompts' is not mentioned, the principles and findings from such instruction-tuning experiments can be highly relevant and applicable to prompt engineering, including the development and assessment of hard prefix prompts." -how good are gpt models at machine translation? a comprehensive evaluation,gpt-4-1106-preview,7,"The relevance of the presented paper to prompt engineering is significant, mainly due to the examination of the 'effect of prompting strategies' on the performance of GPT models in machine translation. Prompt engineering is crucial for optimizing the model's output, and this paper's exploration of how GPT models respond to different prompts could provide valuable insights for the field. Although the study's primary focus is on machine translation, the inclusion of prompting strategies as one of the evaluated aspects means that the findings could potentially contribute to a better understanding of prompt engineering. Therefore, the rating acknowledges the indirect but important relation to prompt engineering within the context of machine translation." -enabling large language models to generate text with citations,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it directly addresses the construction of prompts to enable large language models to generate text that includes citations. This requires the development of novel prompting strategies that guide the model not just to produce answers, but also to provide evidence through citations. While the study is not solely focused on 'hard prefix prompts,' it falls within the broader field of prompt engineering and is very relevant due to its focus on the performance and verification of information produced by LLMs. Prompt engineering is a critical component in achieving the goals outlined in the study." -diagnostic reasoning prompts reveal the potential for large language model interpretability in medicine,gpt-4-1106-preview,9,The paper is highly relevant to prompt engineering as it specifically focuses on the development and use of 'diagnostic reasoning prompts' designed to investigate the ability of LLMs (like GPT-4) to replicate clinical reasoning processes. This research directly contributes to the field of prompt engineering by demonstrating that prompts can be designed in a way that not only elicits specific types of reasoning from LLMs but can also do so with a level of interpretability that aligns with the cognitive processes of professionals in the field of medicine. The study's aim to enhance understanding and trust in LLMs through better-designed prompts is squarely within the goals of prompt engineering. -make llm a testing expert: bringing human-like interaction to mobile gui testing via functionality-aware decisions,gpt-4-1106-preview,8,"The abstract describes the use of Large Language Models (LLMs) like ChatGPT in automated GUI testing, which involves a novel application of prompt engineering. 
By formulating the problem as a Q&A task and introducing a functionality-aware prompting mechanism, the study essentially deals with the design and utilization of prompts to enable the LLM to generate useful outputs for testing purposes. This showcases an implementation of prompt engineering to improve the performance of an AI model in a domain-specific task. However, it doesn't directly study the prompt engineering process in a broader context, and therefore doesn't merit a perfect score." -llm-funcmapper: function identification for interpreting complex clauses in building codes via llm,gpt-4-1106-preview,8,"The abstract describes the use of a large language model (LLM) to interpret complex regulatory texts, which is relevant to prompt engineering study as it involves the development of a prompt template with chain of thought thinking. While the study isn't focused on 'hard prefix prompts' specifically, the creation of this tailored template and its adjustment using a classification-based tuning strategy are key examples of prompt engineering. The approach of identifying functions and utilizing LLM for understanding complex clauses is closely related to how prompts are engineered to improve the performance of language models on specific tasks. The rating is not a full 10 because the research is not exclusively centered on prompt engineering, but rather on the application of LLMs in the context of interpreting building codes; nonetheless, the methodology includes relevant elements of prompt engineering." -resolving the imbalance issue in hierarchical disciplinary topic inference via llm-based data augmentation,gpt-4-1106-preview,7,"The paper discusses the use of large language models for data augmentation in order to tackle the problem of data imbalance in the context of hierarchical disciplinary topic inference. This is relevant to the field of prompt engineering because designing effective prompts is essential for guiding language models like Llama V1 to generate meaningful and well-aligned augmented text data. The study's emphasis on prompt design for keyword-based research proposal generation is a significant aspect of prompt engineering. However, the primary focus appears to be on addressing data imbalances in the machine learning system, rather than the nuances of prompt engineering itself. Therefore, while prompt engineering is undoubtedly a component of the study, it is not the singular focus." -workshop on large language models' interpretability and trustworthiness (llmit),gpt-4-1106-preview,8,"The abstract discusses the significance of context (prompts) and the need for research on the effects of inputs on Large Language Models (LLMs) and their outputs. It directly relates to prompt engineering, as it addresses the importance of understanding how small changes in prompts can significantly alter the behavior of LLMs (a key issue in prompt engineering). However, it doesn't explicitly mention 'hard prefix prompts' or a systematic review on prompt engineering, hence it doesn't fully match the comprehensive systematic review aspect of the prompt engineering study specified." -improving zero-shot visual question answering via large language models with reasoning question prompts,gpt-4-1106-preview,8,"The title and abstract describe a study focused on improving the effectiveness of Large Language Models (LLMs) for zero-shot Visual Question Answering tasks by using 'Reasoning Question Prompts'. 
This is relevant to prompt engineering as it involves the strategic design of prompts to enhance the performance of LLMs in interpreting and answering questions without any prior specific training on the task. Although the study does not specifically mention 'hard prefix prompts,' it nonetheless pertains to the broader field of crafting prompts to guide the LLMs towards better comprehension and response generation. Therefore, the relevance to prompt engineering is high, but not the maximum as the study doesn't directly address the concept of 'hard prefix prompts'." -prompts of large language model for commanding power grid operation,gpt-4-1106-preview,8,"The abstract describes a study that is focused on redefining the interaction between humans and a power grid operation system through the use of specifically engineered prompts for a Large Language Model. Given that prompt engineering is central to the process of adapting the LLM to interpret and execute natural language commands in the context of power grid operations, the study is highly relevant to the field. The rating is an 8 instead of a perfect score because, while it is about prompt engineering, the application is very specific to power grid operations and might not cover all aspects of prompt engineering, which could also include a broader range of topics beyond this specific use case." -psychologically-informed chain-of-thought prompts for metaphor understanding in large language models,gpt-4-1106-preview,9,"The study presents the application of chain-of-thought prompts to large language models in order to incorporate structured reasoning, similar to probabilistic models, particularly focusing on metaphor understanding. Although it does not specifically address 'hard prefix prompts,' it does fall within the broader category of prompt engineering, which involves designing prompts to elicit specific behaviors or capabilities in language models. The emphasis on structured reasoning through prompts and the reference to improving performance on a specific language task, metaphor paraphrase selection, make it highly relevant to studies in prompt engineering. The only reason it does not receive a full 10 is that it is not exclusively centred on 'hard prefix prompts' as the original term suggests." -can large language models explain themselves? a study of llm-generated self-explanations,gpt-4-1106-preview,8,"The abstract addresses the concept of 'self-explanations' generated by LLMs like ChatGPT, which directly pertains to one aspect of prompt engineering—eliciting detailed and insightful explanations from the model. Even though the abstract does not explicitly mention 'hard prefix prompts,' it discusses the broader area of how to effectively prompt LLMs for specific types of outputs, in this case, self-explanations. Since the study contributes to the understanding of how LLMs can be guided to provide explanations, it is relevant to the study of prompt engineering. However, the rating is not a full 10 because the abstract does not focus specifically on hard prefix prompts but rather on the general capability of LLMs to explain their reasoning." -towards llm-based fact verification on news claims with a hierarchical step-by-step prompting method,gpt-4-1106-preview,9,"The presented paper is highly relevant to prompt engineering study as it explores a novel prompting method, the Hierarchical Step-by-Step (HiSS), specifically for the task of fact verification of news claims using large language models (LLMs). 
This approach falls directly within the scope of prompt engineering, where the design of prompts is used to guide the LLMs to perform complex tasks such as dissecting claims into subclaims and verifying them, which is a more nuanced application of prompt engineering. The relevance is not rated a full 10 only because the abstract does not explicitly discuss the engineering of 'hard prefixes,' but the prompting methodology itself is a significant contribution to the field of prompt engineering." -unified human-scene interaction via prompted chain-of-contacts,gpt-4-1106-preview,7,"The relevance of the 'unified human-scene interaction via prompted chain-of-contacts' study to prompt engineering is significant, as it describes a system that uses language commands to control interactions within a virtual environment. This means that it requires engineered prompts to interpret human language and convert it into actionable commands, aligning closely with the concept of prompt engineering. Although the study focuses specifically on Human-Scene Interaction and does not explicitly discuss the process of designing prompts or the systematic review of hard prefix prompts, the usage of a Large Language Model (LLM) Planner to translate these commands indicates that prompt engineering is an integral part of the framework. Therefore, it is relevant to the study of prompt engineering but not entirely focused on it; hence, it receives a rating of 7." -learning profitable nft image diffusions via multiple visual-policy guided reinforcement learning,gpt-4-1106-preview,7,"The study focuses on generating Non-Fungible Token (NFT) images using a combination of language and image generation models, which relates to prompt engineering in that it involves generating detailed prompts to create specific visual attributes in NFTs. The use of a large language model (LLM) to enhance human input into more complex prompts is particularly relevant to prompt engineering. However, the study also diverges into optimization metrics and market value considerations, aspects that are less directly connected to traditional prompt engineering. Hence, the rating acknowledges the relevance of prompt generation and refinement while noting that not all aspects of the paper are centered on prompt engineering." -majority rule: better patching via self-consistency,gpt-4-1106-preview,8,"The abstract provided discusses an advanced application of prompting techniques in the specific context of software engineering problem-solving. While the focus is on a particular domain, the techniques used, such as few-shot prompts, chain of thought explanations, and the self-consistency method, are directly related to prompt engineering. The paper's contribution to prompt engineering is substantial as it explores the effectiveness of particular prompting strategies (like using commit logs as explanations) that lead to state-of-the-art results. However, the research does not appear to be about 'hard prefix prompts' specifically, so it is not a perfect match for a 'comprehensive systematic review on hard prefix prompts.' Therefore, the rating is not a full 10." -llm-assisted content analysis: using large language models to support deductive coding,gpt-4-1106-preview,7,"The paper 'llm-assisted content analysis: using large language models to support deductive coding' is moderately relevant to prompt engineering studies.
The study investigates the potential of Large Language Models like GPT-3.5 to assist with the labor-intensive process of deductive coding in qualitative research, which is a specific application of natural language processing. Although it does not directly focus on 'hard prefix prompts,' it does explore the broader realm of using prompts (or queries) to facilitate analysis with an LLM, and it examines how LLMs can be used to refine prompts for better deductive coding outcomes, which is a core part of prompt engineering. Therefore, the principles and findings regarding prompt optimization and evaluation in this research can be valuable for those studying prompt engineering, even if the primary focus of the study does not directly align with the construction or systematization of hard prefix prompts." -toolkengpt: augmenting frozen language models with massive tools via tool embeddings,gpt-4-1106-preview,7,"The abstract provided does pertain to the general field of prompt engineering, given it discusses an approach to augment large language models in a way that could enhance their use of prompts for tool execution. Although it doesn't specifically mention 'hard prefix prompts' or conduct a 'systematic review' on them, the description of ToolkenGPT and the concept of 'toolkens' are relevant to the field of prompting language models for specific tasks. The paper suggests a method for improving the interaction between language models and the tools they can utilize, which could be considered a form of advanced prompt engineering. Therefore, the rating is moderately high for relevance, but not a full score because it does not directly address a systematic review or the specific concept of 'hard prefix prompts.'" -automatic calibration and error correction for generative large language models via pareto optimal self-supervision,gpt-4-1106-preview,7,"The abstract describes a methodology for improving the calibration and error correction of generative large language models, which is an important aspect of prompt engineering. Effective prompt engineering can benefit greatly from systems that are able to self-evaluate their confidence and error likelihood, providing insight into how prompts might be refined for better outcomes. While the study does not directly deal with 'hard prefix prompts', the proposed framework for self-supervision and dynamic prompting strategy is relevant to the field of prompt engineering as it touches on the calibration and adaptation of prompts based on model confidence. Therefore, the relevance to prompt engineering is significant, although not exclusively focused on 'hard prefix prompts' but rather on the broader issues of model response calibration and error correction." -self-detoxifying language models via toxification reversal,gpt-4-1106-preview,9,"The abstract is highly relevant to prompt engineering study because it directly involves the process of manipulating prompts to achieve a desired behavior in a pretrained language model (PLM). The concept of 'self-detoxification' by reversing the toxification direction is an application of prompt engineering where the input prompt's design has a pivotal role. While it doesn't focus on 'hard prefix prompts' explicitly, it aligns with the core principles of prompt engineering—altering the prompts to influence the model's outputs."
-enhance reasoning ability of visual-language models via large language models,gpt-4-1106-preview,8,"The provided abstract is relevant to prompt engineering study because it describes a method (TReE) for enhancing the reasoning ability of visual-language models by using prompts derived from a large language model. This is particularly applicable to hard prefix prompts, as it involves structuring input to the models in a way that guides them through a multi-stage reasoning process. Although the abstract may not explicitly state 'hard prefix prompts', the thinking and re-thinking stages likely involve constructing prompts that carefully direct the model's reasoning, a key concept in prompt engineering." -violation of expectation via metacognitive prompting reduces theory of mind prediction error in large language models,gpt-4-1106-preview,7,"The abstract describes a study on the application of a metacognitive prompting framework in the context of LLMs and their ability to perform Theory of Mind tasks. These tasks are directly related to the prediction capabilities and interpretation strategies of the models, which are essential elements in the broader scope of prompt engineering. Though the concept of 'hard prefix prompts' as specified in the initial request is not addressed directly, the nature of modifying LLM behavior through specific prompting techniques (metacognitive prompting) is highly relevant to enhancing the understanding of how prompts affect model performance and behavior. Therefore, the study is considerably relevant as it focuses on systematic approaches to improve interaction quality between humans and AI via prompts, which could indirectly contribute to the understanding and development of hard prefix prompts in prompt engineering." -automatic hallucination assessment for aligned large language models via transferable adversarial attacks,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it explores the creation of prompts (in this case, adversarial attacks) that influence language model performance. This involves understanding how prompting affects LLM behavior and assessing the models' reliability, which is a core aspect of prompt engineering. The use of prompt chaining is directly related to the design and engineering of prompts that can manipulate or test the behavior of LLMs. Although the study's focus is on hallucination and the generation of evaluation data, the methods used are a part of prompt engineering practices." -improving few-shot generalization of safety classifiers via data augmented parameter-efficient fine-tuning,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it explores the use of prompt-tuning (a form of prompt engineering) combined with data augmentation to improve the performance of language models on safety classification tasks. This work directly pertains to the field of prompt engineering, as it aims to enhance model generalization using techniques that modify the input prompt structure to better guide the model in few-shot learning scenarios. The approach mentioned, similarity-based data-augmentation + prompt-tuning (DAPT), is a specific instance of prompt engineering, thus making the study quite relevant. Despite the focus on domain-generalized few-shot learning for safety applications and not solely on 'hard prefix prompts', the paper’s exploration of prompt-tuning in practice warrants a high relevance score."
-tempera: test-time prompt editing via reinforcement learning,gpt-4-1106-preview,9,"The paper's abstract indicates that the work is highly relevant to prompt engineering as it presents a novel method (TEMPERA) which focuses on editing prompts using reinforcement learning. This directly aligns with innovations and advancements in prompt design strategies for large language models, which is at the heart of prompt engineering studies. The only reason the rating is not a full 10 is that the relevance might be slightly more specific to reinforcement learning techniques in prompt engineering rather than a broad systematic review on 'hard prefix prompts'. However, the contributions to optimizing prompts and improving sample efficiency are very pertinent to the field." -gpt-4 is too smart to be safe: stealthy chat with llms via cipher,gpt-4-1106-preview,8,"The relevance of this study to prompt engineering is high because it directly investigates the interaction dynamics between humans and LLMs (Large Language Models) by introducing a novel method of communication—CipherChat. This approach challenges existing safety alignment techniques, which are crucial for prompt engineering as they ensure that model responses align with intended outcomes and ethical guidelines. The use of ciphers as a tool to test and potentially enhance LLMs' interpretative faculties aligns with prompt engineering strategies that seek to refine how models understand and generate language-based responses. Furthermore, the discovery of a 'secret cipher' within LLMs and the development of a SelfCipher method pertains to advanced prompt engineering, where understanding model behavior in non-natural languages can lead to more sophisticated and safer human-AI interactions. However, because the study primarily focuses on safety alignment and communication in ciphers, which are a subset of prompt engineering tasks, it does not fully encompass the breadth of prompt engineering studies. Hence, the rating falls short of a perfect score." -ask an expert: leveraging language models to improve strategic reasoning in goal-oriented dialogue models,gpt-4-1106-preview,8,"The study focuses on incorporating strategic reasoning into dialogue models through the use of specialized prompts, which is related to prompt engineering. Although the 'hard prefix prompt' is not explicitly mentioned, the concept of structured prompts guiding dialogue systems is fundamental to prompt engineering and is reflected in the 'Ask an Expert' framework. This framework relies on pre-specified prompts to direct the conversation, which is a core aspect of prompt engineering. The relevance to prompt engineering is high, but the rating is not a full 10 due to the absence of a direct focus on 'hard prefix prompts' specifically." -zero-shot visual relation detection via composite visual cues from large language models,gpt-4-1106-preview,9,"The described study's focus on using language model-generated description-based prompts, referred to as 'Composite Description prompts', to improve zero-shot visual relation detection directly relates to the field of prompt engineering. The systematic review of 'hard prefix prompts' could encompass studies that explore innovative ways of combining language models with vision tasks, including the generation of prompts to guide visual recognition. Furthermore, the introduction of a chain-of-thought method to prompt language models for weight generation aligns with strategic prompt design to elicit specific model behaviors. 
Thus, the relevance is high, though not a perfect 10 as the primary focus is on visual relation detection rather than prompt engineering exclusively." -can llms keep a secret? testing privacy implications of language models via contextual integrity theory,gpt-4-1106-preview,7,"The study discusses the implications of information handling by large language models (LLMs), which relates to how these models process and output information based on the instructions (prompts) they receive. While it does not directly address 'hard prefix prompts,' it touches on the broader topic of prompt design and its influence on model behavior, particularly regarding privacy. It is relevant to prompt engineering since understanding and improving the privacy reasoning capabilities of LLMs can lead to the development of better prompts that protect user privacy. The rating is not a perfect 10 because the study's focus is on privacy and not explicitly on the structure or format of the prompts themselves, which would be a central aspect of a study dedicated entirely to prompt engineering." -distinguish before answer: generating contrastive explanation as knowledge for commonsense question answering,gpt-4-1106-preview,8,"The abstract describes CPACE, a model that uses explanation prompts to generate contrastive explanations from symbolic knowledge, which is particularly relevant to the field of prompt engineering. The use of prompts to guide the generation of explanations indicates that this research is focused on enhancing the interpretability and effectiveness of a question answering system through careful design of prompts. While not exclusively focused on 'hard prefix prompts', the study emphasizes the use of prompts in an AI model, which aligns with studies in prompt engineering. The relevance rating is not the maximum because the connection to 'hard prefix prompts' is not direct, yet the concept of using prompts to drive AI behavior is central to the research presented." -using global land cover product as prompt for cropland mapping via visual foundation model,gpt-4-1106-preview,7,"The abstract discusses leveraging the 'Pretrain+Prompting' paradigm, which is relevant to prompt engineering as it involves designing prompts to aid in domain adaptation for cropland mapping. The introduction of the auto-prompting (APT) method aligns with prompt engineering by using prompts to modify the behavior of pre-trained models on specific tasks. However, the direct focus on cropland mapping and the use of visual foundation models means it is not exclusively centered on prompt engineering but rather its application in a specific domain. Thus, it is moderately relevant but not a comprehensive systematic review on hard prefix prompts." -epa: easy prompt augmentation on large language models via multiple sources and multiple targets,gpt-4-1106-preview,8,"The paper describes a method called EPA (Easy Prompt Augmentation) which is directly related to prompt engineering. It improves the performance of large language models by augmenting task prompts with paraphrased demonstrations, reducing the user's effort in creating effective prompts. Since the study is about a technique to enhance prompt efficacy for NLP tasks, it has high relevance to the field of prompt engineering. However, the information provided does not explicitly mention 'hard prefix prompts', which was the specific topic of interest mentioned in the original inquiry, thus the rating is not a full 10." 
-expclip: bridging text and facial expressions via semantic alignment,gpt-4-1106-preview,7,"The abstract describes a research study that focuses on using natural language prompts to control the style of facial expressions in speech-driven animation, which is relevant to prompt engineering in the context of using language prompts for specific tasks. However, the primary application is in the domain of facial animation rather than prompt engineering for text generation or data processing tasks. Nevertheless, the study's use of a CLIP-based model and the development of a Text-Expression Alignment Dataset (TEAD) suggests significant overlap with prompt engineering methodologies, as it involves the alignment of text prompts with emotional expressions. The relevance is not complete as the scope of prompt engineering can be more extensive, but the techniques and mechanisms such as automatic annotation with LLMs and Expression Prompt Augmentation (EPA) are of interest to the field of prompt engineering." -tailoring personality traits in large language models via unsupervisedly-built personalized lexicons,gpt-4-1106-preview,7,"The study described in the abstract addresses the manipulation of language models' outputs by tailoring personality traits, which is related to prompt engineering in the sense that it involves guiding the language model to generate text with certain characteristics. Although the main focus is on personality traits via lexical choices rather than 'hard prefix prompts,' it still falls within the broader scope of controlling language model behavior, which is a key aspect of prompt engineering. Thus, the relevance is significant but not directly aligned with hard prefix prompts, hence the rating is not a full 10." -denevil: towards deciphering and navigating the ethical values of large language models via instruction learning,gpt-4-1106-preview,9,"The described paper is highly relevant to prompt engineering, as it develops a novel prompt generation algorithm (DeNEVIL) that interacts with large language models to explore and expose their ethical value alignment through instructions. Although not directly labeled as 'hard prefix prompts,' the concept of generating prompts to induce model behavior aligns with studies concerning prompt design and efficacy. The focus on ethical considerations adds a dimension of value-based prompt engineering, which is a specialized and relevant aspect of the broader field of prompt engineering studies." -pitl: cross-modal retrieval with weakly-supervised vision-language pre-training via prompting,gpt-4-1106-preview,8,"The study is highly relevant to the field of prompt engineering as it describes a method to improve the performance of vision-language pre-training models by using prompts to elicit knowledge from large language models. The method, called Prompts-in-The-Loop (PiTL), uses prompts to generate language counterparts for images, which reduces the need for paired image-text data and is a direct application of prompt engineering techniques. Although the study does not specifically focus on 'hard prefix prompts', it is still related to the broader area of prompt engineering, hence the rating of 8." -vision-language interpreter for robot task planning,gpt-4-1106-preview,7,"The study discussed in the abstract is moderately relevant to prompt engineering, as it deals with the generation of problem descriptions (PDs) from language instructions, which is a component of prompt engineering. 
In prompt engineering, one must design prompts that effectively communicate tasks to language models, and here, the model is interpreting language to create PDs for robot task planning. Although the study focuses on robot planning and multimodal inputs, the underlying principle of translating natural language into machine-readable formats aligns with the techniques and goals of prompt engineering. The interdisciplinary nature of this research, combining language models with symbolic planners, reflects the complexity encountered in prompt engineering scenarios. However, it does not directly address 'hard prefix prompts,' which suggests it is not fully specialized in the field of prompt engineering but is nonetheless relevant." -prompting with pseudo-code instructions,gpt-4-1106-preview,8,"The paper directly addresses the concept of 'prompt engineering' by exploring the use of pseudo-code as a form of prompt style for improving the performance of pre-trained language models. It compares pseudo-code prompts with natural language prompts and presents empirical results showing the effectiveness of pseudo-code, which includes structural elements pertinent to the field of prompt engineering. The improvement in performance metrics like F1 scores for classification and ROUGE-L scores for generative tasks indicates a significant relevance to the area of study. However, it focuses specifically on pseudo-code prompting rather than a broader range of hard prefix prompts, which is why the rating is not a full 10." -towards general visual-linguistic face forgery detection,gpt-4-1106-preview,8,"The abstract describes a study that centers on using 'fine-grained sentence-level prompts' for more effective face forgery detection. Prompt engineering is directly related to the design of these fine-grained prompts, making it highly relevant to the stated topic. The use of prompts within a Visual-Linguistic Face Forgery Detection system to improve semantic information and interpretability aligns with the study of hard prefix prompts which are designed for better interaction between language and models. The rating isn't a full 10 because the study focuses on a specific application of prompts in face forgery detection rather than a broad systematic review of hard prefix prompts across various domains." -forgetful large language models: lessons learned from using llms in robot programming,gpt-4-1106-preview,9,"The abstract indicates a study focused on reducing errors in the execution of robotic programming tasks by employing language models with prompts. Although it concentrates on the 'forgetfulness' of LLMs and proposes solutions through prompt engineering tactics, it doesn't strictly cover 'hard prefix prompts' as the original study question suggests. However, the relevance is quite high as the paper seems to be a direct application of prompt engineering to improve task performance. The rating is slightly reduced only because a specific focus on prefix prompts is not stated." -interpretable unified language checking,gpt-4-1106-preview,8,"The abstract mentions the use of a 'simple, few-shot, unified set of prompts' for improving the performance of large language models (LLMs) on a variety of language checking tasks. This indicates that the research involved studies on how prompt engineering can enhance the capabilities of LLMs in detecting misinformation, stereotypes, and hate speech.
Although the focus is not solely on 'hard prefix prompts,' the relevance to prompt engineering is clear because the study explores how different kinds of prompts can affect the performance of LLMs on specific language tasks. The rating is not a full 10 because the abstract does not focus exclusively on a systematic review of prompt engineering or on 'hard prefix prompts', which are specific types of prompts used to control the behavior of language models." -genrec: large language model for generative recommendation,gpt-4-1106-preview,7,"The abstract indicates the use of 'specialized prompts' to improve the ability of a Large Language Model (LLM) to understand recommendation tasks, which implies a form of prompt engineering. Since prompt engineering is essential for fine-tuning LLMs to perform specific tasks such as generative recommendation, and this paper discusses formulating these prompts, it has a substantial relevance to prompt engineering study. However, the focus of the abstract seems more on the application of large language models for recommendation systems rather than the detailed study of hard prefix prompts, which prevents a perfect score." -"a multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity",gpt-4-1106-preview,8,"The abstract describes an evaluation framework that specifically includes assessments of ChatGPT's capabilities in a 'multi-turn ""prompt engineering"" fashion,' indicating that the study examines and utilizes prompt engineering as a part of the evaluation process. Since prompt engineering is integral to optimizing the performance of ChatGPT in various tasks as mentioned in the abstract, it is highly relevant to the study of prompt engineering. However, it is not entirely focused on 'hard prefix prompts,' which would be explicitly tailored cues designed to guide the language model's responses, therefore the rating is not a full 10." -chain-of-thought prompt distillation for multimodal named entity and multimodal relation extraction,gpt-4-1106-preview,8,"This abstract describes a study focused on prompt distillation, which is a technique related to prompt engineering. The core concept of prompt engineering is leveraged here, as it involves crafting prompts to extract reasoning abilities from large language models and effectively transfer this knowledge to smaller models. This research is relevant to the field of prompt engineering, specifically concerning the design of 'chain-of-thought' prompts to facilitate multimodal understanding. Although the study doesn't explicitly focus on 'hard prefix prompts,' it aligns closely with the larger domain of prompt engineering, thus meriting a high relevance rating." -fedlogic: interpretable federated multi-domain chain-of-thought prompt selection for large language models,gpt-4-1106-preview,9,"The relevance of the paper 'FedLogic: Interpretable Federated Multi-domain Chain-of-Thought Prompt Selection for Large Language Models' to prompt engineering is high. It directly addresses the challenge of prompt selection in LLMs, aiming to improve both the precision of responses and the interpretability of the prompting process. The focus on Chain-of-Thought reasoning, a method that has shown promise for enhancing the quality of LLM outputs, further emphasizes its relevance to the current landscape of prompt engineering.
The introduction of FedLogic to navigate the complexities of multi-domain prompt selection and its emphasis on a theoretical framework and constraint incorporation suggests significant contributions to the field of prompt engineering. The only reason it doesn't score a full 10 is that the abstract does not mention 'hard prefix prompts,' which might be understood as a subset or a particular method within prompt engineering; the paper seems to focus more broadly on CoT prompts." -robust preference learning for storytelling via contrastive reinforcement learning,gpt-4-1106-preview,7,"The abstract describes an approach to controlled automated story generation that involves a level of prompt engineering, particularly in the fine-tuning phase using prompt-learning techniques. This suggests relevance to the study of prompt engineering, especially in the context of enhancing the robustness of a generative model's outputs with respect to user preferences. However, the focus of the study is on contrastive reinforcement learning rather than exclusively on hard prefix prompts or a detailed dissection of prompt engineering approaches. The relevance is therefore notable but not comprehensive concerning prompt engineering as a broad field." -using natural language explanations to rescale human judgments,gpt-4-1106-preview,7,"The abstract describes a study involving the use of large language models (LLMs) to rescale human judgment annotations based on natural language explanations. This is relevant to prompt engineering as it directly pertains to the optimization of LLM outputs through the integration of human feedback. Specifically, feeding Likert ratings and explanations into an LLM to homogenize ratings across annotators is a form of prompt design that guides the model to generate more consistent and possibly more reliable numeric scores. The technique is studied within the context of a specific NLP task (document-grounded question answering), and it addresses challenges inherent in subjective human evaluations which are critical for training and evaluating LLMs. The relevance is not rated higher because the study is more focused on the annotation process and the rescaling of human judgments rather than the construction of hard prefix prompts specifically." -chain-of-thought prompt distillation for multimodal named entity recognition and multimodal relation extraction,gpt-4-1106-preview,8,"The abstract discusses leveraging the 'chain of thought' (CoT) as an intermediate reasoning process for distilling knowledge from large language models to a student model, which is highly relevant to prompt engineering. This process directly involves designing prompts to elicit reasoning steps, indicating how the model should approach a problem, thus involving prompt engineering. However, the focus is primarily on multimodal named entity recognition and relation extraction, so it is not entirely within the realm of hard prefix prompts in a strict sense, hence the rating is not a full 10." -litsumm: large language models for literature summarisation of non-coding rnas,gpt-4-1106-preview,9,"The abstract discusses the use of large language models (LLMs) with a series of prompts and checks to automatically generate summaries of literature for non-coding RNAs, which is highly relevant to prompt engineering. The study highlights the importance of prompt design in achieving high-quality output from LLMs. 
It illustrates a practical application of prompt engineering within the context of automating curation processes in the life science field. This aligns closely with the concept of 'hard prefix prompts' in prompt engineering studies, as it emphasizes the effectiveness of structured input (prompts) in guiding the language model toward the desired task. The sole reason for not rating it a perfect 10 is that the abstract does not focus exclusively on the theory or mechanics of prompt engineering itself, but rather on the application of prompt engineering techniques in a specific domain." -who wrote it and why? prompting large-language models for authorship verification,gpt-4-1106-preview,9,"The abstract outlines a study that uses engineered prompts, specifically 'step-by-step stylometric explanation prompts,' as a key component of their proposed method (PromptAV) for authorship verification. This directly falls into the scope of prompt engineering studies as it involves designing prompts that enable a Large-Language Model to perform a specific task more effectively. The work not only engages with prompt design but also tackles the challenges of data efficiency and model interpretability, which are pertinent to the development and assessment of prompts in language models. The one point deduction is due to the possibility that the study may not encompass a 'comprehensive systematic review' on the topic, but rather presents a novel approach within the field." -alltogether: investigating the efficacy of spliced prompt for web navigation using large language models,gpt-4-1106-preview,7,"The study addresses the concept of prompt engineering by introducing 'AllTogether,' a prompt template aimed at improving the performance of Large Language Models in web navigation tasks, which is a specialization within prompt engineering. Though the study's focus is not on 'hard prefix prompts' specifically, it is still relevant to the broader domain of prompt engineering because it explores how to optimize prompts to enhance LLMs' understanding of tasks. As such, while it does not cover the full breadth of prompt engineering, especially with regards to systematic reviews of hard prefix prompts, it does contribute to the field by investigating prompt efficacy and template standardization." -wordart designer: user-driven artistic typography synthesis using large language models,gpt-4-1106-preview,8,"The paper describes a framework for artistic typography synthesis that centrally involves the use of Large Language Models (LLMs) to interpret user inputs and generate actionable prompts, which is directly related to prompt engineering. While the title does not explicitly mention 'hard prefix prompts', the 'LLM Engine' described operates with some form of prompt that guides the generation process. This indicates that the study does indeed involve an aspect of prompt engineering, particularly as it pertains to the synthesis of graphic designs. However, as the prompt in question specifically asks for a 'comprehensive systematic review on hard prefix prompts,' an approach or a model that is not the primary subject of this paper, the relevance is not maximal. 
Therefore, the rating reflects high relevance to prompt engineering in general but not a perfect match to the exact subject of 'hard prefix prompts.'" -context-aware prompt tuning for vision-language model with dual-alignment,gpt-4-1106-preview,9,"The abstract describes the development and application of a method called Dual-Aligned Prompt Tuning (DuAl-PT) in the context of vision-language models, which is highly relevant to the field of prompt engineering. Prompt engineering is a critical aspect of adapting large models to specific tasks, and the introduction of a novel method that utilizes both pre-trained language models and alignment techniques directly pertains to advancements in prompt engineering. The high relevance is underscored by the explicit focus on improving the efficiency and context-awareness of prompts, which are key goals in prompt engineering. The reason for not giving a perfect 10 is that the abstract does not focus on 'hard prefix prompts' specifically but rather on prompt learning methods in general, which encompasses a wider field than the specified study area." -de-diffusion makes text a strong cross-modal interface,gpt-4-1106-preview,7,"The title and abstract suggest that the study focuses on encoding images as text for use in a cross-modal interface, which has relevance to prompt engineering considering that prompts are a form of text input. The approach allows for the use of natural language as an interface to interact with images and demonstrates the potential to prompt large language models for multi-modal tasks. The relevance to prompt engineering is significant due to the generation of text representations that can serve as prompts and the improvement in interfacing with text-to-image tools. However, the paper is more focused on the cross-modal exchange and image representation than on the design or optimization of prompts themselves, which are typically the main focus in prompt engineering studies." -llamarec: two-stage recommendation using large language models for ranking,gpt-4-1106-preview,7,"The abstract describes a use of large language models (LLMs) in a two-stage recommendation framework, which includes the use of prompt templates for inputting user interaction history and candidate items into the LLM. Prompt engineering is relevant here because the design of prompt templates can be considered a form of engineering prompts to improve the performance of the LLM in the task of ranking-based recommendation. However, the study does not seem to focus primarily on the 'hard prefix prompts' aspect, but rather on the overall framework of using LLMs for recommendation, which includes prompt engineering as a component. Therefore, the relevance is significant but not exclusive to prompt engineering study." -mind the instructions: a holistic evaluation of consistency and interactions in prompt-based learning,gpt-4-1106-preview,9,The abstract describes research that directly relates to prompt engineering by analyzing the stability and consistency of language model predictions in response to different prompting setups. This type of investigation is crucial for understanding how different prompt designs affect model performance and is a core aspect of prompt engineering. The high relevance rating is due to the focus on prompt-based learning and the systematic review of factors that influence the behavior of language models in response to prompts. 
-collaborative large language model for recommender systems,gpt-4-1106-preview,7,"The abstract mentions the development of CLLM4Rec, which incorporates a 'soft+hard prompting strategy' during the pretraining stage for language modeling on recommendation system-specific corpora. The mention of hard prompts directly ties to prompt engineering, particularly within the context of integrating these prompts to improve the performance of a recommender system driven by a large language model. Given that the paper appears to specifically address and include prompt engineering strategies, it is relevant to studies of prompt engineering albeit focused more on the application within recommender systems rather than a general discussion or a review of hard prefix prompts in a wide array of domains. The rating is not a full 10 because the primary focus is on the recommender systems with prompt engineering being an element of the solution rather than the main subject of the paper." -automated assessment of comprehension strategies from self-explanations using llms,gpt-4-1106-preview,8,"The study's focus on leveraging open-source Large Language Models for the assessment of comprehension strategies is highly relevant to prompt engineering given that it employs the technique of fine-tuning LLMs and providing examples via prompts to improve performance. This is particularly pertinent to the field of prompt engineering as it directly involves strategies for optimizing the interaction with LLMs to achieve better outcomes in understanding and generating text. Although the study does not specifically mention 'hard prefix prompts', the practice of providing examples via the prompt and the implicit structuring of input to elicit specific types of responses are at the core of prompt engineering studies. Hence, the relevance to prompt engineering is quite significant, but not entirely focused on the 'hard prefix prompts' aspect, leading to a rating of 8." -distilled language models are economically efficient for the enterprise. ...mostly.,gpt-4-1106-preview,7,"The abstract discusses the comparison of three strategies to specialize a Large Language Model (LLM) for enterprise use in assisting customer service agents, one of which is prompt engineering. While the main focus appears to be on the economic efficiency of using distilled language models, prompt engineering is directly mentioned as one of the methods assessed. Therefore, it is relevant from the perspective of comparing the effectiveness and costs of different methods of leveraging LLMs, including prompt engineering. However, the complete focus on prompt engineering is not evident, thus not deserving a full score." -curriculum prompt learning with self-training for abstractive dialogue summarization,gpt-4-1106-preview,8,"The paper presents a curriculum-based prompt learning method which is highly relevant to the field of prompt engineering. The method's gradual increase in prompt perturbation is particularly pertinent to the study of hard prefix prompts, as it deals with enhancing the model's understanding through strategically structured prompts. However, it doesn't focus exclusively on 'hard prefix prompts' but rather on prompt learning in general within the specific application of dialogue summarization. Thus, while the paper is relevant due to its focus on innovative prompt engineering techniques, the relevance is not perfect as the study does not solely center on hard prefix prompts per se." 
-meta-augmented prompt tuning for better few-shot learning,gpt-4-1106-preview,7,"The study mentioned in the abstract addresses issues related to prompt tuning, particularly in the context of few-shot learning. While prompt tuning is directly relevant to prompt engineering, the study focuses on soft prompts rather than hard prefix prompts. The proposed SUMMER framework seeks to improve the initialization and generalizability of soft prompts, which is suggestive of techniques that could potentially be applicable to a broader set of prompt engineering challenges. However, since the study is not specifically about hard prefix prompts, the relevance is significant but not direct, leading to a rating of 7." -combating the covid-19 infodemic using prompt-based curriculum learning,gpt-4-1106-preview,7,"The abstract suggests that the study involves a prompt-based curriculum learning method, which is connected to the field of prompt engineering, as it implies the use of prompts to extract reliable information from a text. This method seems to be focused on content verification, relevant to the application of prompt engineering in creating models that combat misinformation—a key aspect of information processing and decision-making for AI language models. However, the absence of specific details on 'hard prefix prompts' means the study may not be exclusively focused on the aspect of 'hard prefix prompts' in prompt engineering, thus not warranting a higher relevance score." -taxonprompt: taxonomy-aware curriculum prompt learning for few-shot event classification,gpt-4-1106-preview,7,"The title suggests that the study involves 'taxonomy-aware curriculum prompt learning' which indicates a connection to 'prompt engineering', as it discusses designing prompts that are aware of a certain taxonomy. This seems relevant for prompt engineering studies since it likely deals with the creation and optimization of prompts for machine learning tasks. However, without an abstract or TLDR, it's difficult to determine the exact focus of the paper and its direct applicability to hard prefix prompts, hence the relevance is not rated higher." -fpc: fine-tuning with prompt curriculum for relation extraction,gpt-4-1106-preview,9,The paper's focus on prompt-based fine-tuning aligns closely with the study of prompt engineering. It explores how prompts can be designed and utilized to improve the performance of relation extraction tasks by capturing the semantics of relation labels. The concept of a 'Prompt Curriculum' contributes to the field by addressing how to incrementally build up a model's capacity through prompts. This is highly relevant to prompt engineering as it deals with strategic prompt design and application in the context of fine-tuning pre-trained language models. The reason it is not a full 10 is because it is specific to relation extraction and may not cover every aspect of prompt engineering in a broader sense. -learning to perform complex tasks through compositional fine-tuning of language models,gpt-4-1106-preview,7,"The abstract describes a method related to prompt engineering — compositional fine-tuning (CFT). While it does not directly address 'hard prefix prompts,' it does engage with the broader theme of structuring the interaction with language models to improve task performance. The work on CFT contributes to our understanding of how tasks can be decomposed and taught to language models, which is relevant to the study of how prompts can be designed and optimized. 
This is tangentially related to hard prefix prompts, as both are concerned with the efficacy of input structures for language models. However, because the focus is on CFT rather than on hard prefix prompts directly, the relevance is significant but not complete." -"conversational challenges in ai-powered data science: obstacles, needs, and design opportunities",gpt-4-1106-preview,7,"The study addresses some core issues related to prompt engineering, such as formulating prompts for complex tasks and refining prompts iteratively. These topics are highly relevant to the field, as effective communication with LLMs is contingent upon constructing well-defined prompts. However, the study seems to focus more broadly on conversational challenges in AI within data science, rather than exclusively on 'hard prefix prompts' or systematic reviews on prompt engineering. Thus, while the content is relevant, it does not specifically target hard prefix prompts or provide a comprehensive systematic review, which the prompt specifically asks for." -cona: a novel context-aware instruction paradigm for communication using large language model,gpt-4-1106-preview,8,"The abstract discusses CONA, a context-aware instruction paradigm designed for effective knowledge dissemination with GPT models, which certainly falls under the broader category of prompt engineering, as it explores new methods for communication and interaction with LLMs. Despite not addressing 'hard prefix prompts' specifically, it presents a framework that utilizes the mechanisms of prompt engineering to optimize interactions with LLMs. However, the connection to 'hard prefix prompts' is not explicit, hence the rating is not a full 10." -exploring the design space of ai based code completion engines,gpt-4-1106-preview,8,"The abstract describes a thesis that has a significant focus on prompt engineering as it pertains to AI-based code completion tools like Github Copilot. It explicitly mentions the study of prompt engineering in the context of providing the AI model with the right context and assessing the impact of that context on the quality of the code suggestions. While the study seems more broadly focused on the overall design and factors affecting code completion tools, prompt engineering is indeed a crucial aspect of the thesis as it can greatly influence the AI model's performance. Therefore, it is highly relevant to the study of prompt engineering, though it might not focus solely on 'hard prefix prompts' as specified in the original prompt." -incremental learning of humanoid robot behavior from natural interaction and large language models,gpt-4-1106-preview,7,"The study discusses the integration of Large Language Models (LLMs) into the behavior orchestration of a humanoid robot, focusing on natural-language interaction and incremental learning through feedback loops. While not directly focusing on hard prefix prompts, the concept of 'incremental prompt learning' is introduced, where the system learns and modifies its interactions based on human feedback. This relates to prompt engineering in the broader sense because it involves designing and refining prompts that the LLM uses to generate proper Python statements, which directly affect the robot's actions. However, the study does not appear to specifically address hard prefix prompts or a systematic review thereof, hence the score is not a full 10, reflecting its partial relevance to the specific area of prompt engineering mentioned in the initial query."
-tree-planner: efficient close-loop task planning with large language models,gpt-4-1106-preview,8,"The paper discusses an approach to task planning with Large Language Models that includes the use of prompts to generate plans, which is closely linked to the concept of prompt engineering. While the focus is on efficiency and error reduction in iterative actions rather than the study of hard prefix prompts specifically, the principles of designing effective prompts are implicitly a part of the paper due to the need for clear and structured input to guide the LLMs' plan generation and decision-making processes. Therefore, the study is relevant to the broader context of prompt engineering, although it does not directly address a comprehensive systematic review on hard prefix prompts." -batchprompt: accomplish more with less,gpt-4-1106-preview,9,"The abstract describes research focused on improving the efficiency of large language model prompting through batching strategies, specifically 'BatchPrompt.' This is highly relevant to prompt engineering as it directly tackles the challenge of optimizing prompts for better performance in terms of processing time and resource consumption, which is a core aspect of prompt engineering. The introduction of strategies like Batch Permutation and Ensembling (BPE) and Self-reflection-guided Early Stopping (SEAS) to address performance issues associated with batching denotes a significant contribution to the field. The detailed experimental results showing comparative performance with traditional single-data prompting further underscore the relevance of this study to prompt engineering. The deduction of a point from a perfect score is due to the abstract's slightly broader focus on overall efficiency rather than the fine-grained specifics of prompt crafting. However, the study's outcome directly impacts prompt engineering practices for large language models." -modular and parameter-efficient multimodal fusion with prompting,gpt-4-1106-preview,7,"The paper discusses the use of prompt vectors to align modalities in multimodal fusion, which is relevant to the field of prompt engineering as it involves the use of prompts to achieve model efficiency and modularity. However, it may not directly address the exact concept of 'hard prefix prompts' as might be suggested by a 'comprehensive systematic review'. Nonetheless, the paper still contributes to the broader area of prompt engineering by exploring efficient alternatives to finetuning in multimodal pre-training, thus the rating is above average but not maximum." -attempt: parameter-efficient multi-task tuning via attentional mixtures of soft prompts,gpt-4-1106-preview,9,"The abstract presents a novel approach to multi-task learning in language models that leverages soft prompts—small prefix embedding vectors—for efficient parameter tuning. Given that the study explicitly addresses prompt engineering through soft prompts and their application in multi-task learning and knowledge transfer, it is highly relevant to the field of prompt engineering. The approach's efficiency and effectiveness in comparison to other tuning methods underscore its significance within the realm of prompt engineering studies. The score is not a perfect 10 because the focus is specifically on 'soft' prompts rather than 'hard' prompts as mentioned in your inquiry, suggesting a slightly wider scope than just hard prefix prompts."
-effective structured prompting by meta-learning and representative verbalizer,gpt-4-1106-preview,8,"The provided abstract details the use of prompts in natural language processing with a focus on prompt tuning and the introduction of a new method called MetaPrompter. It relates directly to prompt engineering as it discusses the initialization of prompts, the use of meta-learning for task-specific prompts, and the creation of a more efficient system for prompt application in pre-trained MLMs. The relevance score is not a full 10 because the abstract does not specifically mention 'hard prefix prompts' which is the particular focus of the solicited comprehensive review. However, it discusses the broader field of prompt engineering and provides insights into the recent developments in prompt tuning techniques, which are pertinent to the study of hard prefix prompts." -prompting classes: exploring the power of prompt class learning in weakly supervised semantic segmentation,gpt-4-1106-preview,8,"The provided abstract details a study that explores prompt tuning in the context of weakly supervised semantic segmentation (WSSS), which is a specific application of prompt engineering. The focus on how the modification of text prompts can impact the Class Activation Map (CAM) and the introduction of a novel PrOmpt cLass lEarning (POLE) strategy demonstrate a direct relevance to prompt engineering as it pertains to adapting language-vision models to downstream tasks. While the study is specific to WSSS and does not cover the broader topic of 'hard prefix prompts' comprehensively, the principles and findings can contribute valuable insights into the broader field of prompt engineering, hence the high relevance rating." -rewoo: decoupling reasoning from observations for efficient augmented language models,gpt-4-1106-preview,7,"The study introduces ReWOO (Reasoning WithOut Observation) which aims to make Augmented Language Models more efficient by decoupling the reasoning process from knowledge retrieval. This approach could be highly relevant to prompt engineering, especially in complex systems that require prompt optimization to reduce computational costs and improve efficiency. Since the methodology addresses issues related to prompt redundancy and token optimization, it would contribute to the design of better-engineered prompts that effectively interact with external tools without unnecessary computational overhead. However, the study does not directly focus on 'hard prefix prompts' or the systematic review of various prompt types, therefore the relevance is notable but not absolute." -efficient domain adaptation of language models via adaptive tokenization,gpt-4-1106-preview,7,"The study discussed in the title 'efficient domain adaptation of language models via adaptive tokenization' is relevant to prompt engineering study to a significant extent. While it does not directly address 'hard prefix prompts', it focuses on improving the adaptation of language models to new domains, which is a related aspect of prompt engineering. The process of optimizing tokenizer behavior for domain-specific understanding can enhance prompt responses by tailoring model input to better represent contextual nuances. This indirect relation to prompt construction and optimization reflects an underlying relevance to prompt engineering, as tokenization is a foundational component that influences the quality of prompts and their interpretation by language models. 
Nevertheless, the study does not directly tackle prompt engineering methodologies or the systematic review of 'hard prefix prompts', thus the relevance is not maximal."
-parameter-efficient low-resource dialogue state tracking by prompt tuning,gpt-4-1106-preview,9,"The abstract discusses the use of soft prompt token embeddings, which is a technique within the paradigm of prompt engineering. Although it does not discuss 'hard prefix prompts' specifically, it relates closely to the topic as prompt tuning is a key area within prompt engineering studies. The research aims to enhance dialogue state tracking by using prompts to tune language models with fewer parameters, which is a direct application of prompt engineering principles. Therefore, the rating is high because it is very relevant to the broader field of prompt engineering, but not a perfect score as it does not directly pertain to 'hard prefix prompts'."
-prompt-tuning in asr systems for efficient domain-adaptation,gpt-4-1106-preview,8,"The paper is highly relevant to the field of prompt engineering as it addresses the application of prompt-tuning, specifically within the context of domain adaptation for Automatic Speech Recognition (ASR) systems. The concept of training a small number of domain-specific token embeddings to adapt a transformer-based language model is a practical example of prompt engineering. By achieving significant performance improvements with a minimal increase in parameters, the study contributes to the field by demonstrating the effectiveness of prompt-based techniques for improving model performance in specialized domains. The lower than perfect score is due to the focus on ASR systems specifically, which is a subset of prompt engineering applications, rather than the entire breadth of prompt engineering."
-panda: prompt transfer meets knowledge distillation for efficient model adaptation,gpt-4-1106-preview,9,"The provided abstract and TLDR discuss research on prompt-tuning and prompt transfer (PoT) as methods for efficient model adaptation in the context of pretrained language models (PLMs), addressing the challenges with smaller PLMs and the innovation of a new approach named PANDA. Since prompt engineering studies how to design and use prompts to communicate effectively with language models, the mentioned techniques of prompt transfer and the novel PANDA approach are highly relevant to the field. It focuses on the optimization and enhancement of prompts, which is a core aspect of prompt engineering. The only reason the rating is not a 10 is because the study is narrower in scope, focusing on efficiency and specific techniques rather than a broader methodological investigation into prompt design or the theory behind prompt engineering."
-toward efficient language model pretraining and downstream adaptation via self-evolution: a case study on superglue,gpt-4-1106-preview,7,"The relevance of this study to prompt engineering is moderate to high as it discusses the 'prompt transfer technique' which is a form of prompt engineering. This technique involves transferring knowledge from one task to another, which is central to the idea of adapting language models to various downstream tasks using prompts. The study's focus on leveraging this technique to improve low-resource tasks indicates that it involves modifying or engineering prompts to enhance performance, which is pertinent to the study of prompt engineering. However, the report does not seem to specifically address 'hard prefix prompts,' which was the explicit focus mentioned in your query. Therefore, the study is relevant due to its inclusion of prompt-based techniques, but not as high as it would be if it were centered on hard prefix prompts specifically."
-degree: a data-efficient generation-based event extraction model,gpt-4-1106-preview,8,"The study appears highly relevant to prompt engineering as it involves the design of manual prompts to guide a data-efficient event extraction model, termed DEGREE. The model's dependency on these prompts for semantic guidance indicates that a significant portion of the research likely involves understanding and improving how prompts are constructed (prompt engineering) to better capture event arguments. Although the primary focus is event extraction, the reliance on manually designed prompts for model training and the discussion of prompt-encoded information suggest a substantial relevance to the field of prompt engineering."
-fedprompt: communication-efficient and privacy-preserving prompt tuning in federated learning,gpt-4-1106-preview,7,"The paper discusses prompt tuning within the context of federated learning, which directly relates to the broader field of prompt engineering. While it does not explicitly mention 'hard prefix prompts,' the study of prompt tuning techniques and their efficiency and privacy implications within federated learning frameworks adds to the understanding of how prompts can be optimized. Given that prompt engineering encompasses the exploration and application of prompts in various scenarios, the relevance is high. However, it is not rated a full 10 because the specific focus on communication efficiency and privacy in federated learning does not directly address the systematic review aspect of hard prefix prompts, which seems to be a more targeted area within the field of prompt engineering."
-prompt tuning for parameter-efficient medical image segmentation,gpt-4-1106-preview,8,"The abstract presents a study on the application of prompt tuning, a concept closely related to prompt engineering, in the context of medical image segmentation. Although the study focuses on a specific application (parameter-efficient adaptations for semantic segmentation in medical imaging), it explores the use of prompts (learnable prompt tokens) to adapt a neural network model to new tasks without full model fine-tuning. Since prompt engineering involves techniques for efficiently integrating prompts in order to steer model behavior, albeit typically in the context of language models, this work's investigation into prompts in the UNet architecture for medical imaging is relevant to the broader study of prompt engineering principles and methods. The rating is not a full 10 because the study is highly specialized and may not directly address 'hard prefix prompts' or the specificities of prompt engineering in natural language processing, which often is the primary focus of prompt engineering literature."
-rethinking visual prompt learning as masked visual token modeling,gpt-4-1106-preview,7,"The discussed paper is relevant to the study of prompt engineering, despite its focus on the vision domain rather than natural language processing (NLP). The paper introduces a method for visual prompt learning, which parallels the concept of prompt engineering in NLP by adapting pre-trained models to downstream tasks. The proposal of Visual Prompt learning as Masked visual Token Modeling (VPTM) to unify the form of pre-training and downstream tasks is conceptually similar to hard prompt methods in NLP that aim to bridge the gap between the two stages. Although the specific application to visual tasks might not directly correspond to textual 'hard prefix prompts,' the underlying principles of prompting and task reformulation involved in VPTM are relevant to the broader study of prompt engineering. The emphasis on consistency, robustness, and unified deployment also echoes concerns in prompt engineering research."
-parameter-efficient tuning helps language model alignment,gpt-4-1106-preview,7,"The given abstract presents a method for aligning language models with human preferences by using a technique called 'alignMEnt with parameter-Efficient Tuning (MEET)'. This involves optimizing control tokens using parameter-efficient tuning strategies such as prompt tuning and low-rank adaptation, which is highly relevant to prompt engineering. The reference to 'control tokens' and 'hand-crafted prompts' directly relates to the design and engineering of prompts for tuning model behavior. The focus on parameter-efficiency is also pertinent to prompt engineering because it relates to optimizing the input given to models without overhauling the entire model architecture. However, the abstract does not specifically address 'hard prefix prompts' which would be the focus of a comprehensive systematic review on that topic. For this reason, the relevance is not rated a full 10, as it is more broadly about language model alignment with control tokens rather than narrowly focused on hard prefix prompts in prompt engineering."
-multimodal prompting with missing modalities for visual recognition supplementary materials,gpt-4-1106-preview,8,"While the study is not specifically focused on 'hard prefix prompts', it does address the broader topic of prompt engineering in the context of multimodal learning and attention mechanisms. The research on the impact of prompt length and the layer at which the prompt is inserted is relevant to the understanding of how prompts can be optimized for improved performance in AI models. Therefore, the paper's relevance to prompt engineering is high, warranting a rating of 8. However, the exact match with the 'hard prefix prompts' focus may be lacking, hence not a full 10."
-prompting as multimodal fusing,gpt-4-1106-preview,7,"The abstract describes research on using visual prompts to improve the capability of a language model to perform multi-modal tasks, which is related to the field of prompt engineering. The concept of 'prompting' is central to the study. However, the focus on multimodal tasks and disentangling objectives for the vision encoder introduces specificity that is somewhat tangential to hard prefix prompts in text-based prompt engineering. While the principles of the study could potentially be applied or extended to text-based prompt engineering, the immediate relevance is somewhat indirect, hence the rating of 7."
-ddcot: duty-distinct chain-of-thought prompting for multimodal reasoning in language models,gpt-4-1106-preview,8,"While the abstract describes a study focused on multimodal reasoning and Chain of Thought (CoT) with language models, its relevance to prompt engineering lies in the novel DDCoT prompting approach which is presented. The notion of 'negative-space prompting' and tailoring prompts to encourage 'critical thinking' and proper distribution of tasks ('letting everyone do their jobs') within multimodal CoT reasoning are directly related to the design and engineering of effective prompts that enhance AI performance. Consequently, the abstract is highly relevant to the study of prompt engineering, particularly in the context of improving AI's multimodal reasoning capabilities. However, the rating is not a full 10 because it does not focus exclusively on 'hard prefix prompts' but rather on a broader set of techniques within multimodal CoT prompting, leaving some room for more specific relevance to the systematic review aspect of the provided prompt."
-prompting chatgpt in mner: enhanced multimodal named entity recognition with auxiliary refined knowledge,gpt-4-1106-preview,7,"The study presents a two-stage framework (PGIM) designed to improve Multimodal Named Entity Recognition (MNER) by using ChatGPT as an implicit knowledge base for generating auxiliary knowledge, which relates to prompt engineering as it involves creating and using prompts to guide ChatGPT in generating useful information for a specific task. However, the paper seems to focus more on improving MNER performance and leveraging implicit knowledge bases rather than on the underlying mechanisms of prompt engineering, such as prompt design or optimization techniques specifically. Therefore, the relevance is significant but not exclusively centered on prompt engineering."
-initial images: using image prompts to improve subject representation in multimodal ai generated art,gpt-4-1106-preview,7,"The paper addresses the utilization of image prompts to enhance subject representation in AI-generated art, which falls within the realm of prompt engineering as it involves guiding generative models to achieve desired outputs. Although the study focuses specifically on multimodal interactions (text and image prompts) rather than purely text-based 'hard prefix prompts,' the findings and design guidelines derived from the research could be informative for prompt engineering in a broader context. The aspects of conditioning models and evaluating their performance based on input prompts are directly relevant to the techniques and methodologies of prompt engineering, hence the relatively high relevance rating."
-adaptive action prompting: a complementary aid to support task-oriented interaction in explorative user interfaces,gpt-4-1106-preview,7,"The abstract refers to 'Adaptive action prompting,' which is closely related to prompt engineering in that it involves the system generating suggestions or prompts based on various models. This concept aligns with prompt engineering, as it requires understanding how to design and adapt prompts for optimal user interaction. However, the study seems to focus more on user interface interaction rather than the specific linguistic or conversational design of prompts. Therefore, while relevant, it may not fully delve into the 'hard prefix prompts' aspect of the prompt engineering study."
-prompt tuning for generative multimodal pretrained models,gpt-4-1106-preview,8,"This abstract is quite relevant to prompt engineering as it discusses 'prompt tuning', which is a specific method within the broader area of prompt engineering. Prompt tuning is a new paradigm where prompts are specifically crafted or optimized to improve the performance of pretrained models on various tasks. The focus on generative multimodal pretrained models suggests that the study addresses complex scenarios where prompt engineering could be crucial for model tuning. Despite the high relevance, the rating is not a complete 10 because the study seems to be more focused on implementing prompt tuning as a lightweight alternative to full model finetuning, rather than a comprehensive systematic review of hard prefix prompts as the original prompt might suggest."
-opal: multimodal image generation for news illustration,gpt-4-1106-preview,7,"The paper's focus on a system named Opal that navigates the challenges of finding the right visual language for text prompts does relate to prompt engineering, particularly in multimodal AI contexts. Although the paper does not directly address 'hard prefix prompts,' it does deal with the structured creation of text prompts to guide AI in generating images, which is an essential part of prompt engineering. The relevance is high because prompt engineering is critical for effective human-AI co-creation, especially in text-to-image generation tasks. However, the paper centers more on the application of such a system for news illustrations rather than the theoretical or methodological aspects of prompt engineering study."
-draw your art dream: diverse digital art synthesis with multimodal guided diffusion,gpt-4-1106-preview,7,"The paper presented addresses the usage of multimodal prompts which involve feeding a model with inputs from different modalities such as text and image, which aligns with the concept of 'prompt engineering' that typically involves crafting inputs to guide a model's output. Although not directly focused on 'hard prefix prompts', the concept of using complex, multimodal inputs for guiding a diffusion model in digital art synthesis demonstrates advanced prompt techniques and is indirectly related to the engineering of prompts to achieve desired outcomes in AI systems. Hence, there is a significant relevance to prompt engineering, but it is not a perfect match as the primary study is not about hard prefix prompts in the context of systematic reviews."
-promptmner: prompt-based entity-related visual clue extraction and integration for multimodal named entity recognition,gpt-4-1106-preview,8,"The presented work is highly relevant to prompt engineering as it discusses the utilization of entity-related prompts to improve multimodal named entity recognition. It specifically targets the extraction of visual clues with the help of prompts, which is a novel application of prompt engineering in the field of image processing and analysis. The 'prompt-based' method for extracting visual information addresses the central theme of prompt engineering. However, since the focus is also on modality-aware attention mechanisms and cross-modal fusion, the relevance is not solely on prompt engineering. Therefore, the rating is not a full 10."
-lvp-m3: language-aware visual prompt for multilingual multimodal machine translation,gpt-4-1106-preview,7,"The paper introduces a model LVP-M3 that utilizes visual prompts for the task of Multilingual Multimodal Machine Translation. While the study focuses primarily on translation and the integration of visual features for understanding context across multiple languages, the concept of 'visual prompts' does relate to the idea of 'prompt engineering' as it involves designing inputs to improve the machine's understanding and performance. Although these visual prompts are not 'hard prefix prompts' explicitly, the process of generating and utilizing prompts to enhance model performance overlaps with the broader theme of prompt engineering. Thus, the relevance is significant but not directly focused on the systematic study of hard prefix prompts, hence the rating of 7."
-few-shot multimodal sentiment analysis based on multimodal probabilistic fusion prompts,gpt-4-1106-preview,7,"The study addresses prompt engineering to some extent by introducing a novel method that includes the design of 'unified multimodal prompts' to decrease discrepancies between different modalities in the few-shot sentiment analysis. This involves engineering prompts that cater to more than just textual data, integrating multimodal data which is a unique and relevant approach to prompt engineering. Additionally, the concept of 'probabilistic fusion method to fuse output predictions from multiple diverse prompts' indicates an advanced level of prompt engineering where different prompts and their predictions are combined. However, the study focuses more specifically on multimodal sentiment analysis and few-shot learning, rather than solely on prompt engineering or 'hard prefix prompts' as stated in the initial topic. Therefore, it is not exclusively aligned with the concept of 'hard prefix prompts' in prompt engineering studies but still significantly contributes to the broader domain of prompt engineering."
-towards multimodal computational humanities. using clip to analyze late-nineteenth century magic lantern slides,gpt-4-1106-preview,7,"Although the study does not solely focus on prompt engineering, it does discuss the impact of different textual prompts on the performance of the CLIP model and identifies the lack of effective prompt engineering techniques as an issue affecting the model's stability. Therefore, the paper is relevant to the field of prompt engineering to a noticeable extent, especially regarding the application and challenges of prompt engineering in multimodal learning within the computational humanities."
-π-tuning: transferring multimodal foundation models with optimal multi-task interpolation,gpt-4-1106-preview,7,"The abstract mentions compatibility with diverse types of parameter-efficient experts, including prompts, which implies that the study covers aspects of prompt engineering. However, the focus seems to be broader, targeting transfer learning methods in general rather than specifically on 'hard prefix prompts'. Thus, while it has relevance due to its inclusion of prompts within the scope of parameter-efficient transfer learning, it's not solely dedicated to prompt engineering, leading to a rating of 7."
-beyond text-to-image: multimodal prompts to explore generative ai,gpt-4-1106-preview,7,"The abstract and TLDR of 'Beyond Text-to-Image: Multimodal Prompts to Explore Generative AI' are relevant to prompt engineering because they discuss the development of workflows that facilitate the translation of abstract design goals into prompts for AI systems. This aligns with the principles of prompt engineering, which is concerned with the creation and optimization of prompts to effectively guide AI behavior. However, the study appears to focus on the broader context of multimodal interactions and integrating creator contributions rather than hard prefix prompts specifically. Hence, while it is relevant due to its focus on improving the AI prompting process, it does not directly address systematic reviews on hard prefix prompts, thus receiving a rating of 7."
-mass-producing failures of multimodal systems with language models,gpt-4-1106-preview,7,"The abstract describes a novel system, MultiMon, which involves in part the use of language models to identify and generate natural language descriptions of patterns of failures in multimodal systems. This bears relevance to the prompt engineering field since the process includes feeding certain inputs (prompts) to a language model to elicit descriptive outputs regarding the failures. However, the main focus appears to be on the identification of systematic failures in multimodal systems rather than the study of hard prefix prompts themselves. Thus, while related to prompt engineering in the context of multimodal system failure analysis, it is not entirely centered on a comprehensive study of prompts or their structures."
-open visual knowledge extraction via relation-oriented multimodality model prompting,gpt-4-1106-preview,7,"The abstract describes a novel approach to visual knowledge extraction that indirectly involves a form of prompt engineering, as it relies on prompting a multimodality model to generate knowledge. Although the primary focus is not on the engineering of text prompts for language models, the concept of 'model prompting' is closely related to prompt engineering, particularly in the context of multimodal models that process both visual and textual data. The mention of employing prompts for knowledge generation aligns with current interests in optimising prompts to improve model performance. However, the direct relevance to 'hard prefix prompts' may be limited, hence a full relevance rating is not given."
-multimodal prompt learning in emotion recognition using context and audio information,gpt-4-1106-preview,7,"The study is relevant to prompt engineering due to its focus on improving language models' performance using prompt learning techniques. Although it primarily deals with multimodal sources (text and audio) rather than being strictly about hard prefix prompts, it addresses the aspect of how prompts are engineered to enhance a pre-trained model's ability to perform specific tasks, in this case, emotion recognition. The study proposes a method for prompt learning that considers the context and emotional information, which is a valuable insight into prompt engineering for specialized tasks. However, the relevance is not at the maximum because the study diverges from hard prefix prompts specifically to a broader application of prompts in multimodal learning."
-multimodal parameter-efficient few-shot class incremental learning,gpt-4-1106-preview,7,"The abstract mentions the use of 'learnable prompts for both the language and vision encoders' in the proposed Continual Parameter-Efficient CLIP (CPE-CLIP) model, which directly relates to prompt engineering. While the main focus is on Few-Shot Class Incremental Learning (FSCIL) and the use of CLIP for transfer learning across sessions, the mention of learnable prompts indicates that prompt engineering is a component in the study's approach to improve performance in learning tasks. However, since prompt engineering is not the central theme but rather a part of the methodology, the relevance rating is a 7."
-multitask instruction-based prompting for fallacy recognition,gpt-4-1106-preview,8,"The abstract describes a study on how instruction-based prompting in a multitask setup can improve the recognition of fallacies by computational models. This is highly relevant to prompt engineering as it explores the construction and optimization of prompts to enhance model performance. The use of a multitask setup indicates a sophisticated approach to prompt engineering which is likely to be of interest to those studying prompt design. However, the focus on fallacy recognition means the research is specialized and may not cover all areas of interest within the broader field of prompt engineering."
-when do you need chain-of-thought prompting for chatgpt?,gpt-4-1106-preview,8,"The abstract discusses the performance and challenges of Chain-of-Thought prompting for ChatGPT, which is directly related to the field of prompt engineering. It explores the limitations and potential of CoT instructions in improving LLM output, providing insights into instruction-based finetuning. The analysis of instruction memorization and potential dataset leakage is crucial for understanding how to engineer prompts effectively for different tasks. Despite not focusing specifically on 'hard prefix prompts,' the study provides valuable information for prompt engineering in a broader sense, which is why it does not receive a perfect score."
-coder reviewer reranking for code generation,gpt-4-1106-preview,8,"The abstract describes an advanced technique in prompt engineering where two models are used in tandem for code generation – a 'Coder' model to generate programs and a 'Reviewer' model to evaluate these programs. This process of generating and reranking outputs based on prompt-engineered models is clearly relevant to the study of prompt engineering. The methodology explores optimizing the interaction between these models to produce better results, which is a critical part of prompt engineering – refining inputs and evaluating outputs to improve performance. The reason why the rating is not a full 10 is because the abstract focuses on the application of prompt engineering to code generation, which may be a subset of the broader prompt engineering field. However, the principles and techniques exemplified are directly applicable to prompt engineering studies."
-generating instruction automatically for the reading strategy of self-questioning,gpt-4-1106-preview,7,"The relevance to prompt engineering is significant since the paper focuses on generating instructional content automatically, which aligns with the creation of prompts for educational purposes. Specifically, breaking down the instruction into describing, modeling, scaffolding, and prompting is similar to the process of designing prompts that are effective in prompting the strategy. The paper also touches upon automatic generation of prompts, which is a core task in prompt engineering. However, the primary objective of the paper is centered around self-questioning in reading comprehension rather than the broader scope of hard prefix prompts or prompt engineering in general, which justifies a rating of 7 instead of a perfect score."
-short-term versus long-term effects of cognitive and metacognitive prompts in writing-to-learn,gpt-4-1106-preview,7,"The study is moderately relevant to prompt engineering because it investigates the effects of cognitive and metacognitive prompts on learning and writing. This is related to understanding how prompts can influence cognitive processes and outcomes, which is a key part of prompt engineering. However, as the focus is on educational contexts and long-term effects rather than computational systems or machine learning, it is not directly focused on prompt engineering for language models or other AI systems, hence the rating isn't higher."
-dualprompt: complementary prompting for rehearsal-free continual learning,gpt-4-1106-preview,8,"The content of the abstract is highly relevant to prompt engineering study because it discusses a novel framework called DualPrompt, which involves learning a tiny set of parameters (prompts) that instruct a pre-trained model on handling new tasks sequentially without the need for rehearsing previous tasks. This approach to prompt engineering is significant as it addresses the challenge of catastrophic forgetting in continual learning models and does so without the need for storing old examples, hence respecting privacy and memory constraints. The abstract focuses on the application of prompt learning in the context of continual learning models, which is a subset of the broader prompt engineering field. The rating is not a full 10 because the study is specific to the continual learning application and may not cover all possible aspects or methodologies of prompt engineering, especially those outside the scope of continual learning."
-editeval: an instruction-based benchmark for text improvements,gpt-4-1106-preview,7,"The provided abstract discusses 'EditEval', which is an evaluation suite for text generation models, specifically focusing on their editing capabilities. While it does not directly address 'hard prefix prompts' or 'prompt engineering', its core concept of evaluating and optimizing text generation models is relevant to the field. The study examines InstructGPT and PEER models in the context of editing tasks and acknowledges the challenges in prompt optimization. This can inform prompt engineering studies by providing insights into how models respond to instructions and the issues with current metrics, therefore facilitating the creation of better prompts for model evaluations. However, the direct application to hard prefix prompts is tangential and not the central focus of the study, which affects the overall relevance rating."
-promptsource: an integrated development environment and repository for natural language prompts,gpt-4-1106-preview,9,"The paper describes 'PromptSource', a system designed specifically for creating, sharing, and using natural language prompts, which is central to the concept of prompt engineering. The discussion of a templating language, a user interface for prompt development, and community-driven guidelines directly concerns the practice of prompt engineering. Although the article does not specifically address 'hard prefix prompts' but rather prompts in general, its relevance to the broader field of prompt engineering is significant and should be highly informative for those studying various aspects of prompt design and usage in natural language processing (NLP). Therefore, it receives a high relevance rating of 9."
-adversarial soft prompt tuning for cross-domain sentiment analysis,gpt-4-1106-preview,7,"The study presents advancements in prompt tuning, specifically Adversarial Soft Prompt Tuning for cross-domain sentiment analysis, which is relevant to the field of prompt engineering, as it involves learning to use prompts effectively with language models. Although the study focuses on soft prompts rather than hard prefix prompts, the underlying principles of prompt design and its impact on model performance are highly pertinent to the broader topic of prompt engineering. The approach of using separate prompts for different domains connects to the customization and optimization of prompts for specific tasks. However, the relevance is not rated higher because the prompt mentioned here is 'soft', while the systematic review in question specifically targets 'hard prefix prompts'. Therefore, there is a slight mismatch, but the study still holds value for those exploring the varying applications and methodologies of prompt tuning in language models."
-prompt-based rule discovery and boosting for interactive weakly-supervised learning,gpt-4-1106-preview,8,"The paper discusses a method for iteratively discovering novel labeling rules via prompts in the context of weakly-supervised learning. While not directly focused on 'hard prefix prompts', it does revolve around the use of prompts for generating rules and improving models, which is a vital component of prompt engineering. The study is relevant because it deals with the automated generation and refinement of prompts, which is closely related to the analysis and application of prompt effectiveness and efficiency, key considerations in prompt engineering studies. The rating is not a full 10, as the paper's abstract does not specify a focus on 'hard prefix prompts' specifically, but rather on a broader application of rule discovery using prompts."
-hpt: hierarchy-aware prompt tuning for hierarchical text classification,gpt-4-1106-preview,8,"The given title and abstract provide information about a technique called Hierarchy-aware Prompt Tuning (HPT) for hierarchical text classification. Although this method is focused on a specific task - hierarchical text classification - rather than prompt engineering in general, the concept of 'prompt tuning' is highly relevant to the broader field of prompt engineering. HPT involves constructing dynamic virtual templates and label words as soft prompts, which are essentially a form of prompt engineering tailored to incorporate hierarchical information into the learning process of a PLM. Therefore, the study is quite pertinent to prompt engineering, particularly within the domain of improving model performance for complex classification tasks involving label hierarchies. It doesn't address a 'hard prefix prompt' specifically, which would be an exact match to the search query, but still has significant relevance due to its focus on prompt tuning methodologies."
-connprompt: connective-cloze prompt learning for implicit discourse relation recognition,gpt-4-1106-preview,8,"The paper presents an approach that leverages the prompt engineering paradigm for Implicit Discourse Relation Recognition (IDRR), specifically developing a novel Connective-cloze Prompt (ConnPrompt) which includes Prefix-cloze Prompt (PCP) to improve task performance. This is highly relevant to prompt engineering as it demonstrates an innovative application of prompt-based methods to a natural language processing (NLP) task. The rating is not a full 10 because the study focuses on a specific application of prompt engineering within the IDRR context, rather than on prompt engineering in a more general sense, which may limit its broader relevancy to the field at large."
-ptau: prompt tuning for attributing unanswerable questions,gpt-4-1106-preview,8,"The presented study 'ptau: prompt tuning for attributing unanswerable questions' is highly relevant to prompt engineering as it directly deals with the development of a system that leverages the concept of prompt tuning. The introduction of a cause-oriented template module for constructing continuous templates in a high-dimensional space and a semantics-aware label module through contrastive learning are indicative of advanced techniques in prompt engineering. Although the study's primary focus is question answering systems and their ability to identify unanswerable questions, the methods used for prompt tuning are applicable and insightful for the broader field of prompt engineering."
-prompt-learning for short text classification,gpt-4-1106-preview,9,"The provided abstract describes a study on prompt-learning, specifically for the task of short text classification which directly relates to the field of prompt engineering. The approach of using knowledgeable expansion and the incorporation of knowledge graphs into the prompt-learning process are advanced techniques in the area, suggesting that the paper provides detailed insights into the engineering of prompts for language models. The outstanding improvement in accuracy mentioned in the abstract and TLDR indicates a significant contribution to the field. The reason it is not a full 10 is because it doesn't specifically mention 'hard prefix prompts', but it does deal with prompt-learning methods in general, which makes it highly relevant to prompt engineering studies."
-continuous prompt tuning based textual entailment model for e-commerce entity typing,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it discusses a novel application of continuous prompt tuning, which is a subset of prompt engineering, in the context of e-commerce entity typing. The approach of reformulating entity typing into a textual entailment problem with the use of prompts indicates a significant contribution towards the field of prompt engineering. The automatic generation of hypotheses using prompt tuning is particularly pertinent, although the study's focus is more narrowly on textual entailment in the e-commerce domain rather than hard prefix prompts in general. Nonetheless, since prompt engineering techniques are pivotal in the study, it merits a relatively high score."
-taxoprompt: a prompt-based generation method with taxonomic context for self-supervised taxonomy expansion,gpt-4-1106-preview,8,"The paper presents 'TaxoPrompt,' a framework for taxonomy expansion leveraging prompt tuning, which is directly related to prompt engineering. Although the focus is more specifically on incorporating taxonomic context rather than hard prefix prompts in a broad sense, the methodological approach to enhancing prompt templates and its use in a hierarchical classification context mean that the paper offers relevant insights into the application and development of prompt-engineering techniques."
-bi-directional iterative prompt-tuning for event argument extraction,gpt-4-1106-preview,9,"The given abstract is highly relevant to prompt engineering study as it directly pertains to the development of a new prompt-tuning method for a specific NLP task, which is event argument extraction (EAE). The bi-directional iterative prompt-tuning approach uses cloze-style tasks and entity information, both key elements in the prompt engineering process. Moreover, the focus on improving interaction with pre-trained language models (PLMs) by considering the context of entities and the roles of arguments during prompt construction are advancements directly applicable to the field of prompt engineering. The only reason it did not receive a 10 is that it is specialized towards EAE rather than prompt engineering in general."
-knowledge base construction from pre-trained language models by prompt learning,gpt-4-1106-preview,7,"The abstract describes a study that falls within the domain of prompt engineering as it involves designing prompts to extract factual knowledge from pre-trained language models. The relevance to prompt engineering is clear as the authors design prompt templates and explore strategies for generating responses using these models. However, the mention of 'hard prefix prompts' is not explicitly referenced, suggesting this work may not be fully centered on that specific aspect of prompt engineering. Therefore, while the study is related to prompt engineering, its relevance to the specific concept of 'hard prefix prompts' cannot be determined from the abstract alone."
-schema-aware reference as prompt improves data-efficient knowledge graph construction,gpt-4-1106-preview,9,"The abstract discusses a new approach to improve data-efficient knowledge graph construction through the use of 'schema-aware Reference As Prompt (RAP)' which directly concerns the engineering of prompts to bridge the gap between natural language and structured knowledge. This is highly relevant to prompt engineering study as it proposes a method that advances the way prompts can be utilized in a practical application, namely knowledge graph construction. The only reason it is not a perfect 10 is that it does not cover the broader scope of prompt engineering but rather focuses on a specific application within the field."
-prompt tuning for multi-label text classification: how to link exercises to knowledge concepts?,gpt-4-1106-preview,9,"The abstract describes the development and application of a prompt tuning method specifically for multi-label text classification, which is highly relevant to the field of prompt engineering. Prompt tuning is a technique within natural language processing that is used to adapt language models to specific tasks without the need for extensive training data. Since the study explores the use of prompt tuning to connect exercises to knowledge concepts, it contributes directly to advancing the methodologies within the area of prompt engineering. The high relevance score reflects the direct applicability of the findings to the study of prompt engineering, albeit the study doesn't focus on 'hard prefix prompts' specifically but on prompt tuning for a related task."
-a prompt based approach for euphemism detection,gpt-4-1106-preview,8,"The abstract describes a study that involves developing prompts and verbalizers for euphemism detection, which is directly connected to prompt engineering. Prompt tuning is a subset of prompt engineering, and the use of templates indicates that the study engages in engineering prompts to elicit specific responses from a language model. However, the study is focused more on the specific application of euphemism detection rather than the broader topic of 'hard prefix prompts', so it may not cover all aspects of prompt engineering study, thus not receiving a perfect score."
-scene-aware prompt for multi-modal dialogue understanding and generation,gpt-4-1106-preview,7,"The abstract discusses the use of a 'scene-aware prompt' in the context of multi-modal dialogue understanding and generation, which falls under the broader domain of prompt engineering as it pertains to enhancing AI's interaction with multi-modal data. Although it does not specifically address 'hard prefix prompts'—a more nuanced aspect of prompt design often associated with transformer-based language models—it does relate to the application and structuring of prompts for improved AI performance in a given task. Therefore, the relevance is moderate because it demonstrates an application of prompt engineering in a specific NLP context; however, it is not directly focused on the study of prompt engineering as a standalone subject."
-label prompt for multi-label text classification,gpt-4-1106-preview,8,"The abstract describes a model for multi-label text classification that uses a form of prompt learning for pre-trained language models. The relevance to prompt engineering is high because it involves designing templates (prompts) that integrate labels into the input of a pre-trained language model and optimizes it using Masked Language Models (MLM), which is a technique related to prompt engineering. The mention of designing a set of templates directly relates to the construction of prompts, which is a core aspect of prompt engineering. The rating isn't a full 10 because the information provided does not indicate if the study includes a 'comprehensive systematic review' or a focus on 'hard prefix prompts' specifically, as mentioned in the study topic."
-improving sentence classification in abstracts of randomized controlled trial using prompt learning,gpt-4-1106-preview,8,"The study focuses on the application of Prompt Learning (PL) for sentence classification within the context of Randomized Controlled Trial (RCT) abstracts, which is highly relevant to the field of prompt engineering as it entails creating and utilizing prompt templates to guide models in performing specific tasks effectively. Although 'hard prefix prompts' are not specifically mentioned, the deployment of manual templates in PL is closely related to designing effective prompts for language models. The relevance of the study to prompt engineering is not at the maximal score because it does not directly address 'hard prefix prompts' but rather addresses prompt learning in a broad sense."
-mtpl-g2t: graph-to-text generation task based on mixed template prompt learning,gpt-4-1106-preview,8,"The abstract discusses an approach to text generation that involves prompt learning, which is a method to guide pre-trained models to perform specific tasks without extensive fine-tuning. It also compares the effectiveness of different prompt templates, including mixed prompt templates. This is relevant to the study of 'hard prefix prompts,' a type of prompt engineering. However, the abstract does not specifically mention 'hard prefix prompts' but discusses prompt learning in a broader context. Therefore, it is highly relevant but not entirely focused on 'hard prefix prompts,' which results in a rating of 8."
-masked prompt learning for formal analogies beyond words,gpt-4-1106-preview,9,"The paper's focus on the development of a generative model for analogies using prompt-based fine-tuning within the context of a pre-trained language model (PLM) is highly relevant to the study of prompt engineering. The exploration of masked prompt learning and the systematic approach to handling analogies by reformulating them using prompts deeply contribute to the field of prompt engineering. It addresses how different prompting techniques can enhance language models' ability to generalize beyond simple word-level tasks. The relevance rating is not a full 10 only because the study seems to be specifically tailored to the analogy task, whereas prompt engineering broadly covers a wider range of applications."
-promptrgd: prompt learning with relation-aware gradient denoising for low-resource relation extraction,gpt-4-1106-preview,8,"The abstract discusses a framework for semi-supervised prompt learning for relation extraction. Since prompt engineering is about designing and implementing prompts to effectively interact with a model or a system, the paper's focus on 'prompt template construction' and 'relation-aware gradient denoising' directly relates to the design and optimization of such prompts, especially in low-resource settings. The relevance rating is not a perfect 10 because although it deals with prompt engineering, the paper centers more on a specific aspect of relation extraction rather than a comprehensive study of hard prefix prompts in a broader context."
-prompt learning for multi-modal covid-19 diagnosis,gpt-4-1106-preview,7,"The paper presents a novel approach that utilizes prompt-based methods for COVID-19 diagnosis, which is relevant to the study of prompt engineering. Prompt learning, a key aspect of prompt engineering, is central to the paper's methodology where a cloze prompt template and label word set are constructed to redefine the diagnosis task. However, the specificity to the 'hard prefix prompts' is not mentioned, which may or may not be within the scope of the presented methods. The relevance is rated moderately high due to the application of prompt learning concepts, but not the maximum score given the potential difference in prompt types being studied."
-uper: boosting multi-document summarization with an unsupervised prompt-based extractor,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering, as the core of this research involves creating 'prompting templates' to harness the knowledge within a Pre-trained Language Model (PLM) for determining the semantic relevance of documents in a multi-document summarization task. This innovative approach leverages prompt engineering to improve document salience assessment and abstract generation. The rating is not a perfect 10 only because the application is specific to multi-document summarization and the details on the 'hard prefix prompts' specifically are not provided, which may not cover all aspects of prompt engineering studied in a comprehensive systematic review on the topic."
-graphprompt: biomedical entity normalization using graph-based prompt templates,gpt-4-1106-preview,8,"The paper introduces 'GraphPrompt', which is a prompt-based learning approach that operates within the domain of prompt engineering. It specifically creates prompt templates according to graph structures, which is directly related to engineering prompts to improve biomedical entity normalization. While the study is not about 'hard prefix prompts' in a general sense, the design and utilization of prompts is core to the paper, hence the high relevance score. The focus on a specific application (biomedical entity normalization) and the lack of a direct mention of 'hard prefix prompt' impacts the relevance rating mildly, preventing a full score."
-a cueing strategy for prompt tuning in relation extraction,gpt-4-1106-preview,7,"The abstract describes a modified approach for utilizing prompt tuning in the context of relation extraction by incorporating task-specific cues. This relates to the concept of prompt engineering because it involves the design and use of prompts to guide pre-trained language models to understand and perform specific tasks more effectively. However, the relevance is not a perfect 10 because the abstract specifically addresses relation extraction and introduces a cueing strategy, rather than discussing 'hard prefix prompts' or providing a systematic review on prompts in general. 'Prompt engineering' covers a broader range of applications and methodologies, including but not limited to the cueing strategy mentioned."
-"promptaid: prompt exploration, perturbation, testing and iteration using visual analytics for large language models",gpt-4-1106-preview,9,"The provided title and abstract describe a visual analytics system, PromptAid, aimed at assisting users in the creation, refinement, and testing of prompts for Large Language Models. The system focuses on interactive prompt exploration, perturbation, and iteration, which are central to the process of prompt engineering. The relevance to prompt engineering is high, as the paper's aim is to directly address challenges involved in crafting and refining prompts. Despite not specifically mentioning 'hard prefix prompts', the broad nature of the study on modifying prompts to improve task performance and its attention to the usability by non-experts make it highly relevant. Nevertheless, the rating is not a full 10 as the information provided does not indicate if hard prefix prompts were specifically considered or the primary focus of the study."
-discourse-aware prompt for argument impact classification,gpt-4-1106-preview,8,"The abstract indicates that the paper is about developing a learnable continuous prompt that integrates discourse markers to improve the performance of pre-trained language models (PLMs) on the task of argument impact classification. Prompt engineering is vital for adapting PLMs to specific tasks, and the paper's focus on leveraging discourse information through prompts is relevant to the study of prompt engineering. The improvement in performance metrics (e.g., a 2.5% increase in the F1 score) suggests effective prompt engineering practices. However, the study does not focus on 'hard prefix prompts' specifically; it seems to emphasize the discourse-aware nature of prompts, which might make it slightly less relevant to a systematic review particularly centered on 'hard prefix prompts.'"
-few-shot table-to-text generation with prompt-based adapter,gpt-4-1106-preview,9,"The paper presents a novel method for enhancing table-to-text generation in few-shot learning conditions by using a Prompt-based Adapter (PA) to incorporate domain-specific knowledge and bridge the structure gap between tables and text. This is highly relevant to the field of prompt engineering as it involves designing and using prompt templates to augment a language model's capabilities, which is a core concept within prompt engineering. The adaptation of prompts to improve the efficiency of models in specific tasks underlines the important role that prompts play in tailoring pre-trained language models to specialized applications. Therefore, the paper is of high relevance to studies on prompt engineering, particularly in the context of knowledge augmentation and few-shot learning scenarios."
-graphprompt: graph-based prompt templates for biomedical synonym prediction,gpt-4-1106-preview,9,"The abstract describes a novel use of prompt-based learning specific to the task of biomedical synonym prediction. The study's focus on creating prompt templates derived from graph features directly aligns with prompt engineering by designing, tailoring, and applying prompts to specialized tasks. This approach is beneficial for expanding the understanding and applications of prompt engineering within biomedical datasets and is very relevant to studies on prompt engineering methods. The only reason it does not receive a full score is that it may not cover the broader aspects of prompt engineering across different domains but is highly relevant within its specified context."
-prompt middleware: mapping prompts for large language models to ui affordances,gpt-4-1106-preview,9,"The described study is highly relevant to prompt engineering as it focuses on a framework (Prompt Middleware) to systematically generate prompts for large language models based on user interface affordances. The research specifically addresses static prompts, template-based prompts, and free-form prompts, all of which are direct aspects of prompt engineering. The application in a practical UI setting (FeedbackBuffet) and the discussion on development integration further emphasize its significance in the field. The reason for not giving a full score of 10 is because the paper might not cover the 'hard prefix prompts' as explicitly as the term implies, but rather discusses a broader scope of integrating prompts into UIs."
-prompt learning for developing software exploits,gpt-4-1106-preview,7,"The abstract describes the use of a prompt learning approach, PT4Exploits, with pre-trained language models for generating software exploits and appears to employ prompt engineering by adding trainable prompt tokens. This is relevant to prompt engineering as it is an application of prompts in adjusting language model behavior. However, it is more focused on a specific application related to software vulnerability exploitation, rather than concentrating purely on the methodology of hard prefix prompts for a broad range of applications. Therefore, the relevance is notable but not entirely comprehensive regarding general prompt engineering studies."
-clickprompt: ctr models are strong prompt generators for adapting language models to ctr prediction,gpt-4-1106-preview,8,"The paper introduces a novel method for integrating CTR prediction models with language models through the use of prompt engineering, in this case, the generation of 'soft prompts' based on a CTR model. This is highly relevant to the field of prompt engineering as it directly involves the creation and utilization of prompts to enhance the performance of language models in a specific task. The score is not a perfect 10 because the focus is specifically on CTR prediction, which is a narrower application within the broader scope of prompt engineering studies."
-b . alternate design choices prompt initialization : table 8,gpt-4-1106-preview,8,"The given abstract discusses prompt initialization strategies and their impact on the performance of a model called MaPLe, which is directly relevant to the study of prompt engineering. It examines different initialization methods such as using a specific template or random initialization, and their effectiveness in different layers of the model. The depth of detail regarding the effect of learnable prompts and hierarchical learning within the layers indicates a high level of relevance, although the prompt engineering study question may be broader and involve other aspects not covered in the abstract. However, since the abstract provides empirical findings related to prompt design choices and their impact on a model's performance, it is substantially relevant to the field of prompt engineering."
-this prompt is measuring : evaluating bias evaluation in language models,gpt-4-1106-preview,7,"The abstract provided discusses evaluating bias in language models by using prompts and templates, which is relevant to prompt engineering as it involves the design and analysis of prompts to diagnose social biases in NLP systems. The study contributes to the broader field of prompt engineering by highlighting the importance of carefully crafting prompts to achieve specific measurement goals in bias evaluation. The relevance is not maximum because the study is specifically focusing on the bias aspect rather than a comprehensive review of various uses and types of hard prefix prompts, but it is still significantly related to the overall endeavor of prompt engineering."
-prompt tuning with contradictory intentions for sarcasm recognition,gpt-4-1106-preview,9,"The abstract discusses an advanced application of prompt tuning specifically designed for sarcasm recognition in NLP. It directly tackles the challenges of engineering prompts for a specialized task, which is highly relevant to studies on prompt engineering. The work's focus on incorporating domain-specific knowledge (contradictory intentions) into the prompts makes it particularly pertinent to the nuances involved in prompt engineering for complex language tasks. It is rated 9 instead of 10 because the abstract does not mention 'hard prefix prompts', the specific type of prompt the original query seemed to be interested in, but it still stays within the broader field of prompt engineering."
-grammar correction for multiple errors in chinese based on prompt templates,gpt-4-1106-preview,9,"The given abstract describes a novel grammar error correction method that leverages prompt templates, making it highly relevant to prompt engineering studies. A key aspect of prompt engineering is designing effective prompts that interact optimally with language models, as seen with the use of BERT here. The proposed dynamic updating of templates is a specific application of prompt engineering to improve NLP tasks, showcasing how tweaks in prompt strategy can significantly enhance model performance. This research does not study hard prefix prompts but still falls under the broader domain of prompt engineering, hence the rating of 9 rather than a perfect 10."
-ppm: prompt-free prompt-tuning for multi-task learning,gpt-4-1106-preview,8,"The abstract describes a novel approach in prompt-tuning for multi-task learning by using task-specific adapters in place of hand-crafted prompts, which is highly relevant to prompt engineering. It focuses on optimizing the training process and enhancing the model's performance on various downstream tasks without relying on manually designed prompts. While the abstract does not specifically mention 'hard prefix prompts,' it contributes to the broader field of prompt engineering by exploring alternative techniques to improve language models' efficiency in multi-task learning. This is valuable for prompt engineering studies, but not a direct examination of 'hard prefix prompts,' hence the rating is not the maximum."
-teprompt: task enlightenment prompt learning for implicit discourse relation recognition,gpt-4-1106-preview,8,"The presented abstract discusses the development and use of a model called TEPrompt for the task of Implicit Discourse Relation Recognition (IDRR), which explicitly involves the concept of prompt learning. This fits within the realm of prompt engineering as it focuses on the design of prompts for specific tasks (DRR, SSC, ACP) which improve the performance of the main task (IDRR). The systematic review of 'hard prefix prompts' could potentially cover such applications of prompt learning in natural language processing tasks. However, the abstract does not directly discuss 'hard prefix prompts' specifically but rather a variant of prompt learning which makes it somewhat less directly relevant for a study exclusively focused on that area. Therefore, the rating is high but not maximum."
-cover: a heuristic greedy adversarial attack on prompt-based learning in language models,gpt-4-1106-preview,8,"The abstract is highly relevant to prompt engineering as it discusses the vulnerabilities in prompt-based learning, a key component of prompt engineering. It focuses on how adversarial attacks can affect manual templates used within pre-trained language models, which is crucial for understanding the robustness and security of prompts. However, the study's primary concern is adversarial attacks rather than the design or optimization of prompts, hence the rating is not a perfect 10."
-self-adaptive prompt-tuning for event extraction in ancient chinese literature,gpt-4-1106-preview,8,"The described study demonstrates a direct application of prompt engineering by developing a self-adaptive prompt-tuning mechanism to enhance the performance of a generative event extraction framework. The focus on crafting specialized prompts that account for the unique complexities of ancient Chinese literature and war events shows a sophisticated use of prompt engineering to improve the interpretation and generation capabilities of a pre-trained language model. While this isn't a systematic review of hard prefix prompts specifically, it's a practical application of tuned prompts within a complex domain. Hence, the rating reflects high relevance to prompt engineering but not a perfect match since the study is not a comprehensive review."
-sptnet: span-based prompt tuning for video grounding,gpt-4-1106-preview,7,"The study introduces a methodology (SPTNet) that uses prompt tuning, a technique within the field of prompt engineering, to enhance the performance of a PLM in a video grounding task. This is relevant to prompt engineering as it involves the strategic modification of a prompt (via templates and mask tokens) to leverage a pre-trained model's knowledge more effectively. However, the focus on 'hard prefix prompts' is not explicitly mentioned, so while the paper is related to prompt engineering, it might not directly address the comprehensive systematic review on hard prefix prompts specifically."
-low-resource multi-granularity academic function recognition based on multiple prompt knowledge,gpt-4-1106-preview,9,"The abstract demonstrates a direct application of prompt engineering by introducing Mix Prompt Tuning (MPT), which uses both manual and automatically learned prompt templates to improve the effectiveness of pre-trained language models in classifying academic functions with limited annotated data. This is highly relevant to the study of prompt engineering as it explores a practical use-case and contributes to the body of knowledge on how prompt strategies can be utilized to enhance model performance in low-resource settings."
-promptcl: improving event representation via prompt template and contrastive learning,gpt-4-1106-preview,7,"The title 'promptcl: improving event representation via prompt template and contrastive learning' suggests that the study involves prompt engineering by focusing on the improvement of event representation using prompt templates. This implies that the study likely explores the design or optimization of prompts, which are critical in influencing the performance of language models. The use of contrastive learning could indicate an innovative approach to refining these prompts, potentially making the study relevant to the field of prompt engineering. However, without the abstract or a TLDR, it's difficult to ascertain the full scope and direct relevance to hard prefix prompts specifically, hence the rating does not reach the maximum score."
-ground-truth labels matter: a deeper look into input-label demonstrations,gpt-4-1106-preview,7,"The study focuses on the impact of accurate ground-truth labels within the context of in-context learning (ICL), which is a significant component of prompt engineering for AI models. Accurate inputs and labels are critical for training models effectively, and the introduction of metrics like Label-Correctness Sensitivity and Ground-truth Label Effect Ratio can shed light on prompt design strategies. However, since the study seems to focus more on the labels rather than the prompts (the 'hard prefix prompts' mentioned in the initial query), it is not fully centered on prompt engineering. Thus, it receives a medium-high relevance rating, indicating that it is quite relevant but not entirely focused on the specified aspect of prompt engineering."
-not all languages are created equal in llms: improving multilingual capability by cross-lingual-thought prompting,gpt-4-1106-preview,9,"The study introduces a method of prompt engineering named cross-lingual-thought prompting (XLT) which directly pertains to improving the efficacy of prompt-based tasks in Large Language Models (LLMs) across multiple languages. Given that the study focuses on a specialized prompting technique to enhance language model capabilities, it is highly relevant to the field of prompt engineering. The reason for not giving a full score is that the abstract does not describe 'hard prefix prompts' specifically, but rather a prompt engineering strategy for multilingual models."
-unihd at tsar-2022 shared task: is compute all we need for lexical simplification?,gpt-4-1106-preview,8,"The title and abstract of the paper are highly relevant to prompt engineering as they detail the use of prompted GPT-3 responses for lexical simplification, which is an application of prompt engineering. The study investigates the efficacy of using prompts to guide a state-of-the-art language model in performing a specific task, thereby contributing to the field of prompt engineering by exploring the potential and limitations of different prompting techniques. The fact that the research describes differing levels of context within the prompts and examines their impact in a competitive setting (TSAR-2022 shared task) is particularly pertinent to the study of how prompts can be optimized for performance. The rating isn't a full 10 because the study focuses on lexical simplification rather than a broad examination of all possible applications of prompt engineering."
-using natural sentence prompts for understanding biases in language models,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it explicitly addresses the design and use of prompts to evaluate biases in language models. It discusses the impact of different types of prompts (template-based vs natural sentence prompts) on bias assessments in language models, which is a crucial aspect of prompt engineering. The paper's focus on real-world natural sentences for generating prompts also aligns with the current direction in prompt engineering of using more contextually rich and realistic data. Although it doesn't specifically mention 'hard prefix prompts,' the general theme of prompt design and its implications on model behavior makes it relevant to the field of prompt engineering studies. The rating is not a full 10 as the abstract specifies a focus on gender-occupation biases, which is slightly more specific than general prompt engineering."
-domain knowledge matters: improving prompts with fix templates for repairing python type errors,gpt-4-1106-preview,8,"The given abstract directly relates to prompt engineering as it discusses 'TypeFix,' which is a novel approach for improving prompts with domain knowledge fix templates specifically for Python type error repair tasks. This study is highly relevant to prompt engineering because it explores how to enhance prompt efficacy through automatic methods. It delves into using domain-specific knowledge to refine and adapt prompts to increase their effectiveness in a programming context; thus, it scores an 8 instead of 10 because it is very specific to the domain of type error repair rather than general prompt engineering."
-citeprompt: using prompts to identify citation intent in scientific papers,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering as it involves the development of a tool, Citeprompt, that utilizes prompt learning for citation intent classification. Prompt learning, as a part of prompt engineering, concerns the design of inputs that effectively leverage pretrained language models to perform specific tasks. The research focuses on the choice of prompt templates and verbalizers, which are essential components of prompt engineering. The improvements reported over baseline models and the exploration into few-shot and zero-shot settings underscore its significant contribution to the field of prompt engineering."
-extracting structured seed-mediated gold nanorod growth procedures from literature with gpt-3,gpt-4-1106-preview,7,"The relevance to prompt engineering study is moderate to high. This abstract describes a practical application of prompt engineering, where the GPT-3 language model is used to interpret and structure unstructured scientific text data into a useful format (JSON documents). While the study is not solely focused on the theory of hard prefix prompts, it does involve the fine-tuning of prompts with the GPT-3 model to achieve specific outcomes. Therefore, the study contributes to the broader field of prompt engineering by showcasing how prompts can be designed and leveraged to extract complex information from literature, which is a subset of the prompt engineering domain."
-a dataset for cross-domain reasoning via template filling,gpt-4-1106-preview,8,"The relevance to prompt engineering is high, as the abstract discusses the development of a dataset and a method (prompt-template-filling approach) for enabling sequence to sequence models to perform cross-domain reasoning. Prompt engineering involves creating prompts that guide models towards desired outputs; the prompt-template-filling approach is likely related to the construction of such prompts to facilitate reasoning across different domains. Even though it may not directly address 'hard prefix prompts', it does pertain to the broader field of prompt engineering and its application in NLP tasks. The additional focus on cross-domain reasoning is also relevant, as it indicates a level of complexity in the prompt design suited for advanced reasoning. However, without more explicit mention of 'hard prefix prompts', it cannot receive a full score."
-prompting for automatic log template extraction,gpt-4-1106-preview,8,"The content is highly relevant to prompt engineering study due to the core focus on leveraging the in-context inference capabilities of large language models for log parsing. The precise framework, LogDiv, that is introduced, is a direct application of prompt engineering where log examples are used as prompts to extract information. This aligns with the concept of 'hard prefix prompts' as it uses a structured approach to guide the language model's output towards the generation of log templates. The rating is not a full 10 because the abstract mostly concerns log parsing rather than the broader scope of prompt engineering, but the techniques and findings are still very much applicable to the field."
-dspy: compiling declarative language model calls into self-improving pipelines,gpt-4-1106-preview,9,"The abstract describes a programming model (DSPy) that deals with the creation and optimization of language model pipelines using declarative modules, which is closely related to prompt engineering. The abstraction of LM pipelines as text transformation graphs directly involves the crafting and application of prompts to achieve specific computational tasks. The optimization of pipelines to maximize performance metrics is also a key aspect of prompt engineering, as it relates to refining prompts for better outcomes. The introduction of a systematic approach with modules that can learn and improve over time suggests a significant relevance to the study and advancement of prompt engineering. Therefore, I have rated its relevance as high but not the maximum because the abstract does not discuss 'hard prefix prompts' specifically, which was the focus of the original prompt."
-role knowledge prompting for document-level event argument extraction,gpt-4-1106-preview,7,"The paper presents a new model for Document-level Event Argument Extraction (DEAE) which is relevant to prompt engineering as it discusses enhancing the interaction between templates (prompts) and roles for pretrained language models (PLMs). The use of a role knowledge guidance mechanism to aid PLMs in understanding semantics and generating arguments can be considered a contribution to the field of prompt engineering. The relevance is not at the highest level because the focus is on a specific application of prompt engineering within document-level event argument extraction rather than on prompt engineering more generally or on 'hard prefix prompts' as an overarching concept."
-cot-bert: enhancing unsupervised sentence representation through chain-of-thought,gpt-4-1106-preview,8,"The abstract details the use of prompt engineering as a part of a two-stage approach for sentence representation learning with CoT-BERT, which suggests a direct relationship to the field of study. While prompt engineering is not the sole focus, it is integral to the proposed method's success, indicating high relevance. However, the abstract does not focus solely on hard prefix prompts, which would be necessary for a rating of 10."
-advanced prompting as a catalyst: empowering large language models in the management of gastrointestinal cancers,gpt-4-1106-preview,9,"The abstract relates directly to prompt engineering, as it discusses how different prompting strategies can affect the performance of Large Language Models (LLMs) in a specified domain, which is gastrointestinal oncology. The investigation of varying types of prompts, the development of an evaluation system, and the focus on optimizing LLMs' performance in medical scenarios demonstrate a high level of relevance to the field of prompt engineering. The reason for not rating it a perfect 10 is that the study's focus is on one specific application area within healthcare rather than a broad exploration of prompt engineering in multiple contexts."
-incorporating instructional prompts into a unified generative framework for joint multiple intent detection and slot filling,gpt-4-1106-preview,8,"The abstract describes a method for addressing joint multiple Intent Detection (ID) and Slot Filling (SF) using a Unified Generative framework (UGEN) that relies on prompt-based instructions. Since it involves designing templates as instructional prompts in a question-answering format to improve understanding of intents and slots in natural language processing, it is highly relevant to prompt engineering. The focus on instructional prompts aligns with the study of how prompts can enhance performance in language models. However, it doesn't address 'hard prefix prompts' specifically, hence the rating is not a full 10."
-a practical three-phase approach to fully automated programming using system decomposition and coding copilots,gpt-4-1106-preview,7,"The study focuses on enhancing the capabilities of language models in generating code, which indirectly relates to prompt engineering in the context of creating prompts that facilitate better code generation. The paper mentions empirical insights to create prompt templates, indicating that the research involves understanding how to structure prompts effectively to improve the performance of the language models. Thus, it has relevance to prompt engineering study, particularly the aspect of designing prompts for coding-related tasks. However, the paper's primary aim is not centered on the study of prompt engineering itself but rather on a neuro-symbolic approach to automated programming. This is why the relevance rating is not higher."
-towards robust nlg bias evaluation with syntactically-diverse prompts,gpt-4-1106-preview,9,"The presented study is highly relevant to prompt engineering as it directly addresses the impact of syntactic variations in prompts on the output of NLG systems. It critiques the standard practice of using fixed templates for bias analysis and demonstrates the importance of diversifying prompt structures to obtain more reliable and representative outcomes. This research aligns with the motives of prompt engineering, which include understanding and optimizing how different prompts affect the behavior of language models."
-ku x upstage’s submission for the wmt22 quality estimation: critical error detection shared task,gpt-4-1106-preview,8,"The paper discusses the application of prompt-based fine-tuning within the context of quality estimation and critical error detection tasks which is closely related to prompt engineering. The method of reformulating the task to fit a masked language model objective and the efforts to design intuitive templates and label words are directly relevant to the study of engineering effective prompts. Although the focus is on the specific application of QE and CED in machine translation, the techniques and insights derived could be beneficial for prompt engineering study. The rating is not a full 10 because the paper is specialized in QE and CED, which is only a subset of the broader field of prompt engineering."
-vision encoders in visual question answering,gpt-4-1106-preview,8,"The relevance of the study to prompt engineering is significant as it examines the impact of strategically formatting prompts on the performance of Visual Language Models in the task of Visual Question Answering. This exploration is an essential aspect of prompt engineering, as it directly relates to how the models' input structure influences their ability to leverage learned knowledge. The improvement in task performance through prompt formatting highlights the importance of prompt engineering for optimizing model efficacy. However, it is not given a full score because the study is specifically focused on VQA tasks and VLMs, rather than the broader field of prompt engineering across various models and tasks."
-keyword-optimized template insertion for clinical information extraction via prompt-based learning,gpt-4-1106-preview,9,"The abstract describes a study focused on prompt-based learning, specifically within clinical NLP tasks, and addresses the challenge of prompt design optimization for text classification. Although it doesn't mention 'hard prefix prompts' explicitly, the research on keyword-optimized template insertion is highly relevant to the field of prompt engineering. It explores how the position of the template (i.e., prompt) can affect model performance, which is a core aspect of prompt engineering studies. The research is very pertinent for anyone interested in the effects of prompt design on model efficacy, especially in data-sparse scenarios such as clinical note classification. Thus, it receives a high relevance rating."
-kul@smm4h’22: template augmented adaptive pre-training for tweet classification,gpt-4-1106-preview,7,"The paper's relevance to prompt engineering is significant as it discusses the use of template augmentations in pre-training models for tweet classification, which is a form of prompt engineering. The inclusion of 'template augmented task adaptive pre-training' indicates that the study explores how different prompt structures can aid in adapting language models to particular tasks, here being the classification of tweets mentioning Adverse Drug Effects. Although the study is focused on a specific application in the health domain and does not solely focus on 'hard prefix prompts', it demonstrates a practical implementation of prompt engineering through template augmentation. The relevance is not rated higher because the abstract does not directly address a systematic review on prompt engineering or 'hard prefix prompts' as a general concept, but rather reports on a specific application and its outcomes."
-research on chinese short text classification based on prefix-vector attention template and probabilistic answer set,gpt-4-1106-preview,8,"The abstract discusses the use of a prefix-vector as a template in prompt learning for text classification, indicating a clear relevance to prompt engineering. It specifically addresses the optimization of prompts for improving performance in text classification tasks, which is a direct application of prompt engineering. However, it doesn't solely focus on 'hard' prefix prompts, hence the rating isn't a full 10."
-stt: soft template tuning for few-shot learning,gpt-4-1106-preview,9,"The abstract discusses a new prompt-tuning framework called Soft Template Tuning (STT), which directly relates to prompt engineering as it involves the fine-tuning of prompts for few-shot learning applications with large language models. The study's focus on combining manual prompts and auto-prompts, as well as treating downstream tasks as masked language modeling tasks, is highly relevant to the field of prompt engineering. While it doesn't focus specifically on 'hard prefix prompts,' it does contribute significantly to the overall understanding of prompt tuning, which is a core aspect of prompt engineering. Therefore, it gets a high relevance rating."
-cross-domain reasoning via template filling,gpt-4-1106-preview,8,"The paper discusses a prompt-template-filling approach which is highly relevant to the field of prompt engineering as it directly involves designing prompts to facilitate cross-domain reasoning in sequence to sequence models. The relevance is slightly lower than the maximum score because the prompt engineering study specified involves hard prefix prompts, and it is not clear from the abstract if the study specifically addresses hard prefix prompts or if it has a broader scope. Nevertheless, the methodology and case studies presented are likely to be informative for prompt engineering research, particularly in understanding and improving model's abilities in cross-domain applications."
-daprompt: deterministic assumption prompt learning for event causality identification,gpt-4-1106-preview,8,"The paper 'daprompt: deterministic assumption prompt learning for event causality identification' is highly relevant to prompt engineering as it discusses the design and implementation of a novel prompt learning method for a specific NLP task (ECI). The focus on the deterministic assumption in prompt learning directly feeds into the broader discussion of how to engineer prompts for better utilization of pre-trained language models. While the study is not about hard prefix prompts in general, it contributes to the field of prompt engineering by exploring an alternative approach to conventional prompt design, thus the rating of 8."
-stprompt: semantic-guided and task-driven prompts for effective few-shot classification,gpt-4-1106-preview,9,"The given title and abstract describe an approach to prompt engineering that is specifically tailored to improve few-shot classification performance in language models. The development of the STPrompt model, which utilizes semantic-guided and task-driven prompts, is highly relevant to the field of prompt engineering. The use of prompts that are constructed from semantic dependency trees and task-specific metadata is indicative of advanced prompt engineering techniques. Therefore, the study is almost directly aligned with prompt engineering, with the potential deduction of a point for not addressing 'hard prefix prompts' as the prompt is open-ended regarding the type of prompts studied."
-supplementary material for mask-free ovis: open-vocabulary instance segmentation without manual mask annotations,gpt-4-1106-preview,8,"The abstract describes a process of using prompt templates to generate pseudo-captions from image-labels for vision-language models. This is highly relevant to the study of prompt engineering because it involves the creation of templates that structure input for language models in a way that improves their understanding and output generation. While it doesn't directly mention the term 'hard prefix prompts', the use of rigidly structured prompt templates hints at a similar concept. Prompt engineering is crucial in this context to ensure that the model correctly interprets the image categories and generates coherent and accurate captions. The rating isn't a full 10 as it doesn't cover the entire breadth of prompt engineering studies, especially those that pertain to non-image related tasks, but it remains significantly relevant for the subset of prompt engineering it pertains to."
-pre-training extractive question-answer prompts for few-shot chinese text classification,gpt-4-1106-preview,8,"The document discusses the use of prompt learning for few-shot text classification, which is a subset of prompt engineering as it involves designing and training prompts to work effectively with pre-trained language models. The relevance to prompt engineering is high because it directly deals with the creation of prompts that fit a specific task, which is extractive question-answering in this case. The study also touches upon improving the efficiency of such prompts using contrastive learning, which is an advanced topic in prompt engineering. However, the specific term 'hard prefix prompts' is not mentioned, which suggests that while the document is highly relevant to prompt engineering, it may not cover the 'hard prefix' aspect explicitly."
-enhancing cross-lingual natural language inference by soft prompting with multilingual verbalizer,gpt-4-1106-preview,8,"The study discusses soft prompt learning within the context of cross-lingual natural language inference, which is related to the field of prompt engineering. Although this is not specifically about 'hard prefix prompts,' soft prompting is an alternative prompting approach, and understanding it can contribute to the field of prompt engineering by offering insights into different methods of designing prompts. Furthermore, the study mentions the limitations of hard prompts, which implies a comparison that can be informative for prompt engineering studies. The rating is not a full 10 because the direct focus on 'hard prefix prompts' is lacking, but it is still highly relevant due to its implications for the broader field of prompt engineering."
-exploring prompts in few-shot cross-linguistic topic classification scenarios,gpt-4-1106-preview,9,"The abstract describes research directly related to prompt engineering, specifically addressing the challenge of creating efficient prompts for few-shot learning in cross-linguistic scenarios. The study's exploration of discrete, continuous, and hybrid prompts, and their impact on model performance, makes it highly relevant to the field of prompt engineering. The deduction of one point is due to the abstract not mentioning 'hard prefix prompts' specifically, but it is otherwise very pertinent to the prompt engineering domain."
-grounding language to entities and dynamics for generalization in reinforcement learning,gpt-4-1106-preview,7,"The described study involves creating templates for textual descriptions and has a component of paraphrasing, which relates to prompt engineering in that it deals with the systematic construction and variation of prompts. However, because it is situated within the context of reinforcement learning and generalization rather than directly focused on prompt engineering for language models or search queries, it is not a perfect match for the specific topic of a 'hard prefix prompts' systematic review."
-random word retrieval for automatic story generation,gpt-4-1106-preview,7,"The paper's relevance to prompt engineering study is moderately high. It discusses automatic story generation using a method that mimics human writing prompts. The concept of leveraging random words as prompts and then using the internet to provide context aligns with aspects of prompt engineering, which involves creating stimuli that guide the output of generative models. While the paper focuses primarily on story generation rather than the intricacies of engineering prompts, the approach contributes to understanding how prompts can be constructed to initiate a creative process in AI systems. Hence, it offers insights applicable to prompt engineering, even if that is not the main focus of the study."
diff --git a/data/semantic_scholar_data/semantic_scholar_human_review_papers_with_pdf.csv b/data/semantic_scholar_data/semantic_scholar_human_review_papers_with_pdf.csv
deleted file mode 100644
index 5f4e2f0..0000000
--- a/data/semantic_scholar_data/semantic_scholar_human_review_papers_with_pdf.csv
+++ /dev/null
@@ -1,136 +0,0 @@
-Title,Probability,Reasoning,Open Access PDF URL
-align and prompt: video-and-language pre-training with entity prompts,4,"The abstract describes a pre-training framework for video-and-language tasks, focusing on cross-modal alignment and introducing a novel prompting entity modeling concept. Although the study involves 'entity prompts,' it primarily concentrates on video-text interaction rather than exploring 'hard prefix prompts' as may be suggested by prompt engineering in a language model context. The relevance to prompt engineering is secondary and indirect, mainly connected through the novel use of prompts for entity modeling within a multimodal framework, not as a comprehensive study of prompt engineering itself.",https://arxiv.org/pdf/2112.09583
-modeling prompt adherence in student essays,6,"The study is only moderately relevant to prompt engineering. It focuses on modeling prompt adherence in student essays and introduces a corpus and scoring method, which could potentially inform the development of prompts in educational settings. However, prompt adherence is just one aspect of prompt engineering, and the study's scope is limited to student essays rather than a broader application within engineering prompts for AI or human-computer interactions. Therefore, while relevant, it does not wholly represent prompt engineering as a comprehensive field.",https://aclanthology.org/P14-1144.pdf
-how novices use llm-based code generators to solve cs1 coding tasks in a self-paced learning environment,4,"While the presented study does not directly focus on 'hard prefix prompts' or prompt engineering, it does investigate the use of prompts by novice programmers in an educational setting when interacting with a Large Language Model (LLM)-based code generator like Codex. Since prompt crafting is a substantial part of this interaction, and the properties of these prompts are analyzed, the study has some relevance to prompt engineering. However, its primary focus seems to be on the educational implications and usage patterns of the LLM rather than developing or understanding the specific prompt engineering strategies to improve interaction with LLMs.",https://arxiv.org/pdf/2309.14049
-"reason for future, act for now: a principled framework for autonomous llm agents with provable sample efficiency",5,"The abstract provided discusses a framework for improving the way large language models act and reason over time, with a focus on learning and planning within Bayesian adaptive Markov decision processes. Although this is related to how prompts might be engineered to elicit particular responses from LLMs, it doesn't specifically mention 'hard prefix prompts' or address prompt engineering techniques in a systematic review context. Therefore, while aspects of this framework could potentially inform prompt engineering strategies to some extent (hence not a 0 rating), the relevance to the study of prompt engineering, particularly that of 'hard prefix prompts,' is only tangential. Therefore, a middle score reflects this partial relevance.",https://arxiv.org/pdf/2309.17382
-hide and seek (has): a lightweight framework for prompt privacy protection,5,"The provided abstract focuses on privacy protection in the context of using large language models by introducing the HaS (Hide and Seek) framework, which is relevant to the broader field of responsible AI usage and prompt engineering to a certain degree. It discusses techniques for anonymization and de-anonymization, which could indirectly affect the way prompts are engineered to ensure privacy. However, the main concern of the study is privacy protection rather than methodologies for optimizing or understanding the construction of prompts (hard prefix prompts) in prompt engineering studies. As a result, it holds moderate relevance as it touches upon the privacy aspect of user inputs (prompts) but does not directly deal with the study or advancement of prompt-engineering techniques.",https://arxiv.org/pdf/2309.03057
-gpt-3-driven pedagogical agents for training children's curious question-asking skills,6,"The relevance to prompt engineering study is moderate. While the focus of this paper appears to be on using large language models to encourage children to ask more curious questions, and it involves a natural language prompting approach, the connection to 'hard prefix prompts' specifically is not directly mentioned. Prompt engineering is certainly a component of training these models for pedagogical purposes, but the abstract does not provide information about a systematic review of prompt engineering or hard prefix prompts explicitly. It suggests using prompting methods for practical applications rather than studying the prompts themselves.",https://arxiv.org/pdf/2211.14228
-reconcile: round-table conference improves reasoning via consensus among diverse llms,4,"The study presents a multi-agent system for improving consensus and reasoning among Large Language Models (LLMs), which touches on the field of prompt engineering indirectly through the use of 'discussion prompts'. While it does not address hard prefix prompts directly, the mention of prompts as a means for agent communication suggests relevance to prompt design and its impact on model performance. Therefore, it is somewhat relevant to studies in prompt engineering, especially those exploring the interaction dynamics and prompt-response behavior within and between models.",https://arxiv.org/pdf/2309.13007
-prompting large language models for zero-shot domain adaptation in speech recognition,4,"The abstract touches on using a domain-specific text prompt for zero-shot domain adaptation in speech recognition with a large language model, which involves prompt engineering for a narrowly defined purpose. It highlights utilizing prompts for performance improvement in a specific AI task, which is relevant to the study of prompt engineering. However, it does not directly address a 'systematic review on hard prefix prompts' or cover the broader implications and methodologies of prompt engineering, thus only partially relevant.",http://arxiv.org/pdf/2306.16007
-interactive data synthesis for systematic vision adaptation via llms-aigcs collaboration,6,"The abstract provided for the study 'interactive data synthesis for systematic vision adaptation via llms-aigcs collaboration' indicates an exploration of the collaboration between language models (LLMs) and artificial intelligence generated content (AIGC) models for more controllable image generation, which is aligned with the practice of prompt engineering. However, the focus seems to be on data augmentation for vision tasks rather than solely on the systematic review of 'hard prefix prompts' in prompt engineering. Although prompt engineering is relevant to the work described, as it is necessary for guiding the LLMs in this process, the absence of a direct and explicit focus on a review of prompt engineering techniques, specifically hard prefix prompts, results in a moderate rating on the relevance scale.",http://arxiv.org/pdf/2305.12799
-systematic rectification of language models via dead-end analysis,6,"The study presents a method for detoxification of language model outputs, which is tangentially related to prompt engineering. While the main focus is not on the development of prompts, the detoxification process could impact how prompts are engineered by reducing the probability of generating toxic responses and altering the token selection process. This can be relevant in creating safer and more effective prompts. However, the study does not directly address hard prefix prompts or systematic reviews of prompt engineering strategies, so the rating reflects moderate relevance rather than full alignment with the prompt engineering field.",http://arxiv.org/pdf/2302.14003
-chatrule: mining logical rules with large language models for knowledge graph reasoning,5,"The described paper presents a novel framework called ChatRule, which utilizes large language models to generate logical rules for knowledge graph reasoning. While this application indirectly relates to prompt engineering, as it involves leveraging LLMs to generate content based on structured prompts from knowledge graphs, the focus is more on the application in knowledge graphs and logical rule mining rather than on the study of hard prefix prompts in a general context. Therefore, its relevance to a comprehensive systematic review on hard prefix prompts in prompt engineering may be considered moderate, as the principles could potentially inform prompt engineering techniques, but it is not directly aligned with the review's core subject.",https://arxiv.org/pdf/2309.01538
-evaluating the text-to-sql capabilities of large language models,6,"The abstract describes an empirical study on the capability of a language model, Codex, to interpret and convert natural language into SQL queries. The study focuses on performance evaluation and comparison with state-of-the-art models in terms of few-shot learning, which is facilitated by providing a few in-domain examples in the prompt. Although the abstract does not explicitly mention the term 'prompt engineering,' the essence of evaluating the impact of tailored prompts on the model's performance is captured in the process of providing 'in-domain examples'. This could be considered a form of prompt engineering, as it involves crafting prompts to improve task-specific performance. Hence, the study has relevance to the broader field of prompt engineering, specifically regarding how prompts can enable large language models to understand and generate structured queries like SQL. However, the focus is not on 'hard prefix prompts' or a comprehensive systematic review on such, which would be more directly related to the prompt engineering study described in the prompt, thus warranting a moderate rating rather than a high one.",http://arxiv.org/pdf/2204.00498
-persistent anti-muslim bias in large language models,6,"While the study is highly relevant to the broader field of AI ethics and bias in machine learning models, its direct relevance to 'prompt engineering' is moderate. It touches on the concept of 'adversarial text prompts' as a means to counteract bias in language models, which does fall under the scope of prompt engineering. However, the study's primary focus is on the identification and analysis of bias, rather than on the engineering of prompts as a method for directing or improving the language model's outputs. More specifically, it does not address 'hard prefix prompts' in the systematic review sense but does explore the dynamic between prompt construction and model responses related to bias.",https://arxiv.org/pdf/2101.05783
-augesc: dialogue augmentation with large language models for emotional support conversation,4,"The study described does involve prompts as it discusses leveraging large language models for dialogue augmentation, specifically in the context of emotional support conversation. The prompt engineering aspect is present in the sense that the researchers instruct the model to complete dialogues which could be considered a form of a 'prompt'. However, hard prefix prompts, which imply a specific approach to structuring prompts to elicit desired responses, are not directly mentioned. This suggests that while the study is related to prompt design and usage, it may not focus on the 'hard prefix prompts' aspect extensively, leading to a moderate relevance rating.",https://aclanthology.org/2023.findings-acl.99.pdf
-retroformer: retrospective large language agents with policy gradient optimization,6,"The abstract describes a study related to optimizing large language agents through policy gradient optimization, which indirectly involves engineering of prompts because it mentions the automatic tuning of language agent prompts based on environment feedback. While this does not specifically target 'hard prefix prompts,' it is relevant to the broader field of prompt engineering as it involves refining prompts to improve agent performance. However, the lack of direct mention of 'hard prefix prompts' or a comprehensive systematic review of them justifies a moderate rating rather than a high one.",https://arxiv.org/pdf/2308.02151
-leveraging large language models for mental health prediction via online text data,6,"The title and abstract indicate that this study involves leveraging large language models (LLMs) for mental health prediction tasks by analyzing online text data, which is related to the application of LLMs, but it doesn't specifically mention 'hard prefix prompts' or 'prompt engineering' as the central theme. However, the use of zero-shot and few-shot prompting, along with instruction finetuning, falls under the broader category of prompt engineering techniques. Therefore, while the study is tangentially relevant to prompt engineering because it involves designing inputs for LLMs to perform specific tasks, it is not focused on a comprehensive systematic review of hard prefix prompts, which makes it only moderately relevant.",https://arxiv.org/pdf/2307.14385
-can large language models empower molecular property prediction?,4,"The study focuses on the application of Large Language Models (LLMs) for molecular property prediction using SMILES text, which demonstrates a use case for LLMs that is adjacent to the concept of prompt engineering. Although it deals with prompting LLMs for in-context learning and involves the generation of explanations, which are relevant techniques in prompt engineering, the study's primary aim is not a systematic review of prompt engineering itself, nor does it specifically address 'hard prefix prompts'. Therefore, its relevance to a comprehensive systematic review on hard prefix prompts is tangentially related but not directly aligned, warranting a moderate relevance rating.",https://arxiv.org/pdf/2307.07443
-generating data for symbolic language with large language models,6,"The abstract indicates that the paper is closely related to the use of prompts in the context of LLMs for data generation, specifically in the area of symbolic language tasks. While the study does not directly focus on 'hard prefix prompts' as stipulated in the prompt engineering study question, it does explore 'informative prompt' design in order to steer the LLM's data generation process. This suggests a strong relevance to the practice of prompt engineering and the optimization of prompts for specific tasks in LLMs. However, the lack of explicit focus on 'hard prefix prompts' reduces the direct relevance to the systematic review concerning that specific aspect of prompt engineering.",http://arxiv.org/pdf/2305.13917
-denseclip: language-guided dense prediction with context-aware prompting,4,"While the study described in the abstract does involve a form of 'prompting' by using contextual language information to guide a model, this is applied in the scope of visual representation learning and not in the explicit context of 'hard prefix prompts' for text-based language models, which is often what is referred to in prompt engineering studies. Therefore, its relevance to prompt engineering study is tangential rather than directly applicable.",https://arxiv.org/pdf/2112.01518
-speechprompt v2: prompt tuning for speech classification tasks,6,"The paper is relevant to prompt engineering as it discusses prompt tuning, a technique integral to prompt engineering that involves fine-tuning a language model (LM) using prompts to better perform specific tasks. Although the main focus is on speech classification tasks and not solely on hard prefix prompts, it still offers insights into the larger field of prompt engineering, particularly how prompts are used to improve performance and efficiency for various tasks in speech processing. The paper does not directly address a 'comprehensive systematic review on hard prefix prompts,' but the technology it explores falls within the broader scope of prompt engineering studies.",http://arxiv.org/pdf/2303.00733
-prompting visual-language models for efficient video understanding,6,"The study pertains to efficient adaptation mechanisms for pre-trained visual-language models specifically for video understanding tasks. It suggests a methodology for fine-tuning the models, possibly including the use of prompts to align pre-training objectives with video-related tasks. While it doesn't directly address 'hard prefix prompts', the adaptation of pre-trained models using prompts is a related area of research. The relevance is therefore moderate, as the study could potentially inform prompt engineering practices in multi-modal contexts, even though it does not focus on a systematic review of hard prefix prompts.",https://arxiv.org/pdf/2112.04478
-reducing sentiment bias in language models via counterfactual evaluation,4,"The study deals with reducing sentiment bias in language models by using a form of counterfactual evaluation, which is related to how prompts might internalize biases present in training data. While it touches on the area of prompt engineering by considering how the conditioning context affects model output, its primary focus is on bias quantification and reduction rather than on the systematic review of 'hard prefix prompts' or the structure and impact of prompt design itself.",https://www.aclweb.org/anthology/2020.findings-emnlp.7.pdf
-simultaneous translation and paraphrase for language education,4,"The study presents work on the generation of translations and paraphrases, which touches upon prompt engineering indirectly through the creation of diverse language sets for training models. However, the primary focus appears to be on translation and paraphrasing rather than prompt engineering itself. It can be relevant to prompt engineering in the context of designing effective prompts for language translation tasks but does not directly address the systematic review of 'hard prefix prompts' for prompt engineering studies.",https://www.aclweb.org/anthology/2020.ngt-1.28.pdf
-generative visual prompt: unifying distributional control of pre-trained generative models,6,"The study presents a framework called Generative Visual Prompt (PromptGen) to exercise distributional control over pre-trained generative models. While it does not directly relate to 'hard prefix prompts' that are typically associated with language models and their prompting techniques, the concept of manipulating the output of generative models using prompts (here in the form of external model knowledge) is related to the broader topic of prompt engineering. The focus on controlling generative models aligns with the idea of influencing model behavior through prompts, hence the relevance to the field of prompt engineering. However, it lacks a direct connection to 'hard prefix prompts' in systematic review context and instead deals with a different application of prompting in the visual domain. Thus, the relevance is moderate, and the rating is given a 6.",http://arxiv.org/pdf/2209.06970
-dream3d: zero-shot text-to-3d synthesis using 3d shape prior and text-to-image diffusion models,4,"The paper discusses the use of text prompts for zero-shot text-to-3D synthesis, which involves aspects of prompt engineering as it requires the optimization of text prompts to generate 3D shapes. However, the core focus is on the synthesis of 3D structures from text descriptions rather than on the study of hard prefix prompts specifically. The relevance to prompt engineering is tangential and mainly related to optimizing text prompts within a specific context of 3D content generation.",https://arxiv.org/pdf/2212.14704
-unleashing the power of visual prompting at the pixel level,6,"The paper describes a study focused on visual prompting as a technique to adapt pre-trained models for recognition tasks, which is related to prompting in a broad sense. However, the query asks for a comprehensive systematic review on hard prefix prompts, which typically refers to textual prompt engineering where specific text prompts are designed to guide language models. Although visual prompting shares similar goals in terms of adapting models to new tasks, it does so in the domain of images rather than text. Hence, the relevance is moderate because the methods and outcomes may have conceptual parallels to textual prompt engineering, but do not directly address or review hard text-based prefix prompts.",http://arxiv.org/pdf/2212.10556
-being comes from not-being: open-vocabulary text-to-motion generation with wordless training,4,"While the abstract indicates that this study was inspired by prompt learning in NLP and involves the reformulation of input text into a 'prompt' for a generator, the primary focus is on text-to-motion generation rather than on prompt engineering for language models or systematic reviews of 'hard prefix prompts.' The connection to prompt engineering is tangential and based more on a conceptual inspiration than on a direct study or analysis of prompts in the context of text or language processing.",https://arxiv.org/pdf/2210.15929
-a web-based environment for documentation and sharing of engineering design knowledge,5,"The abstract describes an ontological knowledge-base designed to aid in the engineering design process by prompting engineers to document and share information efficiently. Although it mentions the use of prompts to drive certain behaviors within the engineering design process, the focus is not specifically on the study of 'hard prefix prompts' in the context of 'prompt engineering' as it relates to AI or machine learning. The paper seems to be more aligned with knowledge management and ontological structures in engineering rather than the specific study of designing and engineering prompts for AI systems. Therefore, it is somewhat relevant due to its use of prompting mechanisms but not directly concerned with the study at hand.",https://dr.lib.iastate.edu/bitstreams/f33fbf78-2699-4329-ba73-eb0eb605254f/download
-"don’t prompt, search! mining-based zero-shot learning with language models",6,"The paper discusses the limitation of the traditional prompt-based approach for zero-shot learning with language models and offers an alternative mining-based approach. It touches upon the subject of how prompts are used and their sensitivity to the task, which is relevant to prompt engineering studies. However, the primary focus seems to be on the mining technique rather than the engineering or optimization of prompts themselves. Therefore, while it relates to the field of prompt engineering, it does so from a perspective of finding an alternative to hard-coded prompts, rather than improving or systematically reviewing them.",https://arxiv.org/pdf/2210.14803
-prompt-tuned code language model as a neural knowledge base for type inference in statically-typed partial code,6,"The study presents an approach that incorporates elements of prompt engineering by fine-tuning a language model with a specific task-oriented prompt ('pre-train, prompt and predict' paradigm). Although the primary focus is not on prompt engineering for natural language processing, but rather type inference within code, the use of prompts to guide the model suggests relevance. However, it is specialized for code language models which may not fully align with more generalized prompt engineering studies.",https://dl.acm.org/doi/pdf/10.1145/3551349.3556912
-blended diffusion for text-driven editing of natural images,6,"The paper's relevance to prompt engineering is moderate as it deals with the application of language prompts in the context of image editing. Even though the main focus is on the use of natural language prompts to direct image edits, which is related to how prompts are engineered to guide machine learning models, it is not specifically focused on the study of 'hard prefix prompts' or the structure and efficacy of prompts in a general sense. The relevance comes from the intersection with prompt engineering in the domain of combining text prompts with image processing models, which may offer insights into how to better design prompts for specific tasks like image editing. However, without a direct analysis on the design, structure, or impact of the prompts themselves, its relevance is not maximal.",https://arxiv.org/pdf/2111.14818
-cora: adapting clip for open-vocabulary detection with region prompting and anchor pre-matching,4,"The abstract describes an approach to improve open-vocabulary detection by using region prompting in combination with a visual-language model, which could be relevant to prompt engineering in that it involves the adaptation of prompts to improve recognition tasks. However, the focus is on object detection and adapting existing models to new tasks, rather than investigating the systematic study of hard prefix prompts specifically. While the method of region prompting could potentially inform prompt engineering practices, the direct relevance to the study of hard prefix prompts is tangential.",https://arxiv.org/pdf/2303.13076
-bloom+1: adding language support to bloom for zero-shot prompting,4,"The provided document abstract pertains to language model adaptation, specifically for the BLOOM model, and how it is applied to zero-shot prompting in new languages. While the study addresses issues relevant to language models and prompting, it does not directly deal with the engineering of prompts, especially with 'hard prefix prompts' as mentioned in the original query. The relevance lies in the broader context of zero-shot learning and language adaptation, which can impact the effectiveness of prompts in multiple languages. However, since it doesn't focus on the specific design or structuring of prompts, or the concept of 'hard prefix prompts', the rating is moderately low.",http://arxiv.org/pdf/2212.09535
-using simple technology to prompt multistep tasks in the home for people with dementia: an exploratory study comparing prompting formats,5,"The study provides insights into the design of prompts for a specific user group (people with dementia) and highlights that the effectiveness of prompts can be context-dependent, which offers a partial relevance to the general field of prompt engineering. However, the study is focused on cognitive impairment and lacks a direct connection to the broader concepts and methodologies of engineering prompts for software or AI interactions. Therefore, the relevance is moderate.",https://journals.sagepub.com/doi/pdf/10.1177/1471301215602417
-large language models are state-of-the-art evaluators of translation quality,5,"The study focuses on the use of large language models for evaluating translation quality, which indirectly relates to prompt engineering through the application of zero-shot prompting and comparison of prompt variants. However, it is more centered on the application of language models for translation assessments rather than the principles or effects of prompt engineering itself. Although understanding how different prompts impact the quality evaluation by a language model is relevant, the core of the study is translation quality assessment rather than prompt engineering.",http://arxiv.org/pdf/2302.14520
-virtual prompt pre-training for prototype-based few-shot relation extraction,4,"While the title suggests the study involves 'virtual prompt pre-training', which pertains to a technique potentially related to prompt engineering in the context of machine learning, the lack of abstract and TLDR makes it difficult to assess its direct relevance to prompt engineering, particularly to 'hard prefix prompts'. The relevance is expected to be moderate as it mentions prototypes and few-shot relation extraction which may involve prompt design but does not explicitly focus on hard prefix prompts as per the provided information.",http://manuscript.elsevier.com/S0957417422019455/pdf/S0957417422019455.pdf
-zero- and few-shot event detection via prompt-based meta learning,6,"The study discusses a meta-learning framework for zero- and few-shot event detection, employing cloze-based prompts within the methodology. Prompt-based approaches are relevant to prompt engineering, as they involve the design of input structures that facilitate model learning and generalization to new tasks. However, the focus on event detection and a meta-learning framework makes this work only partially related to the core study of hard prefix prompts in prompt engineering, hence the rating is moderate.",http://arxiv.org/pdf/2305.17373
-few-shot composition learning for image retrieval with prompt tuning,6,"The study includes techniques related to prompt tuning and the development of a visual prompt within the context of image retrieval, which is indirectly related to prompt engineering in natural language processing (NLP). While prompt tuning is a concept used in NLP, this study applies it to a visual domain and focuses on compositional learning and few-shot learning mechanisms, which are somewhat tangential to the typical studies on hard prefix prompts in text-based models. The relevance is moderate because the study does show the application of prompt tuning concepts but in a different domain and does not directly address hard prefix prompts in the context of NLP.",https://ojs.aaai.org/index.php/AAAI/article/download/25597/25369
-structure pretraining and prompt tuning for knowledge graph transfer,4,"The abstract describes a study on a knowledge graph pretraining model (KGTransformer) and its application across different knowledge graph-related tasks, which is related to machine learning and transfer learning. The use of 'prompt-tuning' with task data as a 'triple prompt' indicates a form of prompt engineering, but the focus seems to be more on the application of this mechanism for task-specific KG interactions, rather than a comprehensive study of the prompt engineering concept itself. The relevance to prompt engineering study is therefore present but not central to the paper's core contribution, hence the moderate rating.",https://arxiv.org/pdf/2303.03922
-[cls] token is all you need for zero-shot semantic segmentation,4,"The given abstract pertains to a study on zero-shot semantic segmentation using [CLS] tokens from the CLIP model, which isn't directly related to prompt engineering. However, the use of [CLS] tokens as auxiliary prompts for the visual encoder suggests some relevance to the understanding of how prompts can influence AI models. The rating is not higher because the primary focus of the study is on image segmentation, not on prompt engineering itself.",http://arxiv.org/pdf/2304.06212
-the unreliability of explanations in few-shot in-context learning,6,"The study seems to address a part of prompt engineering by examining how 'prompting' GPT-3 with explanations affects its performance on certain reasoning tasks, which is relevant to understanding how different types of prompts influence large language models. However, it primarily focuses on the reliability of explanations produced by GPT-3 and their use in validating predictions post-hoc, which is one aspect of prompt engineering. The study does not directly address 'hard prefix prompts' or a comprehensive systematic review of them. Therefore, while not fully aligned, it does contribute to the broader topic of prompt engineering by discussing the impact of explanatory prompts.",http://arxiv.org/pdf/2205.03401
-short answer grading using one-shot prompting and text similarity scoring model,5,"The relevance of the study to prompt engineering is moderate. The study involves the use of a large language model for one-shot prompting, which is relevant to the broader field of prompt engineering as it relies on effectively prompting a language model to perform a task—in this case, grading short answers. However, the study specifically focuses on an application of language models for automated grading rather than the systematic review of hard prefix prompts. The relevance is not direct but tangentially related due to the use of prompting techniques within the ASAG model.",http://arxiv.org/pdf/2305.18638
-metaprompting: learning to learn better prompts,6,"The abstract describes research on prompting methods in natural language processing, specifically focusing on moving from 'hard prompts' to 'soft prompts' and proposing a new method called MetaPrompting that utilizes meta-learning for better prompt initialization. Although the study is highly relevant to the broader topic of prompt engineering, the specific term 'hard prefix prompts' is not the main focus of this abstract. Instead, the research emphasizes soft prompting and the improvement of prompt initialization. Hence, the relevance to 'hard prefix prompts' is indirect, as the study seems to address the transition from hard to soft prompts and the advancement of soft prompt techniques.",http://arxiv.org/pdf/2209.11486
-learning to paraphrase sentences to different complexity levels,4,"While the study presented in the abstract does touch upon prompting strategies, which are part of prompt engineering, its focus seems to be more on the creation and use of datasets for sentence simplification, complexification, and paraphrasing. Prompt engineering generally refers to the design, testing, and optimization of prompts to improve performance of language models. The abstract indicates that the research includes experimentation on prompting strategies, which is relevant to prompt engineering; however, the main emphasis appears to be on dataset development and performance benchmarks rather than the intricate details of prompt engineering itself. Therefore, the relevance to prompt engineering study is moderate.",https://arxiv.org/pdf/2308.02226
-task effects on linguistic complexity and accuracy: a large-scale learner corpus analysis employing natural language processing techniques,6,"While the title 'task effects on linguistic complexity and accuracy: a large-scale learner corpus analysis employing natural language processing techniques' does not specifically mention 'prompt engineering' or 'hard prefix prompts', the abstract indicates relevance by discussing the influence of tasks or prompts on linguistic performance in second language acquisition. Prompt engineering is crucial in designing effective tasks that can elicit the desired complexity and accuracy in language learning, which is pertinent to the study at hand. However, since the focus is more broadly on task effects rather than the specifics of engineering prompts, especially 'hard prefix prompts', the relevance is moderate.",https://eprints.lancs.ac.uk/id/eprint/83702/3/Alexopoulou_et_al_in_press.pdf
-automatic code summarization via chatgpt: how far are we?,6,"The abstract discusses evaluating ChatGPT's performance on code summarization tasks using specific prompts, which can be considered a form of 'prompt engineering.' Prompt engineering involves crafting prompts to guide a model towards specific desired outputs or behaviors. While the abstract does not focus exclusively on 'hard prefix prompts,' it does entail exploring appropriate prompts to improve ChatGPT's performance. Therefore, it touches upon aspects of prompt engineering which are relevant to the study of how prompts affect an LLM's output, even though it isn't focused specifically on the systematic review of hard prefix prompts as a topic.",http://arxiv.org/pdf/2305.12865
-"augmented behavioral annotation tools, with application to multimodal datasets and models: a systematic review",6,"The systematic review discusses the evolution of annotation tools, which are a fundamental part of creating datasets for machine learning, and mentions the increasing emphasis on prompt engineering in the context of training sophisticated multimodal datasets. While the main focus of the paper is on annotation methods and not specifically on hard prefix prompts, the implications for prompt engineering in the context of adding qualitative fine-tuning to models are relevant. This indicates a moderate level of relevance to prompt engineering studies, especially in the context of how these annotation tools may impact the future of prompt engineering as part of machine learning model development.",https://www.mdpi.com/2673-2688/4/1/7/pdf?version=1674957180
-automatic essay scoring method based on multi-scale features,4,"The study discusses a method for automated essay scoring (AES) that integrates Sentence-BERT for sentence vectorization, deep neural networks, and shallow linguistic features, which includes prompt-related features. Although prompt-related features are mentioned, the focus is on scoring essays rather than engineering prompts which suggests a tangential connection to prompt engineering study. The method addresses the extraction and integration of features in AES, which is peripherally related to understanding prompts in the context of their relevance to essays but does not constitute a comprehensive systematic review on hard prefix prompts. Therefore, the relevance to prompt engineering study is moderate.",https://www.mdpi.com/2076-3417/13/11/6775/pdf?version=1685706520
-smart-llm: smart multi-agent robot task planning using large language models,6,"The study mentioned revolves around the use of Large Language Models (LLMs) for converting high-level instructions into task plans for multi-robot operations, which includes the use of programmatic LLM prompts within the few-shot prompting paradigm. This suggests a relevance to prompt engineering as it involves designing and using prompts to achieve specific outcomes with LLMs. However, the focus is more on robotics and task planning rather than solely on the study of prompt engineering techniques, hence the rating is a medium 6 out of 10 for relevance to prompt engineering studies specifically focused on hard prefix prompts.",https://arxiv.org/pdf/2309.10062
-prompting large language models with speech recognition abilities,4,"The described study focuses on extending the capabilities of large language models to perform automatic speech recognition by integrating an audio encoder. It does not primarily concentrate on the study or application of hard prefix prompts in the context of prompt engineering. However, because prompt engineering can involve methods for effectively instructing or incorporating additional modalities (like audio) into language models, this paper indirectly relates to the broader field of prompt engineering. The relevance is not direct as it doesn't address hard prefix prompts specifically, but the insights from such a study could potentially influence prompt engineering strategies for multimodal models.",https://arxiv.org/pdf/2307.11795
-satisfiability-aided language models using declarative prompting,6,"The abstract details a novel approach to improve reasoning capabilities of large language models (LLMs) by using satisfiability-aided language modeling (SatLM). Although it does not specifically mention 'hard prefix prompts' or 'prompt engineering,' the integration of an automated theorem prover to enhance the model's problem-solving abilities indirectly relates to the broader field of prompt engineering, where devising the right prompts to elicit desired outcomes from language models is crucial. The approach of generating a declarative task specification could be seen as part of the prompt engineering process, since it involves guiding the LLM to produce useful outputs for theorem proving. However, the lack of explicit focus on prompt engineering techniques limits the relevance to a comprehensive systematic review on hard prefix prompts, thus warranting a middling score.",https://arxiv.org/pdf/2305.09656
-query expansion by prompting large language models,6,"The abstract describes using various prompts, including Chain-of-Thought, in the context of query expansion leveraging Large Language Models (LLMs). The relevance to prompt engineering is clear since it specifically mentions the study of different types of prompts to optimize the performance of LLMs in a search-related task. However, it does not directly address 'hard prefix prompts,' indicating a comprehensive systematic review on a subset of prompt engineering but not covering the full scope that might be suggested by the prompt 'a comprehensive systematic review on hard prefix prompts.'
Therefore, while it is relevant due to its focus on prompt types and their effect on LLMs, it's not exactly aligned with the outlined study of hard prefix prompts.",http://arxiv.org/pdf/2305.03653 -how is chatgpt's behavior changing over time?,4,"The provided abstract and TLDR focus on the changes in the behavior of large language models (LLMs) like GPT-3.5 and GPT-4 over time across various tasks. While it is not directly related to a 'systematic review on hard prefix prompts' in prompt engineering, the study's insights into the performance variability and amenity to different prompting techniques (like chain-of-thought prompting) have indirect relevance to prompt engineering. Knowing how model performance can change over time is valuable for designing and updating prompts to maintain or improve LLMs' effectiveness. However, the focus is not specifically on prompt engineering with hard prefixes, which would make the relevance partial and thus results in a moderate rating.",https://arxiv.org/pdf/2307.09009 -gpt-ner: named entity recognition via large language models,5,"The relevance to 'prompt engineering study' is moderate. While the abstract discusses a method to adapt large language models (LLMs) for named entity recognition (NER) by transforming it into a text generation task, which implicitly involves engineering prompts (special tokens @@##) for entity extraction, the main focus is on overcoming the shortcomings of LLMs for NER tasks and not specifically on the study of prompt engineering as a field. The self-verification strategy mentioned does relate to the usage of prompts to verify generated content, which is relevant, but the paper does not seem to be centered on prompt engineering as a comprehensive topic.",https://arxiv.org/pdf/2304.10428 -adaptive test generation using a large language model,5,"The relevance to prompt engineering study is moderate. While the abstract discusses the use of a Large Language Model (Codex) for automated test generation, which involves prompting the model with certain inputs to produce desired outputs (tests in this case), the study is focused on practical application rather than a systematic review of prompt engineering techniques or the study of 'hard prefix prompts' specifically. The process involves an adaptive prompting mechanism to improve test generation, which is somewhat related to prompt engineering studies. Therefore, the relevance is rated a 5, as it addresses some elements of prompt design but does not specifically target a comprehensive review or study of prompt engineering methodologies.",https://arxiv.org/pdf/2302.06527 -on the risk of misinformation pollution with large language models,4,"While the paper addresses the use of large language models for generating misinformation and explores defensive strategies such as prompting, it is not specifically focused on prompt engineering study with regards to hard prefix prompts. The mention of prompting as a defense strategy does lend some relevance, but because the primary focus is on misinformation and not the systematic review of hard prefix prompts in prompt engineering, the relevance to the specific prompt engineering study is moderate to low.",http://arxiv.org/pdf/2305.13661 -codehelp: using large language models with guardrails for scalable support in programming classes,6,"Although the study does not focus on 'hard prefix prompts' specifically within the context of prompt engineering, it is related to the field in a broader sense. 
It examines the use of prompting strategies in the context of a tool called CodeHelp that utilizes large language models to assist students. The relevance rating is above average because understanding how prompts are engineered to generate non-solution revealing outputs in an educational setting can contribute valuable insights to prompt engineering research, especially in terms of designing controllable and ethical AI. However, as the paper's primary focus is on the deployment and effects of an educational tool rather than the systematic review of prompt engineering techniques, it is not rated higher.",https://arxiv.org/pdf/2308.06921 -reviewergpt? an exploratory study on using large language models for paper reviewing,4,"While the study explores the use of large language models (LLMs) in the context of scientific paper reviewing, which requires sophisticated prompting strategies, it does not specifically focus on 'hard prefix prompts' as the main subject of investigation. The relevance to prompt engineering is present as the research touches upon how different prompts can lead to different performance outcomes by the LLM (e.g., prompting with specific questions versus general review requests). However, since the core study does not concentrate on the engineering of prompts and their systematic review but rather on the application of LLMs in a specific task of paper reviewing, the rating is moderately relevant rather than highly relevant.",http://arxiv.org/pdf/2306.00622 -graphologue: exploring large language model responses with interactive diagrams,6,"The study is relevant to prompt engineering to a moderate degree. It does not directly deal with hard prefix prompts but explores the broader area of improving interactions with Large Language Models (LLMs) using novel prompting strategies and interface designs. By introducing an interactive system, Graphologue, which converts LLM responses into diagrams, it touches upon enhancing the efficacy of prompts and the ways in which users can solicit and handle information from an LLM. The connection to prompt engineering lies in the fact that extracting entities and relationships for diagrams requires careful prompt design to ensure that the LLM provides structured responses suitable for graphical representation. Although the focus is not on 'hard prefix prompts', the study does contribute to the field of prompt engineering by demonstrating alternative ways to optimize user interactions with LLMs.",https://arxiv.org/pdf/2305.11473 -visualizing linguistic diversity of text datasets synthesized by large language models,4,"The abstract presented describes a tool, LinguisticLens, which is not directly related to the study of 'hard prefix prompts' in prompt engineering. However, the tool's function of analyzing syntactic diversity of LLM-generated datasets can have tangential relevance to understanding how different prompting methods, including hard prefix prompts, might influence the generative outcomes of LLMs. 
Therefore, while the primary focus of the abstract is on visualization and analysis of textual diversity rather than on prompt engineering, the insights from such a tool could potentially inform prompt engineering studies to some extent, which warrants a moderate relevance rating.",https://arxiv.org/pdf/2305.11364 -"camel: communicative agents for ""mind"" exploration of large scale language model society",6,"The abstract indicates that the paper is related to 'inception prompting' which is a form of prompt engineering as it involves guiding language models. However, the main focus seems to be on the cooperative behavior of communicative agents rather than hard prefix prompts. The relevance is moderate because while the paper touches on prompt engineering, it does not appear to conduct a 'comprehensive systematic review on hard prefix prompts' as specified in the original prompt.",http://arxiv.org/pdf/2303.17760 -meta-learning the difference: preparing large language models for efficient adaptation,6,"The abstract discusses ways to adapt large pretrained language models to be more efficient in tasks such as dialogue completion, summarization, and multi-domain language modeling, focusing on model weight differences and structural changes without extensive finetuning. This is relevant to prompt engineering because it touches on the efficiency of adapting models to specific tasks, which is a significant aspect of prompt engineering. However, the text does not directly address 'hard prefix prompts' or their systematic review, thus it is moderately relevant but not a perfect match for the topic of prompt engineering study.",https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl_a_00517/2059907/tacl_a_00517.pdf -explainability for large language models: a survey,5,"The paper's focus on explainability for large language models (LLMs) is indirectly relevant to prompt engineering study because understanding how LLMs work can inform the design of more effective prompts. However, the paper does not directly address prompt engineering or specifically hard prefix prompts. The relevance is moderate since insights into explainability might overlap with some aspects of prompt engineering, such as understanding model behavior and improving performance through better prompts, without being the central focus.",https://arxiv.org/pdf/2309.01029 -adapting large language models via reading comprehension,6,"The study explores a novel method of training large language models using domain-specific reading comprehension texts, which could indirectly relate to prompt engineering by enhancing the model's ability to understand and respond to prompts more effectively in different domains. However, the study does not directly address the systematic review of 'hard prefix prompts' which would be the focus of an engineering study on prompt format and structure. Thus, the relevance is moderate as the improvements in domain-specific prompting could benefit from such a training approach, but it is not centrally focused on prompt engineering itself.",https://arxiv.org/pdf/2309.09530 -milan: masked image pretraining on language assisted representation,4,"The abstract describes an approach to masked image pretraining using language-assisted representation, rather than directly involving 'hard prefix prompts' in the conventional sense associated with language models or text-based prompting. 
However, the mention of 'prompting decoder architecture' suggests an involvement of prompting mechanisms, which could be related to the topic of prompt engineering in a broader sense. Hence, it receives a moderate relevance rating due to the possible connection through the architecture design that involves prompts in the image-language pretraining context, but it does not directly focus on prompt engineering studies for text-based models or the specific concept of 'hard prefix prompts'.",http://arxiv.org/pdf/2208.06049 -are hard examples also harder to explain? a study with human and model-generated explanations,6,"The study touches on aspects of prompting when it investigates size and hardness of the test samples and their impact on the quality of explanations generated by both humans and GPT-3. This is indirectly related to prompt engineering, as the quality of outputs generated by LLMs may depend on the prompt's difficulty level, which can inform prompt engineering strategies. However, the study's primary focus is on the explainability and the comparison of human and model-generated explanations, not specifically on engineering prompts to improve LLM performance.",https://arxiv.org/pdf/2211.07517 -a comprehensive survey on pretrained foundation models: a history from bert to chatgpt,5,"The provided abstract and TLDR are related to Pretrained Foundation Models (PFMs) and cover a wide range of aspects including their history, applications, and challenges across different data modalities. While prompt engineering is not directly mentioned, the study's focus on 'zero shot or few shot prompting' used by models like ChatGPT suggests an indirect relation to the topic. Thus, the paper may contain insights relevant for understanding the broader context of prompt engineering, particularly in how PFMs accommodate prompt-based interactions. However, since prompt engineering is a more specific discipline focused on the design and optimization of prompts to effectively leverage models like GPT or BERT, and the summary does not explicitly address hard prefix prompts or prompt engineering techniques, it receives a moderate rating of relevance.",http://arxiv.org/pdf/2302.09419 -in-context autoencoder for context compression in a large language model,6,"The abstract describes a method for compressing long contexts into compact memory slots, which although not directly related to 'hard prefix prompts', it does pertain to the broader field of prompt engineering by allowing for more efficient handling of prompts in large language models. This efficiency can impact how prompts are created, managed, and used in large language models. The connection to prompt engineering lies in the fine-tuning on instruction data, which likely would involve crafting prompts to generate desirable responses. However, since the primary focus appears to be context compression rather than prompt engineering techniques or the study of 'hard prefix prompts' specifically, the rating is not higher.",https://arxiv.org/pdf/2307.06945 -decomposed soft prompt guided fusion enhancing for compositional zero-shot learning,4,"The abstract presents a study on a method for compositional zero-shot learning (CZSL) using a framework called Decomposed Fusion with Soft Prompt (DFSP). While it is related to engineering prompts in the context of vision-language models, which indeed falls under the broader category of prompt engineering, it isn't directly focused on hard prefix prompts as mentioned in the initial request. 
The paper's relevance is therefore not exact but tangentially related since it involves the construction of vector combinations of learnable soft prompts, which can be considered a part of prompt engineering. However, the method described diverges from the original topic of 'hard prefix prompts,' which typically implies a non-modifiable text input for models, as opposed to the learnable prompts discussed here.",https://arxiv.org/pdf/2211.10681 -prompt-guided zero-shot anomaly action recognition using pretrained deep skeleton features,4,"The study is somewhat related to prompt engineering as it incorporates user prompt-guided zero-shot learning which hints at the use of prompts to guide the anomaly detection model. However, the focus is primarily on skeleton-based anomaly detection and the usage of prompts seems to be a part of the overall anomaly score calculation rather than the core study of different prompt engineering techniques or hard prefix prompts. Therefore, the relevance is moderate.",https://arxiv.org/pdf/2303.15167 -socratic models: composing zero-shot multimodal reasoning with language,5,"The abstract discusses the use of Socratic Models (SMs) for zero-shot multimodal reasoning which relates to the field of prompt engineering in that it involves effective prompting to enable communication and information exchange between models. While it's not explicitly focused on 'hard prefix prompts' as mentioned in the study prompt, the concept of multimodal-informed prompting falls within the broader scope of prompt engineering. Therefore, the relevance to prompt engineering is moderate but not directly aligned with the specific topic of hard prefix prompts.",https://arxiv.org/pdf/2204.00598 -partslip: low-shot part segmentation for 3d point clouds via pretrained image-language models,5,"While the abstract indicates the use of a pretrained image-language model, GLIP, in the context of 3D part segmentation leveraging multi-view priors and few-shot prompt tuning, it does not directly address prompt engineering study or the investigation of hard prefix prompts. However, the mention of 'few-shot prompt tuning' suggests a relevant connection to the disciplines of prompt engineering and the model's ability to interpret and process language-based inputs, which may overlap with the interests of those studying prompt design and effectiveness. Thus, the relevance is moderate as it sits at the intersection of neural language models and their application in visual tasks, without focusing explicitly on the study of prompt engineering.",https://arxiv.org/pdf/2212.01558 -few-shot anaphora resolution in scientific protocols via mixtures of in-context experts,6,"The study presents MICE, a method for few-shot anaphora resolution using in-context learning, which is relevant to prompt engineering in that it involves conditioning language models on specific inputs for desired outputs. The focus on in-context learning and efficiency in handling long sequences could inform strategies in prompt engineering, especially for complex tasks like anaphora resolution. However, the study is not directly focused on designing or optimizing prompts (i.e., 'hard prefix prompts'), but rather on a specific application of in-context learning. 
As such, the relevance is moderate but not high.",https://arxiv.org/pdf/2210.03690 -what language model architecture and pretraining objective work best for zero-shot generalization?,5,"While the abstract provided does not directly address prompt engineering or the study of hard prefix prompts specifically, it discusses related aspects of language model performance such as zero-shot generalization, model architectures, and pretraining objectives. Understanding how different architectures and objectives contribute to a model's ability to understand and process prompts is relevant to prompt engineering. However, since the focus is not on prompt engineering itself or on systematic reviews of prompts, the relevance is moderate.",http://arxiv.org/pdf/2204.05832 -go-tuning: improving zero-shot learning abilities of smaller language models,4,"The abstract discusses a method to improve zero-shot learning of smaller language models, which indirectly pertains to prompt engineering, as it may influence the way prompts are designed to interact with these models. However, the focus is on the self-supervised learning approach and the update of language models rather than the systematic study or design of hard prefix prompts specifically.",http://arxiv.org/pdf/2212.10461 -zero-shot recommendation as language modeling,6,"The abstract indicates a recommendation system that operates using pre-trained language models and unstructured text corpora, which is tangentially related to prompt engineering as it involves using language models in an innovative application. However, the focus on recommendation systems and matrix factorization suggests that the study does not directly address the creation or manipulation of prompts (i.e., the 'hard prefix prompts' mentioned in the original prompt). Therefore, the relevance is moderate because while it deals with language models, it may not directly contribute to our understanding of prompt engineering in the context of a comprehensive systematic review.",https://arxiv.org/pdf/2112.04184 -sam.md: zero-shot medical image segmentation capabilities of the segment anything model,5,"The title and abstract provided discuss a model that utilizes prompting (SAM) for image segmentation tasks, which is relevant to the concept of prompt engineering as it involves the use of prompts to direct the behavior of AI models. However, the focus is mainly on the zero-shot learning capabilities of SAM in medical image segmentation, rather than a systematic review of 'hard prefix prompts' in a broader context. The relevance to prompt engineering is moderate because it showcases an application of prompts in a specialized domain but does not address prompt engineering study in a comprehensive manner.",http://arxiv.org/pdf/2304.05396 -enabling calibration in the zero-shot inference of large vision-language models,4,"The abstract presents a study focused on the calibration of vision-language models, particularly CLIP, in the context of zero-shot inference. While the research addresses aspects such as prompt choice, its core contribution lies in proposing a modified temperature scaling method for calibrating the models rather than in-depth analysis or methodology development for 'prompt engineering' itself. 
The mention of prompt as one of the variables does increase the relevance to 'prompt engineering,' yet since it is not the main focus of the study, the relevance is moderate.",http://arxiv.org/pdf/2303.12748 -zero-shot text classification via self-supervised tuning,6,"The abstract discusses a novel approach to zero-shot text classification using self-supervised learning, which includes an alternative prompting method where the model learns to predict the first sentence of a paragraph. This is relevant to prompt engineering as it touches on the use of prompts to improve language model performance without relying on large-scale annotated data. However, the focus is more on the self-supervised learning aspect and the specific learning objective, rather than a deep dive into prompt engineering or hard prefix prompts specifically. Therefore, the relevance is moderate.",http://arxiv.org/pdf/2305.11442 -harnessing the zero-shot power of instruction-tuned large language model in end-to-end speech recognition,6,"The abstract deals with the utilization of an instruction-tuned large language model within the context of ASR, which relates to prompt engineering in the sense that precise instructions are used to guide the LLM. However, the focus is more on the application of LLMs for improving ASR rather than on the study or optimization of the prompts themselves (i.e., hard prefix prompts or prompt engineering techniques). The relevance is moderate because it showcases an implementation of prompt-instructed LLMs, but it does not directly address a systematic review or study on prompt engineering.",https://arxiv.org/pdf/2309.10524 -interaction-aware prompting for zero-shot spatio-temporal action detection,6,"The study describes the use of prompting as a mechanism to obtain more appropriate text features for zero-shot spatio-temporal action detection, which falls under the broader scope of prompt engineering. However, the context is very specialized and focuses more on the application to a specific domain (video processing and action detection) rather than the study of hard prefix prompts in general. The relevance is moderate because it deals with an application of prompts in a machine learning system, but it does not directly address a 'comprehensive systematic review on hard prefix prompts' as the original query specifies.",https://arxiv.org/pdf/2304.04688 -zero-textcap: zero-shot framework for text-based image captioning,4,"The abstract discusses the Zero-TextCap model for text-based image captioning. It touches on prompt engineering indirectly by mentioning the generation of candidate sentences from the prompt 'Image of' and the refinement process for improving caption quality and diversity. However, the main focus is on image captioning and OCR technology, rather than prompt engineering. The relevance to prompt engineering study is moderate because it deals with a specific use of prompts within a different field of study, i.e., text-based image captioning. The study is more relevant to the fields of computer vision and natural language processing than to the study of prompt engineering in general.",https://dl.acm.org/doi/pdf/10.1145/3581783.3612571 -are soft prompts good zero-shot learners for speech recognition?,6,"The abstract discusses 'soft prompts' in the context of automatic speech recognition and zero-shot learning, which is related to the field of prompt engineering, as it involves the manipulation of prompts to enhance model performance. 
However, the prompt specifically asks about 'hard prefix prompts,' and this study focuses on 'soft prompts,' not 'hard' ones. Therefore, the study is relevant to the broader field of prompt engineering but not directly relevant to the specified subset of 'hard prefix prompts.' The relevance rating acknowledges the connection to prompt engineering while also recognizing the divergence from the specified topic of 'hard prefix prompts'.",https://arxiv.org/pdf/2309.09413 -blended-nerf: zero-shot object generation and blending in existing neural radiance fields,5,"The presented work, Blended-NeRF, involves some aspects of prompt engineering, such as the use of text prompts to guide the editing of 3D scenes. This suggests a connection to natural language processing and the translation of text instructions to visual modifications. However, the focus seems to be more on the application of 3D neural radiance fields and the integration of new objects in existing scenes rather than on the detailed study of prompt engineering itself. Therefore, the relevance to prompt engineering as a primary study objective appears to be moderate.",https://arxiv.org/pdf/2306.12760 -zero-shot text-driven physically interpretable face editing,4,"The paper discusses text-driven face editing and involves the use of text prompts to guide the image editing process. Its relevance to prompt engineering is in the use of the CLIP model which involves understanding and correlating text descriptions to visual content. However, the primary focus of the paper seems to be on face editing using a novel method rather than on the study or improvement of prompt engineering techniques themselves. Therefore, it has some relevance due to the application of text prompts, but it is not a direct study on prompt engineering.",https://arxiv.org/pdf/2308.05976 -multi-view vision-prompt fusion network: can 2d pre-trained model boost 3d point cloud data-scarce learning?,4,"The abstract discusses the fusion of 2D pre-trained models with 3D point cloud data through a novel network (MvNet) for few-shot 3D classification, which includes aspects of prompt learning inspired by NLP. Although the application is primarily for 3D classification in computer vision and not for prompt engineering in a textual context, the inspiration from prompt learning and the mention of using prompts to describe prior knowledge for image models suggests some relevance to the topic of prompt engineering study. However, since the primary focus is not on textual or linguistic prompts but on prompts that bridge 3D and 2D model data, the relevance is moderate but not high.",https://arxiv.org/pdf/2304.10224 -grass: unified generation model for speech-to-semantic tasks,4,"The paper is relevant to prompt engineering to some extent as it involves generating target text conditioned on a task-related prompt for audio data. Although it does focus on utilizing prompts for refining the production of target text, which is an aspect of prompt engineering, it specifically addresses speech-to-semantic tasks rather than hard prefix prompts within a text-input domain. 
Therefore, while it has some relevance due to the usage of prompts in the model's training and task execution, it is not a direct study on hard prefix prompts, reducing its relevance to the specific area of prompt engineering under review.",https://arxiv.org/pdf/2309.02780 -ontotype: ontology-guided zero-shot fine-grained entity typing with weak supervision from pre-trained language models,6,"The paper discusses a method which leverages pre-trained language models (PLMs) for fine-grained entity typing (FET) and specifically mentions how it ensembles multiple PLM prompting results, suggesting a novel use of prompts in model processing. While the main focus of the study is on FET and it introduces OntoType, a zero-shot ontology-guided FET method, the paper still has relevance to prompt engineering since it deals with generating and refining prompts for PLMs to improve typing resolution. The significance of prompt engineering is not the central theme of the paper, but prompts play a significant role in the described methodology, which aligns with how prompts can be engineered to work with ontological structures. Therefore, the paper is somewhat relevant to prompt engineering but not directly focused on it.",http://arxiv.org/pdf/2305.12307 -lt at semeval-2023 task 1: effective zero-shot visual word sense disambiguation approaches using external knowledge sources,6,"The paper abstract is partially relevant to prompt engineering study as it discusses different textual prompting strategies as they relate to multi-modal machine learning and zero-shot capabilities. However, the main focus seems to be on Visual Word Sense Disambiguation (VWSD) using pre-trained visiolinguistic models and external knowledge sources, rather than a direct emphasis on hard prefix prompts or a comprehensive analysis of prompt engineering. The relevance rating of 6 reflects that prompt engineering is a supporting concept in the study rather than the primary focus.",https://aclanthology.org/2023.semeval-1.64.pdf -odor descriptor understanding through prompting,6,"The study addresses a niche aspect of prompt engineering by focusing on generating word embeddings specific to olfactory descriptors, which implies a form of prompt optimization for a specialized application. The relevance to prompt engineering is moderate because it deals with improving the interaction between an NLP model and domain-specific language, which is an important aspect of prompt engineering. However, the paper does not seem to offer a broad investigation into hard prefix prompts or their systematic review, but rather presents practical methods for a specific type of prompting to improve performance in a specialized benchmark.",http://arxiv.org/pdf/2205.03719 -winning solution for the cvpr2023 visual anomaly and novelty detection challenge: multimodal prompting for data-centric anomaly detection,6,"The study discusses the use of multimodal prompts within the context of zero-shot anomaly segmentation, which is related to the field of prompt engineering due to the involvement of customized prompts for model adaptation. While the study may not directly address 'hard prefix prompts', the concept of utilizing expert knowledge and context to create prompts for foundation models exhibits a component of prompt-engineering techniques. 
This relevance is not direct, as prompt engineering typically involves text-based language prompts for natural language models as opposed to prompts for visual anomaly detection; hence, the rating is above the midpoint but not fully aligned with the focus on prompt engineering.",https://arxiv.org/pdf/2306.09067 -a fine-grained comparison of pragmatic language understanding in humans and language models,6,"The study addresses a comparison of pragmatic language understanding in both humans and language models, which indirectly relates to prompt engineering since the effectiveness of prompts can be influenced by a model's ability to deal with pragmatics and non-literal meanings. However, the study does not focus specifically on prompt engineering or on the design, structure, or optimization of prompts ('hard prefix prompts'), therefore the relevance is not direct and merits a mid-range rating.",http://arxiv.org/pdf/2212.06801 -knowledge-in-context: towards knowledgeable semi-parametric language models,4,"While the discussed paper presents a novel semi-parametric language model architecture that is closely related to enhancing the performance of language models, it does not directly address prompt engineering, especially with regards to 'hard prefix prompts.' The architecture indeed involves prompting in the broader sense, as it uses prompts to generate output answers, but the main focus of the study lies in knowledge integration and model efficiency, rather than on the design or study of prompts themselves. Therefore, the relevance to prompt engineering is somewhat tangential and not the central theme of the paper.",http://arxiv.org/pdf/2210.16433 -speechx: neural codec language model as a versatile speech transformer,5,"While the abstract does discuss the use of audio-text prompts for speech generation and how SpeechX leverage task-dependent prompting for various speech tasks, it does not specifically address hard prefix prompts in the context of prompt engineering within the text generation domain, which is generally implied by prompt engineering. There is relevance to prompts and task-dependent prompting, but not directly to the study of hard prefix prompts in a systematic review sense, hence the intermediate score.",https://arxiv.org/pdf/2308.06873 -an investigation of llms' inefficacy in understanding converse relations,5,"The abstract presents a study on how LLMs process and understand converse relations, which relates to their semantic understanding capabilities. While this does touch on the issue of understanding structured semantics and could have indirect implications for prompt engineering (e.g., designing prompts that account for the converse relations might improve LLMs' performance), the study is not directly focused on prompt engineering or the effectiveness of hard prefix prompts. Thus, the relevance is moderate as the findings might inform prompt engineering strategies indirectly, but it is not the central theme of the study.",https://arxiv.org/pdf/2310.05163 -zero-shot generalization in dialog state tracking through generative question answering,6,"The abstract discusses the use of a generative question-answering framework with a conditional language model for improving dialog state tracking, which indirectly relates to prompt engineering in that it deals with the generation of language model queries (which can be considered as prompts) for unseen constraints and slots. 
The system is designed to interpret natural language queries, akin to how prompts are used to extract information from language models. However, the specific focus of the study is not on prompt engineering itself or on the systematic review of 'hard prefix prompts', but rather on the application of a generative language model to dialog systems for zero-shot adaptation. Therefore, while the study is relevant to the general field of language model applications (and thus has some relevance to prompt engineering), it does not directly address the subject of prompt engineering in relation to hard prefix prompts.",https://aclanthology.org/2021.eacl-main.91.pdf -differentially private in-context learning,6,"The study touches on the deployment of large language models (LLMs) and their adaptation to new tasks, which relates to prompt engineering in the broader sense of preparing LMs for specific applications. However, the focus is primarily on maintaining privacy via Differentially Private In-context Learning (DP-ICL), and not on the prompt engineering techniques such as 'hard prefix prompts'. Although prompt engineering may rely on data privacy principles when integrating private data, the abstract lacks a direct mention or analysis of 'hard prefix prompts', yielding a moderate relevance score.",https://arxiv.org/pdf/2305.01639 -reward modeling for mitigating toxicity in transformer-based language models,4,"While the study focuses on mitigating toxicity in language models, which is related to improving AI behavior and output quality, it is tangential to the specific topic of 'prompt engineering', particularly 'hard prefix prompts'. Prompt engineering involves crafting inputs to guide AI models more effectively, whereas this study seems centered on a method (Reinforce-Detoxify) for reducing toxicity. Although related, it is not a direct study of prompt engineering techniques, thus the moderate rating reflects this indirect relevance.",https://arxiv.org/pdf/2202.09662 -"tryage: real-time, intelligent routing of user prompts to large language models",4,"While the described paper, 'tryage: real-time, intelligent routing of user prompts to large language models,' indirectly relates to the field of prompt engineering by addressing optimal model selection based on input prompts, it does not explicitly focus on 'hard prefix prompts' or the systematic review of these prompts. Prompt engineering generally refers to the design of input prompts to achieve better performance or more relevant responses from language models. The paper's relevance to prompt engineering is in its ability to select the best-suited model for a given prompt, which could be a component of a larger prompt engineering strategy. However, the absence of specific focus on 'hard prefix prompts' or systematic review thereof limits the relevance score.",https://arxiv.org/pdf/2308.11601 -tempo: prompt-based generative pre-trained transformer for time series forecasting,5,"The relevance to prompt engineering study is moderate. The described TEMPO framework does incorporate 'selection-based prompts' which indicates some element of prompt engineering. However, the core focus is on time series forecasting using generative transformers rather than the systematic review or study of hard prefix prompts in general. 
Therefore, the relevance is partial as it pertains to adapting prompts for time series tasks specifically rather than prompt engineering as a broader field.",https://arxiv.org/pdf/2310.04948 -phenaki: variable length video generation from open domain textual description,4,"The abstract describes a model, Phenaki, which deals with generating videos from textual descriptions using a novel representation and learning approach. This is relevant to prompt engineering to the extent that it involves creating prompts (textual descriptions) that are used to generate content (videos). However, the focus of the study appears to be more on video synthesis and representation learning rather than on the design or optimization of the textual prompts themselves ('hard prefix prompts'). Therefore, the relevance is moderate, indicating a tangential connection to prompt engineering, particularly in how text prompts are used to generate complex media like videos, rather than a direct study on the engineering of prompts.",http://arxiv.org/pdf/2210.02399 -can language models automate data wrangling?,5,"The content seems to address the utilization of language models for data wrangling tasks, and while it does imply a certain level of task design and user interaction with language models (which could be related to prompt engineering), the focus on data wrangling rather than prompt design specifically for eliciting desired outputs from a language model suggests that this isn't a comprehensive study on hard prefix prompts. There is potential crossover in terms of understanding how prompts work in the context of data wrangling, but it is not directly about prompt engineering.",https://link.springer.com/content/pdf/10.1007/s10994-022-06259-9.pdf -textdiffuser: diffusion models as text painters,4,"While the study introduces TextDiffuser, which involves generating images from text prompts and might have indirect applications in understanding and improving how models handle text prompts, the main focus is on image generation and enhancing text coherence within visual content. The mention of prompts relates more to the input for image generation rather than the study of prompt engineering itself. Therefore, the relevance to prompt engineering study is moderate as the techniques developed could be tangentially useful, but it is not the central theme of the research.",http://arxiv.org/pdf/2305.10855 -vector representations of idioms in conversational systems,5,"The study is partially relevant to prompt engineering as it touches on how training on specific language constructs (in this case, idioms) improves the performance of conversational systems. While it does not directly address 'hard prefix prompts' or the systematic review of such prompts, understanding how idiomatic expressions are handled by NLP systems can inform prompt design strategies and might be leveraged in the creation of more sophisticated prompts. This relevance is more tangential than direct to the focus of prompt engineering, thus warranting a mid-range rating.",http://arxiv.org/pdf/2205.03666 -generalization properties of retrieval-based models,4,"While the abstract discusses retrieval-based models and their generalization properties, which are relevant to the broader field of machine learning and could potentially be applied to prompt engineering with respect to selecting the best prompts in a retrieval-based manner, it does not directly address 'hard prefix prompts' or prompt engineering specifically. 
Therefore, its relevance to a systematic review on hard prefix prompts in prompt engineering studies is tangential rather than central.",http://arxiv.org/pdf/2210.02617 -attention satisfies: a constraint-satisfaction lens on factual errors of language models,4,"The abstract provides insights into the internal mechanisms of Large Language Models (LLMs) related to factual accuracy, which is peripherally relevant to prompt engineering. Prompt engineering often involves crafting prompts to elicit accurate and reliable responses from a model. The study's focus on the attention mechanism and factual accuracy can be indirectly useful in understanding how prompts might be structured to improve the likelihood of factually correct outputs. However, the study does not directly investigate hard prefix prompts or prompt engineering techniques, therefore the rating isn't higher.",https://arxiv.org/pdf/2309.15098 -gpt4mia: utilizing geneative pre-trained transformer (gpt-3) as a plug-and-play transductive model for medical image analysis,4,"The relevance of the paper to prompt engineering is tangential rather than direct. It discusses using GPT-3 as a tool for medical image analysis, which implies a level of prompt engineering in structuring the interaction between the language model and the image analysis tasks. The paper's focus on technical treatments for efficiency and effectiveness might involve innovative prompt design strategies, which is pertinent to prompt engineering. However, because the central theme is the application of GPT-3 to medical image analysis rather than prompt engineering itself, the relevance is moderate.",http://arxiv.org/pdf/2302.08722 -plato-ad: a unified advertisement text generation framework with multi-task prompt learning,4,"The abstract discusses PLATO-Ad, a framework for online advertisement text generation that incorporates multi-task prompt learning, which is related to prompt engineering. However, it does not focus specifically on 'hard prefix prompts' or systematic reviews of them, but rather the application of prompt learning to advertisement text generation. The relevance to prompt engineering exists due to the implementation of prompts in the learning process, but because the focus is on a specific application and not on a broad study or review of prompts, the rating is not higher.",https://aclanthology.org/2022.emnlp-industry.52.pdf -rgb-t tracking via multi-modal mutual prompt learning,6,"The study uses the concept of 'prompt learning' in the context of computer vision, specifically for RGB-T tracking, which demonstrates an application of prompt engineering to improve the interaction between different data modalities (visible and thermal images) and enhance the model's performance. The relevance to prompt engineering is evident in the design of the 'lightweight prompter' and the use of attention mechanisms as a form of information transfer, which can be seen as a specialized application of prompts in machine learning. However, the study does not directly focus on hard prefix prompts or their systematic review, which limits its relevance to the specific area of prompt engineering referred to in the original query. 
It is more related to the application and implementation of prompts in a practical task rather than the study of prompt engineering itself.",https://arxiv.org/pdf/2308.16386 -prefixmol: target- and chemistry-aware molecule design via prefix embedding,6,"The provided title and abstract refer to a generative model using 'prefix embeddings,' which can be seen as a form of prompt engineering, albeit in a different domain (molecular design rather than text generation). The concept of prefix embeddings as contextual prompts shares a conceptual similarity with prefix prompts in text-based models, as they both aim to guide the generation process under specific conditions. However, the application is quite niche and specific to chemistry and drug design, which means the focus is not on prompt engineering in the general sense but is instead applied in a specialized context. Therefore, the relevance to prompt engineering studies is moderate but not direct, as it uses similar concepts in a domain-specific application.",http://arxiv.org/pdf/2302.07120 -towards a unified view on visual parameter-efficient transfer learning,4,"While the study presents a framework in parameter efficient transfer learning (PETL) and investigates prefix-tuning, it does so in the context of vision models rather than language models, which is the primary domain for prompt engineering. However, the concept of a 'hard prefix prompt' isn't directly addressed, but the methods and findings could be considered somewhat relevant for those interested in the extension of prompt engineering concepts to the vision domain. Thus, relevance is moderate but not directly aligned with the specific focus of hard prefix prompts in prompt engineering studies.",http://arxiv.org/pdf/2210.00788 -evaluating adaptive pedagogical agents' prompting strategies effect on students' emotions,5,"The relevance to prompt engineering is moderate, as the study examines the impact of different prompting strategies on students' emotions within an Intelligent Tutoring System (ITS). While not directly focused on 'hard prefix prompts' or prompt engineering in the AI language model sense, the research does explore how different types of prompts can influence user experience and engagement, which can be parallel to how prompts are engineered to guide AI behavior. However, the specific connection to 'hard prefix prompts' in prompt engineering is not made, which limits the direct relevance to the topic.",https://hal.archives-ouvertes.fr/hal-02015693/file/Bouchet%20et%20al.%20-%202018%20-%20Evaluating%20adaptive%20pedagogical%20agents%E2%80%99%20prompting%20.pdf -impact of different pedagogical agents' adaptive self-regulated prompting strategies on learning with metatutor,5,"The study focuses on the effect of prompting strategies on learning outcomes within an educational tool, which marginally relates to prompt engineering as it deals with the design and effectiveness of prompts. Prompt engineering specifically pertains to the construction and optimization of prompts to improve the performance of artificial intelligence systems. While the study on pedagogical agents' prompting strategies is adjacent to this domain, its direct application to prompt engineering in AI is not clear. Therefore, the relevance is moderate.",http://escholarship.mcgill.ca/downloads/9s161b83t -can chatgpt understand too? 
a comparative study on chatgpt and fine-tuned bert,6,"The abstract addresses the understanding ability of ChatGPT as compared to fine-tuned BERT models and mentions the use of advanced prompting strategies to improve ChatGPT's understanding. While the main focus is on the comparative analysis of model performance, the mention of prompting strategies implies some relevance to prompt engineering. However, the abstract does not offer a detailed exploration or direct focus on hard prefix prompts or their systematic review, which reduces its direct relevance to the specified topic of prompt engineering study.",http://arxiv.org/pdf/2302.10198 -revisiting the plastic surgery hypothesis via large language models,5,"The abstract describes how Large Language Models (LLMs) can be utilized for Automated Program Repair (APR) and discusses the relevance of the plastic surgery hypothesis in this context. The mention of 'prompting strategy' indicates some level of relevance to prompt engineering, as it suggests that the study explores how to effectively use prompts to improve model performance. However, the focus seems to be on the application of LLM-based APR rather than on the study of prompt engineering itself. Therefore, the relevance to prompt engineering is moderate, as the paper likely touches on elements of prompt engineering as part of APR, but is not centered on prompt engineering as its primary topic of investigation.",http://arxiv.org/pdf/2303.10494 -instructprotein: aligning human and protein language via knowledge instruction,6,"The abstract describes InstructProtein, a large language model trained for bidirectional human and protein language comprehension, which involves specialized prompt engineering to facilitate this unique form of language alignment. Prompt engineering is relevant here, as it is necessary to construct instructions that enable the model to translate between human language and protein sequences. The knowledge graph-based instruction framework mentioned can be seen as an advanced form of prompt engineering, designed to overcome issues of annotation imbalance and instruction deficits. However, the content is more focused on the application within a bioinformatics context rather than prompt engineering as a standalone subject. Therefore, while prompt engineering is a component of the research, the paper is not primarily about prompt engineering in the broader sense but rather a specific application of it.",https://arxiv.org/pdf/2310.03269 -evaluation of gpt-3.5 and gpt-4 for supporting real-world information needs in healthcare delivery,6,"The abstract highlights the need for further research in prompt engineering to improve the performance of large language models (LLMs) in healthcare settings. It mentions the variability in the quality of responses by GPT-3.5 and GPT-4 to specific information needs, which implies that there is room for improvement in how prompts are designed to achieve better results. This is relevant to the study of prompt engineering since it suggests that better-designed prompts could potentially lead to more accurate and useful responses from LLMs. However, the abstract does not directly focus on 'hard prefix prompts' but rather on the broader application of LLMs in healthcare. 
Therefore, it is somewhat relevant but not fully focused on prompt engineering, hence the rating of 6.",http://arxiv.org/pdf/2304.13714 -chatspot: bootstrapping multimodal llms via precise referring instruction tuning,4,"The study primarily focuses on improving human-AI interactivity within multimodal large language models by introducing a more sophisticated method of instruction via referring prompts. While this does involve some form of prompt engineering, specifically in relation to how the model receives and understands instructions, it is not strictly concerned with 'hard prefix prompts' as it seems to combine multiple input modalities (language, clicks, drag-and-drop, drawings). The relevance is thus moderate because it does intersect with the concept of prompt design and efficacy but does not explicitly address the engineering of hard-coded text prompts within a linguistic context.",https://arxiv.org/pdf/2307.09474 -chill: zero-shot custom interpretable feature extraction from clinical notes with large language models,6,"The described study focuses on using expert-crafted queries to generate interpretable features from health records, which indirectly relates to prompt engineering since it involves crafting queries (prompts) for a model to generate useful outputs. However, the study applies the technique for feature extraction from clinical notes rather than the systematic review of 'hard prefix prompts,' which is more specific to improving prompt engineering methods or understanding their efficacy. Therefore, the relevance is moderate but not directly focused on the prompt engineering field as defined by the initial prompt.",http://arxiv.org/pdf/2302.12343 -interleaving pre-trained language models and large language models for zero-shot nl2sql generation,4,"The abstract discusses the development of a framework (ZeroNL2SQL) that involves using prompts to guide language models for a specialized task (NL2SQL generation). Although the specific term 'hard prefix prompts' is not used, the concept of using prompts to direct language model behavior is central to the study. This indicates some relevance to the study of prompt engineering but not directly focused on hard prefix prompts or a systematic review of them. Therefore, it is somewhat relevant to prompt engineering but not fully aligned with a comprehensive systematic review on that specific topic.",http://arxiv.org/pdf/2306.08891 -can large language models be good path planners? a benchmark and investigation on spatial-temporal reasoning,6,"The title and abstract indicate research dealing with how large language models can handle tasks requiring spatial-temporal reasoning, which includes the analysis of few-shot prompting methodologies. These methodologies are a subset of prompt engineering, as they explore how to design prompts that enable language models to perform spatial reasoning tasks. While the focus is not explicitly on 'hard prefix prompts' as the prompt engineering study may suggest, few-shot prompting, as part of prompt engineering, is relevant because it discusses the effectiveness of different prompting techniques. Therefore, the study is indirectly related to the broader field of prompt engineering but does not directly address the comprehensive systematic review on hard prefix prompts.",https://arxiv.org/pdf/2310.03249 -fabricator: an open source toolkit for generating labeled training data with teacher llms,4,"The relevance to prompt engineering is moderate. 
The abstract discusses the use of LLMs (Large Language Models) to generate labeled data for training other NLP models, which does involve prompting the LLM to produce specific outputs. The process of designing these prompts to effectively direct the LLM's output towards useful labeled data creation is related to 'prompt engineering.' However, the abstract does not specifically mention 'hard prefix prompts' nor does it focus on a comprehensive systematic review of such. Therefore, while the topic is related to prompt engineering, it does not fully align with a 'comprehensive systematic review on hard prefix prompts.' Thus, the given rating is moderately relevant, but not directly on point.",https://arxiv.org/pdf/2309.09582
-scalable multi-robot collaboration with large language models: centralized or decentralized systems?,6,"The abstract describes research on planning frameworks for pre-trained large language models (LLMs) in multi-robot task scenarios, addressing token efficiency, which relates to the token budget and potentially to prompt construction. While the study isn't focused specifically on 'hard prefix prompts', it does engage with prompt engineering in the context of task planning for robots using LLMs. The relevance to prompt engineering is indirect through its exploration of token-efficient LLM frameworks and mention of prompting techniques, which could include prompt design or optimization. However, the core focus is on the application within robotics rather than the systematic study or review of prompt engineering itself.",https://arxiv.org/pdf/2309.15943
-autotutor: a tutor with dialogue in natural language,4,"While the 'autotutor' paper focuses on a system that uses dialogue in natural language, which is tangentially related to prompt engineering in the sense that it deals with natural language processing and potentially the design of prompts for tutorial purposes, it does not directly address 'hard prefix prompts' or systematic reviews related to prompt engineering studies. The connection to prompt engineering is more incidental as it relates to dialogue patterns and design, which might apply to the field but are not centrally concerned with the systematic approaches to hard prefix prompts specifically. Therefore, the relevance is moderate, as some of the underlying principles may be applicable, but the core subject of the study diverges from the specific focus on prompt engineering.",https://link.springer.com/content/pdf/10.3758/BF03195563.pdf
-uniex: an effective and efficient framework for unified information extraction via a span-extractive perspective,4,"The abstract mentions the use of schema-based prompts within the UniEX framework for universal information extraction, which touches upon the aspect of utilizing prompts in AI tasks. However, it does not specifically address 'hard prefix prompts' or conduct a 'comprehensive systematic review' on prompt engineering. The focus appears to be on information extraction tasks and improving their efficiency through a unified extractive framework rather than on the study of prompt engineering itself. Consequently, it has some relevance due to the mention of prompts but is not centrally focused on prompt engineering studies.",http://arxiv.org/pdf/2305.10306
-visual chain of thought: bridging logical gaps with multimodal infillings,6,"The study introduces VCoT, which uses a form of prompt engineering by leveraging chain of thought prompting, which is relevant to prompt engineering study.
However, the focus is more on multimodal integration and recursive infillings to improve reasoning in sequential data, rather than on prompt engineering with hard prefixes specifically. The relevance is moderate because it does involve prompt engineering techniques, though it is not focused on the systematic review of hard prefix prompts.",http://arxiv.org/pdf/2305.02317
-beyond bounding box: multimodal knowledge learning for object detection,5,"The paper deals with the use of language prompts for improving object detection in machine learning, indicating relevance to prompt engineering in that it involves designing prompts to facilitate learning. However, the study's primary focus is on multimodal knowledge learning in object detection, rather than on prompt engineering specifically. It discusses the creation and use of prompts as part of the method but does not center around designing or systematically reviewing hard prefix prompts, which would be more directly related to prompt engineering studies.",http://arxiv.org/pdf/2205.04072
-meta learning to bridge vision and language models for multimodal few-shot learning,4,"The presented abstract discusses a multimodal meta-learning approach to bridge vision and language models, aiming to improve few-shot learning by automatically adapting to new tasks. The relevance to 'prompt engineering' is tangential since the abstract mentions induction of tasks without hand-engineering and could relate to auto-generating or tuning prompts in a broad sense. However, it deals more with meta-learning and the interplay between different modalities than the specific study of hard prefix prompts as described in the initial request. Therefore, it is only moderately related to prompt engineering as the focus of the paper is on model adaptation and few-shot learning rather than prompt design or engineering.",http://arxiv.org/pdf/2302.14794
-what matters in training a gpt4-style language model with multimodal inputs?,5,"The abstract discusses various factors that affect the training of a GPT4-style multimodal language model, among which the influence of diversified prompts on the instruction-following ability of the trained models is mentioned. This indicates some relevance to prompt engineering, as understanding how prompts affect model performance is a subset of prompt engineering. However, the focus of the study includes a broader range of topics such as network structures, training data, and benchmarks, which are not exclusively concerned with prompt engineering. Hence, the rating is at the midpoint to reflect this partial relevance.",https://arxiv.org/pdf/2307.02469
-multimodal prompt transformer with hybrid contrastive learning for emotion recognition in conversation,6,"The abstract provided references the use of a 'Multimodal Prompt Transformer' which may imply some relevance to prompt engineering, particularly as it pertains to encoding textual features and facilitating multimodal fusion in the context of emotion recognition. However, the primary focus of the study is on emotion recognition in conversation rather than on hard prefix prompts or prompt engineering in a broader sense. Prompt engineering typically involves the strategic design of input prompts to produce desired outputs from AI models, which is a tangential aspect of the described research.
Therefore, while there is some relevance, it is not the core subject of the study.",https://dl.acm.org/doi/pdf/10.1145/3581783.3611805
-a comparison of prompt delays with trial-and-error instruction in conditional discrimination training,4,"The study focuses on 'prompt delays' within the context of conditional discrimination training, which is relevant to learning processes and instruction strategies but does not directly address 'prompt engineering' as related to computational models or hard prefix prompts. However, considering that 'prompt delays' could potentially be related to the timing and delivery aspects of prompts in computational terms, the study might offer some indirect insights useful for prompt engineering, especially in the nuanced aspects of timing and response effectiveness. Therefore, a moderate relevance rating is provided.",https://europepmc.org/articles/pmc6269381?pdf=render
-an automated prompting system for smart environments,4,"While the document seems to deal with automation and smart systems, which could involve some form of prompt engineering, the focus on 'hard prefix prompts' is unclear without further content. A 'fully automating prompting system' suggests relevance to automated prompt generation, but the extent to which this aligns with 'hard prefix prompts' is not specified. The relevance rating could be higher if the paper's approach to prompting systems includes or overlaps with the structured method of prompt engineering implied by hard prefix prompts.",http://www.eecs.wsu.edu/~cook/pubs/icost11p2.pdf
-cocomo: computational consciousness modeling for generative and ethical ai,4,"The mention of 'prompt template formulation' implies some relevance to the area of prompt engineering, as this involves crafting inputs that guide the behavior of AI models. However, the CoCoMo model appears to focus more broadly on ethical and emotional intelligence in AI, rather than specifically on the study of 'hard prefix prompts' in prompt engineering. The relevance is present but not the primary focus of the study.",http://arxiv.org/pdf/2304.02438
-let me check the examples: enhancing demonstration learning via explicit imitation,6,"The abstract discusses Imitation-Demo, a method to enhance demonstration learning for prompt-based predictions. While it does not directly mention 'hard prefix prompts,' it addresses prompt-demonstration dependencies and the optimization of prompt-based learning, which is relevant to prompt engineering. However, since the focus is on imitation and contrastive learning mechanisms rather than the systematic study of hard prefix prompts, the relevance is moderate rather than high.",http://arxiv.org/pdf/2209.00455
-recruiting patients and collecting data for an observational study using computerised record pop-up prompts: the prog-res study,4,"The study described in the abstract demonstrates the practical application of electronic prompts in the context of patient recruitment and data collection for medical studies, which indirectly relates to prompt engineering as it showcases a real-world use case of prompts facilitating a task, in this case, recruitment, and data collection. However, the study is centered around improving operational aspects of medical research rather than exploring the theoretical or methodological aspects of prompt design, development, or optimization in automated systems or artificial intelligence, which are the central themes of prompt engineering.
The relevance is therefore moderate as the conceptual link exists but is tangential to the principal focus of prompt engineering as a field.",https://www.cambridge.org/core/services/aop-cambridge-core/content/view/E93B7CE703F600A1423292CF4FD74CAA/S1463423612000047a.pdf/div-class-title-recruiting-patients-and-collecting-data-for-an-observational-study-using-computerised-record-pop-up-prompts-the-prog-res-study-div.pdf
diff --git a/data/semantic_scholar_data/semantic_scholar_human_review_papers_without_pdf.csv b/data/semantic_scholar_data/semantic_scholar_human_review_papers_without_pdf.csv
deleted file mode 100644
index 8536310..0000000
--- a/data/semantic_scholar_data/semantic_scholar_human_review_papers_without_pdf.csv
+++ /dev/null
@@ -1,59 +0,0 @@
-Title,Probability,Reasoning
-why do pretrained language models help in downstream tasks? an analysis of head and prompt tuning,6,"The paper's abstract describes an analysis of head tuning and prompt tuning, which are highly relevant to the study of prompt engineering as a concept. Prompt tuning particularly involves the process of adjusting prompts to achieve better performance in downstream tasks, which is a subset of prompt engineering. However, the abstract suggests a specific focus on the theoretical underpinnings of why pretrained language models are effective, using generative models like HMMs for analysis. The relevance to prompt engineering is therefore significant but not completely aligned, as it does not explicitly address the systematic review of 'hard prefix prompts' or the practical aspect of designing prompts, which might be expected from a 'comprehensive systematic review on hard prefix prompts'."
-surreal vr pong: llm approach to game design,4,"The title and abstract provided discuss the application of generative models and computational creativity in the context of 3D game design and propose the integration of prompt-based creation into the gameplay itself. While it does not specifically focus on 'hard prefix prompts,' it does touch on prompt engineering by suggesting that prompts can be an element within game mechanics. This indicates some relevance to the study of prompt engineering, but it is not a direct or comprehensive examination of hard prefix prompts in systematic reviews or other studies."
-zero-shot prompting for code complexity prediction using github copilot,6,"The relevance of this study to prompt engineering is somewhat indirect. The study investigates the capacity of GitHub Copilot, which leverages a Large Language Model, to predict code complexity in a zero-shot manner. While this addresses the model's ability to understand and generate responses in a specific technical domain without prior training, it does not directly explore the engineering or optimization of prompts (i.e., hard prefix prompts). However, the study does touch on a key aspect of prompt-based interactions with AI, which is the model's performance on tasks with no fine-tuning. This suggests relevance in terms of understanding the capabilities and limitations of LLMs like GPT3 when prompted with untrained tasks, which is a component of prompt engineering."
-investigating causal understanding in llms,6,"The study discussed in the abstract is only partially relevant to prompt engineering since the investigation focuses on the causal understanding capabilities of LLMs rather than specifically on 'hard prefix prompts.'
However, the research touches on how varying prompt presentations can affect LLM responses, which is related to the concept of prompt engineering. The relevance lies in understanding the influence of presentation form, both in hard prefix prompts and other types of prompting. The rating is not higher because the study does not directly focus on 'hard prefix prompts,' which seems to be the specific area of interest."
-quantifying memorization across neural language models,4,"While the abstract discusses the issue of memorization in language models, which is indirectly related to how models respond to prompts, it does not directly address prompt engineering, particularly the study of 'hard prefix prompts.' The information provided is relevant to the construction and reliability of prompts in the context of avoiding the elicitation of memorized data, but it does not specifically focus on engineering prompts for the systematic review that the original query suggests. Therefore, the relevance is moderate but not directly applicable to the study of hard prefix prompts in prompt engineering."
-what initiates evidence‐based reasoning?: situations that prompt students to support their design ideas and decisions,6,"The document is somewhat relevant to the study of prompt engineering as it discusses the situations that lead students to use evidence-based reasoning, which is an important factor in understanding how to structure prompts to elicit informed responses. Although it focuses on evidence-based reasoning in the context of engineering education rather than the specific area of 'hard prefix prompts', understanding the broader principles of how prompts can initiate certain types of thinking is pertinent to prompt engineering."
-a prompt-aware neural network approach to content-based scoring of non-native spontaneous speech,4,"The study focuses on using neural network techniques to assess non-native spontaneous speech, which includes using prompts as a condition for the model. Although this involves engineering a model to interact with prompts, the core emphasis is on automatic assessment rather than on the systematic review or deep exploration of 'hard prefix prompts', which would be central to prompt engineering studies. Therefore, the relevance is moderate as it only touches on prompt-related aspects within a broader application context."
-how do different reflection prompts affect engineering students’ academic performance and engagement?,6,"The abstract describes a study that relates to prompt engineering in the educational sense and not in the AI field. It addresses the effectiveness of different types of reflection prompts (generic versus specific) on students' performance and engagement in an engineering course context. While it is not directly about the engineering of AI-based prompt systems, the insights regarding how specificity in prompts can influence outcomes may be partially relevant to the nuances involved in designing prompts for AI systems. However, the primary focus of the study on academic performance and engagement of engineering students limits the relevance to prompt engineering in AI. Thus, the rating reflects moderate relevance due to indirect connections that could be drawn between educational prompting strategies and AI prompt design considerations."
-v2p: vision-to-prompt based multi-modal product summary generation,6,"The paper presents a multi-modal product summary generation framework that uses a Generative Pre-trained Language Model with prompts derived from visual attributes, which aligns with the concept of prompt engineering in the sense that it involves designing prompts for guiding text generation. However, the focus seems to be more on the multi-modal interaction and summary generation, rather than on the systematic study of hard prefixes or prompt structures themselves. Therefore, while it is relevant due to its use of prompts, it may not directly address the nuances of prompt engineering as pertains to hard prefix prompts specifically, hence the rating of 6."
-prompt deep light-weight vessel segmentation network (plvs-net),6,"The relevance to prompt engineering in this study is moderate. The use of 'prompt blocks' within the network architecture indicates an innovation related to how the network processes information, which might be relevant to prompt engineering in the broader sense of designing inputs that improve the performance of a neural network. However, the primary focus appears to be on biomedical image segmentation, rather than the development or study of prompting methods for natural language processing or other general AI applications. Thus, while the term 'prompt' is used, it may not directly align with the typical context of prompt engineering, which is often related to improving AI responses or behavior based on textual input."
-the utility of an evidence-based lecture and clinical prompt as methods to improve quality of care in colorectal cancer screening,4,"The study appears to investigate the effectiveness of clinical prompts in a medical setting, which tangentially relates to the concept of prompt engineering. While not directly studying 'hard prefix prompts' or prompt engineering for AI or computational systems, the principle of using prompts to improve performance outcomes has some relevance to the broader field of study. However, the specific application to colorectal cancer screening and the focus on evidence-based lectures differentiates this from the typical context of prompt engineering in technology, which usually refers to the designing of inputs to elicit desired responses from AI models or systems."
-information and communication technology based prompting for treatment compliance for people with serious mental illness.,5,"The provided abstract discusses the use of ICT-based prompting to improve treatment compliance in people with serious mental illness, which aligns with the broader concept of 'prompts' in behavior modification. However, the term 'hard prefix prompts' typically refers to a specific approach in natural language processing or AI-related prompt engineering, which is not the focus of this study. Therefore, the relevance is moderate as it deals with prompts in a different context than what 'prompt engineering study' typically would imply in technological or AI research."
-visual prompt tuning for few-shot text classification,6,"The paper abstract introduces a novel method of Visual Prompt Tuning for few-shot text classification that utilizes vision-language pre-training models, which is somewhat relevant to prompt engineering as it involves a form of prompt tuning. However, the primary focus is on incorporating visual elements rather than exclusively on texts or verbal prompts, which traditionally constitute 'prompt engineering' in language models. The relevance rating is given a moderate score because it deals with tuning aspects pertinent to the deployment of large-scale language models, although it does not directly address 'hard prefix prompts' as described in the original study topic."
-augprompt: knowledgeable augmented-trigger prompt for few-shot event classification,5,"The title suggests that the study is related to prompt engineering as it mentions 'augmented-trigger prompt' which implies a method of prompt design for enhanced performance in an NLP task (few-shot event classification). However, without an abstract or TLDR, it is challenging to assess the depth of relevance to prompt engineering, hence a middle-of-the-road rating is given. More information would be required for a more accurate rating."
-prompt-based self-training framework for few-shot named entity recognition,5,"The title suggests the study involves 'prompt-based' methodology, which is relevant to prompt engineering. However, without more information from the abstract or TLDR, it's challenging to determine the extent of relevance to hard prefix prompts specifically. The study focuses on few-shot named entity recognition, which may involve prompts, but it is unclear how systematically the prompts are reviewed or engineered in the study. A neutral score reflects this partial relevance based on the information provided."
-balanced distributed augmentation for multi-label few shot learning with prototypical network,4,"The abstract indicates that the study involves a novel pipeline for automating the prompt generation, which is somewhat relevant to prompt engineering, particularly if the automated generation includes what could be considered 'hard prefix prompts.' However, the main focus of the paper appears to be on data augmentation techniques and sentiment analysis for few-shot learning rather than directly on prompt engineering. The relevance is therefore moderate and not the primary emphasis of the research."
-retrieving visual facts for few-shot visual question answering,6,"The abstract describes a research study where a language model is prompted with facts retrieved from an image, to improve the performance of few-shot visual question answering systems. While it does not directly address 'hard prefix prompts' as in the study of prompts in the context of natural language processing, it does involve the process of selecting specific information (facts from an image) to inform the prompting process for a language model. Thus, it demonstrates relevance to prompt engineering by showing how tailored information can be used to elicit better responses from a model. However, because it focuses primarily on image-based data and facts rather than text-based prompting, it is not fully centered on 'prompt engineering' as typically understood within NLP, hence the mid-range rating."
-"machine translation with large language models: prompting, few-shot learning, and fine-tuning with qlora",4,"The abstract discusses machine translation using large language models and evaluates different methodologies including zero-shot prompting, which is closely related to prompt engineering. However, the focus on QLoRA fine-tuning indicates a greater emphasis on the fine-tuning process rather than on prompt engineering itself. The relevance is present but not central to the topic of prompt engineering, therefore a moderate rating reflects the connection without overstating its focus."
-application of cognitive rehabilitation theory to the development of smart prompting technologies,4,"While the study addresses the use of prompting technologies, which is a form of human-computer interaction, it primarily focuses on cognitive rehabilitation and assistive technologies for older adults with cognitive impairments. The relevance to 'prompt engineering' in the context of hard prefix prompts and systematic review is tangential. The study could be peripherally related to prompt engineering in the way it seeks to optimize the design of prompts for a specific application (assistive technology), but it does not directly study or review the more general field of prompt engineering, especially as it might relate to conversational AI, machine learning or data input systems."
-application of cognitive rehabilitation theory to the development of smart prompting technologies,4,"The document appears to discuss the application of cognitive rehabilitation theory to the development of smart prompting technologies for assisting older adults with cognitive impairments. While it does touch upon the design and effectiveness of prompts (which is indirectly related to prompt engineering), the focus is more on the application of CRT in the development of assistive technologies rather than on a comprehensive systematic review of hard prefix prompts or on the specifics of engineering prompt systems. Thus, the relevance to prompt engineering study, particularly in the context of a comprehensive systematic review on hard prefix prompts, is moderately low, warranting a rating of 4."
-embracing ai for better quality engineering,5,"The provided text briefly mentions 'prompt engineering for testing use cases', indicating that prompt engineering is indeed part of the study in the context of quality engineering with AI.
However, the focus seems to be on a broader application of AI in quality engineering and does not provide specific details on hard prefix prompts or a comprehensive systematic review of such prompts within a prompt engineering study. Therefore, the relevance is moderate, as it touches on the subject but does not delve deeply into it."
-telehealth intensive care unit nurse surveillance of sepsis,6,"The article is somewhat relevant to prompt engineering study, as it involves the development of a 'sepsis prompt' that integrates usability and human factors engineering standards. Although the focus is on the medical application of prompts, rather than the hard prefix prompts often used in machine learning or computational contexts, the principles of design, usability testing, and alert optimization could provide insights into prompt engineering methodologies. The evaluation of sensory processing, cognitive processing, and user satisfaction has parallels in the design of effective prompts in other fields. However, the specific application to a telehealth ICU scenario and sepsis surveillance limits the direct applicability to general prompt engineering studies."
-"artificial intelligence in engineering and society: blue skies, black holes, and the job of requirements engineers (keynote)",6,"The abstract provides a comprehensive overview of artificial intelligence's impact on engineering and society, touching briefly on the use of large language models to address requirements engineering problems which may involve prompt engineering to some extent. However, the focus on prompt engineering, particularly in the context of 'hard prefix prompts,' is not explicit or central to the abstract. It mentions the potential for using prompts to check requirements completeness and to generate models, suggesting some relevance to the field of prompt engineering. The relevance rating is above average because prompt engineering is definitely a subset of the topics covered, but since the abstract does not focus on the systematic review of hard prefix prompts specifically, it does not score higher."
-feasibility of using the privacy-preserving large language model vicuna for labeling radiology reports.,6,"The provided abstract discusses the application of a large language model (LLM), Vicuna, for labeling radiography reports in a manner that preserves patient privacy. The relevance to prompt engineering lies in the mention of 'using a single-step or multistep prompting strategy' which indicates that prompts were designed and tested to achieve the desired outcome. The study evaluates the efficacy of these prompting strategies against established benchmarks. However, the study is not focused on prompt engineering itself, but rather on the application of prompts in a specific domain (medical report analysis). This means that while prompt engineering is a component of the study, the focus is not on the systematic review of 'hard prefix prompts,' but on the feasibility and efficacy of running a privacy-preserving LLM locally for practical applications in healthcare. Therefore, the relevance is moderate, as insights into prompt engineering can be gleaned but are not the central focus of the study."
-forward-backward reasoning in large language models for mathematical verification,6,"The study presents an innovative approach to using large language models for mathematical verification through FOBAR, which involves prompt engineering to some extent by integrating backward reasoning into the prompts to verify answers.
While it doesn't directly address 'hard prefix prompts' in prompt engineering, the use of CoT prompting and the integration of answer verification templates are related to the techniques used in prompt engineering to improve AI performance. It shows the importance of prompt design in eliciting correct outputs from models. The relevance score isn't higher because it doesn't specifically discuss or review hard prefix prompts, which is the focus of the prompt engineering study mentioned."
-react: synergizing reasoning and acting in language models,6,"The paper discusses an integrated approach where reasoning and acting are combined within LLMs, which is related to prompt engineering in the sense that it explores how to effectively prompt models to engage in both cognitive processes. Although it doesn't directly address 'hard prefix prompts,' it does deal with the broader topic of prompting LLMs to improve performance, suggesting some relevance. However, its focus on the 'ReAct' system's development and evaluation on specific tasks may not provide in-depth insights into the particular strategies used for engineering prompts, hence the rating isn't higher."
-reading comprehension quiz generation using generative pre-trained transformers,4,"The study is related to the application of AI in the educational domain, specifically using a pre-trained transformer model (GPT-3) for quiz generation which is a type of prompt engineering. However, it does not specifically focus on 'hard prefix prompts' but rather on the general capability of transformer models to generate educational content. The relevance to prompt engineering is present since quiz generation can be considered a form of prompt design, yet it is not focused on the systematic review of prompts or their optimization, which would make it highly relevant to prompt engineering studies."
-codegen: an open large language model for code with multi-turn program synthesis,6,"The abstract describes research on program synthesis using large language models, particularly focusing on a new model called CODEGEN. The relevance to prompt engineering is moderate because it touches on the use of prompts specifying subproblems in a multi-step paradigm for program synthesis. This suggests that different prompt structures (such as multi-turn prompts) can significantly affect the performance of code generation tasks, which is a part of the broader area of prompt engineering. However, the abstract does not specifically discuss 'hard prefix prompts' or provide a systematic review of prompt engineering, so it is only partially relevant to the specified topic of a comprehensive systematic review on hard prefix prompts."
-a topic-based prompt learning method for zero-shot stance detection,4,"While the study involves the use of prompts to determine the stance detection ability, it is focused more on the classification and processing of language with respect to stance detection, rather than the creation or systematic review of hard prefix prompts in the context of prompt engineering. Since prompt engineering typically refers to methods for improving language model responses, and this paper seems to touch on related concepts without being squarely focused on prompt engineering, it receives a moderate rating."
-vision-language models are zero-shot reward models for reinforcement learning,6,"The abstract describes the use of vision-language models (VLMs) as zero-shot reward models in reinforcement learning, which includes a component of prompt engineering by providing text prompts to specify tasks. Although the main focus is on reinforcement learning and the efficacy of VLMs in this context, the mention of using 'minimal prompt engineering' indicates that there is a relevance to the study of crafting prompts. However, the primary emphasis is not on the systematic review of 'hard prefix prompts' or the intricacies of prompt engineering methods, which would be required for a higher relevance score."
-language models as zero-shot trajectory generators,4,"While the abstract discusses the usage of Large Language Models for trajectory generation in robotics, which would require careful crafting of prompts to interact with the model effectively, the focus on 'hard prefix prompts' in the context of a comprehensive systematic review is not directly addressed. Although the principles of prompt engineering could be applied to formulate the inputs for GPT-4 in this study, the abstract does not specifically mention or concentrate on 'hard prefix prompts', nor does it suggest a systematic review of such prompts. Therefore, the relevance is moderate as the concept of prompting is involved, but not specific to the requested area of study."
-zeroprompt: streaming acoustic encoders are zero-shot masked lms,5,"The study presents a technique called ZeroPrompt that is applied to streaming acoustic encoders, which is tangentially relevant to 'prompt engineering' since it involves what can be described as a prompting strategy. However, the core of the study focuses on streaming ASR (Automatic Speech Recognition) models and improving their latency, which is not directly related to the systematic review of 'hard prefix prompts' in the traditional sense of prompt engineering for language models. Therefore, the relevance is moderate as it deals with prompts in a different context than what 'prompt engineering study' might typically imply, which is often associated with text-based language model prompting."
-b-pet: the pet model with parameter-efficient learning,4,"The abstract provided discusses the B-pet model, which focuses on few-shot learning (FSL), parameter efficiency, and storage reductions for model training and deployment. This involves the concept of 'prompt learning' as a component of the PET model, indicating some relevance to prompt engineering. However, the main content is centered on fine-tuning efficiency and parameter freezing, not directly on the systematic study or development of prompting methods. Consequently, relevance is limited to the aspect of 'prompt learning' in the context of the broader FSL and model efficiency discussions."
-transformers as algorithms: generalization and stability in in-context learning,6,"The study examines in-context learning and generalization in transformer models, which is relevant to prompt engineering as it relates to how these models use input prompts to infer outputs. However, the focus on 'hard prefix prompts' is not specifically addressed, meaning the study might offer insights related to prompt engineering more broadly rather than hard prefix prompts in particular. Therefore, the relevance is moderate."
-context-based narrative generation transformer (ngen-transformer),6,"The abstract indicates that the paper discusses a text generation model, the NGen-Transformer, which is relevant to natural language processing and prompt engineering to some extent. It emphasizes the context assimilation capabilities of the architecture, which aligns with the concept of prompt engineering as it involves providing context or prompts for generating text. Although the paper appears to focus more on the model's architecture for story generation rather than on the systematic study of hard prefix prompts, the use of prompts (in the form of user-defined context) and performance evaluation on a prompt-based dataset (WritingPrompts) makes it moderately relevant to prompt engineering studies."
-fp-detr: detection transformer advanced,5,"The paper 'fp-detr: detection transformer advanced' mentions the use of a concept analogous to prompts in NLP, where query positional embeddings serve as 'visual prompts' to assist in object detection. While this indicates a potential crossover concept with prompt engineering, it's specific to the visual domain rather than the textual domain typically associated with prompt engineering in NLP studies. Therefore, the relevance is moderate as it offers insight into how the idea of prompts can be applied in different contexts, but it does not address hard prefix prompts or their systematic review in NLP applications specifically."
-pretraining data mixtures enable narrow model selection capabilities in transformer models,5,"The study relates indirectly to prompt engineering, as it deals with the ability of transformer models to perform in-context learning and adapt to new tasks based on their pretraining data mixture. Prompt engineering typically involves designing prompts to elicit desired behaviors or responses from LLMs. The relevance lies in understanding how different pretraining data affects the model's response to prompts, which is crucial for prompt engineers. However, the study does not explicitly focus on 'hard prefix prompts' which would be more directly aligned with prompt engineering, thus the rating reflects a moderate relevance."
-smart homes for people with alzheimer's disease: adapting prompting strategies to the patient's cognitive profile,6,"The study's relevance to prompt engineering is moderate. While it does not directly address 'hard prefix prompts' in the context of machine learning or computational prompt engineering, it deals with the adaptation of prompts (cognitive assistance) to users' needs, which parallels the customization aspect of prompt engineering. Furthermore, the development of guidelines for effective prompting strategies and an experimental protocol has some commonalities with the principles of designing and testing prompts in AI systems. However, the application is specific to smart homes and Alzheimer's patients and may not fully translate to the broader field of prompt engineering study."
-understanding the effect of in-video prompting on learners and instructors,4,"While the abstract discusses in-video prompting, which is a form of engagement tactic within an educational context, it does not specifically mention or focus on 'hard prefix prompts' or the systematic review of prompt engineering. The study is relevant to the broader context of prompt design and use in learning environments but does not directly address the topic of a comprehensive review of hard prefix prompts in prompt engineering. Therefore, it has some relevance due to its focus on the effects of prompts in an instructional setting but falls short of directly addressing the specified topic of hard prefix prompts."
-effects of a progressive prompting-based educational game on second graders' mathematics learning performance and behavioral patterns,6,"The study focuses on the use of prompting strategies within a game-based learning environment, which is tangentially related to the broader concept of 'prompt engineering' in that it involves the design of prompts to guide users (learners) towards specific outcomes. However, 'prompt engineering' typically refers to designing prompts to interact with AI systems or computer models, rather than human students. Therefore, while the educational prompting strategy is a form of prompt design and may share underlying principles with prompt engineering for AI, it is not a direct study on 'hard prefix prompts' as the context differs. The relevance is moderate because the skills and insights from designing effective prompts for education might be applicable to prompt engineering for AI in developing user instructions or interactions."
-considering student choice when selecting instructional strategies: a comparison of three prompting systems.,6,"The study touches upon the effectiveness of prompting systems in educational settings, which is tangentially relevant to prompt engineering as it involves the use of prompts to enhance learning outcomes. However, prompt engineering typically focuses on improving the interaction with AI models and systems, rather than instructional strategies for human learning. Despite the different context, principles from studying human response to prompts could be insightful for designing AI prompts, thus earning a moderate relevance rating."
-boosting static resource leak detection via llm-based resource-oriented intention inference,6,"The provided abstract outlines research on 'InferROI,' a system designed to detect resource leaks in code using large language models (LLMs) for intention inference. Though this approach employs prompts to guide the LLM toward inferring intentions from code snippets, it is indirectly relevant to prompt engineering. The use of prompts is in the context of static analysis in software engineering, while prompt engineering generally refers to designing prompts to accurately elicit specific responses from language models. Since this research involves instructing an LLM via prompts, it could offer some insights into prompt design and effectiveness; hence, it is given a moderate relevance rating. However, it does not focus on prompt engineering as a primary study area, which is why the rating is not higher."
-fake news in sheep's clothing: robust fake news detection against llm-empowered style attacks,6,"The relevance to prompt engineering study in the context of 'hard prefix prompts' is moderately substantial as the abstract describes the use of 'style-oriented reframing prompts' which are a form of prompts used in engaging with Language Models (LLMs). Although the main focus is on fake news detection and style-agnostic approaches to improve robustness against camouflage attempts by LLMs, the application of prompts is directly related to the mechanics of how LLMs are manipulated or interacted with to produce or detect certain styles of content. Therefore, while the primary topic is not a comprehensive systematic review of hard prefix prompts, the paper relates to one aspect of prompt engineering—using prompts to reframe content style to train a more robust detection model."
-cgsmp: controllable generative summarization via multimodal prompt,5,"The abstract discusses the use of a multimodal approach to reduce hallucination in Natural Language Generation (NLG) and improve the quality of abstractive summarization, relating to language model performance and prompt design to some extent. However, the focus here is on the use of multimodal (image and text) inputs rather than on the study of 'hard prefix prompts' specifically. While prompt engineering is a broader field that includes various methods to control language model outputs, this paper seems to address only a subset of that field related to multimodal interaction and controllability. Therefore, the relevance to prompt engineering study is moderate, as it could provide insights into one aspect of the field without directly focusing on hard prefix prompts."
-divknowqa: assessing the reasoning ability of llms via open-domain question answering over knowledge base and text,4,"The study focuses on the retrieval capabilities of Large Language Models and how they can be grounded on heterogeneous knowledge sources for better question-answering performances. While it relates to prompt engineering in the broader context of machine learning and enhancing LLMs' interactions with external data, the study's primary concern is not with hard prefix prompts directly but rather with improving the information retrieval process, which is a component of the system that supports effective prompting. Therefore, its relevance to prompt engineering, specifically to a systematic review of hard prefix prompts, is tangential rather than central."
-scpatcher: mining crowd security discussions to enrich secure coding practices,6,"The paper discusses SCPatcher, a tool that uses Prompt Learning with a Large Language Model to improve secure coding practices by mining crowd security discussions. Although the primary focus is on enhancing secure coding, the use of Prompt Learning is relevant to the study of prompt engineering. However, the paper does not specifically focus on 'hard prefix prompts' as implied by the term 'prompt engineering study.' Therefore, the relevance to prompt engineering is secondary and not central to the main objective of the paper, resulting in a moderate rating of relevance."
-amortizing intractable inference in large language models,4,"The provided abstract discusses the use of amortized Bayesian inference to sample from intractable posterior distributions in autoregressive large language models (LLMs) and touches on chain-of-thought reasoning as a latent variable modeling problem. While this research is related to the functioning and fine-tuning of LLMs, it does not directly address 'hard prefix prompts' or any aspect of prompt engineering. However, the methods developed in this work for fine-tuning LLMs could indirectly benefit prompt engineering by enabling more efficient adaptation of models to specific tasks, which is why the relevance rating is not at the lowest end of the scale."
-the impact of scaffolding prompts on the collaborative problem solving of ill-structured tasks by undergraduate engineering student groups,5,"The study seems to focus on scaffolding prompts in the context of collaborative problem solving for ill-structured tasks, rather than hard prefix prompts specifically. Nonetheless, the research is relevant to the field of prompt engineering to some extent because it explores how certain types of prompts can affect the problem-solving abilities of engineering students. This could indirectly inform studies or practices within prompt engineering, especially concerning the design of prompts that facilitate learning and problem-solving in educational settings."
-exploring the impacts of cognitive and metacognitive prompting on students’ scientific inquiry practices within an e-learning environment,4,"While the study focuses on the use of prompts to enhance scientific inquiry in an educational context, and thus tangentially touches upon the concept of prompting, it does not directly address prompt engineering related to natural language processing or AI prompt design. The relevance lies in the investigation of prompt effectiveness, which could be conceptually extended to prompt engineering for AI systems. However, the study's primary focus on educational cognitive and metacognitive prompts limits its direct applicability to prompt engineering study, specifically regarding hard prefix prompts."
-which prompts make the difference? data prioritization for efficient human llm evaluation,5,"The provided title and abstract describe a study focused on the optimization of human evaluations of large language models through data prioritization, which is indirectly relevant to prompt engineering. Prompt engineering typically involves constructing prompts to elicit specific outputs from language models but does not directly address the question of human evaluators in the loop. However, the study's implications for improving the efficiency of model-evaluation can influence the prompt engineering process indirectly by refining the human feedback loop connected to prompt tuning performance. This makes it somewhat relevant to prompt engineering, especially in the scope of human-in-the-loop evaluations and performance measurement. Nonetheless, the study does not seem to address 'hard prefix prompts' or any specific prompt engineering methodologies, which limits its direct relevance to the field of prompt engineering."
-multimodal multi-task stealth assessment for reflection-enriched game-based learning,4,"The study mentioned does not directly address 'hard prefix prompts' or 'prompt engineering' as it appears to be more focused on game-based learning environments and using a stealth assessment framework for educational purposes. The relevance comes from the use of in-game reflection prompts and the multifaceted assessment of student responses which tangentially touches upon the concept of prompts and reflection in learning. However, it does not engage with the specific study of engineering prompts in the context of AI systems or conversational models, which would be necessary for a higher relevance rating."
-making a case for spatial prompting in human-robot communication,4,"This paper is somewhat relevant to prompt engineering in that it discusses communication strategies with robots, which could include developing prompts for human-robot interaction.
However, it focuses on 'spatial prompting' and non-verbal communication cues, which is a different area than 'hard prefix prompts,' which are typically textual or verbal in nature and used in language model interactions. The study's relevance to prompt engineering is tangential and not directly aligned with the concept of hard prefix prompts in language models or more conventional prompting techniques."
-the smartweb corpora: multimodal access to the web in natural environments,4,"The description indicates that the chapter discusses a prompting scheme called SitPro, a recording technique, and properties of created corpora. While the mention of a prompting scheme suggests relevance to prompt engineering, there is no explicit mention of 'hard prefix prompts' or a systematic review approach. The relevance appears to be tangential rather than directly focused on prompt engineering as it pertains to pre-determined structured prompts. The rating reflects moderate relevance due to the connections to prompts and data acquisition which could be applicable to prompt engineering studies but lacks specificity regarding 'hard prefix prompts'."
-a prompt-based multimodal tabular transformer encoder for medical intervention duration estimation,6,"The study introduces a prompt-based approach within a medical context, focusing on a multimodal deep learning framework for medical intervention duration estimation. While it does not directly address 'prompt engineering' in the broader sense, the use of prompts in conjunction with a pre-trained sentence encoder indicates an application of prompt engineering principles. Hence, the relevance is moderate, as it shows an example of how prompts can be interfaced with other machine learning components, but the study is specific to medical interventions and does not cover prompt engineering as a standalone subject."
-mpt: multimodal prompt tuning for event detection,5,"The presented abstract discusses a multimodal Prompt Tuning approach for event detection that incorporates both images and text, which pertains to prompt engineering in the context of modal feature aggregation and the use of soft prompts. However, the primary focus is on event detection using multimodal inputs rather than on hard prefix prompts specifically. The mention of 'prompt tuning' indicates relevance to prompt engineering, but the absence of discussion about hard prefix prompts specifically results in a moderate relevance rating."
-prompt cache: modular attention reuse for low-latency inference,5,"The abstract discusses Prompt Cache which is related to the optimization of language model inference using attention state reuse in prompts; however, it does not directly address a 'hard prefix prompt' which might suggest a fixed prompt structure or content in the systematic review. The approach outlined in the abstract is relevant to prompt engineering, as it deals with efficient prompt handling in LLMs, but it doesn't focus on prompt content or structure, thereby making it moderately relevant."
diff --git a/data/semantic_scholar_data/semantic_scholar_papers_above_3.csv b/data/semantic_scholar_data/semantic_scholar_papers_above_3.csv
deleted file mode 100644
index 6c576d9..0000000
--- a/data/semantic_scholar_data/semantic_scholar_papers_above_3.csv
+++ /dev/null
@@ -1,1463 +0,0 @@
-Title,Model,Probability,Reasoning
-"""do anything now"": characterizing and evaluating in-the-wild jailbreak prompts on large language models",gpt-4-1106-preview,7,"The paper provides valuable insights into jailbreak prompts, which are a specific type of adversarial prompt that attempts to circumvent the safeguards of large language models. Understanding the characteristics and evaluation of these prompts is relevant to prompt engineering because it guides the development of prompts that can resist misuse and prompts that align better with human values. Although it focuses on the adversarial aspect rather than constructive prompt engineering, the findings can inform the broader field of prompt engineering, particularly in designing robust and safe systems. Therefore, the study is quite relevant but not entirely centered on prompt engineering in its purest form, hence the rating of 7."
-latent jailbreak: a benchmark for evaluating text safety and output robustness of large language models,gpt-4-1106-preview,9,"The paper focuses on evaluating the safety and robustness of large language models (LLMs) using a benchmark that entails analysis of prompt design, which is highly relevant to prompt engineering. Specifically, it investigates how malicious instructions embedded within prompts affect the LLM's behavior. This is crucial for understanding how different prompt structures (position of instructions, word replacements, and instruction replacements) influence the model's output, aligning closely with the broader field of prompt engineering that aims to optimize the interaction with LLMs. The systematic review mentioned in the query would likely cover such research, as it is integral to understanding how 'hard prefixes' or fixed parts of prompts can affect the LLM's outputs. The only reason it does not get a full 10 is because the study does not solely focus on the engineering aspect of prompts but also on the safety and ethical concerns related to prompts."
-fuzzllm: a novel and universal fuzzing framework for proactively discovering jailbreak vulnerabilities in large language models,gpt-4-1106-preview,7,"The relevance of 'fuzzllm: a novel and universal fuzzing framework for proactively discovering jailbreak vulnerabilities in large language models' to prompt engineering is notable. While the primary focus is on discovering vulnerabilities through fuzz testing, the utilization of templates to understand the structure of prompts and the identification of features within these prompts are directly related to the study of prompt engineering. The process of ensuring that prompts do not lead to service guideline violations requires a deep understanding of how different prompts are constructed and how they interact with LLMs. Therefore, the research indirectly contributes to the field of prompt engineering by seeking ways to prevent manipulative prompts from eliciting undesired responses. However, the study does not directly address hard prefix prompts or the systematic review of such prompts, which would be more central to a targeted prompt engineering study."
-latent jailbreak: a test suite for evaluating both text safety and output robustness of large language models,gpt-4-1106-preview,7,"The paper's primary focus is on evaluating the safety and robustness of large language models (LLMs), which is relevant to prompt engineering as it deals with how different prompts (including those that contain harmful or malicious content) can affect the performance of LLMs. The concept of 'latent jailbreak' and the creation of a benchmark that includes 'malicious instruction embedding' directly relates to the study of prompts, particularly 'hard prefixes' which could be considered a form of adversarial input designed to test the limits of the model's behavior. This relevance is crucial because ensuring that models perform consistently well and generate safe content across a variety of prompt types is a key aspect of prompt engineering. However, it does not directly discuss the 'hard prefix prompts' in a systematic review context but rather the safety and robustness in a broader sense, hence the rating does not reach the maximum."
-"tricking llms into disobedience: understanding, analyzing, and preventing jailbreaks",gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it addresses the manipulation of prompts to achieve unintended model behaviors, which is a critical aspect of prompt design and engineering. Understanding how to prevent these 'jailbreaks' is crucial for developing more secure and reliable prompt engineering practices. The study provides insights into the vulnerabilities of current models and offers potential solutions, which directly contribute to the field of prompt engineering. The rating is not a full 10 because the study is more focused on security and mitigation rather than the broader aspects of prompt engineering, such as the optimization of prompts for various tasks or the generation of more sophisticated prompts for improved performance."
-jailbreaking black box large language models in twenty queries,gpt-4-1106-preview,8,"The abstract discusses an algorithm (PAIR) for generating 'semantic jailbreaks' using adversarial methods on large language models (LLMs) such as GPT-3.5/4, Vicuna, and PaLM-2. This is highly relevant to prompt engineering because understanding and preventing adversarial manipulation of LLMs is crucial for developing more effective and secure prompts. It is directly related to the field as it explores the vulnerabilities in the current engineering of prompts and how they can be exploited. The abstract, however, does not specifically address 'hard prefix prompts', which are a subset of prompts within prompt engineering, hence not warranting a full score of 10."
-autodan: automatic and interpretable adversarial attacks on large language models,gpt-4-1106-preview,8,"The paper describes an adversarial attack method named 'AutoDAN' that is highly relevant to prompt engineering as it involves the generation of attack prompts, a form of input manipulation which is a key aspect of prompt engineering. This research contributes to a deeper understanding of Large Language Model vulnerabilities and strategies that can be used to manipulate model outputs, which is pertinent to the field of prompt engineering. However, the study is focused specifically on adversarial attacks rather than the broader topic of 'hard prefix prompts,' therefore it does not fully align with systematic review studies on prompt engineering techniques in general, which might include non-adversarial methods and a wider range of applications. Hence, the rating is high but not at the maximum."
-visual prompt tuning,gpt-4-1106-preview,7,"The topic of Visual Prompt Tuning (VPT) is relevant to the prompt engineering study since it deals with the adaptation of pre-trained models, which is a core concept in prompt engineering. However, VPT specifically addresses the visual domain and large-scale Transformer models in vision, which differs from the 'hard prefix prompts' that typically relate to textual input. Despite this difference, the underlying principles of efficient tuning and the introduction of new parameters to influence model behavior without extensive retraining are concepts shared with prompt engineering. This cross-domain relevance is valuable but not directly tied to the initial study of 'hard prefix prompts', hence the rating of 7."
-conditional prompt learning for vision-language models,gpt-4-1106-preview,7,"The abstract describes a study that focuses on prompt learning, specifically in vision-language models, which is highly relevant to the field of prompt engineering. The study introduces Conditional Context Optimization (CoCoOp), which is a method for improving the generalization of learned prompts over unseen classes. While this is directly related to prompt engineering, it is specifically tailored to vision-language models, and not directly focused on 'hard prefix prompts' which the original prompt suggests. Therefore, the relevance rating is not a perfect 10, as 'hard prefix prompts' might imply a different subset of prompt engineering concerned with text prompts in NLP. Nevertheless, the concepts studied are transferable to prompt engineering more broadly, warranting a relatively high rating."
-prompt-to-prompt image editing with cross attention control,gpt-4-1106-preview,8,"The provided abstract describes a study closely related to 'prompt engineering,' as it involves a framework for editing images using text prompts, which directly entails understanding and manipulating prompts for precise outcomes. The emphasis on cross-attention layers as a mechanism for controlling the relationship between text prompts and the spatial layout of images is particularly relevant to the field of prompt engineering, as it is concerned with the fine-tuned influence of textual input on generative models. While the study is not specifically about 'hard prefix prompts,' it contributes to the broader field of prompt engineering by showing how textual prompts can be used to control and manipulate the output of synthesis models. The 2-point deduction accounts for the specific focus on imagery rather than a systematic review of hard prefix prompts in various contexts."
-p-tuning: prompt tuning can be comparable to fine-tuning across scales and tasks,gpt-4-1106-preview,9,"The provided abstract is highly relevant to prompt engineering study as it discusses 'prompt tuning', which is a method within the field of prompt engineering in natural language understanding (NLU). It compares prompt tuning with fine-tuning, highlighting its efficiency and effectiveness across different tasks and model scales, and introduces an advanced version named 'P-Tuning v2'. This research contributes to the understanding of how continuous prompts can be optimized and sheds light on prompt engineering as a potentially universal method for NLU tasks, making it a significant resource for studying prompt engineering methods."
-"pre-train, prompt, and predict: a systematic survey of prompting methods in natural language processing",gpt-4-1106-preview,9,"The article provides a detailed survey of prompt-based learning in natural language processing, which is directly relevant to prompt engineering. It covers the adaptation of language models to new tasks using prompts, which is a core concept in prompt engineering. The systematic review and organization of research along with the introduction of a unified set of mathematical notations for existing work are valuable for understanding the breadth and depth of prompt-based methods, making it highly relevant to the study of prompt engineering. Moreover, the article's release of resources like NLPedia–Pretrain aids further research and accessibility. The rating is not a perfect 10 because it might not exclusively focus on 'hard prefix prompts' as the prompt engineering study inquires but generally covers prompting methods in NLP."
-learning to prompt for open-vocabulary object detection with vision-language model,gpt-4-1106-preview,8,"The abstract details a novel method, detection prompt (DetPro), which focuses on learning continuous prompts for open-vocabulary object detection, indicating an application of prompt engineering in vision-language models. The relevance is high because it directly tackles the challenge of designing effective prompts to improve model performance. However, it might not cover the theoretical foundations or a wider range of prompt engineering applications, hence not a full score."
-"recommendation as language processing (rlp): a unified pretrain, personalized prompt & predict paradigm (p5)",gpt-4-1106-preview,7,"The abstract describes a 'Pretrain, Personalized Prompt, and Predict Paradigm' (P5) which is closely related to the concept of hard prefix prompts in prompt engineering. The study's emphasis on personalized prompts and instruction-based recommendation indicates that it deals with the design and utilization of prompts to elicit desired behaviors from a language model, which is a core element of prompt engineering. However, because the abstract specifically focuses on recommendation tasks and does not explicitly mention 'hard prefix prompts' as a category or detail the systematic review elements that might be expected from a 'comprehensive systematic review,' it does not fully align with a study exclusively centered on hard prefix prompts. Despite this, the principles discussed are relevant to the broader field of prompt engineering."
-an information-theoretic approach to prompt engineering without ground truth labels,gpt-4-1106-preview,9,"The article presents a technique for prompt engineering, which is highly relevant to the study of prompt engineering. It focuses on a method that maximizes mutual information between input and output to select effective prompts without labeled data or model access. This method is innovative in the field of prompt engineering as it bypasses the need for substantial labeled datasets and the necessity to tweak model parameters. However, the title does not specifically mention 'hard prefix prompts,' so it may not be entirely focused on 'hard prefix prompts' as the type of prompts being engineered, which is why it doesn't receive a perfect 10."
-prompt distribution learning,gpt-4-1106-preview,9,"The abstract and TLDR indicate that the study deals with prompt distribution learning, which is directly related to prompt engineering. It focuses on adapting pre-trained models to downstream tasks by learning prompt distributions, a technique relevant to constructing and using prompts to improve model performance. This is highly pertinent to studies in prompt engineering, which aims to optimize how models interact with prompts for better task performance. Although the term 'hard prefix prompts' is not explicitly mentioned, the overall concept of learning and utilizing prompts makes this study considerably relevant."
-ignore previous prompt: attack techniques for language models,gpt-4-1106-preview,8,"The provided abstract is highly relevant to the subject of 'prompt engineering,' as it directly discusses PromptInject, a methodology for adversarial prompt composition designed to exploit vulnerabilities in transformer-based language models like GPT-3. This pertains to the broader category of prompt engineering by showcasing methods of prompting that could lead to model misalignment, thus revealing long-tail risks. Understanding these attack techniques is crucial for developing more robust prompt engineering practices, although the specific focus on 'hard prefix prompts' is not directly mentioned."
-language models that seek for knowledge: modular search & generation for dialogue and prompt completion,gpt-4-1106-preview,7,"While the abstract provided doesn't directly address 'hard prefix prompts' or 'prompt engineering' specifically, it does pertain to the broader subject area of how language models can be improved to generate more factual and relevant responses. The research on modular search and generation in the context of dialogue and prompt completion is relevant to prompt engineering as it impacts the effectiveness of the prompts in eliciting accurate and meaningful responses from language models. Therefore, the rating is relatively high due to the indirect relevance of improving language model outputs, which is a fundamental aspect of prompt engineering."
-test-time prompt tuning for zero-shot generalization in vision-language models,gpt-4-1106-preview,9,"The abstract describes a study directly related to prompt engineering, specifically the dynamic tuning of prompts for vision-language models to enhance zero-shot generalization. Although the provided text doesn't explicitly mention 'hard prefix prompts,' it discusses an advanced concept of prompt optimization at test-time which is highly relevant to the broader field of prompt engineering. The method's ability to adapt prompts using a single test sample fits well within the study of how prompts can be engineered and optimized to improve model performance, particularly in zero-shot settings."
-p-tuning v2: prompt tuning can be comparable to fine-tuning universally across scales and tasks,gpt-4-1106-preview,9,"The abstract discusses the concept of prompt tuning in the context of Natural Language Understanding (NLU) and proposes a new method called P-Tuning v2, indicating a significant advancement in the field of prompt engineering. The stated goals of matching the performance of full model fine-tuning with a fraction of tuned parameters make it highly relevant. The only reason it is not rated a perfect 10 is that the abstract does not specifically mention 'hard prefix prompts', but it is likely that the methodology could be applied to or has implications for such prompts, hence the high rating."
-diffusiondb: a large-scale prompt gallery dataset for text-to-image generative models,gpt-4-1106-preview,8,"The abstract describes a dataset (DiffusionDB) focused on the synthesis of text-to-image generation using prompts in natural language, which includes the study of syntactic and semantic characteristics of prompts. This relates closely to prompt engineering, as it involves analyzing how prompts influence the outputs of generative models and finding optimal prompts to achieve desired results. The only reason it does not score a perfect 10 is because 'hard prefix prompts' which the initial prompt specified are not mentioned, so it may not cover the specific focus on 'hard prefix prompts'. Nonetheless, it is highly relevant for studies about prompt engineering in the broader context of text-to-image generative models."
-learning to prompt for continual learning,gpt-4-1106-preview,8,"The abstract discusses an approach to continual learning that focuses on using prompts as learnable parameters within a memory space to guide model predictions and manage knowledge. This is highly relevant to prompt engineering because it directly deals with the optimization and efficacy of prompts in a machine learning context. However, it is not a 'comprehensive systematic review on hard prefix prompts,' as the prompt specifies, but rather a presentation of a novel framework for continual learning using prompts, which is why the rating is not a perfect 10."
-prompt-aligned gradient for prompt tuning,gpt-4-1106-preview,9,"The abstract describes a study focused on improving prompt tuning methods for vision-language models, presenting a new approach called Prompt-aligned Gradient (ProGrad) specifically designed to prevent the loss of general knowledge during the fine-tuning process. This is highly relevant to prompt engineering as it addresses a significant challenge in the field—maintaining the balance between task-specific adaptation and the retention of pre-trained capabilities. The paper shows potential advancements in prompt tuning, which is a core aspect of prompt engineering, hence the high relevance rating."
-domain adaptation via prompt learning,gpt-4-1106-preview,7,"The abstract describes a study on 'domain adaptation via prompt learning (DAPrompt)', which is relevant to the field of prompt engineering, as it specifically focuses on the use of prompts in unsupervised domain adaptation. The relevance is not at the maximum because the study concentrates on a particular application of prompt learning (i.e., unsupervised domain adaptation) rather than a comprehensive overview or systematic review of hard prefix prompts in prompt engineering. Nonetheless, it contributes valuable insights into prompt engineering by illustrating how prompts can dynamically adapt classifiers to different domains, which is a significant aspect of the study area."
-hyperprompt: prompt-based task-conditioning of transformers,gpt-4-1106-preview,9,"The provided text is highly relevant to prompt engineering study as it directly addresses a novel architecture called 'HyperPrompt' for prompt-based task-conditioning in Transformers, which is a key area in the field of prompt engineering. The text discusses the efficiency and effectiveness of HyperPrompt in the context of few-shot learning and multi-task learning, benchmarks that are essential for evaluating prompt-based methods. The relevance is not rated a full 10 only because the specific term 'hard prefix prompts' is not directly mentioned, although the description strongly suggests relevance to that concept."
-prompt for extraction? paie: prompting argument interaction for event argument extraction,gpt-4-1106-preview,8,"The provided abstract describes a model (PAIE) that leverages prompt tuning as part of its methodology for Event Argument Extraction (EAE). The model's use of prompts to guide span selection and capture argument interactions is highly relevant to the study of prompt engineering, as it applies prompt-based methods for a specific NLP task. The paper also discusses extractive prompt tuning strategies and their effectiveness, which contributes to the understanding of prompt engineering. However, it does not specifically address 'hard prefix prompts' which might be a more specialized aspect within the field of prompt engineering, hence the rating isn't a full 10."
-promda: prompt-based data augmentation for low-resource nlu tasks,gpt-4-1106-preview,8,"The paper's focus on 'Prompt-based Data Augmentation' and the method of training 'small-scale Soft Prompts' in PLMs directly relates to the concept of prompt engineering, a technique used to interface with and extract specific behaviors from language models. While the paper might not explicitly cover a 'hard prefix prompt,' it does deal with the broader topic of how prompts can be engineered and utilized to improve NLU tasks, which makes it highly relevant to studies within prompt engineering."
-no more fine-tuning? an experimental evaluation of prompt tuning in code intelligence,gpt-4-1106-preview,8,"The abstract discusses prompt tuning as an alternative to fine-tuning in the context of code intelligence tasks. Prompt tuning is highly relevant to prompt engineering studies since it involves designing and inserting prompts that aid the pre-trained models in adapting to specific tasks. This specific paper evaluates the efficiency of prompt tuning over fine-tuning, which is a core topic within prompt engineering research. Although it focuses on code intelligence tasks and not 'hard prefix prompts' specifically, the principles and findings can have implications for prompt engineering in general. The relevance could be higher if the study specifically addressed hard prefix prompts or a broader range of prompt engineering techniques."
-personalized prompt learning for explainable recommendation,gpt-4-1106-preview,8,"The given title and abstract focus on 'prompt learning', particularly in the context of explainable recommendation systems. It is highly relevant to prompt engineering since prompt learning is a crucial aspect of tailoring prompts to improve the performance of AI models, such as pre-trained transformer models mentioned in the text. Moreover, the paper discusses innovative approaches (discrete and continuous prompt learning) and training strategies, which are essential for advancing the field of prompt engineering. The rating is not a full 10 because the study specifically addresses the use of prompt learning for explainable recommendations rather than a broad systematic review on 'hard prefix prompts' in general, implying a more focused domain application rather than a comprehensive study across multiple domains or types of prompts."
-towards unified conversational recommender systems via knowledge-enhanced prompt learning,gpt-4-1106-preview,7,"The abstract discusses the integration of recommendation and conversation modules in a conversational recommender system using a prompt learning paradigm, which is particularly relevant to prompt engineering. The use of knowledge-enhanced prompts and the unification of different subtasks into the prompt learning framework makes it pertinent to the study of how prompts can be designed to improve performance in AI systems. Although the primary focus is on conversational recommender systems rather than prompt engineering in general, the methodology and implications for the field of prompt engineering are significant enough to warrant a relevance rating of 7."
-bridge-prompt: towards ordinal action understanding in instructional videos,gpt-4-1106-preview,7,"The paper describes an approach that involves 'reformulating individual action labels as integrated text prompts,' which relates to the concept of incorporating linguistic structures (prompts) to enhance the understanding of actions in videos. This suggests an innovative use of prompt engineering to bridge the semantic gap between actions, which is relevant to the study of prompts in the context of machine learning. However, this application is specific to action recognition in video data and does not address 'hard prefix prompts' directly, which is why the relevance rating is not higher."
-prompt consistency for zero-shot task generalization,gpt-4-1106-preview,9,"The title and abstract describe a study focused on improving zero-shot task generalization by regularizing prompt consistency, which is highly relevant to prompt engineering. Prompt engineering involves the careful design of prompts to elicit the desired responses from language models, and this paper directly addresses and proposes a method for enhancing performance in this area. The relevance is not rated a full 10 because the study may not explicitly be about 'hard prefix prompts' as mentioned in the primary query but it does contribute significantly to the broader field of prompt engineering."
-promptcap: prompt-guided task-aware image captioning,gpt-4-1106-preview,8,"The article describes 'PromptCap', a model that utilizes natural-language prompts to generate image captions that are tailored to assist large language models in performing visual question answering tasks. While the primary focus is on image captioning to aid knowledge-based VQA, the use of prompts to guide the model's output is directly related to prompt engineering. The research showcases how carefully engineered prompts can significantly enhance the performance of language models in understanding and responding to visual content. Therefore, the study has high relevance to prompt engineering, particularly in the context of integrating textual and visual information. However, it does not directly address hard prefix prompts in a systematic review, which is why the rating is not a perfect 10."
-spot: better frozen model adaptation through soft prompt transfer,gpt-4-1106-preview,9,"The abstract describes a study directly relevant to prompt engineering, as it focuses on the use of prompts to enhance performance in natural language processing tasks through a method known as Soft Prompt Transfer (SPoT). The relevance is high because it involves leveraging soft prompts for model adaptation which is a specific aspect of prompt engineering. Moreover, it suggests a systematic approach to understanding task transferability, which can contribute significant insights into the field of prompt engineering. The only reason it does not receive a full 10 is that the abstract does not mention 'hard prefix prompts' which was the specific focus of the systematic review mentioned in the prompt."
-prompt programming for large language models: beyond the few-shot paradigm,gpt-4-1106-preview,9,"The abstract discusses advanced concepts in prompt programming and evaluates the effectiveness of 0-shot prompts in comparison to few-shot prompts using GPT-3. It underlines the significant impact of prompt design on language model performance and outcomes. The introduction of 'metaprompt' suggests a forward-thinking approach in prompt engineering, indicating a relevance to the study of prompt engineering. The score is not a perfect 10 because the abstract doesn't specifically mention 'hard prefix prompts,' but the overall discussion is highly pertinent to the field of prompt engineering."
-coda-prompt: continual decomposed attention-based prompting for rehearsal-free continual learning,gpt-4-1106-preview,9,"The paper is highly relevant to the field of prompt engineering as it discusses a novel approach for producing dynamic prompts through an attention-based key-query mechanism, specifically for continual learning in computer vision. This study directly addresses the issue of prompt generation in the context of large-scale pre-trained models and presents a solution for improving accuracy without the need for data rehearsal. Although it may not exclusively focus on 'hard prefix prompts', the concept of input-conditioned prompt components is a valuable contribution to prompt engineering studies, making it almost entirely pertinent to the field."
-prompt learning with optimal transport for vision-language models,gpt-4-1106-preview,9,"The paper is highly relevant to prompt engineering, as it directly addresses the challenge of creating efficient prompts for vision-language models, which is a subset of the broader field of prompt engineering. The utilization of optimal transport to match vision and text modalities is a novel approach in learning multiple prompts, which aligns with the topic of systematic review on hard prefix prompts by exploring alternative strategies to enhance prompt effectiveness. The only reason the rating is not a full 10 is that the abstract does not explicitly mention 'hard prefix prompts', suggesting that the study might not be solely focused on that specific aspect of prompt engineering."
-idpg: an instance-dependent prompt generation method,gpt-4-1106-preview,9,"The provided abstract directly pertains to prompt engineering within the realm of NLP transfer learning, making it highly relevant. The novel method of Instance-Dependent Prompt Generation (IDPG) is a significant contribution to prompt engineering because it introduces variability and personalization of prompts for different input instances. The effectiveness of this method is demonstrated through experiments on various NLU tasks, situating the paper at the forefront of prompt engineering research. The reason for not awarding a perfect 10 is that the study does not explicitly mention 'hard prefix prompts', but the concept of IDPG seems inherently related to the engineering of task-specific prompts which would include hard prefix prompts among others."
-continual prompt tuning for dialog state tracking,gpt-4-1106-preview,7,"The abstract provided discusses 'Continual Prompt Tuning' which is a method for task adaptation in dialog systems that includes learning and storing prompt token embeddings to prevent catastrophic forgetting. Although not directly stated as 'hard prefix prompts,' the methodology is closely related to prompt engineering as it involves the manipulation of prompts to improve the performance of a pre-trained model in continual learning scenarios. This concept is relevant to the study of prompt engineering because it explores ways to effectively utilize prompts in a dynamic and evolving context, which is a crucial aspect of advanced prompt engineering strategies. However, the rating is not a full 10 because it is not directly focused on 'hard prefix prompts' specifically, which is narrower in scope compared to the broader concept of prompt engineering."
-exploring the universal vulnerability of prompt-based learning paradigm,gpt-4-1106-preview,9,"The abstract describes a study that directly investigates the vulnerabilities of the prompt-based learning paradigm, which is highly relevant to prompt engineering. The focus on triggers that exploit these vulnerabilities is critical for understanding the limitations and potential risks associated with prompts in language models. While not focused on creating or optimizing prompts, it is fundamentally related to their integrity and security, which is an essential aspect of prompt engineering studies."
-how many data points is a prompt worth?,gpt-4-1106-preview,9,"The abstract describes a study focusing on comparing the effectiveness of using prompts versus generic model heads in fine-tuning pretrained models for classification tasks. It specifically aims to quantify the benefits of prompts when working with limited data. Since the study investigates the impact of prompting on model performance across different tasks and data sizes, it contributes valuable insights to the field of prompt engineering. The high rating reflects the direct relevance of the findings to understanding how prompts can improve machine learning models, which is a core aspect of prompt engineering research. However, the rating is not a full 10 because it does not cover the breadth of prompt engineering, such as the design and optimization of prompts, which also includes areas beyond fine-tuning for classification tasks."
-knowprompt: knowledge-aware prompt-tuning with synergistic optimization for relation extraction,gpt-4-1106-preview,8,"The paper presents an advancement in prompt-tuning for the specific application of relation extraction. It introduces KnowPrompt, a technique that effectively incorporates domain knowledge into prompt templates, which is highly relevant to studies on prompt engineering. Although the focus is on relation extraction and not hard prefix prompts, the concepts of knowledge-aware prompts and learnable virtual type words are innovative contributions to the field of prompt-tuning as a whole. The lower score is because it does not directly address 'hard prefix prompts' as described in the original broad request, but it is still significantly relevant to the broader subject of prompt engineering."
-knowledgeable prompt-tuning: incorporating knowledge into prompt verbalizer for text classification,gpt-4-1106-preview,9,"The paper directly relates to the field of prompt engineering by introducing a novel approach to improve prompt tuning performance for text classification tasks. This approach involves integrating external knowledge into the verbalizer component of the tuning process, which is a specific technique within the broader area of prompt engineering. This is highly relevant as it targets one of the fundamental challenges in the field, which is to optimize the interaction between pre-trained language models and task-specific prompts. The rating is not a full 10 because it does not cover 'hard prefix prompts' specifically, but focuses more broadly on knowledgeable prompt-tuning, which may or may not include hard prefixes."
-pro-tuning: unified prompt tuning for vision tasks,gpt-4-1106-preview,8,"The abstract discusses the concept of prompt tuning, termed 'Pro-tuning', which is highly relevant to the field of prompt engineering as it applies prompt-based learning principles to computer vision tasks. While the principle is derived from work in natural language processing, the adaptation to vision models suggests a cross-disciplinary application of prompt engineering techniques, which is pertinent to the broader study of how prompts can be engineered for different types of models across fields. The relevance is not rated a full 10 as the study is specific to computer vision and may not cover all aspects of 'hard prefix prompts' in the context of a systematic review, which would more generally encompass various modalities and tasks."
-interactive and visual prompt engineering for ad-hoc task adaptation with large language models,gpt-4-1106-preview,9,"The abstract provided outlines a study that is highly relevant to prompt engineering. It describes the development of PromptIDE, a tool that facilitates the experimentation and optimization of prompts for neural language models. The workflow mentioned is designed to enhance prompt creation and performance evaluation before deployment, which is central to the field of prompt engineering. Although it doesn't explicitly mention 'hard prefix prompts,' the focus on prompt variations and performance signifies a close connection to the concept of prompt design and engineering. Thus, the relevance to prompt engineering is very high, but not a perfect 10 due to the missing specific mention of 'hard prefix prompts'."
-promptmaker: prompt-based prototyping with large language models,gpt-4-1106-preview,8,"The content of the article appears to be highly relevant to prompt engineering as it discusses prototyping ML-powered features using natural language prompts, which is a core component of prompt engineering. The emphasis on the experiences of industry professionals indicates insights into practical applications and challenges of prompt-based approaches. The article's focus on broadening access, speeding up prototyping, and improving collaboration directly relates to the evolution of prompt engineering techniques. However, the specific term 'hard prefix prompts' is not mentioned, which might suggest that the study doesn't exclusively focus on that subtype of prompts within prompt engineering. Therefore, the rating is an 8 instead of a perfect 10."
-dynamic prompt learning via policy gradient for semi-structured mathematical reasoning,gpt-4-1106-preview,7,"The abstract describes a study that focuses on enhancing the performance of pre-trained language models like GPT-3 on mathematical reasoning tasks by using a novel approach called PromptPG. This approach uses policy gradient to optimize the selection of in-context examples for prompt construction, which is a core aspect of prompt engineering. While the study is not directly about 'hard prefix prompts', it addresses the broader concept of prompt optimization for improving model performance. Therefore, it is relevant to prompt engineering but not specifically focused on a comprehensive systematic review on hard prefix prompts."
-conversing with copilot: exploring prompt engineering for solving cs1 problems using natural language,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering as it investigates the use of natural language interactions to guide GitHub Copilot, an AI code generation tool, in solving programming problems. It focuses on how changes to the wording of a problem can impact the AI's ability to generate correct code, which is at the core of prompt engineering techniques. The fact that the study includes an empirical evaluation of Copilot's performance across a dataset of programming problems and discusses the potential of prompt engineering as a learning tool underscores its relevance to the field. The rating is not a perfect 10 because the study is specific to the domain of programming problem solving and the tool GitHub Copilot, and while it is a significant component of prompt engineering, there may be additional facets of prompt engineering in broader contexts that are not covered by this study."
-"zeroprompt: scaling prompt-based pretraining to 1, 000 tasks improves zero-shot generalization",gpt-4-1106-preview,8,"The abstract discusses a multitask pretraining approach named ZeroPrompt which is highly relevant to prompt engineering as it directly relates to enhancing the performance of zero-shot learning using prompts. It also mentions the introduction of a new prompting method that utilizes a genetic algorithm to discover the best prompts for unseen tasks. This is a significant contribution to the field of prompt engineering. Despite not mentioning 'hard prefix prompts,' the focus on task scaling and prompting methods in zero-shot scenarios are pertinent to prompt engineering study. The relevance rating is not a full 10 because the abstract does not explicitly discuss the comprehensive systematic review or focus exclusively on 'hard prefix prompts,' which are specified in the prompt."
-fantastically ordered prompts and where to find them: overcoming few-shot prompt order sensitivity,gpt-4-1106-preview,9,"The study directly investigates the effect of prompt order sensitivity and devises a method to overcome it in few-shot settings, which is highly relevant to prompt engineering. It leverages the generative capabilities of language models to improve the performance of GPT-family models without the need for additional data, indicating a significant contribution to the field of prompt engineering. The deduction of one point is due to the fact that it focuses specifically on order sensitivity and not on the entire scope of hard prefix prompts, but it is still highly pertinent."
-ptr: prompt tuning with rules for text classification,gpt-4-1106-preview,8,"The document presents research on 'prompt tuning with rules' (PTR), which directly relates to the field of prompt engineering study. It involves constructing prompts with sub-prompts and integrating logic rules, which is a form of hard prefix prompt design in the establishment of many-class text classification tasks. The concept of using human prior knowledge and pre-trained language models (PLMs) in prompt construction is relevant to the study of how prompts can guide or improve the performance of machine learning models. However, the rating is not a perfect 10 because the abstract is missing (listed as 'nan'), which suggests that there may be additional context to the relevance that is not provided in the TLDR summary."
-iteratively prompt pre-trained language models for chain of thought,gpt-4-1106-preview,9,"The abstract describes an innovative approach to improving the capability of Pre-trained Language Models (PLMs) for tasks that require multi-step reasoning, an aspect that is central to prompt engineering. This iterative prompting framework that progressively elicits relevant knowledge and dynamically synthesizes prompts based on contexts directly pertains to the field of prompt engineering, as it looks at refining the prompts that are given to language models in order to achieve better performance on complex tasks. While it does not specifically mention 'hard prefix prompts', which is part of the original query, the idea of creating dynamic and context-aware prompts is highly relevant to the study of prompt design and engineering."
-black-box prompt learning for pre-trained language models,gpt-4-1106-preview,9,"The paper presents a method for adapting pre-trained language models (PLMs) through black-box discrete prompt learning without needing access to the model's parameters or gradients, which is highly relevant to the field of prompt engineering. The study focuses on efficient optimization of discrete prompts and even though it does not specifically mention 'hard prefix prompts', the concept of discrete prompts is within the scope of prompt engineering. The proposed black-box setting for secure interaction between cloud and edge devices is innovative and directly linked to the adaptability of PLMs for various tasks using prompts. The paper's significant improvements across benchmarks and in-depth case studies on prompt characteristics are valuable contributions to the study of prompt engineering."
-visual prompt tuning for test-time domain adaptation,gpt-4-1106-preview,8,"The presented work is highly relevant to prompt engineering study as it introduces a method named 'Data-efficient Prompt Tuning' (DePT), which is a direct application of prompt engineering to adapt models during test-time. It focuses on tuning prompts as a parameter-efficient way to adjust model representation to new data domains. Although the term 'prompt' in the context of this paper refers to visual prompts in a vision Transformer, which differs from textual prompts commonly discussed in NLP prompt engineering, the concept of adjusting a small set of parameters for domain adaptation is aligned with the principles of prompt engineering. The reason for not being a 10 is that the term 'hard prefix prompts' was not mentioned, which suggests that the exact topic of the prompt may not be covered in its entirety."
-repository-level prompt generation for large language models of code,gpt-4-1106-preview,9,"The paper presents a framework that directly contributes to the field of prompt engineering by generating example-specific prompts for large language models of code. The fact that this system uses the context of the entire repository and does not rely on the internal weights of the models aligns well with the principles of prompt engineering, where context and relevance are crucial for effective prompt design. The relevance to engineering study is slightly less than perfect only because it is specific to code generation and not the broader application of prompts in general large language models."
-gppt: graph pre-training and prompt tuning to generalize graph neural networks,gpt-4-1106-preview,9,"The paper's abstract describes a novel transfer learning framework, which includes the concept of prompt tuning to generalize Graph Neural Networks (GNNs) for downstream tasks. It is highly relevant to prompt engineering study as it involves modifying prompts (by creating token pairs) to influence the behavior of the pre-trained GNNs without extensive fine-tuning. This approach aligns with the practice of designing prompts to effectively elicit desired responses from pre-trained models, which is central to prompt engineering. The only reason it doesn't receive a full 10 is because the paper is specifically about the domain of graph data and might not cover other aspects or generalities of prompt engineering."
-visual prompt tuning for generative transfer learning,gpt-4-1106-preview,7,"The provided abstract discusses the topic of prompt tuning which is relevant to prompt engineering, a field that deals with optimizing the input given to AI models to elicit better performance. Although the context of the abstract is specific to the domain of generative image models and visual prompts, which is slightly different from hard prefix prompts in textual domain, the general principles and techniques of prompt tuning can be considered applicable across multiple domains. Hence, the content is substantially relevant to prompt engineering, especially in demonstrating knowledge transfer and domain adaptation which are significant challenges in the field. The lower rating reflects the domain-specific focus on visual transformers rather than a general treatment of all forms of prompt engineering."
-prompt vision transformer for domain generalization,gpt-4-1106-preview,8,"The abstract describes a study that involves prompt learning with vision transformers for the purpose of domain generalization. Although the study does not specifically mention 'hard prefix prompts', it does focus on a prompt-based method (DoPrompt) for improving the performance of ViTs in unseen domains. This is relevant to prompt engineering because it is a direct application of using prompts to enhance model generalization. The relevance rating is not a full 10 because the study does not directly address 'hard prefix prompts' as specified in the initial prompt, but it is closely related and contributes to the field of prompt engineering."
-prompt tuning for discriminative pre-trained language models,gpt-4-1106-preview,8,"The paper presents DPT, a novel framework for prompt tuning in the context of discriminative pre-trained language models, which is highly relevant to the field of prompt engineering as it explores how to adapt PLMs to different tasks. While it does not directly address 'hard prefix prompts', the concept of prompt tuning is central to prompt engineering. The study's systematic approach to reformulating NLP tasks to suit discriminative PLMs and its comprehensive experiments align closely with prompt engineering methodologies. Thus, the paper contributes valuable insights to the broader field of prompt engineering, even if it is not specialized in hard prefix prompts specifically. The rating is not a full 10 due to the abstract's lack of direct reference to hard prefix prompts."
-incremental prompting: episodic memory prompt for lifelong event detection,gpt-4-1106-preview,7,"The presented abstract is relevant to prompt engineering study to a considerable extent because it introduces 'Episodic Memory Prompts (EMP)', which is a technique relevant to prompt engineering. It contributes to the field by addressing the issue of catastrophic forgetting and suggesting a prompt-based method to retain task-specific knowledge in a model that is being continually updated. This is pertinent as it deals with prompt optimization and its role in lifelong learning, both of which fall under the broad umbrella of prompt engineering. However, it is not a 'systematic review on hard prefix prompts' specifically; rather, it is an empirical study about a novel approach to prompting. Hence, the rating is not a full 10, as it does not exactly match the premise of a 'comprehensive systematic review on hard prefix prompts.'" -prompt-matched semantic segmentation,gpt-4-1106-preview,7,"While the abstract discusses 'prompt learning' in the context of visual foundation models and semantic segmentation, which is somewhat related to the concept of 'prompt engineering,' it refers to a different domain (visual tasks rather than text-based tasks). The relevance to prompt engineering studies is indirect, as the principles of learning prompts for tasks could potentially be analogous across domains. However, the term 'prompt' in this context does not directly correspond to 'hard prefix prompts' typically discussed in language models and prompt engineering. The methodology and application are related in a broader sense to the concept of optimizing pre-trained models using prompts, so it receives a medium-high relevance rating." -multitask vision-language prompt tuning,gpt-4-1106-preview,9,"The abstract provides a detailed insight into an advanced application of prompt engineering—specifically in the area of multitask vision-language prompt tuning. It is highly relevant to the study of prompt engineering because it discusses a method for improving the performance of vision-language models through task-specific learned prompt vectors and shares empirical evidence of cross-task benefits. Furthermore, the concept of transferable prompts and their effect on model generalization is directly pertinent to the prompt engineering domain. The only reason the rating isn't a full 10 is because the prompt engineering here is specialized for vision-language tasks, which might be slightly narrower in focus than the broader concept of 'hard prefix prompts' mentioned in the initial prompt." -memory-assisted prompt editing to improve gpt-3 after deployment,gpt-4-1106-preview,9,"The relevance to prompt engineering is very high, as this study focuses on refining the interaction between users and GPT-3 through prompt modification using memory-assisted techniques. The study addresses improving the accuracy of responses from GPT-3 by using recorded instances of misunderstandings and user feedback to inform better prompt construction. This falls directly within the realm of prompt engineering, which is the practice of designing prompts to elicit better performance from language models." -openprompt: an open-source framework for prompt-learning,gpt-4-1106-preview,9,"The given abstract reviews a toolkit called OpenPrompt designed for prompt-learning in natural language processing, which is highly relevant to the study of prompt engineering. 
Prompt engineering deals with how to best structure and adapt prompts to get effective responses from language models. While it does not specifically mention 'hard prefix prompts', it offers a framework that likely supports experimenting with various prompt strategies, including hard prefixes. Therefore, the relevance to prompt engineering is high, but not maximum as it does not directly address 'hard prefix prompts'." -cpt: colorful prompt tuning for pre-trained vision-language models,gpt-4-1106-preview,8,"The abstract describes an innovative approach called Cross-modal Prompt Tuning (CPT) for pre-trained vision-language models (VL-PTMs), which involves a form of prompt engineering by utilizing color-based co-referential markers in image and text to reformulate visual grounding. This is highly relevant to the study of prompt engineering as it presents a specific instance where prompts are engineered to bridge the gap between pre-training and fine-tuning, enhancing the model's performance on downstream tasks with few-shot or zero-shot learning. Although the study focuses specifically on vision-language models and doesn't address hard prefix prompts in general, the concept of tailoring prompts for better performance is directly applicable to the field of prompt engineering. Thus, the rating reflects its high relevance due to its innovative approach to prompt design, with some points deducted for not directly addressing the broader topic of hard prefix prompts." -differentiable prompt makes pre-trained language models better few-shot learners,gpt-4-1106-preview,8,"The paper presents a method (DART) for enhancing the few-shot learning capabilities of small language models without traditional prompt engineering. Although it claims to bypass 'any prompt engineering,' the method still inherently deals with prompts by differentially optimizing prompt templates. Therefore, it is relevant to the study of prompt engineering since it explores an alternative avenue for prompt manipulation. The rating is not a full 10 because the study appears to focus more on the model's few-shot learning improvement rather than prompting techniques themselves." -adapting language models for zero-shot learning by meta-tuning on dataset and prompt collections,gpt-4-1106-preview,8,"The study describes a process of 'meta-tuning' pre-trained language models on a variety of datasets and unifying label descriptions in a QA format to optimize them for zero-shot learning. While not specifically addressing 'hard prefix prompts,' it heavily involves the concept of using prompts to improve the performance of language models in tasks they were not explicitly trained for. This is highly relevant to the field of prompt engineering, as it explores how different methods of providing input to models (in this case, through meta-tuning) can result in better alignment with desired outcomes. The TLDR further confirms the study's relevance to prompt engineering by emphasizing the improved performance on answering prompts. However, given that it does not directly study hard prefix prompts, the rating is not a full 10." -align and prompt: video-and-language pre-training with entity prompts,gpt-4-1106-preview,4,"The abstract describes a pre-training framework for video-and-language tasks, focusing on cross-modal alignment and introducing a novel prompting entity modeling concept. 
Although the study involves 'entity prompts,' it primarily concentrates on video-text interaction rather than exploring 'hard prefix prompts' as may be suggested by prompt engineering in a language model context. The relevance to prompt engineering is secondary and indirect, mainly connected through the novel use of prompts for entity modeling within a multimodal framework, not as a comprehensive study of prompt engineering itself." -prompt-learning for fine-grained entity typing,gpt-4-1106-preview,9,"The abstract describes a study focused on prompt-learning, which is directly related to the field of prompt engineering. It highlights the use of language prompts to tune pre-trained language models for specific tasks, which is an essential component of research within prompt engineering. The relevance is high because the work specifically investigates prompt-learning methodologies and their applications, including a new self-supervised strategy for zero-shot scenarios, directly contributing to the understanding and advancement of how prompts can improve model performance on a granular level. The only detail preventing a perfect score is the lack of explicit mention of 'hard prefix prompts,' but the described study is likely to have significant implications for prompt engineering in general." -a good prompt is worth millions of parameters: low-resource prompt-based learning for vision-language models,gpt-4-1106-preview,9,"The abstract clearly pertains to the study of prompt engineering, as it discusses the utilization and effects of prompts in few-shot learning tasks for vision-language models. The research focuses on how different types of prompts (noisy versus hand-crafted) influence the learning process and performance of the model. The mention of 'prefix language modeling' also directly relates to the prompt engineering study, specifically regarding hard prefix prompts. The high score reflects the direct relevance to the study of how prompts can improve or affect the learning capabilities of AI models, despite not exclusively being about 'hard prefix prompts', hence not a perfect score." -why do pretrained language models help in downstream tasks? an analysis of head and prompt tuning,gpt-4-1106-preview,6,"The paper's abstract describes an analysis of head tuning and prompt tuning, which are highly relevant to the study of prompt engineering as a concept. Prompt tuning particularly involves the process of adjusting prompts to achieve better performance in downstream tasks, which is a subset of prompt engineering. However, the abstract suggests a specific focus on the theoretical underpinnings of why pretrained language models are effective, using generative models like HMMs for analysis. The relevance to prompt engineering is therefore significant but not completely aligned, as it does not explicitly address the systematic review of 'hard prefix prompts' or the practical aspect of designing prompts, which might be expected from a 'comprehensive systematic review on hard prefix prompts'." -on transferability of prompt tuning for natural language processing,gpt-4-1106-preview,9,"The abstract is highly relevant to prompt engineering as it discusses prompt tuning (PT), which is an efficient method in natural language processing to utilize pre-trained language models with adjustable soft prompts. The study's focus on the transferability of these soft prompts and the implications for efficiency and performance improvements directly relates to the core concepts of prompt engineering. 
They explore how different prompts affect various models and how that can be harnessed to enhance the PT process. Although the study is not strictly about 'hard prefix prompts' as originally sought, the relevance to prompt engineering is significant, thus the high rating. The explicit mention of 'trained soft prompts' and 'prompt transfer' indicates a direct relationship to engineering the inputs to the language models." -pada: example-based prompt learning for on-the-fly adaptation to unseen domains,gpt-4-1106-preview,9,"The paper detailed in the prompt directly pertains to prompt engineering, specifically in the application of 'example-based autoregressive Prompt learning for on-the-fly Any-Domain Adaptation'. It focuses on augmenting the ability of the T5 language model to generate prompts that effectively adapt to unseen domains without the need for prior examples or knowledge about the target domain, which is a crucial aspect of prompt engineering. The relevance rating is high because it directly addresses the generation and utilization of prompts to enhance the adaptability and performance of NLP systems in novel contexts, which is central to the study of prompt engineering." -nsp-bert: a prompt-based few-shot learner through an original pre-training task —— next sentence prediction,gpt-4-1106-preview,9,"The paper described pertains directly to prompt engineering, as it deals with a prompt-based few-shot learner and demonstrates how prompts can be used in conjunction with the BERT model's original pre-training task of Next Sentence Prediction (NSP). The relevance to prompt engineering is clear since it discusses an innovative approach to prompts at the sentence level, contrasting with the common token-level prompts. Furthermore, the paper's focus on how prompt-based learning can be effective in different NLP tasks, and its exploration of factors like the pre-training corpus on the few-shot learning capabilities of the model, are pertinent issues within the study of prompt engineering." -lightner: a lightweight generative framework with prompt-guided attention for low-resource ner,gpt-4-1106-preview,8,"The paper discusses the use of 'prompt-guided attention' within a generative framework for Named Entity Recognition (NER) in low-resource settings. This approach is quite relevant to prompt engineering, as it involves the manipulation of continuous prompts to improve the performance of a pre-trained language model on a specific task, without the need for extensive re-training or large datasets. Although the paper is specifically about NER and not about the broader topic of 'hard prefix prompts', the concept of integrating prompts into the attention mechanism is very much related to the study of how prompts can be effectively used to direct the focus of language models. The rating is not a full 10 because it concentrates on a specific application (NER) and does not cover the entire breadth of prompt engineering, which could also include other tasks and models." -pada: a prompt-based autoregressive approach for adaptation to unseen domains,gpt-4-1106-preview,8,"The abstract describes PADA, a prompt-based approach, which is directly related to prompt engineering as it involves the generation of unique prompts to adapt to unseen domains in NLP tasks. The approach's autoregressive nature and its reliance on Domain Related Features (DRFs) suggest a nuanced and advanced application of prompt engineering. 
While the study seems to focus more on domain adaptation rather than hard prefix prompts specifically, the technique's success in outperforming other approaches highlights its relevance to the broader field of prompt engineering and its potential contributions to the prompt engineering literature. The paper could provide valuable insights into designing effective prompts for domain adaptation, which is a subset of the overall prompt engineering research area." -why johnny can’t prompt: how non-ai experts try (and fail) to design llm prompts,gpt-4-1106-preview,9,"The study described in the title and abstract addresses a core aspect of prompt engineering by investigating whether non-AI experts are capable of designing effective prompts for large language models (LLMs). It directly focuses on the challenges and learnability of prompt design, which is highly relevant to the field of prompt engineering. The rating is not a perfect 10 because the study appears to focus on the end-user experience and may not delve into technical aspects or methodologies of prompt crafting, such as hard prefix prompts, as much as a more narrowly focused technical paper would." -the power of prompt tuning for low-resource semantic parsing,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering as it specifically investigates 'prompt tuning', which is a technique within the domain of prompt engineering. The focus on how prompt tuning can enhance the performance of language models for the semantic parsing task suggests that this paper contributes to the understanding and application of prompt engineering. However, it may not cover all aspects of prompt engineering, such as the creation or manipulation of hard prompts, therefore the rating is not a full 10." -the biases of pre-trained language models: an empirical study on prompt-based sentiment analysis and emotion detection,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering as it focuses on the biases of PLMs when used in prompt-based tasks such as sentiment analysis and emotion detection. These findings are directly applicable to prompt engineering since the biases in label-word mappings, prompt templates, formation of prompts, and others impact how prompts are engineered for effective interaction with PLMs. The high rating is due to the direct investigation and empirical study of issues that would be fundamental to anyone engaged in engineering prompts for PLMs." -adaprompt: adaptive prompt-based finetuning for relation extraction,gpt-4-1106-preview,8,"The paper presents an approach that is highly relevant to prompt engineering as it involves the novel use of adaptive prompts in the context of fine-tuning language models for relation extraction, a specific NLP task. The adaptive label words selection mechanism directly relates to how prompts are engineered to handle complex label spaces, and the auxiliary entity discriminator may be considered a form of prompt that encourages the model to concentrate on certain aspects of input data. Thus, the relevance to prompt engineering studies is significant, though not perfect, as the paper might not cover the entire breadth of prompt engineering topics." -prompt waywardness: the curious case of discretized interpretation of continuous prompts,gpt-4-1106-preview,9,"The study addresses a central issue in prompt engineering by exploring the relationship between continuous and discrete prompt formats and their effectiveness in solving language tasks. 
The investigation into the 'waywardness' of prompt behavior is highly relevant to developing more robust and interpretable prompting methods, which aligns closely with the field of prompt engineering. The only reason the rating is not a full 10 is that the study does not specifically mention 'hard prefix prompts' but rather deals with continuous prompts more broadly." -sentiprompt: sentiment knowledge enhanced prompt-tuning for aspect-based sentiment analysis,gpt-4-1106-preview,8,"The study presents a method of enhancing language model performance for aspect-based sentiment analysis through the use of customized prompts that incorporate sentiment knowledge. This directly relates to the engineering of prompts, as it involves designing and applying specialized prompt structures (consistency and polarity judgment templates) to improve task-specific model outputs. While the study is not just about 'hard prefix prompts', it still involves the systematic design of prompts to encode task-specific knowledge, which is a significant component of prompt engineering. Therefore, it gets a high relevance score but is not a perfect match due to the specificity of 'hard prefix prompts' not being the central focus." -automated cross-prompt scoring of essay traits,gpt-4-1106-preview,7,"The abstract describes a study on cross-prompt automated essay scoring, which is not directly related to 'hard prefix prompts' or prompt engineering. However, the methodology involves training models to understand and score various traits of essay text, likely making use of several prompt design considerations to generalize across different essay prompts. While not explicitly focused on prompt engineering, the research indirectly involves the creation of prompts that can elicit features used for trait-focused scoring. Thus, the relevance to prompt engineering is moderate due to its indirect but significant implications for designing prompts that can be effectively utilized by AES systems in various contexts." -masterkey: automated jailbreak across multiple large language model chatbots,gpt-4-1106-preview,8,"The abstract discusses a study related to 'jailbreak' attacks on Large Language Models (LLMs), which directly involve the manipulation of prompts to achieve unintended outcomes. This is highly relevant to the field of prompt engineering because it pertains to understanding how prompts can be engineered to exploit or circumvent the intended use of LLMs. Although the specific term 'hard prefix prompts' is not mentioned, the concept of automated jailbreak prompt generation suggests a close relationship with prompt engineering techniques. The research's emphasis on reverse-engineering defensive strategies and developing countermeasures is also pertinent to the design and analysis of prompts in LLMs. The rating is not a full 10 as the abstract doesn't directly address 'hard prefix prompts' specifically, but rather the broader issue of jailbreak prompts." -gptfuzzer: red teaming large language models with auto-generated jailbreak prompts,gpt-4-1106-preview,8,"The 'gptfuzzer: red teaming large language models with auto-generated jailbreak prompts' study is highly relevant to prompt engineering, but with a specific focus on security and adversarial testing. The research presented automates the generation of jailbreak prompts, which are a subset of prompts aimed at testing the robustness and safety of LLMs.
This aspect makes it relevant as it deals with the automated creation and effectiveness of hard prefix prompts, tasks that closely relate to prompt engineering. Nonetheless, it does not cover the broader aspects of prompt engineering, such as optimizing prompts for constructive tasks, rephrasing for better understanding, or improving human-AI interaction, hence the rating is not a full 10." -autodan: generating stealthy jailbreak prompts on aligned large language models,gpt-4-1106-preview,8,"The paper directly deals with the issue of creating prompts that can influence the behavior of Large Language Models (LLMs), which is a subset of prompt engineering. Although it focuses on generating adversarial or 'jailbreak' prompts, rather than constructive hard prefix prompts, the techniques and insights from such a study could be highly relevant to prompt engineering, particularly in understanding and preventing unintended responses from LLMs. However, the relevance is not a perfect 10 as the study's primary goal is to address security concerns rather than the broader scope of prompt engineering for beneficial use cases." -developing an accuracy-prompt toolkit to reduce covid-19 misinformation online,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it explores various accuracy prompts that could be used to encourage the sharing of accurate information online, particularly in the context of COVID-19. The effectiveness of different prompts and their impact on behavior are central to the field of prompt engineering. However, the specificity of the prompts to the domain of misinformation may not encompass the full breadth of prompt engineering, which can also include prompts for eliciting information, generating text, or other interactions in user interfaces beyond accuracy checking." -not what you've signed up for: compromising real-world llm-integrated applications with indirect prompt injection,gpt-4-1106-preview,8,"The abstract presents an in-depth look at how natural language prompts can be used maliciously to exploit LLM-integrated applications, which is closely relevant to the field of prompt engineering. It reveals new attack vectors in the form of Indirect Prompt Injection and stresses the importance of understanding prompts from a security perspective. While it does not focus solely on 'hard prefix prompts', the study of adversarial prompting is critical to the broader domain of prompt engineering where designing robust and secure prompts is key. Hence, the information is highly relevant, though not exclusively centered on hard prefix prompting methodologies." -hard prompts made easy: gradient-based discrete optimization for prompt tuning and discovery,gpt-4-1106-preview,9,"The abstract details a study focused on prompt engineering, specifically regarding the optimization of 'hard' prompts, which are highly relevant to the field of prompt engineering. It introduces a method for automatically generating and optimizing these prompts, which aligns closely with the study of engineering prompts that are interpretable and easily manipulated. Furthermore, it has applications in both text-to-image and text-to-text models, indicating a broad relevance to different aspects of prompt engineering. The only reason for not giving a full score of 10 is that the abstract does not explicitly mention a 'systematic review', which suggests that the work may be more focused on original research or methodology rather than reviewing existing literature on hard prefix prompts."
-more than you've asked for: a comprehensive analysis of novel prompt injection threats to application-integrated large language models,gpt-4-1106-preview,8,"The paper discusses 'prompt injection threats' in Large Language Models (LLMs) which are closely related to prompt engineering as it concerns how prompts are constructed and how they can be manipulated. Prompt engineering involves the strategic creation of prompts to guide the behavior of LLMs, and understanding prompt injection threats is crucial for developing robust and secure prompt engineering methods. Although the paper focuses more on security threats than on prompt engineering in general, the systematic analysis and discussion of these threats are highly relevant for developing better prompt engineering practices." -catastrophic jailbreak of open-source llms via exploiting generation,gpt-4-1106-preview,7,"The abstract details research on exploiting large language models (LLMs) through what is termed 'generation exploitation attack', by altering decoding methods. This is relevant to prompt engineering since understanding how different decoding methods and adversarial prompts affect the model's outputs can inform the development of better prompts. Moreover, the work's exploration of alignment methods to counteract the attack implies the significance of structured prompts to maintain LLMs' alignment with human values. While the study is not focused on 'hard prefix prompts' explicitly, it deals with model manipulations related to input prompts, hence the rating of 7 for its partial but significant relevance to prompt engineering study." -jailbreak and guard aligned language models with only few in-context demonstrations,gpt-4-1106-preview,9,"The abstract details an investigation into the application of In-Context Learning (ICL) for manipulating language models, which falls under the domain of prompt engineering. The study assesses the ability to guide language models towards either harmful or safe responses by providing specific examples or 'prompts'. Although the main focus is on the security aspect of language models, the techniques mentioned—In-Context Attack (ICA) and In-Context Defense (ICD)—are directly relevant to prompt engineering as they involve crafting prompts that significantly alter a model's outputs. Hence, the relevance to prompt engineering is high, but since the study seems to be more targeted at security (alignment and guarding against jailbreaking) rather than on prompt engineering in general, the rating is not a perfect 10." -prompt as triggers for backdoor attack: examining the vulnerability in language models,gpt-4-1106-preview,7,"The paper is relevant to prompt engineering as it discusses utilizing the prompt itself as a potential vector for backdoor attacks in language models, which falls under prompt manipulation and its potential risks. This indicates a direct relationship to the design and usage of prompts within AI models, showing the consequences that can arise from prompt engineering. However, it may not address the broader scope of prompt engineering techniques and their applications directly, focusing instead on the security aspect and vulnerability of the models to prompt-based attacks." -notable: transferable backdoor attacks against prompt-based nlp models,gpt-4-1106-preview,8,"The abstract describes a study that is highly relevant to prompt engineering as it specifically addresses vulnerabilities in prompt-based learning models. 
The focus on backdoor attacks that are independent of downstream tasks and prompting strategies indicates a notable concern for the prompt engineering domain, considering the increasing utilization of such models in various NLP tasks. The high relevance score is due to the direct relation to prompt-based models' security, an aspect that is crucial for understanding and improving prompt engineering techniques. However, the score is not a full 10, as the primary focus is on security, and while related, it does not exclusively cover the broader range of prompt engineering topics such as prompt design or optimization." -prompts should not be seen as secrets: systematically measuring prompt extraction attack success,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering studies as it addresses the security aspect of prompt-based control of large language models. It directly explores how the prompts, which are integral to shaping model outputs, can be uncovered through extraction attacks. This is crucial for understanding the integrity and confidentiality of proprietary prompting methods, although it is a specialized focus on prompt security rather than the broader field of designing or optimizing prompts for general use." -sam on medical images: a comprehensive study on three prompt modes,gpt-4-1106-preview,7,"The study described in the title and abstract does revolve around the use of prompts—in this case, to guide a machine learning model (SAM) for the task of image segmentation. The research explores different prompt modes, specifically in relation to the performance of zero-shot generalization on medical images, which is a form of prompt engineering in the context of 'foundation models'. The relevance is not at the maximum because the prompt engineering mentioned here mostly refers to the application of prompt types like bounding boxes, rather than the systematic study of 'hard prefix prompts' that might be involved in other areas like NLP or more complex interactions. Nevertheless, the research still contributes to the field of prompt engineering by investigating how different prompts affect model performance, thus the rating is above average." -tdnn: a two-stage deep neural network for prompt-independent automated essay scoring,gpt-4-1106-preview,7,"The abstract pertains to the development of a deep neural network for automated essay scoring that is designed to work under a prompt-independent setting. This is somewhat relevant to prompt engineering as it relates to the broader field of natural language processing and the automated response to prompts (essays). However, the system is not centered around the creation or manipulation of prompts itself (prompt engineering), but rather on evaluating responses to prompts, which is indirectly related to understanding the prompts' influence on the response. Therefore, the relevance is notable but not direct." -modeling prompt adherence in student essays,gpt-4-1106-preview,6,"The study is only moderately relevant to prompt engineering. It focuses on modeling prompt adherence in student essays and introduces a corpus and scoring method, which could potentially inform the development of prompts in educational settings. However, prompt adherence is just one aspect of prompt engineering, and the study's scope is limited to student essays rather than a broader application within engineering prompts for AI or human-computer interactions. 
Therefore, while relevant, it does not wholly represent prompt engineering as a comprehensive field." -an llm can fool itself: a prompt-based adversarial attack,gpt-4-1106-preview,9,"The study directly addresses prompt engineering by proposing PromptAttack, a method that uses a prompt-based approach to generate adversarial attacks against large language models (LLMs). The study's focus on how prompts can be engineered to manipulate LLM outputs is highly relevant to the field of prompt engineering. The only reason it does not receive a full score is that the study is focused specifically on adversarial attacks rather than a broader range of prompt engineering applications." -llm-grounded diffusion: enhancing prompt understanding of text-to-image diffusion models with large language models,gpt-4-1106-preview,7,"The study focuses on enhancing the understanding of complex prompts in text-to-image diffusion models by incorporating a large language model, which relates to prompt engineering as it involves interpreting and acting upon language input. While the study is not explicitly about 'hard prefix prompts' in the context of comprehensive systematic reviews, the improvement of prompt understanding and the interaction between language models and diffusion models is relevant to the broader field of prompt engineering. Therefore, the relevance rating is relatively high, but not maximum due to the lack of direct focus on 'hard prefix prompts' specifically." -prompt distillation for efficient llm-based recommendation,gpt-4-1106-preview,7,"The provided abstract directly relates to prompt engineering in the context of improving the efficiency of large language models (LLMs) for recommendation systems. Prompt distillation, as discussed in the abstract, is a technique aimed at refining the use of prompts in LLMs, which falls within the scope of prompt engineering. Although the term 'hard prefix prompts' is not explicitly mentioned, the concept of distilling discrete prompts to continuous vectors is relevant to the broader study of how prompts are structured and optimized for LLMs. Therefore, the relevance is high but not maximal due to the lack of specificity regarding 'hard prefix prompts'." -"compress, then prompt: improving accuracy-efficiency trade-off of llm inference with transferable prompt",gpt-4-1106-preview,9,"The study highly relates to prompt engineering since it focuses on improving the performance of compressed Large Language Models (LLMs) by means of prompt engineering (i.e., the use of 'hard prompts'). The research suggests a method for enhancing prompt efficacy via a 'soft prompt learning method,' which is specifically tailored to work with compressed models. Although the primary focus of the paper is on model compression and its impact on efficiency and accuracy, the core of the study involves refining the prompt engineering process to ensure high-quality performance from these compressed models. The fact that the study explores the transferability of learned prompts to different tasks and models also demonstrates depth in research pertaining to prompt design and optimization, which is a fundamental aspect of prompt engineering." 
-prompt sapper: llm-empowered software engineering infrastructure for ai-native services,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering study as it directly discusses the role of prompts in AI-native services and how natural language prompts can be used as executable code, which aligns with the subject of hard prefix prompts in the context of natural language processing and command execution. Although the paper does not specifically mention 'hard prefix prompts', the focus on prompt-based interaction systems and infrastructure indicates a clear relationship with the broader topic of prompt engineering, warranting a high relevance rating. The deduction in the score accounts for the lack of explicit mention of 'hard prefix prompts', which may be a key term if the research sought to target that specific sub-domain within prompt engineering." -prompt sapper: a llm-empowered production tool for building ai chains,gpt-4-1106-preview,8,"The paper introduces 'Prompt Sapper', a tool designed to help build AI services using foundation models like GPT-4. This is highly relevant to prompt engineering because the tool is meant to streamline the process of creating prompt-based AI services. It focuses on incorporating software engineering principles into AI chain engineering, which includes prompt engineering as a subset. The tool aims to make this process more accessible, efficient, and correct, which directly impacts the field of prompt engineering. The rating is not a full 10 because the abstract does not detail the specifics of 'hard prefix prompts' or focus solely on prompt engineering; it discusses AI chain engineering more broadly, of which prompt engineering is a part." -artificial intelligence for health message generation: an empirical study using a large language model (llm) and prompt engineering,gpt-4-1106-preview,9,"The given abstract directly pertains to the use of prompt engineering within the context of generating health awareness messages using a large language model. The study focuses on the method of using AI-generated prompts to compare message quality, clarity, and semantic content with human-generated content. The high relevance comes from the practical application of prompt engineering in creating AI-generated messages and the systematic evaluation of their effectiveness against a human-generated benchmark. It is slightly less than a perfect score because the study is specific to health messages and does not cover all aspects of prompt engineering, such as 'hard prefix prompts' which the original prompt suggests may be of particular interest." -"exploring the relationship between llm hallucinations and prompt linguistic nuances: readability, formality, and concreteness",gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it investigates how various linguistic aspects of prompts affect the behavior of Large Language Models (LLMs), particularly in the context of hallucination, which is a significant issue related to the performance and reliability of LLMs. Understanding the relationship between prompt nuances and LLM output is central to prompt engineering. The only reason for not giving a full score is that the abstract specifies an exploratory investigation, indicating that the findings might not be comprehensive or definitive, which would be necessary for a perfect relevance rating." 
-promptcrafter: crafting text-to-image prompt through mixed-initiative dialogue with llm,gpt-4-1106-preview,8,"The presented paper focuses on a mixed-initiative system called PromptCrafter that aids in the crafting of text-to-image prompts using a step-by-step process facilitated by a Large Language Model. While it does not explicitly address 'hard prefix prompts', it is substantially related to the field of prompt engineering. It deals with the refinement of prompts and user interaction with language models to produce specific outputs, which are central issues in prompt engineering studies. Therefore, it is highly relevant in terms of offering practical solutions and methodologies for improving prompt design, even if it does not directly tackle the concept of hard prefixes." -two-stage llm fine-tuning with less specialization and more generalization,gpt-4-1106-preview,9,"The abstract describes a method (ProMoT) directly addressing the issues related to prompt engineering by proposing a two-stage fine-tuning framework that reduces format specialization and improves generalization, which is highly relevant to engineering more adaptable and effective prompts for large language models (LLMs). The fact that it seeks to enhance in-context learning through prompt tuning suggests a close connection to the field of prompt engineering, making the study's relevance to prompt engineering very high. The only reason it does not get a 10 is because it doesn't focus exclusively on 'hard prefix prompts' as the original query specifies, but rather on prompt tuning in a broader sense." -llm-adapters: an adapter family for parameter-efficient fine-tuning of large language models,gpt-4-1106-preview,7,"The paper's focus on parameter-efficient fine-tuning (PEFT) of large language models (LLMs) through the use of adapters is relevant to prompt engineering, as it deals with the modification and adaptation of LLMs for specific tasks, which is intrinsic to prompt engineering. However, the study does not directly address 'hard prefix prompts,' which is the specific topic of interest. Although the techniques described could potentially be applied to improve the efficiency of prompt-based learning methods, the abstract does not explicitly mention the application to prompt engineering. Nevertheless, the relevance lies in the broader context of adapting and improving the performance of LLMs in different tasks, which is tangential to the field of prompt engineering." -llm-eval: unified multi-dimensional automatic evaluation for open-domain conversations with large language models,gpt-4-1106-preview,7,"The abstract describes 'LLM-eval,' an evaluation method for open-domain conversations with large language models, focusing on using single prompt-based approaches for comprehensive assessment. While it does not explicitly address 'hard prefix prompts' or prompt engineering studies, the methodology is relevant for understanding how prompt-based systems can be evaluated. Since prompt engineering is a key element in defining how language models interpret and respond to prompts, this study could indirectly contribute to the field by providing a framework for evaluating the effectiveness of different prompt strategies, albeit without directly targeting hard prefix prompts." 
-a first look at llm-powered generative news recommendation,gpt-4-1106-preview,8,"The abstract describes using a language model for personalized news recommendation, which implies that the system employs some form of prompt engineering to generate or summarize news according to a user's interests. The concept of moving from model design to prompt design suggests that prompt engineering is a significant component of the research. However, the study focuses more on the application of LLMs for recommendation systems rather than on the study of hard prefix prompts in isolation or comprehensive systematic reviews on prompt engineering. Therefore, the relevance is high but not entirely focused on prompt engineering study as it relates to the broader application within recommendation systems." -llm-assisted generation of hardware assertions,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it involves utilizing natural language prompts to generate code assertions, a clear instance of applying language model prompting to a specialized domain. The use of prompts in this case directly pertains to the concept of 'prompt engineering,' which is about optimizing inputs for language models to achieve desired outputs. However, since the focus is specifically on code generation for security assertions within hardware and not on hard prefix prompts in a broader context, it might not cover all aspects of prompt engineering study. This results in a slightly lower rating." -how novices use llm-based code generators to solve cs1 coding tasks in a self-paced learning environment,gpt-4-1106-preview,4,"While the presented study does not directly focus on 'hard prefix prompts' or prompt engineering, it does investigate the use of prompts by novice programmers in an educational setting when interacting with a Large Language Model (LLM)-based code generator like Codex. Since prompt crafting is a substantial part of this interaction, and the properties of these prompts are analyzed, the study has some relevance to prompt engineering. However, its primary focus seems to be on the educational implications and usage patterns of the LLM rather than developing or understanding the specific prompt engineering strategies to improve interaction with LLMs." -certifying llm safety against adversarial prompting,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering as it directly addresses the challenge of adversarial prompting and the need for developing techniques to ensure safe outputs from large language models (LLMs). Since prompt engineering involves crafting inputs that can influence or guide a model's behavior, the presented 'erase-and-check' framework is a significant contribution to understanding and mitigating the risks posed by adversarial prompts. The study’s focus on certifying the safety of prompts against adversarial attacks is essential for advancing the field of prompt engineering while ensuring responsible use of LLMs. It only slightly misses a perfect score because it does not directly cover 'hard prefix prompts,' but it extensively pertains to the broader domain of prompt safety and adversarial resistance." -graph-toolformer: to empower llms with graph reasoning ability via prompt augmented by chatgpt,gpt-4-1106-preview,7,"The abstract discusses a method to enhance large language models (LLMs) by teaching them graph reasoning abilities through the use of prompts augmented by ChatGPT. 
This is related to prompt engineering since it involves developing ways to optimize prompts to extend the capabilities of LLMs into new domains, like graph reasoning. However, the core focus of the study is on integrating external API tools with LLMs rather than the actual crafting or systematic review of 'hard prefix prompts' specifically. Therefore, while relevant due to the utilization of prompts, it doesn't directly address a comprehensive review of prompt engineering methodologies or specifics of 'hard prefix prompts,' leading to a score that indicates moderate relevance rather than being fully on-topic." -velma: verbalization embodiment of llm agents for vision and language navigation in street view,gpt-4-1106-preview,7,"The abstract describes VELMA as an embodied LLM agent that uses verbalization for navigation, which implies a form of prompt engineering is used to translate visual information into text prompts for the LLM to make decisions. Although not specifically about 'hard prefix prompts,' it does involve constructing and using prompts in a multimodal context (vision and language). Therefore, it is relevant to the field of prompt engineering, but slightly indirectly as the main focus seems to be on navigation and embodiment rather than prompt engineering itself." -llm-empowered chatbots for psychiatrist and patient simulation: application and evaluation,gpt-4-1106-preview,8,"The abstract describes research that is highly relevant to prompt engineering as it specifically addresses the impact of prompt designs on chatbot behavior and user experience. While it doesn't directly mention 'hard prefix prompts,' the study of prompt designs in the context of chatbot performance is directly related to the field of prompt engineering. Therefore, the findings could contribute valuable insights into the subtleties of prompt crafting and optimization, particularly in mental health applications." -chain-of-thought prompting for responding to in-depth dialogue questions with llm,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it investigates an approach (chain-of-thought prompting) to enhance the interaction between users and large language models (LLMs) by focusing on personalizing responses based on user status (personality, emotion, psychology). While it does not directly address 'hard prefix prompts,' it contributes to the field of prompt engineering by exploring advanced prompting techniques aimed at improving the efficacy and personalization of LLM responses. The relevance would be higher if the study specifically addressed hard prefix prompts, but it is still significant due to its focus on improving the quality of prompts and user-model interactions." -trapping llm hallucinations using tagged context prompts,gpt-4-1106-preview,8,"The study addresses the issue of 'hallucinations' in large language models (LLMs) and proposes a methodology that includes the use of context and embedded tags to mitigate this problem. Since prompt engineering involves crafting inputs to effectively interact with LLMs and obtain desired outputs, the technique described in the paper to minimize hallucinations is quite relevant to prompt engineering. It is likely to contribute to designing better prompts that can control or guide model behavior, ensuring more accurate responses. However, the study's specific focus is on combating hallucinations rather than on prompt engineering in its entirety, which explains why the rating is not a perfect 10." 
-mental-llm: leveraging large language models for mental health prediction via online text data,gpt-4-1106-preview,7,"The study involves the evaluation of large language models (LLMs) with a focus on prompt designs such as zero-shot and few-shot prompting, which are directly related to the field of prompt engineering. Moreover, it discusses instruction fine-tuning, which is a more advanced form of prompt engineering that tailors the model to specific tasks. Although the main application discussed in the study is mental health prediction, which is not directly related to 'hard prefix prompts,' the methodology and findings could have implications for prompt engineering in general, making it moderately relevant to the field." -free-bloom: zero-shot text-to-video generator with llm director and ldm animator,gpt-4-1106-preview,8,"The abstract describes using large language models (LLMs) to generate a 'semantic-coherent prompt sequence', which is directly relevant to prompt engineering, particularly in the niche area of text-to-video generation. While the study focuses more on the application of these prompts to generate video rather than the systematic review of hard prefix prompts themselves, the creation and optimization of prompts remains a central component of the research, justifying a high relevance rating." -"reason for future, act for now: a principled framework for autonomous llm agents with provable sample efficiency",gpt-4-1106-preview,5,"The abstract provided discusses a framework for improving the way large language models act and reason over time, with a focus on learning and planning within Bayesian adaptive Markov decision processes. Although this is related to how prompts might be engineered to elicit particular responses from LLMs, it doesn't specifically mention 'hard prefix prompts' or address prompt engineering techniques in a systematic review context. Therefore, while aspects of this framework could potentially inform prompt engineering strategies to some extent (hence not a 0 rating), the relevance to the study of prompt engineering, particularly that of 'hard prefix prompts,' is only tangential. A middle score reflects this partial relevance." -benchmarking a foundation llm on its ability to re-label structure names in accordance with the aapm tg-263 report,gpt-4-1106-preview,7,"The study described in the title and abstract is relevant to prompt engineering to a significant extent because it involves using a large language model (GPT-4) with specifically tuned prompts to perform a complex, domain-specific task. However, while the focus of the study is on the application of an LLM to re-label structure names in medical imaging in accordance with a specific standard, it also implicitly involves designing and refining prompts to obtain this accurate outcome. This prompt engineering aspect is an essential part of the study as it directly affects the performance of the LLM, but the study is not explicitly about prompt engineering methodologies or their systematic review. Therefore, the rating is not a perfect 10, but still notably high due to the implicit involvement of prompt fine-tuning and the potential insights it might offer for prompt engineering best practices."
-promptly: using prompt problems to teach learners how to effectively utilize ai code generators,gpt-4-1106-preview,9,"The paper directly addresses prompt engineering by introducing the concept of 'Prompt Problems', which are designed to teach students how to effectively craft prompts for large language models that generate code. This is highly relevant to the study of prompt engineering as it focuses on improving the interaction between humans and AI through the construction of effective prompts. Although the paper doesn't specifically mention 'hard prefix prompts', it addresses the broader concept of prompts in the context of educational settings, which is why the rating is not a perfect 10." -promptbreeder: self-referential self-improvement via prompt evolution,gpt-4-1106-preview,9,"The abstract describes a system that revolves around the core idea of evolving and improving prompts, which is directly relevant to the study of prompt engineering. Since the system, Promptbreeder, is designed to enhance the ability of Large Language Models through prompt adaptation and is being compared to other prompt strategies, it holds significant relevance to the field. The only reason it does not receive a full score is that it may not relate exclusively to 'hard prefix prompts' as specified in the initial inquiry but addresses a broader scope of prompt engineering." -backdooring instruction-tuned large language models with virtual prompt injection,gpt-4-1106-preview,9,"The paper discusses the concept of Virtual Prompt Injection (VPI), which directly relates to manipulating the behavior of Large Language Models (LLMs) through the use of hidden or embedded prompts. This is a specific, albeit adversarial, example of prompt engineering. It demonstrates how the model's response can be engineered to follow certain instructions without visible modification to the prompt input. Since prompt engineering is about designing prompts to achieve desired outputs from a model, this study is highly relevant as it explores the consequences and defensive strategies related to prompt manipulation. Although the focus is on a security vulnerability, understanding such backdoor methods contributes to a broader comprehension of how prompt mechanisms work in LLMs and the importance of data integrity in instruction tuning." -on the role of attention in prompt-tuning,gpt-4-1106-preview,9,"The provided abstract discusses the use of prompt-tuning within the context of attention mechanisms in language models, which is directly relevant to studies on prompt engineering. It provides insights into how prompts can be used to direct attention to relevant tokens within a given input, which is a crucial aspect of how prompts function in large language models. The abstract also mentions contextual data models and the expressiveness of prompt-tuning, indicating a deep exploration into prompt mechanics. The only reason it doesn't receive a perfect score is the absence of specific mention of 'hard prefix prompts', but otherwise, it has a high relevance to the field of prompt engineering." -a prompt log analysis of text-to-image generation systems,gpt-4-1106-preview,7,"The study is relevant to prompt engineering to a large extent, as it delves into the analysis of prompt logs from text-to-image generation systems, which is a direct application of understanding user interaction with prompts and could inform better prompt design. 
However, it focuses more on the analysis of user prompts and behavior rather than the construction of hard prefix prompts, which would be more closely aligned with 'prompt engineering' as it pertains to the design, syntax, and semantics of prompts themselves." -privacy-preserving prompt tuning for large language model services,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering as it addresses prompt tuning, which is a method of customizing LLMs for specific tasks or applications. The concept of privacy-preserving mechanisms within the realm of prompt tuning is pertinent to prompt engineering study because it expands the scope of how prompts can be engineered, taking into account the crucial aspect of user privacy. The fact that this paper also introduces a novel approach to improve LLMs' learning with privatized data indicates a significant contribution to the field of prompt engineering. The reason the relevance rating is not a full 10 is because it focuses more on the privacy aspect than on the general techniques or effectiveness of prompt engineering." -deep language networks: joint prompt training of stacked llms using variational inference,gpt-4-1106-preview,9,"The abstract discusses the optimization of natural language prompts in stacked large language models (LLMs), which is directly relevant to the field of prompt engineering. The focus on learning and training prompts within a deep learning architecture (DLN) highlights crucial aspects of prompt design and efficacy. This paper would be quite significant for someone studying prompt engineering, as it provides insight into how prompts can be optimized to improve the performance of language models." -are chatbots ready for privacy-sensitive applications? an investigation into input regurgitation and prompt-induced sanitization,gpt-4-1106-preview,7,"The study investigates how LLM-powered chatbots handle sensitive information when provided in prompts and how instructions could influence the chatbot's ability to sanitize outputs to comply with privacy regulations. While this does not specifically address 'hard prefix prompts,' it is closely related to prompt engineering because it examines how specific instructions in prompts can affect the information handling of chatbots. The research could inform the development and refinement of prompts that elicit desired privacy-compliant behaviors from the models, which is a critical aspect of prompt engineering in privacy-sensitive applications." -extracting accurate materials data from research papers with conversational language models and prompt engineering - example of chatgpt,gpt-4-1106-preview,9,"The discussed paper is highly relevant to the field of prompt engineering study because it proposes a new method, 'ChatExtract', which utilizes engineered prompts in a conversational language model to automate data extraction from research papers. These prompts are specifically designed to identify pertinent data and ensure its accuracy, addressing a key challenge in prompt engineering. Although the paper does not specifically mention 'hard prefix prompts', it is an application of prompt engineering for a specific and practical task, thus meriting a high relevance score. Prompt engineering is central to the performance of the ChatExtract method, as it hinges on the quality of the prompts to retrieve and validate information from the language model." 
-discrete prompt optimization via constrained generation for zero-shot re-ranker,gpt-4-1106-preview,9,"The abstract describes a study focused specifically on the optimization of prompts for a zero-shot re-ranker, which is directly connected to prompt engineering. The proposed discrete prompt optimization method, Co-Prompt, is highly relevant to the field since it addresses the creation and refinement of prompts to improve the performance of pre-trained language models on specific tasks without additional parameter updates. This approach is an important aspect of prompt engineering, hence the high relevance rating. The study appears to contribute valuable insights into prompt effectiveness and optimization, which are key areas of interest in prompt engineering. The reason for not giving a perfect score is that it does not explicitly mention 'hard prefix prompts' as referred to in the original query, but its connection to prompt optimization is clear and significant." -sweeping heterogeneity with smart mops: mixture of prompts for llm task adaptation,gpt-4-1106-preview,9,"The abstract presents research on using a 'Mixture of Prompts' with 'smart gating functionality' to enhance the performance of Large Language Models (LLMs) on heterogeneous tasks. This is highly relevant to prompt engineering as it directly addresses the optimization of prompts for task adaptation in LLMs. It investigates a method of improving prompt tuning for diverse tasks, which is a core issue in prompt engineering. The paper aims to reduce training interference and improve efficiency, areas of significant interest in the prompt engineering field. The reasoning behind not giving a full 10 rating is that the abstract does not explicitly mention 'hard prefix prompts,' the specific focus of the prompt engineering study indicated in the query." -cataloging prompt patterns to enhance the discipline of prompt engineering,gpt-4-1106-preview,9,"The paper is highly relevant to the field of prompt engineering as it directly addresses the conceptualization and codification of prompt patterns to enhance interactions with Large Language Models (LLMs) such as ChatGPT. It underscores the significance of establishing more systematic and repeatable approaches within prompt engineering to improve the performance and evaluation of LLMs across various domains. The only reason for not giving a full 10 is because the abstract does not explicitly mention 'hard prefix prompts', which is the specialized topic of the study in question (assuming that 'hard prefix prompts' refer to a specific subset or technique within prompt engineering)." -promptcare: prompt copyright protection by watermark injection and verification,gpt-4-1106-preview,7,"While the article 'promptcare: prompt copyright protection by watermark injection and verification' addresses prompts in the context of Large Language Models and is relevant to the field of prompt engineering, it focuses more specifically on the protection of intellectual property associated with prompts rather than on the techniques for engineering prompts to improve model performance (which would be directly related to 'hard prefix prompts'). However, the study does contribute to the broader ecosystem of prompt engineering by ensuring the safe and authorized use of prompts, which can be considered an aspect of the prompt engineering life cycle. Therefore, it receives a mid-high relevance rating." 
-batch calibration: rethinking calibration for in-context learning and prompt engineering,gpt-4-1106-preview,9,"The abstract describes a comprehensive analysis of calibration methods to reduce prompt brittleness and biases in large language models, which is directly related to prompt engineering. The study seems to offer a novel contribution with the Batch Calibration method, aiming to improve the effectiveness of prompts and in-context learning. Although it does not explicitly mention 'hard prefix prompts', the content is highly relevant to the broader field of prompt engineering, hence the high relevance rating." -survival of the most influential prompts: efficient black-box prompt search via clustering and pruning,gpt-4-1106-preview,9,"The paper directly addresses the process of optimizing prompt-based learning for large language models by introducing an efficient black-box prompt search method. The inclusion of clustering and pruning to focus on influential prompt tokens is highly relevant for the field of prompt engineering, as it seeks to refine the approach by which prompts are selected and used to drive LLM predictions. The presented Clustering and Pruning for Efficient Black-box Prompt Search (ClaPS) technique is pertinent to the challenge of search space design in prompt engineering. The study's focus on enhancing the efficiency of the prompt search process validates its high relevance to the topic, although it may not cover the full breadth of 'hard prefix prompts' and could be missing some other aspects of prompt engineering not detailed in the abstract." -hide and seek (has): a lightweight framework for prompt privacy protection,gpt-4-1106-preview,5,"The provided abstract focuses on privacy protection in the context of using large language models by introducing the HaS (Hide and Seek) framework, which is relevant to the broader field of responsible AI usage and prompt engineering to a certain degree. It discusses techniques for anonymization and de-anonymization, which could indirectly affect the way prompts are engineered to ensure privacy. However, the main concern of the study is privacy protection rather than methodologies for optimizing or understanding the construction of prompts (hard prefix prompts) in prompt engineering studies. As a result, it holds moderate relevance as it touches upon the privacy aspect of user inputs (prompts) but does not directly deal with the study or advancement of prompt-engineering techniques." -prompt engineering or fine tuning: an empirical assessment of large language models in automated software engineering tasks,gpt-4-1106-preview,9,"The study directly explores multiple prompt engineering techniques applied to GPT-4 for ASE tasks. The empirical assessment compares the efficacy of prompt engineering against fine-tuned models, providing valuable insights into the current capabilities and limitations of prompt engineering. The high relevance score reflects the detailed analysis of specific prompting strategies, such as task-specific prompting and conversational prompts, which contributes significantly to the body of knowledge on prompt engineering." -poisonprompt: backdoor attack on prompt-based large language models,gpt-4-1106-preview,9,"The study titled 'poisonprompt: backdoor attack on prompt-based large language models' is highly relevant to prompt engineering as it directly deals with the security vulnerabilities associated with the use of prompts in Large Language Models, which can be either hard (fixed) or soft (more flexible). 
Although the study's primary focus is on the backdoor attack mechanism (POISONPROMPT), it inherently contributes to the understanding and advancement of prompt engineering by identifying potential threats and exploring the robustness of different prompting methods. This information is crucial for researchers and practitioners working on prompt engineering to create more secure and reliable systems. The rating is not a full 10, as the paper focuses more on the security aspect rather than core prompt engineering techniques or their optimization for better performance on tasks." -selfzcot: a self-prompt zero-shot cot from semantic-level to code-level for a better utilization of llms,gpt-4-1106-preview,9,"The relevance of the study to prompt engineering is high because it focuses on the utilization of a self-prompt mechanism (SelfzCoT) that enhances zero-shot learning capabilities in large language models (LLMs). It directly pertains to the field of prompt engineering as it deals with improving the performance of LLMs on arithmetic reasoning tasks through the use of specialized prompts, which are an essential component of prompt engineering. The systematic improvement across different datasets indicates that the researchers are effectively engineering prompts to better utilize the models' existing knowledge without additional training, which is a core aspect of prompt engineering studies." -prompttts 2: describing and generating voices with text prompt,gpt-4-1106-preview,7,"The abstract indicates that the study is concerned with the use of text prompts in the context of text-to-speech (TTS) and addresses issues surrounding voice variability and the generation of text prompts, which are relevant to prompt engineering. Prompt engineering is often associated with designing and refining inputs to affect the outputs of AI models, which is closely related to what PromptTTS 2 aims to achieve in the TTS domain. However, the study's relevance to prompt engineering may not be a perfect fit as it is specialized towards TTS systems and does not broadly tackle hard prefixed prompts in various AI contexts, which a 'comprehensive systematic review on hard prefix prompts' would imply." -conversation regression testing: a design technique for prototyping generalizable prompt strategies for pre-trained language models,gpt-4-1106-preview,9,"The described study directly pertains to prompt engineering as it focuses on improving pre-trained language model outputs using prompt strategies and assessing the effects of these strategies through Conversation Regression Testing. Although it doesn't specifically mention 'hard prefix prompts,' the broad field of prompt engineering, including the design and systematic review of prompt effects, is central to the study. Thus, the relevance to prompt engineering is high." -prompts matter: insights and strategies for prompt engineering in automated software traceability,gpt-4-1106-preview,9,"The title and abstract indicate that the paper focuses on prompt engineering within the context of using Large Language Models for automated software traceability, which is a specific application of prompt engineering. The paper discusses the construction of effective prompts and proposes strategies for utilizing LLMs. This is highly relevant to the study of prompt engineering, particularly in a specialized domain. However, it is not directly related to 'hard prefix prompts' as the prompt specifies, suggesting there is room for more targeted relevance, hence not a perfect score." 
-graph-toolformer: to empower llms with graph reasoning ability via prompt dataset augmented by chatgpt,gpt-4-1106-preview,8,"The paper is highly relevant to the field of prompt engineering as it specifically looks into the development of a framework that leverages prompts augmented by ChatGPT to improve the performance of large language models when tasked with graph reasoning. While it does not focus on the 'hard prefix prompts' mentioned in the initial prompt, it explores the prompt-based teaching approach and the construction of prompt datasets for specialized applications, which is a component of prompt engineering. The systematic review aspect isn't directly addressed, but the paper proposes a practical application of prompts in the context of LLMs, indicating significant relevance to the study of prompt engineering." -model tuning or prompt tuning? a study of large language models for clinical concept and relation extraction,gpt-4-1106-preview,7,"The study explores different training strategies for large language models (LLMs), including hard prompts and soft prompts, focusing on clinical concept and relation extraction. It directly investigates prompt engineering by comparing the effectiveness of hard and soft prompts within different LLM training conditions. The relevance to prompt engineering study is high, although the primary focus is on soft prompts in a specific domain (clinical), rather than solely on hard prefix prompts as suggested by the original query. Consequently, the rating reflects substantial but not exclusive relevance." -evoke: evoking critical thinking abilities in llms via reviewer-author prompt editing,gpt-4-1106-preview,9,"The provided abstract directly pertains to prompt engineering, as it discusses the development of a framework called Evoke that refines prompts for Large Language Models (LLMs) to enhance their performance. The inclusion of an automatic feedback loop, which considers 'hard' samples implying a form of 'hard prefix prompts', suggests it is highly relevant to the study of refining and improving prompts to elicit better performance from AI models. The main reason the rating is not a perfect 10 is that while Evoke's approach includes working with challenging prompts, it may not strictly constitute a 'systematic review' of hard prefix prompts but appears to be an application or development of that concept." -decoding prompt syntax: analysing its impact on knowledge retrieval in large language models,gpt-4-1106-preview,9,"The provided abstract focuses on the evaluation of prompt syntax and its impact on knowledge retrieval in Large Language Models (LLMs), which is a significant aspect of prompt engineering. The systematic approach to paraphrase prompts and analyze their structures provides valuable insights into how different types of prompts affect the performance of LLMs. This research can inform the design of more effective prompts (including hard prefix prompts), making it highly relevant to the field of study. The reason for not giving a full score of 10 is the absence of a specific mention of 'hard prefix prompts' in the context of the abstract, but it is still generally relevant to prompt engineering." 
-progprompt: generating situated robot task plans using large language models,gpt-4-1106-preview,9,"The paper is highly relevant to prompt engineering as it deals with designing structured prompts for large language models to generate robot task plans, demonstrating an understanding of the importance of prompt design in achieving functional outputs from LLMs. Its focus on programmatic prompts, ablation experiments for prompt structures, and ultimately demonstrating success in a practical domain like robotics shows a clear overlap with the field of prompt engineering. The rating is not a full 10 because the study focuses specifically on robotics and situated task plans, which is just one application of prompt engineering, rather than a broad investigation of hard prefix prompts across various domains."
-universal and transferable adversarial attacks on aligned language models,gpt-4-1106-preview,7,"The paper's focus on developing adversarial attacks against aligned language models is tangentially related to prompt engineering, as it concerns the specific construction of inputs (suffixes) designed to elicit certain responses from language models. While the study does not directly address 'hard prefix prompts', it does deal with the broader theme of how prompts can be engineered (in this case, to be adversarial) to manipulate model output. Therefore, the relevance to prompt engineering is significant, but it is not a direct match to the concept of 'hard prefix prompts' in a systematic review context."
-principle-driven self-alignment of language models from scratch with minimal human supervision,gpt-4-1106-preview,8,"The abstract describes a novel approach to aligning language models with human intentions using minimal human supervision and includes stages relevant to prompt engineering, such as generating synthetic prompts to augment diversity. Although the study seems more focused on self-alignment principles and minimization of supervision rather than hard prefix prompts specifically, the method includes aspects like in-context learning from generated prompts, which is a key part of prompt engineering. Therefore, the relevance to prompt engineering is high but not entirely focused on the 'hard prefix prompts' aspect mentioned in the original prompt."
-language models don't always say what they think: unfaithful explanations in chain-of-thought prompting,gpt-4-1106-preview,7,"The study investigates the reliability of chain-of-thought (CoT) prompting in the context of Large Language Models (LLMs). While this is highly relevant to the field of prompt engineering as it relates to the integrity and trustworthiness of prompts (especially CoT prompts), it does not specifically address 'hard prefix prompts'. Since the study has implications for how prompts can be engineered to elicit accurate explanations from models and discusses the manipulation of model inputs, which is a core concern in prompt engineering, it has relevance. However, the study's focus on the fidelity of CoT rather than on systematic reviews of 'hard prefix prompts' means it is only partially aligned with the prompt engineering study. Therefore, it receives a moderate to high relevance score."
-ask me anything: a simple strategy for prompting language models,gpt-4-1106-preview,9,"The content of the abstract indicates a high relevance to prompt engineering as it discusses the development of prompt strategies to improve the performance of large language models (LLMs) and attempts to reduce the brittleness of prompting by aggregating multiple prompts. The concept of 'ASK ME ANYTHING' (AMA) is directly related to engineering effective prompts and influences how prompts are generated and utilized. The study also evaluates the performance across different models and benchmarks, which is essential for understanding the implications of prompt design strategies. While it may not explicitly focus on 'hard prefix prompts' as mentioned in the original request, the general exploration of prompt formats and strategies makes this abstract highly relevant to the field of prompt engineering."
-progressive-hint prompting improves reasoning in large language models,gpt-4-1106-preview,9,"The provided abstract details a study on a novel prompting method, Progressive-Hint Prompting (PHP), designed to improve the reasoning capabilities of Large Language Models (LLMs) by leveraging previously generated responses. This relates directly to the field of 'prompt engineering' as it explores the structure and strategy behind prompts to enhance the performance of LLMs. The fact that it introduces a new methodology and reports on experimentation and results aligns closely with advancements and research in prompt engineering, justifying the high relevance rating. The only reason it is not a perfect 10 is that the abstract does not explicitly mention the 'hard prefix prompts' specified in the original query; otherwise, it charts the advancement in the field of prompt engineering, which includes improvements over conventional methods like CoT and self-consistency."
-frugalgpt: how to use large language models while reducing cost and improving performance,gpt-4-1106-preview,8,"The paper titled 'frugalgpt: how to use large language models while reducing cost and improving performance' is quite relevant to prompt engineering. One of the strategies mentioned for reducing inference cost is 'prompt adaptation,' which directly pertains to the field of prompt engineering. This strategy likely involves creating and refining prompts to produce more accurate or useful outputs from LLMs, thereby also reducing repetitive or unnecessary queries that could increase costs. Although the study's primary focus is on cost-reduction and performance improvement rather than the specifics of crafting hard-prefix prompts, the concept of prompt adaptation is a core part of prompt engineering. Therefore, it holds substantial relevance to someone interested in the efficient and effective use of prompts in LLMs."
-conversational automated program repair,gpt-4-1106-preview,7,"While the abstract primarily outlines a study on conversational automated program repair, which is a different domain from prompt engineering, it does mention the use of constructed input/prompt and iteratively building the input to a large language model. The relevance to prompt engineering lies in the iterative process, engaging with the LLM in a conversational way, and adjusting the prompts based on feedback to avoid generating previously incorrect patches. This indicates that the study touches upon aspects of prompt engineering by refining the prompts to improve output, which is a key technique in prompt engineering. However, it does not directly focus on 'hard prefix prompts' or a comprehensive study of them. Therefore, the relevance is moderate, warranting a rating of 7 out of 10."
-annollm: making large language models to be better crowdsourced annotators,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering as it explores a method to enhance the effectiveness of large language models for the purpose of data annotation, which is a significant aspect of prompt engineering. This paper suggests an innovative way of creating prompts by including explanations along with annotated examples (the 'explain-then-annotate' methodology). This strategy could be beneficial in refining the way prompts are designed to solicit more accurate responses from language models, thus contributing valuable insights to the field of prompt engineering."
-keep the conversation going: fixing 162 out of 337 bugs for $0.42 each using chatgpt,gpt-4-1106-preview,8,"The provided abstract describes ChatRepair, a novel approach that leverages a conversational Large Language Model (LLM), specifically ChatGPT, for Automated Program Repair (APR). It uses a unique prompt engineering strategy by incorporating a feedback loop into the generation of input prompts. The methodology involves enhancing the prompts with relevant test failure information and learning from past patch attempts to refine the generation process. This study is highly relevant to prompt engineering as it applies advanced techniques to craft prompts that effectively utilize LLM capabilities to diagnose and fix software bugs. The relevance to prompt engineering is not absolute, as the main focus seems to be on the application of these prompts for APR rather than on the study or analysis of the prompt engineering itself, but it is still highly pertinent due to the innovative use of prompts for iterative and conversational task completion."
-marked personas: using natural language prompts to measure stereotypes in language models,gpt-4-1106-preview,7,"The study focuses on using language prompts to measure and understand biases in language models, which is closely related to the field of prompt engineering. While it does not deal directly with 'hard prefix prompts,' it uses a prompt-based method for a specific and important application within the larger scope of prompt engineering—detecting and analyzing stereotypes. Thus, it contributes to the understanding of how prompts can elicit certain types of responses from language models, which is a relevant aspect of prompt engineering studies. The rating is not a perfect 10 because the study is not about prompt engineering techniques or optimizations but rather an application of prompts to understand model biases."
-supporting qualitative analysis with large language models: combining codebook with gpt-3 for deductive coding,gpt-4-1106-preview,8,"The study mentioned in the abstract directly explores the use of large language models (LLMs) like GPT-3 for coding tasks in qualitative analysis without the need for task-specific model training or fine-tuning. It specifically illustrates an application of LLMs using prompt learning, which falls under the broader category of prompt engineering. While it is not centered on 'hard prefix prompts,' it does delve into the realm of using prompts effectively to interact with language models. Therefore, the relevance to prompt engineering is high, but not at the maximum because it does not focus exclusively on 'hard prefix prompts' as per the initial prompt."
-assessment of chemistry knowledge in large language models that generate code,gpt-4-1106-preview,8,"The study specifically mentions the impact of prompt engineering strategies on the performance of Large Language Models (LLMs) in executing chemistry-related coding tasks. The fact that adding copyright notices at the tops of files leads to a 30-percentage point increase in accuracy directly relates to the field of prompt engineering. The study examines and validates the effectiveness of prompt engineering in enhancing the capabilities of LLMs within a specific domain (chemistry). However, it does not focus exclusively on 'hard prefix prompts' but on prompt engineering in a broader sense, hence the rating does not reach the maximum."
-in-context impersonation reveals large language models' strengths and biases,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it explores the use of hard-coded persona prompts to elicit specific behaviors and capabilities from large language models (LLMs). By analyzing the LLMs' performance across various tasks in the context of the prompted persona, the study contributes insights into how the design of prompts can influence the output of LLMs, an essential aspect of prompt engineering. It directly addresses the impact of prompt construction on the quality and characteristics of the model's responses. However, it doesn't explicitly cover 'hard prefix prompts' in the more general sense, as it's focussed on role-impersonation, which is a subset of prompt engineering."
-knn prompting: beyond-context learning with calibration-free nearest neighbor inference,gpt-4-1106-preview,8,"The presented abstract discusses advancements in 'kNN Prompting' which are relevant to the broader realm of prompt engineering in that it explores alternative ways to utilize language model prompts for task completion. kNN Prompting can be seen as an extension or improvement within the field of prompt engineering, particularly since it addresses limitations of typical in-context learning and provides a way to scale with additional training data without a context length restriction. This is highly relevant for studies looking to overcome the current constraints of hard prefix prompts in LLMs. However, the abstract does not address hard prefix prompts specifically, thereby making the relevance less than perfect for a systematic review focused solely on hard prefix prompt engineering."
-"on second thought, let’s not think step by step! bias and toxicity in zero-shot reasoning",gpt-4-1106-preview,7,"The given abstract discusses the implications of using zero-shot Chain of Thought reasoning in large language models, which is relevant to prompt engineering studies in that it examines the effect of a specific prompting technique in the context of AI behavior. However, the focus on biases and toxicity rather than hard prefix prompts specifically somewhat limits its direct relevance to a systematic review on hard prefix prompts in prompt engineering."
-evaluation of chatgpt for nlp-based mental health applications,gpt-4-1106-preview,7,"The abstract discusses the use of a specific input prompt for classification with ChatGPT in mental health applications, which aligns with the concept of prompt engineering. Even though the study's application is in mental health, the methodology involves designing and utilizing prompts to elicit accurate responses from a language model, which is a core aspect of prompt engineering. Though the focus is not on 'hard prefix prompts,' the relevance lies in how prompts are integral to the performance of LLMs in NLP tasks, which could translate to insights in prompt engineering studies generally. Hence, a rating of 7 suggests that the study is quite relevant but not directly focused on hard prefix prompt engineering."
-exploiting asymmetry for synthetic training data generation: synthie and the case of information extraction,gpt-4-1106-preview,7,"The paper is partially relevant to the study of prompt engineering as it discusses the generation of synthetic data by prompting a large language model in reverse to create input text for a target output structure. Although it primarily focuses on synthetic data generation and its application to information extraction, the underlying methodology incorporates elements of prompt engineering by exploiting asymmetry in task difficulty to effectively communicate with the model. It doesn't directly address 'hard prefix prompts,' but the concept of utilizing prompts creatively to generate data is within the domain of prompt engineering research. Therefore, the relevance is significant, but not perfect, as the main focus is not directly on prompt engineering techniques or systematic reviews of said techniques."
-guiding large language models via directional stimulus prompting,gpt-4-1106-preview,9,"The provided abstract describes a research approach that is highly relevant to the field of prompt engineering, particularly in the way it deals with the customization and optimization of prompts to guide the behavior of large language models. The concept of using a tunable policy model to generate instance-specific 'directional stimulus prompts' falls directly under the umbrella of prompt engineering techniques. The high relevance score reflects the paper's focus on creating prompts that steer the output of LLMs, which is a central concern in prompt engineering studies. Although the term 'hard prefix prompts' is not explicitly mentioned, the methodology proposed is very much related to the underlying principles of prompting language models."
-motiongpt: finetuned llms are general-purpose motion generators,gpt-4-1106-preview,7,"The paper 'motiongpt: finetuned llms are general-purpose motion generators' seems to utilize prompt engineering by treating multimodal signals as special input tokens in large language models (LLMs) to generate human motion. However, it is not focused on 'hard prefix prompts' specifically, but rather on applying prompt engineering principles to multimodal inputs and finetuning LLMs for a specialized task. The concept of formulating signals into a unified prompt instruction is relevant to prompt engineering, but the study is more about motion generation rather than the systematic review of prompt engineering techniques."
-up5: unbiased foundation model for fairness-aware recommendation,gpt-4-1106-preview,7,"The given abstract is relevant to prompt engineering study to a good extent because it covers the use of 'personalized prefix prompts' as part of the Counterfactually-Fair-Prompting (CFP) techniques. These techniques contribute to the broader field of prompt engineering by exploring how prompts can be designed or modified to address fairness and bias in recommendations. While the focus is not solely on hard prefix prompts, it does pertain to the sub-domain of prompt engineering for ethical and fairness considerations, which is an important aspect of the field. However, since the primary focus is on fairness-aware recommendation rather than prompt engineering itself, the rating is not a full 10."
-fill in the blank: context-aware automated text input generation for mobile gui testing,gpt-4-1106-preview,7,"The paper introduces QTypist, a method which utilizes Large Language Models (LLMs) for the automated generation of semantic text inputs in mobile GUI testing. The relevance to prompt engineering study lies in the fact that the approach involves a 'prompt-based data construction and tuning method' which entails extracting prompts and answers for model tuning. This means the study directly involves designing and utilizing prompts to improve performance of AI models, which is closely related to prompt engineering. However, the study's primary focus is on the application of this technique for improving GUI testing rather than on the theory or principles behind prompt engineering itself. Hence, it's not entirely centered on prompt engineering but is highly related, warranting a 7 out of 10 for relevance."
-explaining patterns in data with language models via interpretable autoprompting,gpt-4-1106-preview,9,"The abstract describes a study where a method called interpretable autoprompting (iPrompt) is used to generate and evaluate prompts for large language models, which is directly related to prompt engineering. The systematic review of 'hard prefix prompts' would likely cover different techniques and contributions in the area of prompt engineering, and iPrompt appears to be a notable example of innovation in this field. Therefore, the relevance to prompt engineering is high, although the study might not directly focus on hard prefix prompts but more generally on explanatory prompts and their iterative improvement."
-instructzero: efficient instruction optimization for black-box large language models,gpt-4-1106-preview,9,"The abstract details a study that focuses on the optimization of instructional prompts for large language models (LLMs), particularly in the scenario where direct optimization of the instructions isn't possible, such as with black-box LLMs. The study introduces 'InstructZero', a method which indirectly optimizes instructions through the use of 'soft prompts' via Bayesian optimization, which is highly relevant to the field of prompt engineering. This systematic approach to improving efficiency and effectiveness of LLM instructions directly relates to studies of how prompts can be engineered to yield better performance from LLMs. The only reason the rating isn't a perfect 10 is that the abstract doesn't mention 'hard prefix prompts', the specific topic of interest, and focuses instead on 'soft prompts'."
-language models enable simple systems for generating structured views of heterogeneous data lakes,gpt-4-1106-preview,8,"The abstract describes a system that leverages large language models (LLMs) for the purpose of generating queryable tables from semi-structured documents. Prompt engineering is an implicit but significant aspect of this work; the LLMs are used to either directly extract values or to generate code based on the natural language prompts given to them. The success of EVAPORATE and EVAPORATE-CODE+ hinges on effective prompt engineering to guide the LLMs. While the study does not seem to be explicitly focused on 'hard prefix prompts,' the underlying principle of using prompts to control LLM output aligns with studies in prompt engineering. Hence, the relevance is rated highly but not maximally due to the lack of specificity regarding 'hard prefix prompts.'"
-recurrentgpt: interactive generation of (arbitrarily) long text,gpt-4-1106-preview,8,"The paper presents a novel approach for prompting language models to generate long text sequences by incorporating an LSTM-like recurrence mechanism into GPT, termed RecurrentGPT. Despite not addressing 'hard prefix prompts' directly, the study is relevant to prompt engineering as it explores strategies for enhancing the capabilities of language models through sophisticated prompting techniques by simulating external memory mechanisms. This has implications for how prompts can be engineered to handle more complex tasks like generating long-form content, which can be an aspect of prompt engineering studies. However, the focus on 'hard prefix prompts' is not explicit, thus the rating does not receive a full score."
-prd: peer rank and discussion improve large language model based evaluations,gpt-4-1106-preview,7,"The abstract discusses methodologies for improving the evaluation of large language model responses, including a peer rank algorithm and peer discussion system which both can be considered forms of prompt engineering, as they involve crafting prompts to facilitate a discussion between LLMs for better assessment. These processes are relevant to prompt engineering studies because they deal with how input prompts affect LLMs' output and evaluation. Although the study's primary focus is not on the hard prefix prompts but rather on the evaluation techniques for model outputs, it indirectly contributes to the field of prompt engineering by exploring methods to refine the interaction and ranking processes between different models which is a subset of prompting strategies."
-gpt-3-driven pedagogical agents for training children's curious question-asking skills,gpt-4-1106-preview,6,"The relevance to prompt engineering study is moderate. While the focus of this paper appears to be on using large language models to encourage children to ask more curious questions, and it involves a natural language prompting approach, the connection to 'hard prefix prompts' specifically is not directly mentioned. Prompt engineering is certainly a component of training these models for pedagogical purposes, but the abstract does not provide information about a systematic review of prompt engineering or hard prefix prompts explicitly. It suggests using prompting methods for practical applications rather than studying the prompts themselves."
-open sesame! universal black box jailbreaking of large language models,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it explores a method (using a genetic algorithm) for exploiting and manipulating large language models (LLMs) through prompts. While it specifically deals with adversarial attacks and alignment issues, understanding these vulnerabilities is crucial for developing robust prompt engineering techniques. It contributes to the field by highlighting the importance of security measures in prompt design to prevent unintended model behavior. However, the paper's primary focus is on the security and manipulation aspect rather than the constructive development or direct study of prompt engineering techniques, hence the rating is not a full 10."
-what language reveals about perception: distilling psychophysical knowledge from large language models,gpt-4-1106-preview,8,"Although the study does not specifically focus on 'prompt engineering' or 'hard prefix prompts,' it is highly relevant because it involves the use of prompt auto-completion features of a large language model (GPT-3) for psychophysical research. The method of eliciting similarity scores through prompt responses is a form of prompt engineering where the design of the prompts is critical for the success of the study. However, it did not directly address hard prefix prompts, which would be specific sequences of words or phrases designed to elicit particular behaviors from language models, leading to a rating slightly lower than the maximum."
-boosting language models reasoning with chain-of-knowledge prompting,gpt-4-1106-preview,9,"The abstract describes a novel approach in prompt engineering, specifically focusing on enhancing reasoning capabilities in Large Language Models through Chain-of-Knowledge prompting. It directly relates to the field of prompt engineering by proposing a methodology for improving the quality of generated outputs by incorporating structured knowledge evidence. This is highly relevant to prompt engineering studies, especially those concerning the improvement of model reasoning and reliability. The reason for not giving a full score of 10 is that it does not directly mention 'hard prefix prompts,' but the approach is undoubtedly within the scope of advanced prompt engineering techniques."
-lion: adversarial distillation of proprietary large language models,gpt-4-1106-preview,8,"The abstract describes a method of adversarial distillation where a 'teacher' large language model generates 'hard' instructions to enhance the training of a 'student' model. This falls under the umbrella of prompt engineering, as it includes the design of specific prompts to identify and produce instructional data that challenges the student model, thereby improving its performance. The innovative use of 'hard' instructions to drive the adversarial loop is particularly relevant to prompt engineering studies, as it directly relates to the crafting of prompts aimed at maximizing learning potential. However, it does not directly address a comprehensive systematic review on the subject, hence the deduction of two points."
-surreal vr pong: llm approach to game design,gpt-4-1106-preview,4,"The title and abstract provided discuss the application of generative models and computational creativity in the context of 3D game design and propose the integration of prompt-based creation into the gameplay itself. While it does not specifically focus on 'hard prefix prompts,' it does touch on prompt engineering by suggesting that prompts can be an element within game mechanics. This indicates some relevance to the study of prompt engineering, but it is not a direct or comprehensive examination of hard prefix prompts in systematic reviews or other studies."
-towards parameter-efficient automation of data wrangling tasks with prefix-tuning,gpt-4-1106-preview,9,"The title 'towards parameter-efficient automation of data wrangling tasks with prefix-tuning' is highly relevant to prompt engineering study because it directly addresses the development of a method ('prefix-tuning') to optimize the way prompts are used with Large Language Models to perform data wrangling tasks, which is an example of a practical application of prompt engineering. Furthermore, the abstract details the benefits of using prefix-tuning over full fine-tuning, which is central to the efficiency and effectiveness of using language models in various tasks. The mention of learning continuous prompts automatically and the assessment of prefix-tuning on specific tasks provide concrete evidence of the method's applicability and performance, underscoring its relevance to the field of prompt engineering."
-herding ai cats: lessons from designing a chatbot by prompting gpt-3,gpt-4-1106-preview,9,"The given abstract is highly relevant to prompt engineering as it specifically addresses challenges and insights gained from attempting to design a chatbot using prompts with GPT-3/4. It highlights difficulties in achieving a fully positive user experience through prompting alone, discusses the limitations of prompt control in practical applications, and considers the broader implications for design methods using Large Language Models. The focus on UX design and interaction with chatbots powered by LLMs correlates directly with studies on prompt engineering, as it deals with crafting prompts to elicit desired behavior from the model. Although it does not explicitly mention 'hard prefix prompts', the study of prompting effectiveness in this context is still pertinent to the broader field of prompt engineering."
-exploring large language model for graph data understanding in online job recommendations,gpt-4-1106-preview,7,"The paper's relevance to prompt engineering is significant but not direct. The notion of using a 'meta-path prompt constructor' suggests a novel approach to prompt development, focusing on behavior graphs rather than text generation or parsing. While this represents an innovative application of LLMs in the context of recommendation systems, it is not a 'comprehensive systematic review on hard prefix prompts' as outlined in the initial prompt for engineering study. Yet, the paper does delve into prompt optimization relevant to a specific application (job recommendations), which is a pertinent aspect of prompt engineering. Thus, the relevance is high due to the contribution to the field of prompt construction and bias mitigation in LLMs, but not a perfect match since it doesn't directly address hard prefix prompts or provide a systematic review of the topic."
-ten quick tips for harnessing the power of chatgpt/gpt-4 in computational biology,gpt-4-1106-preview,7,"The article provides practical advice for incorporating ChatGPT into computational biology workflows, which includes a component of 'prompt engineering'. Even though the title suggests a broader usage within computational biology, the mention of 'prompt engineering' in the context of using ChatGPT implies that the article will address how to effectively design prompts to interact with the chatbot for various tasks. This makes it relevant to the study of prompt engineering. However, it is not entirely focused on 'hard prefix prompts' specifically, as indicated by the initial prompt request for a 'comprehensive systematic review on hard prefix prompts'. Therefore, it doesn’t fully match the specificity requested in terms of prompt engineering study, but it is still relevant due to the inclusive nature of the tips and discussion on the best use of prompts."
-reconcile: round-table conference improves reasoning via consensus among diverse llms,gpt-4-1106-preview,4,"The study presents a multi-agent system for improving consensus and reasoning among Large Language Models (LLMs), which touches on the field of prompt engineering indirectly through the use of 'discussion prompts'. While it does not address hard prefix prompts directly, the mention of prompts as a means for agent communication suggests relevance to prompt design and its impact on model performance. Therefore, it is somewhat relevant to studies in prompt engineering, especially those exploring the interaction dynamics and prompt-response behavior within and between models."
-prompting large language models for zero-shot domain adaptation in speech recognition,gpt-4-1106-preview,4,"The abstract touches on using a domain-specific text prompt for zero-shot domain adaptation in speech recognition with a large language model, which involves prompt engineering for a narrowly defined purpose. It highlights utilizing prompts for performance improvement in a specific AI task, which is relevant to the study of prompt engineering. However, it does not directly address a 'systematic review on hard prefix prompts' or cover the broader implications and methodologies of prompt engineering, thus only partially relevant."
-prompting is not a substitute for probability measurements in large language models,gpt-4-1106-preview,7,"The study addresses an aspect of prompt engineering by comparing metalinguistic prompting with direct probability measurements in large language models. Although the study does not specifically discuss 'hard prefix prompts,' it does examine prompting techniques and their effectiveness in understanding linguistic knowledge, which is relevant to the field of prompt engineering. However, since the study is more focused on the comparison with direct probability methods and on metalinguistic judgment rather than on prompt engineering techniques, the rating is not a perfect 10."
-automated annotation with generative ai requires validation,gpt-4-1106-preview,7,"While the abstract does not mention 'prompt engineering' or 'hard prefix prompts' directly, it does discuss the quality of prompts as a factor that affects the performance of LLMs in text annotation tasks. The study highlights the importance of validation against human-generated labels, which indirectly ties into the importance of designing effective prompts to get the desired output from an LLM. Therefore, the relevance to prompt engineering is substantial but not explicit, hence the rating of 7 out of 10."
-studenteval: a benchmark of student-written prompts for large language models of code,gpt-4-1106-preview,7,"The paper introduces StudentEval, a benchmark for evaluating the efficacy of prompts written by non-expert users (beginning programmers) when interacting with code-based Large Language Models (LLMs). This is relevant to the study of prompt engineering as it provides insight into how well different models respond to prompts that vary in quality and are created by non-experts. It highlights the importance of prompt variability in assessing model performance, which directly relates to the broader inquiry of prompt engineering. Additionally, it contributes to understanding the challenges faced by new programmers in effectively leveraging LLMs for coding tasks, which could inform the development of improved prompt engineering practices. However, the paper might be more narrowly focused on the code LLMs and the non-expert population, rather than a broad, comprehensive systematic review on hard prefix prompts in general prompt engineering."
-mindmap: knowledge graph prompting sparks graph of thoughts in large language models,gpt-4-1106-preview,8,"The study described in the abstract appears to be highly relevant to the field of prompt engineering. It focuses on a specific technique of prompting large language models (LLMs) using knowledge graphs (KGs) to address common issues such as knowledge incorporation, hallucinations, and transparency. While the study does not specifically mention 'hard prefix prompts,' which may have been the focus of the requested 'comprehensive systematic review,' it does discuss the broader topic of enhancing the interaction between LLMs and external structured knowledge sources. The concept of 'MindMap' prompting could be considered as a type of advanced prompt engineering that aims to deepen the language model's understanding and reasoning capabilities. Hence, the relevance is rated at 8, acknowledging its importance to the field of prompt engineering but also noting that it does not directly address the specific aspect of 'hard prefix prompts.'"
-progprompt: program generation for situated robot task planning using large language models,gpt-4-1106-preview,8,"This publication appears to be highly relevant to prompt engineering as it discusses a structured approach to creating prompts for large language models (LLMs), specifically in the context of generating plans for situated robot tasks. It also mentions the use of ablation experiments to make concrete recommendations about prompt structure, which is an essential part of studying how different prompts affect the performance of LLMs. Although the study's primary focus is on prompts for programmatic tasks within robotics, the methodologies and findings could likely be generalized or applied to other areas of prompt engineering. The rating is not a perfect 10 since the review does not specify that it is a 'systematic review' or that it focuses on 'hard prefix prompts,' but it is still highly applicable to the field."
-interactive data synthesis for systematic vision adaptation via llms-aigcs collaboration,gpt-4-1106-preview,6,"The abstract provided for the study 'interactive data synthesis for systematic vision adaptation via llms-aigcs collaboration' indicates an exploration of the collaboration between language models (LLMs) and artificial intelligence generated content (AIGC) models for more controllable image generation, which is aligned with the practice of prompt engineering. However, the focus seems to be on data augmentation for vision tasks rather than solely on the systematic review of 'hard prefix prompts' in prompt engineering. Although prompt engineering is relevant to the work described, as it is necessary for guiding the LLMs in this process, the absence of a direct and explicit focus on a review of prompt engineering techniques, specifically hard prefix prompts, results in a moderate rating on the relevance scale."
-clusterllm: large language models as a guide for text clustering,gpt-4-1106-preview,7,"The text describes a study on a text clustering framework called ClusterLLM that uses a large language model, ChatGPT, for gaining insights and for tuning clustering granularity based on text prompts. While the study is not specifically about 'prompt engineering', the use of 'hard triplet questions' and 'carefully designed pairwise questions' indicates a deliberate and strategic approach to crafting prompts to achieve specific outcomes from the language model. This shows relevance to the study of prompt engineering, as the effectiveness of ClusterLLM relies on the proper construction of these prompts to guide the clustering process. However, the application is specific to text clustering rather than prompt engineering in general, which is why the rating is not closer to 10."
-how to unleash the power of large language models for few-shot relation extraction?,gpt-4-1106-preview,7,"The abstract indicates a study focused on few-shot relation extraction using large language models like GPT-3.5. It discusses in-context learning and data generation, which are both relevant to prompt engineering, as they deal with how to effectively use prompts to leverage the capabilities of language models for specific tasks. The mention of 'task-related instructions' is directly aligned with prompt engineering, as it involves designing prompts to guide the model's responses. However, the study appears to be more broadly focused on the applications of these methods in relation extraction rather than solely on prompt engineering techniques. Therefore, while there is clear relevance, it is not exclusively centered on prompt engineering, meriting a 7 out of 10."
-systematic rectification of language models via dead-end analysis,gpt-4-1106-preview,6,"The study presents a method for detoxification of language model outputs, which is tangentially related to prompt engineering. While the main focus is not on the development of prompts, the detoxification process could impact how prompts are engineered by reducing the probability of generating toxic responses and altering the token selection process. This can be relevant in creating safer and more effective prompts. However, the study does not directly address hard prefix prompts or systematic reviews of prompt engineering strategies, so the rating reflects moderate relevance rather than full alignment with the prompt engineering field."
-knowledge refinement via interaction between search engines and large language models,gpt-4-1106-preview,7,"The described study 'knowledge refinement via interaction between search engines and large language models' is relevant to the concept of prompt engineering to a considerable extent. The 'InteR' framework focuses on refining the search and query processes by integrating search engines and LLMs, which directly relates to the creation and optimization of prompts to facilitate these interactions. The study touches upon enhancing prompt formulation using search engine-retrieved documents. Even though it does not focus exclusively on hard prefix prompts or a systematic review of such, it presents relevant research on improving input (which includes prompts) to LLMs to achieve better results in information retrieval tasks. Hence, it contributes to the broader field of prompt engineering by proposing practical ways to optimize the interaction between users, LLMs, and search engines."
-introspective tips: large language model for in-context decision making,gpt-4-1106-preview,7,"The abstract describes a study focusing on improving the decision-making capabilities of large language models (LLMs) by generating 'Introspective Tips' which are likely a form of advanced prompts. This approach is related to prompt engineering in that it involves enhancing the prompt (a hard prefix, in this case) to improve the model's performance without altering the underlying model parameters. This relates to how prompting can be used to guide an LLM's output. However, it's not a perfect match, as it doesn't focus specifically on a 'systematic review on hard prefix prompts' but rather on a practical application of prompts for LLM decision-making enhancement. Therefore, it doesn't completely align with prompt engineering studies, but it has substantial relevance due to its focus on the optimization and application of prompts."
-augmenting greybox fuzzing with generative ai,gpt-4-1106-preview,8,"The abstract describes ChatFuzz, a system that integrates generative AI (such as ChatGPT) with greybox fuzzing to enhance the generation of format-conforming inputs. The use of ChatGPT to transform initial seed inputs into variations through prompting is directly related to prompt engineering, as this process necessitates designing effective prompts to guide the generative model to produce useful outputs for fuzzing tasks. The paper outlines an application of prompt engineering in a cybersecurity context. The reason for not giving a full 10 is because it focuses specifically on the application of generative AI for fuzzing and not on the broader study of prompt engineering across various domains or on the details of how the prompts are constructed and optimized, which would be of direct interest in a systematic review on hard prefix prompts."
-chatrule: mining logical rules with large language models for knowledge graph reasoning,gpt-4-1106-preview,5,"The described paper presents a novel framework called ChatRule, which utilizes large language models to generate logical rules for knowledge graph reasoning. While this application indirectly relates to prompt engineering, as it involves leveraging LLMs to generate content based on structured prompts from knowledge graphs, the focus is more on the application in knowledge graphs and logical rule mining rather than on the study of hard prefix prompts in a general context. Therefore, its relevance to a comprehensive systematic review on hard prefix prompts in prompt engineering may be considered moderate, as the principles could potentially inform prompt engineering techniques, but it is not directly aligned with the review's core subject."
-zero-shot prompting for code complexity prediction using github copilot,gpt-4-1106-preview,6,"The relevance of this study to prompt engineering is somewhat indirect. The study investigates the capacity of GitHub Copilot, which leverages a Large Language Model, to predict code complexity in a zero-shot manner. While this addresses the model's ability to understand and generate responses in a specific technical domain without prior training, it does not directly explore the engineering or optimization of prompts (i.e., hard prefix prompts). However, the study does touch on a key aspect of prompt-based interactions with AI, which is the model's performance on tasks with no fine-tuning. This suggests relevance in terms of understanding the capabilities and limitations of LLMs like GPT-3 when prompted with untrained tasks, which is a component of prompt engineering."
-taming ai bots: controllability of neural states in large language models,gpt-4-1106-preview,8,"The abstract describes a study that is highly relevant to prompt engineering, as it addresses the ability to control AI bot behavior through prompts, which is a core aspect of prompt engineering. This study's focus on the formal definition of 'meaning' and the conditions under which an AI bot can be directed to reach any given 'meaning' is directly related to how prompts are engineered to achieve desired outcomes in language models. The exploration of controllability in the context of large language models (LLMs) also contributes to understanding how different prompts can influence the state of AI, which is a fundamental concern for prompt engineering. The reason for not giving a perfect score is that the abstract does not mention 'hard prefix prompts' specifically, which was the focus indicated in the prompt engineering study query."
-spellburst: a node-based interface for exploratory creative coding with natural language prompts,gpt-4-1106-preview,7,"The described study 'Spellburst' is relevant to prompt engineering as it incorporates the use of natural language prompts to facilitate creative coding, an application of prompt engineering. It indicates the development of a system that allows users to interact using high-level semantic constructs ('expressive prompt-based interactions') for creative tasks, which is a part of prompt engineering. However, the focus on a node-based interface for artists suggests that prompt engineering is only a portion of the study's objectives, hence the study may not be exclusively dedicated to hard prefix prompts or the fundamental principles of prompt engineering."
-smoothllm: defending large language models against jailbreaking attacks,gpt-4-1106-preview,7,"The study deals with defence mechanisms against adversarial attacks on large language models, specifically addressing the vulnerability at the level of input prompts. Although it is not directly related to 'hard prefix prompts,' it is highly relevant to the broader field of prompt engineering as it tackles the manipulation of prompts to secure desired or undisturbed outputs from language models. The relevance is particularly notable in the context of creating robust prompting strategies that could prevent adversarial attacks and thus maintain the integrity of the interaction with the models. However, the research does not specifically focus on the systematic review of hard prefix prompts, which would be the core topic for direct relevance."
-fully autonomous programming with large language models,gpt-4-1106-preview,7,"The title and abstract indicate that this study deals with program synthesis using Large Language Models (LLMs) and explores different strategies for improving the code generation process, which includes evaluating various prompt-based instructions for the LLM. Although the study does not directly mention 'hard prefix prompts,' it implies a close examination of how to effectively prompt LLMs (like OpenAI Codex) to generate, repair, and debug programs. Given that the study involves exploring and comparing different prompt-generation techniques for improving the performance of LLMs in a programming context, it is relevant to prompt engineering to a significant extent. Thus, the rating recognizes the relevance of exploring effective instructions for LLMs, but it is not a perfect match since the study does not explicitly focus on 'hard prefix prompts' but rather on a broader set of prompt-generating techniques and program synthesis strategies."
-large language models and (non-)linguistic recursion,gpt-4-1106-preview,7,"The abstract indicates that the study involves designing prompts to elicit certain behaviors from a large language model (LLM), specifically with respect to recursive structures in language. Since prompt engineering is about how to effectively design prompts to achieve desired outputs from LLMs, this study's focus on prompt design for testing meta-linguistic awareness of recursion is relevant to prompt engineering. Although it does not directly address 'hard prefix prompts', it does touch on a related aspect of prompt design. The relevance is not maximal as it doesn't seem to focus on different categories or types of prompts, such as 'hard prefixes', but rather on a specific feature of language (recursion) and how well it can be elicited and analyzed in LLMs."
-domain knowledge distillation from large language model: an empirical study in the autonomous driving domain,gpt-4-1106-preview,8,"The paper's abstract discusses the use of prompt engineering with the LLM ChatGPT for the semi-automation of domain knowledge distillation in the engineering process, which is relevant to the subject of 'prompt engineering study'. It explores the practical application of prompts in creating knowledge-based systems, which aligns with the idea of 'hard prefix prompts' in that it examines structured interactions with an LLM. The paper presents empirical findings on the efficacy of prompt engineering in a specific domain, which is valuable for the broader study of prompt engineering techniques. The rating is not a full 10 since the 'hard prefix prompts' might refer to a more specific subset of prompts or methodologies within the field of prompt engineering, which the paper's abstract does not explicitly address."
-investigating causal understanding in llms,gpt-4-1106-preview,6,"The study discussed in the abstract is only partially relevant to prompt engineering since the investigation focuses on the causal understanding capabilities of LLMs rather than specifically on 'hard prefix prompts.' However, the research touches on how varying prompt presentations can affect LLM responses, which is related to the concept of prompt engineering. The relevance lies in understanding the influence of presentation form, both in hard prefix prompts and other types of prompting. The rating is not higher because the study does not directly focus on 'hard prefix prompts,' which seems to be the specific area of interest."
-reducing retraining by recycling parameter-efficient prompts,gpt-4-1106-preview,9,"The provided abstract is highly relevant to prompt engineering study as it addresses the issue of retraining prompts when an underlying language model is updated. The concept of 'Prompt Recycling' directly pertains to prompt engineering, by aiming to adapt prompts to new versions of a model without the need for extensive retraining. This research could significantly contribute to the efficiency and practicality of using prompts in various applications, hence the high relevance rating."
-selecting better samples from pre-trained llms: a case study on question generation,gpt-4-1106-preview,8,"The paper presents a study on selecting the best outputs from samples generated by Large Language Models (LLMs) using prompt-based approaches, which is highly relevant to the field of prompt engineering. Although the study focuses specifically on the task of question generation, the research on improving the diversity and quality of LLM outputs through prompt manipulation is a direct application of prompt engineering principles. The rating is not a full 10 because the paper is a case study limited to question generation and does not cover the broader spectrum of hard prefix prompts or systematic reviews of prompt engineering."
-validating large language models with relm,gpt-4-1106-preview,8,"The abstract mentions the validation and evaluation of language model concerns including bias and inappropriate language, which are topics relevant to prompt engineering because they address the model's outputs in response to prompts. Furthermore, ReLM's increased prompt-tuning coverage directly pertains to prompt engineering as it suggests an improved method for evaluating and refining how prompts are designed and how models respond to them. The connection to 'hard prefix prompts' is not explicit, leading to a rating lower than 10, but the general subject matter is pertinent to studies in prompt engineering."
-preserving in-context learning ability in large language model fine-tuning,gpt-4-1106-preview,9,"The discussed paper addresses a crucial aspect of prompt engineering, which is preventing the loss of a large language model's innate in-context learning abilities during the fine-tuning process. The proposed two-stage fine-tuning framework, ProMoT, is highly relevant as it involves prompt tuning, a method directly connected to prompt engineering. The study's findings on how to maintain a model's performance across various tasks and its ability to work with different formats add valuable insights to the field. The research is relevant to prompt engineering as it provides a potential solution to a common problem faced when fine-tuning models with hard prompts, although it does not directly discuss 'hard prefix prompts'. Nonetheless, the principles could be applicable to the systematic review on hard prefix prompts."
-improving knowledge extraction from llms for robotic task learning through agent analysis,gpt-4-1106-preview,8,"The abstract outlines a study that, while not focusing exclusively on hard prefix prompts, does address the broader concept of prompt engineering within the context of LLMs and robotic task learning. It directly engages with how prompt engineering can be improved and augmented through a cognitive-agent approach, making it relevant to those interested in the intricacies and optimizations of prompting large language models. This is highly pertinent to the field of prompt engineering, although the text does not specifically mention 'hard prefix prompts.'"
-large language models as superpositions of cultural perspectives,gpt-4-1106-preview,7,"The abstract discusses the concept of 'perspective controllability' within Large Language Models (LLMs), which is relevant to prompt engineering. It highlights how LLMs can exhibit context-dependent values and personality traits, a concept critical to understanding how different prompts can influence the output of such models. Despite not directly addressing 'hard prefix prompts', the study does engage with the underlying mechanics that would be essential for designing effective prompts to guide LLM responses, which is a fundamental aspect of prompt engineering. Therefore, while not focused on hard prefix prompts specifically, the research contributes to the broader understanding of prompt design and LLM interaction methods, which could impact the study of prompt engineering."
-robot task planning based on large language model representing knowledge with directed graph structures,gpt-4-1106-preview,7,"The given title and abstract involve the development of an LLM prompt template, which indicates a study related to prompt engineering as it aims to create a prompt structure with strong expressive power. This is directly relevant to the exploration of how prompts are structured and their relation to large language models (LLMs) in the context of task planning for robots. The systematic review of 'hard prefix prompts' could likely benefit from insights derived from this proposed method and its structured template. However, the study might be more focused on the application side of prompt engineering in robot task planning, rather than a broad and comprehensive review of prompt engineering techniques and theories. Therefore, it is not entirely focused on 'hard prefix prompts' but is relevant to the broader field of prompt engineering."
-using large language models to generate engaging captions for data visualizations,gpt-4-1106-preview,8,"The abstract discusses the application of large language models to generate captions for data visualizations, with a focus on the process of 'prompt engineering'. Although it does not mention a 'hard prefix prompt' specifically, the study is centered around the broader concept of prompt engineering, which is designing the most effective prompts to elicit desired responses from a language model like GPT-3. This falls under the umbrella of prompt engineering and is therefore highly relevant to the study of how prompts can affect the output of language models. The rating is not a full 10 because the study abstract does not specifically address a 'systematic review' on 'hard prefix prompts' but seems more focused on practical experimentation and application."
-spec: a soft prompt-based calibration on performance variability of large language model in clinical notes summarization,gpt-4-1106-preview,7,"The relevance of the provided title and abstract to prompt engineering is quite significant, given that the study centers on the application of prompts, specifically 'soft prompts,' to refine the performance of large language models in the context of summarizing clinical notes. Prompt engineering fundamentally involves the strategic use of prompts to effectively steer language models towards desired outputs. The research introduces a Soft Prompt-Based Calibration (SPeC) pipeline, which pertains to optimizing the use of prompts to achieve more consistent and accurate results. Although the study is situated in a specific application area—healthcare—and focuses on 'soft prompts' rather than 'hard prefixes,' it contributes to the broader understanding of how prompt design can affect language model behavior and performance. Nonetheless, it does not directly address the systematic review of hard prefix prompts, which would be the core of a prompt engineering study, hence the rating is not a perfect 10."
-using a large language model to control speaking style for expressive tts,gpt-4-1106-preview,7,"While the study primarily focuses on the use of a language model for controlling prosody in text-to-speech (TTS) systems, it is relevant to prompt engineering due to the use of prompts to control language model outputs. Specifically, the study involves engineering prompts that guide the language model to produce suggestions on pitch, energy, and duration for expressive TTS, which is an application of prompt engineering. Though the study’s main goal is not about prompt engineering itself, the methodology of designing prompts to achieve desired outcomes in model behavior is an essential aspect of prompt engineering. Therefore, this study would provide useful information for those interested in the intersection of prompt engineering and TTS technology."
-gpt4tools: teaching large language model to use tools via self-instruction,gpt-4-1106-preview,8,"The paper is relevant to prompt engineering study because it discusses an advanced method of enabling Large Language Models (LLMs) to use tools through the generation of an instruction-following dataset using a form of prompt engineering. It specifically mentions 'sophisticated prompt engineering' as a crucial component for LLMs tool usage capabilities. Although the focus is more on self-instruction and tool usage within multimodal contexts, prompt engineering is a significant part of the methodology used in teaching the LLMs. However, it does not focus exclusively on 'hard prefix prompts,' which would be central to a study specifically addressing prompt engineering, hence the rating is not a full 10."
-simulating h.p. lovecraft horror literature with the chatgpt large language model,gpt-4-1106-preview,9,"The study directly investigates the application and effectiveness of prompt engineering techniques to guide a language model's output to emulate H.P. Lovecraft's horror literature style. Given that the focus is on both the generation of text in a specific literary style and the examination of prompt engineering methods, this is highly relevant to the field of prompt engineering. The rating is not a perfect 10 because the study also delves into the model's architecture and comparative analysis, which, while related, are not exclusively focused on prompt engineering."
-s3: social-network simulation system with large language model-empowered agents,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering, as it explicitly mentions the use of prompt engineering and prompt tuning techniques to shape the behavior of agents within the social network simulation system. It indicates that these techniques are critical for the agents' performance in emulating human-like sensing, reasoning, and behavior, which are key in the context of the study. The rating is not a full 10 because the abstract does not provide detailed insight into the nature of the prompt engineering study or its findings specific to the 'hard prefix prompts', which is the specific focus of the prompt engineering study in question."
-hierarchical prompting assists large language model on web navigation,gpt-4-1106-preview,8,"The abstract discusses a hierarchical prompting approach specifically designed to improve the performance of large language models on tasks involving complex observations, such as web navigation. While this is not directly related to 'hard prefix prompts', it falls under the broader category of prompt engineering which aims to enhance how models interpret and react to prompts. The hierarchical structure mentioned involves creating more efficient prompts that enable better decision making. Therefore, the study is highly relevant to the field of prompt engineering, albeit with a specific focus on a hierarchical strategy rather than hard prefix prompting techniques."
-co-training improves prompt-based learning for large language models,gpt-4-1106-preview,9,"The abstract describes research on enhancing prompt-based learning with co-training, which is directly relevant to the field of prompt engineering. It explores methods to improve and iterate on prompt models, which are integral to the efficiency and effectiveness of large language models like GPT-3. Although the title and abstract do not specifically mention 'hard prefix prompts,' the systematic review of improving prompt-based learning in LLMs is encompassed within the broader scope of prompt engineering. A small deduction is made because the exact term 'hard prefix prompts' was not discussed, but the overall content is highly pertinent."
-prompt text classifications with transformer models! an exemplary introduction to prompt-based learning with large language models,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it investigates prompt-based learning, a key concept within this field, especially as it pertains to the use of transformer models and large language models for classification tasks. Although it does not specifically mention engineering 'hard prefix prompts', it still examines the broader subject of using prompts in machine learning. The emphasis on the practical application of prompt-based learning and comparison with human ratings also adds value to the context of prompt engineering."
-on robustness of prompt-based semantic parsing with large pre-trained language model: an empirical study on codex,gpt-4-1106-preview,7,"The study is relevant to prompt engineering to a significant extent as it investigates the robustness of prompt-based semantic parsing with a large pre-trained language model such as CODEX, which is a practical aspect of prompt engineering. However, the focus is more on adversarial robustness and less on the hard prefix prompts specifically. As the study involves understanding how prompts work with a language model trained on code, it has implications for the design of prompts (engineering) for better robustness, which is a critical aspect of prompt design. Nevertheless, the absence of a direct investigation into 'hard prefix prompts' as suggested by the original prompt, limits the full relevance of this study to the prompt engineering field described in the initial question."
-investigating the translation performance of a large multilingual language model: the case of bloom,gpt-4-1106-preview,7,"The relevance of this study to prompt engineering is fairly high as it touches upon prompt design within the context of evaluating a multilingual language model's performance in machine translation tasks. While the study is not exclusively focused on 'hard prefix prompts,' it does examine how variations in prompts (0-shot vs. few-shot settings) influence the language model's output. Therefore, the investigation of prompt design as a factor in model performance is pertinent to the broader field of prompt engineering, particularly as it relates to enhancing the model's understanding and generating the correct language output. However, the rating is not a full 10 since the primary focus is on the translation performance rather than on prompt engineering methodologies or prompt optimization techniques exclusively."
-cold-start data selection for few-shot language model fine-tuning: a prompt-based uncertainty propagation approach,gpt-4-1106-preview,8,"The title and abstract are highly relevant to prompt engineering as they discuss 'PATRON', a method utilizing prompt-based approaches for improving the data selection process in few-shot learning scenarios for language models. This method directly relates to engineering prompts to handle uncertainty, which is a subset of the broader field of prompt engineering. However, the study does not seem to concentrate on 'hard prefix prompts', which is the specific type mentioned in the prompt. Hence, it may not cover the full scope of the systematic review on hard prefix prompts if that is the sole focus, but still remains very relevant to the broader category of prompt engineering studies."
-can large language models reason about medical questions?,gpt-4-1106-preview,8,"This abstract is highly relevant to prompt engineering as it discusses the effectiveness of different prompting scenarios such as Chain-of-Thought, zero-shot, few-shot, and retrieval augmentation in eliciting accurate responses from large language models for medical questions. The study focuses on the LLM's reasoning capabilities in the context of complex real-world questions, which is a critical component of prompt engineering, especially when evaluating the utility and reliability of prompts in domain-specific knowledge tasks."
-tabllm: few-shot classification of tabular data with large language models,gpt-4-1106-preview,7,"The study addresses the conversion of tabular data to a natural-language string for classification tasks, which can be considered a specific form of prompt engineering. The relevance lies in the fact that the process involves crafting prompts that enable a language model to interpret and classify non-textual data efficiently. However, the study's primary focus is on tabular data and classification tasks, rather than the broader topic of hard prefix prompts used across various types of data and tasks in prompt engineering. Therefore, the rating is a 7, indicating that the study is relevant but not fully aligned with a systematic review of hard prefix prompts in prompt engineering."
-evaluating the text-to-sql capabilities of large language models,gpt-4-1106-preview,6,"The abstract describes an empirical study on the capability of a language model, Codex, to interpret and convert natural language into SQL queries. The study focuses on performance evaluation and comparison with state-of-the-art models in terms of few-shot learning, which is facilitated by providing a few in-domain examples in the prompt. Although the abstract does not explicitly mention the term 'prompt engineering,' the essence of evaluating the impact of tailored prompts on the model's performance is captured in the process of providing 'in-domain examples'. This could be considered a form of prompt engineering, as it involves crafting prompts to improve task-specific performance. Hence, the study has relevance to the broader field of prompt engineering, specifically regarding how prompts can enable large language models to understand and generate structured queries like SQL. However, the focus is not on 'hard prefix prompts' or a comprehensive systematic review on such, which would be more directly related to the prompt engineering study described in the prompt, thus warranting a moderate rating rather than a high one."
-prompting is programming: a query language for large language models,gpt-4-1106-preview,9,"The abstract provided discusses Language Model Programming (LMP) and Language Model Query Language (LMQL), which are novel approaches to prompt engineering. The focus on an efficient inference procedure and the ability to impose constraints on language model outputs is highly relevant to the field of prompt engineering, as it aims to optimize the way we interact with language models. The relevance is not rated a full 10 only because prompt engineering can encompass a broader range of techniques and considerations beyond the specific innovations of LMP and LMQL, such as different prompting strategies, the study of few-shot learning, etc. However, the presented work is undeniably pertinent and likely to contribute significantly to the advancement of prompt engineering methodologies."
-large language models are reasoning teachers,gpt-4-1106-preview,8,"The paper is highly relevant to the study of prompt engineering as it discusses an advanced technique, Fine-tune-CoT, which generates reasoning samples from large models to improve the prompt-based capabilities of smaller models. Although the technique focuses on fine-tuning smaller models rather than the creation of prompts per se, the central idea of using larger models as a 'reasoning teacher' is deeply intertwined with generating more effective prompts that leverage the large model's understanding to enhance reasoning in smaller models. This contributes to the field of prompt engineering by optimizing the efficiency and capability of prompts in eliciting desired responses, particularly for complex reasoning tasks."
-class-aware visual prompt tuning for vision-language pre-trained model,gpt-4-1106-preview,9,"The title and abstract of the paper indicate a high relevance to prompt engineering as the study focuses on tuning prompts for a vision-language pre-trained model, which involves modifying and optimizing the input prompts to elicit desired responses from the model. Although the paper does not explicitly mention 'hard prefix prompts', it falls within the broader category of prompt engineering by exploring 'visual prompts' and 'text prompts'. This makes it significantly relevant to the topic of prompt engineering study as it contributes to the understanding of how to efficiently tune and adapt pre-trained models to specific tasks through prompt modifications."
-analogy generation by prompting large language models: a case study of instructgpt,gpt-4-1106-preview,9,"The study's focus on prompt design and its effectiveness in generating analogies is highly relevant to prompt engineering. It explores how different prompts affect InstructGPT's output, which is a core aspect of the field. The sensitivity analysis to prompt structure and variations is also pertinent to understanding how to engineer prompts for better performance. The rating is not a full 10 because the study is specifically about analogy generation, so it might not cover other aspects of prompt engineering comprehensively."
-using large language models to simulate multiple humans,gpt-4-1106-preview,8,"The presented abstract is highly relevant to prompt engineering as it discusses the use of prompt templates to generate varied responses from a language model in the context of behavioral experiments. The methodology relies heavily on designing effective prompts to ensure the simulation accuracy of human responses. This is directly related to prompt engineering, as it requires an understanding of how to tailor prompts to elicit specific reactions from the model. The study's validation and exploration of model responses to different scenarios are a core part of prompt engineering research. However, the study does not explicitly focus on 'hard prefix prompts', thus the rating is not a full 10."
-persistent anti-muslim bias in large language models,gpt-4-1106-preview,6,"While the study is highly relevant to the broader field of AI ethics and bias in machine learning models, its direct relevance to 'prompt engineering' is moderate. It touches on the concept of 'adversarial text prompts' as a means to counteract bias in language models, which does fall under the scope of prompt engineering. However, the study's primary focus is on the identification and analysis of bias, rather than on the engineering of prompts as a method for directing or improving the language model's outputs. More specifically, it does not address 'hard prefix prompts' in the systematic review sense but does explore the dynamic between prompt construction and model responses related to bias."
-large language models in the workplace: a case study on prompt engineering for job type classification,gpt-4-1106-preview,9,"The abstract provided discusses a case study that centers on the use of prompt engineering for the specific task of job classification. It details the comparative performance analysis of various models including state-of-the-art GPT-3.5-based language models. Considering that prompt engineering is both a focus of the study and is used as a tool to direct the language models toward the desired classification task, the relevance to prompt engineering is very high. A point is subtracted because the details on 'hard prefix prompts' specifically are not mentioned, which could be an aspect of prompt engineering but is not explicitly covered in the abstract provided."
-soft-prompt tuning for large language models to evaluate bias,gpt-4-1106-preview,7,"The abstract discusses 'soft-prompt tuning' for evaluating biases in large language models, which is related to prompt engineering as it involves the refinement of prompts to achieve specific outcomes from language models. However, the study focuses specifically on sentiment classification tasks and the evaluation of bias, not on 'hard prefix prompts' as specified in the original query for a comprehensive systematic review. Therefore, the relevance to the precise subject of 'hard prefix prompts' is indirect, hence the rating of 7, indicating moderate relevance to prompt engineering but not closely aligned with the original request for information on hard prefix prompts."
-promptify: text-to-image generation through interactive prompt exploration with large language models,gpt-4-1106-preview,8,"The paper describes 'Promptify', a system designed to aid in prompt engineering for text-to-image generation by making the process interactive, which is highly relevant to the study of prompt engineering. While it doesn't specifically address 'hard prefix prompts,' the general field of designing and refining prompts to achieve better alignment with user intent is central to prompt engineering. The suggestion engine's utilization of large language models to aid in crafting prompts further aligns this work with the broader domain of prompt engineering. However, the paper's focus on text-to-image and not purely text outputs means it's not a complete overlap with prompt engineering studies that may deal with a variety of output modalities (e.g., text-to-text, text-to-speech), hence the rating is not a full 10."
-you only prompt once: on the capabilities of prompt learning on large language models to tackle toxic content,gpt-4-1106-preview,8,"The study directly investigates the use of prompt learning with large language models, which is a clear application of prompt engineering. It focuses on how prompting these models can be used to address toxicity, a significant part of language model applications. The relevance is high because it involves creating prompts for classification, detection, and detoxification tasks. However, the study is specific to toxic content moderation, which is a subset of prompt engineering, hence not a full 10."
-controlling the extraction of memorized data from large language models via prompt-tuning,gpt-4-1106-preview,8,"The abstract details a study that is highly relevant to prompt engineering, as it directly involves the technique of prompt-tuning to manipulate the behavior of Large Language Models. It is relevant to the study of controlling the output of such models, particularly concerning data extraction and privacy issues, which are key considerations in prompt engineering. The deduction of two points reflects that the abstract specifically focuses on the memorization aspect and the privacy concerns rather than the broader field of prompt engineering or hard prefix prompts in general."
-sensitivity and robustness of large language models to prompt template in japanese text classification tasks,gpt-4-1106-preview,8,"The given abstract is highly relevant to prompt engineering as it investigates the effects of prompt template modifications on the performance of Large Language Models (LLMs), specifically in the context of Japanese text classification tasks. It addresses critical aspects of prompt engineering, such as sensitivity and robustness of language models to changes in prompt templates. The study's focus on how simple changes can lead to significant discrepancies in model performance is directly linked to prompt engineering. The rating is not a full 10 because the abstract mentions a specific application (Japanese text classification) rather than providing a broader analysis across various applications and languages, which could impact the generalizability of the findings to all areas of prompt engineering."
-sensitivity and robustness of large language models to prompt in japanese,gpt-4-1106-preview,8,"The paper focuses on the sensitivity and robustness of Large Language Models to prompt changes, which is a core aspect of prompt engineering. It is highly relevant as it evaluates how minor alterations in prompts can impact model performance, directly relating to the study of prompt engineering. The slight deduction in rating is because it does not address 'hard prefix prompts,' the specific type of prompt mentioned in the original query, but rather the broader concept of prompt sensitivity and robustness in the context of Japanese language prompts."
-bounding the capabilities of large language models in open text generation with prompt constraints,gpt-4-1106-preview,9,"The abstract presents a relevant study in the area of prompt engineering as it focuses on analyzing and bounding abilities of generative models with a prompt-centric approach. The researchers' use of structural and stylistic constraints directly pertains to prompt engineering, given that they are well-defined constraints that can affect how prompts guide model generation. The relevance is further supported by the use of a major model like GPT-3 as a case study and the consideration of generalizability to other large models. The deduction of one point is due to the absence of specific details about 'hard prefix prompts' from the given abstract, though the content is strongly related to prompt engineering overall."
-linguist: language model instruction tuning to generate annotated utterances for intent classification and slot tagging,gpt-4-1106-preview,9,"The abstract describes a method called LINGUIST which involves fine-tuning a large language model using a flexible instruction prompt to improve the generation of annotated data for Intent Classification and Slot Tagging. This process is closely related to prompt engineering, as it involves the specific design of prompts to achieve desired outcomes in the model's performance. Although it is not exclusively focused on 'hard prefix prompts,' the practice of instruction tuning and prompt design to guide the model's output makes this study highly relevant to the field of prompt engineering. The fine-tuning on instruction prompts is a subset of prompt engineering that has a broad impact on the data generation process for natural language understanding tasks."
-augesc: dialogue augmentation with large language models for emotional support conversation,gpt-4-1106-preview,4,"The study described does involve prompts as it discusses leveraging large language models for dialogue augmentation, specifically in the context of emotional support conversation. The prompt engineering aspect is present in the sense that the researchers instruct the model to complete dialogues which could be considered a form of a 'prompt'. However, hard prefix prompts, which imply a specific approach to structuring prompts to elicit desired responses, are not directly mentioned. This suggests that while the study is related to prompt design and usage, it may not focus on the 'hard prefix prompts' aspect extensively, leading to a moderate relevance rating."
-conal: anticipating outliers with large language models,gpt-4-1106-preview,8,"The abstract describes a methodology for improving text classification models' handling of out-of-distribution (OOD) examples by generating these examples via prompts to a large language model. The relevance to prompt engineering lies in the fact that it utilizes prompt-based techniques to generate new datasets that represent novel classes, which is a part of the broader field of prompt engineering. While the study does not focus on 'hard prefix prompts' specifically, the process of generating prompts to create OOD examples is an integral part of prompt engineering. Therefore, the relevance is rated as high but not maximal due to the specific approach not being the central topic of prompt engineering studies."
-variational prompt tuning improves generalization of vision-language models,gpt-4-1106-preview,9,"The presented study is highly relevant to prompt engineering as it explores an innovative approach to prompt tuning for vision-language models, which is a key area in the field. It proposes a method that enhances the generalization capabilities of foundational models by using a probabilistic model to generate prompts. This addresses a common issue with prompt tuning where prompts may be too narrow or specific, thus hindering the ability of the model to generalize. The mention of integration with standard and conditional prompt learning frameworks suggests that this study is specifically tailored towards improving the efficacy of prompt engineering in practical applications. The only reason it doesn't receive a perfect score is because the study focuses on vision-language models, and while it is highly relevant, it may not encompass all aspects of prompt engineering that might be applicable in purely language-based models."
-prompt-and-rerank: a method for zero-shot and few-shot arbitrary textual style transfer with small language models,gpt-4-1106-preview,8,"The abstract describes a method that directly involves prompt engineering through the use of zero-shot or few-shot prompting as part of a 'Prompt-and-Rerank' process for textual style transfer. Deliberate prompt design choices are discussed as affecting the quality of style transfer, including the use of prompt paraphrasing and delimiter-pair choice. This directly ties to the area of prompt engineering as it is about optimizing the prompts given to language models to achieve a certain task. However, the relevance is not a full 10 as the primary focus is on textual style transfer rather than the structure and formulation of the prompts themselves which would constitute a comprehensive systematic review on hard prefix prompts."
-visual-language navigation pretraining via prompt-based environmental self-exploration,gpt-4-1106-preview,8,"The abstract presents a study on improving Vision-Language Navigation (VLN) by utilizing a method called Prompt-based Environmental Self-exploration (ProbES). This involves prompt tuning for language embeddings to adapt a pretrained model like CLIP to new environments without human supervision. Although not directly concerned with 'hard prefix prompts', it relates to prompt engineering significantly as it deals with the adaptation and tuning of prompts to enhance learning efficiency in AI models. The focus is more on vision-language applications and self-exploration but it still falls under the broad umbrella of prompt engineering."
-prcbert: prompt learning for requirement classification using bert-based pretrained language models,gpt-4-1106-preview,9,"The relevance of the given paper to the study of prompt engineering is high. The paper discusses the application of prompt learning, a technique within prompt engineering, to the domain of software requirement classification using BERT-based pre-trained language models. Since it explicitly deals with the use of prompt templates to improve classification performance, it is highly relevant to the prompt engineering field, particularly in the context of applying these techniques to domain-specific tasks. However, the focus appears to be more on the classification performance rather than the prompt engineering methodology itself, which is why the rating is not a full 10."
-fundamental limitations of alignment in large language models,gpt-4-1106-preview,8,"The abstract discusses the concept of 'alignment' in language models and the theoretical approach to understand the limitations of alignment, which is highly relevant to prompt engineering. The Behavior Expectation Bounds (BEB) framework mentioned in the abstract directly relates to how prompts can influence a model's behavior, which is a core component of prompt engineering. The paper addresses the ability to trigger particular behaviors in large language models through the use of prompts, making it pertinent to the study of hard prefix prompts and how they can be engineered. Although the focus seems to be on the alignment aspect rather than the specific structure and content of prompts (i.e., 'hard prefixes'), the findings about adversarial prompting and the length of the prompt influencing behavior are crucial for the domain of prompt engineering. Therefore, I've rated it an 8 as it is quite pertinent but not exclusively centered on hard prefix prompts."
-synthetic prompting: generating chain-of-thought demonstrations for large language models,gpt-4-1106-preview,9,"The relevance of the given article to prompt engineering is very high. Synthetic prompting, as described, directly addresses the creation and refinement of prompts for large language models, aiming to improve their reasoning capabilities. The systemic approach to generating chain-of-thought demonstrations ties closely to the study and evolution of prompt engineering techniques. It demonstrates the iterative process of generating questions and enhancing reasoning chains, which is at the heart of prompt engineering. The only reason it doesn't receive a perfect score is because the content might not be exclusively focused on 'hard prefix prompts' as mentioned in the original request, but rather on the broader concept of prompt generation and optimization."
-prompting large language models with answer heuristics for knowledge-based visual question answering,gpt-4-1106-preview,8,"The relevance to prompt engineering is high, as the study directly addresses the utilization of prompts in improving the performance of a large language model (GPT-3) for the specific task of knowledge-based visual question answering (VQA). The approach involves training a model to generate 'answer heuristics' which are then used as part of the prompts to refine GPT-3's understanding of the questions, thereby enhancing its ability to produce accurate answers. This method represents a novel application of prompt engineering, highlighting its effectiveness in extracting and utilizing implicit knowledge for complex tasks. However, the focus is particularly on incorporating answer heuristics into prompts for a VQA task rather than on hard prefix prompts in general, so the rating is not a perfect 10."
-large language models are effective text rankers with pairwise ranking prompting,gpt-4-1106-preview,9,"The paper addresses a technique called Pairwise Ranking Prompting (PRP) which is highly relevant to the field of prompt engineering for large language models (LLMs). It contributes to the understanding of how different prompting methods can affect the capabilities of LLMs in the context of ranking tasks. As prompt engineering is largely about optimizing the interaction between users and LLMs for specific tasks, a study that advances the state-of-the-art in this manner is closely related to prompt engineering studies."
-exploring the mit mathematics and eecs curriculum using large language models,gpt-4-1106-preview,7,"The abstract describes a study where large language models are evaluated and fine-tuned for solving mathematics and EECS problems, which relates to prompt engineering in terms of optimizing inputs to enhance model performance. GPT-4's 'perfect solve rate' with prompt engineering indicates a direct application of prompt engineering techniques. However, the study focuses more broadly on the model's capabilities in academic problem-solving rather than strictly on prompt engineering methodologies and their systematic review, which would be the core interest of a 'hard prefix prompts' study. Hence, the relevance is strong but not complete."
-retroformer: retrospective large language agents with policy gradient optimization,gpt-4-1106-preview,6,"The abstract describes a study related to optimizing large language agents through policy gradient optimization, which indirectly involves engineering of prompts because it mentions the automatic tuning of language agent prompts based on environment feedback. While this does not specifically target 'hard prefix prompts,' it is relevant to the broader field of prompt engineering as it involves refining prompts to improve agent performance. However, the lack of direct mention of 'hard prefix prompts' or a comprehensive systematic review of them justifies a moderate rating rather than a high one."
-sequential monte carlo steering of large language models using probabilistic programs,gpt-4-1106-preview,8,"The paper presents a method for controlling the outputs of large language models using sequential Monte Carlo steering, which is highly relevant to prompt engineering as it deals with influencing and guiding the performance of these models at inference time. This approach could be viewed as an advanced form of prompt engineering where the prompts are not fixed but are instead dynamic and take into account syntactic and semantic constraints. Although it does not explicitly tackle 'hard prefix prompts', it proposes a method that is applicable to prompt engineering in a broader sense. Hence, the relevance is high but not absolute, as it is not directly focusing on a 'systematic review' or explicitly on 'hard prefix prompts'."
-fineval: a chinese financial domain knowledge evaluation benchmark for large language models,gpt-4-1106-preview,7,"While the title 'fineval: a chinese financial domain knowledge evaluation benchmark for large language models' and abstract presented do not directly deal with 'prompt engineering' in the context of designing or studying hard prefix prompts, the mention of employing various prompt types (zero-shot, few-shot, answer-only, and chain-of-thought) within the evaluation benchmark touches on the principles of prompt engineering. Assessing different prompting strategies is essential to understanding how LLMs like GPT-4 respond in domain-specific tasks. The study's focus on measuring the performance of these LLMs using a set of prompts tailored for the financial domain implies a level of relevance to prompt engineering, as it would provide insights into the effectiveness of prompt design in eliciting the desired response from the models. However, the absence of a specific focus on the systematic review of hard prefix prompts limits the rating from being higher."
-leveraging large language models for mental health prediction via online text data,gpt-4-1106-preview,6,"The title and abstract indicate that this study involves leveraging large language models (LLMs) for mental health prediction tasks by analyzing online text data, which is related to the application of LLMs, but it doesn't specifically mention 'hard prefix prompts' or 'prompt engineering' as the central theme. However, the use of zero-shot and few-shot prompting, along with instruction finetuning, falls under the broader category of prompt engineering techniques. Therefore, while the study is tangentially relevant to prompt engineering because it involves designing inputs for LLMs to perform specific tasks, it is not focused on a comprehensive systematic review of hard prefix prompts, which makes it only moderately relevant."
-analyzing chain-of-thought prompting in large language models via gradient-based feature attributions,gpt-4-1106-preview,9,"The provided abstract is highly relevant to the field of prompt engineering, as it focuses on the Chain-of-thought (CoT) prompting method, which is an advanced tactic in prompting for large language models. The study investigates the impact CoT has on the models' interpretation and weighting of input tokens, which is a fundamental aspect of prompt engineering. Although the paper does not specifically address 'hard prefix prompts,' the examination of CoT prompting mechanisms contributes valuable insights into the broader topic of prompt design effectiveness in LLMs, making it pertinent to the prompt engineering study. The reduction in relevancy score from a perfect 10 to a 9 is due to the specified focus on CoT rather than hard prefix prompts specifically."
-"utilizing large language models to simplify radiology reports: a comparative analysis of chatgpt-3.5, chatgpt-4.0, google bard, and microsoft bing",gpt-4-1106-preview,8,"The presented study, while not focusing on 'hard prefix prompts' specifically, addresses the broader field of prompt engineering by evaluating the effectiveness of different prompts in guiding LLMs to simplify radiology reports. Since the performance variation based on the type of prompt used is central to the paper, it contributes relevant insights into how prompts can be engineered for specific applications in medical communication. Thus, the relevance is high, but not a perfect score due to it not focusing exclusively on 'hard prefix prompts'."
-can large language models empower molecular property prediction?,gpt-4-1106-preview,4,"The study focuses on the application of Large Language Models (LLMs) for molecular property prediction using SMILES text, which demonstrates a use case for LLMs that is adjacent to the concept of prompt engineering. Although it deals with prompting LLMs for in-context learning and involves the generation of explanations, which are relevant techniques in prompt engineering, the study's primary aim is not a systematic review of prompt engineering itself, nor does it specifically address 'hard prefix prompts'. Therefore, its relevance to a comprehensive systematic review on hard prefix prompts is tangentially related but not directly aligned, warranting a moderate relevance rating."
-understanding the effectiveness of very large language models on dialog evaluation,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it investigates the structure of prompts and their impact on the performance of various large language models in dialog evaluation tasks. While it does not specifically address 'hard prefix prompts,' it does concern the broader category of prompting and example selection, which are integral components of prompt engineering. The systematic review of how the datasets influence prompt construction and the exploration of example quantity and selection type are directly related to understanding and optimizing prompt efficacy."
-generating data for symbolic language with large language models,gpt-4-1106-preview,6,"The abstract indicates that the paper is closely related to the use of prompts in the context of LLMs for data generation, specifically in the area of symbolic language tasks. While the study does not directly focus on 'hard prefix prompts' as stipulated in the prompt engineering study question, it does explore 'informative prompt' design in order to steer the LLM's data generation process. This suggests a strong relevance to the practice of prompt engineering and the optimization of prompts for specific tasks in LLMs. However, the lack of explicit focus on 'hard prefix prompts' reduces the direct relevance to the systematic review concerning that specific aspect of prompt engineering."
-identifying and extracting rare disease phenotypes with large language models,gpt-4-1106-preview,9,"The abstract describes a study focused on the development and evaluation of novel prompts for named entity recognition (NER) in the context of extracting rare disease (RD) phenotypes using large language models such as ChatGPT. This work is highly relevant to the field of prompt engineering as it directly involves designing and testing prompts to improve NER performance in zero-shot and few-shot settings, as well as comparing these results to traditional fine-tuning methods. This investigation contributes to understanding the potential and limitations of prompt engineering in practical applications, although it is specific to a particular domain of rare diseases."
-prompt tuning or fine-tuning - investigating relational knowledge in pre-trained language models,gpt-4-1106-preview,8,"The relevance of the study to prompt engineering is high since it directly deals with the optimization of query prompts for relational knowledge extraction from pre-trained language models. The study compares prompt tuning techniques against adaptive fine-tuning, which is an essential contrast in the field of prompt engineering, as it investigates how pre-trained models can be made more efficient in understanding and responding to prompts without extensive additional training. While the paper does not focus solely on 'hard prefix prompts', it addresses the broader topic of optimizing prompts for better model performance which is integral to prompt engineering studies."
-on transferability of prompt tuning for natural language understanding,gpt-4-1106-preview,9,"The provided abstract is highly relevant to prompt engineering study, specifically within the domain of natural language understanding. It discusses prompt tuning, an essential aspect of prompt engineering, where the reusability and transferability of prompts across different tasks and models are investigated. The exploration of knowledge transfer for improving prompt tuning efficiency is directly applicable to strategies in prompt engineering for large pre-trained language models. The reason for not giving a perfect score is the absence of a direct mention of 'hard prompts,' but the study's content is still very pertinent to the broader field of prompt engineering."
-knowledge-augmented language model prompting for zero-shot knowledge graph question answering,gpt-4-1106-preview,8,"The relevance of this study to prompt engineering is significant, as it involves the augmentation of input prompts with factual information retrieved from a knowledge graph to improve the performance of Large Language Models (LLMs) in answering questions. This approach directly pertains to prompt engineering by structuring the input to LLMs in a way that aids in zero-shot knowledge graph question answering. Although the focus is not specifically on 'hard prefix prompts,' the method does relate to constructing effective prompts that align with the principles of prompt engineering. The high rating reflects the close relation of knowledge augmentation in prompting to enhance model performance without additional training, which is a core aspect of prompt engineering. The rating is not a perfect 10 because the study specifies a specialized application in knowledge graphs and does not broadly survey prompt engineering techniques or include a systematic review of hard prefix prompts generally."
-purr: efficiently editing language model hallucinations by denoising language model corruptions,gpt-4-1106-preview,7,"The study discusses improving the editing and attribution of language model outputs through prompt-based editing methods, which is closely related to prompt engineering. However, the focus is specifically on reducing hallucinations and improving efficiency, rather than on hard prefix prompts. While it does pertain to the broader category of prompt engineering, it does not address the systematic review of hard prefix prompts directly, hence the relevance rating is above average but not maximum."
-revealing the unwritten: visual investigation of beam search trees to address language model prompting challenges,gpt-4-1106-preview,8,"The study is highly relevant to the field of prompt engineering as it explores prompt refinement and the intricacies of guiding outputs of generative language models. By introducing a method to investigate the beam search tree visually, it aids in understanding how prompts affect generation, which is a key area in prompt engineering. The paper focuses on improving human understanding of the model decision-making process, which is crucial for effective prompt engineering. Although it does not directly address 'hard prefix prompts,' the broader topic of prompt refinement and model output guidance is closely related to prompt engineering. The rating is not a full 10 because it is not specific to 'hard prefix prompts,' but it is still highly relevant to the general area of study."
-training language models to follow instructions with human feedback,gpt-4-1106-preview,7,"The abstract describes a study where language models are fine-tuned with human feedback to improve their alignment with user intent, which is a form of prompt engineering. The process of creating 'InstructGPT' involves using prompts and enhancing the model's response to them; thus, it's relevant to the study of how prompts can be engineered to elicit better responses from language models. However, the study focuses more broadly on model alignment rather than specifically on 'hard prefix prompts', which might be a more technical aspect of prompt engineering. Therefore, it does not entirely focus on hard prefix prompts but is still significantly related to the general field of prompt engineering."
-"translating radiology reports into plain language using chatgpt and gpt-4 with prompt learning: results, limitations, and potential",gpt-4-1106-preview,7,"The relevance to prompt engineering is significant, given that the title suggests the study involves using GPT models to translate radiology reports and this would likely involve devising specific prompts to generate plain language explanations. This indicates the research is about the application of prompt engineering to improve language model outputs in a clinical education context. However, the absence of detailed information in the abstract limits the ability to fully assess the degree to which prompt engineering is the focus of the study, so the rating is not a full 10." -a systematic survey of prompt engineering on vision-language foundation models,gpt-4-1106-preview,9,"The abstract provided is highly relevant to prompt engineering, as it specifically addresses the application of prompt engineering techniques to vision-language foundation models. These are a subset of tasks within the broader field of prompt engineering. The abstract indicates a systematic review of how prompts are used in this context, discusses different types of models and how they are prompted, and outlines research directions in prompt engineering. While it does not exclusively focus on 'hard prefix prompts', which would be the only aspect potentially limiting a perfect score, the content is indeed directly related to studies on prompt engineering, hence the high relevance rating." -pouf: prompt-oriented unsupervised fine-tuning for large pre-trained models,gpt-4-1106-preview,8,"The abstract describes a study focused on prompt-oriented unsupervised fine-tuning for pre-trained models, which is highly relevant to the field of prompt engineering. Although it does not specifically mention 'hard prefix prompts,' the concept of aligning discrete distributions from prompts and target data, as well as the application to various tasks, indicates a strong connection to the techniques and objectives in prompt engineering. The fact that it involves unsupervised learning approaches to enhance the performance of the models on unlabeled data by using prompts makes it valuable to the prompt engineering study despite it not being a systematic review or explicitly focused on 'hard prefix prompts'." -model ensemble instead of prompt fusion: a sample-specific knowledge transfer method for few-shot prompt tuning,gpt-4-1106-preview,9,"The abstract discusses the approach of improving few-shot performance of prompt tuning through knowledge transfer and model ensembles, directly targeting the optimization of prompt engineering. Although it does not specifically mention 'hard prefix prompts', it is highly relevant to the broader area of prompt engineering which involves techniques to better adapt large language models to specific tasks with minimal examples. The proposed SESoM focuses on sample-specific adaptation that is a key aspect of prompt engineering, thus justifying the high relevance rating." -attentional mixtures of soft prompt tuning for parameter-efficient multi-task knowledge sharing,gpt-4-1106-preview,9,"The abstract describes a study on a new method for parameter-efficient language model tuning called ATTEMPT, which utilizes a novel approach of soft prompt tuning for multi-task knowledge sharing. This is highly relevant to prompt engineering as it directly involves the development and optimization of prompts that influence the behavior of language models. 
The introduction of a light-weight sub-network for computing instance-wise attention for prompt interpolation is a significant contribution to the field. The fact that this approach contributes to multi-task learning, parameter efficiency, and interpretability in prompt tuning makes it extremely pertinent. The reason why the rating is not a perfect 10 is that the abstract does not mention 'hard prefix prompts' specifically, which was the exact interest stated in the initial 'prompt engineering study' query." -prompting large pre-trained vision-language models for compositional concept learning,gpt-4-1106-preview,8,"The abstract describes research on the use of prompt-based learning within vision-language models, focusing on compositional learning. While the study emphasizes the use of 'soft-prompting' as opposed to 'hard-prompting', it still falls under the broader category of prompt engineering. The work is highly relevant to the field as it explores how prompts can be engineered to enhance the performance of machine learning models, which is a core part of prompt engineering studies. The rating is not a perfect 10 because the study does not exclusively deal with 'hard prefix prompts' as specified in the initial request but instead focuses on an alternative method within the same field." -quantifying memorization across neural language models,gpt-4-1106-preview,4,"While the abstract discusses the issue of memorization in language models, which is indirectly related to how models respond to prompts, it does not directly address prompt engineering, particularly the study of 'hard prefix prompts.' The information provided is relevant to the construction and reliability of prompts in the context of avoiding the elicitation of memorized data, but it does not specifically focus on engineering prompts for systematic review that the original query suggests. Therefore, the relevance is moderate but not directly applicable to the study of hard prefix prompts in prompt engineering." -proqa: structural prompt-based pre-training for unified question answering,gpt-4-1106-preview,9,"The abstract of 'proqa: structural prompt-based pre-training for unified question answering' is highly relevant to the study of prompt engineering. It details the use of structural prompts as a method to train a QA system, thus highlighting an approach to prompt engineering. The paper not only presents a model that is pre-trained with structural prompt-formatted data but also emphasizes the model's performance on benchmarks and its abilities in various learning scenarios. Although it doesn't specifically mention 'hard prefix prompts', the focus on structural prompt-based pre-training indicates a strong connection to prompt engineering studies." -novelty controlled paraphrase generation with retrieval augmented conditional prompt tuning,gpt-4-1106-preview,8,"The abstract describes research related to adapting pre-trained language models using a method called Retrieval Augmented Prompt Tuning and a variation for controlling lexical novelty in paraphrases. Although the study does not directly address 'hard prefix prompts', it is closely related to prompt engineering because it involves the use of specialized prompt tokens and is model-agnostic, which contributes to prompt engineering literature. This relevance is bolstered by the fact that altering prompts to control generation outcomes is a key area within prompt engineering. 
The study's focus on parameter-efficiency and controlled generation is not the primary focus of hard prefix prompts, hence the rating is not a full 10 but is still relatively high due to the overlapping interests." -discup: discriminator cooperative unlikelihood prompt-tuning for controllable text generation,gpt-4-1106-preview,9,"The paper describes an advanced technique for prompt learning with Casual Language Models, focusing on attribute-controllable text generation, which is a core aspect of prompt engineering. The method of utilizing a discriminator to refine the generation process is directly relevant to the study of hard prefix prompts and their optimization in prompt engineering. The relevance is not a perfect 10 since the abstract does not specifically mention 'hard prefix prompts,' yet the overall topic is highly pertinent to the field." -deep continuous prompt for contrastive learning of sentence embeddings,gpt-4-1106-preview,8,"The title and abstract describe a study that is highly relevant to prompt engineering, particularly with regard to optimizing and innovating within the framework of contrastive learning and sentence embeddings. The proposed method involves 'prefix deep continuous prompts,' which aligns with prompt engineering, though it does not explicitly mention 'hard prefix prompts.' Nonetheless, the focus on efficiently prompting a language model without full fine-tuning is a significant contribution to the field of prompt engineering. The emphasis on performance improvement with minimal parameter tuning and the avoidance of handcrafted prompt search provides valuable insights for prompt engineering studies. Thus, the relevance is rated high, but not full, due to the lack of direct reference to 'hard prefix prompts.'" -improving the sample efficiency of prompt tuning with domain adaptation,gpt-4-1106-preview,9,"The given abstract describes research focused on improving the efficiency of prompt tuning for pretrained language models through domain adaptation methods. Although it does not directly mention the term 'hard prefix prompts', the study investigates 'soft prompts' and is highly relevant to the broader field of prompt engineering. It addresses a key challenge in the area, which is enhancing performance in data-scarce situations—a topic of interest for prompt engineering. The proposed OPTIMA method and its potential to improve the transferability and sample efficiency of prompt tuning are of significant value to prompt engineering studies. The rating is not a full 10 as the study might not be exclusively focused on hard prefix prompts, but it remains extremely relevant to the subject matter." -prompt-augmented linear probing: scaling beyond the limit of few-shot in-context learners,gpt-4-1106-preview,8,"The paper addresses an advanced technique in prompt engineering by combining linear probing with in-context learning, which directly pertains to how language models are prompted to enhance their understanding and usage of data. The concept of 'prompt-augmented linear probing' (PALP) is relevant to the field of prompt engineering as it seeks to improve the model's performance by carefully designing prompts that fit within the input constraints of language models and make the input more understandable for the model. This is central to the study of prompt engineering. However, it does not specifically address 'hard prefix prompts', though the technique may still be applicable to that subset of prompt engineering. 
The TLDR section does not provide information in this context, hence the rating is not a full 10." -reduce communication costs and preserve privacy: prompt tuning method in federated learning,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it discusses 'prompt tuning,' which is a method within the field of natural language processing that directly relates to how prompts are engineered and optimized. While the primary focus of the study appears to be on the application of prompt tuning in federated learning, which entails privacy-preserving and communication-efficient aspects, it still contributes to the broader understanding of prompt engineering by showcasing its efficiency and robustness in different data distribution scenarios. The presence of a 'backdoor threat' evaluation further adds to its relevance as it touches on the security aspect of prompt engineering." -doubly right object recognition: a why prompt for visual rationales,gpt-4-1106-preview,7,"The abstract discusses the development of a 'why prompt' for visual recognition models, which is relevant to the study of prompt engineering as it involves creating prompts that guide models to give not only correct classifications but also the underlying rationales. Although the study is focused more on visual rationales and the intersection of language models with visual models, it still pertains to the broader category of prompt engineering. However, it is not directly related to 'hard prefix prompts' specifically, as it doesn't mention them explicitly, leading to a slightly lower relevance rating." -xprompt: exploring the extreme of prompt tuning,gpt-4-1106-preview,9,"The paper directly relates to the domain of prompt engineering, as it explores prompt tuning techniques and their impact on performance with Pre-trained Language Models (PLMs). The research addresses a specific issue in prompt engineering—negative impact of trained prompt tokens—and introduces a novel solution (XPrompt) to mitigate this issue. Therefore, it is highly relevant to studies focused on refining the application of prompts in PLMs. The only reason it does not receive a full score of 10 is that the prompt does not specifically mention 'hard prefix prompts,' so it may slightly deviate from that narrow aspect of prompt engineering study if the method described does not strictly apply to hard prompts." -automatic prompt augmentation and selection with chain-of-thought from labeled data,gpt-4-1106-preview,9,"The content of the presented paper is highly relevant to prompt engineering study due to its focus on Chain-of-thought prompting (CoT), which is a technique used in prompt engineering. Automate-CoT, the proposed strategy in the paper, directly addresses the process of generating and selecting prompts in an automated fashion, which aligns with the core components of prompt engineering. This technique also impacts how language models can be efficiently used in various reasoning tasks, which are central to the application of prompt engineering. The reason the rating is not a perfect 10 is because the abstract does not specifically mention 'hard prefix prompts' that the user inquiry is about, instead it refers to CoT in a general sense." -multitask prompt tuning enables parameter-efficient transfer learning,gpt-4-1106-preview,8,"The provided abstract describes a method for prompt tuning in the context of adapting large language models to various tasks, which is highly relevant to the field of prompt engineering. 
Multitask prompt tuning (MPT) is a technique that is specifically designed to create versatile prompts that are applicable across multiple tasks, indicating a direct application to prompt engineering. The abstract focuses on the efficient use of prompts and parameter tuning, which are central themes in prompt engineering studies. However, the abstract does not directly mention 'hard prefix prompts' but rather it discusses soft prompts and their adaptation for multitask learning, so it may not be fully comprehensive in the context of a systematic review on hard prefix prompts. This is why the rating is not a full 10." -declaration-based prompt tuning for visual question answering,gpt-4-1106-preview,8,"The paper presents a method for fine-tuning visual-language models for VQA tasks (Declaration-based Prompt Tuning, DPT), which involves aligning downstream task objectives with pre-training objectives. While the paper focuses on an application within cross-modal tasks (visual question answering), the method of 'prompt tuning' is central to 'prompt engineering,' which involves designing inputs that efficiently guide models to perform specific tasks. Therefore, the concept of reformulating questions into declarative sentence form for prompt tuning is highly relevant to the study of prompt engineering, albeit in a more specialized context." -prompt generation networks for efficient adaptation of frozen vision transformers,gpt-4-1106-preview,9,"The abstract describes a new method in prompt engineering, the Prompt Generation Network (PGN), which is highly relevant to the study of how to efficiently adapt frozen vision transformers for various tasks without fine-tuning. The fact that PGN pertains to learning input-dependent prompts places it within the domain of prompt engineering. The reason it is not a full 10 is that it might not cover 'hard prefix prompts' specifically as the systematic review requires, but rather discusses a more generalized approach to prompt engineering." -spt: semi-parametric prompt tuning for multitask prompted learning,gpt-4-1106-preview,9,"The study titled 'spt: semi-parametric prompt tuning for multitask prompted learning' is highly relevant to prompt engineering since it directly deals with an innovative method for prompt tuning which is a central theme in prompt-based learning and modeling. The semi-parametric approach, utilizing a memory bank to retrieve memory prompts based on discrete prompts, is a novel contribution to the field of prompt engineering, and the extensive experiments conducted across various tasks and domains underscore its potential impact on the efficiency and generalization of large language models. The reason why the rating is not a full 10 is that the prompt engineering relevance is specific to semi-parametric methods, and it does not address the entire spectrum of prompt engineering techniques, such as hard prefix prompts." -cup: curriculum learning based prompt tuning for implicit event argument extraction,gpt-4-1106-preview,7,"The abstract describes a method for enhancing a machine learning model's ability to perform implicit event argument extraction—'Curriculum learning based Prompt tuning (CUP).' This approach is relevant to prompt engineering because it involves adapting prompt templates over different stages of learning to better utilize pre-trained language models. 
Although the paper does not exclusively focus on 'hard prefix prompts,' which the prompt engineering study may specifically be interested in, it talks about prompt-based models and their tuning, which is closely related to the domain of prompt engineering. Therefore, the relevance to prompt engineering is significant, although not perfectly aligned with the prompt engineering area targeting hard prefixes." -zero-label prompt selection,gpt-4-1106-preview,9,"The abstract describes a method named Zero-Label Prompt Selection (ZPS) that evidently pertains to the field of prompt engineering as it directly involves the selection and use of prompts for natural language models without the need for labeled data. Despite not explicitly mentioning 'hard prefix prompts', it addresses a critical component of prompt engineering, which is prompt performance in zero or few-shot settings. The relevance to prompt engineering is high because it contributes to the understanding of how to effectively utilize prompts to improve model performance under constrained conditions." -clip-tuning: towards derivative-free prompt learning with a mixture of rewards,gpt-4-1106-preview,8,"The paper describes an innovative approach to prompt learning that is highly relevant to the field of prompt engineering. Derivative-free prompt learning is a part of prompt engineering, and the technique of using 'thinned' networks to create a mixture of rewards is a novel contribution to optimizing prompts. While the paper focuses specifically on Clip-Tuning and derivative-free methods as opposed to a broader systematic review of hard prefix prompts, it still provides valuable insights and advancements in the area of prompt engineering. Therefore, the rating is high for relevance but not the maximum score since it doesn't cover the entire scope of 'hard prefix prompts'." -denseclip: language-guided dense prediction with context-aware prompting,gpt-4-1106-preview,4,"While the study described in the abstract does involve a form of 'prompting' by using contextual language information to guide a model, this is applied in the scope of visual representation learning and not in the explicit context of 'hard prefix prompts' for text-based language models, which is often what is referred to in prompt engineering studies. Therefore, its relevance to prompt engineering study is tangential rather than directly applicable." -uom&mmu at tsar-2022 shared task: prompt learning for lexical simplification,gpt-4-1106-preview,8,"The paper describes an approach for using prompts in a language model to achieve lexical simplification. It directly relates to prompt engineering since it involves fine-tuning language models with a specifically designed prompt template. The method described is an example of how prompt engineering can be used to improve the performance of language tasks in different settings (zero-shot, fine-tuned, and multilingual). This is closely aligned with the study of prompt engineering, although it is focused on one particular application (lexical simplification) rather than hard prefix prompts in a broader sense." -bidirectional language models are also few-shot learners,gpt-4-1106-preview,8,"The abstract discusses the concept of prompt-based learning in the realm of bidirectional language models, which is a central component of prompt engineering. 
It presents a novel technique (SAP) for prompting bidirectional models, which is highly relevant to the study of how to effectively design and use prompts to elicit desired responses from such models. While it doesn't directly address 'hard prefix prompts,' the subject of designing prompts and demonstrating their utility across different models (bidirectional and unidirectional) is pertinent to the broader field of prompt engineering studies. The work's implications for the adaptability and performance of language models when prompted make it significantly relevant, though not perfectly aligned since the prompt primarily focuses on 'hard prefix prompts.'" -speechprompt v2: prompt tuning for speech classification tasks,gpt-4-1106-preview,6,"The paper is relevant to prompt engineering as it discusses prompt tuning, a technique integral to prompt engineering that involves fine-tuning a language model (LM) using prompts to better perform specific tasks. Although the main focus is on speech classification tasks and not solely on hard prefix prompts, it still offers insights into the larger field of prompt engineering, particularly how prompts are used to improve performance and efficiency for various tasks in speech processing. The paper does not directly address a 'comprehensive systematic review on hard prefix prompts,' but the technology it explores falls within the broader scope of prompt engineering studies." -language models in the loop: incorporating prompting into weak supervision,gpt-4-1106-preview,9,"The document describes a methodology deeply tied to the application of prompt engineering, where large language models are prompted with multiple queries to generate labeled data for a classifier in a weak supervision context. This is highly relevant to prompt engineering studies as it directly involves developing and refining methods for eliciting structured responses from language models through prompts. The only reason why the rating is not a perfect 10 is the study's specific focus on weak supervision, which might not cover all aspects of prompt engineering, such as constructing prompts for different kinds of language tasks beyond weak supervision." -prompting as probing: using language models for knowledge base construction,gpt-4-1106-preview,8,"The study described in the abstract details the use of various prompting techniques with GPT-3 to perform Knowledge Base Construction, an advanced application of prompt engineering. The multi-step approach to optimizing prompts, including manual prompt curation and the use of true/false questions, directly relates to the field of prompt engineering. Although it does not specifically mention 'hard prefix prompts,' the overarching use of prompts to elicit specific information from a language model is highly relevant. Therefore, the paper is quite pertinent to the study of prompt engineering, but since 'hard prefix prompts' are not exclusively the focus, the rating is not a perfect 10." -what does clip know about a red circle? visual prompt engineering for vlms,gpt-4-1106-preview,9,"The abstract describes a study on prompt engineering within the domain of Vision-Language Models, such as CLIP, specifically focusing on the use of visual cues (a red circle) to direct the model's attention.
Although the study is about visual prompt engineering rather than traditional text-based prompts ('hard prefix prompts'), it is still highly relevant to the broader field of prompt engineering as it explores how different types of prompts can influence model behavior and performance on various tasks. The rating is not a perfect 10 because it does not directly address 'hard prefix prompts' in text but instead a visual method, which may not be precisely what is meant by 'prompt engineering' in the original query context." -prompting visual-language models for efficient video understanding,gpt-4-1106-preview,6,"The study pertains to efficient adaptation mechanisms for pre-trained visual-language models specifically for video understanding tasks. It suggests a methodology for fine-tuning the models, possibly including the use of prompts to align pre-training objectives with video-related tasks. While it doesn't directly address 'hard prefix prompts', the adaptation of pre-trained models using prompts is a related area of research. The relevance is therefore moderate, as the study could potentially inform prompt engineering practices in multi-modal contexts, even though it does not focus on a systematic review of hard prefix prompts." -healthprompt: a zero-shot learning paradigm for clinical natural language processing,gpt-4-1106-preview,8,"The abstract outlines a research study that is highly relevant to prompt engineering study. It describes the development of a new prompt-based learning framework specifically for clinical NLP tasks, which is an example of applying prompt engineering to a specialized domain (healthcare). The fact that this framework operates in a zero-shot learning context enhances its relevance, as it illustrates the potential of prompt engineering in scenarios where annotated datasets are scarce or non-existent. However, while the study does focus on prompt-based learning, which is a subset of prompt engineering, it does not explicitly mention 'hard prefix prompts' as the prompt type being investigated. Consequently, the rating is not a full 10, as it might not cover the comprehensive systematic review aspect explicitly focused on hard prefixes." -an automatically discovered chain-of-thought prompt generalizes to novel models and datasets,gpt-4-1106-preview,9,"The abstract discusses a study focused on the effectiveness of chain-of-thought (CoT) reasoning prompts across different language models and datasets, which is highly relevant to prompt engineering. The exploration of how previously devised prompts can be applied and generalized to new model generations provides valuable insights for prompt engineering research. The study investigates the impact of prompts on the performance of language models, which is central to the field of prompt engineering. However, the abstract doesn't specifically mention 'hard prefix prompts,' which might slightly reduce the relevance considering the precise topic in the initial prompt." -pbnr: prompt-based news recommender system,gpt-4-1106-preview,8,"The paper describes the 'prompt-based news recommendation' (PBNR) system which closely relates to prompt engineering as it involves designing personalized prompts to interact with a pre-trained language model (T5) for the specific task of news recommendation. This system is an example of applying prompt engineering to adapt language models for a specific application. 
However, the relevance is not a full 10 because the paper seems more focused on the application of prompt engineering in the context of news recommendation, rather than on the study of hard prefix prompts or the systematic review of the methodology itself." -promptcap: prompt-guided image captioning for vqa with gpt-3,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering as it introduces 'PromptCap', a model that utilizes natural-language prompts to guide the image captioning process, which in turn enhances the performance of visual question answering (VQA) with a language model like GPT-3. The method directly involves engineering prompts to control the content of image captions, ensuring they contain the necessary details for LMs to answer questions. This is a specific application of prompt engineering in the context of integrating textual prompts with image understanding for improved knowledge-based task performance. The paper's focus on synthesizing prompts for effective LM use aligns closely with the study of prompt engineering." -visual clues: bridging vision and language foundations for image paragraph captioning,gpt-4-1106-preview,7,"The study relates to prompt engineering in that it discusses the creation of structured textual prompts, termed 'visual clues,' from an image using a vision model, and then using these prompts to generate image captions with a language model. Although the research does not focus on 'hard prefix prompts' per se, it is relevant to the broader field of prompt engineering, considering it involves the construction and utilization of prompts to facilitate communication between vision and language models. Therefore, it offers insights into one aspect of the prompt engineering area - namely, how to effectively generate prompts for a specific cross-modal task." -response generation with context-aware prompt learning,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering as it focuses on a novel approach that treats dialogue generation as a prompt-learning task. The methodology of learning continuous prompt embeddings customized for dialogue contexts aligns closely with prompt engineering, as it involves designing prompts that can effectively interact with pre-trained language models to produce desired responses. Despite the paper not explicitly mentioning the term 'hard prefix prompts', it is implicit in the context of prompt embeddings. The reduction of two points reflects that it doesn't directly address the systematic review aspect of hard prefix prompts but is still very much within the realm of prompt engineering for dialogue systems." -few-shot self-rationalization with natural language prompts,gpt-4-1106-preview,8,"The presented study explores natural language prompts extensively in the context of self-rationalization models, which is a form of prompt engineering where the model is prompted to not only provide a decision but also to generate explanations for its decisions. Even though the study does not exclusively focus on 'hard prefix prompts', it is relevant to the broader topic of engineering prompts in such a way that enables models to perform complex tasks with minimal training data. The focus on few-shot learning and the use of prompts to improve plausibility ratings also contribute to the field of prompt engineering. However, the rating is not a full 10 as the specific term 'hard prefix prompts' is not directly addressed."
-controllable generation from pre-trained language models via inverse prompting,gpt-4-1106-preview,9,"The abstract presents a direct application of prompt engineering by proposing a novel technique called inverse prompting to improve controllability in text generation from pre-trained language models. The concept of predicting the prompt from generated text during beam search for better alignment between the two is a clear attempt at enhancing the prompt engineering field. The study seems highly relevant to prompt engineering, especially in creating more efficient and controlled generation of texts. The rating is not a full 10 simply because the abstract does not mention 'hard prefix prompts' specifically, which was outlined in the original inquiry regarding a 'systematic review on hard prefix prompts'. However, inverse prompting is still clearly within the domain of prompt engineering." -progressive prompts: continual learning for language models,gpt-4-1106-preview,9,"The provided abstract directly addresses the development of a new method within the field of prompt engineering referred to as 'Progressive Prompts.' This approach is relevant because it is a specific technique aimed at improving the capabilities of language models by facilitating continual learning. Since prompt engineering involves the design and utilization of prompts to effectively interact with language models, a study on Progressive Prompts is highly pertinent to the field. The relevance is not rated as a full 10 only because the prompt specifically asks about 'hard prefix prompts,' while this method pertains to soft prompts learned for each task, and it's not clear whether hard prompts are considered or compared in the approach." -boosting natural language generation from instructions with meta-learning,gpt-4-1106-preview,7,"The abstract describes a study focused on improving natural language generation using meta-learning strategies, specifically in a multi-task instructional learning setting. While the study does not directly address 'hard prefix prompts,' it does explore how language models can better extract and utilize information from instructions, which is a critical aspect of prompt engineering. Enhancing the generalization of language models to perform unseen tasks based on instructions is relevant to prompt engineering as it addresses the challenge of designing prompts that can guide models to perform specific NLP tasks effectively. The application of meta-learning to MTIL is an innovative approach within the broader field of prompt engineering, thus earning a relevance rating of 7 out of 10." -strategic reasoning with language models,gpt-4-1106-preview,9,"The abstract highlights the use of 'systematically generated prompts' in conjunction with large language models to facilitate strategic reasoning, which is highly relevant to prompt engineering. The study's exploration of how prompts can guide AI to generalize to new tasks with little or no additional training intersects with the core concepts of creating effective prompts that drive AI performance. The slight deduction from a perfect score is due to the specific context of strategic games, which may not cover all aspects of prompt engineering, but the principles discussed are broadly applicable." -respectful or toxic? 
using zero-shot learning with language models to detect hate speech,gpt-4-1106-preview,8,"The paper focuses on prompt-based methods for hate speech detection, which falls under the broader category of prompt engineering within the field of natural language processing. Prompting is a core technique used in this study and is relevant to the understanding and development of effective prompt strategies in the context of language model applications. Although the paper's primary concern isn't 'hard prefix prompts' specifically, it still contributes to the knowledge base regarding how prompts can be engineered to enhance zero-shot learning capabilities in AI models, which is pertinent to the study of prompt engineering." -meta-tuning language models to answer prompts better,gpt-4-1106-preview,9,"The abstract discusses a method called 'meta-tuning' for improving the ability of large pretrained language models to answer prompts, which is directly related to prompt engineering. The relevance is high because the study aims to specialize and generalize language models to better understand and respond to prompts, which is a core aspect of prompt engineering. The only reason it doesn't score a perfect 10 is that the abstract doesn't directly address 'hard prefix prompts', but the concept can likely be applied to various types of prompts including hard prefixes." -"a sign language recognition system with pepper, lightweight-transformer, and llm",gpt-4-1106-preview,7,"The abstract indicates that prompt engineering was used as part of the process to enable the Pepper Robot to generate natural Co-Speech Gesture responses. While the focus of the study is on sign language recognition and robot interaction, the mention of tailoring interactions through prompt engineering shows relevance to the prompt engineering field. However, the study does not appear to be a comprehensive systematic review on hard prefix prompts specifically but instead applies prompt engineering within the scope of robot interaction and sign language processing. Therefore, the rating is a 7 out of 10, acknowledging the connection without it being the central theme of the research." -reducing sentiment bias in language models via counterfactual evaluation,gpt-4-1106-preview,4,"The study deals with reducing sentiment bias in language models by using a form of counterfactual evaluation, which is related to how prompts might internalize biases present in training data. While it touches on the area of prompt engineering by considering how the conditioning context affects model output, its primary focus is on bias quantification and reduction rather than on the systematic review of 'hard prefix prompts' or the structure and impact of prompt design itself." -question decomposition improves the faithfulness of model-generated reasoning,gpt-4-1106-preview,7,"The study discusses a method of improving the quality and faithfulness of responses from large language models by decomposing questions into subquestions, which is related to prompt engineering. The utilization of specific prompting strategies to elicit more explainable and verifiable outputs from the models is a part of prompt engineering. Although the focus is more on question decomposition and the faithfulness of the reasoning process rather than on 'hard prefix prompts' specifically, the principles and findings can still have implications for prompt engineering practices in general, hence the relatively high relevance score."
-improving gender fairness of pre-trained language models without catastrophic forgetting,gpt-4-1106-preview,8,"The study described in the abstract is highly relevant to prompt engineering because it develops a method called GEEP (GEnder Equality Prompt) to improve the performance of pre-trained language models. GEEP specifically involves learning gender-related prompts, which makes it a direct application of prompt engineering in addressing the issue of gender bias in AI models. Although the study is not a comprehensive systematic review on hard prefix prompts and is more focused on gender fairness, its use of learned prompts as a core mechanism makes this study quite relevant to the broader field of prompt engineering." -few-shot instruction prompts for pretrained language models to detect social biases,gpt-4-1106-preview,8,"The study involves the construction of few-shot instruction-based prompts for pretrained language models, which is highly relevant to the field of prompt engineering. It examines how effectively these prompts can guide language models in detecting social biases in text, which is a specific application of prompt engineering. Although it does not directly mention 'hard prefix prompts,' the methodology of using instructional prompts to achieve a task with a language model fits under the broader umbrella of prompt engineering. The relevance is rated an 8 instead of a 10 because the focus is more on detecting social biases than on the systematic review of prompting techniques themselves." -simultaneous translation and paraphrase for language education,gpt-4-1106-preview,4,"The study presents work on the generation of translations and paraphrases, which touches upon prompt engineering indirectly through the creation of diverse language sets for training models. However, the primary focus appears to be on translation and paraphrasing rather than prompt engineering itself. It can be relevant to prompt engineering in the context of designing effective prompts for language translation tasks but does not directly address the systematic review of 'hard prefix prompts' for prompt engineering studies." -evaluating the instruction-following robustness of large language models to prompt injection,gpt-4-1106-preview,9,"The study directly examines the interaction between large language models and prompts, specifically investigating the challenge of adversarial instruction injection. This is highly relevant to prompt engineering as it deals with understanding and improving the robustness of LLMs in discerning and responding to prompts. The focus on how models discern and follow instructions is a critical aspect of prompt engineering, especially when considering the creation of prompts that intend to guide the model towards producing specific outcomes or behaviors without succumbing to manipulation." -(ab)using images and sounds for indirect instruction injection in multi-modal llms,gpt-4-1106-preview,8,"The provided title and abstract are relevant to prompt engineering study as they discuss a method of manipulating the output of multi-modal LLMs (Large Language Models) through indirect prompt and instruction injection via images and sounds, which can be considered a form of prompt engineering. Although the focus is on adversarial perturbations and security, understanding this process is crucial for developing effective prompts, especially in the context of preventing misuse.
It highlights the importance of prompt design in multi-modal systems and contributes to the broader field of prompt engineering by exploring potential vulnerabilities and manipulative techniques." -chat-rec: towards interactive and explainable llms-augmented recommender system,gpt-4-1106-preview,7,"The relevance of the provided study to prompt engineering is moderately high, with a rating of 7 out of 10. The study focuses on a method for augmenting recommender systems with large language models by converting user data into prompts, which falls within the scope of prompt engineering. Prompt design plays a crucial role in enabling the Chat-Rec system to function by guiding the language model to generate relevant and personalized recommendations. While the study does not specifically target 'hard prefix prompts,' it does explore a practical application of prompts within an interactive system and contributes to the body of knowledge on how to effectively leverage LLMs through prompt engineering. However, if the focus were specifically on a 'systematic review on hard prefix prompts,' the rating might be lower as this study presents an application rather than a review on hard prefix prompts." -dialogue for prompting: a policy-gradient-based discrete prompt optimization for few-shot learning,gpt-4-1106-preview,9,"The study described focuses on prompt-based optimization for few-shot learning in the context of pre-trained language models, which is directly relevant to prompt engineering. The novel Dialogue-comprised Policy-gradient-based Discrete Prompt Optimization (DP2O) method aims to improve the efficiency, quality, and applicability of prompt-based methods in NLP tasks. The use of a reinforcement learning framework to optimize discrete prompts signifies a technical advancement in the field. The only reason it doesn't score a perfect 10 is that it doesn't address 'hard prefix prompts' specifically but discusses discrete prompt optimization in a broader sense." -promptagent: strategic planning with language models enables expert-level prompt optimization,gpt-4-1106-preview,9,"The article is highly relevant to prompt engineering as it discusses 'PromptAgent', an optimization method aimed at automating the generation of expert-level prompts, which is directly aligned with prompt engineering studies. It addresses the strategic planning problem within prompt optimization and demonstrates the system's effectiveness across various domains and tasks. The only reason it does not receive a 10 is that the specific focus on 'hard prefix prompts' is not explicitly stated, but the scope still remains within the general field of prompt engineering." -emotion-conditioned text generation through automatic prompt optimization,gpt-4-1106-preview,9,"The title and abstract discuss an automatic prompt optimization approach specifically for emotion-conditioned text generation, which is clearly within the domain of prompt engineering. The study focuses on refining prompts to improve the performance of instruction-fine-tuned models, which is at the core of prompt engineering studies. The relevance is not rated a perfect 10 as the study is narrowly focused on emotion-conditioned text generation and not prompt engineering in general. Overall, however, the relevance to prompt engineering is very high." 
-multiprompter: cooperative prompt optimization with multi-agent reinforcement learning,gpt-4-1106-preview,9,"The paper presents a new framework, MultiPrompter, that directly addresses the issue of prompt optimization, which is a core aspect of prompt engineering. It introduces a novel concept of using multi-agent reinforcement learning for cooperative prompt optimization. Such a technique is highly relevant for studies in prompt engineering, as it could lead to improvements in the generation of interpretable prompts and better interaction with foundation models. Although the paper is applied to the text-to-image task, the concepts and methodologies presented could be generalizable and thus highly relevant to the broader field of prompt engineering." -robust prompt optimization for large language models against distribution shifts,gpt-4-1106-preview,9,"The presented paper directly addresses a key issue in prompt engineering, namely the optimization of prompts for large language models, especially in the context of distribution shifts, which is a crucial aspect in the robustness of language models. Although the abstract does not specify the use of 'hard prefix prompts,' the focus on prompt optimization and generalization across different distributions indicates a close relevance to the broader field of prompt engineering. The proposed Generalized Prompt Optimization framework, which utilizes unlabeled data in optimization, is highly pertinent to advancing the study and application of prompt engineering." -query-dependent prompt evaluation and optimization with offline inverse rl,gpt-4-1106-preview,8,"The abstract indicates a study focused on enhancing arithmetic reasoning of LLMs (Large Language Models) specifically through prompt optimization, which is directly related to prompt engineering. The introduction of Prompt-OIRL as a method to evaluate query-prompt pairs and recommend optimal prompts without requiring live interaction with LLMs is notable for prompt engineering efficiency and effectiveness. It suggests a more nuanced approach to evaluating and optimizing prompts based on query dependency, which is an important aspect of prompt engineering. However, the study is not centered on 'hard prefix prompts' specifically but rather on a broader prompt optimization problem, which includes but is not limited to hard prefix prompts. Therefore, the rating is not a perfect 10." -visual-language prompt tuning with knowledge-guided context optimization,gpt-4-1106-preview,8,"The presented abstract directly addresses an aspect of prompt engineering, focusing on improving the generalization ability of learnable prompts in the context of a visual-language model. The introduction of Knowledge-guided Context Optimization (KgCoOp) pertains to the optimization of prompts, which is a fundamental component of prompt engineering. The relevance rating is not a full 10 because the study specifically targets visual-language models and may not cover other prompt engineering contexts, such as text-based models or hard prefix prompts more broadly." -copner: contrastive learning with prompt guiding for few-shot named entity recognition,gpt-4-1106-preview,9,"The study introduces the use of class-specific prompts for few-shot NER, employing these prompts as supervision signals and metric referents, which is highly relevant to prompt engineering. The methodology specifically addresses the optimization of token representations and inferencing strategies, which are central concerns in prompt engineering. 
The relevance score is not a full 10 because the study focuses on one specific application (NER) and it is not a systematic review on hard prefix prompts in general." -generative visual prompt: unifying distributional control of pre-trained generative models,gpt-4-1106-preview,6,"The study presents a framework called Generative Visual Prompt (PromptGen) to exercise distributional control over pre-trained generative models. While it does not directly relate to 'hard prefix prompts' that are typically associated with language models and their prompting techniques, the concept of manipulating the output of generative models using prompts (here in the form of external model knowledge) is related to the broader topic of prompt engineering. The focus on controlling generative models aligns with the idea of influencing model behavior through prompts, hence the relevance to the field of prompt engineering. However, it lacks a direct connection to 'hard prefix prompts' in a systematic review context and instead deals with a different application of prompting in the visual domain. Thus, the relevance is moderate, and the rating is given a 6." -cpl: counterfactual prompt learning for vision and language models,gpt-4-1106-preview,8,"The paper discusses 'Counterfactual Prompt Learning (CPL)' for vision and language models, which is directly related to prompt tuning, a subset of prompt engineering. It introduces an innovative approach to optimize prompt learning and aims to improve generalization of learned representations for few-shot learning tasks. Although it does not specifically mention 'hard prefix prompts', it still contributes to the broader field of prompt engineering by advancing techniques for efficient and non-spurious prompt learning. This is highly relevant for the study of prompt engineering as it explores new methods and their impact on model performance. Therefore, the rating is high but not maximum, as the exact focus on 'hard prefix prompts' is not clear from the abstract." -prompt engineering for zero-shot and few-shot defect detection and classification using a visual-language pretrained model,gpt-4-1106-preview,9,"The abstract indicates that the study focuses on the optimization of prompts, which is intrinsic to prompt engineering. It investigates how different types of prompts affect the performance of a VLP model, particularly for the task of defect detection and classification. The findings on domain-specific definitions, sentence structure, and modality of information are directly relevant to understanding how prompts can be engineered for better performance in zero-shot and few-shot learning tasks, which is a key component of prompt engineering. The only reason the rating is not a full 10 is that it doesn't discuss 'hard prefix prompts' specifically but prompt optimization in a broader sense within the context of VLP models." -understanding and mitigating overfitting in prompt tuning for vision-language models,gpt-4-1106-preview,8,"The abstract discusses the mitigation of overfitting in prompt tuning for vision-language models, which is highly relevant to prompt engineering studies. The focus on understanding and addressing overfitting issues during prompt tuning is pertinent as prompt engineering encompasses the design, optimization, and evaluation of prompts used to guide machine learning models.
The abstract presents a direct application and improvement in the field of prompt engineering by proposing a new method (Subspace Prompt Tuning) to enhance the training process of models, making the study very relevant. However, it does not explicitly cover 'hard prefix prompts' which is specifically mentioned in the query, thus the rating is slightly reduced." -bbtv2: pure black-box optimization can be comparable to gradient descent for few-shot learning,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering as it presents an advanced technique (BBTv2) for optimizing the prompts used in language models, seeking to improve performance in few-shot learning tasks without relying on gradient-based methods. This research is directly related to how prompts can influence model performance and efficiency, which is a core aspect of prompt engineering. Although it does not specifically address 'hard prefix prompts' as mentioned in the initial study prompt, it deals with the continuous prompt tokens and optimizing them, which falls under the broader umbrella of prompt engineering. Therefore, the rating is not a full 10 but remains high due to the close relevance." -connecting large language models with evolutionary algorithms yields powerful prompt optimizers,gpt-4-1106-preview,9,"The paper directly relates to prompt engineering by introducing a framework (EvoPrompt) for optimizing prompts using evolutionary algorithms, which is a novel approach within the field of prompt engineering study. The use of both large language models and evolutionary algorithms specifically to improve the efficiency and effectiveness of prompt generation is extremely relevant to those researching how to develop better prompts for LLMs. The only reason it does not receive a full 10 is that, without access to the full text, it's not clear how much the paper focuses on 'hard prefix prompts' specifically, if at all, since it doesn't mention this specific term in the provided abstract or TLDR content." -pfedprompt: learning personalized prompt for vision-language models in federated learning,gpt-4-1106-preview,8,"The abstract describes a study on a method called pFedPrompt that focuses on personalizing prompts for pre-trained vision-language models in a federated learning context. It directly engages in prompt engineering by refining how the prompts adapt to user characteristics, attempting to improve performance and relevance of the model outputs. While it doesn’t address 'hard prefix prompts' directly, the study is highly relevant to prompt engineering as it talks about optimizing prompts, which is a core area of interest in prompt engineering studies. The methodological focus on personalization in a federated learning framework is an innovative contribution to the field." -iterative prompt learning for unsupervised backlit image enhancement,gpt-4-1106-preview,8,"The abstract describes a study that focuses on the development of an unsupervised image enhancement method by using prompt learning within the CLIP framework. Although the primary application is not textual prompt engineering but rather improving visual quality in backlit images, the concept of iterative prompt learning is highly relevant to prompt engineering. The lifecycle of prompts, their optimization, and their iterative improvement are at the core of prompt engineering studies. 
This work can contribute to the understanding of prompt-based models and how they can be fine-tuned for specific tasks, which is valuable knowledge for the field of prompt engineering. Hence, the relevance rating is 8, acknowledging the connection to prompts and learning frameworks but also recognizing that the study doesn't focus on textual prompts or their direct use in text-based models." -meta learning for domain agnostic soft prompt,gpt-4-1106-preview,8,"The abstract discusses a new approach to prompt-based learning, which is highly relevant to the field of prompt engineering as it focuses on optimizing soft prompts for domain-agnostic applications. The method aims to improve the generalizability of prompts which is a critical aspect in the study of prompt engineering. The relevance is not a full 10 because it specifically addresses soft prompts and unsupervised domain adaptation rather than hard prefixes or a comprehensive review of prompt engineering techniques." -temporally-extended prompts optimization for sam in interactive medical image segmentation,gpt-4-1106-preview,7,"The study described in the abstract is somewhat relevant to prompt engineering as it involves optimizing the interaction between human experts and a machine learning model through the form of prompts (e.g., points, bounding boxes). However, the primary focus seems to be on the application of this technique to the medical image segmentation field rather than the theory or methodology of prompt engineering itself. The relevance is thus rated a 7, recognizing the contribution to the prompt engineering field in the specific context of medical image segmentation but also noting that it does not address broader prompt engineering topics." -styleclip: text-driven manipulation of stylegan imagery,gpt-4-1106-preview,7,"The relevance to prompt engineering is substantial, as the study addresses a text-based interface which involves users providing text prompts that manipulate images generated by StyleGAN. This process inherently relies on prompt engineering to achieve meaningful image manipulations, effectively turning textual descriptions into stylistic changes in images. The use of CLIP models to understand and execute these prompt-induced manipulations highlights an important application of prompt engineering in the field of AI and image processing. However, the primary focus of the study is on the interface and leveraging CLIP for image manipulation rather than the detailed study of the prompt engineering itself, which slightly reduces the rating." -null-text inversion for editing real images using guided diffusion models,gpt-4-1106-preview,7,"The paper presents an inversion technique and a method for text-based image editing using diffusion models, which involves prompt engineering concepts such as working with textual embeddings and guiding diffusion models using text. While the focus is on image editing rather than constructing or evaluating hard prefix prompts explicitly, the techniques developed could be relevant to prompt engineering by enabling more sophisticated control and manipulation of generated content based on text prompts. However, the study does not directly address hard prefix prompts in systematic review, thus the relevance is significant but not complete." 
-clip-mesh: generating textured meshes from text using pretrained image-text models,gpt-4-1106-preview,8,"The given abstract presents a technique that utilizes a pre-trained CLIP model for the zero-shot generation of textured 3D models from text prompts, which aligns well with the field of 'prompt engineering' as it demonstrates a practical application of generating content from textual descriptions. The relevance is marked as an 8 because while it heavily leverages the engineering of prompts to create 3D models, the focus is on the product of the prompt (a 3D model) rather than on the study of prompt engineering itself. It does not address the systematic review aspect of hard prefix prompts, but it is related to the domain of how text prompts can guide AI to produce desired outputs." -dream3d: zero-shot text-to-3d synthesis using 3d shape prior and text-to-image diffusion models,gpt-4-1106-preview,4,"The paper discusses the use of text prompts for zero-shot text-to-3D synthesis, which involves aspects of prompt engineering as it requires the optimization of text prompts to generate 3D shapes. However, the core focus is on the synthesis of 3D structures from text descriptions rather than on the study of hard prefix prompts specifically. The relevance to prompt engineering is tangential and mainly related to optimizing text prompts within a specific context of 3D content generation." -what changes can large-scale language models bring? intensive study on hyperclova: billions-scale korean generative pretrained transformers,gpt-4-1106-preview,8,"The abstract indicates extensive exploration of prompt-based learning within the context of a non-English large-scale language model, HyperCLOVA, and discusses the integration of prompt optimization into the prompt engineering pipeline. This is highly relevant to prompt engineering, but not specifically centered on 'hard prefix prompts'. However, it does address prompt engineering more broadly and introduces an interactive prompt engineering interface, suggesting considerable coverage of the topic. Some points were deducted as the abstract does not focus precisely on 'hard prefix prompts', but instead on a wider range of prompt engineering aspects." -directed diffusion: direct control of object placement through attention guidance,gpt-4-1106-preview,7,"The study described in the abstract engages with the concept of hard prompt engineering by introducing methods for providing 'direction' to the model's output, specifically in terms of spatial object placement. This work falls under the study of prompt engineering to the extent that it addresses a fine-grained aspect of the control mechanism one might use in a prompt to guide the output of a generative model. However, the focus is somewhat tangential to hard prefix prompts specifically, as the emphasis seems to be on the manipulation of cross-attention maps rather than the construction of text prompt prefixes. The rating is not a perfect 10 because the abstract does not directly reference hard prefix prompts or their systematic review; rather, it offers a novel contribution that could be considered in the broader field of prompt engineering within generative AI." -clip-actor: text-driven recommendation and stylization for animating human meshes,gpt-4-1106-preview,7,"The relevance of the described paper 'clip-actor: text-driven recommendation and stylization for animating human meshes' to prompt engineering study is moderately high. 
While the main focus is on animating 3D human meshes using text prompts, the fact that it leverages natural language prompts to drive the animation process indicates an overlap with prompt engineering research. The system's ability to interpret and respond to natural language inputs demonstrates a practical application of prompt engineering in the field of computer graphics and animation. However, the study is not explicitly centered on the systematic review or theoretical examination of hard prefix prompts in the broader context of prompt engineering, which slightly limits its full relevance to the specific subject of a comprehensive systematic review on hard prefix prompts." -promptboosting: black-box text classification with ten forward passes,gpt-4-1106-preview,9,"The abstract discusses PromptBoosting, an approach to text classification that effectively uses prompts to train a classifier without needing access to the underlying language model's internal workings, which is highly relevant to prompt engineering. The method involves creating a set of prompts and using an ensemble learning algorithm to improve classification performance. This process aligns closely with prompt engineering by proposing a novel way to interface with and manipulate language models using prompts, thereby making it highly pertinent to studies in prompt engineering. The paper does not specifically focus on 'hard prefix prompts' as stated in the potentially narrower research interest of the initial inquiry but still provides significant insights into the general area of prompt-based methods." -unleashing the power of visual prompting at the pixel level,gpt-4-1106-preview,6,"The paper describes a study focused on visual prompting as a technique to adapt pre-trained models for recognition tasks, which is related to prompting in a broad sense. However, the query asks for a comprehensive systematic review on hard prefix prompts, which typically refers to textual prompt engineering where specific text prompts are designed to guide language models. Although visual prompting shares similar goals in terms of adapting models to new tasks, it does so in the domain of images rather than text. Hence, the relevance is moderate because the methods and outcomes may have conceptual parallels to textual prompt engineering, but do not directly address or review hard text-based prefix prompts." -reward collapse in aligning large language models,gpt-4-1106-preview,8,"The paper discusses an important aspect of prompt-based training in large language models, specifically how prompt-related information is incorporated into the training process. This is highly relevant to prompt engineering because it deals with the effectiveness of prompts and the responses generated by language models. The concept of 'reward collapse' is directly related to the outcomes of different prompts, and thus to the study of prompt engineering. The paper proposes a solution to make rewards prompt-dependent, which is a significant concern in prompt engineering. While it does not directly address 'hard prefix prompts', the study's implications for the design of prompts and training methods are closely related to prompt engineering." 
-being comes from not-being: open-vocabulary text-to-motion generation with wordless training,gpt-4-1106-preview,4,"While the abstract indicates that this study was inspired by prompt learning in NLP and involves the reformulation of input text into a 'prompt' for a generator, the primary focus is on text-to-motion generation rather than on prompt engineering for language models or systematic reviews of 'hard prefix prompts.' The connection to prompt engineering is tangential and based more on a conceptual inspiration than on a direct study or analysis of prompts in the context of text or language processing." -a web-based environment for documentation and sharing of engineering design knowledge,gpt-4-1106-preview,5,"The abstract describes an ontological knowledge-base designed to aid in the engineering design process by prompting engineers to document and share information efficiently. Although it mentions the use of prompts to drive certain behaviors within the engineering design process, the focus is not specifically on the study of 'hard prefix prompts' in the context of 'prompt engineering' as it relates to AI or machine learning. The paper seems to be more aligned with knowledge management and ontological structures in engineering rather than the specific study of designing and engineering prompts for AI systems. Therefore, it is somewhat relevant due to its use of prompting mechanisms but not directly concerned with the study at hand." -speechprompt: an exploration of prompt tuning on generative spoken language model for speech processing tasks,gpt-4-1106-preview,8,"The provided document is highly relevant to prompt engineering as it discusses prompt tuning, which is a key aspect of prompt engineering. Although the focus is on speech processing tasks rather than hard prefix prompts in textual contexts, the principles of prompt tuning and leveraging pre-trained models with minimal additional parameter training are central to the concept of prompting in both speech and text applications. The exploration of this technique's effects on efficiency and performance in speech models contributes useful insights to the broader field of prompt engineering. The rating is not a full 10 as the study specifics are tailored towards speech models, thereby making it somewhat less directly applicable to prompt engineering studies focused exclusively on text-based models." -late prompt tuning: a late prompt could be better than many prompts,gpt-4-1106-preview,9,"The provided abstract is highly relevant to prompt engineering study as it discusses prompt tuning—a specific area within prompt engineering. It introduces 'Late Prompt Tuning' as a method to improve efficiency and performance of prompt tuning, which is directly related to the concerns of prompt engineering. The only reason why it is not rated a perfect 10 is that the abstract does not explicitly mention 'hard prefix prompts,' but rather focuses on an improved methodology of soft prompt tuning. Nevertheless, understanding the prompt tuning aspect, even if it is soft prompt related, is essential for comprehensive knowledge in the overall field of prompt engineering." -kipt: knowledge-injected prompt tuning for event detection,gpt-4-1106-preview,9,"The described study directly relates to prompt engineering by discussing Knowledge-injected Prompt Tuning (KiPT) for event detection, which is a technique to enhance the performance of prompt-based models by injecting external knowledge. 
It is highly relevant to the field of prompt engineering, as it proposes a specific way to refine prompts (a core component of prompt engineering) to increase precision. This is applicable to the broader study of prompt engineering, particularly in the context of few-shot learning tasks and the integration of external knowledge bases into the prompting process." -making pre-trained language models end-to-end few-shot learners with contrastive prompt tuning,gpt-4-1106-preview,9,"The paper presents a framework related to improving the efficiency of PLMs in low-resource scenarios through a method known as Contrastive Prompt Tuning. It tackles the challenge of creating task-specific prompts and verbalizers without manual engineering, which is highly relevant to the field of prompt engineering. The mention of 'task-invariant continuous prompt encoding' and 'fully trainable prompt parameters' directly relates to engineering prompts to improve few-shot learning capabilities of language models. Therefore, the study is highly pertinent to prompt engineering, especially considering its focus on end-to-end and contrastive learning approaches for enhancing language model performance. The only reason it is not rated a full 10 is that it doesn't explicitly mention 'hard prefix prompts,' which the original study inquiry specified, but it covers the overarching theme of prompt engineering sufficiently." -lpt: long-tailed prompt tuning for image classification,gpt-4-1106-preview,7,"The paper introduces an approach for adapting pretrained models to long-tailed classification problems using prompts. This is relevant to prompt engineering since LPT (Long-tailed Prompt Tuning) involves creating and tuning prompts as a method of model adaptation, which falls under the broader category of prompt engineering strategies. The systematic review sought is broader and looks for hard prefix prompts, which might imply a specific subset of prompt engineering. Nonetheless, as LPT involves modifying prompt mechanisms for a specific end, it shares concepts with the overall field of prompt engineering. The rating is not a full 10 because the described method does not directly focus on the general study of prompt engineering or the particular 'hard prefix prompts' but rather a specialized application of prompt tuning in image classification." -multi-prompt alignment for multi-source unsupervised domain adaptation,gpt-4-1106-preview,8,"The abstract describes the use of prompts in the context of unsupervised domain adaptation, introducing a new framework called Multi-Prompt Alignment (MPA). This is directly related to prompt engineering as it involves training and aligning prompts to minimize domain gaps. Although the focus here is more on domain adaptation rather than the study of 'hard prefix prompts' in isolation, the application of prompt learning techniques makes it relevant to the field of prompt engineering. The rating is not a full 10 because the abstract does not directly address a comprehensive systematic review on hard prefix prompts per se, but rather introduces a novel application of prompt engineering in UDA." -eliciting knowledge from pretrained language models for prototypical prompt verbalizer,gpt-4-1106-preview,9,"The paper describes an approach that directly pertains to prompt engineering by discussing the elicitation of knowledge from pretrained models and the optimization of said models for prompt-tuning. 
The concept of a prototypical prompt verbalizer and the use of contrastive learning are specific methodologies within the broader field of prompt engineering, thus highly relevant. The rating isn't a perfect 10 as the abstract is not provided, and therefore the review may not cover all aspects of 'hard prefix prompts' specifically mentioned in the initial query." -fine-grained retrieval prompt tuning,gpt-4-1106-preview,7,"The paper titled 'Fine-grained Retrieval Prompt Tuning' is relevant to prompt engineering as it introduces a method (FRPT) involving prompts to steer a pre-trained model's behavior without fine-tuning the entire model. This is in line with the concept of prompt engineering wherein strategic prompts are used to harness a model's capabilities for specific tasks. Although the paper deals with a specialized domain of fine-grained object retrieval and is more focused on the retrieval aspect rather than prompt engineering in a broad sense, the principles and methods it introduces are applicable to the study of prompt engineering, especially in how prompts can be used to adapt a model's output without extensive retraining. The rating is not a full 10 because the paper appears to be narrowly focused on a specific instance of prompt use, rather than a comprehensive systematic review on hard prefix prompts as potentially indicated by the phrase 'prompt engineering study.'" -exploring low-dimensional intrinsic task subspace via prompt tuning,gpt-4-1106-preview,8,"The abstract and TLDR provided pertain to the study of prompt tuning within pre-trained language models (PLMs), and they discuss how adjustments to these models for various tasks can be achieved by optimizing a small set of parameters within a low-dimensional subspace. This suggests a strong relevance to prompt engineering, as it directly explores methodologies for tuning prompts to improve task adaptability of language models. The only reason the rating is not a full 10 is that, while highly relevant, the study seems to focus on a specific aspect of prompt engineering rather than a comprehensive review of hard prefix prompts in general." -exploring universal intrinsic task subspace via prompt tuning,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering as it investigates the adaptability of pre-trained language models to different NLP tasks by optimizing a small number of parameters. It directly examines prompt tuning, which is a crucial aspect of prompt engineering, and explores the concept of an intrinsic task subspace that could significantly impact how PLMs are fine-tuned for various tasks. Although the focus is on intrinsic prompt tuning (IPT) rather than hard prefix prompts specifically, the findings are broadly applicable to the field of prompt engineering." -how to design the perfect prompt: a linguistic approach to prompt design in automotive voice assistants – an exploratory study,gpt-4-1106-preview,8,"The provided title and abstract are highly relevant to the broad field of prompt engineering, especially in the context of voice user interfaces (VUIs). The exploratory study focuses on the linguistic aspects of prompt design, which covers syntactical, lexical, and grammatical elements that are fundamental to the construction of effective prompts within the automotive industry's voice assistants.
Although the study is specific to a particular application (automotive VUIs) and language (German), the methodology and findings regarding the impact of language parameters on user perception can offer significant insights for prompt engineering in general. The rating falls short of a perfect score because the study's scope is restricted to a single language and use case, which may or may not be directly applicable to hard prefix prompts specifically mentioned in the original query." -improving chatgpt prompt for code generation,gpt-4-1106-preview,9,"The abstract provided details an empirical study on how prompt design, particularly in the use of ChatGPT for code generation tasks, affects performance. This is highly relevant to prompt engineering, as it outlines a method of prompt optimization (leveraging the chain-of-thought strategy) and discusses the impact of different prompts on the efficacy of an AI model. It does not focus specifically on 'hard prefix prompts,' as might be suggested by the original query on 'prompt engineering study,' but it does deal with the broader area of prompt engineering, warranting a high relevance rating." -dynamic prompting: a unified framework for prompt tuning,gpt-4-1106-preview,9,"The paper in the title focuses on the topic of prompt tuning, specifically the effectiveness of dynamic prompts versus fixed soft prompts. It directly addresses optimizing prompt position and how it affects performance in extracting knowledge from various pretrained models. The 'hard prefix prompts' mentioned in the request for a systematic review relate to the broader field of prompt engineering and tuning, and while the paper appears to discuss a more advanced approach (dynamic prompts), it is highly relevant to the study of prompts in general, including hard prefixes. The abstract provided offers insights and tangible outcomes of prompt tuning research, thus the relevance rating is high. However, it is not exclusively focused on 'hard prefix prompts' but considers prompt tuning more broadly, hence the rating is not a perfect 10." -exploring sparse visual prompt for domain adaptive dense prediction,gpt-4-1106-preview,8,"The provided abstract is highly relevant to prompt engineering study because it discusses an advanced application of prompts—Sparse Visual Domain Prompts (SVDP)—in the context of Test-Time Adaptation (TTA) for domain adaptive dense prediction tasks. It examines the role of prompts in addressing domain shift challenges and introduces methods for optimal prompt placement and updating on a per-sample basis. Although the abstract focuses specifically on visual domain prompts, which may be a more specialized area within the broader field of prompt engineering, the concepts of domain-specific knowledge extraction and efficient adaptation to target domains through prompts are essential to the study of prompt engineering. Therefore, the relevance is rated highly but not at the maximum because it is specific to the visual domain and dense prediction tasks rather than general prompt engineering." -stylediffusion: prompt-embedding inversion for text-based editing,gpt-4-1106-preview,7,"The given abstract is moderately relevant to prompt engineering study. It discusses a method for text-based editing of images using pretrained diffusion models, which involves prompt-editing. The relevance is substantial because working with prompts is integral to guiding AI models in generating or editing content.
The paper proposes improvements for image editing using text prompts, which is related to prompt engineering in the way that it attempts to refine how prompts influence the AI's output. However, the focus seems to be more on image editing and attention regularization rather than hard prefix prompts, which would be the core topic in a prompt engineering study. Hence, the relevance is not complete, but the approach to handle and edit prompts for better results is pertinent to the field." -efficient transfer learning for visual tasks via continuous optimization of prompts,gpt-4-1106-preview,8,"The title suggests that the study involves optimizing prompts for transfer learning in visual tasks, indicating a focus on prompt engineering as it applies to machine learning and possibly to neural networks that process visual data. Although details are lacking in the abstract and TLDR, the title implies relevance to prompt engineering, particularly in the context of improving the efficiency of transfer learning through some form of prompt optimization. The rating is not a full 10 due to the lack of information provided in the other fields, which could have either strengthened or weakened the relevance." -a simple zero-shot prompt weighting technique to improve prompt ensembling in text-image models,gpt-4-1106-preview,8,"The abstract presents a study that is directly related to prompt engineering, focusing on automated scoring and ensembling of prompts to improve the accuracy of zero-shot text-image models. Although the study does not specifically mention 'hard prefix prompts', it does address the broader topic of prompt engineering and optimization, which is highly relevant. The only reason it does not receive a full 10 is the absence of a direct discussion about 'hard prefix prompts', which might be considered more specialized within the domain of prompt engineering." -drpt: disentangled and recurrent prompt tuning for compositional zero-shot learning,gpt-4-1106-preview,8,"The provided abstract describes research on prompt tuning, specifically a novel framework called DRPT, in the context of Compositional Zero-shot Learning (CZSL). Its relevance to prompt engineering is high, given that it addresses the optimization of prompts through the use of disentangled and recurrent tuning strategies. While the study might not focus exclusively on 'hard prefix prompts' as mentioned in the initial prompt, the described techniques are directly related to enhancing the efficacy of prompts in interacting with vision-language models (VLMs). Therefore, the content is substantially pertinent to the broader field of prompt engineering." -reprompt: automatic prompt editing to refine ai-generative art towards precise expressions,gpt-4-1106-preview,9,"The abstract pertains directly to the field of prompt engineering, specifically concerning the refinement of AI-generated images based on textual prompts. The introduction of RePrompt, an automatic method for editing prompts to achieve precise emotional expressiveness in AI-generated images, represents a focused study within prompt engineering. This is highly relevant since it deals with optimizing text prompts, albeit in the context of generative art rather than 'hard prefix prompts' used for textual outputs or structured data queries. The reason it's not a 10 is the study's specific angle on emotional expressiveness, which may not encompass the entirety of prompt engineering studies, such as technical or informational aspects." 
-prompt engineering for text-based generative art,gpt-4-1106-preview,8,"The paper is significantly relevant to prompt engineering study as it explores prompt modifiers in the context of text-based generative art, which is a direct application of prompt engineering techniques. The identification of a taxonomy of prompt modifiers aids in understanding how prompts can be engineered or modified for specific outcomes in creative AI applications. Although the study is not exclusively on 'hard prefix prompts', it does provide valuable insights into the broader field of prompt engineering, which is inclusive of various types of prompts including hard prefixes. The conclusion mentioning further research opportunities suggests its utility in expanding the knowledge base of prompt engineering. The rating is not a full 10 because the study is specific to the domain of text-based generative art and does not focus solely on hard prefix prompts, which may be a subset of the broader topic of prompt modifiers." -prompting ai art: an investigation into the creative skill of prompt engineering,gpt-4-1106-preview,9,"The provided abstract directly pertains to the study of prompt engineering, focusing on understanding the skillset necessary for effective text-to-image generation, which is indeed a form of prompt engineering. The research explores participants' abilities to assess, write, and improve prompts, which is highly relevant to the study of prompt engineering as a creative process. The conclusion that prompt engineering requires expertise and practice is a significant insight into the field. The only reason the full score is not given is that the abstract does not specifically address 'hard prefix prompts' which was mentioned in the initial query, indicating it may not cover all possible facets of prompt engineering." -grimm in wonderland: prompt engineering with midjourney to illustrate fairytales,gpt-4-1106-preview,8,"The given abstract describes a study that is highly relevant to prompt engineering, as it focuses on refining text inputs to achieve better outcomes in text-to-image generation, specifically for the purpose of illustrating popular fairytales. The investigation into a methodical process for converting pre-existing text into image prompts aligns with the essence of prompt engineering. However, the study's relevance is slightly limited as it emphasizes action research within the context of fairytales' illustration rather than a broad analysis of the hard prefix prompts aspect in the general field of prompt engineering." -prompt engineering in medical education,gpt-4-1106-preview,8,"The abstract discusses the importance of prompt engineering within the context of medical education using generative language models (GLMs). It highlights the necessity of properly formulated instructions (or prompts) to maximize the utility of GLMs like ChatGPT, Perplexity AI, and Google Bard. The relevance is high because it directly addresses how prompt crafting affects the performance of GLMs in delivering personalized learning and feedback, which is core to prompt engineering studies. However, it is not a perfect 10 as it does not focus solely on the systematic review of 'hard prefix prompts' but rather on prompt engineering in a broader sense within the specific domain of medical education." 
-"multi-party goal tracking with llms: comparing pre-training, fine-tuning, and prompt engineering",gpt-4-1106-preview,9,"The study involves a direct comparison of different adaptation methods for language models, including prompt engineering, to handle a complex task such as multi-party goal-tracking and intent-slot recognition in conversations. The relevance to prompt engineering is high as the paper specifically evaluates and discusses the efficacy of prompt engineering techniques and compares it to other methodologies such as fine-tuning and pre-training in the context of understanding user goals in multi-party conversations. The high performance of prompt engineering in the few-shot setting demonstrates its significance in the study of language model capabilities and applications." -real estate insights unleashing the potential of chatgpt in property valuation reports: the “red book” compliance chain-of-thought (cot) prompt engineering,gpt-4-1106-preview,9,"The article specifically addresses prompt engineering within the context of property valuation and compliance with industry standards, namely the 'Red Book'. It discusses the direct application and importance of crafted prompts for instructing large language models to generate specific, accurate results that comply with professional property valuation standards. Even though it does not focus on 'hard prefix prompts' in a general sense, its contribution to prompt engineering for practical, domain-specific use cases is highly relevant. The deduction of one point is due to the lack of a TLDR and no explicit mention of 'hard prefix prompts', which would have given a precise summary and tied the relevance more directly to the topic." -improving formality-sensitive machine translation using data-centric approaches and prompt engineering,gpt-4-1106-preview,8,"The paper appears to be highly relevant to prompt engineering as it explicitly mentions the use of 'empirically-grounded prompt engineering' as a part of its methodology to improve machine translation relative to a baseline. Prompt engineering is used here in conjunction with a data-centric approach to specifically address the challenge of formal language variations in translation, indicating a direct application of prompt engineering for enhancing model performance. The rating is not a full 10 since the focus is not solely on prompt engineering, but also includes language-specific data-driven approaches." -artificial intelligence prompt engineering as a new digital competence: analysis of generative ai technologies such as chatgpt,gpt-4-1106-preview,9,"The provided abstract for the article, 'artificial intelligence prompt engineering as a new digital competence: analysis of generative ai technologies such as chatgpt,' is highly relevant to the field of prompt engineering. It discusses creating a theoretical framework for AI prompt engineering, analyzing best practices through extensive literature review, and introducing the AI PROMPT framework, which is directly related to the study of prompt engineering. It only falls short of a perfect score because the abstract does not mention 'hard prefix prompts' specifically, which was the core subject of the initial statement. However, the general discussion on AI prompt engineering strategies and their implications in various sectors makes it significantly relevant to the topic at hand." 
-cases of efl secondary students' prompt engineering pathways to complete a writing task with chatgpt,gpt-4-1106-preview,9,"The paper presents an empirical study about how EFL secondary students engineer prompts for a chatbot, specifically ChatGPT, in the context of completing a writing task. It explores the strategies students use and the trial-and-error process they undergo, which is central to understanding the practical applications and educational needs for prompt engineering. The study is highly relevant to the subject of prompt engineering as it shows the significance of this skill in educational settings and provides direct insight into the ways in which non-technical users interact with language models. The reason for not giving a full score of 10 is that it does not cover the theoretical or systematic review aspect of prompt engineering, but focuses specifically on the practical application and user experience." -enhancing automated program repair through fine-tuning and prompt engineering,gpt-4-1106-preview,8,"This abstract discusses a study where language models such as PLBART and CodeT5 are fine-tuned with datasets that contain code review and code changes to improve automated program repair. The relevance to prompt engineering comes from the part of the study that focused on utilizing zero-shot and few-shot learning-based prompt engineering with advanced code generative models like Codex and GPT-3.5-Turbo to assess their performance. Although the primary focus of the study appears to be automated program repair through fine-tuning of language models with specific datasets, the inclusion of prompt engineering as a method to enhance model performance gives it substantial relevance to the topic of prompt engineering. It does not directly address 'hard prefix prompts' as specified in the original inquiry, but it does deal with the employment of prompts in the context of language models, which is why the relevance is rated slightly lower." -"supporting self-directed learning and self-assessment using teachergaia, a generative ai chatbot application: learning approaches and prompt engineering",gpt-4-1106-preview,8,"The abstract indicates that the study involves leveraging prompt engineering to guide the interactions of an AI chatbot, named TeacherGAIA, to support self-directed learning and self-assessment. It specifically contrasts the engineered prompts with the default behavior of a chatbot like ChatGPT, suggesting a focus on how prompts can be tailored to achieve specific educational objectives. While the study is not exclusively focused on 'hard prefix prompts', it clearly involves a significant component of prompt engineering. The rating is not a full 10 because the abstract does not explicitly mention a 'systematic review' or a focus on 'hard prefix prompts', which are key aspects of the complete prompt stated in the requirement." -ncu-iisr: prompt engineering on gpt-4 to stove biological problems in bioasq 11b phase b,gpt-4-1106-preview,9,"The abstract indicates a high relevance to prompt engineering study as it describes a system that focuses on the application of prompt engineering strategies using GPT-4. The system's design for addressing biomedical questions implies substantial engagement with the crafting of prompts to interact with a language model effectively. The paper details experimental steps on prompt engineering, compares methodologies, and notes performance improvements due to optimized prompts. 
This offers considerable insight into how prompt engineering can be applied to enhance the utility of language models in a specific domain. The point deduction from a perfect score is due to the absence of details about 'hard prefix prompts', which may or may not have been a part of their strategies, as it is not explicitly stated." -"optimizing mobile-edge ai-generated everything (aigx) services by prompt engineering: fundamental, framework, and case study",gpt-4-1106-preview,9,"The title and abstract indicate that the study is highly relevant to prompt engineering as it directly discusses optimizing services through prompt engineering methods. The study reviews the evolution from AI-Generated Content (AIGC) to AI-Generated Everything (AIGX), and presents a framework that uses prompt engineering to enhance the performance of AI services on edge devices. It also includes a case study on training a prompt optimizer, which is directly related to employing prompt engineering techniques. The only reason the rating is not a full 10 is that the study focuses on a specific application (mobile-edge services) rather than prompt engineering in the broadest sense, which could include other domains and use-cases." -prompt engineering as an important emerging skill for medical professionals: tutorial,gpt-4-1106-preview,8,"The title and abstract provided describe a paper that is significantly relevant to the field of prompt engineering. It specifically discusses the application of prompt engineering in the context of medical professionals, thereby addressing a niche yet important aspect of prompt engineering. The relevance is not a full 10 because the focus is narrowed to the medical field, and the study is a tutorial rather than a comprehensive systematic review on 'hard prefix prompts'. Therefore, while it is highly relevant to prompt engineering, it does not fully address the broader aspect of the engineering study as requested in the initial prompt." -exploring the intersection of large language models and agent-based modeling via prompt engineering,gpt-4-1106-preview,9,"The title and abstract are highly relevant to prompt engineering as they describe research that directly utilizes large language models through prompt engineering to simulate human behavior. By exploring two specific simulations (a negotiation and a murder mystery game), the study emphasizes the application of prompt engineering in creating believable scenarios, which aligns closely with the prompt engineering discipline. One point is deducted because the abstract does not explicitly mention 'hard prefix prompts,' which was specified in your original request; however, it does focus on the broader context of prompt engineering within large language models." -the prompt engineering librarian,gpt-4-1106-preview,7,"The abstract discusses the role of librarians in the emerging field of prompt engineering, which is directly related to the study of prompt engineering as a discipline. It also covers the concept of optimizing prompts for artificial intelligence models, which is a fundamental aspect of prompt engineering. However, it focuses more on the potential professional development for librarians rather than a systematic review of hard prefix prompts specifically, which is why the rating is not a full 10." -contextual stance classification using prompt engineering,gpt-4-1106-preview,9,"The paper is highly relevant to prompt engineering as it directly addresses the use of natural language prompts in the domain of few-shot learning. 
Furthermore, it relates to the creation of prompts based on existing conversation threads, which is a specific application of prompt engineering. The focus on how these prompts can potentially replace supervised methods while maintaining accuracy and reducing development costs further emphasizes the practical significance of prompt engineering in machine learning tasks such as contextual stance classification. The rating is not a full 10 because the abstract does not explicitly mention 'hard prefix prompts', which was a specific aspect mentioned in the initial query." -promptmagician: interactive prompt engineering for text-to-image creation,gpt-4-1106-preview,8,"The described research directly addresses prompt engineering within the context of text-to-image generation. It focuses on helping users effectively generate prompts that produce the desired image outcomes, which is a core aspect of prompt engineering. The relevance rating is not a full 10 because the study does not specifically discuss 'hard prefix prompts' as mentioned in your query; rather, it deals with prompt engineering in a broader sense. However, the system it introduces, PromptMagician, is very relevant as it is a direct application of prompt engineering principles to improve user interaction with generative models." -logprompt: prompt engineering towards zero-shot and interpretable log analysis,gpt-4-1106-preview,8,"The abstract describes a novel approach to log analysis using zero-shot learning through the employment of large language models (LLMs) with advanced prompt strategies, which is highly relevant to the field of prompt engineering. The significant performance improvements and the use of no training data underscore the utility of prompt engineering techniques in practical applications. However, the paper seems to be focused more on the application of prompt engineering within the specific domain of log analysis rather than a broad study of hard prefix prompts or a general evaluation of various prompt engineering strategies across different domains." -"the artificially intelligent entrepreneur: chatgpt, prompt engineering, and entrepreneurial rhetoric creation",gpt-4-1106-preview,8,"The title suggests that the study focuses on the use of chatbot technology, specifically ChatGPT, in the context of prompt engineering. It implies an analysis of how entrepreneurial rhetoric can be generated through prompt engineering techniques, which is closely related to the study of how prompts are used to steer the performance of AI models like ChatGPT. Although 'hard prefix prompts' are not explicitly mentioned, the title indicates a strong relevance to the field of prompt engineering in general." -a survey on segment anything model (sam): vision foundation model meets prompt engineering,gpt-4-1106-preview,7,"While the title suggests the primary focus of the study is on the Segment Anything Model (SAM), the abstract indicates a secondary aspect that touches upon the versatility of SAM when combined with various models, including some that involve prompt engineering (e.g., ChatGPT). Although prompt engineering is not the central theme of the study, the impact of the work on prompt engineering is tangential yet relevant, as it involves the integration of SAM with models that may require or benefit from prompt engineering techniques. Therefore, the relevance to prompt engineering is moderate to high."
-plain template insertion: korean-prompt-based engineering for few-shot learners,gpt-4-1106-preview,8,"The abstract indicates that the study is highly relevant to prompt engineering as it focuses on the application of prompt-based few-shot learning to Korean-language datasets, and it specifically mentions the introduction of a plain template insertion method. The fact that it addresses few-shot learning, data scarcity, and the adaptability of prompts to language-specific contexts means that it offers valuable insights into the field of prompt engineering. However, it does not explicitly address 'hard prefix prompts' as mentioned in the original query, which is why the rating is not a full 10." -polyglot prompt: multilingual multitask prompt training,gpt-4-1106-preview,9,"The paper is highly relevant to prompt engineering as it explores the concept of 'Polyglot Prompting', a framework specifically designed for prompt-based learning across multiple languages and tasks. Prompt engineering is central to the approach of creating a unified semantic space within a multilingual context. Additionally, the paper's comprehensive evaluation and the development of an interpretable multilingual evaluation methodology further contribute to the field of prompt engineering by providing insights and tools that can be used to gauge the effectiveness of different prompting methods in a multilingual setting." -"chatgpt prompt patterns for improving code quality, refactoring, requirements elicitation, and software design",gpt-4-1106-preview,9,"The paper outlines a set of patterns for prompt designing, explicitly targeting the automation of software engineering tasks through large language models (LLMs) like ChatGPT. The relevance to prompt engineering is high because it directly discusses prompt design techniques for specific professional tasks and contributes a catalog of patterns that can enhance the effectiveness of LLMs in software engineering contexts. The reason for not giving a full 10 is that the paper does not solely focus on the general concept of 'hard prefix prompts' but rather on broader prompt patterns for software engineering activities." -retrieval-based prompt selection for code-related few-shot learning,gpt-4-1106-preview,8,"The provided abstract is highly relevant to prompt engineering as it discusses a technique centered around the creation of effective prompts, specifically for code-related few-shot learning tasks. The approach, Cedar, leverages retrieval-based methods to choose appropriate code demonstrations to accompany the task prompt, which is a direct application of prompt engineering principles. The results indicating the technique's effectiveness and its comparison with state-of-the-art models further underscore its relevance to the field. The deduction of two points is due to the lack of direct mention of 'hard prefix prompts', as the abstract focuses more broadly on prompt creation rather than the specific systematic review mentioned in the initial prompt." -"a study on prompt design, advantages and limitations of chatgpt for deep learning program repair",gpt-4-1106-preview,8,"The study directly relates to prompt engineering by investigating how ChatGPT's performance in deep learning program repair can be enhanced through tailored prompts. It explores ChatGPT's debugging capabilities and proposes prompt templates, which are central to prompt engineering. 
Additionally, the study addresses the effectiveness of dialogue in facilitating program repair, which is a novel aspect of prompt design. The rating is not a perfect 10 because the focus is more on program repair rather than exclusively on prompt engineering. However, prompt design is a significant component of this research, making it highly relevant to the field of prompt engineering." -ip-adapter: text compatible image prompt adapter for text-to-image diffusion models,gpt-4-1106-preview,7,"The paper describes IP-Adapter, an adapter for text-to-image diffusion models to incorporate image prompts along with text prompts. Although not focused on 'hard prefix prompts' specifically within text prompt engineering, it tackles the broader area of prompt engineering by enhancing the interface between human input and AI models to improve the generation of images. It is relevant to the field as it addresses the complexity of prompt engineering and offers a solution that enhances multimodal interactions, thus providing insights into how prompt systems could be improved. However, the paper's main focus is on the technical implementation of the adapter and the decoupled cross-attention mechanism for image prompts, so it is not entirely centered on the systematic review or standard text-based prompt engineering." -prompt space optimizing few-shot reasoning success with large language models,gpt-4-1106-preview,9,"The title and abstract indicate that the study is highly relevant to prompt engineering with a particular focus on optimizing prompt strategies for large language models in few-shot reasoning contexts. The introduction of 'Prompt Space' and its theoretical foundation based on text embeddings and matrix decomposition aligns closely with the field of prompt engineering. The claimed improvements over state-of-the-art methods further validate the study's pertinence to the topic. The only reason it is not a perfect 10 is that the study does not appear to narrowly focus on 'hard prefix prompts', but rather on prompt engineering as a whole, which may include a broader range of techniques beyond just hard prefix prompts." -what initiates evidence‐based reasoning?: situations that prompt students to support their design ideas and decisions,gpt-4-1106-preview,6,"The document is somewhat relevant to the study of prompt engineering as it discusses the situations that lead students to use evidence-based reasoning, which is an important factor in understanding how to structure prompts to elicit informed responses. Although it focuses on evidence-based reasoning in the context of engineering education rather than the specific area of 'hard prefix prompts', understanding the broader principles of how prompts can initiate certain types of thinking is pertinent to prompt engineering." -bim-gpt: a prompt-based virtual assistant framework for bim information retrieval,gpt-4-1106-preview,7,"The abstract presents a study focused on utilizing prompt-based virtual assistant technologies for information retrieval in the construction industry, which is tangentially relevant to prompt engineering. While the primary application is specific to building information models (BIM), the fact that it involves engineering prompt systems (in this case, for integration with GPT models) to interpret natural language makes it partially relevant to the study of prompt engineering. 
The rating is not higher because the study is not solely focused on the systematic review of hard prefix prompts or prompt engineering specifically but rather on an application of those principles within a specific domain." -api entity and relation joint extraction from text via dynamic prompt-tuned language model,gpt-4-1106-preview,7,"The paper discusses the use of a dynamic prompt-tuned language model for the task of API entity and relation extraction, which is a form of prompt engineering applied to software engineering tasks. Although the main focus is on API extraction rather than the prompt engineering itself, the use of dynamic prompts is a relevant application of prompt engineering techniques. Hence, the relevance to prompt engineering study is significant, but not entirely central to the work, as prompt engineering seems to be a part of the method rather than the sole focus." -a prompt-aware neural network approach to content-based scoring of non-native spontaneous speech,gpt-4-1106-preview,4,"The study focuses on using neural network techniques to assess non-native spontaneous speech, which includes using prompts as a condition for the model. Although this involves engineering a model to interact with prompts, the core emphasis is on automatic assessment rather than on the systematic review or deep exploration of 'hard prefix prompts', which would be central to prompt engineering studies. Therefore, the relevance is moderate as it only touches on prompt-related aspects within a broader application context." -performance of chatgpt on the us fundamentals of engineering exam: comprehensive assessment of proficiency and potential implications for professional environmental engineering practice,gpt-4-1106-preview,7,"The study focuses on the use of ChatGPT in the context of an engineering certification exam, which is highly relevant to the engineering field. It examines the role of AI in educational settings, specifically related to professional environmental engineering practice. However, the study is narrowly tailored to the Environmental sector of the FE exam and does not directly address 'prompt engineering' as a systematic study across various disciplines or in a broad context. Prompt engineering usually refers to how prompts are structured to elicit the best response from an AI model, and while the abstract mentions 'noninvasive prompt modifications', it does not seem to be the central focus of the study. Therefore, the rating is a 7, indicating substantial but not complete relevance to prompt engineering study." -how do different reflection prompts affect engineering students’ academic performance and engagement?,gpt-4-1106-preview,6,"The abstract describes a study that relates to prompt engineering in the educational sense and not in the AI field. It addresses the effectiveness of different types of reflection prompts (generic versus specific) on students' performance and engagement in an engineering course context. While it is not directly about the engineering of AI-based prompt systems, the insights regarding how specificity in prompts can influence outcomes may be partially relevant to the nuances involved in designing prompts for AI systems. However, the primary focus of the study on academic performance and engagement of engineering students limits the relevance to prompt engineering in AI. Thus, the rating reflects moderate relevance due to indirect connections that could be drawn between educational prompting strategies and AI prompt design considerations." 
-exploring the effects of the design prompt on students’ design cognition,gpt-4-1106-preview,8,"The abstract discusses the influence of design prompts on students' design cognition, which is highly relevant to prompt engineering in the context of educational research. It examines the hypothesis that the task provided (the design prompt) impacts the student's design process and experience. While the concept of 'hard prefix prompts' is not specifically mentioned, the study of how prompts affect design cognition is closely related to exploring how different types of prompts (potentially including hard prefixes) can shape the design process. Therefore, the relevance to prompt engineering study is high, but not maximal due to the absence of a specific focus on 'hard prefix prompts'." -symbolic knowledge distillation: from general language models to commonsense models,gpt-4-1106-preview,9,"The abstract provided discusses the use of prompt engineering as a central technique in the process of Symbolic Knowledge Distillation. The careful construction of prompts and the use of a critic model to refine the results from a general language model like GPT-3 directly relate to the field of prompt engineering. It demonstrates the effectiveness of well-engineered prompts in training more specialized commonsense models. Although the abstract does not focus exclusively on 'hard prefix prompts,' the relevance of the work to the broader field of prompt engineering is substantial, meriting a high rating." -"chat2vis: generating data visualizations via natural language using chatgpt, codex and gpt-3 large language models",gpt-4-1106-preview,9,"The paper discusses a novel system, Chat2VIS, which relies heavily on effective prompt engineering to guide large language models (LLMs) like ChatGPT and GPT-3 to generate data visualizations from natural language text. Although the focus is more on the application side of using LLMs for data visualization, the process inevitably involves the study and construction of prompts that can accurately convey user queries to these models, despite potential misspecification or under-specification. This reliance on specialized prompt design for improving the reliability and accuracy of LLM outputs suggests a significant overlap with the topic of prompt engineering. The rating is not a full 10 because the abstract does not indicate if the study explicitly covers theoretical aspects of hard prefix prompts or a systematic review of such." -"chatgpt evaluation on sentence level relations: a focus on temporal, causal, and discourse relations",gpt-4-1106-preview,7,"The abstract provided is relevant to prompt engineering to a significant extent as it describes the evaluation of an AI language model, specifically ChatGPT, using different prompt templates such as zero-shot, zero-shot PE (prompt engineering), and ICL (in-context learning). These templates are inherently connected to the study of prompt engineering as they directly impact the performance and accuracy of the model on various tasks related to inter-sentential relations. Although the abstract does not directly address 'hard prefix prompts', the use of different prompt templates including the PE template aligns with the broader field of prompt engineering. The systematic approach taken in evaluating these templates relates to the systematic review aspect of a 'comprehensive systematic review on hard prefix prompts.' 
However, given that the focus is on sentence-level relations rather than hard prefix prompts explicitly, it does not fully align with the prompt, hence the rating is not a full 10." -cutting down on prompts and parameters: simple few-shot learning with language models,gpt-4-1106-preview,8,"The abstract discusses how fine-tuning language models in a few-shot setting can reduce the need for prompt engineering, indirectly addressing the challenges associated with hard prefix prompts by proposing an alternative solution. Although the study targets the broader concept of prompt engineering, its findings offer valuable insights into the specific area of hard prompting, demonstrating ways to optimize the process. The lower rating reflects that while the study is relevant, it is not exclusively focused on hard prefix prompts." -fake it till you make it: learning transferable representations from synthetic imagenet clones,gpt-4-1106-preview,7,"The abstract describes a study where the researchers explore using class-agnostic prompt engineering to generate ImageNet clones with Stable Diffusion, suggesting a focus on prompt engineering to enhance synthetic image training for image classification models. While the focus on 'hard prefix prompts' isn't explicitly mentioned, the paper still significantly revolves around the concept of prompt engineering and its effects on machine learning model outcomes. Thus, the study is quite relevant to the broader field of prompt engineering, albeit in the context of image generation, rather than text-based applications." -text-guided synthesis of artistic images with retrieval-augmented diffusion models,gpt-4-1106-preview,7,"The abstract describes a method where 'prompt-engineering' is used to achieve a certain visual style in synthesized images, which is relevant to the study of how prompts are engineered to guide AI models. However, the focus on 'retrieval-augmented diffusion models' which use external databases for conditioning, offers an alternative to crafting hard prefix prompts. The relevance is rated a 7 as it deals with prompt engineering indirectly by presenting an alternative method to achieve specific outcomes in generative tasks. The study emphasizes the conditioning of models post training rather than the design of the prompts themselves." -bigbio: a framework for data-centric biomedical natural language processing,gpt-4-1106-preview,8,"The text discusses the creation of BigBIO, a library that contains numerous biomedical NLP datasets, supporting meta-dataset curation. Its compatibility with current platforms for prompt engineering makes it highly relevant for studies focused on prompting, though the abstract does not specifically address 'hard prefix prompts'. Therefore, its relevance to the broader subject of prompt engineering is high, but it may not directly address the specificity of hard prefix prompts, thus the rating is not a full 10." -repair is nearly generation: multilingual program repair with llms,gpt-4-1106-preview,7,"The abstract describes a research study on RING, a multilingual repair engine that uses a large language model for code repair tasks, which relies on prompts to guide the repair process. Although the study focuses on automated program repair, the use of a prompt-based strategy to assist in the repairing process is aligned with prompt engineering concepts. This suggests that the study contributes to the understanding of how prompts can be engineered to interact with AI models, specifically in the context of code repair. 
However, it doesn't specifically target 'hard prefix prompts' in prompt engineering, nor does it seem to focus on the systematic review of such prompts. Therefore, the relevance rating is not a perfect 10, but still substantial given the use of prompt-based strategies in the context of AI-powered code repair." -prompting is all your need: automated android bug replay with large language models,gpt-4-1106-preview,9,"The abstract describes the use of prompt engineering to automatically reproduce bugs from bug reports using a methodology called AdbGPT. This directly involves prompt engineering as a crucial component for leveraging Large Language Models (LLMs) to understand and process bug reports, enabling automated bug replay. The relevance to prompt engineering is high, as it is a key part of the proposed system for understanding and acting on natural language inputs, which demonstrates an advanced application of prompt engineering in software maintenance. The reason the rating is not a perfect 10 is that the focus is on the application of prompt engineering in a specific context (automated android bug replay) rather than a general study or comprehensive review of hard prefix prompts within the broader scope of engineering studies." -qaner: prompting question answering models for few-shot named entity recognition,gpt-4-1106-preview,9,"The abstract discusses the development of a new method for prompt-based learning in the context of Named Entity Recognition (NER), which is directly related to the field of prompt engineering. The research is aimed at refining prompt strategies, generating prompts, and tuning QA models with prompts, addressing various challenges in prompt-based methods. This is highly relevant to the study of prompt engineering, especially in its application to NER tasks. The reason for not giving a full 10 is that the abstract does not explicitly mention 'hard prefix prompts,' suggesting that the study might not cover that specific aspect of prompt engineering." -prompting the hidden talent of web-scale speech models for zero-shot task generalization,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering as it focuses on adapting a web-scale speech model, Whisper, to perform zero-shot tasks by using specialized prompt engineering techniques. The paper demonstrates significant performance improvements on new tasks by designing task-specific prompts, which directly pertains to the field and thereby scores a high relevance rating. It only falls short of a perfect score because it is not a comprehensive systematic review, but rather an experimental study illustrating practical applications of prompt engineering." -the creativity of text-based generative art,gpt-4-1106-preview,8,"The abstract indicates that the paper focuses on 'text-based generative art' and discusses the role of human creativity in the context of prompt engineering, which is directly related to prompt engineering study. It references Rhodes’s conceptual model of creativity, which could provide insight into the design and evaluation of prompts. The critique of product-centered creativity views hints at a theoretical exploration relevant to understanding how prompts are engineered and used in practice. Although the paper does not seem to be exclusively about 'hard prefix prompts' in prompt engineering, it appears to address the broader context and implications of prompt use and creativity in text-based generative systems.
Thus, the relevance to prompt engineering study is high, but it is not a perfect match since it does not focus solely on 'hard prefix prompts'." -no token left behind: explainability-aided image classification and generation,gpt-4-1106-preview,8,"The paper abstract indicates that the research addresses issues related to the instability in zero-shot learning when using models like CLIP, which is related to how input prompts are constructed and used (prompt engineering). The study proposes an explainability-based approach to ensure that the model considers all relevant semantic parts of the input, likely including how the prompts are designed and their tokens. This is highly relevant to prompt engineering, although the study focuses more broadly on zero-shot learning and explainability, not solely on prompt engineering. Thus, the relevance rating is high, but not maximum." -automatically generating cs learning materials with large language models,gpt-4-1106-preview,7,"The content of the provided abstract is relevant to prompt engineering in that it discusses the application of Large Language Models (LLMs) in generating code and educational content based on natural language prompts. Although it does not specifically mention 'hard prefix prompts', it is related to the broader subject of how prompts can be utilized to facilitate computer science learning and to the design of prompts for effective interaction with models like GPT-3 and Codex. The abstract also touches upon the implications of LLM integration in pedagogy, which could include discussions on the crafting of prompts for educational purposes. Therefore, while it is not a direct study on prompt engineering, it is certainly relevant to the field, especially in the context of their application in education." -language-aware soft prompting for vision & language foundation models,gpt-4-1106-preview,8,"The shared abstract and summary are highly relevant to prompt engineering, specifically in the context of Vision & Language (V&L) models, indicating a study of prompt design and their application to model training. Although the study focuses on 'soft' prompts and not 'hard' prompts as mentioned in the initial query, it significantly engages with prompt engineering concepts by discussing the creation and adjustment of prompts. It researches how prompts can be optimized and regularized to improve model performance and addresses an important aspect of prompt engineering: the resistance to overfitting and the ability to generalize to unseen classes. Therefore, it contributes to the overall understanding and methodology of prompt engineering even if it does not directly address 'hard prefix prompts'." -chatgpt4pcg competition: character-like level generation for science birds,gpt-4-1106-preview,8,"The paper's focus on a competition that centers on creating prompts for ChatGPT to generate specific game levels is highly relevant to the field of prompt engineering. Although it doesn't address 'hard prefix prompts' specifically, it contributes to the understanding and application of prompt engineering in procedural content generation. This relevance is somewhat niche as it applies to a gaming context, yet the principles and methods used can offer valuable insights into prompt engineering best practices and strategies." -will it blend? 
mixing training paradigms & prompting for argument quality prediction,gpt-4-1106-preview,9,"The paper is highly relevant to prompt engineering as it specifically describes the use of prompt engineering with GPT-3 for the task of Argument Quality Prediction. The focus on mixing training paradigms and the experimentation to determine the best setup for predicting different aspects of argument quality are central to the study of how different prompts can influence the output of large language models. The relevance is not a full 10 only because the paper also delves into training paradigms along with prompt engineering, which implies it does not solely concentrate on prompt engineering but rather on a combination of techniques." -the infinite index: information retrieval on generative text-to-image models,gpt-4-1106-preview,9,"The abstract discusses the concept of 'prompt engineering' directly in the context of generative models like DALL-E and Stable Diffusion, which is highly relevant to the field of prompt engineering study. It addresses a unique challenge within prompt engineering—information retrieval based on prompts given to generative models, which is an advanced aspect of prompt engineering. The introduction of the 'infinite index' concept and the exploration of active learning for image retrieval are pertinent to the engineering of prompts and the optimization of results from generative models. The deduction of one point is due to the lack of explicit mention of 'hard prefix prompts,' which may or may not be part of the 'interactive text-based retrieval' system referenced. However, the content is still highly relevant for researchers and practitioners interested in the intricacies of prompt engineering for generative text-to-image models." -exploring the benefits of visual prompting in differential privacy,gpt-4-1106-preview,7,"The relevance to prompt engineering is significant due to the mention of Visual Prompting (VP), which constitutes a form of prompt engineering applied to visual tasks. This technique aligns with the concept of prompt engineering in the machine learning context, which involves designing inputs that guide the model to perform specific tasks or improve its performance. Even though 'hard prefix prompts' are not explicitly mentioned, the study still falls within the broader scope of prompt engineering by exploring the modification and utilization of input prompts to enhance the performance of machine learning models with differential privacy. The incorporation of VP into DP training methods like PATE and the exploration of its benefits in neural network classifiers make it relevant to the study of prompt engineering. However, the specific exploration of 'hard prefix prompts' is not addressed, which led to a rating of 7 instead of 10." -textgraphs-16 natural language premise selection task: zero-shot premise selection with prompting generative language models,gpt-4-1106-preview,9,"The paper seems to directly address the use of prompt engineering in the context of a natural language premise selection task, which is relevant to the study of prompt engineering effects on AI models' capabilities. It specifically assesses the performance of prompt engineering with GPT-3 in comparison to semantic similarity ranking with SBERT, and although it doesn't outperform SBERT when used alone, the combined approach yields better results. 
This indicates the paper significantly contributes to the understanding of prompt engineering's influence and utility in complex NLP tasks such as automated theorem proving, making it highly relevant to prompt engineering study." -generating requirements elicitation interview scripts with large language models,gpt-4-1106-preview,9,"The referenced study focuses on the application of prompt engineering to the generation of requirements elicitation interview scripts using large language models. It specifically discusses the use of prompt engineering techniques to generate various structured outputs, and even touches on refining prompts for better performance. This directly correlates with the study of prompt engineering as it involves optimizing and fine-tuning prompts to achieve specific outcomes with AI models. The reason for not giving a full 10 is that it's not exclusively about 'hard prefix prompts', but more broadly about prompt engineering applied within a specific context. However, it still holds high relevance to the overall field of prompt engineering." -an empirical evaluation of prompting strategies for large language models in zero-shot clinical natural language processing,gpt-4-1106-preview,9,"The described paper is highly relevant to prompt engineering as it conducts an empirical evaluation of prompting strategies for large language models specifically within the clinical NLP context. It assesses several prompt types like simple prefix, chain of thought, and introduces new types such as heuristic prompting and ensemble prompting, which are directly related to the study of prompt engineering. The only reason it doesn't receive a perfect score is that it is focused on the clinical domain and the prompt types are not limited to 'hard prefix prompts' as inquired in the original query." -an experimental investigation of analogy formation using the engineering-to-biology thesaurus,gpt-4-1106-preview,7,"The study focuses on the use of an Engineering-to-Biology thesaurus to facilitate analogy formation, which is a cognitive strategy closely related to the concept of 'hard prefix prompts'. Although it does not explicitly mention 'hard prefix prompts', the experimentation with keywords to generate ideas is akin to the process of using specific prompts to guide thought processes. However, its relevance is not a perfect match as it does not directly deal with the systematic review of hard prefix prompts or their use in studies; instead, it focuses on the application of a thesaurus in bioinspired design, which is just one aspect of prompt engineering." -an empirical study on few-shot knowledge probing for pretrained language models,gpt-4-1106-preview,8,"The study presents an empirical analysis of prompt-based knowledge probing with a focus on few-shot settings, which is highly relevant to the field of prompt engineering as it explores how models can be effectively used with limited data. Although it does not directly analyze 'hard prefix prompts,' the mention of optimizing prompts and a comparison of various approaches is pertinent to prompt engineering techniques and strategies. The findings related to finetuning bias vectors could contribute to the prompt engineering literature, especially since they claim to outperform existing methods." 
-generating disentangled arguments with prompts: a simple event extraction framework that works,gpt-4-1106-preview,9,"The presented study is highly relevant to prompt engineering as it introduces a prompt-based learning strategy to the domain of Event Extraction. The use of prompts to automate the exploitation of label semantics indicates a direct application of prompt engineering. The fact that this work sets new records for Argument and Trigger Extractions suggests that it advances the field significantly. While the paper does not focus on 'hard prefix prompts' specifically, its contribution to prompt-based methods in Event Extraction demonstrates its relevance to studies on prompt engineering." -how to prompt? opportunities and challenges of zero- and few-shot learning for human-ai interaction in creative applications of generative models,gpt-4-1106-preview,9,"The abstract provided outlines a study that delves into the usage, challenges, and potential advancements in the field of prompt engineering, specifically in the context of zero-shot and few-shot learning for creative applications with generative models. The focus on how end-users interact with AI through prompts and the subsequent proposal of design goals for user interfaces that support prompt-based interactions is highly relevant to prompt engineering. The study appears to be concerned with improving the effectiveness and intuitiveness of prompts, which is crucial to the field. Therefore, the relevance rating is high, albeit not maximum, as it might not cover the 'hard prefix prompts' as specified in the original prompt, but it still relates significantly to the broader subject of prompting in AI." -few-shot learning with multilingual generative language models,gpt-4-1106-preview,8,"The study appears to be highly relevant to prompt engineering as it includes an in-depth analysis of different multilingual prompting approaches and demonstrates the utility of templates and example demonstrations in achieving strong few-shot learning performance across languages. Although the abstract does not explicitly mention 'hard prefix prompts', the principle of engineering effective prompts to enhance model performance in few-shot learning scenarios is fundamentally related to prompt engineering. The rating is not a full 10 because the abstract does not directly address 'hard prefix prompts', but it is high due to the clear relevance of the study's focus on prompting techniques and few-shot learning." -tuning language models as training data generators for augmentation-enhanced few-shot learning,gpt-4-1106-preview,8,"The study deals with few-shot learning in pretrained language models (PLMs) leveraging prompts, which is highly relevant to prompt engineering. It explores how to effectively utilize a limited amount of data to tune PLMs and then generate additional data to enhance performance on various language tasks. Even though the study does not specifically mention 'hard prefix prompts', it discusses training methodology that involves prompt formulation for modeling, which is a significant aspect of prompt engineering. For this reason, the work is very much related to prompt engineering but does not directly address the systematic review of 'hard prefix prompts', hence the rating of 8 instead of 10."
-true few-shot learning with prompts—a real-world perspective,gpt-4-1106-preview,8,"This abstract describes an extensive study on Pet (Pattern-exploiting Training), which is a method that leverages prompt-based few-shot learning without relying on a development set for tuning. This research is highly relevant to prompt engineering because it evaluates the effectiveness of prompt-based approaches in few-shot learning scenarios. This can help in understanding how different prompting strategies can be designed and employed effectively in real-world settings. However, the study seems to focus specifically on Pet rather than a broader range of hard prefix prompts, hence the rating is not a full 10." -cins: comprehensive instruction for few-shot learning in task-oriented dialog systems,gpt-4-1106-preview,7,"The study is highly relevant to prompt engineering as it details an approach for leveraging pre-trained language models (PLMs) using task-specific instructions, which is a core aspect of prompt engineering. The 'CINS' system's specific focus on utilizing instructions for few-shot learning in task-oriented dialog systems indicates relevance to the field. However, the paper might not center exclusively on hard prefix prompts or a systematic review of such prompts, thus not fully aligning with the potential scope implied by the term 'comprehensive systematic review on hard prefix prompts'. The rating reflects the significance of instructional design in prompting while acknowledging the potential mismatch in the specificity of the topic." -story centaur: large language model few shot learning as a creative writing tool,gpt-4-1106-preview,7,"The study is relevant to prompt engineering to some extent, as it deals with the application of few-shot learning with large language models, which is an aspect of prompt engineering. The design of the Story Centaur interface can imply the use of prompts to guide the language model in generating text based on the writer's input. However, the relevance is not full (i.e., not a 10) because the abstract does not specifically mention 'hard prefix prompts' or a systematic review of prompt engineering techniques. It is more focused on the end-user experience and tool creation for creative writing rather than the detailed study of prompt engineering methods." -few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning,gpt-4-1106-preview,7,"The abstract discusses Parameter-efficient fine-tuning (PEFT), which includes prompt tuning, a technique directly relevant to prompt engineering. Prompt tuning is a method of adjusting a pre-trained model to understand and perform new tasks using prompt-based instructions. PEFT, and specifically the novel (IA)$^3$ method mentioned, likely relates to how prompts can be engineered or optimized for better performance with fewer resources, making it relevant to the study of prompt engineering. However, the focus on the comparative benefits over in-context learning and the overarching goal to improve model efficiency and performance, while related, do not strictly fall within the typical exploration of hard prefix prompts, and therefore do not warrant a maximum relevance rating."
-exploring effectiveness of gpt-3 in grammatical error correction: a study on performance and controllability in prompt-based methods,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering as it investigates how prompt-based methods, a key aspect of prompt engineering, impact GPT-3's performance in Grammatical Error Correction tasks. It examines the effects of varying task instructions and examples, which are central to designing effective prompts. The focus on the controllability aspect of GPT-3 with different instructional prompts makes this study pertinent to understanding and enhancing the use of language models in prompt engineering." -knowledge injected prompt based fine-tuning for multi-label few-shot icd coding,gpt-4-1106-preview,7,"The abstract presents a study that involves using prompt-based fine-tuning for a multi-label classification task, which is a relevant aspect of prompt engineering. However, the focus is more on the injection of domain-specific knowledge into the model and its application to ICD coding rather than a broad analysis of hard prefix prompts across various domains or a generalizable framework. The relevance is therefore significant but not entirely central to prompt engineering, and the paper lacks discussion of hard prefix prompts specifically." -improved universal sentence embeddings with prompt-based contrastive learning and energy-based learning,gpt-4-1106-preview,8,"The abstract discusses 'PromCSE', a method which focuses on using a 'Soft Prompt', that is, a set of trainable vectors, in a prompt-based contrastive learning setting for sentence embeddings. This is related to prompt engineering, a domain that comprises methods to better integrate and tune prompts for effective use with pre-trained language models. Although the abstract does not explicitly mention 'hard prefix prompts', it addresses the topic of prompt-based learning and even touches on energy-based learning mechanisms. For these reasons, the abstract is highly relevant to the study of prompt engineering, but slightly less so specifically to 'hard prefix prompts'. Hence, the rating is 8 instead of 10." -v2p: vision-to-prompt based multi-modal product summary generation,gpt-4-1106-preview,6,"The paper presents a multi-modal product summary generation framework that uses a Generative Pre-trained Language Model with prompts derived from visual attributes, which aligns with the concept of prompt engineering in the sense that it involves designing prompts for guiding text generation. However, the focus seems to be more on the multi-modal interaction and summary generation, rather than on the systematic study of hard prefixes or prompt structures themselves. Therefore, while it is relevant due to its use of prompts, it may not directly address the nuances of prompt engineering as it pertains to hard prefix prompts specifically, hence the rating of 6." -do we still need human assessors? prompt-based gpt-3 user simulation in conversational ai,gpt-4-1106-preview,8,"The study directly addresses a critical aspect of prompt engineering by exploring the generation of synthetic data through prompting a language model, which is a subset of the broader field. It assesses the viability of using prompted synthetic responses as a replacement for human-generated data, an inquiry that overlaps with prompt engineering since it evaluates the quality and utility of the prompts and the resulting data.
The relevance to prompt engineering is high, although not perfect, because it does not focus on 'hard prefix prompts' specifically but rather on the general application of prompts for data generation in AI conversational models." -towards open-vocabulary scene graph generation with prompt-based finetuning,gpt-4-1106-preview,8,"The abstract indicates the use of 'prompt-based techniques' for fine-tuning a pre-trained model in the context of scene graph generation (SGG). Although it does not explicitly mention 'hard prefix prompts,' it does involve the concept of prompt engineering as it leverages prompts to adapt the model to new tasks without updating parameters. This is directly related to studying different prompt engineering strategies, particularly in the open-vocabulary setting. Thus, the relevance to prompt engineering is high but not focused solely on the aspect of hard prefix prompts, hence the rating is not a full 10." -zero-shot cross-lingual transfer of prompt-based tuning with a unified multilingual prompt,gpt-4-1106-preview,9,"The abstract describes research on prompt-based tuning for multilingual pretrained language models with a focus on a unified, language-agnostic prompt, which is highly relevant to the field of prompt engineering. It addresses the challenge of creating prompts that work across multiple languages and demonstrates significant performance improvements, which is a core aspect of engineering effective prompts. The only reason it does not receive a full score is that it does not address 'hard prefix prompts' specifically, but it is still very relevant to the broader topic of prompt engineering." -promptcast: a new prompt-based learning paradigm for time series forecasting,gpt-4-1106-preview,8,"The paper's focus on 'prompt-based time series forecasting (PromptCast)' is highly relevant to the study of prompt engineering as it explores transforming numerical inputs and outputs into prompts, thus framing the forecasting task as a language model problem. This suggests innovative applications of prompt engineering techniques outside of traditional language tasks. The relevance is not a perfect 10 because the paper may not deal specifically with 'hard prefix prompts' and there is no explicit mention of a 'systematic review'. However, it still represents a significant piece of research within the broader field of prompt engineering." -lego-absa: a prompt-based task assemblable unified generative framework for multi-task aspect-based sentiment analysis,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering as it discusses a generative framework that uses task prompts, which are akin to hard-coded prompts, to control the generation of outputs for different tasks in ABSA. The methodology directly relates to how prompts are engineered to produce specific responses from a generative model. Its approach to assemblable task prompts is a novel application within the area of prompt engineering, even if the focus is more on sentiment analysis rather than on hard prefix prompts specifically." -prompt-based connective prediction method for fine-grained implicit discourse relation recognition,gpt-4-1106-preview,8,"The study introduces a Prompt-based Connective Prediction (PCP) method that is relevant to prompt engineering since it discusses instructing pre-trained models to utilize prompts for tasks in natural language processing. This is directly involved with prompt design and its implications for model performance.
Although the main focus is on discourse analysis, the core concept of using prompts to guide model understanding and predictions is inherent to prompt engineering studies. Therefore, the relevance rating is high but not perfect due to the niche application within discourse relation recognition, rather than a broad study of prompt engineering techniques." -prompt-based distribution alignment for domain generalization in text classification,gpt-4-1106-preview,8,"The abstract mentions 'prompt-based learning' or 'prompting' as a key method for improving text classification across different domains. Although the study focuses on domain generalization and distribution alignment, the technique of prompting described is indeed crucial within the understanding of prompt engineering. It speaks to the customization of prompts to align data distributions across domains, which could be understood as an advanced topic in prompt engineering. The study, however, does not directly address 'hard prefix prompts' but explores the broader concept of prompting and its application for domain generalization in natural language processing tasks. The rating is therefore not a full 10, as it does not specifically focus on hard prefix prompts but is still highly relevant due to its broader application in task alignment, which is a subset of prompt engineering." -context variance evaluation of pretrained language models for prompt-based biomedical knowledge probing,gpt-4-1106-preview,9,"The abstract discusses advanced methods in prompt engineering, particularly in the context of biomedical knowledge probing. It details creating 'context variance' prompts, which directly relates to the development of prompt engineering techniques and introduces a new evaluation metric (UCM) for this purpose. These aspects are highly relevant to the study of prompt engineering as they contribute to the understanding and improvement of prompting methods for evaluating language models, though it doesn't explicitly mention 'hard prefix prompts,' hence the rating is not a perfect 10." -zero-shot event detection based on ordered contrastive learning and prompt-based prediction,gpt-4-1106-preview,7,"The relevance to prompt engineering is significant since the abstract mentions the use of prompt-based prediction in a zero-shot natural language processing model. The study's methods directly involve prompt engineering by utilizing prompts to identify trigger words. However, prompt engineering is not the sole focus of the study, as it also involves ordered contrastive learning techniques. Therefore, while prompt engineering is relevant, it may not be the central theme of the research." -prompt-based time series forecasting: a new task and dataset,gpt-4-1106-preview,7,"The paper introduces a novel approach to time series forecasting by leveraging prompt-based methods, which is within the realm of prompt engineering. This is relevant as it explores the adaptation of language models to tasks outside their initial scope (i.e., forecasting) using prompts. However, the study does not focus specifically on 'hard prefix prompts' but on transforming numerical time series forecasting problems into a language model-friendly format. Therefore, it is a contribution to the broader context of prompt engineering rather than a targeted study on the more specific 'hard prefix prompts'."
-prompt-based meta-learning for few-shot text classification,gpt-4-1106-preview,9,"The abstract discusses the application of prompt-tuning within a meta-learning framework for few-shot text classification, which is directly related to prompt engineering. As prompt-based systems are a critical study area within the broader scope of prompt engineering, this work's focus on a Prompt-Based Meta-Learning (PBML) model is highly relevant. It contributes to understanding how prompts can be effectively used in conjunction with meta-learning to enhance performance in low-data regimes. The paper offers insights into the practical application and theoretical underpinning of using prompts in machine learning, which is at the core of prompt engineering studies." -ai illustrator: translating raw descriptions into images by prompt-based cross-modal generation,gpt-4-1106-preview,7,"The study explores a Prompt-based Cross-Modal Generation Framework (PCM-Frame), which is relevant to prompt engineering as it involves using prompts to bridge the semantic gap between text descriptions and image generation. While the field of prompt engineering often refers to optimizing input language for language models, the abstract suggests a broader scope where prompts assist in mapping text to image embeddings. This makes it pertinent to the study of how prompts can be engineered to improve cross-modal generation tasks. However, the paper's focus seems more on the application of prompt engineering in the context of AI illustration and image generation, rather than a comprehensive review of prompt engineering techniques or hard prefix prompts specifically. Hence, the rating is not a full 10." -parabart: a prompt-based method with parabiotic decoder for few-shot named entity recognition,gpt-4-1106-preview,7,"The abstract describes a novel method, ParaBART, for improving few-shot named entity recognition (NER) by enhancing entity boundary detection with a specialized decoder. While it does not directly address 'hard prefix prompts' in the context of prompt engineering, the research does involve 'prompt-based methods' (as mentioned in line 001) in the application of NER. Prompt engineering is a broader field that includes the design and use of prompts to improve model performance in language tasks. Therefore, the relevance to prompt engineering study is significant, but not directly focused on addressing hard prefix prompts specifically, warranting a rating of 7." -clamp: prompt-based contrastive learning for connecting language and animal pose,gpt-4-1106-preview,7,"The abstract discusses the use of prompt-based methods (in the context of CLAMP) to connect language models with animal pose estimation tasks, which is highly relevant to prompt engineering as it involves crafting prompts to facilitate an application of language understanding. The relevance is not a perfect 10 because the study focuses specifically on contrastive learning for animal pose estimation, rather than a broad systematic review of hard prefix prompts in general. Nevertheless, the adaptation and engineering of prompts for a specific task like this contributes to the understanding of how prompts can be effectively utilized in various domains, which is a pertinent aspect of prompt engineering research." 
-promptattack: prompt-based attack for language models via gradient search,gpt-4-1106-preview,8,"The paper discusses 'Prompt Learning', a method directly related to prompt engineering, and addresses security vulnerabilities within this approach, a relevant aspect not often considered in standard prompt engineering studies. The focus is on constructing malicious prompts to reveal security issues, which is a valuable angle in prompt engineering research. Although the paper does not specifically mention a 'hard prefix prompt', it does delve into prompt-based methods and their implications, thus warranting a high relevance rating. However, the rating is not a full 10 because the paper's core topic is security rather than the effectiveness or optimization of prompt engineering itself." -pts: a prompt-based teacher-student network for weakly supervised aspect detection,gpt-4-1106-preview,8,"The paper describes a method that utilizes prompts to enhance the performance of weakly supervised aspect detection by using a teacher-student network structure. This is directly relevant to the field of prompt engineering as it involves constructing and utilizing prompts to train language models more effectively, especially with limited labeled data. The use of hand-crafted and auto-generated prompts also indicates a deeper exploration into prompt methodologies, which is significant for prompt engineering studies. The primary reason the rating is not a 10 is the specificity of the application to aspect detection and the paper's focus on a novel network architecture, which may slightly deviate from a 'comprehensive systematic review' of hard prefix prompts, thus not completely aligning with the broader aspect of the prompt engineering study." -prompt-based metric learning for few-shot ner,gpt-4-1106-preview,8,"The abstract describes a method that uses multiple prompt schemas to enhance label semantics in the context of few-shot named entity recognition, which is relevant to prompt engineering as it involves the design of prompts to influence the model's performance. The proposed method indicates an improvement in metric learning for NER by incorporating prompt-based representations, aligning with the study of how different prompting techniques can affect machine learning tasks. However, it does not explicitly address 'hard prefix prompts,' which may be a more specialized area within the broader field of prompt engineering, hence the rating is not a full 10." -prompt-based zero-shot video moment retrieval,gpt-4-1106-preview,8,"The abstract is highly relevant to prompt engineering as it directly involves the design and usage of prompts ('Proposal Prompt' and 'Verb Prompt') for a zero-shot learning task in video moment retrieval. Although the focus is on video and text, the principles of prompt learning and their application to a zero-shot context align well with studies in prompt engineering, particularly in the innovative use of 'hard prefixes' or structured prompts in neural language models. However, the rating is not a full 10 because it may not directly tackle the methodological aspects of prompt engineering or address a 'hard prefix prompt' in a broader sense but rather applies prompt concepts to a specialized domain."
-on the robustness of dialogue history representation in conversational question answering: a comprehensive study and a new prompt-based method,gpt-4-1106-preview,7,"The title and abstract suggest a study that investigates the robustness of dialogue history representation in Conversational Question Answering (CQA), which does not directly deal with 'prompt engineering' per se. However, the introduction of a 'prompt-based history modeling approach' signifies the study's partial relevance to prompt engineering, as it involves the strategic integration of prompts into the passage text to enhance model performance. The mention of 'textual prompts' indicates that part of the study is concerned with understanding how prompts can affect the outcome of a CQA task. Even though the study is not solely dedicated to 'hard prefix prompts' or prompt engineering in general, the development of a new prompt-based method implies that it could offer insightful data and practices relevant to prompt engineering research. The rating is not higher because the primary focus still seems to be on robustness and not explicitly on the engineering of prompts." -promptda: label-guided data augmentation for prompt-based few shot learners,gpt-4-1106-preview,9,"The abstract describes a study on the use of a novel framework called PromptDA that focuses on data augmentation in the context of prompt-based few-shot learning for natural language understanding tasks. The study appears highly relevant to prompt engineering as it directly addresses the development of prompts for few-shot learners and investigates ways to improve their performance through specialized data augmentation that leverages label semantic information. This relates closely to the study of 'hard prefix prompts' as it pertains to the design and enhancement of prompt-based methods. The only reason the rating is not a 10 is that it doesn't specify whether 'hard prefix prompts' are specifically addressed, but it's clear that the work is valuable to the field of prompt engineering." -nsp-bert: a prompt-based zero-shot learner through an original pre-training task-next sentence prediction,gpt-4-1106-preview,7,"The abstract indicates that the study introduces a novel method for utilizing BERT's Next Sentence Prediction (NSP) in zero-shot scenarios, which contrasts with the token-level methods most prompt-based learning approaches use. Since prompt engineering is fundamentally about designing inputs and templates that effectively harness the capabilities of language models, the methods proposed in the paper for various NLP tasks and prompt construction templates contribute to the field of prompt engineering. Additionally, the abstraction of token-level constraints aligns with the goal of refining prompt engineering to achieve better performance with language models. However, the paper appears to focus more on the pre-training task and zero-shot learning rather than the detailed intricacies of prompt engineering, which is why the relevance is scored as a 7 rather than higher." -unified multimodal pre-training and prompt-based tuning for vision-language understanding and generation,gpt-4-1106-preview,7,"The abstract discusses the use of prompt-based methods for fine-tuning models on different downstream tasks. This is directly related to prompt engineering as it involves designing and choosing the right prompts for effective model performance.
The information provided is relevant, as the study deals with how prompts can be used in model tuning, particularly in few-shot scenarios, although it does not specifically discuss 'hard prefix prompts'. This might slightly reduce the relevance as the prompt seems to inquire about a systematic review on a specific type of prompt known as 'hard prefix prompts', which is not mentioned in the abstract. Nevertheless, the general relevance to prompt engineering is still significant." -adversarial robustness of prompt-based few-shot learning for natural language understanding,gpt-4-1106-preview,7,"The study focuses on prompt-based few-shot learning (FSL) methods within natural language understanding, which is a subset of prompt engineering as it investigates the utilization of prompts for model fine-tuning. Evaluating the adversarial robustness of prompt-based FSL is relevant as it considers the stability and reliability of these prompts under adversarial conditions, a crucial aspect for prompt engineering. However, the study is more focused on the robustness to adversarial attacks rather than on the broader aspects of prompt engineering such as prompt design, optimization, or the systematic review of 'hard prefix' prompts. Therefore, while the study is highly relevant to a specialized area of prompt engineering, it does not cover the full scope of a 'comprehensive systematic review on hard prefix prompts,' so it gets a rating of 7." -"don’t prompt, search! mining-based zero-shot learning with language models",gpt-4-1106-preview,6,"The paper discusses the limitation of the traditional prompt-based approach for zero-shot learning with language models and offers an alternative mining-based approach. It touches upon the subject of how prompts are used and their sensitivity to the task, which is relevant to prompt engineering studies. However, the primary focus seems to be on the mining technique rather than the engineering or optimization of prompts themselves. Therefore, while it relates to the field of prompt engineering, it does so from a perspective of finding an alternative to hard-coded prompts, rather than improving or systematically reviewing them." -decorate the newcomers: visual domain prompt for continual test time adaptation,gpt-4-1106-preview,8,"The paper described involves the concept of 'prompt learning' from NLP but applies it to the visual domain, suggesting a novel crossover of prompt engineering techniques to continual test-time adaptation for images. While the research isn't about textual 'hard prefix prompts' in NLP, the principles of designing prompts for domain adaptation and mitigating issues like catastrophic forgetting are closely related to prompt engineering in how they shape model inputs for better performance. Thus, it is relevant but not directly focused on the prompt engineering study in the text domain." -prompt-tuned code language model as a neural knowledge base for type inference in statically-typed partial code,gpt-4-1106-preview,6,"The study presents an approach that incorporates elements of prompt engineering by fine-tuning a language model with a specific task-oriented prompt ('pre-train, prompt and predict' paradigm). Although the primary focus is not on prompt engineering for natural language processing, but rather on type inference within code, the use of prompts to guide the model suggests relevance. However, it is specialized for code language models, which may not fully align with more generalized prompt engineering studies.
-"toward human readable prompt tuning: kubrick's the shining is a good movie, and a good prompt too?",gpt-4-1106-preview,9,"The paper discussed is highly relevant to prompt engineering as it addresses the direct issue of how to create effective and fluent prompts through a novel tuning method. It contributes to the understanding of what makes a prompt effective, ensuring topical relevance and adjusting prior probabilities. The only reason it is not rated a perfect 10 is that the prompt engineering study specifically asked for 'hard prefix prompts,' which this summary does not explicitly state that the paper addresses. However, the general principles and methodology presented are very likely applicable to prompt engineering as a whole." -parameter-efficient prompt tuning makes generalized and calibrated neural text retrievers,gpt-4-1106-preview,9,"The abstract discusses prompt tuning, a form of prompt engineering, in the context of neural text retrievers. It emphasizes parameter efficiency, which is a crucial factor in the design and use of prompts for AI models. Moreover, the study explores prompt tuning's impact on generalizability across various domains, directly relating to advancements in prompt engineering methodologies. Hence, it is highly relevant to the prompt engineering study, although it focuses on a specific application rather than a broad range of use cases." -relation extraction as open-book examination: retrieval-enhanced prompt tuning,gpt-4-1106-preview,8,"The abstract discusses a novel application of prompt tuning in the context of relation extraction, by utilizing a retrieval-enhanced prompt tuning approach. While it does not directly address 'hard prefix prompts' or a 'comprehensive systematic review', it certainly falls within the broader category of prompt engineering studies. The focus on improving performance on hard or rare patterns, and the method of combining parametric and non-parametric techniques, relate closely to the challenges prompt engineering aims to address, especially in the context of improving prompt-based models' generalization capabilities. Thus, the relevance is high, although not perfect, due to the absence of a specific focus on 'hard prefix prompts' or a 'systematic review' aspect." -prompt deep light-weight vessel segmentation network (plvs-net),gpt-4-1106-preview,6,"The relevance to prompt engineering in this study is moderate. The use of 'prompt blocks' within the network architecture indicates an innovation related to how the network processes information, which might be relevant to prompt engineering in the broader sense of designing inputs that improve the performance of a neural network. However, the primary focus appears to be on biomedical image segmentation, rather than the development or study of prompting methods for natural language processing or other general AI applications. Thus, while the term 'prompt' is used, it may not directly align with the typical context of prompt engineering, which is often related to improving AI responses or behavior based on textual input." -rethinking reinforcement learning for recommendation: a prompt perspective,gpt-4-1106-preview,7,"The relevance to prompt engineering in this study lies in the proposed Prompt-Based Reinforcement Learning (PRL) framework for recommendations, which intersects with the field of prompt engineering by leveraging state-reward inputs as prompts during the decision-making process. 
The study doesn't center on prompt engineering as it typically applies to language models or processes of tuning textual inputs, but it does conceptualize a similar method within the RL context, framing prompts as essential elements in training RL models for improved recommendation systems. Therefore, its relevance is notable but not directly central to the typical application of prompt engineering, which more commonly refers to optimizing inputs for generative language models." -point prompt tuning for temporally language grounding,gpt-4-1106-preview,7,"The abstract discusses 'Point Prompt Tuning (PPT)' as a novel approach that integrates prompt-based strategies within a multi-modal learning framework, specifically applied to the task of temporally language grounding (TLG) in videos. Since the methodology involves formulating a query rewriting strategy as prompts and integrating it with a multi-modal transformer, it directly relates to the concept of prompt engineering. The relevance to prompt engineering is quite high since it involves designing and using prompts to improve task performance. However, it is not a comprehensive systematic review on hard prefix prompts, as the initial prompt suggested, but rather an application of prompt tuning strategies in a specific domain. Therefore, the rating is not a perfect 10, but still significant due to the use of prompt engineering techniques." -generative prompt tuning for relation classification,gpt-4-1106-preview,9,"The abstract presents a study that is highly relevant to the field of prompt engineering. It addresses the limitations of the existing prompt tuning methods when dealing with complex label spaces for relation classification tasks. By introducing a generative prompt tuning approach that reformulates the problem into an infilling task, the study directly applies to developing new techniques within prompt engineering. The relevance is therefore rated a 9 out of 10 because it contributes significantly to the understanding and development of prompt-based methods, although it focuses specifically on relation classification rather than prompt engineering in general." -prompt learning for few-shot dialogue state tracking,gpt-4-1106-preview,8,"The paper described is relevant to prompt engineering as it discusses a prompt learning framework for few-shot dialogue state tracking (DST), which is inherently related to the utilization of prompts to improve model performance with limited labeled data. The use of value-based prompts and an inverse prompt mechanism connects directly to the design and implementation of prompts in the context of leveraging pre-trained language models (PLM). While the study is not specifically about 'hard prefix prompts' and does not perform a systematic review, it is still highly relevant to the broader field of prompt engineering due to its focus on improving the efficiency of knowledge probing from PLMs using specially designed prompts, which is an essential aspect of prompt engineering. Therefore, the paper receives a high relevance score." -"prompt, generate, then cache: cascade of foundation models makes strong few-shot learners",gpt-4-1106-preview,7,"The abstract discusses the use of GPT-3 to 'Prompt, Generate, then Cache', indicating an application of language generation for creating prompts, which is relevant to prompt engineering.
Additionally, the integration of multi-modal models such as CLIP and DALL-E implies the use of prompts to facilitate communication across language and image domains, which is an advanced form of prompt engineering. However, the primary focus of the paper appears to be on few-shot learning and integrating diverse pre-training knowledge, rather than on a systematic review of hard prefix prompts specifically. Therefore, while it is related to prompt engineering, it is not directly focused on a comprehensive review of that domain, hence the rating is not a full 10." -instructionner: a multi-task instruction-based generative framework for few-shot ner,gpt-4-1106-preview,7,"The relevance of the provided abstract to prompt engineering is quite significant, as it discusses the usage of prompt-based methods in few-shot learning and the refinement of those prompts for a specific downstream task, which is named entity recognition (NER). While the focus of the study is on the development of a framework for NER, the essence of reformulating tasks as generation problems and enriching source sentences with task-specific instructions is closely related to prompt engineering. This process involves creating prompts that effectively guide the language model to perform a desired task. However, because the abstract does not explicitly mention 'hard prefix prompts' or conduct a systematic review on prompt engineering, the rating is not a full 10." -finding skill neurons in pre-trained transformer-based language models,gpt-4-1106-preview,7,"The paper is moderately relevant to prompt engineering study. It doesn't directly focus on generating or optimizing prompts, which would be the core subject of a prompt engineering study. However, the identification of 'skill neurons' within transformers after prompt tuning relates to understanding how prompts can affect neural language models and how specific neurons contribute to processing tasks after prompt-based training. This has implications for prompt engineering, as insight into which neurons are 'skill neurons' might inform how to structure or alter prompts to target these neurons and improve task performance." -good examples make a faster learner: simple demonstration-based learning for low-resource ner,gpt-4-1106-preview,8,"The abstract details a study on demonstration-based learning, which is a part of prompt-based learning methodologies. Although it focuses specifically on named entity recognition (NER), the principles of designing demonstrations and templates are directly related to the broader field of prompt engineering. The study's emphasis on the effect of different demonstration strategies on performance and the exploration of in-context learning provide insights that are applicable to prompt engineering. The relevance to prompt engineering is notable due to the systematic study of these strategies, which is a component of the hard prefix prompts mentioned in the initial query. However, the rating is not a full 10, as the abstract suggests a specific application (NER) rather than a focus on prompt engineering in general." -the utility of an evidence-based lecture and clinical prompt as methods to improve quality of care in colorectal cancer screening,gpt-4-1106-preview,4,"The study appears to investigate the effectiveness of clinical prompts in a medical setting, which tangentially relates to the concept of prompt engineering.
While not directly studying 'hard prefix prompts' or prompt engineering for AI or computational systems, the principle of using prompts to improve performance outcomes has some relevance to the broader field of study. However, the specific application to colorectal cancer screening and the focus on evidence-based lectures differentiate this from the typical context of prompt engineering in technology, which usually refers to the designing of inputs to elicit desired responses from AI models or systems." -promptbert: improving bert sentence embeddings with prompts,gpt-4-1106-preview,9,"The paper describes a method that directly pertains to prompt engineering, specifically within the context of improving sentence embeddings using a novel contrastive learning method named PromptBERT. The emphasis on overcoming the limitations of BERT by integrating prompts into the sentence embedding process is highly relevant to the study of prompt engineering. The research not only introduces a new prompt-based embedding method but also explores prompt representation and searching methods, which are central themes in prompt engineering. The proposed unsupervised training objective with template denoising is similarly a significant contribution to this field. The only reason the score is not a full 10 is that it doesn't mention 'hard prefix prompts' explicitly, but the overall context is very much applicable to the subject of prompt engineering." -how can we know what language models know?,gpt-4-1106-preview,9,"The paper directly addresses a core aspect of prompt engineering by focusing on the automatic generation of high-quality and diverse prompts to elicit more accurate information from language models. Improving prompt quality is a fundamental part of prompt engineering, and the paper's experimental results on the enhancement of LM accuracy are highly relevant to studies of prompt effectiveness. The slight deduction from a perfect score is due to the abstract not specifying 'hard prefix prompts', indicating the review might not focus exclusively on that particular subset of prompt engineering." -realfusion 360° reconstruction of any object from a single image,gpt-4-1106-preview,8,"The abstract describes the use of a conditional image generator and the engineering of a prompt to improve the neural network's ability to 'dream up' or synthesize novel views of an object from a single image. This directly relates to the field of prompt engineering, as the research involves designing and refining a prompt to guide an AI model to perform a specific task more effectively. The relevance to prompt engineering study is high because it involves a practical application of prompt design to achieve better results in an AI-driven task. The score is not a full 10 because the abstract focuses on the application of this prompt engineering in the context of 3D reconstruction rather than the study of prompt engineering itself." -active prompting with chain-of-thought for large language models,gpt-4-1106-preview,9,"The paper addresses an advanced technique within prompt engineering, specifically for large language models (LLMs), by introducing active prompting with example-based CoT reasoning. This is highly relevant to the field of prompt engineering as it involves creating task-specific example prompts and evaluating their effectiveness for LLMs' performance on complex reasoning tasks.
The mention of uncertainty metrics and the adaptation of the active learning framework to prompt design underscore the paper's direct and substantial contribution to developing and improving prompting strategies. The reason it's not a 10 is that it doesn't cover 'hard prefix prompts', which refer to a more specific subset of prompt engineering techniques not explicitly mentioned in the abstract." -blended diffusion for text-driven editing of natural images,gpt-4-1106-preview,6,"The paper's relevance to prompt engineering is moderate as it deals with the application of language prompts in the context of image editing. Even though the main focus is on the use of natural language prompts to direct image edits, which is related to how prompts are engineered to guide machine learning models, it is not specifically focused on the study of 'hard prefix prompts' or the structure and efficacy of prompts in a general sense. The relevance comes from the intersection with prompt engineering in the domain of combining text prompts with image processing models, which may offer insights into how to better design prompts for specific tasks like image editing. However, without a direct analysis of the design, structure, or impact of the prompts themselves, its relevance is not maximal." -gpt3mix: leveraging large-scale language models for text augmentation,gpt-4-1106-preview,7,"The paper is highly relevant to the study of prompt engineering as it discusses a method to leverage large-scale language models, like GPT-3, using prompts to generate text for data augmentation. This is intrinsically linked to the concept of prompt engineering, which involves designing prompts to elicit desired responses from language models. However, the focus of the paper is more on the application of these prompts for data augmentation rather than a systematic review on hard prefix prompts specifically. The relevance is high because the technique proposed is a practical application of prompt engineering, but it is not a comprehensive review on the topic." -warp: word-level adversarial reprogramming,gpt-4-1106-preview,8,"The abstract presents research that extends earlier work on automatic prompt generation, which is highly relevant to the prompt engineering field. Adversarial reprogramming, as discussed in the paper, is a method for learning task-specific prompts that improve the performance of language models on various tasks. The focus on prompt generation suggests a strong relevance to studies on 'hard prefix prompts' or engineered prompts intended to direct model behavior. However, as the abstract does not explicitly mention 'hard prefix prompts', the rating is not a full 10." -prompting for multimodal hateful meme classification,gpt-4-1106-preview,8,"The study appears to be highly relevant to prompt engineering as it involves the creation of a prompt-based model (PromptHate) that specifically addresses the task of hateful meme classification by leveraging the capabilities of pre-trained language models through the use of prompts. The use of 'simple prompts' alongside 'in-context examples' indicates a direct application of prompt engineering techniques to extract and utilize implicit knowledge from the models. However, the study seems to focus on a specific application of prompts in the context of multimodal tasks (hateful meme classification), which may slightly limit its generalizability to prompt engineering as a whole.
Despite this, the study's effort to optimize prompts for a complex task contributes valuable insights to the field of prompt engineering." -badprompt: backdoor attacks on continuous prompts,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it focuses on the security aspect of continuous prompt-learning algorithms, which are a core component of prompt-based learning paradigms. Although the study does not directly analyze 'hard prefix prompts', instead investigating 'backdoor attacks' on continuous prompts, understanding such vulnerabilities is crucial for the overall field of prompt engineering, particularly for ensuring the robustness and reliability of models using prompts. However, it may be slightly less relevant if the specific focus of the inquiry is on 'hard prefix prompts,' as this paper investigates continuous prompts, which could be conceptually distinct." -multilingual relation classification via efficient and effective prompting,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it focuses on the application of prompting methods in multilingual relation classification, a specific area within NLP tasks that can benefit from engineered prompts. The research introduces a method for constructing prompts efficiently and evaluates its effectiveness in various scenarios, including low-resource languages. The relevance to hard prefix prompts is indirect since the focus is more on the application and efficacy of prompt methods rather than on the systematic analysis of different prompt structures, but it still contributes valuable insights to the field of prompt engineering." -knowledge prompting in pre-trained language model for natural language understanding,gpt-4-1106-preview,9,"The abstract describes a method for incorporating factual knowledge into Pre-trained Language Models (PLMs) via a 'knowledge prompting' technique, which is highly relevant to prompt engineering. The study not only discusses the integration of knowledge prompts with PLMs but also introduces novel knowledge-aware tasks. This indicates a direct application and exploration of prompting mechanisms within language models, thereby warranting a high relevance rating. A point is withheld because the abstract does not explicitly mention 'hard prefix prompts,' suggesting that while the paper is relevant to prompt engineering, it may not specifically cover the systematic review aspect of hard prefix prompts." -multi-stage pre-training for automated chinese essay scoring,gpt-4-1106-preview,7,"The relevance to prompt engineering is significant given that the paper outlines a method that requires fine-tuning an essay scoring model on different types of prompts. This aligns with prompt engineering since the quality and characteristics of these prompts directly influence the training and performance of the AI model. Furthermore, including weakly supervised and cross-prompt fine-tuning stages implies a deep understanding of how prompts interact with the model. However, the focus appears to be more on automated essay scoring than on hard prefix prompts specifically, which is why the score is not higher." -punifiedner: a prompting-based unified ner system for diverse datasets,gpt-4-1106-preview,8,"The paper presents PUnifiedNER, a model leveraging prompt learning, which is a subfield of prompt engineering.
This NER system's ability to train across multiple datasets and efficiently recognize a wide range of entity types by using prompts directly relates to the study of prompt design and utilization within models, a key aspect of prompt engineering. The relevance is not maximal since the abstract does not specifically discuss the nature of the 'hard prefix prompts' mentioned in the initial query, but it does focus on prompt-based learning, which is closely related to the field of study in question." -prompting through prototype: a prototype-based prompt learning on pretrained vision-language models,gpt-4-1106-preview,8,"The abstract describes a relevant method in the field of prompt engineering, specifically focusing on a prototype-based prompting approach for few-shot image recognition tasks using pretrained vision-language models (PVLMs). Although the study presented is not directly examining 'hard prefix prompts', it is relevant to the broader topic of prompt engineering as it explores how prompts can be optimized and tailored for specific instances or tasks. The prototype-based approach is an innovative instance-level technique that directly contributes to the understanding and development of prompt-based methods in machine learning. The high rating reflects the study's potential contributions to the field of prompt engineering, despite not addressing hard prefix prompts explicitly." -self-prompting large language models for open-domain qa,gpt-4-1106-preview,8,"The abstract describes a research study focusing on the use of Large Language Models (LLMs) for Open-Domain Question Answering (ODQA) by introducing a Self-Prompting framework that relies on in-context learning through prompts generated by the LLMs themselves. This approach directly involves the concept of 'prompt engineering,' as it requires the design and structuring of prompts to effectively guide LLMs to produce useful pseudo QA pairs for learning. It is highly relevant to prompt engineering because it explores an innovative way of using prompts to leverage the internal knowledge of LLMs, thereby eliminating the dependency on external datasets. Although the study does not focus specifically on 'hard prefix prompts', it does tackle the broader area of how prompts can be used to enhance the performance of LLMs in a specific task, which makes it quite relevant to the field of prompt engineering." -dialogue state tracking with a language model using schema-driven prompting,gpt-4-1106-preview,8,"The abstract discusses a novel approach that employs 'schema-driven prompting' for dialogue state tracking, which is relevant to prompt engineering as it involves designing prompts that guide a language model's behavior. The use of prompts for task-aware history encoding aligns with the subject of prompt engineering. Although it does not directly reference 'hard prefix prompts', the concept of schema-driven prompts is closely related to the topic of how prompts affect the performance of language models. The high rating reflects the relevance of schema-driven prompting in the broader field of prompt engineering study, despite it not being an exact match for 'hard prefix prompts'."
-mapl: parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting,gpt-4-1106-preview,7,"The abstract describes a method (MAPL) for adapting pre-trained unimodal models for few-shot learning in multimodal vision-language settings, which is relevant to prompt engineering as it involves leveraging existing models to perform new, related tasks with minimal training data. However, the focus is on a parameter-efficient adaptation technique rather than the systematic study of prompt design or hard prefix prompts specifically, hence the rating of 7 reflecting substantial relevance but not a direct focus on the prompt engineering methodology." -transprompt: towards an automatic transferable prompting framework for few-shot text classification,gpt-4-1106-preview,8,"The mentioned study focuses on a prompting framework aimed at few-shot text classification tasks, which is highly relevant to prompt engineering. The transferability aspect of the prompts across similar NLP tasks suggests novel techniques in prompt design and application, contributing to the field of prompt engineering. The use of cross-task transferable knowledge is especially pertinent, although the provided abstract does not specifically mention 'hard prefix prompts,' which was the topic requested. Therefore, while the study is closely related to prompt engineering, it may not entirely focus on the subset of 'hard prefix prompts,' leading to a slightly lower rating." -context-faithful prompting for large language models,gpt-4-1106-preview,9,"The paper presents methods for improving the performance of Large Language Models (LLMs) on context-sensitive tasks using advanced prompt engineering techniques. Although it does not explicitly mention 'hard prefix prompts,' the focus on 'carefully designed prompting strategies' is highly relevant to the broader field of prompt engineering. Opinion-based prompts and counterfactual demonstrations are specific types of prompts that could fall within the scope of a systematic review on hard prefix prompts. Therefore, the paper is likely to contribute valuable insights to the study of prompt engineering." -cora: adapting clip for open-vocabulary detection with region prompting and anchor pre-matching,gpt-4-1106-preview,4,"The abstract describes an approach to improve open-vocabulary detection by using region prompting in combination with a visual-language model, which could be relevant to prompt engineering in that it involves the adaptation of prompts to improve recognition tasks. However, the focus is on object detection and adapting existing models to new tasks, rather than on the systematic study of hard prefix prompts specifically. While the method of region prompting could potentially inform prompt engineering practices, the direct relevance to the study of hard prefix prompts is tangential." -information and communication technology based prompting for treatment compliance for people with serious mental illness.,gpt-4-1106-preview,5,"The provided abstract discusses the use of ICT-based prompting to improve treatment compliance in people with serious mental illness, which aligns with the broader concept of 'prompts' in behavior modification. However, the term 'hard prefix prompts' typically refers to a specific approach in natural language processing or AI-related prompt engineering, which is not the focus of this study.
Therefore, the relevance is moderate as it deals with prompts in a different context than what 'prompt engineering study' typically would imply in technological or AI research." -prompting technologies: a comparison of time-based and context-aware transition-based prompting.,gpt-4-1106-preview,7,"The study presented in the abstract is relevant to prompt engineering as it investigates the timing and context of delivering prompts, which can be crucial for the effectiveness of interventions in cognitive tasks. Although the study does not directly address 'hard prefix prompts,' which are specifically designed prompts in language models or AI environments, the underlying principles of effective prompting are closely related to prompt engineering. The comparison between time-based and context-aware prompting can inform how to design better prompts by understanding user interaction and response patterns. Therefore, this study holds relevance for the broader field of prompt engineering, especially in user-centric applications where user experience and interaction timing are important, even though it doesn't directly deal with hard prefix prompts." -"self-contradictory hallucinations of large language models: evaluation, detection and mitigation",gpt-4-1106-preview,8,"The provided abstract is highly relevant to prompt engineering as it discusses a prompting-based framework to address self-contradictions in large language models. Self-contradiction is a critical issue that can affect the effectiveness of prompts, and the study's focus on evaluation, detection, and mitigation is directly related to improving the performance of prompts in generating consistent and reliable output from LMs. The high relevance rating is justified because the paper tackles the challenge of crafting prompts that can lead to better-managed discourse by the LM, which is a core aspect of prompt engineering. While the study does not specifically mention 'hard prefix prompts,' it is closely allied with prompt engineering principles and practices." -a prompting-based approach for adversarial example generation and robustness enhancement,gpt-4-1106-preview,9,"The paper is highly relevant to prompt engineering as it focuses on the development of prompt-based adversarial attacks and a robustness enhancement technique that uses prompts to improve model resistance to attacks. It indicates the potential of prompting paradigms in identifying and mitigating the vulnerabilities of pre-trained language models, which are at the core of prompt engineering. The only reason it is not rated a full 10 is that it is more focused on the application of prompts for adversarial purposes rather than a comprehensive study of hard prefix prompts in general." -dictionary-based phrase-level prompting of large language models for machine translation,gpt-4-1106-preview,8,"The article titled 'dictionary-based phrase-level prompting of large language models for machine translation' is highly relevant to prompt engineering as it describes a novel method for improving machine translation through the use of prompt engineering techniques. Specifically, it explores the use of large language models for MT and addresses the challenge of rare words by incorporating bilingual dictionaries into prompts, which directly falls within prompt engineering. 
The rating is not a full 10 because the study focuses on machine translation and the use of dictionaries for assisting translation of rare words which is a specific application of prompt engineering rather than a comprehensive review of hard prefix prompts in general." -fine-grained controllable text generation using non-residual prompting,gpt-4-1106-preview,8,"The abstract presents an approach to improve fine-grained control of text generation in Causal Language Models (CLMs) using an encoder-decoder architecture and intermediate text prompts. While the study is focused on text generation control rather than prompt engineering directly, it is highly relevant to the field of prompt engineering as it proposes a method for enhancing the control and versatility of prompts within these models. The introduction of intermediate prompts as a mechanism for controlling text generation could be applicable to 'hard prefix prompts' research, hence the high relevance score. However, it does not address 'hard prefix prompts' specifically, which prevents a full score." -bloom+1: adding language support to bloom for zero-shot prompting,gpt-4-1106-preview,4,"The provided document abstract pertains to language model adaptation, specifically for the BLOOM model, and how it is applied to zero-shot prompting in new languages. While the study addresses issues relevant to language models and prompting, it does not directly deal with the engineering of prompts, especially with 'hard prefix prompts' as mentioned in the original query. The relevance lies in the broader context of zero-shot learning and language adaptation, which can impact the effectiveness of prompts in multiple languages. However, since it doesn't focus on the specific design or structuring of prompts, or the concept of 'hard prefix prompts', the rating is moderately low." -understanding and improving visual prompting: a label-mapping perspective,gpt-4-1106-preview,7,"The study deals with visual prompting (VP), which is closely related to the concept of 'prompt engineering' in the sense that both involve techniques for effectively leveraging pre-trained models for new tasks. However, the focus on 'label-mapping' and visual tasks diverges from the typical context of 'hard prefix prompts,' which often relates to text prompts in natural language processing. Still, the principles investigated can be relevant to prompt engineering in a broader sense as it explores the relationship between prompting and label mapping to improve task accuracy." -automatic multi-label prompting: simple and interpretable few-shot classification,gpt-4-1106-preview,9,"The study presents a new method within the field of prompt engineering, directly aiming to improve the efficiency and efficacy of prompt-based few-shot text classification. As prompt engineering is a critical aspect of utilizing pretrained language models, and the paper offers a systematic approach to select label mappings for prompts, it is highly relevant to the field of prompt engineering. The only reason it does not receive a 10 is because it does not specifically address 'hard prefix prompts,' but rather prompt-based learning in a broader sense." -fs-detr: few-shot detection transformer with prompting and without re-training,gpt-4-1106-preview,7,"The paper discusses a new few-shot detection transformer (FS-DETR) that uses visual prompting, which is a form of prompt engineering. Visual prompts are used to provide the model with additional context without re-training. 
While the study does not specifically focus on 'hard prefix prompts', it does explore the concept of using prompts in a transformer-based model, which is a relevant aspect of prompt engineering. Therefore, the relevance to prompt engineering is significant but not directly focused on 'hard prefix prompts' which may suggest a slightly lower rating." -prompting contrastive explanations for commonsense reasoning tasks,gpt-4-1106-preview,9,"The study directly involves the use of language models to generate explanations for commonsense reasoning tasks by contrasting alternatives, which is a form of prompt engineering. This approach modifies how prompts are presented to the language model to elicit more informative and justifiable outputs, closely aligning with the concept of 'hard prefix prompts' where the prompt structure is critical to guide the language model's generation process. The relevance is high because the research focuses on improving the interpretability and effectiveness of prompts given to PLMs." -enhancing cross-lingual prompting with mask token augmentation,gpt-4-1106-preview,8,"The title 'Enhancing Cross-Lingual Prompting with Mask Token Augmentation' suggests a focus on improving the effectiveness of prompts within the context of multilingual language models. The abstract confirms that the paper investigates prompt-based approaches, particularly in cross-lingual scenarios, and proposes a method to optimize this process. Although the study deals with 'prompting' in the broader sense of language model applications and doesn't specify 'hard prefix prompts', it is still highly relevant to the field of prompt engineering. It presents empirical analysis and a novel framework for prompt enhancement. However, without explicit mention of 'hard prefix prompts', the rating is not a full 10." -generated knowledge prompting for commonsense reasoning,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering since it discusses 'generated knowledge prompting,' which is a method of using generated knowledge as a prompt to enhance performance in commonsense reasoning tasks. This falls within the purview of prompt engineering as it involves the strategic manipulation of inputs to a language model to garner better performance. Although it does not specifically mention 'hard prefix prompts,' it does approach the broader topic of how prompts can be used to integrate external knowledge into a language model's reasoning process, which may be beneficial to those studying ways to optimize prompting techniques." -dynamic prefix-tuning for generative template-based event extraction,gpt-4-1106-preview,7,"The abstract discusses a generative template-based event extraction method that utilizes dynamic prefix (GTEE-DynPref), which is highly relevant to prompt engineering as it involves type-specific prefixes that are adaptively integrated with context information. This suggests an innovation in how prompts are engineered to be context-specific rather than static, contributing to the study of prompts in NLP tasks. However, the focus on event extraction as a specific application may slightly limit the relevance to the broader field of prompt engineering since it doesn't address prompt engineering in a variety of other AI model contexts." -an empirical study of gpt-3 for few-shot knowledge-based vqa,gpt-4-1106-preview,7,"The paper describes a novel approach to using GPT-3 with prompts, specifically tailored for knowledge-based visual question answering (VQA). 
Although the primary focus is on VQA and not on 'hard prefix prompts' in general, the method of incorporating prompts using image captions is indeed relevant to the broader field of prompt engineering. The study explores how prompts can effectively guide a language model to utilize its latent knowledge base for a specific task. The systematic investigation into what text formats best describe image content and how to select in-context examples could provide valuable insights for prompt engineering studies." -interactive-chain-prompting: ambiguity resolution for crosslingual conditional generation with interaction,gpt-4-1106-preview,7,"The study's focus on 'interactive-chain prompting' as a mechanism to resolve ambiguities in crosslingual conditional generation suggests a significant overlap with prompt engineering techniques, especially within the context of natural language processing and machine learning. Even though the paper does not explicitly study 'hard prefix prompts,' the proposed method represents a form of advanced prompting strategy that can be valuable in the broader field of prompt engineering. The paper could hence provide insights into the design and effectiveness of complex prompting mechanisms, which is relevant for the study of prompt engineering. However, since the paper's primary focus is not on prompt engineering but on improving translation quality through interaction, the rating is not a full 10." -fast and constrained absent keyphrase generation by prompt-based learning,gpt-4-1106-preview,7,"The prompt engineering relevance of the study is substantial, considering it details a novel approach for keyphrase generation using prompt-based learning, which falls under the domain of controlled natural language generation—a key aspect of prompt engineering. The proposed method's constrained generation technique, which uses prompts derived from keywords to guide the production of absent keyphrases, is closely related to the concept of 'hard prefix prompts' where prompts direct the generative process. Although the main focus of the study is on efficient and consistent keyphrase generation rather than prompt engineering per se, the techniques employed for creating and utilizing prompts in the learning process have significant implications for the field of prompt engineering. It demonstrates a method to control and speed up the language generation process, which are key challenges in the development of efficient prompt engineering strategies. Nonetheless, the relevance is not given a full score as the primary focus seems to be on absent keyphrase generation rather than on the prompt engineering itself." -using simple technology to prompt multistep tasks in the home for people with dementia: an exploratory study comparing prompting formats,gpt-4-1106-preview,5,"The study provides insights into the design of prompts for a specific user group (people with dementia) and highlights that the effectiveness of prompts can be context-dependent, which offers a partial relevance to the general field of prompt engineering. However, the study is focused on cognitive impairment and lacks a direct connection to the broader concepts and methodologies of engineering prompts for software or AI interactions. Therefore, the relevance is moderate." -prompt-based re-ranking language model for asr,gpt-4-1106-preview,8,"The abstract discusses the application of a prompt-based method in the context of re-ranking for Automatic Speech Recognition, which is a form of prompt engineering. 
Although it does not directly address 'hard prefix prompts' in the systematic review sense, it describes a practical application of prompts in a machine learning model, BERT, indicating an overlap with prompt engineering studies. Therefore, the relevance is significant but not complete, as the focus is on a specific use-case rather than a broad analysis of prompt engineering techniques." -lfpt5: a unified framework for lifelong few-shot language learning based on prompt tuning of t5,gpt-4-1106-preview,7,"The paper presents a framework for lifelong few-shot language learning based on prompt tuning of T5, which is relevant to the concept of prompt engineering. Although the main focus is on lifelong learning and few-shot learning capabilities, the utilization of prompt tuning indicates that the work contributes to the understanding of how prompts can be engineered and optimized for specific language tasks. Additionally, the generation of pseudo samples for preventing forgetting involves creating prompts that are conducive to the model's learning process. Therefore, the paper has significant relevance to prompt engineering, despite not focusing exclusively on 'hard prefix prompts.'" -cold-start data selection for better few-shot language model fine-tuning: a prompt-based uncertainty propagation approach,gpt-4-1106-preview,9,"The abstract describes a study focusing on a prompt-based data selection method (PATRON) for fine-tuning pre-trained language models, which is highly relevant to prompt engineering. The mention of designing a prompt-based uncertainty propagation approach directly involves the development and refinement of prompts, and thus it directly contributes to the study of prompt engineering. The 'partition-then-rewrite (PTR) strategy' is slightly less relevant to the core concept of 'hard prefix prompts' but still within the domain of prompt engineering. The only reason the rating is not a full 10 is that the detailed application to 'hard prefix prompts' is not specified, as this technique seems broader than just hard prefix prompts, covering aspects such as data selection and sample diversity." -large language models are state-of-the-art evaluators of translation quality,gpt-4-1106-preview,5,"The study focuses on the use of large language models for evaluating translation quality, which indirectly relates to prompt engineering through the application of zero-shot prompting and comparison of prompt variants. However, it is more centered on the application of language models for translation assessments rather than the principles or effects of prompt engineering itself. Although understanding how different prompts impact the quality evaluation by a language model is relevant, the core of the study is translation quality assessment rather than prompt engineering." -visual prompt based personalized federated learning,gpt-4-1106-preview,7,"The paper presents a novel personalized federated learning framework that uses visual prompts to capture the data distribution information of clients. This approach is relevant to the study of prompt engineering because it involves the use of prompts (visual in this case) as a mechanism to improve the performance of machine learning models. While the term 'hard prefix prompts' typically refers to textual prompts, the use of visual prompts in this context is an extension of the idea to the visual domain. 
Hence, the relevance is substantial due to the innovation in prompt utilization, although it may not directly address 'hard prefix prompts' as understood in natural language processing." -memobert: pre-training model with prompt-based learning for multimodal emotion recognition,gpt-4-1106-preview,8,"The paper's abstract discusses the use of a prompt-based method in the context of multimodal emotion recognition, which is highly relevant to prompt engineering. The relevance is underscored by the fact that the prompt-based learning is used to redefine a downstream task, which is a core area of interest in prompt engineering studies. However, the focus on emotion recognition rather than hard prefix prompts specifically means it is not entirely focused on prompt engineering, hence the rating is not a perfect 10." -prompt-based text entailment for low-resource named entity recognition,gpt-4-1106-preview,7,"The abstract discusses a methodology for adapting pre-trained language models to named entity recognition tasks by changing the task to text entailment with entity type-specific prompts. This is related to prompt engineering as it involves crafting prompts to interact with language models and manipulate their behavior to improve performance on specific tasks without extensive labeled data. However, the term 'hard prefix prompt' is not explicitly mentioned, indicating that the study might not be focused on hard prefix prompts specifically but rather on prompt-based methods in a broader sense. The relevance is significant due to the use of prompts in adjusting language model behavior but is not fully aligned with a study specifically on hard prefix prompts." -consprompt: easily exploiting contrastive samples for few-shot prompt learning,gpt-4-1106-preview,9,"The title and abstract indicate the study is highly relevant to prompt engineering. It discusses the development of a model (ConsPrompt) that leverages contrastive samples to enhance the fine-tuning process in prompt learning, particularly in few-shot settings. The paper's focus on finding strategies for more effective prompt initialization and improving the robustness of prompt learning aligns well with the topic of prompt engineering. It offers a novel approach, aligns with current challenges in the field, and claims to set a new standard for performance and robustness in few-shot learning tasks." -towards informative few-shot prompt with maximum information gain for in-context learning,gpt-4-1106-preview,9,"The study addresses a fundamental aspect of prompt engineering by exploring the effect of data example selection on the stability and performance of LLMs in few-shot scenarios. The introduction of a method to quantify Information Gain from data examples and the proposal to choose examples with maximum IG are directly relevant to enhancing prompt design. Additionally, the identification and mitigation of template bias in assessing IG can improve the quality of prompt engineering. While not exclusively focused on 'hard prefix prompts', this work contributes to the broader field of prompt engineering, thus receiving a high relevance rating." 
-virtual prompt pre-training for prototype-based few-shot relation extraction,gpt-4-1106-preview,4,"While the title suggests the study involves 'virtual prompt pre-training', which pertains to a technique potentially related to prompt engineering in the context of machine learning, the lack of abstract and TLDR makes it difficult to assess its direct relevance to prompt engineering, particularly to 'hard prefix prompts'. The relevance is expected to be moderate as it mentions prototypes and few-shot relation extraction which may involve prompt design but does not explicitly focus on hard prefix prompts as per the provided information."
-few-shot multi-modal sentiment analysis with prompt-based vision-aware language modeling,gpt-4-1106-preview,7,"The described study focuses on multi-modal sentiment analysis (MSA) using a few-shot learning approach and a prompt-based vision-aware language modeling (PVLM) method. The relevance to prompt engineering lies in the paper's emphasis on 'prompt tuning' as a method to incorporate multimodal information into a pre-trained language model for sentiment analysis tasks. This suggests that the study addresses the use of prompts within a deep learning model, specifically to bridge the gap between pre-training and specific NLP tasks. However, it does not primarily focus on 'hard prefix prompts', as mentioned in the prompt engineering study interest. Instead, it appears to be utilizing prompts as part of a broader framework for multi-modal learning. Therefore, the relevance is significant but not entirely on-topic with respect to studies centered specifically on 'hard prefix prompts'."
-unified multi-modal pre-training for few-shot sentiment analysis with prompt-based learning,gpt-4-1106-preview,7,"The abstract presents work related to 'prompt-based fine-tuning (PF)' for 'few-shot multi-modal sentiment analysis (MSA)', which suggests relevance to prompt engineering particularly in the context of model fine-tuning. The concept of using prompts to bridge modalities and improve few-shot learning is applicable to the study of prompt engineering, especially considering the innovative approach of a multi-modal prompt-based system. However, the focus is specifically on sentiment analysis and not on hard prefix prompts or a comprehensive systematic review of them. Therefore, while the study is related to prompt engineering, it is not a direct match for a comprehensive systematic review on hard prefix prompts, which affects the rating."
-commonsense knowledge-aware prompt tuning for few-shot nota relation classification,gpt-4-1106-preview,9,"The paper presents a study on commonsense knowledge-aware prompt tuning, which is directly related to prompt engineering as it discusses constructing relation-oriented templates and incorporating external knowledge for improving pre-trained language model performance in few-shot tasks. This is highly relevant to the field of prompt engineering, as it deals with optimizing prompts to effectively utilize the knowledge within language models. The only reason it doesn't receive a full 10 is that the focus is specifically on NOTA relation classification, which is a subset of the broader field of prompt engineering."
-dual context-guided continuous prompt tuning for few-shot learning,gpt-4-1106-preview,9,"The abstract describes a research work that is highly relevant to prompt engineering, specifically in the niche of continuous prompt tuning methods. The paper introduces a novel method to improve the efficiency of prompts in few-shot learning scenarios, which is a direct contribution to the field of prompt engineering. The proposal of dual context-guided continuous prompts (DCCP) and the discussion of its advantages over existing methods highlight its significance for studies on how prompts influence the performance of NLP models. The reason for not giving a full score of 10 is that while the paper is highly relevant, it may not cover the 'hard prefix prompts' aspect mentioned in the original prompt but focuses more broadly on continuous prompt tuning."
-a dual prompt learning framework for few-shot dialogue state tracking,gpt-4-1106-preview,8,"The paper describes the application of prompt learning in the context of Dialogue State Tracking (DST), which is a highly relevant area within natural language processing for task-oriented dialogue systems. The use of dual prompts and the idea of formulating the DST task as a language modeling task under few-shot settings directly concerns the engineering of prompts for effective model learning with limited data. The relevance to prompt engineering is high because it explores how to use prompts to assist pre-trained language models in understanding and generating dialogue states, which is an innovative approach to embed task-specific knowledge into the language model's processes. The paper's focus on incorporating task-related knowledge into prompts for language models aligns with prompt engineering objectives, such as improving model performance on targeted tasks with minimal examples. However, it does not cover all aspects of prompt engineering, such as the systematic study of different types of prompts (e.g., hard prefixes), hence the rating is not a full 10."
-multi-task pre-training of modular prompt for few-shot learning,gpt-4-1106-preview,9,"The abstract pertains directly to the field of prompt engineering, discussing an approach to improving few-shot learning in language models through pre-trained modular prompts (MP2). This is highly relevant to prompt engineering as it addresses enhancing the adaptability and efficiency of prompt tuning, which is a core aspect of the application of language models to downstream tasks. It presents empirical results showing the method's superiority over traditional prompt tuning and full model tuning in few-shot settings. The relevance is not rated a full 10 because the abstract mentions the specific application to Chinese tasks, which might not cover the full breadth of the general field of prompt engineering, but it is otherwise highly pertinent."
-visual prompt tuning for few-shot text classification,gpt-4-1106-preview,6,"The paper abstract introduces a novel method of Visual Prompt Tuning for few-shot text classification that utilizes vision-language pre-training models, which is somewhat relevant to prompt engineering as it involves a form of prompt tuning. However, the primary focus is on incorporating visual elements rather than exclusively on texts or verbal prompts, which traditionally constitute 'prompt engineering' in language models. The relevance rating is given a moderate score because it deals with tuning aspects pertinent to the deployment of large-scale language models, although it does not directly address 'hard prefix prompts' as described in the original study topic."
-idiapers @ causal news corpus 2022: efficient causal relation identification through a prompt-based few-shot approach,gpt-4-1106-preview,8,"The paper's methodology is highly relevant to prompt engineering as it specifically deals with fine-tuning language models using prompts in a few-shot learning configuration. The approach treats a specialized task, Causal Relation Identification, as a masked language modeling problem, which aligns with the concept of utilizing prompts to steer LMs towards desired outputs without extensive training data. This suggests relevance to prompt-engineering techniques, although it is not a direct study on 'hard prefix prompts,' which might be a specific subset of prompt engineering."
-p4e: few-shot event detection as prompt-guided identification and localization,gpt-4-1106-preview,8,"The provided abstract describes P4E, a framework for event detection that utilizes prompting (cloze-based prompting) as part of its methodology. The usage of prompts in the identification task is directly relevant to the field of prompt engineering. The study shows how prompts can be effectively integrated into the pre-training of language models for specific tasks like event detection, which falls within the scope of prompt engineering studies. However, the abstract also covers broader aspects of event detection, such as structured prediction and not exclusively prompts, so the rating is not a full 10."
-few-shot natural language inference generation with pdd: prompt and dynamic demonstration,gpt-4-1106-preview,7,"The study introduces a framework to improve performance in few-shot natural language inference generation tasks by incorporating prompts and dynamic demonstrations within a language model. Although it does not directly study 'hard prefix prompts', it is relevant to prompt engineering because it involves the development of prompts and their application to enhance model performance in natural language processing tasks. The improvements on benchmark datasets and the claim of good generalizability suggest that the techniques used could potentially inform prompt engineering strategies, particularly in few-shot learning contexts."
-discriminative language model as semantic consistency scorer for prompt-based few-shot text classification,gpt-4-1106-preview,9,"The paper introduces a finetuning method for text classification using prompts, which is highly relevant to prompt engineering. ELECTRA, being a language model used to distinguish between genuine and artificially generated text, contributes to the creation and evaluation of prompts, indicating a direct application to prompt engineering. This method is focused on improving the performance of language models in few-shot learning scenarios, which is a subset of prompt engineering. The rating is not a perfect 10 because the paper appears to be more focused on the application of a discriminative language model rather than on the prompt engineering process itself."
-dfs-ner: description enhanced few-shot ner via prompt learning and meta-learning,gpt-4-1106-preview,7,"The paper's abstract indicates that it involves 'prompt learning' as a part of the proposed DFS-NER model. The focus on using prompts to guide a masked-language model learning objective for semantic information absorption is relevant to prompt engineering, as it implies constructing and employing prompts for improving model performance. However, the paper is more specifically about Named Entity Recognition and how prompt learning can be integrated with meta-learning for this task, rather than a broad study of prompt engineering itself. Thus, it is only moderately relevant to the prompt about 'hard prefix prompts,' as the paper might not be directly focused on studying prompts in a comprehensive systematic manner but rather using them as a tool for a specific application in NER."
-a prompt-based few-shot machine reading comprehension model for intelligent bridge management,gpt-4-1106-preview,8,"The abstract describes a machine reading comprehension model that utilizes prompt-based techniques, which are relevant to the field of prompt engineering. The model's use of domain-specific heuristic rules to design prompt templates indicates a direct application and study of prompt engineering principles. However, the focus appears to be more on the model's application to bridge management rather than a comprehensive systematic review of prompt engineering, which might be expected from a study explicitly titled 'hard prefix prompts.' Therefore, the rating reflects its high relevance but not a perfect match due to the specific application context."
-a study on prompt-based few-shot learning methods for belief state tracking in task-oriented dialog systems,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it explores prompt-based few-shot learning, which directly relates to the development and use of prompts in the training of language models. The formulation of DST as a prompt-based task indicates a significant engagement with prompt design and optimization, which is a core aspect of prompt engineering. The empirical analysis of the performance of these prompt-based methods contributes to the understanding of their effectiveness, which is crucial for prompt engineering research. The study might not be focused exclusively on 'hard prefix prompts' as mentioned in the systematic review title, but it addresses a related and important aspect of the field."
-prompt and contrastive learning for few-shot sentiment classification,gpt-4-1106-preview,7,"The abstract you've provided describes a paper which is relevant to prompt engineering as it addresses a method for few-shot sentiment classification that uses prompts as part of the strategy. The proposed Prompt and Contrastive Learning (PCL) is directly related to the field of prompt engineering because it deals with bridging the gap between pre-training and fine-tuning of language models, a central issue in the utilization of prompts in NLP tasks. However, it does not specifically address 'hard prefix prompts' as mentioned in the prompt engineering study, therefore the rating is not a full 10. It is relevant due to its focus on the application of prompts to improve language model performance but does not directly address the systematic review aspect of 'hard prefix prompts'."
-ti-prompt: towards a prompt tuning method for few-shot threat intelligence twitter classification*,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering as it details a prompt-based method specifically designed for a few-shot classification task which is a key area of interest in prompt engineering studies. The approach of leveraging prompt tuning and refining verbalizer techniques directly pertains to the domain of prompt engineering, as it involves crafting and optimizing prompts to interface with language models effectively. Although the study is focused on a niche application of threat intelligence classification on Twitter, the methodologies and insights could be broadly applicable to other prompt engineering contexts."
-adaptive prompt learning-based few-shot sentiment analysis,gpt-4-1106-preview,9,"The paper appears highly relevant to prompt engineering as it proposes an adaptive prompt learning method for few-shot sentiment analysis, directly addressing the construction of prompts. The specific focus on adaptive prompts demonstrates an advanced application of prompt engineering aimed at improving the effectiveness of language models in few-shot learning scenarios. The only reason it is not rated a full 10 is due to the lack of information on 'hard prefix prompts', which may be a specific subset of the broader prompt engineering field."
-augprompt: knowledgeable augmented-trigger prompt for few-shot event classification,gpt-4-1106-preview,5,"The title suggests that the study is related to prompt engineering as it mentions 'augmented-trigger prompt' which implies a method of prompt design for enhanced performance in an NLP task (few-shot event classification). However, without an abstract or TLDR, it is challenging to assess the depth of relevance to prompt engineering, hence a middle-of-the-road rating is given. More information would be required for a more accurate rating."
-prompt-based few-shot learning for table-based fact verification,gpt-4-1106-preview,8,"The abstract discusses the use of the prompt method in the context of few-shot learning for table-based fact verification, which is directly relevant to prompt engineering because it explores how to design and utilize prompts to improve the performance of a pre-trained model on a specific NLP task with limited data samples. Although the main focus is on structured information in tables, the application of prompt-based approaches is a key part of prompt engineering. The rating is not a full 10 because the study seems to be more focused on a particular application of prompt engineering (table-based fact verification) rather than a broad systematic review of hard prefix prompts."
-investigating prompt learning for chinese few-shot text classification with pre-trained language models,gpt-4-1106-preview,8,"The abstract describes a study on a prompt-based framework for Chinese text classification, especially in few-shot learning scenarios, which is highly relevant to prompt engineering. However, it specifically addresses the adaptation of prompt-based methods for Chinese language tasks, which may not be directly applicable to the concept of 'hard prefix prompts' as it is not clear if the techniques are universally applicable to other languages or specific to Chinese. Therefore, while the study is related to prompt engineering, the rating is not a full 10 due to potential limitations in generalizability."
-"few-shot information extraction is here: pre-train, prompt and entail",gpt-4-1106-preview,8,"The abstract discusses an approach that employs prompting and fine-tuning pre-trained language models (PLMs) for achieving state-of-the-art results in Information Extraction with minimal annotations. Although it does not specifically mention 'hard prefix prompts', it centrally addresses prompt engineering by explaining how natural language prompts are used to harness PLMs and enhance their inference abilities for specific tasks. This work is highly relevant to prompt engineering studies, as it showcases the effectiveness of prompts in the context of improving PLM performance. The reason for not giving a full score is that the exact term 'hard prefix prompts' is not referenced, which may indicate this study focuses on a broader range of prompting methodologies."
-prompt-based self-training framework for few-shot named entity recognition,gpt-4-1106-preview,5,"The title suggests the study involves 'prompt-based' methodology, which is relevant to prompt engineering. However, without more information from the abstract or TLDR, it's challenging to determine the extent of relevance to hard prefix prompts specifically. The study focuses on few-shot named entity recognition, which may involve prompts, but it is unclear how systematically the prompts are reviewed or engineered in the study. A neutral score reflects this partial relevance based on the information provided."
-towards unified prompt tuning for few-shot learning,gpt-4-1106-preview,9,"The abstract discusses the concept of prompt-based fine-tuning and introduces a novel framework, Unified Prompt Tuning (UPT), designed for improving few-shot learning in BERT-style pre-trained language models by capturing prompt semantics. This is highly relevant to the field of prompt engineering as it directly addresses the enhancement of model performance through better understanding and utilization of prompts. It may not receive a perfect score as the abstract does not specifically mention 'hard prefix prompts' which could infer a nuanced subset within prompt engineering."
-cqare: contrastive question-answering for few-shot relation extraction with prompt tuning,gpt-4-1106-preview,9,"The abstract discusses prompt tuning, a relevant aspect of prompt engineering, specifically in the context of relation extraction (RE). The entire concept of 'prompt tuning' is central to the field of prompt engineering as it involves the refinement and manipulation of prompts to improve performance with pre-trained language models (PLMs). While the abstract does not discuss 'hard prefix prompts' directly, it does mention the challenges of prompt engineering for label mapping and the attempt to improve prompt tuning with Contrastive Question-Answering method (CQARE). Considering the abstract's focus on developing improved methods for prompt tuning which is a vital part of prompt engineering, the relevance rating is high."
-prompt-guided few-shot event detection,gpt-4-1106-preview,8,"The abstract describes the use of cloze prompts to assist in few-shot event detection by eliciting knowledge from pretrained language models. Although the main focus is on event detection, the study's reliance on prompt engineering is clear as it uses specifically crafted prompts to enhance the capabilities of machine learning models in a limited data scenario. The term 'hard prefix prompts' isn't mentioned directly, but the concept of designing efficient prompts is crucial to their methodology. This makes the study relevant to the field of prompt engineering, justifying the high rating."
-psp: pre-trained soft prompts for few-shot abstractive summarization,gpt-4-1106-preview,9,"The abstract provided discusses a novel methodology for few-shot abstractive summarization that relates closely to prompt engineering. It introduces a new concept of soft prompts along with a training paradigm focussed on these prompts. Although the study introduces 'soft prompts' rather than 'hard prefix prompts', it is still highly relevant due to its focus on the broader area of prompt tuning and engineering for model performance improvement. This contribution to prompt architecture and training directly informs how prompts can be effectively implemented and optimized in various machine learning scenarios. The difference in the type of prompts (soft vs. hard prefix) results in a rating of 9 instead of a perfect 10."
-few-shot text-to-sql translation using structure and content prompt learning,gpt-4-1106-preview,9,"The abstract presents a novel approach to prompt engineering within the specific domain of Text-to-SQL translation. It discusses the design of a hybrid prompt strategy that is particularly relevant for enhancing the performance of pre-trained language models on few-shot learning tasks. This directly ties into the study of prompt engineering by exploring how prompts can be optimized to guide language models more effectively. Although the application is specialized in Text-to-SQL, the concepts of structure stage and content stage prompting are highly relevant to the field of prompt engineering. The high rating reflects the paper's substantive contribution to the methodology of crafting and utilizing prompts to improve the performance of AI models."
-zero- and few-shot event detection via prompt-based meta learning,gpt-4-1106-preview,6,"The study discusses a meta-learning framework for zero- and few-shot event detection, employing cloze-based prompts within the methodology. Prompt-based approaches are relevant to prompt engineering, as they involve the design of input structures that facilitate model learning and generalization to new tasks. However, the focus on event detection and a meta-learning framework makes this work only partially related to the core study of hard prefix prompts in prompt engineering, hence the rating is moderate."
-vppt: visual pre-trained prompt tuning framework for few-shot image classification,gpt-4-1106-preview,8,"The abstract describes a method for prompt tuning in the context of few-shot image classification with pre-trained transformers, which is closely related to prompt engineering. Although the subject is applied to computer vision rather than language models (which are more commonly associated with prompts), the principles of tuning prompts to adapt to downstream tasks are highly relevant. The approach discussed involves specific challenges and solutions in initializing and fine-tuning prompt modules in a parameter-efficient way, which is a key area of prompt engineering. The reason why the rating is not a full 10 is that the prompt engineering discussed is specific to visual tasks and may not directly translate to linguistic prompt engineering studies."
-decomposed two-stage prompt learning for few-shot named entity recognition,gpt-4-1106-preview,8,"The study presents a novel approach to prompt learning within the task of Named Entity Recognition (NER) in a few-shot setting, which is directly related to prompt engineering as it contributes to advancements in precision and efficiency of using prompts in machine learning models. The relevance to prompt engineering is high because it involves creating and using prompts specifically designed to improve the performance of NER tasks. The deduction of points is due to the specificity of the application to NER rather than a broader exploration of prompt engineering in general."
-few-shot table-to-text generation with prompt planning and knowledge memorization,gpt-4-1106-preview,8,"The study presents a framework called PromptMize, intended for table-to-text generation within few-shot learning scenarios, which focuses on the design of prompts to guide pre-trained language models. While it does not specifically address 'hard prefix prompts', it is highly relevant to the field of prompt engineering due to its emphasis on designing prompts (prompt planner) to bridge the gap between different data modalities (tabular data and text). This is a direct application of prompt engineering techniques in the context of natural language generation from structured data, and it advances the domain by integrating domain-specific knowledge into the prompting process. Therefore, this study should be of significant interest for those researching or studying prompt engineering, albeit not directly focused on hard prefix prompts."
-locoop: few-shot out-of-distribution detection via prompt learning,gpt-4-1106-preview,8,"The abstract describes an advancement in prompt learning specifically applied to few-shot out-of-distribution detection in the context of a vision-language model, which is relevant to the field of prompt engineering. However, the study focuses more on the application of prompt learning for improving OOD detection rather than the structure, phrasing, or systematic review of 'hard prefix' prompts. Despite this, the introduction of a local regularization technique called LoCoOp that enhances performance in prompt-based models indicates a significant contribution to the prompt engineering domain, particularly in algorithmic improvement for better model generalization. Therefore, it is not a perfect match to the study of 'hard prefix prompts,' but it is closely related due to its focus on enhancing prompt learning methods."
-few-shot joint multimodal aspect-sentiment analysis based on generative multimodal prompt,gpt-4-1106-preview,8,"The study introduces a Generative Multimodal Prompt model within the context of Multimodal Aspect-Based Sentiment Analysis, a subfield of prompt engineering related to few-shot learning. Prompt engineering typically involves crafting inputs that guide machine learning models, especially in few-shot or zero-shot settings. The relevance to prompt engineering is substantiated by the creation and use of prompts to handle multimodal data when labeled instances are sparse. This implies a strong connection to the strategies involved in prompt engineering. However, the study is specifically targeted at multimodal data and aspect-sentiment analysis, and it doesn't cover the entire breadth of prompt engineering, which may also include text-only or other single-modality frameworks. Thus, the relevance is rated as high but not absolute."
-partseg: few-shot part segmentation via part-aware prompt learning,gpt-4-1106-preview,9,"The paper presents a method for few-shot part segmentation by leveraging a part-aware prompt learning technique, which directly relates to the process of prompt engineering. The relevance is high because prompt engineering involves generating inputs that help models like CLIP better interpret and process information, which is what the paper appears to be achieving with its part-specific prompts. It's not a perfect 10 because the paper is application-specific (focused on few-shot part segmentation), whereas prompt engineering can also encompass broader methodologies and applications beyond this context."
-evolutionary verbalizer search for prompt-based few shot text classification,gpt-4-1106-preview,9,"The given abstract describes research focused on improving prompt-based tuning, specifically within the realm of few-shot text classification by developing a novel evolutionary verbalizer search (EVS) algorithm. Since prompt-based tuning is a direct application of prompt engineering, and this paper deals with the construction of optimal verbalizers, which are integral to the functioning of prompt-based models, its relevance to prompt engineering is high. However, it doesn't cover every aspect of prompt engineering, such as hard prefix prompts specifically, thus warranting a slightly less than perfect score."
-a chinese few-shot text classification method utilizing improved prompt learning and unlabeled data,gpt-4-1106-preview,8,"The abstract discusses a method for Chinese few-shot text classification (FSTC) that employs an improved prompt learning technique, indicating a close relevance to prompt engineering. It details an approach for creating and optimizing prompt prefixes specifically designed for Chinese, which falls directly within the study of prompt engineering. The method's use of multiple masks in prompt learning and its application in a semi-supervised context with unlabeled data enhance the relevance. The reason for not giving a full 10 is because the focus seems heavily on the application to Chinese text and the improvement of performance in FSTC; the abstract does not broadly address various aspects of prompt engineering beyond its specific use case."
-unified prompt learning makes pre-trained language models better few-shot learners,gpt-4-1106-preview,8,"The paper described is highly relevant to prompt engineering because it discusses a novel approach to prompt-based learning, which is an essential aspect of prompt engineering. It specifically addresses the challenge of balancing task-specific and instance-dependent information in prompts to enhance few-shot learning in language models. While it may not focus exclusively on 'hard prefix prompts,' which would be directly related to a systematic review on such prompts, it deals with the broader question of how to design and utilize prompts effectively, crucial for the field of prompt engineering."
-boosting prompt-based few-shot learners through out-of-domain knowledge distillation,gpt-4-1106-preview,7,"The abstract describes a method to improve prompt-based learning in the context of few-shot learning and knowledge distillation (KD), which is relevant to prompt engineering as it deals with enhancing the efficiency and performance of prompt-tuned Pre-trained Language Models (PLMs). Although the study focuses on knowledge distillation and model compression rather than the direct creation or manipulation of prompts, the optimization of models for prompt-based few-shot learning is a significant aspect of prompt engineering. Therefore, the relevance is fairly high, but not maximal due to the indirect focus on the engineering of prompts themselves."
-prompt-based zero- and few-shot node classification: a multimodal approach,gpt-4-1106-preview,7,"The study mentioned in the abstract focuses on the use of prompts in a multimodal approach for node classification, which is relevant to the field of prompt engineering in the context of machine learning. The 'prompt- and graph-based module' specifically indicates that prompts are engineered as part of the model to handle zero-shot scenarios, which is an application of prompt engineering. However, the primary focus seems to be on integrating text and graph modalities rather than on the systematic review of hard prefix prompts, which would more directly address the prompt engineering study. Thus, while the study is relevant due to the inclusion of prompts in the machine learning model, it may not fully represent a comprehensive review strictly on prompt engineering with 'hard prefix prompts'."
-prompt-distiller: few-shot knowledge distillation for prompt-based language learners with dual contrastive learning,gpt-4-1106-preview,8,"The article is highly relevant to prompt engineering as it directly addresses an aspect of prompt-based learning, which is a key area in prompt engineering. It offers innovative solutions for the deployment of prompt-tuned Pre-trained Language Models in few-shot learning scenarios through Knowledge Distillation. The focus on the few-shot KD algorithm designed for prompt-tuned PLMs ('Prompt-Distiller') aligns with the broader topic of engineering effective prompts for language models to enhance learning performance. While it may not specifically cover 'hard prefix prompts,' the overall context of prompt-based learning and improving the efficiencies of such systems makes it pertinent to the field of prompt engineering. A full 10 is not awarded as the abstract does not directly mention 'hard prefix prompts,' which was the specific subject of the systematic review requested."
-few-shot text-to-sql translation using structure and content prompt learning,gpt-4-1106-preview,9,"The paper describes a hybrid prompt strategy that leverages learnable and fixed vectors to guide Pre-trained Language Models (PLMs) for few-shot Text-to-SQL translation tasks. This is highly relevant to prompt engineering as it relates directly to the development of prompts that assist in task-specific predictions and facilitate model understanding. Although 'hard prefix prompts' are not mentioned explicitly, the approach is fundamentally connected to creating effective prompts for language models, thus making it pertinent to studies in prompt engineering."
-dreamartist: towards controllable one-shot text-to-image generation via positive-negative prompt-tuning,gpt-4-1106-preview,8,"The abstract discusses the use of prompt-tuning strategies, specifically introducing a 'positive-negative prompt-tuning learning strategy' in the context of text-to-image generation, which falls within the realm of prompt engineering. Prompt engineering is about finding effective ways to interface with language models or other AI systems using written prompts; the mention of positive and negative prompt tuning is a concrete example of this, tailored for a specific application. Therefore, this study is relevant to the broader field of prompt engineering as it explores a novel method to enhance the controllability and quality of outputs from AI systems. However, it does not specifically address 'hard prefix prompts,' which would be even more directly related to the prompt engineering study mentioned in the request. Thus, the rating is not a full 10."
-dreamartist: towards controllable one-shot text-to-image generation via contrastive prompt-tuning,gpt-4-1106-preview,7,"The paper discusses 'contrastive prompt-tuning,' which is a technique relevant to prompt engineering. Since prompt engineering involves methods to efficiently communicate with AI models, and in this case, to control text-to-image generation, the paper's subject is pertinent to the field. However, it doesn't focus on the 'hard prefix prompts,' which the initial request emphasizes. Therefore, the relevance is substantial but not entirely on point with the specific systematic review criteria stated."
-one-shot and partially-supervised cell image segmentation using small visual prompt,gpt-4-1106-preview,7,"The abstract describes a framework for cell image segmentation that uses concepts from prompt learning, which is related to the field of prompt engineering. While the main focus is on the application of these concepts to one-shot and partially-supervised learning for cell image segmentation, the utilization of 'small prompt images' and the attention given to prompt learning techniques in the study suggest relevance to prompt engineering. However, it does not appear to closely study hard prefix prompts as applied in NLP or broader prompt engineering, hence it is not a perfect match for the prompt engineering study."
-pøda: prompt-driven zero-shot domain adaptation,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering because it introduces a novel methodology that utilizes natural language prompts to drive the process of zero-shot domain adaptation. Though it does not focus specifically on 'hard prefix prompts', it does explore the role of prompts in guiding the adaptation of models to new domains, which is an essential aspect of prompt engineering in the broader sense. The use of CLIP and the approach to optimize feature transformations based on target text embeddings are elements that connect closely to the principles of prompt engineering, which includes crafting prompts to guide model behavior."
-cohoz: contrastive multimodal prompt tuning for hierarchical open-set zero-shot recognition,gpt-4-1106-preview,7,"The abstract describes CoHOZ, an approach for open-set recognition and zero-shot learning by leveraging hierarchical label trees and contrastive continuous prompt tuning. While it does not directly mention 'hard prefix prompts', it does engage with 'prompt tuning', which is a relevant aspect of prompt engineering. The relevance is marked as a 7 because the techniques and experiments could potentially contribute to the broader understanding of prompt engineering without being specifically focused on 'hard prefix prompts'. The concept of prompt tuning, particularly in a contrastive and multimodal setting, is pertinent to the study of how prompts are constructed and used, especially in zero-shot learning scenarios."
-proze: explainable and prompt-guided zero-shot text classification,gpt-4-1106-preview,7,"The abstract discusses 'ProZe,' a text classification approach that utilizes prompting pretrained language models, which is directly relevant to prompt engineering as it involves the method of using prompts to guide language models. However, the abstract also includes mention of querying ConceptNet for adding explainability, which is somewhat peripheral to the core concept of prompt engineering. Moreover, the study focuses on zero-shot text classification, which is only one aspect of the broader field of prompt engineering. Therefore, while prominently featuring elements of prompt engineering, the paper's focus on the combination of prompts with an external knowledge base and its aim for explainability dilutes the pure relevance to hard prefix prompts, hence the rating of 7."
-prompt-based extraction of social determinants of health using few-shot learning,gpt-4-1106-preview,7,"The study described in the abstract involves the use of one-shot prompting with GPT-4 to extract social determinants of health from unstructured text. This is relevant to prompt engineering because it focuses on the methodology of leveraging language models through prompts to achieve a specific task. While it does not directly study 'hard prefix prompts', which suggests a more specific kind of prompt engineering, the exploration of one-shot prompts and their comparison to traditional supervised approaches is within the broader domain of prompt engineering. Therefore, its relevance is high but not entirely focused on hard prefix prompts, warranting a rating of 7." -augmenters at semeval-2023 task 1: enhancing clip in handling compositionality and ambiguity for zero-shot visual wsd through prompt augmentation and text-to-image diffusion,gpt-4-1106-preview,7,"The paper focuses on enhancing the performance of the CLIP model by addressing issues related to prompt engineering, such as the compositionality and ambiguity in natural language and generating more contextual prompts using large language models. While it is not specifically about 'hard prefix prompts', it does involve an in-depth look at modifying and improving prompts for better results, which is relevant to the broader field of prompt engineering study." -self-supervised meta-prompt learning with meta-gradient regularization for few-shot generalization,gpt-4-1106-preview,9,"The abstract describes an approach to prompt tuning, particularly focusing on few-shot generalization, which is highly relevant to the field of prompt engineering. The method outlined involves learning soft prompts and touches on the challenges of generalization and overfitting, key issues in prompt engineering. The proposed framework, SUPMER, addresses these problems by creating a universal initialization for prompts, which contributes significantly to the study and advancement of prompt engineering methods. The reason the rating is not a perfect 10 is that the abstract does not explicitly discuss 'hard prefix prompts,' which was mentioned in the user's request for a 'comprehensive systematic review on hard prefix prompts.'" -enhancing black-box few-shot text classification with prompt-based data augmentation,gpt-4-1106-preview,7,"The provided abstract focuses on the use of large-scale language models (LLMs) like GPT-3 for few-shot text classification and explores a method of interacting with them purely through their inference APIs, without requiring access to the gradients. The relevance to prompt engineering is found in the application of prompt-based data augmentation, which is a technique integral to the practice of prompt engineering. Although the primary focus seems to be on the black-box modeling approach and parameter-efficient adaptation, the utilization of prompts to augment data for better performance in few-shot scenarios suggests that the research contributes to the prompt engineering field. It does not, however, directly address a systematic review on hard prefix prompts, which would be the core topic of a prompt engineering study. Hence, the relevance is significant but not complete, leading to a rating of 7." 
-"cocoopter: pre-train, prompt, and fine-tune the vision-language model for few-shot image classification",gpt-4-1106-preview,7,"The document's title suggests the use of a process that includes 'prompt' as part of the procedure for improving few-shot image classification. This indicates that the study involves some level of modification or creation of prompts to enhance model performance, which is relevant to prompt engineering. However, without further details on the nature of these prompts, particularly whether they pertain to language prompts typically used in prompt engineering, or are more broadly related to model conditioning, it's difficult to assess the full relevance. The mention of 'hard prefix prompts' in the initial query was not directly addressed, resulting in a rating that acknowledges relevance but cannot confirm an exact match." -few-shot fake news detection via prompt-based tuning,gpt-4-1106-preview,8,"The abstract presents a study on a Fake News Detection model that utilizes prompt-based tuning, which is directly relevant to prompt engineering. The model's design incorporates contextual prompts to enhance the detection capabilities of pre-trained language models in few-shot scenarios. While the study is not a comprehensive systematic review on hard prefix prompts, it does focus on the application of prompts in a specific important area, hence the relatively high relevance score." -lm-cppf: paraphrasing-guided data augmentation for contrastive prompt-based few-shot fine-tuning,gpt-4-1106-preview,8,"The paper 'lm-cppf: paraphrasing-guided data augmentation for contrastive prompt-based few-shot fine-tuning' directly relates to prompt engineering as it discusses the use of prompt-based tuning in the context of language model fine-tuning. Since prompt engineering fundamentally involves crafting input prompts to elicit the desired output from a language model, this paper's focus on leveraging paraphrasing-guided augmentation within the prompt-based few-shot fine-tuning framework demonstrates an application of prompt engineering. The relevance is not rated as a perfect 10 because the study seems to emphasize data augmentation and contrastive learning in addition to prompt-based methods rather than focusing solely on prompt engineering." -syntax-aware hybrid prompt model for few-shot multi-modal sentiment analysis,gpt-4-1106-preview,9,"The paper describes a novel approach to prompt engineering by integrating hand-crafted and learnable prompts within a hybrid model for few-shot multi-modal sentiment analysis. Since prompt engineering involves crafting input prompts to guide models, especially in few-shot learning scenarios, this paper is highly relevant to prompt engineering studies. It also touches upon optimizing prompt encoders using attention mechanisms, which is a sophisticated technique within this field. The only reason it doesn't receive a full 10 is that it is specific to sentiment analysis and may not cover the entire breadth of prompt engineering applications." -enhancing few-shot ner with prompt ordering based data augmentation,gpt-4-1106-preview,7,"The relevance is fairly high because the paper discusses a Prompt Ordering based Data Augmentation (PODA) method, which is related to prompt engineering in that it involves manipulating data to improve the performance of language models in a low-resource setting. 
Prompt engineering typically involves crafting prompts that guide the model's predictions or generating capabilities, and while this method is specifically targeting a data augmentation approach for named entity recognition, it is relevant insofar as it involves ordered prompts and their effect on the training process. However, it does not directly address 'hard prefix prompts' or a broader range of prompt engineering outside the context of few-shot NER, hence the rating is not a full 10." -few-shot composition learning for image retrieval with prompt tuning,gpt-4-1106-preview,6,"The study includes techniques related to prompt tuning and the development of a visual prompt within the context of image retrieval, which is indirectly related to prompt engineering in natural language processing (NLP). While prompt tuning is a concept used in NLP, this study applies it to a visual domain and focuses on compositional learning and few-shot learning mechanisms, which are somewhat tangential to the typical studies on hard prefix prompts in text-based models. The relevance is moderate because the study does show the application of prompt tuning concepts but in a different domain and does not directly address hard prefix prompts in the context of NLP." -image-object-specific prompt learning for few-shot class-incremental learning,gpt-4-1106-preview,8,"The study presents a novel training framework in the context of few-shot class-incremental learning (FSCIL), incorporating the use of specialized prompts, which are biased towards specific attributes of class objects to guide the learning process. This biasing through prompts is relevant to prompt engineering as it involves the strategic use of prompts to direct the model's attention to specific features, which is an integral concept in prompt engineering. The use of key-prompt pairs is directly associated with designing effective prompts. While the study does not explicitly state 'hard prefix prompts' or a comprehensive review on it, it does demonstrate practical application and manipulation of prompts in a machine learning context, which is relevant to the broader field of prompt engineering." -overcoming catastrophic forgetting in zero-shot cross-lingual generation,gpt-4-1106-preview,9,"The abstract discusses the use of prompt tuning, a parameter-efficient adaptation technique, to overcome challenges in zero-shot cross-lingual generation, which is directly relevant to prompt engineering. The study focusses on how prompts can be engineered and factored to enable a generative multilingual model to perform tasks in languages it wasn't explicitly trained on, without catastrophic forgetting. Although it does not specifically mention 'hard prefix prompts,' the concept of prompt tuning is a crucial part of prompt engineering studies, so the relevance to the broader field of prompt engineering is high." -nearest neighbor zero-shot inference,gpt-4-1106-preview,7,"The abstract presents kNN-Prompt, a k-nearest neighbor Language Model with expanded verbalizers, which is relevant to the study of prompt engineering because it involves the automatic expansion of prompts for improved zero-shot learning. While the study emphasizes retrieval-augmented language models and zero-shot inference rather than directly focusing on 'hard prefix prompts,' the concept of expanding verbalizers to include synonyms directly pertains to the engineering of prompts to enhance model performance. 
Thus, the relevance to prompt engineering study is significant, though not entirely focused on 'hard prefix' prompts specifically." -balanced distributed augmentation for multi-label few shot learning with prototypical network,gpt-4-1106-preview,4,"The abstract indicates that the study involves novel pipeline for automating the prompt generation, which is somewhat relevant to prompt engineering, particularly if the automated generation includes what could be considered 'hard prefix prompts.' However, the main focus of the paper appears to be on data augmentation techniques and sentiment analysis for few-shot learning rather than directly on prompt engineering. The relevance is therefore moderate and not the primary emphasis of the research." -kecp: knowledge enhanced contrastive prompting for few-shot extractive question answering,gpt-4-1106-preview,7,"The abstract describes an approach involving a novel method of prompt-tuning, which is highly relevant to prompt engineering studies. The focus on Knowledge Enhanced Contrastive Prompt-tuning (KECP) is especially pertinent to the field as it introduces a non-conventional method of leveraging prompts through external knowledge bases and contrastive learning objectives. Nevertheless, since the study doesn't specifically address 'hard prefix prompts' but rather a broader prompt-tuning strategy for EQA, the rating is not a full 10." -cross-lingual retrieval augmented prompt for low-resource languages,gpt-4-1106-preview,7,"The study described in the abstract is relevant to prompt engineering because it discusses the creation and use of a pipeline (PARC) that augments prompts to enhance the performance of Multilingual Pretrained Language Models (MPLMs) in zero-shot learning scenarios for low-resource languages. This directly relates to the field of prompt engineering, as it involves designing and manipulating prompts to improve language model performance. However, it may not be directly related to 'hard prefix prompts,' as it does not specify the nature of the prompts used (whether hard-coded, soft, or another type). The focus is more on cross-lingual retrieval and augmentation rather than the systematic review of the prompt types or their design characteristics, hence the rating is not a full 10." -indirect: language-guided zero-shot deep metric learning for images,gpt-4-1106-preview,7,"The abstract introduces Language-Guided Zero-Shot Deep Metric Learning (LanZ-DML) which emphasizes the use of natural language text prompts to control image representation without the need for training data. The model InDiReCT mentioned utilizes CLIP to transfer the variation in text prompt embeddings to the image embedding space. Although the study focuses on the metric learning aspect and the application in image retrieval systems, it is highly relevant to prompt engineering because it involves the use of text prompts to guide a zero-shot learning process. This showcases an intricate way that prompts can interact with deep learning models to influence their behavior. However, it does not directly address hard prefix prompts or a systematic review of such, which limits the rating to a 7." -list: lite self-training makes efficient few-shot learners,gpt-4-1106-preview,7,"The abstract discusses a method related to fine-tuning pre-trained language models with the use of prompts, which is relevant to prompt engineering. 
LiST improves prompt-tuning with techniques like self-training and lightweight fine-tuning, which fall within the realm of prompt optimization strategies. However, the abstract does not specifically mention 'hard prefix prompts' as in the initial prompt, so it may not address the complete systemic review aspect of hard prefix prompts in prompt engineering. Thus, the relevance to prompt engineering study is significant but not fully aligned with the specificity of 'hard prefix prompts'." -prompt-based multi-modal image segmentation,gpt-4-1106-preview,8,"The study presents a system that utilizes prompts in the form of text or images to generate image segmentation, indicating a strong relevance to 'prompt engineering.' Although the primary focus is image segmentation and not prompt engineering itself, the system's capability to interpret and process arbitrary prompts at test time is indicative of a significant application of prompt engineering principles. This demonstrates the integration of prompt-based methods into AI tasks, which is a key aspect of prompt engineering research. The rating is not a full 10 because the study's primary aim is not the investigation of the prompts themselves or their optimization, but rather their application to a particular AI task." -jurassic is (almost) all you need: few-shot meaning-to-text generation for open-domain dialogue,gpt-4-1106-preview,8,"The given title and TLDR indicate research related to few-shot meaning-to-text generation using semantic prompts. This is relevant to prompt engineering as it specifically pertains to the utilization of prompts to guide natural language generation (NLG) systems to produce text in a conversational context. Despite not explicitly mentioning 'hard prefix prompts', the study appears to contribute to the broader field of prompt-based learning and NLG. Hence, the rating is high but not maximum, due to the lack of direct reference to 'hard prefix prompts'." -prompt scoring system for dialogue summarization using gpt-3,gpt-4-1106-preview,8,"The abstract provided discusses the development of a scoring system specifically designed for improving few-shot training performances in the context of dialogue summarization with GPT-3, which involves an aspect of prompt engineering. Prompt engineering is integral to optimizing few-shot learning techniques by crafting effective prompts that guide language models like GPT-3 to perform specific tasks. The research focuses on the structure of dialogues and how tuned prompts can enhance the summarization task, which is highly relevant to the study of prompt engineering. Although the paper does not explicitly mention 'hard prefix prompts', it addresses the broader subject of prompt design and effectiveness, thus earning a high relevance rating. The 2-point deduction from a perfect score is due to the lack of specificity regarding 'hard prefix prompts', which may be a more narrow area within prompt engineering." -inverse is better! fast and accurate prompt for slot tagging,gpt-4-1106-preview,8,"The abstract describes an innovative method in prompt engineering, specifically for the task of slot tagging in few-shot learning scenarios. While it doesn't discuss 'hard prefix prompts' directly, it presents the concept of 'inverse prompting', which is a technique within the broader domain of prompt engineering. 
The improvement in efficiency and accuracy mentioned in the abstract is highly relevant to studies in prompt engineering, especially when considering the impact on state-of-the-art performance. The score is not a full 10 because it is not explicitly tied to 'hard prefix prompts' but does address closely related concepts within prompt engineering." -is a prompt and a few samples all you need? using gpt-4 for data augmentation in low-resource classification tasks,gpt-4-1106-preview,8,"The described study is highly relevant to prompt engineering as it directly involves using prompts to leverage GPT-4 and ChatGPT for the purpose of data augmentation in classification tasks. Prompt engineering is a core component of this because the quality of the generated synthetic data heavily depends on the design and effectiveness of the prompts used. Although the study does not exclusively focus on 'hard prefix prompts,' it covers an application of prompts that is central to understanding and improving the use of language models in low-resource situations. The only reason the rating is not a 10 is that it does not specifically mention 'hard prefix prompts' or explore a comprehensive systematic review of such prompts, rather it looks at practical applications of prompt-related techniques for data augmentation." -structure pretraining and prompt tuning for knowledge graph transfer,gpt-4-1106-preview,4,"The abstract describes a study on a knowledge graph pretraining model (KGTransformer) and its application across different knowledge graph-related tasks, which is related to machine learning and transfer learning. The use of 'prompt-tuning' with task data as a 'triple prompt' indicates a form of prompt engineering, but the focus seems to be more on the application of this mechanism for task-specific KG interactions, rather than a comprehensive study of the prompt engineering concept itself. The relevance to prompt engineering study is therefore present but not central to the paper's core contribution, hence the moderate rating." -residual prompt tuning: improving prompt tuning with residual reparameterization,gpt-4-1106-preview,9,"The abstract presents a study that directly addresses improvements in prompt tuning, which is an essential aspect of prompt engineering. The introduction of Residual Prompt Tuning as a method that advances the performance and stability of prompt tuning is highly relevant to engineers and researchers working with language models. The fact that it outperforms standard prompt tuning and shows robustness against various hyper-parameters and initializations makes it a significant contribution to the study of prompt engineering. The reason the rating is not a perfect 10 is that the abstract doesn't directly address 'hard prefix prompts', but it is relevant to the broader field of prompt engineering." -ds4dh at mediqa-chat 2023: leveraging svm and gpt-3 prompt engineering for medical dialogue classification and summarization,gpt-4-1106-preview,8,"The study described in the title uses prompt engineering as a part of its methodology to generate summaries for medical dialogues using GPT-3.5. Even though the study focuses on a specific application of prompt engineering within the medical field and combines it with Support Vector Machines (SVM) for classification tasks, the use of one-shot prompts to operate with GPT-3.5 embeds elements of prompt engineering which are relevant to the study of this domain. 
The relevance is not rated a full 10 due to the specificity of the application (medical dialogues), as opposed to a broader coverage of hard prefix prompts in prompt engineering." -soft prompt tuning for augmenting dense retrieval with large language models,gpt-4-1106-preview,8,"The article presents a novel approach for enhancing dense retrieval through the use of soft prompt tuning with large language models, which is a technique within the scope of prompt engineering. This is closely relevant to the study of prompt engineering since it involves the optimization of prompts to improve the performance of language model tasks. Although the study focuses specifically on 'soft' prompt tuning rather than 'hard' prefix prompts, the methods and insights from soft prompt tuning contribute to the broader understanding of how prompts can influence language model behavior and performance. Therefore, the relevance is high but not absolute, hence the rating of 8." -self-prompting large vision models for few-shot medical image segmentation,gpt-4-1106-preview,8,"The abstract discusses the application of a segmentation model (SAM) in medical image analysis and introduces a novel technique for self-prompting in the context of few-shot learning. This is highly relevant to prompt engineering as it deals directly with how to leverage and optimize prompts for a model to improve its performance, especially in a domain like medical imaging where data can be scarce. The self-prompting approach relies on prompt tuning strategies which are an integral part of prompt engineering. The rating is not a full 10 because the abstract does not specifically mention 'hard prefix prompts' or the systematic review aspect of prompt engineering, which would cover a broader scope including various strategies beyond the one mentioned in the paper." -cipta: contrastive-based iterative prompt-tuning using text annotation from large language models,gpt-4-1106-preview,8,"The study described in the title and abstract is highly relevant to prompt engineering as it focuses on 'prompt tuning,' which is a method used to enable models to quickly adapt to new tasks or domains using a limited amount of data or examples. The innovation in prompt tuning that the study proposes, CIPTA, particularly targets low-resource scenarios, which is a critical area of research in prompt engineering for improving the efficiency and applicability of large language models. The study's use of contrastive embedding training as part of the prompt-tuning process also contributes to the field. Therefore, it scores high in relevance. It doesn’t get a full score because it is specifically angled towards public opinion analysis rather than covering prompt engineering in broader scenarios." -unleashing the potential of prompt engineering in large language models: a comprehensive review,gpt-4-1106-preview,9,"The abstract provided is highly relevant to the field of prompt engineering as it covers a breadth of topics within the discipline, including foundational principles, advanced methodologies, assistance tools, prospective research directions, and applications in various fields. The rating is not a perfect 10 as there is some information missing, such as empirical data or case studies that would make it an exhaustive review. Nevertheless, the paper appears to be a comprehensive resource that would substantially benefit those interested in the workings and advancements of prompt engineering for Large Language Models." 
-multi-mask label mapping for prompt-based learning,gpt-4-1106-preview,8,"The abstract discusses a novel prompt-based learning method called Multi-Mask Label Mapping (MMLM) that is designed to address the issues of misleading lexical cues in few-shot learning. Although the study does not specifically mention 'hard prefix prompts', its focus on improving prompt-based learning through strategic label mapping and instance augmentation is very relevant to the field of prompt engineering. Given that prompt engineering involves crafting prompts to effectively communicate with a model, the methodology proposed in this study could potentially be applied to the study of hard prefix prompts, thereby enhancing the state of prompt engineering. The deducted points are due to the lack of direct reference to 'hard prefix prompts', which was the specific focus of the prompt engineering study mentioned."
-prompts can play lottery tickets well: achieving lifelong information extraction via lottery prompt tuning,gpt-4-1106-preview,8,"The relevance to prompt engineering is high, given that the abstract discusses a novel prompt tuning method called Lottery Prompt Tuning (LPT) which directly pertains to modifying prompts in the context of a universal information extraction system trained for lifelong learning. Prompt engineering broadly encompasses the tweaking and optimization of prompts to improve the performance of language models, and the LPT method falls within this field. Although it is not explicitly focused on 'hard prefix prompts', the study of prompt tuning methods is a significant aspect of prompt engineering. Therefore, the relevance is rated as an 8, with some points deducted because the description might not target 'hard prefix prompts' specifically but rather a related area within prompt engineering."
-llm comparative assessment: zero-shot nlg evaluation through pairwise comparisons using large language models,gpt-4-1106-preview,7,"The paper focuses on zero-shot NLG evaluation using large language models (LLMs) and specifically addresses new methods for assessment, which closely relates to the field of prompt engineering as it pertains to the performance assessment of language model outputs. While it does not directly study 'hard prefix prompts' or design prompts for LLMs, the study of assessment methods is relevant for fine-tuning and validating prompts during the engineering process. The inclusion of discussion on prompt positional biases and debiasing methods is particularly relevant, as these considerations can impact the effectiveness of engineered prompts."
-tuning multi-mode token-level prompt alignment across modalities,gpt-4-1106-preview,9,"The presented abstract discusses a novel approach to prompt tuning that emphasizes token-level prompt alignment across different modalities, which is a specific aspect of prompt engineering. Although it does not explicitly address 'hard prefix prompts,' it concentrates on the generalizable and nuanced aspects of prompt tuning in the context of vision-language models, which is highly relevant to the field of prompt engineering. The focus on multi-mode prompts and token-level alignment is crucial for fine-tuning prompt-based models, which is why it receives a high relevance rating."
-[cls] token is all you need for zero-shot semantic segmentation,gpt-4-1106-preview,4,"The given abstract pertains to a study on zero-shot semantic segmentation using [CLS] tokens from the CLIP model, which isn't directly related to prompt engineering. However, the use of [CLS] tokens as auxiliary prompts for the visual encoder suggests some relevance to the understanding of how prompts can influence AI models. The rating is not higher because the primary focus of the study is on image segmentation, not on prompt engineering itself."
-metricprompt: prompting model as a relevance metric for few-shot text classification,gpt-4-1106-preview,8,"The paper described is highly relevant to the field of prompt engineering as it discusses MetricPrompt, a method that directly addresses the optimization of prompt design for text classification tasks. It specifically tackles the challenge of designing verbalizers and leverages the power of prompting models as relevance metrics, which falls within the domain of prompt engineering. The relevance rating is not a perfect 10 because, while the study is related to prompt engineering, the term 'hard prefix prompts' is not explicitly mentioned, and it is unclear how closely the proposed MetricPrompt methodology aligns with 'hard prefix prompts' specifically."
-unsupervised dual modality prompt learning for facial expression recognition,gpt-4-1106-preview,9,"The abstract describes a study that is highly relevant to prompt engineering, as it proposes an 'Unsupervised Dual Modality Prompt Learning framework' which is directly related to adapting and tuning prompts for better performance in facial expression recognition tasks. This study focuses on optimizing the prompts used in vision-language models, which is a core area of interest in prompt engineering. The only reason it does not receive a perfect score is that it is specialized in facial expression recognition rather than covering prompt engineering in a broader sense across various applications."
-label-aware automatic verbalizer for few-shot text classification,gpt-4-1106-preview,8,"The study focuses on the verbalizer component within prompt-based learning, a crucial element of prompt engineering, especially in the context of few-shot text classification. The relevance to prompt engineering is strong as it addresses the optimization of prompt output translation into class predictions, which is directly related to how prompts are engineered to interact with language models. Although the study does not explicitly mention 'hard prefix prompts,' it aligns with the broader field of prompt engineering. The rating is not a perfect 10 because it does not directly address a comprehensive systematic review of hard prefix prompts, which the initial query specifies."
-speak foreign languages with your own voice: cross-lingual neural codec language modeling,gpt-4-1106-preview,7,"The abstract describes the use of speech in the source language as a 'prompt' to generate speech in another language, preserving the voice, emotion, and acoustic environment of the original speaker. Although the term 'prompt' in this context does not directly refer to 'hard prefix prompts' as used in prompt engineering for text-based language models, it is relevant as it shows an application of prompts in a different but related domain of language processing and AI, i.e., speech synthesis and cross-lingual translation. The technology leverages in-context learning similar to how prompts are used in text-based models to guide the generation of synthetic speech, suggesting a form of prompt engineering in a speech synthesis model. Therefore, the rating is moderately high for relevance to the broader field of prompt engineering but is not a direct match since it pertains to speech rather than text-based prompting."
-pointclip v2: adapting clip for powerful 3d open-world learning,gpt-4-1106-preview,7,"The abstract discusses leveraging large-scale language models to automatically design a more descriptive 3D-semantic prompt for CLIP’s textual encoder, indicating a study or application of prompt engineering to improve performance in 3D classification tasks. While it does not explicitly focus on 'hard prefix prompts,' it does deal with the broader topic of prompt engineering in the context of a real-world application—enhancing the compatibility of language-image pre-training models with 3D point cloud data. Therefore, the study is relevant to the subject of prompt engineering but perhaps less so to the specific aspect of 'hard prefix prompts.'"
-image segmentation using text and image prompts,gpt-4-1106-preview,8,"The study presents a system for generating image segmentations based on arbitrary prompts, which directly involves prompt engineering as it requires understanding and designing prompts that the model can interpret accurately. The use of text and image prompts to dictate model behavior demonstrates a practical application of prompt engineering. However, the specifics of 'hard prefix prompts' mentioned in the study inquiry are not directly addressed, so it may not fully cover the systematic review aspect of the inquiry but is still highly relevant to the field of prompt engineering."
-sega: instructing diffusion using semantic dimensions,gpt-4-1106-preview,7,"The studied paper 'sega: instructing diffusion using semantic dimensions' discusses a method for providing semantic control over text-to-image diffusion models through something called SEGA. Although it doesn't directly address 'hard prefix prompts,' it is highly relevant to the field of prompt engineering because it focuses on improving the interaction between user inputs and the model's output. Such research contributes to the broader understanding of how to engineer prompts to achieve desired results, which is a crucial aspect of prompt engineering. The relevance to 'hard prefix prompts' itself is indirect but still significant due to the overlap in goals of increasing control over generative models' responses to textual prompts."
-learnable ophthalmology sam,gpt-4-1106-preview,8,"The provided abstract and TLDR indicate a study that involves a form of prompt engineering, as it discusses a 'learnable prompt layer' in the context of a deep learning model for ophthalmology image analysis. This is pertinent to prompt engineering study, specifically within the domain of medical image analysis, as it involves the tailoring of prompts (inputs to the model which guide its responses) to improve performance on specialized tasks. The connection to 'hard prefix prompts' is not directly stated, but the concept of learnable prompts closely relates to the broader field of prompting techniques in machine learning, hence the relevance to prompt engineering studies."
-prompting multilingual large language models to generate code-mixed texts: the case of south east asian languages,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it investigates how different prompt templates and language pairings affect the capability of multilingual large language models to generate code-mixed texts, which is a key aspect of designing effective prompts. While the study does not focus exclusively on 'hard prefix prompts', it does explore the broader topic of how prompts can influence the output of language models. This falls within the range of studies related to prompt engineering. The findings have implications for how one should engineer prompts for multilingual contexts, particularly in the domain of code-mixing."
-the stable artist: steering semantics in diffusion latent space,gpt-4-1106-preview,7,"The abstract describes an approach that improves the precision of text-conditioned generative diffusion models, which is relevant to prompt engineering because it addresses the challenge of achieving fine-grained control over generated content through text input modifications. While the study's focus is on image generation and editing, the semantic guidance technique is applicable to the broader concept of steering output in response to precise prompt adjustments. The relevance rating is not higher because the study is not specifically about hard prefix prompts or their systematic review but instead about a related yet distinct area of prompt-based control in generative models."
-internet-augmented language models through few-shot prompting for open-domain question answering,gpt-4-1106-preview,8,"The study focuses on the utilization of few-shot prompting to enhance language models' ability to answer open-domain questions by conditioning the responses on web-searched information. Although it does not specifically mention 'hard prefix prompts,' it is highly relevant to the field of prompt engineering since it explores methodologies for effective prompting to improve information retrieval and question-answering capabilities of language models. This closely aligns with the goal of prompt engineering, which is to design prompts that enable language models to perform specific tasks more accurately. Therefore, the relevance rating is high."
-the unreliability of explanations in few-shot prompting for textual reasoning,gpt-4-1106-preview,8,"The described study directly investigates the role of explanations within the context of few-shot prompting, which is a pertinent area of research within prompt engineering. Although it does not explicitly mention 'hard prefix prompts', it explores the impact of the style and quality of prompts (including explanations) on the performance of large language models in textual reasoning tasks. This is relevant to prompt engineering as it informs on how different prompt constructs can affect model outputs, especially in tasks requiring understanding and explanations. The relevance is not rated as a perfect 10 since there's no detailed focus on 'hard prefix prompts' specifically, but the research closely aligns with investigating prompt effects in LLMs."
-few-shot prompting towards controllable response generation,gpt-4-1106-preview,8,"The paper discusses an advanced application of prompt-based learning in the context of chatbot response generation, showing relevance to the field of prompt engineering. The use of prompting combined with reinforcement learning to direct model output without parameter access aligns with the concept of hard prefix prompts because it explores methods of prompt manipulation for controllable outcomes. The emphasis on few-shot learning for task generalization is also pertinent to prompt engineering as it demonstrates efficiency in prompt application. Though the study doesn't solely focus on hard prefix prompts, its methods and objectives are closely related to the core ideas of engineering prompts for language models."
-zero- and few-shot prompting with llms: a comparative study with fine-tuned models for bangla sentiment analysis,gpt-4-1106-preview,7,"The study's focus on zero- and few-shot prompting with language models is closely related to prompt engineering, as it deals with the efficacy of prompts when minimal examples are provided to the model. While the study is not specifically about hard prefix prompts, it explores in-context learning with language models, which is an essential aspect of prompt engineering. The investigation into the effectiveness of different prompting strategies for a low-resource language like Bangla is relevant because it contributes to the understanding of how various models respond to prompts in different scenarios, which is a critical component of prompt engineering research. However, the title and abstract do not mention 'hard prefix prompts' specifically, which would have made it a perfect match for the topic of comprehensive systematic review on hard prefix prompts. Thus, the rating is above average for relevance but not a perfect score."
-multilingual social media text generation and evaluation with few-shot prompting,gpt-4-1106-preview,8,"The abstract describes a research study on adapting large language models to generate multilingual social media text with specific objectives, mentioning the use of prompts to achieve these goals. Since prompt engineering is the process of designing and formulating prompts to effectively interact with language models, and this work includes developing generalizable prompt formation techniques, it is highly relevant. The relevance is not rated as a perfect 10 due to the lack of explicit discussion of 'hard prefix prompts' which would be required for a comprehensive systematic review specific to that sub-topic of prompt engineering."
-sparsefit: few-shot prompting with sparse fine-tuning for jointly generating predictions and natural language explanations,gpt-4-1106-preview,8,"The study described focuses on a fine-tuning strategy for Pre-trained Language Models (PLMs) that utilizes 'discrete prompts' to generate predictions and Natural Language Explanations (NLEs), which is highly relevant to prompt engineering. While it does not directly study 'hard prefix prompts,' the use of prompts in the context of few-shot learning to enhance the model's performance and explanations is closely related to prompt engineering and how it can be optimized in practice. The relevance is not maximum because the abstract does not detail the nature of the prompts used (e.g., hard prefix prompts specifically), but the methodology is still pertinent to the field of prompt engineering."
-prompting electra: few-shot learning with discriminative pre-trained models,gpt-4-1106-preview,7,"The provided abstract details an approach to adapting ELECTRA, a discriminative pre-trained model, to prompt-based few-shot learning. Although the focus is primarily on the model's learning capabilities and performance rather than 'hard prefix prompts' specifically, the relevance lies in the use of prompts to facilitate model understanding and few-shot learning, which is a component of prompt engineering. The study explores how the model interacts with prompts, an essential aspect of prompt engineering, hence the relatively high relevance rating. However, it does not directly address a 'comprehensive systematic review on hard prefix prompts', so it cannot receive a perfect score."
-knowledge prompting for few-shot action recognition,gpt-4-1106-preview,7,"The study described in the abstract addresses the use of structured external knowledge (knowledge prompting) to enhance the performance of a pre-trained vision-language model for few-shot classification. Although it does not specifically mention 'hard prefix prompts,' it does involve the engineering of prompts (text proposals) to improve machine learning performance. This indicates a moderate level of relevance to the broader topic of prompt engineering study, especially considering the systematic approach taken to generate and utilize these prompts. However, without a specific focus on the concept of 'hard prefix prompts' as described in the original query, the relevance is not complete."
-promptner: a prompting method for few-shot named entity recognition via k nearest neighbor search,gpt-4-1106-preview,7,"The paper discusses PromptNER, a method that incorporates prompting, which relates to prompt engineering by using prompts to construct label prototypes for few-shot Named Entity Recognition. While the primary focus is on NER and not on prompt engineering as a general concept, the use of prompts as a way to improve machine learning models' performance through fine-tuning with limited data is pertinent to the study of prompt engineering. However, as the paper does not seem to conduct a comprehensive systematic review specifically on hard prefix prompts and does not address prompt engineering in the broader sense, the relevance is not maximal."
-prompting large language models with chain-of-thought for few-shot knowledge base question generation,gpt-4-1106-preview,9,"The abstract discusses an advanced application of prompt engineering where Chain-of-Thought (CoT) prompting is used to enhance few-shot question generation over Knowledge Bases (KBQG). It is highly relevant to prompt engineering because it directly involves the process of designing prompts to improve the performance of Large Language Models. The research proposes a novel methodology (KQG-CoT) which leverages the CoT prompting technique, and the paper claims significant improvement over state-of-the-art results. The only reason it doesn't score a perfect 10 is because it doesn't explicitly mention 'hard prefix prompts', which is the specific focus of prompt engineering study mentioned in the initial query."
-investigating prompting techniques for zero- and few-shot visual question answering,gpt-4-1106-preview,8,"The described study is highly relevant to the field of prompt engineering, as it directly investigates how different prompting strategies can influence the performance of a visual question answering (VQA) system in zero- and few-shot scenarios. The systematic examination of various question templates and the use of few-shot exemplars are core aspects of prompt engineering. The exploration of chain-of-thought reasoning and the integration of additional visual cues also fall within the scope of prompting techniques. Although the study specifically targets the VQA domain and does not mention 'hard prefix prompts', the general principles and findings are pertinent to the prompt engineering literature. The rating is not a full 10 because the paper focuses more broadly on VQA performance via prompting rather than the specific 'hard prefix prompts' indicated by the original prompt."
-prod: prompting-to-disentangle domain knowledge for cross-domain few-shot image classification,gpt-4-1106-preview,8,"The paper presents a method named prompting-to-disentangle (ProD) that utilizes prompts to improve the performance of image classification in cross-domain few-shot learning scenarios. This approach is directly related to prompt engineering as it involves designing prompts to manipulate the behavior of a model (in this case, a transformer) for better performance. The technique specifically leverages prompts to separate domain-general and domain-specific knowledge, which demonstrates an application of prompt engineering in the context of machine learning and image classification. However, it does not address 'hard prefix prompts' as mentioned in the original study prompt, which suggests a more specific focus within the broader area of prompt engineering. The rating is not a full 10 due to the absence of a direct alignment with 'hard prefix prompts,' but it remains high because the paper still significantly contributes to the overarching field of prompt engineering." -template-free prompting for few-shot named entity recognition via semantic-enhanced contrastive learning.,gpt-4-1106-preview,9,"The paper presents a novel technique for named entity recognition (NER) using prompt-based contrastive learning that does not require prompt templates or label word mappings, which is highly relevant to prompt engineering. It focuses on token-level classification tasks and introduces a new way to apply prompts in few-shot learning scenarios, which is a key area of interest in prompt engineering studies. The only reason it does not receive a full score is that it does not specifically address 'hard prefix prompts,' which was the indicated topic of interest, but it is still very pertinent to the broader field of prompt engineering." -lmcap: few-shot multilingual image captioning by retrieval augmented language model prompting,gpt-4-1106-preview,7,"The study involves prompting a language model with retrieved captions, which is a form of prompt engineering. However, the focus is on multilingual image captioning rather than hard prefix prompts specifically. While it does not address hard prefix prompts in its methodology, the concept of using prompts to generate language model outputs is relevant to the broader field of prompt engineering. Therefore, the relevance is moderate to high." -hiprompt: few-shot biomedical knowledge fusion via hierarchy-oriented prompting,gpt-4-1106-preview,9,"The study introduces HiPrompt, a framework that leverages hierarchy-oriented prompts to improve few-shot biomedical knowledge fusion tasks by utilizing large language models. This is highly relevant to prompt engineering because it directly involves designing and employing prompts that are specifically structured to leverage and extract hierarchical relationships within large language models. The fact that it deals with prompting techniques to enhance the model's reasoning capabilities makes it pertinent to the field. The only reason it does not receive a perfect score is that the information provided centers more on biomedical knowledge fusion rather than a generalized application in prompt engineering." -few-shot learning with prompting methods,gpt-4-1106-preview,9,"The abstract describes research focused on prompting methods in the context of few-shot and zero-shot learning within the field of natural language processing. 
It specifically addresses the use of hard prefixes in prompting by mentioning pattern-exploiting methodologies such as PET and iPET. These methodologies are a form of prompt engineering that modify the input to language models in a structured way to improve performance with limited data. Given that the paper reviews studies on prompt-based learning and relates to hard prefix prompts through the use of structured input, it is highly relevant to prompt engineering studies. The rating is not a full 10 because the abstract does not exclusively focus on hard prefix prompts but also discusses prompt-based learning more broadly." -adversarial knowledge stimulated contrastive prompting for few-shot language learners,gpt-4-1106-preview,9,"The abstract describes a method for improving the efficiency of pre-trained language models for few-shot learning tasks by introducing a novel prompting framework, which is highly relevant to prompt engineering studies. The AKSCP framework leverages Cloze-driven prompts for prompt-based learning and joint prompt tuning, which directly relates to the development and optimization of prompts for language models. Additionally, the use of adversarial contrastive learning to enhance generalization further aligns with advanced prompt engineering techniques. The only reason it does not receive a full 10 is that it does not specifically mention 'hard prefix prompts' which the original prompt inquires about, however, the general relevance to prompt engineering is very high." -multi-step prompting for few-shot emotion-grounded conversations,gpt-4-1106-preview,7,"The paper presented is relevant to prompt engineering as it discusses the design of a prompting approach, which is a core concept within prompt engineering. By identifying emotions and using them to inform subsequent prompts, the study contributes to the field by showing how prompts can be adapted based on contextual information (emotional content in this case). However, the paper focuses specifically on a two-step prompting method for conversational AI and emotion recognition rather than on 'hard prefix prompts' in a broad sense. Therefore, while the paper is relevant to prompt engineering, it does not directly address the topic of hard prefix prompts, hence the rating is not a full 10." -leveraging few-shot data augmentation and waterfall prompting for response generation,gpt-4-1106-preview,8,"The abstract mentions the development of methodologies and strategies for response generation in task-oriented conversational modeling, including the use of a 'waterfall prompting technique'. This indicates an exploration into how prompts are structured and how they can be optimized for better performance in conversation engines using AI like GPT-3 and ChatGPT. Although 'hard prefix prompts' are not explicitly mentioned, the study is still highly relevant to prompt engineering as it focuses on improving and understanding how prompts can be leveraged along with few-shot learning for effective response generation. The lower rating is due to the lack of specific mention of 'hard prefix prompts', suggesting that while the study is relevant, it may not directly tackle the named concept." -the unreliability of explanations in few-shot in-context learning,gpt-4-1106-preview,6,"The study seems to address a part of prompt engineering by examining how 'prompting' GPT-3 with explanations affects its performance on certain reasoning tasks, which is relevant to understanding how different types of prompts influence large language models. 
However, it primarily focuses on the reliability of explanations produced by GPT-3 and their use in validating predictions post-hoc, which is one aspect of prompt engineering. The study does not directly address 'hard prefix prompts' or a comprehensive systematic review of them. Therefore, while not fully aligned, it does contribute to the broader topic of prompt engineering by discussing the impact of explanatory prompts." -self-convinced prompting: few-shot question answering with repeated introspection,gpt-4-1106-preview,8,"The provided abstract outlines a study involving 'few-shot question answering with repeated introspection' which is closely related to the field of prompt engineering, particularly in refining prompts to improve the performance of large language models (LLMs). Although the study does not specifically mention 'hard prefix prompts', it does deal with the broader category of prompts and their optimization through an iterative process. This makes the work relevant to prompt engineering but not exclusively focused on the hard prefix aspect. Therefore, the relevance to 'prompt engineering' is high, but it might be less directly related to a 'systematic review on hard prefix prompts'." -retrieving visual facts for few-shot visual question answering,gpt-4-1106-preview,6,"The abstract describes a research study where a language model is prompted with facts retrieved from an image, to improve the performance of few-shot visual question answering systems. While it does not directly address 'hard prefix prompts' as in the study of prompts in the context of natural language processing, it does involve the process of selecting specific information (facts from an image) to inform the prompting process for a language model. Thus, it demonstrates relevance to prompt engineering by showing how tailored information can be used to elicit better responses from a model. However, because it focuses primarily on image-based data and facts rather than text-based prompting, it is not fully centered on 'prompt engineering' as typically understood within NLP, hence the mid-range rating." -continued pretraining for better zero- and few-shot promptability,gpt-4-1106-preview,9,"The provided abstract discusses continued pretraining with an emphasis on enhancing the effectiveness of natural language prompts in zero-shot and few-shot learning contexts, which is highly relevant to prompt engineering. The systematic examination of pretraining methods, identification of gaps, and concrete recommendations based on experimental results are directly related to the advancements in the field of prompt engineering. Although it does not directly mention 'hard prefix prompts', the focus on trainable prompts during multi-task learning and prompt tuning is integral to the broader field of prompt engineering. A point is deducted because the relevance to 'hard prefix prompts' specifically is not clear, but otherwise, it is highly pertinent to the study of how prompts can be engineered and optimized for better performance in machine learning models." -what makes pre-trained language models better zero/few-shot learners?,gpt-4-1106-preview,9,"The paper directly addresses prompt learning, which is a critical aspect of prompt engineering. It presents both a theoretical framework to understand the efficiency of prompts and a practical approach to select prompts without relying on development sets. 
The focus on zero/few-shot scenarios is particularly relevant to the current challenges faced in prompt engineering where labeled data is scarce. Although the paper does not address 'hard prefix prompts' specifically, it does contribute to the broader field of prompt engineering which encompasses the study of prompts and their optimization. Therefore, it receives a high relevance score." -plan-and-solve prompting: improving zero-shot chain-of-thought reasoning by large language models,gpt-4-1106-preview,9,"The abstract discusses a novel approach to prompt engineering for large language models, focusing on improving chain-of-thought reasoning in a zero-shot context. It addresses key issues such as calculation errors, missing-step errors, and semantic misunderstandings by introducing the Plan-and-Solve (PS) Prompting technique. As prompt engineering is central to optimizing the performance of LLMs in multi-step reasoning tasks, this study is highly relevant to the field. The high rating is due to the direct application of prompt engineering strategies to enhance the capabilities of these models without relying on multiple examples for training, which is an innovative contribution to the prompt engineering literature. However, it does not explicitly mention 'hard prefix prompts', which the original prompt might specifically refer to, hence not a perfect 10." -better zero-shot reasoning with self-adaptive prompting,gpt-4-1106-preview,8,"The provided abstract and TLDR relate closely to prompt engineering, as they describe the development and application of a novel prompt design method intended to enhance the zero-shot reasoning capabilities of large language models (LLMs) without relying on handcrafted responses or ground-truth labels. The method, Consistency-based Self-adaptive Prompting (COSP), addresses a core aspect of prompt engineering by strategically selecting and constructing prompts to improve LLM performance. While the abstract doesn't mention 'hard prefix prompts' explicitly and instead focuses on the broader field of prompt design and optimization, the relevance is high due to the overall focus on improving prompt-based LLM interactions." -gpt-3 for few-shot dialogue state tracking,gpt-4-1106-preview,9,"The abstract details a study focused on few-shot Dialogue State Tracking (DST) using GPT-3 and the influence of prompt crafting on performance. It explores methodologies around prompt engineering, such as different completion strategies, and the effects of fine-tuning, ensembling, and context example selection. This information is highly relevant to prompt engineering, as it contributes to the understanding of how prompts can be optimized for certain tasks. However, the study doesn't strictly focus on 'hard prefix prompts', which might be a specific subset of prompt engineering, hence the rating is not a perfect 10." -multi-modal prompting for low-shot temporal action localization,gpt-4-1106-preview,8,"The paper is highly relevant to the study of prompt engineering as it involves the design and utilization of prompts to guide a pre-trained text encoder (CLIP) to perform open-vocabulary classification in the context of temporal action localization. The experimentation with both detailed action descriptions and visually-conditioned instance-specific prompt vectors directly ties into the methodologies of prompt engineering, aiming to improve the model performance on low-shot learning tasks. 
However, the primary focus on temporal action localization slightly reduces its direct relevance to general prompt engineering studies that are not focused on the specific application of action localization." -program of thoughts prompting: disentangling computation from reasoning for numerical reasoning tasks,gpt-4-1106-preview,8,"The provided abstract details a study relevant to prompt engineering by introducing a 'Program of Thoughts' (PoT) method which separates computation from reasoning in numerical reasoning tasks. This separation directly impacts how prompts are designed for language models, as it leads to a fundamental change in the expected output (programs vs. solutions). The study's relevance is high because it exemplifies an advanced application of prompt engineering to improve performance on language models for specific tasks. The reason the rating is not a full 10 is because the study focuses specifically on numerical reasoning tasks and might not be directly applicable to other prompt engineering domains." -generative zero-shot prompt learning for cross-domain slot filling with inverse prompting,gpt-4-1106-preview,8,"The paper described is highly relevant to the field of prompt engineering as it discusses a novel methodology for zero-shot prompt learning in the context of cross-domain slot filling, which is a specific application within the broader domain of prompt engineering. It focuses on using prompts to transfer knowledge between domains without additional labeled data, which is a core aspect of prompt engineering. The proposed inverse prompting strategy is particularly pertinent for creating effective prompts that can distinguish between different types of data. Although the paper does not directly address 'hard prefix prompts', the concepts and strategies discussed are likely to contribute valuable insights to the prompt engineering literature and thus receive a high relevance rating." -language-aware soft prompting: text-to-text optimization for few- and zero-shot adaptation of v&l models,gpt-4-1106-preview,8,"The given title discusses 'Language-Aware Soft Prompting (LASP)' which is directly related to prompt engineering, especially in the context of optimizing text-to-text models for few- and zero-shot tasks. This indicates a high level of relevance as prompt engineering is about devising and employing prompts to guide or improve the performance of language and vision-and-language (V&L) models. The proposed method seems to enhance the interaction between hand-crafted textual prompts and model-generated outputs. Although the study doesn't explicitly mention 'hard prefix' prompts, the focus on soft prompting suggests it is in the broader area of prompt engineering, thus earning a high relevance rating." -"large language model is not a good few-shot information extractor, but a good reranker for hard samples!",gpt-4-1106-preview,8,"The abstract discusses the effectiveness of LLMs relative to SLMs in few-shot information extraction tasks and introduces a paradigm that involves prompting strategies. The relevance to prompt engineering is significant because it examines the role of prompts in improving performance of LLMs when combined with SLMs. Although the primary focus is on LLMs as rerankers for hard samples rather than on constructing or studying 'hard prefix prompts' specifically, the concept of using adaptive prompting to achieve better results is closely related to the field of prompt engineering.
This suggests that the paper could offer valuable insights into prompt strategies that may be beneficial for designing or evaluating hard prefix prompts." -towards few-shot identification of morality frames using in-context learning,gpt-4-1106-preview,8,"The study discusses using pre-trained Large Language Models for few-shot in-context learning, which is directly related to prompt engineering as it involves designing prompts for these models to handle specific tasks, in this case, identifying morality frames. However, it doesn't focus specifically on 'hard prefix prompts,' which the original request mentions, but rather on prompting methodologies in a broader sense. Therefore, the rating isn't a perfect 10 but still high due to the relevance of few-shot learning and in-context learning methodologies, which are integral to prompt engineering." -enhancing few-shot text-to-sql capabilities of large language models: a study on prompt design strategies,gpt-4-1106-preview,9,"The paper's focus on exploring various prompt design strategies and its systematic investigation into demonstration selection methods and optimal instruction formats for prompting LLMs in the Text-to-SQL task is highly relevant to the field of prompt engineering. The study is specifically addressing how to effectively use prompts to improve the performance of LLMs on a specialized task, which is a core aspect of prompt engineering. The relevance rating is not a full 10 because the paper is specialized in the Text-to-SQL context and prompt engineering can be applied to a broader range of tasks beyond this specific application. Nonetheless, the findings and methodology could be valuable for prompt engineering studies in general." -few-shot and prompt training for text classification in german doctor's letters,gpt-4-1106-preview,8,"The given abstract describes the use of prompt-based methods, specifically pattern-exploiting training, for text classification in a few-shot learning context, which is highly relevant to the field of prompt engineering. Although the focus is on a specific application within the medical domain for German doctor's letters, the core concept of using prompts to effectively guide a language model and improve performance with limited data is central to the study of prompt engineering. The improvement in accuracy and efficiency mentioned aligns with the goals of prompt engineering to enhance model performance. The rating is not a full 10 as the study seems to be applied and specific rather than a comprehensive and systematic review on hard prefix prompts in general." -exploring zero and few-shot techniques for intent classification,gpt-4-1106-preview,8,"This study is highly relevant to prompt engineering as it explores zero and few-shot learning techniques, which are integral to the development of efficient prompting methods. The use of zero-shot intent classification with descriptions and parameter-efficient fine-tuning indicates a direct application of prompt engineering principles. The fact that they are testing these methods on large language models, which are often used in conjunction with prompts, further adds to the relevance. While the study does not focus exclusively on 'hard prefix prompts,' its implications on prompt engineering strategies are significant, particularly for intent classification in low-resource settings."
-knowledge-guided prompt learning for few-shot text classification,gpt-4-1106-preview,9,"The abstract discusses a study that is highly relevant to prompt engineering, specifically within the context of leveraging implicit knowledge in pre-trained language models for few-shot text classification. The introduction of a knowledge-guided prompt learning method directly relates to prompt engineering, as it addresses how prompts can be optimized to improve model performance. The slight deduction from a perfect score is due to the lack of explicit mention of 'hard prefix prompts' which may or may not be a part of their 'knowledge prompting template'. Despite this, the study's focus on improving and understanding prompt-based learning is closely aligned with the field of prompt engineering." -a smashed glass cannot be full: generation of commonsense explanations through prompt-based few-shot learning,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering due to its focus on generating commonsense explanations through the use of prompts on pre-trained language models. Although it does not specifically mention 'hard prefix prompts', the methodology involving prompting and few-shot learning is a core technique within the field of prompt engineering. The ability to generate explanations from semantically related sentences is an important aspect of prompt engineering, which contributes to the relevance of this study to the field. However, full relevance to 'hard prefix prompts' specifically would require a more direct investigation into that subset of prompt engineering techniques." -successive prompting for decomposing complex questions,gpt-4-1106-preview,8,"The abstract discusses 'Successive Prompting', a methodology directly related to prompt engineering, involving the iterative process of breaking down complex questions for large language models. This is highly relevant to prompt engineering studies as it provides insights into the structuring of prompts for complex problem-solving. The approach could lead to more effective design of prompts, which is a core element of prompt engineering, thereby improving the performance of LMs in complex question-answering tasks. The rating is not a full 10 because it is more focused on the iterative prompting process rather than a broad application of prompt engineering techniques across different domains." -"machine translation with large language models: prompting, few-shot learning, and fine-tuning with qlora",gpt-4-1106-preview,4,"The abstract discusses machine translation using large language models and evaluates different methodologies including zero-shot prompting, which is closely related to prompt engineering. However, the focus on QLoRA fine-tuning indicates a greater emphasis on the fine-tuning process rather than on prompt engineering itself. The relevance is present but not central to the topic of prompt engineering, therefore a moderate rating reflects the connection without overstating its focus." -"structured prompting: scaling in-context learning to 1,000 examples",gpt-4-1106-preview,9,"The abstract presents a study directly related to prompt engineering, with a focus on structured prompting to overcome length constraints in in-context learning for large language models. This is highly relevant as it addresses a limitation often encountered in the field of prompt engineering, where the length of input can restrict the number of examples a language model can learn from.
The improvement of end-task performance and reduction of variance in results mentioned in the abstract suggest significant empirical findings for prompt engineering applications. Although the study does not specifically mention 'hard prefix prompts,' its relevance lies in advancing the methodologies used in prompt engineering, which could be applicable or foundational to hard prefix prompts as well." -short answer grading using one-shot prompting and text similarity scoring model,gpt-4-1106-preview,5,"The relevance of the study to prompt engineering is moderate. The study involves the use of a large language model for one-shot prompting, which is relevant to the broader field of prompt engineering as it relies on effectively prompting a language model to perform a task—in this case, grading short answers. However, the study specifically focuses on an application of language models for automated grading rather than the systematic review of hard prefix prompts. The relevance is not direct but tangentially related due to the use of prompting techniques within the ASAG model." -zero-shot prompting for implicit intent prediction and recommendation with commonsense reasoning,gpt-4-1106-preview,7,"The paper abstract discusses a framework for multi-domain dialogue systems that can understand implicit user intents and appropriately trigger task-oriented bots using zero-shot prompting. While this is not specifically about 'hard prefix prompts' as might be investigated in a prompt engineering study, the relevance is reasonably high because zero-shot prompting is a closely related concept where the effectiveness of the prompt in eliciting the correct response from a language model without prior examples is crucial. The system's dependence on 'commonsense knowledge' and inference of 'implicit intents' also implies that there is prompt engineering occurring to facilitate these operations. However, the abstract does not directly mention the study or optimization of prompts, which would be the primary focus of a prompt engineering study, hence the relevance is not rated higher." -naturalspeech 2: latent diffusion models are natural and zero-shot speech and singing synthesizers,gpt-4-1106-preview,7,"The abstract describes a text-to-speech system, NaturalSpeech 2, which includes a speech prompting mechanism as a means to facilitate in-context learning. Although the system is not primarily focused on 'hard prefix prompts' for text input, the speech prompting mechanism can be seen as related to prompt engineering, particularly for speech synthesis. The relevance is significant because the paper addresses how prompting can be utilized in TTS systems to improve performance. However, it is not an exact match because the study does not focus solely on the prompt engineering aspect but rather on the overall TTS system that includes prompting as one of its components." -udapdr: unsupervised domain adaptation via llm prompting and distillation of rerankers,gpt-4-1106-preview,7,"The abstract discusses the use of large language models (LLMs) to generate synthetic queries, which relates to prompt engineering as it may involve crafting prompts to elicit these queries from the LLMs. The focus on domain adaptation and efficient information retrieval can be seen as an application of prompt engineering, particularly in the context of generating useful data for model fine-tuning.
However, the abstract doesn't specifically mention 'hard prefix prompts' or detail the prompt engineering process, hence the rating is not a full 10." -probing power by prompting: harnessing pre-trained language models for power connotation framing,gpt-4-1106-preview,8,"The abstract describes a study on probing pre-trained language models (PLMs) by using prompts to understand and predict power connotations in language, which is relevant to prompt engineering. The research focuses on how prompts can elicit different connotations about power from language models and the impact of fine-tuning on the models' accuracy in this task. Although the study primarily explores connotation framing rather than hard prefixes specifically, the methodology closely relates to prompt engineering as it involves designing prompts to harness the capabilities of language models. This indicates a high level of relevance, but not the maximum score as it does not directly focus on 'hard prefix prompts'." -what do language models know about word senses? zero-shot wsd with language models and domain inventories,gpt-4-1106-preview,7,"The paper discusses an innovative use of language models for Word Sense Disambiguation (WSD) by casting the problem as one of textual entailment, which inherently involves crafting prompts that effectively convey the different domain-relevant hypotheses that are matched against the given word senses. This is related to prompt engineering as it shows a specific application where the design of the prompts (i.e., the relation between word senses and domains phrased as hypotheses) is crucial for the successful application of language models to this task. Although not directly addressing 'hard prefixes', which are a specific type of prompt, the study does engage with the broader notion of how to construct prompts to extract desired outputs from language models. Therefore, the relevance is quite high, albeit not perfectly aligned with the specific topic of hard prefix prompts." -compresso: structured pruning with collaborative prompting learns compact large language models,gpt-4-1106-preview,7,"The abstract discusses 'Compresso,' a new paradigm for structurally pruning Large Language Models, which includes a 'collaborative prompt' to foster collaboration between the LLM and the pruning algorithm. While the main focus is on model compression, the use of collaborative prompts for enhancing the pruning process does touch upon the broader field of prompt engineering. Prompt engineering generally refers to the design and optimization of prompts to elicit desired responses from language models, and the collaborative prompt in this context serves to improve the interaction between model components during compression. However, it is not directly focused on prompt engineering study in the conventional sense, which typically deals with how different prompts affect the output of LLMs in natural language tasks, rather than model pruning. Therefore, the relevance is moderate but not entirely central to traditional prompt engineering studies." -you can generate it again: data-to-text generation with verification and correction prompting,gpt-4-1106-preview,8,"The paper discusses an advanced methodology in the field of text generation which involves a multi-step process including generation, verification, and correction stages. 
This is directly relevant to the practice of prompt engineering, as the proposed VCP method deals with iteratively refining the prompts based on feedback, which is a key aspect of designing effective prompts that can lead to high-quality outputs. The relevance is not a perfect score because the study does not focus exclusively on hard prefix prompts or prompt engineering in general, but rather on a multi-step generation process with verification and correction, which is just one aspect of prompt engineering." -transprompt v2: a transferable prompting framework for cross-task text classification,gpt-4-1106-preview,8,"The abstract discusses the development of TransPrompt v2, which is a prompting framework specifically designed for improving performance in few-shot text classification tasks across various NLP applications. By focusing on prompt-based fine-tuning and transferring prompting knowledge across tasks, it is highly relevant to studies on prompt engineering, especially in the context of how prompts can be optimized and utilized to enhance the capabilities of pre-trained language models with limited data. Though the abstract does not mention 'hard prefix prompts' specifically, the overall framework is pertinent to the field of prompt engineering. The significant increase in performance compared to other baselines, as evidenced in the text, further solidifies its relevance to the study of efficient prompting methods." -dynamic strategy chain: dynamic zero-shot cot for long mental health support generation,gpt-4-1106-preview,8,"The abstract presents a novel methodology involving prompting Large Language Models with chain-of-thought techniques, specifically tailored for generating long counseling texts for mental health support. The development of the zero-shot Dynamic Strategy Chain (DSC) prompting method is a direct application of prompt engineering, as it focuses on improving the performance of the LLM by designing specialized prompts based on dynamic mental health counseling strategies. This is highly relevant to the study of prompt engineering because it demonstrates an advanced use-case of prompt design to produce more effective and personalized responses from language models. The use of GPT2 and the claim of state-of-the-art performance further indicate an engagement with prompt engineering techniques. However, it does not fully match the requirement for a 'systematic review on hard prefix prompts' as it seems to introduce a new prompting strategy rather than review existing strategies." -adapt and decompose: efficient generalization of text-to-sql via domain adapted least-to-most prompting,gpt-4-1106-preview,8,"The paper describes a method for improving generalization in Text-to-SQL tasks by preparing and adapting prompts for specific domains and compositions. This research directly involves creating efficient prompts for large language models, which is an important aspect of prompt engineering. The relevance is high because it devises strategies for prompt construction and adaptation, which is a part of prompt engineering studies. It gets an 8 instead of a perfect 10 because it is focused on a specific application (Text-to-SQL) rather than prompt engineering in general." -leveraging large language models for multiple choice question answering,gpt-4-1106-preview,7,"The abstract focuses on improving the effectiveness of large language models (LLMs) such as GPT-3 in multiple choice question answering (MCQA) tasks.
It highlights an approach where the LLM is presented with both the question and the answer options and outputs a symbol representing its chosen answer. This method is related to prompt engineering because it involves structuring the input to the LLM in a way that helps it utilize its capabilities more efficiently (known as natural prompting). The concept of multiple choice symbol binding (MCSB) reflects a specialized form of prompt engineering that is highly relevant to developing efficient prompting strategies for MCQA. Although the text does not explicitly use the term 'prompt engineering' or focus broadly on various types of prompts (e.g., hard prefix prompts), it is relevant as it tackles a specific challenge within the field of prompting LLMs to optimize performance on MCQA tasks." -data augmentation for intent classification with off-the-shelf large language models,gpt-4-1106-preview,8,"The study described in the title and abstract is highly relevant to prompt engineering as it deals with the generation of training data for intent classification using prompts with large language models like GPT-3. Although it does not address the 'hard prefix prompts' specifically, the research is indeed focused on utilizing prompting techniques to improve the data generation process for machine learning tasks, which is a core concept in prompt engineering. The relevance is not maximum because the study concentrates more on the application of prompt-generated data for classification and its quality rather than on the systematic study of the prompts themselves." -metaprompting: learning to learn better prompts,gpt-4-1106-preview,6,"The abstract describes research on prompting methods in natural language processing, specifically focusing on moving from 'hard prompts' to 'soft prompts' and proposing a new method called MetaPrompting that utilizes meta-learning for better prompt initialization. Although the study is highly relevant to the broader topic of prompt engineering, the specific term 'hard prefix prompts' is not the main focus of this abstract. Instead, the research emphasizes soft prompting and the improvement of prompt initialization. Hence, the relevance to 'hard prefix prompts' is indirect, as the study seems to address the transition from hard to soft prompts and the advancement of soft prompt techniques." -unraveling chatgpt: a critical analysis of ai-generated goal-oriented dialogues and annotations,gpt-4-1106-preview,7,"The paper titled 'unraveling chatgpt: a critical analysis of ai-generated goal-oriented dialogues and annotations' addresses the use of large pre-trained language models like ChatGPT for generating high-quality text, which is tangentially related to the study of prompt engineering. Although the specific focus on 'hard prefix prompts' is not mentioned, the exploration of 'prompting techniques' for data generation and annotation in AI models directly influences studies related to crafting prompts to achieve desired outputs. Thus, the relevance is quite high as it may provide insights into prompt efficiency and effectiveness, crucial for prompt engineering. However, the rating is not a full 10 because it does not directly discuss hard prefix prompts, which is the central theme of the prompt engineering study." 
-improving patient pre-screening for clinical trials: assisting physicians with large language models,gpt-4-1106-preview,8,"The paper discusses the use of InstructGPT and prompt-engineering techniques, such as chaining one-shot, selection-inference and chain-of-thought prompts, to improve the process of determining eligibility for clinical trials. Although the study is not directly focused on hard prefix prompts, it is within the domain of prompt engineering and examines how tailored prompts can enhance a language model's performance in a specific, practical application. Thus, the relevance rating is high due to the examination of prompts' design and efficacy in a real-world task, which is a central aspect of prompt engineering studies." -sinc: spatial composition of 3d human motions for simultaneous action generation,gpt-4-1106-preview,7,"The abstract discusses the use of large language models, particularly GPT-3, to understand the relationship between actions and body parts through prompt engineering ('what are the body parts involved in the action?'). This implies a relevance to prompt engineering study as it involves designing prompts to extract specific knowledge from the language model that can be used for another application, which in this case is 3D human motion synthesis. However, the main focus of the study is on the spatial composition of 3D human motions rather than prompt engineering itself, and thus the rating is not a perfect 10." -the potential and pitfalls of using a large language model such as chatgpt or gpt-4 as a clinical assistant,gpt-4-1106-preview,8,"The provided abstract describes studies assessing the performance of GPT-4 and ChatGPT in the medical field, specifically with tasks such as identifying patients with specific diagnoses and providing diagnostic assistance. The relevance to prompt engineering is high because the study involves the use of 'chain of thought and few-shot prompting', indicating that prompt engineering techniques were indeed utilized and studied in the context of their effectiveness in a real-world application. The rating is not a full 10 because the study does not solely focus on prompt engineering but also on the broader application and implications of using language models in clinical settings." -little giants: exploring the potential of small llms as evaluation metrics in summarization in the eval4nlp 2023 shared task,gpt-4-1106-preview,9,"The paper's focus on assessing the effectiveness of prompt-based techniques directly addresses prompt engineering, which is the practice of formulating prompts to elicit specific responses from language models. The use of various prompting techniques and the integration with zero-shot and one-shot learning methods are key components of prompt engineering studies. Although the paper's primary domain is quality estimation for summaries and machine translations, the core of the research involving systematic experiments with prompts is highly relevant to prompt engineering. The only reason the rating is not a perfect 10 is because it might be more narrowly focused on evaluation metrics rather than the broader context of prompt engineering."
-hitachi at semeval-2023 task 4: exploring various task formulations reveals the importance of description texts on human values,gpt-4-1106-preview,7,"While the paper primarily focuses on the task of human value detection behind arguments, it is relevant to prompt engineering because it also explores various task formulations, including question answering with chain-of-thought prompting. The exploration of different task approaches, the effectiveness of including description texts, and the evaluation of model performance directly relate to how prompts are engineered and optimized for specific NLP tasks. Additionally, the insights on zero-shot learning and the importance of task formulation could inform prompt design strategies. However, since the primary focus isn't solely on prompt engineering but a broader scope of task formulation, the relevance is not at its maximum." -learning to paraphrase sentences to different complexity levels,gpt-4-1106-preview,4,"While the study presented in the abstract does touch upon prompting strategies, which are part of prompt engineering, its focus seems to be more on the creation and use of datasets for sentence simplification, complexification, and paraphrasing. Prompt engineering generally refers to the design, testing, and optimization of prompts to improve performance of language models. The abstract indicates that the research includes experimentation on prompting strategies, which is relevant to prompt engineering; however, the main emphasis appears to be on dataset development and performance benchmarks rather than the intricate details of prompt engineering itself. Therefore, the relevance to prompt engineering study is moderate." -"investigating the perception of the future in gpt-3, -3.5 and gpt-4",gpt-4-1106-preview,7,"The given study indirectly relates to prompt engineering by exploring how models like GPT-3, GPT-3.5, and GPT-4 process and generate concepts of the future through different prompting techniques such as fine-tuning, prompt-tuning, and few-shot prompting. These methods fall under the broader category of prompt engineering. Although the study's primary focus is on the models' perception of time, rather than exclusively on prompt engineering efficiency or methodology, understanding the nuances of how different models perform with various prompt designs is relevant to prompt engineering practices. The detailed investigation into the efficacy of these prompting methods can provide insights into how to craft better prompts to achieve specific outcomes, which is a critical aspect of prompt engineering." -tree of clarifications: answering ambiguous questions with retrieval-augmented large language models,gpt-4-1106-preview,8,"The study introduces a novel framework, Tree of Clarifications (ToC), which is directly related to prompt engineering as it involves few-shot prompting to disambiguate open-domain questions. The method of recursively constructing a tree of disambiguations and leveraging external knowledge for generating long-form answers shows an application of designing and engineering prompts to improve question-answering systems. While it doesn't specifically mention 'hard prefix prompts', the concept is within the realm of prompt engineering, hence the high relevance rating. However, it doesn't fully match the exact concept of 'hard prefix prompts' as it does not mention systematic review of them directly." 
-evaluation of prompts to simplify cardiovascular disease information using a large language model,gpt-4-1106-preview,9,"The described study directly relates to prompt engineering by proposing and evaluating a 'rubric prompting' strategy to optimize the simplification of complex medical information using large language models. The focus on evaluating different prompting techniques, particularly the comparison with zero-shot or one-shot prompting methods, indicates a high relevance to the field of prompt engineering. The systematic approach to developing prompts that yield complete, readable, and syntactically simple outputs, especially in a critical domain like healthcare, illustrates the application of prompt engineering principles. Although not specifically using 'hard prefix prompts,' the study is highly pertinent as it discusses the design and impact of prompt structures on the quality of AI-generated text, reflecting on a major aspect of prompt engineering." -category-specific prompts for animal action recognition with pretrained vision-language models,gpt-4-1106-preview,7,"The study described in the abstract appears to be relevant to prompt engineering because it involves the development of a 'category-specific prompting module' which generates adaptive prompts for text and video inputs based on detected animal categories. This is a form of prompt engineering where prompts are crafted to improve the performance of a vision-language model on the task of animal action recognition. Although the focus is not on 'hard prefix prompts' specifically, the creation and utilization of tailored prompts is a pertinent aspect of prompt engineering. The relevance is not rated higher because the abstract does not provide details on how the prompts are engineered or whether hard prefix prompts are a part of the study, which would be critical for a 'comprehensive systematic review on hard prefix prompts.'" -deeplyrics: gpt2 for lyrics generation with finetuning and prompting techniques,gpt-4-1106-preview,7,"The study outlined in the abstract describes the use of 'tuning-free prompting' as a method to assist lyric generation with AI, indicating that it does involve prompt engineering. However, the specifics of 'hard prefix prompts', which are the main focus of the implied systematic review, are not explicitly mentioned. It implies work on prompting techniques without giving details on whether these are 'hard prefix prompts' or another form of prompting. Therefore, the relevance is significant but not fully aligned due to the lack of explicit mention of 'hard prefix prompts'." -"a survey of graph prompting methods: techniques, applications, and challenges",gpt-4-1106-preview,7,"The survey is highly relevant to prompt engineering as it discusses the 'pre-train, prompt, predict' training paradigm, which is at the core of how prompts are used within modern machine learning frameworks to make models generalize better with less labeled data. The focus on graph prompting methods indicates a novel approach to designing prompts using structured graph knowledge, which is a specific aspect within the broader field of prompt engineering. The relevance is not a full 10 because the survey is specialized in graph-based prompting rather than covering all aspects of prompt engineering, including 'hard prefix' prompts or other prompting techniques not related to graphs."
-help me think: a simple prompting strategy for non-experts to create customized content with models,gpt-4-1106-preview,9,"The abstract describes a novel approach to prompting language models. It is highly relevant to the study of prompt engineering as it directly addresses the problem of how non-expert users can effectively interact with such models. The HELP ME THINK strategy is a form of prompt engineering designed to aid users in generating customized content, an area of growing interest in the field. It also touches on the challenge of control within language model outputs, a central issue in prompt engineering. The slightly less than perfect score is due to the paper potentially not addressing a 'systematic review on hard prefix prompts' specifically, which would be necessary for a 10 rating." -neuro-symbolic causal language planning with commonsense prompting,gpt-4-1106-preview,9,"The paper presents a method called Neuro-Symbolic Causal Language Planner (CLAP) that directly addresses the challenge of eliciting procedural knowledge from large language models (LLMs) through advanced prompting techniques that involve commonsense knowledge. Given that prompt engineering involves the strategic construction of prompts to extract or generate specific responses from LLMs, this paper's focus on using prompts as causal interventions to improve language planning capabilities in AI systems is highly relevant to the field of prompt engineering. The fact that it also employs a Structural Causal Model (SCM) to construct structured prompts makes it even more pertinent, as it represents a sophisticated approach to prompt design. However, it does not focus exclusively on 'hard prefix prompts', thus the rating is not a full 10." -generative speech recognition error correction with large language models and task-activating prompting,gpt-4-1106-preview,9,"The study addresses the use of large language models (LLMs) for speech recognition error correction and investigates various prompting schemes, which directly relates to prompt engineering. The focus on in-context learning, task activation prompting, and the combination of causal instructions with demonstrations are key elements of prompt engineering, showing how different prompts can improve the performance of LLMs in specific tasks without fine-tuning. Although the study does not explicitly mention 'hard prefix prompts', it explores related methods of instruction prompting, making it highly relevant to prompt engineering studies." -llm-rec: personalized recommendation via prompting large language models,gpt-4-1106-preview,8,"The given abstract directly relates to prompt engineering, as it investigates various prompting strategies to improve the performance of large language models, particularly for personalized recommendations. The relevance to prompt engineering is high because the study specifically examines how different types of prompts can enhance LLM's capabilities. This is pertinent to prompt engineering as it contributes to understanding how LLMs can be tuned for better performance on specific tasks by using tailored prompts. The mention of 'hard prefix prompts' is not explicitly stated; however, the exploration of prompting strategies such as 'recommendation-driven' and 'engagement-guided' prompting falls within the broader scope of prompt engineering studies." 
-application of cognitive rehabilitation theory to the development of smart prompting technologies,gpt-4-1106-preview,4,"While the study addresses the use of prompting technologies, which is a form of human-computer interaction, it primarily focuses on cognitive rehabilitation and assistive technologies for older adults with cognitive impairments. The relevance to 'prompt engineering' in the context of hard prefix prompts and systematic review is tangential. The study could be peripherally related to prompt engineering in the way it seeks to optimize the design of prompts for a specific application (assistive technology), but it does not directly study or review the more general field of prompt engineering, especially as it might relate to conversational AI, machine learning or data input systems." -a general language assistant as a laboratory for alignment,gpt-4-1106-preview,8,"The abstract describes a study that investigates various techniques and evaluations, including prompting, to align large language models with human values. While it does not specifically mention 'hard prefix prompts,' prompting in general is a significant aspect of prompt engineering. The investigation into baseline techniques for alignment has relevance to the field of prompt engineering, as it can inform the development of more sophisticated prompts that are better aligned with human intentions. The study also examines the scalability of different training objectives relevant to alignment, which is pertinent to advancing the effectiveness of prompt engineering in large language models. However, without a focus on 'hard prefix prompts' specifically, the relevance is not absolute, hence the rating is not a perfect score." -enabling conversational interaction with mobile ui using large language models,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering as it explores the design of prompting techniques to adapt large language models (LLMs) for conversational interactions with mobile UIs. This indicates a direct engagement with the process of developing and refining prompts to elicit desired responses from LLMs, which is the essence of prompt engineering. However, it is not exclusively focused on 'hard prefix prompts' as might be suggested by a comprehensive systematic review on such. Its focus on mobile UIs also suggests a specific application area rather than a broad study of prompting techniques. Nevertheless, the work contributes significantly to the field of prompt engineering by demonstrating the practical application of LLMs in a relevant domain without the need for task-dedicated resources." -recent advances in natural language processing via large pre-trained language models: a survey,gpt-4-1106-preview,7,"The title and abstract indicate that the survey covers pre-trained language models and their applications in various NLP tasks, including 'prompting.' Since prompt engineering is a subset of techniques applied to language models to improve performance on various tasks, this survey's content is relevant to the study of prompt engineering, particularly concerning the 'prompting' methods mentioned in the abstract. However, it does not appear to focus exclusively on 'hard prefix prompts' or prompt engineering, hence the rating is not a full 10." -are large language models ready for healthcare? 
a comparative study on clinical language understanding,gpt-4-1106-preview,8,"The provided abstract discusses the evaluation of large language models (LLMs) for clinical language understanding tasks in healthcare, which is indirectly related to prompt engineering, as it involves creating effective prompts for complex tasks. More specifically, it introduces a new prompting strategy known as self-questioning prompting (SQP), which is a direct application of prompt engineering aimed at improving the performance of LLMs on healthcare-related tasks. Although the main focus is not on 'hard prefix prompts', SQP likely employs principles of prompt engineering to elicit better responses from the models. This justifies the high relevance rating, although it is not a perfect match since it doesn't focus solely on prompt engineering but includes broader topics of LLM application in healthcare." -application of cognitive rehabilitation theory to the development of smart prompting technologies,gpt-4-1106-preview,4,"The document appears to discuss the application of cognitive rehabilitation theory to the development of smart prompting technologies for assisting older adults with cognitive impairments. While it does touch upon the design and effectiveness of prompts (which is indirectly related to prompt engineering), the focus is more on the application of CRT in the development of assistive technologies rather than on a comprehensive systematic review of hard prefix prompts or on the specifics of engineering prompt systems. Thus, the relevance to prompt engineering study, particularly in the context of a comprehensive systematic review on hard prefix prompts, is moderately low, warranting a rating of 4." -task effects on linguistic complexity and accuracy: a large-scale learner corpus analysis employing natural language processing techniques,gpt-4-1106-preview,6,"While the title 'task effects on linguistic complexity and accuracy: a large-scale learner corpus analysis employing natural language processing techniques' does not specifically mention 'prompt engineering' or 'hard prefix prompts', the abstract indicates relevance by discussing the influence of tasks or prompts on linguistic performance in second language acquisition. Prompt engineering is crucial in designing effective tasks that can elicit the desired complexity and accuracy in language learning, which is pertinent to the study at hand. However, since the focus is more broadly on task effects rather than the specifics of engineering prompts, especially 'hard prefix prompts', the relevance is moderate." -voyager: an open-ended embodied agent with large language models,gpt-4-1106-preview,7,"The abstract describes an AI agent (Voyager) that uses a new iterative prompting mechanism, which is relevant to prompt engineering studies. This mechanism involves environment feedback and self-verification processes, which are significant topics within prompt engineering research. However, the focus is on an embodied agent in a gaming environment, rather than on hard prefix prompts. While there is significant overlap with interests in prompt engineering, the specific focus on 'hard prefix prompts' in a comprehensive systematic review is not directly addressed, thus the relevance is rated as a 7 instead of a higher score." 
-the flan collection: designing data and methods for effective instruction tuning,gpt-4-1106-preview,9,"The given abstract is highly relevant to prompt engineering study as it specifically discusses design decisions, task balancing, enrichment techniques, and mixed prompt settings, which are central concepts in the development and improvement of instruction tuning for language models. Despite not using the term 'hard prefix prompts', it directly addresses the broader domain of prompt optimization and the impact on model performance, therefore meriting a high relevance rating." -learning to compose soft prompts for compositional zero-shot learning,gpt-4-1106-preview,8,"The abstract discusses the development of Compositional Soft Prompting (CSP), which is directly relevant to prompt engineering, as CSP is a form of prompt-related technique designed to improve the interaction between users (or systems) and AI models, specifically pretrained vision-language models. While the reference to 'soft prompts' and not 'hard prefix prompts' might suggest a slight deviation, the overall study is still highly pertinent to the field of prompt engineering, especially given its focus on parameter efficiency, zero-shot learning, and the manipulation of prompt structures (attributes and objects) to optimize model performance. Hence, the rating of 8 acknowledges its strong relevance with a minor deduction for the difference in prompt type (soft versus hard)." -generating training data with language models: towards zero-shot language understanding,gpt-4-1106-preview,8,"The study's focus on using prompts to generate class-conditioned texts with a unidirectional PLM directly pertains to the field of prompt engineering. The prompts guide the text generation process, which is a practical application of hard prefix prompts within the context of zero-shot learning. Although the study isn't exclusively a systematic review on hard prefix prompts, it demonstrates a relevant application of prompts in the engineering process to improve NLU tasks, making it highly relevant to the subject." -factual probing is [mask]: learning vs. learning to recall,gpt-4-1106-preview,8,"The abstract discusses the use of cloze-style prompts to retrieve factual information from a pre-trained language model, which is highly relevant to the field of prompt engineering. The introduction of OptiPrompt, which optimizes prompts in continuous embedding space, is a direct contribution to the development of prompt engineering techniques. The paper's investigation into the distinction between 'learning' and 'learning to recall' is also pertinent to understanding how models respond to prompts. However, the paper does not specifically address 'hard prefix prompts,' hence the rating is not a full 10." -"using chatgpt standard prompt engineering techniques in lesson preparation: role, instructions and seed-word prompts",gpt-4-1106-preview,8,"The abstract provided discusses the use of standard prompt engineering techniques (which could potentially include hard prefix prompts as a subset) in the context of lesson preparation for an AI tool, specifically ChatGPT. It emphasizes the effectiveness of structuring prompts with additional defined roles and seed words. Although it does not explicitly mention 'hard prefix prompts,' it is closely related to the broader topic of prompt engineering. 
The study's findings could contribute valuable insights into the usage of specific prompting methods, which may include but are not explicitly limited to hard prefix prompts. Therefore, it is relevant to the study of prompt engineering, but it would have a higher relevance rating if it directly addressed hard prefix prompts." -generative ai tools in art education: exploring prompt engineering and iterative processes for enhanced creativity,gpt-4-1106-preview,8,"The study directly addresses prompt engineering within the context of generative AI tools in art education, which involves teaching students how to craft and refine prompts for creative purposes. Although the focus is tailored to art and design, the principles of prompt engineering discussed are relevant to the broader field of study. The emphasis on iterative processes and the detail-oriented approach required for effective prompt engineering are particularly pertinent. The study's lower relevance in terms of being a 'comprehensive systematic review on hard prefix prompts' specifically is acknowledged, hence the rating is not a full 10." -how does prompt engineering affect chatgpt performance on unsupervised entity resolution?,gpt-4-1106-preview,9,"The study directly investigates the impact of prompt engineering on the performance of ChatGPT in the context of unsupervised entity resolution, which is a relevant topic in natural language processing and artificial intelligence. The systematic experimental approach to understanding how different prompts can influence the results of entity resolution tasks using a language model like ChatGPT is highly relevant to studies in prompt engineering. The deduction of one point is due to the preliminary nature of the results mentioned in the abstract, which suggests that there could be further work required to fully understand the relationship and generalize the findings." -multimodal propaganda detection via anti-persuasion prompt enhanced contrastive learning,gpt-4-1106-preview,8,"The relevance to prompt engineering is substantial, given that the study introduces a novel model (APCL) that utilizes prompt engineering as a core component for detecting propaganda in memes. The model specifically incorporates category words from propaganda techniques in its prompt engineering strategy, using these prompts to enhance contrastive learning in a multi-label classification task. Though the focus is on propaganda detection rather than prompt engineering itself, the use of 'persuasion' and 'anti-persuasion' prompts directly relates to the study of how prompts can be engineered to improve machine learning tasks. Therefore, the rating is high but not maximum because prompt engineering is a means to an end in this study, rather than the primary focus." -user-friendly image editing with minimal text input: leveraging captioning and injection techniques,gpt-4-1106-preview,8,"The study focuses on making prompt engineering more user-friendly by categorizing prompts by semantic details and proposing methods to simplify the text prompt process for image editing, which is relevant to prompt engineering. The relevance is marked down slightly because the abstract suggests a specific application to image editing rather than a comprehensive systematic review on hard prefix prompts, but it still contributes to the broader topic of prompt optimization and efficiency."
-ascm: an answer space clustered prompting method without answer engineering,gpt-4-1106-preview,8,"This paper is highly relevant to prompt engineering as it proposes an innovative approach to prompt-based learning, addressing limitations in answer mapping by using semantic clustering and synonym initialization. Although not explicitly focused on 'hard prefix prompts,' the concept of improved answer-category mapping in prompt-based learning and the influence on model performance is central to the study of efficient and effective prompt designs. The model's approach of clustering answers to manage diverse linguistic expressions without manual or automatic answer constraints is integral to the broader conversation of how prompts interact with pre-trained language models in tasks like classification and NLI. The semi-supervised stair learning method could also contribute to a better understanding of knowledge distillation in the context of prompt engineering." -automatic code summarization via chatgpt: how far are we?,gpt-4-1106-preview,6,"The abstract discusses evaluating ChatGPT's performance on code summarization tasks using specific prompts, which can be considered a form of 'prompt engineering.' Prompt engineering involves crafting prompts to guide a model towards specific desired outputs or behaviors. While the abstract does not focus exclusively on 'hard prefix prompts,' it does entail exploring appropriate prompts to improve ChatGPT's performance. Therefore, it touches upon aspects of prompt engineering which are relevant to the study of how prompts affect an LLM's output, even though it isn't focused specifically on the systematic review of hard prefix prompts as a topic." -do llms possess a personality? making the mbti test an amazing evaluation for large language models,gpt-4-1106-preview,7,"The paper addresses the feasibility of using the Myers-Briggs Type Indicator (MBTI) to evaluate large language models (LLMs), which involves investigating if the personality types of LLMs can be influenced by prompt engineering. This suggests that the study explores, to some extent, how prompts can be used to shape the output of LLMs, aligning with the broader topic of prompt engineering. However, the focus on MBTI and personality assessment is somewhat tangential to the core aspects of prompt engineering, such as prompt formats or effectiveness, and does not directly address the concept of hard prefix prompts. Therefore, while the study is related to prompt engineering, it is not entirely centered on it, leading to the rating of 7 for relevance." -the application of chatgpt in healthcare progress notes: a commentary from a clinical and research perspective,gpt-4-1106-preview,7,"The text discusses the use of ChatGPT, an AI-driven language model, in the context of healthcare progress notes, emphasizing the relevance of 'prompt engineering techniques' for effective integration into clinical practice. While the text does not focus specifically on a 'comprehensive systematic review on hard prefix prompts,' it does reference the application of prompt engineering in a practical setting, demonstrating its significance in real-world applications and hence has relevance to the field of prompt engineering. That said, the focus on healthcare rather than the technical aspects of prompt engineering itself means the relevance is substantial but not complete." 
-"augmented behavioral annotation tools, with application to multimodal datasets and models: a systematic review",gpt-4-1106-preview,6,"The systematic review discusses the evolution of annotation tools, which are a fundamental part of creating datasets for machine learning, and mentions the increasing emphasis on prompt engineering in the context of training sophisticated multimodal datasets. While the main focus of the paper is on annotation methods and not specifically on hard prefix prompts, the implications for prompt engineering in the context of adding qualitative fine-tuning to models is relevant. This indicates a moderate level of relevance to prompt engineering studies, especially in the context of how these annotation tools may impact the future of prompt engineering as part of machine learning model development." -copilot for xcode: exploring ai-assisted programming by prompting cloud-based large language models,gpt-4-1106-preview,8,"The paper's relevance to prompt engineering is significant as it describes how an AI-assisted tool, Copilot for Xcode, utilizes prompt engineering through a chat interface to enable features such as code generation, autocompletion, documentation, and error detection. The integration of Large Language Models with a development environment and the tool's ability to process 'small' decisions for program composition signifies the application of prompt engineering techniques, making it highly relevant to the study of prompt engineering, especially within the domain of software development and AI-assisted programming tools." -towards equitable representation in text-to-image synthesis models with the cross-cultural understanding benchmark (ccub) dataset,gpt-4-1106-preview,7,"The abstract discusses a 'culturally-aware priming approach' and mentions the use of automated prompt engineering with GPT-3, which is relevant to the topic of prompt engineering. However, the main focus seems to be on text-to-image synthesis and fighting bias through data curation, rather than on the details of prompt engineering itself. Therefore, while prompt engineering is a component of the study, it is not the central topic, hence the rating of 7 for relevance." -omniscientdb: a large language model-augmented dbms that knows what other dbmss do not know,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering study as it discusses automatic prompt engineering within the context of a database management system (DBMS). It specifically addresses the issue of constructing appropriate prompts to a large language model in response to SQL queries for the purpose of data augmentation. The paper's focus on exploring different prompting techniques and their application in a DBMS setting makes it pertinent to the field of prompt engineering. However, it does not cover the topic of hard prefix prompts exclusively or systematically, as the abstract suggests a broader application, hence the rating is not a full 10." -automatic essay scoring method based on multi-scale features,gpt-4-1106-preview,4,"The study discusses a method for automated essay scoring (AES) that integrates Sentence-BERT for sentence vectorization, deep neural networks, and shallow linguistic features, which includes prompt-related features. Although prompt-related features are mentioned, the focus is on scoring essays rather than engineering prompts which suggests a tangential connection to prompt engineering study. 
The method addresses the extraction and integration of features in AES, which is peripherally related to understanding prompts in the context of their relevance to essays but does not constitute a comprehensive systematic review on hard prefix prompts. Therefore, the relevance to prompt engineering study is moderate." -how understanding large language models can inform their use in physics education,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering study as it specifically discusses the impact of prompt-engineering techniques on an LLM's (ChatGPT) performance in physics education. It includes practical illustrations of how prompt engineering can be used to aid understanding and problem-solving in physics, which is a direct application of prompt engineering. The only reason the rating isn't a 10 is that the paper's focus is somewhat narrow—specific to physics education—and does not address the broader spectrum of hard prefix prompts across various domains." -automatic bug fixing via deliberate problem solving with large language models,gpt-4-1106-preview,9,"The abstract discusses leveraging a large language model to improve automated program repair, specifically by using an interactive prompting technique called Tree of Thoughts (ToT). Since this technique is directly related to the use and innovation of prompt engineering to enhance the model's ability to solve complex tasks such as bug fixing, the relevance to prompt engineering study is very high. The only reason it's not a perfect 10 is that the description doesn't solely focus on the prompt engineering aspect but also on the overall capability of large language models in automated program repair." -data-driven approach for formality-sensitive machine translation: language-specific handling and synthetic data generation,gpt-4-1106-preview,8,"The paper presents an empirical prompt engineering strategy as part of its data-driven approach to FSMT. Although it does not focus solely on hard prefix prompts, the mention of prompt engineering indicates that this aspect was a significant component of their research methodology. The study's focus on artificial data generation and tailoring the model performance using prompt engineering suggests that the paper would be relevant to someone interested in prompt engineering, even if the main context is machine translation rather than a 'comprehensive systematic review' of hard prefix prompts specifically." -noisy exemplars make large language models more robust: a domain-agnostic behavioral analysis,gpt-4-1106-preview,8,"The abstract discusses the use of systematic approaches in prompt engineering to assess the robustness of large language models (LLMs) in multi-hop reasoning tasks. While it doesn't specifically mention 'hard prefix prompts', it does cover the broader topic of prompt engineering and the perturbations used to test and potentially improve the models' responses. The research's emphasis on few-shot prompting and robustness is highly relevant to the field of prompt engineering, thus warranting a high rating." -embracing ai for better quality engineering,gpt-4-1106-preview,5,"The provided text briefly mentions 'prompt engineering for testing use cases', indicating that prompt engineering is indeed part of the study in the context of quality engineering with AI. 
However, the focus seems to be on a broader application of AI in quality engineering and does not provide specific details on hard prefix prompts or a comprehensive systematic review on such prompts in prompt engineering study. Therefore, the relevance is moderate, as it touches on the subject but does not delve deeply into it." -telehealth intensive care unit nurse surveillance of sepsis,gpt-4-1106-preview,6,"The article is somewhat relevant to prompt engineering study, as it involves the development of a 'sepsis prompt' that integrates usability and human factors engineering standards. Although the focus is on the medical application of prompts, rather than the hard prefix prompts often used in machine learning or computational contexts, the principles of design, usability testing, and alert optimization could provide insights into prompt engineering methodologies. The evaluation of sensory processing, cognitive processing, and user satisfaction has parallels in the design of effective prompts in other fields. However, the specific application to a telehealth ICU scenario and sepsis surveillance limits the direct applicability to general prompt engineering studies." -original research,gpt-4-1106-preview,7,"The abstract describes a study that focuses on the use of prompt engineering techniques within the context of art and design education, specifically using OpenAI's DALL-E2. The research explores how students were taught to refine their ideas and prompts iteratively to produce better visual outcomes with generative AI tools. This emphasis on the iterative refinement of prompts is directly relevant to the field of prompt engineering, as it pertains to understanding and improving the way prompts are constructed and their effects on the outputs generated by AI models. However, the study also touches on ethical considerations, the replacement of artists, and the integration of AI tools into the art and design curriculum, which, while important, are somewhat tangential to the technical and methodological aspects of prompt engineering. Therefore, the rating reflects the study's substantial relevance to prompt engineering due to the focus on teaching and practicing prompt refinement, but it is not exclusively centered on the systematic study of hard prefix prompts." -chatgpt-based debate game application utilizing prompt engineering,gpt-4-1106-preview,9,"The abstract describes a study focused on the application of prompt engineering within an educational debate game, utilizing ChatGPT. It illustrates the use of prompt engineering to control and refine the outputs of the language model for a specific domain, which is directly related to prompt engineering study. The research aims to improve ChatGPT's responses by providing specific instructions and case-based prompts, aligning perfectly with the concept of hard prefix prompts in prompt engineering. One point is docked because the abstract does not explicitly mention 'hard prefix prompts' as a focus; however, the content is highly relevant to the overall field of prompt engineering." -exploring the impact of prompt engineering on chatgpt 3.5 text summarization: a bert score evaluation,gpt-4-1106-preview,9,"The described study directly investigates the impact of prompt engineering on ChatGPT 3.5, with a particular emphasis on text summarization tasks. It measures the performance by using BERT score evaluation, which is highly relevant to understanding how different prompts can affect the output of AI in NLP applications. 
Thus, the relevance to prompt engineering studies is high. The reason for not giving a perfect score is the absence of a 'TL;DR' which could provide a concise summary of the results, an element that could further solidify its relevance by directly showcasing how prompts influence the summarization process." -promptor: a conversational and autonomous prompt generation agent for intelligent text entry techniques,gpt-4-1106-preview,9,"The abstract discusses the creation and impact of an agent called Promptor, which directly relates to the field of prompt engineering, as it generates prompts for language models. This is highly relevant because it addresses the challenge of creating effective prompts, a core issue in prompt engineering. Moreover, it involves actual user studies to compare prompts generated by Promptor against those created by human designers. The slight deduction in rating is due to the abstract not focusing exclusively on 'hard prefix prompts,' which was specified in the original prompt, but the overall study still contributes significantly to the domain of prompt engineering." -"artificial intelligence in engineering and society: blue skies, black holes, and the job of requirements engineers (keynote)",gpt-4-1106-preview,6,"The abstract provides a comprehensive overview of artificial intelligence's impact on engineering and society, touching briefly on the use of large language models to address requirements engineering problems which may involve prompt engineering to some extent. However, the focus on prompt engineering, particularly in the context of 'hard prefix prompts,' is not explicit or central to the abstract. It mentions the potential for using prompts to check requirements completeness and to generate models, suggesting some relevance to the field of prompt engineering. The relevance rating is above average because prompt engineering is definitely a subset of the topics covered, but since the abstract does not focus on the systematic review of hard prefix prompts specifically, it does not score higher." -simple llm prompting is state-of-the-art for robust and multilingual dialogue evaluation,gpt-4-1106-preview,8,"The abstract discusses the use of a novel framework that incorporates prompting Large Language Models (LLMs) for improving dialogue evaluation, which is relevant to prompt engineering. Prompt engineering involves designing inputs that help LLMs produce more effective and relevant outputs, and the context here is applying such techniques to evaluate dialogues in multiple languages and ensuring robustness. The relevance might not be a perfect 10 because it is specific to dialogue evaluation rather than prompt engineering in general, but the principles and implications of this research can contribute significantly to the field of prompt engineering as it applies to dialogue systems." -towards understanding chain-of-thought prompting: an empirical study of what matters,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering as it delves into the specifics of how Chain-of-Thought prompting impacts the performance of language models. Understanding the effectiveness of CoT, even with invalid demonstrations, offers significant insights into prompt design and how language models can generate coherent reasoning steps. This may directly influence future prompt engineering strategies." 
-improving language model prompting in support of semi-autonomous task learning,gpt-4-1106-preview,9,"The abstract provided discusses the development of an agent capability for constructing effective prompts that elicit useful responses from language models for the purpose of learning new tasks. This is highly relevant to the field of prompt engineering, as it directly involves optimizing interaction strategies (prompts) to improve the utility of language model outputs in specific contexts. Although the term 'hard prefix prompts' from the initial prompt is not explicitly mentioned, the essence of the study is deeply intertwined with the principles of prompt engineering, hence the high relevance rating." -boosting theory-of-mind performance in large language models via prompting,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering as it investigates the effectiveness of in-context learning prompts in improving the theory-of-mind performance of large language models (LLMs). It directly addresses how tailored prompts can enhance the reasoning capabilities of AI systems, which is a core aspect of prompt engineering. Although the focus is specifically on theory-of-mind tasks, the findings have broader implications for the field of prompt engineering, especially concerning the design of prompts that can guide LLMs towards better understanding and interpreting human mental states and intentions." -smart-llm: smart multi-agent robot task planning using large language models,gpt-4-1106-preview,6,"The study mentioned revolves around the use of Large Language Models (LLMs) for converting high-level instructions into task plans for multi-robot operations, which includes the use of programmatic LLM prompts within the few-shot prompting paradigm. This suggests a relevance to prompt engineering as it involves designing and using prompts to achieve specific outcomes with LLMs. However, the focus is more on robotics and task planning rather than solely on the study of prompt engineering techniques, hence the rating is a medium 6 out of 10 for relevance to prompt engineering studies specifically focused on hard prefix prompts." -prompting large language models with speech recognition abilities,gpt-4-1106-preview,4,"The described study focuses on extending the capabilities of large language models to perform automatic speech recognition by integrating an audio encoder. It does not primarily concentrate on the study or application of hard prefix prompts in the context of prompt engineering. However, because prompt engineering can involve methods for effectively instructing or incorporating additional modalities (like audio) into language models, this paper indirectly relates to the broader field of prompt engineering. The relevance is not direct as it doesn't address hard prefix prompts specifically, but the insights from such a study could potentially influence prompt engineering strategies for multimodal models." -"see, think, confirm: interactive prompting between vision and language models for knowledge-based visual reasoning",gpt-4-1106-preview,7,"The paper introduces a framework, IPVR, which integrates interactive prompting mechanisms within a vision-language reasoning context. While the study primarily focuses on knowledge-based visual reasoning tasks, the use of prompting in the 'think stage' directly relates to prompt engineering as it involves designing prompts to steer a large language model's (LLM) output. 
This is relevant to the concept of 'hard prefix prompts' which consist of prefixed instructions that guide the model's generation process. Thus, the relevance to prompt engineering is significant, but not exclusive since the paper also emphasizes few-shot learning, transparency, and trustworthiness in reasoning, deviating from prompt engineering as the sole focus." -satisfiability-aided language models using declarative prompting,gpt-4-1106-preview,6,"The abstract details a novel approach to improve reasoning capabilities of large language models (LLMs) by using a satisfiability-aided language modeling (SatLM). Although it does not specifically mention 'hard prefix prompts' or 'prompt engineering,' the integration of an automated theorem prover to enhance the model's problem-solving abilities indirectly relates to the broader field of prompt engineering, where devising the right prompts to elicit desired outcomes from language models is crucial. The approach of generating a declarative task specification could be seen as part of the prompt engineering process, since it involves guiding the LLM to produce useful outputs for theorem proving. However, the lack of explicit focus on prompt engineering techniques limits the relevance to a comprehensive systematic review on hard prefix prompts, thus warranting a middling score." -"prompting and evaluating large language models for proactive dialogues: clarification, target-guided, and non-collaboration",gpt-4-1106-preview,8,"The abstract points to a study focused on the evaluation and enhancement of conversational systems using Large Language Models through prompt engineering techniques such as the 'Proactive Chain-of-Thought'. Although the main emphasis does not appear to be on 'hard prefix prompts' specifically, the relevance to prompt engineering is clear as it discusses prompting strategies and schemes to handle complex dialogue scenarios. This aligns with the study of how different prompts can influence the behavior and responses of language models. However, because it does not explicitly mention 'hard prefix prompts', it cannot receive a perfect score for relevance." -query expansion by prompting large language models,gpt-4-1106-preview,6,"The abstract describes using various prompts, including Chain-of-Thought, in the context of query expansion leveraging Large Language Models (LLMs). The relevance to prompt engineering is clear since it specifically mentions the study of different types of prompts to optimize the performance of LLMs in a search-related task. However, it does not directly address 'hard prefix prompts,' indicating a comprehensive systematic review on a subset of prompt engineering but not covering the full scope that might be suggested by the prompt 'a comprehensive systematic review on hard prefix prompts.' Therefore, while it is relevant due to its focus on prompt types and their effect on LLMs, it's not exactly aligned with the outlined study of hard prefix prompts." -pive: prompting with iterative verification improving graph-based generative capability of llms,gpt-4-1106-preview,8,"The study involves a specific application of prompt engineering in the context of generating structured data from large language models (LLMs). The introduction of a framework (PiVe) that uses fine-grained prompts through iterative verification to enhance an LLM's output is directly related to the mechanics of designing effective prompts, which is a key aspect of prompt engineering. 
While the focus is on improving graph-based generation, which is a specialized subfield, the core concept of using prompts iteratively to refine outcomes is highly relevant to prompt engineering studies. The rating is not a perfect 10 as the extract does not mention 'hard prefix prompts' directly, but the methodology is clearly within the realm of prompt engineering." -enhancing small medical learners with privacy-preserving contextual prompting,gpt-4-1106-preview,8,"The abstract describes a study focused on enhancing the capabilities of small language models (SLMs) in the medical field through advanced prompting techniques that involve large language models (LLMs) without compromising patient privacy. The core of the study revolves around prompt engineering by designing a system that uses LLMs to generate contextual prompts, which then assist SLMs in performing medical tasks more effectively. This falls under the realm of prompt engineering as it pertains to the creation and optimization of prompts to elicit desired responses from language models. Although it is specific to the medical domain and privacy preservation, the principles and methodologies employed are relevant to the broader study of prompt engineering, especially in how it can be tailored to enhance the performance of smaller models within confidential constraints." -grammar prompting for domain-specific language generation with large language models,gpt-4-1106-preview,8,"The abstract describes an approach to improve the performance of large language models on domain-specific language generation tasks by using grammar prompting. Although the term 'hard prefix prompts' is not explicitly mentioned, grammar prompting can be considered a form of structured prompt engineering, and the systematic review would likely be interested in various methods of prompt engineering, including grammar prompting. This would make the study significantly relevant to those looking to understand different prompting techniques, especially in the context of generating highly structured languages. The relevance is not rated as a full 10 because the abstract does not directly address a review on 'hard prefix prompts' but rather discusses a related concept in prompt engineering." -allies: prompting large language model with beam search,gpt-4-1106-preview,7,"The described study, 'allies: prompting large language model with beam search', presents a method that iteratively refines and expands on initial queries. This iterative process of generating new queries can be seen as a form of prompt engineering, where the goal is to improve the performance of a large language model for complex tasks. Although the study does not directly focus on 'hard prefix prompts' as specified in the prompt engineering study request, the concept of refining and modifying prompts to leverage hidden knowledge aligns with the broader field of prompt engineering. Therefore, the relevance to prompt engineering is significant but not entirely focused on the specific aspect of 'hard prefix prompts'." -prompting language-informed distribution for compositional zero-shot learning,gpt-4-1106-preview,7,"The abstract indicates that the paper introduces a model called PLID that uses prompting strategies with pre-trained large language models for compositional zero-shot learning, which aligns with the field of prompt engineering. While the term 'hard prefix prompts' is not directly mentioned, prompting language-informed distributions could potentially involve relevant concepts. 
The relevance is rated as 7 because prompt engineering constitutes a significant aspect of the research, but it's not clear if it specifically and directly addresses 'hard prefix prompts' as the primary focus." -retrieval-augmented gpt-3.5-based text-to-sql framework with sample-aware prompting and dynamic revision chain,gpt-4-1106-preview,9,"The paper is highly relevant to prompt engineering as it delves into the utilization of large language models (LLMs) and the design of efficient prompts to generate SQL queries from natural language questions. It directly addresses the challenges of prompt learning in contexts that require precise syntax, such as SQL, and proposes innovative solutions to improve the process. The concepts of retrieval-augmented prompting, sample-aware prompting, and a dynamic revision chain are advanced techniques within the scope of prompt engineering, showing how refined prompting strategies can lead to better model performance on specialized tasks." -towards better chain-of-thought prompting strategies: a survey,gpt-4-1106-preview,9,"The abstract indicates that the study is a systematic survey of the Chain-of-Thought (CoT) prompting technique, a relevant aspect of prompt engineering for large language models (LLM). CoT is directly tied to the strategies used to elicit better performance from LLMs, which is a central concern of prompt engineering. The survey's aim to provide a comprehensive analysis of, and guide to, the influencing factors of CoT prompting makes it highly relevant. However, since it does not cover the 'hard prefix prompts' explicitly, but rather prompting strategies as a whole, one point is deducted, thus not making it a perfect 10." -"reinforcement learning in the era of llms: what is essential? what is needed? an rl perspective on rlhf, prompting, and beyond",gpt-4-1106-preview,7,"The paper in question discusses Reinforcement Learning from Human Feedback (RLHF) and its applications to Large Language Models (LLMs). Prompt engineering is relevant to the use of LLMs, as it encompasses the techniques and strategies used to effectively instruct LLMs to produce desired outputs. While the paper does not focus narrowly on 'hard prefix prompts' specifically, the discussion around RLHF and prompting evaluation is pertinent to prompt engineering as a whole. Understanding RLHF and its implications can contribute to more advanced prompt engineering methods, particularly in evaluating and optimizing prompts for better performance in various tasks assigned to LLMs. Thus, the relevance to prompt engineering study is significant, though not exclusively focused on hard prefix prompts." -can instruction fine-tuned language models identify social bias through prompting?,gpt-4-1106-preview,8,"The study is relevant to prompt engineering as it specifically investigates the use of zero-shot prompting, including Chain-of-Thought (CoT) prompts, to evaluate the capability of language models at bias identification tasks. Since prompt engineering encompasses designing and refining the prompts given to language models to elicit specific types of responses or measure certain capabilities, the study's focus on how these prompts can be used to detect social biases is pertinent to the field. However, the study does not appear to specifically address 'hard prefix prompts', which would be necessary for a 10 rating since the initial query asked for relevance to prompt engineering studies focused on hard prefix prompts." 
-approximating online human evaluation of social chatbots with prompting,gpt-4-1106-preview,9,"The study introduces a Dialog system Evaluation framework based on Prompting (DEP), which directly relates to prompt engineering as it involves using large language models conditioned with specific prompts to evaluate conversational agents. This is highly relevant to the study of prompts and their impact on the performance of language models. The relevance is not a full 10 because the study seems to be more focused on evaluating chatbots rather than on the 'hard prefix prompts' and the methodology might be broader than hard prefix prompts alone." -march in chat: interactive prompting for remote embodied referring expression,gpt-4-1106-preview,7,"The provided title and abstract describe a study that engages with large language models (LLMs) for the task of Vision-and-Language Navigation (VLN), particularly focusing on generating navigation plans from high-level instructions — a form of interactive prompting. Although it doesn't directly address the concept of 'hard prefix prompts' in the described system, the use of prompts to communicate with LLMs is relevant to the field of prompt engineering. The March-in-Chat (MiC) model's interactive prompting mechanism that adapts to visual observations could lend insights into how prompt engineering can be applied in dynamic, real-world environments. While the study emphasizes action planning over strict prompting techniques, the interaction between the LLM and the environment via prompts and the adaptability of these prompts is related to the broader topic of engineering prompts for specific tasks. Hence, the rating reflects that the paper has relevance but is not entirely focused on 'hard prefix prompts' specifically within prompt engineering study." -prompting a large language model to generate diverse motivational messages: a comparison with human-written messages,gpt-4-1106-preview,9,"The study directly investigates prompt engineering by comparing the effectiveness of different prompt structures on the output diversity of a large language model (GPT-4). The use of a crowdsourcing pipeline as a model for constructing LLM prompts, and then measuring the impact on message diversity, provides empirical insights into the principles of prompt engineering. It explores the nuances of constructing prompts based on successful human instruction strategies and their potential utility in eliciting quality and diverse outputs from AI systems. This is highly relevant to the field of prompt engineering, although not focused on 'hard prefix prompts' specifically, it evaluates the broader concept of structured prompting." -prompting gpt-3.5 for text-to-sql with de-semanticization and skeleton retrieval,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering as it discusses a framework for improving the performance of large language models (LLMs) on the Text-to-SQL task, which is inherently based on the concept of providing effective prompts to the model. The de-semanticization process and skeleton retrieval align with hard prefix prompts since they involve manipulating the input to the LLM to enhance its understanding and output. This systematic approach to tailoring demonstrations and prompts to an LLM's requirements is a direct application of prompt engineering strategies. The reason why it's not a full 10 is that it focuses specifically on Text-to-SQL tasks, which is just a subset of all possible applications of prompt engineering." 
-large language models can self-improve,gpt-4-1106-preview,8,"The abstract outlines a method for self-improvement of large language models using 'Chain-of-Thought' prompting and self-consistency without requiring ground truth labels. This is highly relevant to the field of prompt engineering, as it deals with the creation and use of specific prompts ('high-confidence' rationale-augmented answers) to enhance a model's performance. The study's relevance is not a full 10 because the prompt engineering study focuses specifically on 'hard prefix prompts,' and it is not clear from the abstract if the 'high-confidence' prompts used exactly fit under this category. However, the techniques are closely related to prompt engineering and have implications for the development of prompts used in training LLMs." -multimodal chain-of-thought reasoning in language models,gpt-4-1106-preview,7,"The abstract pertains to the field of language models and their ability to perform complex reasoning, a topic which is inherently connected to prompt engineering as it explores how prompts can be structured to improve performance. While the study focuses on CoT (chain-of-thought) prompting, which is a specific technique within prompt engineering, it also introduces a multimodal approach by incorporating both text and images. The relevance to prompt engineering is significant, as the multimodal CoT could be a novel prompt engineering strategy, but it does not directly address hard prefix prompts, which would have been the direct subject of a prompt engineering study according to the initial prompt inquiry. Therefore, the rating is not a perfect score." -towards expert-level medical question answering with large language models,gpt-4-1106-preview,8,"The abstract provided discusses the use of Large Language Models (LLMs) and their application in medical question answering. It emphasizes the role of prompting strategies, including a 'novel ensemble refinement approach', which are essential components of prompt engineering. This indicates that the study involves research into optimizing prompts for LLMs to improve their performance in a specific domain, which is highly relevant to the broader field of prompt engineering. The rating is not a full 10 because the abstract focuses on medical question answering and LLM improvements in a specific domain rather than a general examination of hard prefix prompts or prompt engineering as a standalone subject." -language models can solve computer tasks,gpt-4-1106-preview,8,"The abstract describes a study related to the use of a prompting scheme called RCI in improving the performance of a pre-trained large language model (LLM) for computer tasks and natural language reasoning. While the study does not specifically mention 'hard prefix prompts', it directly involves the broader field of prompt engineering by showcasing how an LLM can be prompted to enhance its ability to interpret and execute tasks based on natural language commands. The emphasis on the efficacy of specialized prompting schemes (including the comparison with 'chain of thought' prompting) indicates that this research is highly relevant to the study and development of prompt engineering methods. The rating is not a full 10 as it does not explicitly focus on hard prefixes but on prompt engineering in general." 
-how is chatgpt's behavior changing over time?,gpt-4-1106-preview,4,"The provided abstract and TLDR focus on the changes in the behavior of large language models (LLMs) like GPT-3.5 and GPT-4 over time across various tasks. While it is not directly related to a 'systematic review on hard prefix prompts' in prompt engineering, the study's insights into the performance variability and amenability to different prompting techniques (like chain-of-thought prompting) have indirect relevance to prompt engineering. Knowing how model performance can change over time is valuable for designing and updating prompts to maintain or improve LLMs' effectiveness. However, the focus is not specifically on prompt engineering with hard prefixes, which makes the relevance partial and thus results in a moderate rating." -is chatgpt the ultimate programming assistant - how far is it?,gpt-4-1106-preview,8,"The title and abstract provided describe an empirical study of ChatGPT's capabilities as a programming assistant and, importantly, they highlight the significance of prompt engineering in its effectiveness. Although the study itself is not about 'hard prefix prompts' specifically, the ramifications of the research touch upon the broader theme of how to interact effectively with LLMs (like ChatGPT) to solve programming tasks. The mention of 'demonstrating the importance of prompt engineering' illustrates a direct relevance to the field of study; however, since it's not strictly about 'hard prefix prompts' but more broadly covers ChatGPT's functionality, the rating is slightly reduced." -art: automatic multi-step reasoning and tool-use for large language models,gpt-4-1106-preview,8,"The provided abstract describes a framework (ART) that enhances the capabilities of Large Language Models by enabling them to automatically generate intermediate reasoning steps and integrate tool use. This is related to prompt engineering because it explores advanced techniques to optimize how prompts are given to large language models to evoke sophisticated reasoning and external information integration. Although it does not specifically mention 'hard prefix prompts,' the research is highly relevant to the field of prompt engineering as it advances how models are prompted to solve tasks. It falls slightly short of a perfect relevance score because it does not directly address 'hard prefix prompts' but rather focuses on the broader context of generating reasoning steps and tool integration, which can be considered a part of prompt engineering." -graph of thoughts: solving elaborate problems with large language models,gpt-4-1106-preview,9,"The provided abstract relates closely to prompt engineering study as it introduces a new framework for advancing prompting capabilities in LLMs, which is directly relevant to the field. The introduction of 'Graph of Thoughts' as a means to improve LLM reasoning and the possibility of it being used to develop new prompting schemes suggest a high relevance to the study and practice of prompt engineering. The abstract alleges an enhancement over existing prompting paradigms, pointing to a significant contribution to the field. However, the exact term 'hard prefix prompts' is not mentioned, which prevents a full rating of 10." -task and motion planning with large language models for object rearrangement,gpt-4-1106-preview,8,"The abstract describes 'LLM-GROP,' a system that leverages large language models (LLMs) through prompting to understand commonsense knowledge about object arrangements. 
Prompt engineering is directly used to retrieve information about object configurations, which is relevant to studies of prompt engineering. The paper seems to explore the efficacy of different prompts to enable a robot to understand and execute tasks involving physical objects, thus demonstrating a practical application of prompts in AI/robotic systems. While the main focus appears to be on task and motion planning, the use of prompt engineering is a significant aspect of the study, hence the high relevance rating." -interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions,gpt-4-1106-preview,7,"The study presents an approach, IRCoT, that combines retrieval techniques with chains-of-thought reasoning for enhancing multi-step question-answering in large language models. While it doesn't specifically talk about hard prefix prompts, it is indirectly relevant to prompt engineering as it deals with improving the quality and relevance of the responses generated by the AI. Considering that prompt engineering is all about optimizing how we interact with AI models to improve their output, the study's focus on utilizing a CoT to guide retrieval and improve the AI's reasoning steps is a valuable contribution to the field. It could motivate investigations into how prompts can be optimized to generate more accurate and contextually relevant retrieval queries, which is a crucial aspect of prompt engineering. However, it does not address hard prefix prompts directly, hence the rating is not a full 10." -gpt-ner: named entity recognition via large language models,gpt-4-1106-preview,5,"The relevance to 'prompt engineering study' is moderate. While the abstract discusses a method to adapt large language models (LLMs) for named entity recognition (NER) by transforming it into a text generation task, which implicitly involves engineering prompts (special tokens @@##) for entity extraction, the main focus is on overcoming the shortcomings of LLMs for NER tasks and not specifically on the study of prompt engineering as a field. The self-verification strategy mentioned does relate to the usage of prompts to verify generated content, which is relevant, but the paper does not seem to be centered on prompt engineering as a comprehensive topic." -unleashing cognitive synergy in large language models: a task-solving agent through multi-persona self-collaboration,gpt-4-1106-preview,8,"The described study is quite relevant to prompt engineering as it explores the concept of Solo Performance Prompting (SPP), which is a method of engaging a Large Language Model (LLM) in multi-turn self-collaboration with multiple personas. This relates to prompt engineering because it involves designing prompts that can elicit certain behaviors or responses from the model, akin to engaging with different facets or 'personas' of the AI. Crafting these nuanced prompts that can stimulate cognitive synergy is a direct example of prompt engineering. The paper does not specifically address 'hard prefix prompts', but the concept of using predetermined prompts to instigate particular responses or modes of operation in the LLM is within the scope of prompt engineering studies. Thus, the study is highly relevant to the development of sophisticated prompt engineering techniques." 
-safety assessment of chinese large language models,gpt-4-1106-preview,8,"The abstract describes a study focused on the development of a benchmark for the safety assessment of Chinese large language models (LLMs) using a method that involves providing test prompts and evaluating the safety of the model's responses. Since this method relies heavily on 'prompt engineering' (the strategy of crafting prompts to elicit specific responses or behaviors from AI models), there is a high relevance to prompt engineering studies. Specifically, the benchmark involves prompting as a core part of the assessment process. However, it does not directly focus on improving or innovating prompt engineering techniques, therefore the rating is not a perfect 10." -can llms express their uncertainty? an empirical evaluation of confidence elicitation in llms,gpt-4-1106-preview,8,"The presented study is highly relevant to prompt engineering as it explores confidence elicitation in large language models (LLMs) without the need for fine-tuning or access to proprietary information. Prompt engineering is a subset of AI research focused on finding ways to improve the performance of AI models by crafting effective prompts. The methods investigated, which include verbalize-based, consistency-based, and hybrid methods, are directly related to how prompts can be designed to elicit more accurate confidence levels from LLMs. This is a key aspect of prompt engineering because it relates to improving the interaction with and the outputs of LLMs, which is a central goal of prompt engineering. However, it doesn’t focus specifically on 'hard prefix' prompts, which slightly reduces its relevance from a perfect score." -adaptive test generation using a large language model,gpt-4-1106-preview,5,"The relevance to prompt engineering study is moderate. While the abstract discusses the use of a Large Language Model (Codex) for automated test generation, which involves prompting the model with certain inputs to produce desired outputs (tests in this case), the study is focused on practical application rather than a systematic review of prompt engineering techniques or the study of 'hard prefix prompts' specifically. The process involves an adaptive prompting mechanism to improve test generation, which is somewhat related to prompt engineering studies. Therefore, the relevance is rated a 5, as it addresses some elements of prompt design but does not specifically target a comprehensive review or study of prompt engineering methodologies." -when to make exceptions: exploring language models as accounts of human moral judgment,gpt-4-1106-preview,8,"The paper addresses the development and application of a novel prompting strategy (MORALCOT) with the goal of improving the performance of Large Language Models (LLMs) on rule-breaking question-answering tasks that relate to human moral judgments. Since prompt engineering involves crafting inputs that guide AI models to produce the desired outputs, and the MORALCOT strategy is essentially a method of prompt engineering tailored for moral reasoning contexts, this study is quite relevant to prompt engineering. Although it focuses specifically on moral judgments rather than the broader range of prompt engineering applications, the insights gleaned from creating effective prompts in this challenging area are valuable for the field. The rating is not a full 10 as the content of the paper is narrowly focused on moral reasoning, which is just one of many domains where prompt engineering can be applied." 
-expertprompting: instructing large language models to be distinguished experts,gpt-4-1106-preview,9,"The paper is highly relevant to prompt engineering as it discusses a novel strategy 'ExpertPrompting' to improve the performance of large language models by crafting detailed prompts that contextualize the model as an expert. This approach is directly aligned with the study and applications of prompt engineering, aiming to enhance the quality of outputs generated by LLMs. While the paper may not specifically mention 'hard prefix prompts', the concept of customizing prompts to induce expert-level answers fits well into the broader category of prompt engineering techniques, making the paper's content significantly pertinent to the field." -automatic evaluation of attribution by large language models,gpt-4-1106-preview,7,"The relevance to prompt engineering study is significant because the abstract describes research on prompting Large Language Models (LLMs) as one of the approaches for automatic evaluation. Although the main focus is on evaluating attribution, the fact that prompting is used as a method indicates that the results and methodologies could be applicable and informative for prompt engineering studies. However, the primary emphasis seems to be on attribution evaluation rather than prompt construction or optimization itself, which prevents a full relevance score." -on the risk of misinformation pollution with large language models,gpt-4-1106-preview,4,"While the paper addresses the use of large language models for generating misinformation and explores defensive strategies such as prompting, it is not specifically focused on prompt engineering study with regards to hard prefix prompts. The mention of prompting as a defense strategy does lend some relevance, but because the primary focus is on misinformation and not the systematic review of hard prefix prompts in prompt engineering, the relevance to the specific prompt engineering study is moderate to low." -logic-lm: empowering large language models with symbolic solvers for faithful logical reasoning,gpt-4-1106-preview,7,"The study presents a framework that improves the logical reasoning capabilities of Large Language Models (LLMs) through the integration with symbolic solvers. While the topic is not directly related to 'hard prefix prompts' or prompt engineering, the methodology described includes a step to translate natural language problems into symbolic formulation, which could be considered as a form of complex prompt engineering. The method's aim to enhance LLMs' problem-solving skills with better input translation is relevant to the wider field of prompt engineering, especially in terms of constructing prompts that require logical reasoning. Therefore, the relevance is somewhat high, but not directly focused on the core concept of 'hard prefix prompts'." -red-teaming large language models using chain of utterances for safety-alignment,gpt-4-1106-preview,8,"The study presents relevant information to prompt engineering by discussing the effects of 'Chain of Utterances-based (CoU) prompting,' which directly relates to how prompts are structured and used to interact with large language models. Additionally, the work on safety evaluation benchmark RED-EVAL and proposing RED-INSTRUCT for the safety alignment of LLMs contributes to understanding and improving prompt-based interactions with these models. 
This has direct implications for prompt engineering as it informs the construction of prompts that can be used to evaluate and align LLMs for safety. However, the paper primarily focuses on the safety and ethical implications of prompting rather than on prompt engineering for improving the general performance or functionality, which is why the rating is not a full 10." -can large language models write good property-based tests?,gpt-4-1106-preview,7,"The abstract describes research into leveraging large language models (LLMs) to synthesize property-based tests, which is a subset of prompt engineering because it specifically looks at how to prompt LLMs to perform a particular software engineering task. The relevance to prompt engineering study is significant as it involves the design of prompts to effectively communicate with LLMs and generate meaningful output. However, it is not directly focused on hard prefix prompts or a comprehensive systematic review of such prompts, which would be the central concern in prompt engineering studies. Therefore, the rating is not a full 10 but still high due to the close connection with the practice of prompt engineering in the context of LLMs." -codehelp: using large language models with guardrails for scalable support in programming classes,gpt-4-1106-preview,6,"Although the study does not focus on 'hard prefix prompts' specifically within the context of prompt engineering, it is related to the field in a broader sense. It examines the use of prompting strategies in the context of a tool called CodeHelp that utilizes large language models to assist students. The relevance rating is above average because understanding how prompts are engineered to generate non-solution-revealing outputs in an educational setting can contribute valuable insights to prompt engineering research, especially in terms of designing controllable and ethical AI. However, as the paper's primary focus is on the deployment and effects of an educational tool rather than the systematic review of prompt engineering techniques, it is not rated higher." -i spy a metaphor: large language models and diffusion models co-create visual metaphors,gpt-4-1106-preview,8,"The described study involves a sophisticated form of prompt engineering where the Large Language Models (LLMs) are specifically instructed to generate textual content that then serves as a prompt for diffusion-based text-to-image models. Although the study focuses on the creation of visual metaphors, the process requires careful engineering of text-based prompts to elicit the desired visual outputs from the AI. Therefore, while the research does not directly study 'hard prefix prompts,' it contributes to the broader understanding of how different prompting strategies can guide AI behavior, which is highly relevant to the field of prompt engineering." -reviewergpt? an exploratory study on using large language models for paper reviewing,gpt-4-1106-preview,4,"While the study explores the use of large language models (LLMs) in the context of scientific paper reviewing, which requires sophisticated prompting strategies, it does not specifically focus on 'hard prefix prompts' as the main subject of investigation. The relevance to prompt engineering is present as the research touches upon how different prompts can lead to different performance outcomes by the LLM (e.g., prompting with specific questions versus general review requests). 
However, since the core study does not concentrate on the engineering of prompts and their systematic review but rather on the application of LLMs in a specific task of paper reviewing, the paper is moderately relevant rather than highly relevant." -graphologue: exploring large language model responses with interactive diagrams,gpt-4-1106-preview,6,"The study is relevant to prompt engineering to a moderate degree. It does not directly deal with hard prefix prompts but explores the broader area of improving interactions with Large Language Models (LLMs) using novel prompting strategies and interface designs. By introducing an interactive system, Graphologue, which converts LLM responses into diagrams, it touches upon enhancing the efficacy of prompts and the ways in which users can solicit and handle information from an LLM. The connection to prompt engineering lies in the fact that extracting entities and relationships for diagrams requires careful prompt design to ensure that the LLM provides structured responses suitable for graphical representation. Although the focus is not on 'hard prefix prompts', the study does contribute to the field of prompt engineering by demonstrating alternative ways to optimize user interactions with LLMs." -"despite ""super-human"" performance, current llms are unsuited for decisions about ethics and safety",gpt-4-1106-preview,8,"The abstract discusses the development and evaluation of a new prompting strategy for Large Language Models (LLMs), and specifically mentions how this strategy outperforms humans at ethical reasoning tasks. Since prompt engineering involves crafting inputs that can significantly affect the performance of LLMs, and this abstract describes a prompting strategy that notably changes the model's output, the content is highly relevant to the study of prompt engineering. The reduction of two points is due to the focus also being on ethical reasoning and model limitations rather than purely prompt engineering techniques." -human-in-the-loop through chain-of-thought,gpt-4-1106-preview,7,"The abstract presents a study that is related to improving the performance of language models through human intervention, specifically in the context of Chain-of-thought prompting. While not directly addressing 'hard prefix prompts,' it discusses the broader topic of prompt engineering and the optimization of human-in-the-loop systems. This is relevant to the field of prompt engineering as it explores enhancing reasoning by correcting intermediate steps, which could be considered a form of prompt optimization. However, since it does not specifically mention 'hard prefix prompts,' the rating is not a full 10." -an evaluation of log parsing with chatgpt,gpt-4-1106-preview,8,"The evaluation study focuses on the performance of ChatGPT in log parsing tasks and how different prompting methods affect this performance. While it does not specifically mention 'hard prefix prompts', it does address the broader concept of 'prompting methods', which is directly relevant to prompt engineering. The focus on few-shot prompting and the exploration of effective prompts for log parsing imply that understanding prompt engineering is a significant component of the research. The study's relevance to prompt engineering is therefore high, but it is not a perfect match since it is not a 'comprehensive systematic review on hard prefix prompts' specifically." 
-evaluating gpt-3 generated explanations for hateful content moderation,gpt-4-1106-preview,7,"The abstract is relevant to prompt engineering study to a considerable extent, as it discusses the utilization of GPT-3's language model for generating explanations, which requires careful design of prompts to tailor the model's outputs for hate speech moderation. The study's focus on evaluating the effectiveness and limitations of explanations prompted from a language model directly ties in with the principles of prompt engineering, which seeks to understand how best to interface with language models to achieve desired outcomes. However, it does not specifically discuss 'hard prefix prompts' but rather general prompting strategies, so the relevance is not absolute." -large language models are strong zero-shot retriever,gpt-4-1106-preview,8,"The relevance to prompt engineering study is high since the abstract describes the use of a large language model (LLM) to improve the efficiency and effectiveness of information retrieval through a prompt-based approach. Specifically, it mentions augmenting a query with potential answers and using prompts to make the LLM generate more precise answers, which aligns with understanding and improving the interaction with language models via prompts. However, it did not focus exclusively on 'hard prefix prompts', which might have been a part of a more targeted study of prompt engineering." -careful data curation stabilizes in-context learning,gpt-4-1106-preview,7,"The abstract discusses in-context learning (ICL) and the impact of data selection on the performance of large language models (LLMs), which is pertinent to prompt engineering study as it relates to the optimization of input data to improve model response. While the focus appears to be on data curation rather than prompt formulation (i.e., hard prefix prompts), the principles of selecting high-quality examples and understanding their influence on model performance are relevant. The methods described, such as CONDACC and DATAMODELS, could potentially be applied to or inform approaches in prompt engineering, making the study somewhat relevant although not exclusively focused on prompt design." -forward-backward reasoning in large language models for verification,gpt-4-1106-preview,8,"The paper discusses a method related to prompt engineering, specifically 'Chain-of-Thought (CoT) prompting', which is a form of structuring prompts to guide large language models (LLMs) in reasoning tasks. The introduction of 'forward-backward reasoning,' as a means to enhance the verification of candidate answers generated by LLMs, represents a novel approach within the domain of prompt engineering. Although the paper does not directly mention 'hard prefix prompts', the relevance is high due to the focus on developing novel prompting methodologies to improve the performance and reliability of LLMs in complex reasoning tasks, which falls under the broader umbrella of prompt engineering studies." -how to catch an ai liar: lie detection in black-box llms by asking unrelated questions,gpt-4-1106-preview,7,"The study presents an approach for detecting lies from LLMs that involves crafting and using follow-up prompts or questions, which is related to the concept of prompting in language models. Lie detection in this context can be considered a fringe or specialized aspect of prompt engineering aimed at improving the reliability and truthfulness of LLM responses. 
While not directly focused on 'hard prefix prompts', the research highlights the impact of prompt design on the behavior of LLMs, which falls within the broader scope of prompt engineering. Hence, the rating reflects that the paper is relevant but not central to a comprehensive systematic review on prompt engineering, specifically with a focus on 'hard prefix prompts'." -self-checker: plug-and-play modules for fact-checking with large language models,gpt-4-1106-preview,8,"The abstract describes the 'Self-Checker' framework, which is relevant to prompt engineering, as it involves constructing prompts for large language models to perform fact-checking tasks in a zero-shot or few-shot setting. While the main focus of the paper is on the application of fact-checking, it directly involves prompt engineering to enable the large language models to understand and execute the task without extensive training or fine-tuning. Therefore, the paper is highly relevant to prompt engineering, especially in the context of using prompts to elicit specific functionalities from pre-trained models. However, it does not exclusively focus on 'hard prefix prompts' as indicated in the prompt engineering study, which might slightly limit its relevance in terms of specificity to that particular type of prompting." -llms to the moon? reddit market sentiment analysis with large language models,gpt-4-1106-preview,7,"The relevance to prompt engineering is significant since the abstract describes a semi-supervised learning approach that utilizes a large language model (LLM) and involves prompting the LLM to generate Chain-of-Thought summaries to improve the quality of sentiment analysis on social media. This indicates the study focuses on how to engineer prompts to obtain more accurate outputs from the LLM, which is a key aspect of prompt engineering. However, the study does not specifically mention 'hard prefix prompts', which suggests that while it is related to prompt engineering, it does not directly address the comprehensive systematic review of such prompts. Therefore, the rating is not a full 10." -leveraging commonsense knowledge from large language models for task and motion planning,gpt-4-1106-preview,8,"The abstract describes the use of prompting techniques within Large Language Models (LLMs) to extract commonsense knowledge for task and motion planning, which is highly relevant to the field of prompt engineering. Specifically, the LLMGROP system leverages prompts to guide the LLM in generating information about plausible physical arrangements, a task that aligns closely with the development of hard prefix prompts for specific applications. Although the study focuses on a practical application for service robots rather than a broad systematic review of prompt engineering, the underlying methodology and use of prompts to gain desired outputs from an LLM provide valuable insights into the prompt engineering process. The rating is not a full 10 as the paper does not explicitly focus on a systematic review of prompt engineering techniques, which appears to be the central requirement of the 'prompt engineering study' in question." -what do llms know about financial markets? a case study on reddit market sentiment analysis,gpt-4-1106-preview,8,"The study's focus on using large language models for sentiment analysis is highly relevant to prompt engineering, as it explores the effect of different prompting strategies on the performance of the model. 
The mention of Chain-of-Thought summaries and forcing the LLM through several reasoning paths is particularly pertinent to how prompts can be designed to elicit better responses from language models. Although the primary application is market sentiment analysis, the techniques used for prompting can be generalized and applied to other domains, making this research relevant to the study of prompt engineering. The rating is not a full 10 because the paper's primary goal is not the study of prompt engineering itself, but rather the application of prompting techniques to a specific problem, i.e., financial sentiment analysis." -enhancing in-context learning with answer feedback for multi-span question answering,gpt-4-1106-preview,8,"The paper describes a methodology for improving the performance of large language models in specific tasks through in-context learning and a novel prompting approach which involves providing feedback on model outputs. This is highly relevant to prompt engineering as it directly pertains to techniques for constructing prompts that can better guide models like ChatGPT. The focus on multi-span question answering does not explicitly pertain to 'hard prefix prompts' as indicated in the original query, but it does explore the broader field of prompt design and optimization, which is why the relevance is rated an 8 instead of a perfect 10." -retrieving texts based on abstract descriptions,gpt-4-1106-preview,8,"The abstract describes research on using Large Language Models (LLMs) to generate training data for a new model focused on semantic retrieval, which pertains to prompt engineering in that the data sourcing process involves prompting a LLM effectively. The relevance lies in addressing the use of LLMs to formulate prompts that yield useful data for specific tasks, which is a key part of prompt engineering. However, the text does not explicitly address 'hard prefix prompts', a more specialized topic within prompt engineering, hence the rating is not a full 10." -queer people are people first: deconstructing sexual identity stereotypes in large language models,gpt-4-1106-preview,8,"The study is highly relevant to the field of prompt engineering because it discusses a post-hoc method to alter the prompts (chain-of-thought prompting) in order to influence the output of large language models. It addresses the issue of bias in LLMs, particularly against marginalized groups, an essential consideration within prompt engineering to ensure responsible AI practices. Recovering fair and unbiased responses from LLMs is a key application of prompt engineering, even though the study does not focus solely on 'hard prefix prompts' but rather on a broader set of prompt modification strategies." -knowing what llms do not know: a simple yet effective self-detection method,gpt-4-1106-preview,8,"The paper proposes a method that relies on prompt engineering to elicit different responses from LLMs to the same question, which directly involves the study of how prompts can be constructed and used to understand and evaluate the model's knowledge boundaries. Although it does not focus on 'hard prefix prompts' explicitly, the concept of diversifying textual expressions as prompts is closely related to the field of prompt engineering. The systematic approach to identify nonfactual responses through the analysis of divergences in LLM outputs is pertinent to the broader study of prompt engineering strategies, hence the high relevance rating." 
-retrieving supporting evidence for llms generated answers,gpt-4-1106-preview,8,"The described paper focuses on an experiment which involves prompting a Large Language Model (LLM) with a combination of a question and a retrieved answer to check for support of the LLM's generated answer. While it's not directly studying 'hard prefix prompts', it tackles a closely related topic in the field of prompt engineering: the verification and reliability of responses from LLMs, which could involve a form of prompt crafting. The relevance is high because understanding how prompts can be engineered to elicit verification behavior from an LLM is within the scope of prompt engineering studies. However, because it does not directly address the systematic review or exploration of 'hard prefix prompts', it does not get a full 10."
-knowledge sanitization of large language models,gpt-4-1106-preview,7,"The abstract describes an approach for modifying the behavior of large language models using a specific fine-tuning technique to avoid disclosing sensitive information, which is relevant to the field of prompt engineering. This study is indirectly related to prompt engineering as it involves the engineering of prompts to ensure that the model's responses meet certain security and privacy requirements. This demonstrates the use of prompts to control and influence the output of language models. However, it does not specifically address 'hard prefix prompts,' which was the original topic, hence it doesn't receive a full relevance score."
-reasoning in large language models through symbolic math word problems,gpt-4-1106-preview,8,"The study's focus on improving the alignment between the symbolic reasoning and the numeric answers of LLMs using a self-prompting approach is closely related to prompt engineering. It hints at the optimization of prompts to yield better performance from large language models in the context of solving symbolic math word problems, which is an exercise in prompting strategies. This aligns with the notion of hard prefix prompts that guide the LLMs towards a specific mode of reasoning. However, the study is not exclusively centered on prompt engineering but also explores the model's reasoning capabilities, hence the rating is not a full 10."
-alphazero-like tree-search can guide large language model decoding and training,gpt-4-1106-preview,7,"The abstract discusses an approach to enhance the decoding and reasoning of LLMs by incorporating an AlphaZero-like tree-search framework. This is indirectly relevant to prompt engineering, as the paper seems to focus on improving LLMs' performance on tasks through tree-search algorithms rather than prompting techniques. However, the fact that it references the use of prompts in traditional models, such as CoT, and seeks to provide a method that reduces reliance on human-designed prompts, makes it relevant to the study of prompt engineering. It addresses a limitation of current prompt-based techniques and offers an alternative that could influence future prompt design and utilization."
-exploring human-like translation strategy with large language models,gpt-4-1106-preview,7,"The study focuses on the MAPS framework, which involves Multi-Aspect Prompting and Selection, a system that seemingly pertains to 'prompt engineering' as it includes the design of prompts that enable LLMs to extract and utilize translation-related knowledge. While the study does not directly address 'hard prefix prompts', it is implicitly relevant because it involves the engineering of prompts to improve the translation process of LLMs. Therefore, it has relevance to the subject of prompt engineering, albeit not strictly focused on hard prefix prompts specifically."
-"mmhqa-icl: multimodal in-context learning for hybrid question answering over text, tables and images",gpt-4-1106-preview,7,"The paper describes a novel method for improving question answering across multiple modalities using in-context learning strategies with large language models, which is relevant to prompt engineering. The technique enhances LLM prompting strategies for the task, which is a core aspect of prompt engineering. However, it does not focus directly on hard prefix prompts but on a broader application of prompts within multimodal question answering. Therefore, the relevance is significant but not entirely focused on the specific topic of hard prefix prompts."
-gear: augmenting language models with generalizable and efficient tool resolution,gpt-4-1106-preview,7,"The title and abstract provided discuss an algorithm named GEAR that is relevant to the domain of prompt engineering, as it deals with enhancing the efficiency and effectiveness of large language models (LLMs) by using smaller models for tool grounding. Prompt engineering is a process that's closely related to how a language model interacts with external tools and uses prompts to perform tasks. While the study does not directly address 'hard prefix prompts' which may be a specific kind of prompt engineering technique, it does engage with the overall theme of improving the interaction between language models and tool utilization. Thus, its relevance is considerable but not entirely specific to 'hard prefix prompts' as suggested by the initial inquiry."
-constitutionmaker: interactively critiquing large language models by converting feedback into principles,gpt-4-1106-preview,8,"The abstract provided discusses an interactive tool called ConstitutionMaker that is directly involved in prompt engineering by allowing users to refine large language model outputs and steer chatbot behavior through feedback. While the study does not specifically cover 'hard prefix prompts', it engages with the broader field of prompt engineering through user feedback and principles, which are fundamental to prompt engineering methodology. Thus, the relevance is high but not maximal since the specific focus on 'hard prefix prompts' is not mentioned."
-theory of mind in large language models: examining performance of 11 state-of-the-art models vs. children aged 7-10 on advanced tests,gpt-4-1106-preview,7,"The study is relevant to prompt engineering to a significant degree, as it includes examining and scoring the performance of LLMs on complex cognitive tasks using various types of prompts, potentially revealing how different prompts can elicit sophisticated language understanding and reasoning. While the primary focus of the study seems to be on the cognitive abilities of LLMs, particularly Theory of Mind, the aspect of using prompts and evaluating different kinds of prompts (open versus closed questions) is a substantial component of prompt engineering. However, the study doesn't seem to be centered exclusively on 'hard prefix prompts' or the mechanics of prompt design, thus it's not fully aligned with a 'systematic review on hard prefix prompts'. Therefore, the rating isn't a perfect 10."
-retrieving supporting evidence for generative question answering,gpt-4-1106-preview,8,"The abstract provided discusses experiments on the validation of generated answers by large language models (LLMs) using a combination of questions and answers as prompts for retrieval processes. This work is directly connected to the concept of prompt engineering, as it involves designing and refining prompts (in this case, combining questions with generated answers) to improve the performance of LLMs. The relevance is not a perfect 10 because the study focuses specifically on verification of LLM-generated content against a corpus, and not broadly on 'hard prefix prompts' or a systematic review of prompt engineering techniques. However, it addresses a key aspect of prompt construction and interaction with language models, which is essential to the field of prompt engineering."
-visualizing linguistic diversity of text datasets synthesized by large language models,gpt-4-1106-preview,4,"The abstract presented describes a tool, LinguisticLens, which is not directly related to the study of 'hard prefix prompts' in prompt engineering. However, the tool's function of analyzing syntactic diversity of LLM-generated datasets can have tangential relevance to understanding how different prompting methods, including hard prefix prompts, might influence the generative outcomes of LLMs. Therefore, while the primary focus of the abstract is on visualization and analysis of textual diversity rather than on prompt engineering, the insights from such a tool could potentially inform prompt engineering studies to some extent, which warrants a moderate relevance rating."
-synergistic integration of large language models and cognitive architectures for robust ai: an exploratory analysis,gpt-4-1106-preview,7,"The abstract describes the integration of Large Language Models (LLMs) with Cognitive Architectures (CAs), which is relevant to prompt engineering to the extent that it deals with utilizing prompts for directing LLM behavior. Mention of 'chain-of-thought prompting' indicates a direct relevance to prompt engineering techniques. However, the primary focus seems to be on the broader framework of integrating LLMs and CAs, rather than specifically on the development or study of hard prefix prompts within prompt engineering. Therefore, the relevance is substantial but not complete."
-feasibility of using the privacy-preserving large language model vicuna for labeling radiology reports.,gpt-4-1106-preview,6,"The provided abstract discusses the application of a large language model (LLM), Vicuna, for labeling radiography reports in a manner that preserves patient privacy. The relevance to prompt engineering lies in the mention of 'using a single-step or multistep prompting strategy' which indicates that prompts were designed and tested to achieve the desired outcome. The study evaluates the efficacy of these prompting strategies against established benchmarks. However, the study is not focused on prompt engineering itself, but rather on the application of prompts in a specific domain (medical report analysis). This means that while prompt engineering is a component of the study, the focus is not on the systematic review of 'hard prefix prompts,' but on the feasibility and efficacy of running a privacy-preserving LLM locally for practical applications in healthcare. Therefore, the relevance is moderate, as insights into prompt engineering can be gleaned but are not the central focus of the study."
-empirical study of zero-shot ner with chatgpt,gpt-4-1106-preview,7,"The abstract describes research focused on improving the performance of language models on the zero-shot named entity recognition task, which involves strategies related to prompt engineering such as 'syntactic prompting' and 'tool augmentation'. This indicates relevance to prompt engineering as it involves designing inputs to elicit better performance from the model. However, the focus is more on the specific application of NER and the methodology to enhance LLMs like ChatGPT, rather than on prompt engineering in general or 'hard prefix prompts' specifically. This constitutes a partial but significant relevance to the broader field of prompt engineering studies."
-large language models can learn rules,gpt-4-1106-preview,9,"The provided abstract is highly relevant to prompt engineering study as it discusses a method for improving the performance of large language models (LLMs) in reasoning tasks through a novel prompting framework, Hypotheses-to-Theories (HtT). This framework directly relates to the development and refinement of prompts to enhance the reasoning capabilities of LLMs, which is at the core of prompt engineering. The systematic approach to generate, verify, and use rules for modeling better represents the kind of systematic review that could be applied in hard prefix prompts research. The only reason it doesn't receive full marks is that it does not specifically mention 'hard prefix prompts', but it addresses the broader field of prompting methods."
-less is more for long document summary evaluation by llms,gpt-4-1106-preview,7,"The abstract describes a novel approach to evaluating summaries of long documents by LLMs that involves a key step of prompting the models after key sentence extraction, which is closely related to the concept of 'prompt engineering.' While the study is not directly focused on 'hard prefix prompts,' its relevance lies in the method of using prompts to efficiently guide LLMs towards desired tasks, which is an essential component of prompt engineering. Additionally, the results and practical recommendations could indirectly contribute to the understanding of how prompts affect the performance of language models in processing long documents. However, it is not a direct study of 'hard prefix prompts' in the sense of a comprehensive systematic review or an exploration of prompt structures and their effects, hence the rating does not reach the top of the scale."
-developing a scalable benchmark for assessing large language models in knowledge graph engineering,gpt-4-1106-preview,8,"The described benchmarking framework for assessing Large Language Models in knowledge graph engineering seems to be highly relevant to prompt engineering as it deals with automatic evaluation and storage of LLM responses. This indicates that prompt engineering plays a crucial role in how well these models perform on the specified tasks of syntax and error correction, facts extraction, and dataset generation. The relevance is not a full 10 because the abstract does not specifically focus on 'hard prefix prompts', but rather on prompt engineering in a more general context within knowledge graph generation."
-forward-backward reasoning in large language models for mathematical verification,gpt-4-1106-preview,6,"The study presents an innovative approach in using large language models for mathematical verification through FOBAR, which involves prompt engineering to some extent by integrating backward reasoning into the prompts to verify answers. While it doesn't directly address 'hard prefix prompts' in prompt engineering, the use of CoT prompting and the integration of answer verification templates are related to the techniques used in prompt engineering to improve AI performance. It shows the importance of prompt design in eliciting correct outputs from models. The relevance score isn't higher because it doesn't specifically discuss or review hard prefix prompts, which is the focus of the prompt engineering study mentioned."
-corrpus: codex-leveraged structured representations for neurosymbolic story understanding,gpt-4-1106-preview,7,"The abstract discusses the enhancement of neurosymbolic work in natural language generation and understanding tasks through the use of structured prompts (referred to as 'abstracted prompting procedures'). Although the study primarily focuses on story understanding and generation, the mention of 'abstracted prompting procedures' which can be considered a technique within prompt engineering, signifies a relevance to the broader field of prompt engineering studies. However, the context is specific to story understanding tasks rather than a 'comprehensive systematic review on hard prefix prompts,' hence the rating is not a full 10."
-s3-dst: structured open-domain dialogue segmentation and state tracking in the era of llms,gpt-4-1106-preview,7,"The study presents a structured prompting technique, which is relevant to prompt engineering as it involves mechanisms used to improve the interfacing with language models. The concept of 'Pre-Analytical Recollection' could offer insights into designing prompts that facilitate better state tracking and context understanding in conversations with language models. However, the focus seems to be more on dialogue state tracking and segmentation in the context of LLM-based systems, rather than directly on engineering prompts using hard prefixes. The relevance is therefore not maximal, as it does not directly address hard prefix prompts; however, the structured prompting approach is a component of prompt engineering within the larger scope of utilizing language models for complex tasks."
-corrpus: detecting story inconsistencies via codex-bootstrapped neurosymbolic reasoning,gpt-4-1106-preview,8,"The provided abstract discusses the use of abstracted prompting procedures alongside neurosymbolic approaches for story understanding tasks. Although it does not specifically mention 'hard prefix prompts,' the subject of prompt engineering is still highly relevant. The abstract explicitly refers to the design of specialized prompts to guide large language models, which aligns with the broader field of prompt engineering studies. The creation and optimization of prompts to improve the performance of language models on specific tasks is a direct example of prompt engineering work. Therefore, the study appears to be very relevant to those interested in how tailored prompting can enhance model performance, even if it doesn't directly address hard prefix prompts."
-automatic chain of thought prompting in large language models,gpt-4-1106-preview,9,"The abstract presents a direct study on improving the effectiveness of large language models using a specific type of prompt engineering strategy known as Chain-of-Thought (CoT) prompting. This is highly relevant to prompt engineering as it addresses the optimization of the prompting process to enhance the performance of language models. The approach of automatically generating CoT prompts (Auto-CoT) to replace manual effort is a significant contribution to the field of prompt engineering. The only reason this is not rated a 10 is that the study does not specifically address 'hard prefix prompts' but rather CoT prompting in general, which is a subset of prompt engineering."
-neuro-symbolic procedural planning with commonsense prompting,gpt-4-1106-preview,8,"The given abstract discusses the use of commonsense-infused prompting to improve procedural planning in large language models, which aligns with prompt engineering concepts. The study presents a neuro-symbolic approach that incorporates commonsense knowledge into prompts to form a causal structure, reflecting an advanced and targeted application of prompts to enhance model performance. Although the focus is more on procedural planning and less on the structure of prompts themselves, the use of prompts generated from knowledge bases and their optimization for better outcomes in language models is fundamentally connected to prompt engineering."
-analyzing bert’s knowledge of hypernymy via prompting,gpt-4-1106-preview,9,"The study on BERT's knowledge of hypernymy through prompting directly relates to prompt engineering because it investigates the effectiveness of using prompts to elicit specific linguistic knowledge from a language model. The paper analyzes how well BERT responds to direct prompts about lexical semantic relations, which is a key aspect of prompt engineering. The relevance is rated at 9 instead of a perfect 10 because the focus is specifically on hypernymy recognition, not on the broader range of prompt engineering strategies or types of prompts (like hard prefix prompts mentioned in the original topic), which could have an impact on how language models generate more diverse responses."
-prompter: utilizing large language model prompting for a data efficient embodied instruction following,gpt-4-1106-preview,8,"The abstract discusses 'Prompter,' an approach that involves replacing a semantic search module with language model prompting, which is highly relevant to prompt engineering. The utilization of language model prompting to control robots based on natural language instructions is a practical application of prompt engineering, demonstrating how well-crafted prompts can improve performance in embodied instruction following tasks. The work implies a novel use of prompts and their significance in improving data efficiency, which are key topics in prompt engineering research. The rating is not a full 10 because while the paper is related to the use of prompts, it does not explicitly focus on 'hard prefix prompts' per se, but broadly on the application of language model prompts in a different context."
-chain of thought prompting elicits reasoning in large language models,gpt-4-1106-preview,9,"The abstract directly discusses the impact of 'chain of thought prompting' on the performance of large language models. Given that 'chain of thought prompting' is a technique used in prompt engineering to elicit detailed reasoning from language models, and the abstract indicates significant performance improvements on complex tasks, it is highly relevant to the study of prompt engineering. It may not score a perfect 10 as it is not exclusively focused on 'hard prefix prompts' which might be a more specialized subset of prompt engineering."
-rethinking with retrieval: faithful large language model inference,gpt-4-1106-preview,7,"The paper described involves using 'chain-of-thought (CoT) prompting' which falls under the broader category of prompt engineering in the context of large language models. Although the main focus appears to be on improving the model's ability to integrate external knowledge and thus enhance inference, it is still relevant because it discusses a method that modifies how prompts are used to obtain explanations from a model. However, the paper doesn't exclusively focus on the design or study of 'hard prefix prompts', so it may not completely align with studies exclusive to prompt engineering techniques. Therefore, the rating indicates moderate relevance, with points deducted for not being directly focused on hard prefix prompts, yet still relating to prompt engineering methodology."
-pop quiz! can a large language model help with reverse engineering?,gpt-4-1106-preview,8,"The abstract discusses the use of prompting techniques with Codex, a large language model, to investigate its utility in reverse engineering tasks. This falls under the broader category of 'prompt engineering' as it involves the strategic formulation of prompts to elicit specific information from a language model regarding code comprehension. The study's focus on the model's response to these prompts and the development of a structured quiz to measure its performance is highly relevant to understanding how different prompt strategies might affect the outcome of interactions with AI. However, it is not precisely about 'hard prefix prompts', which suggests a more specialized aspect of prompt engineering, hence the deduction of 2 points."
-least-to-most prompting enables complex reasoning in large language models,gpt-4-1106-preview,9,"The described research directly investigates a novel prompting strategy for language models, which is highly relevant to the field of prompt engineering. The 'least-to-most prompting' method addresses a common limitation in generalizing from easy to hard problems. Given that the strategy involves designing prompts to guide the model through incrementally challenging subproblems, this study contributes significantly to the understanding and development of advanced prompt engineering techniques. Therefore, it scores a 9, as it may not solely focus on 'hard prefix' prompts, but covers a broader approach to prompting that includes handling complex problems."
-thoughtsource: a central hub for large language model reasoning data,gpt-4-1106-preview,7,"While the provided title and abstract do not specifically mention hard prefix prompts, the mention of 'large language model reasoning data' implies that the study could include research into various prompt engineering techniques, which may encompass hard prefix prompts. The 'ThoughtSource' project aims to facilitate a qualitative understanding of chain-of-thoughts (CoTs), which is a technique often used in prompt engineering to improve language models' performance. Furthermore, the focus on 'empirical evaluations' and 'providing training data' could be relevant to optimizing hard prefix prompts for better language model outputs. Thus, the study might contribute valuable insights to prompt engineering, albeit not exclusively to hard prefix prompts."
-large language model prompt chaining for long legal document classification,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering as it focuses on the technique of 'prompt chaining' to improve the classification of lengthy and complex legal documents. The method specifically involves breaking down the task into parts and using successive prompts, which is at the core of advanced prompt engineering strategies. The successful performance improvement over the zero-shot method and the comparison with larger models like ChatGPT demonstrate a direct application and advancement in the field of prompt engineering for complex tasks such as legal document classification."
-generate rather than retrieve: large language models are strong context generators,gpt-4-1106-preview,8,"The abstract describes a novel prompting method within the context of large language models, specifically applied to knowledge-intensive tasks. It details a process where the model generates contextual documents from a given question, which aligns with the concept of 'hard prefix prompts' in that it involves crafting inputs to elicit specific types of outputs from a model. Despite not using the exact term 'hard prefix prompt,' the essence of designing prompts to guide the generation of content is central to prompt engineering. The significance of an 8 rather than a 10 is because the abstract doesn't explicitly discuss hard prefix prompts or broader prompt engineering strategies beyond its specific 'generate-then-read' method."
-lessons learned from gpt-sw3: building the first large-scale generative language model for swedish,gpt-4-1106-preview,7,"While the primary focus of the paper seems to be on the development and evaluation of a Swedish language model (GPT-SW3), it is mentioned that an 'extensive prompting study' was part of the research. Although the details of the prompting study are not provided, it suggests that there was an investigation into how the model responds to different prompts, which is relevant to prompt engineering. The rating isn't higher because the prompt study is not the central focus of the paper and without more information on the 'hard prefix prompts' aspect, the overall relevance to the specific area of prompt engineering study mentioned cannot be fully assessed."
-a recipe for arbitrary text style transfer with large language models,gpt-4-1106-preview,8,"The paper focuses on a prompting method named 'augmented zero-shot learning' for text style transfer using large language models (LLMs). While it does not directly address 'hard prefix prompts,' it is significantly relevant to the broader field of prompt engineering. The concept of instructing LLMs to perform specific style transformations through natural language prompts aligns with the principles of prompt engineering, which involves crafting input prompts to guide the behavior of AI models. Although the study's primary application is text style transfer, the prompting techniques developed could have implications for the design and effectiveness of hard prefix prompts."
-dehallucinating large language models using formal methods guided iterative prompting,gpt-4-1106-preview,8,"The abstract describes a study focused on refining the prompting process to reduce 'hallucinations' in large language models, such as ChatGPT, especially for safety-critical applications. Although it doesn't specifically mention 'hard prefix prompts,' the study's aim to create an architecture for iterative prompting and self-monitoring to ensure the accuracy of the models' responses is relevant to prompt engineering. Prompt engineering involves crafting prompts to obtain better performance from language models, and the research on reducing hallucinations can be seen as an advanced form of prompt engineering. The paper's relevance is not a perfect 10, as it doesn't directly address hard prefix prompts but instead looks at a broader issue within prompt engineering itself."
-"camel: communicative agents for ""mind"" exploration of large scale language model society",gpt-4-1106-preview,6,"The abstract indicates that the paper is related to 'inception prompting' which is a form of prompt engineering as it involves guiding language models. However, the main focus seems to be on the cooperative behavior of communicative agents rather than hard prefix prompts. The relevance is moderate because while the paper touches on prompt engineering, it does not appear to conduct a 'comprehensive systematic review on hard prefix prompts' as specified in the original prompt."
-towards a mathematics formalisation assistant using large language models,gpt-4-1106-preview,8,"The study discusses the efficacy of large language models in formalizing mathematical statements, emphasizing the importance of 'careful input-dependent prompt selection and postprocessing.' This relates closely to prompt engineering as it highlights the critical role of prompt design in achieving higher performance with language models. Though it doesn't focus on 'hard prefix prompts' specifically, the overall concept of optimizing prompts to improve a model's ability to understand and generate specific outcomes is central to prompt engineering studies."
-meta-learning the difference: preparing large language models for efficient adaptation,gpt-4-1106-preview,6,"The abstract discusses ways to adapt large pretrained language models to be more efficient in tasks such as dialogue completion, summarization, and multi-domain language modeling, focusing on model weight differences and structural changes without extensive finetuning. This is relevant to prompt engineering because it touches on the efficiency of adapting models to specific tasks, which is a significant aspect of prompt engineering. However, the text does not directly address 'hard prefix prompts' or their systematic review, thus it is moderately relevant but not a perfect match for the topic of prompt engineering study."
-tree of thoughts: deliberate problem solving with large language models,gpt-4-1106-preview,9,"The title 'Tree of Thoughts: Deliberate Problem Solving with Large Language Models' directly refers to an advanced method of prompt engineering for language models. It describes a new framework, Tree of Thoughts (ToT), which improves upon the existing 'Chain of Thought' approach. The abstract explains how this method allows language models to explore different reasoning paths and make more informed decisions. The fact that it facilitates exploration over coherent units of text is highly relevant to the study of hard prefix prompts, as it implies a structured and systematic way to lead and evaluate the language model's output. The significant improvement in problem-solving tasks like Game of 24, Creative Writing, and Mini Crosswords demonstrates the practical impact of this approach on prompt engineering. Despite not using the term 'hard prefix prompts' specifically, the concept and results are very pertinent to the field."
-htlm: hyper-text pre-training and prompting of language models,gpt-4-1106-preview,8,"The abstract describes the development and advantages of the HTLM model which is relevant to prompt engineering insofar as it discusses the model's improved efficiency with hyper-text prompts over plain text prompts. This indicates a focus on how different formats of prompts influence the performance of language models. It also touches on 'structured prompting' which is a key aspect of prompt engineering. The relevance is not a perfect 10 since the study is about hyper-text specific prompting rather than 'hard prefix prompts' in general, but the study is still highly pertinent to the field of prompt engineering."
-have llms advanced enough? a challenging problem solving benchmark for large language models,gpt-4-1106-preview,7,"While the abstract discusses a comprehensive benchmark for evaluating large language models on complex problem-solving tasks, involving hard problems from the IIT JEE-Advanced exam, it indirectly relates to prompt engineering. The techniques mentioned like self-consistency, self-refinement, and chain-of-thought prompting are part of prompt engineering strategies. These strategies contribute to shaping the input provided to the models in order to improve their output. However, the focus of the study is more on the assessment of the models' abilities and the development of a confidence-thresholding method, rather than on the design or study of prompts (hard prefix prompts) specifically. Thus, the relevance to prompt engineering is significant but not the central theme of the paper."
-explainability for large language models: a survey,gpt-4-1106-preview,5,"The paper's focus on explainability for large language models (LLMs) is indirectly relevant to prompt engineering study because understanding how LLMs work can inform the design of more effective prompts. However, the paper does not directly address prompt engineering or specifically hard prefix prompts. The relevance is moderate since insights into explainability might overlap with some aspects of prompt engineering, such as understanding model behavior and improving performance through better prompts, without being the central focus."
-adapting large language models via reading comprehension,gpt-4-1106-preview,6,"The study explores a novel method of training large language models using domain-specific reading comprehension texts, which could indirectly relate to prompt engineering by enhancing the model's ability to understand and respond to prompts more effectively in different domains. However, the study does not directly address the systematic review of 'hard prefix prompts' which would be the focus of an engineering study on prompt format and structure. Thus, the relevance is moderate as the improvements in domain-specific prompting could benefit from such a training approach, but it is not centrally focused on prompt engineering itself."
-radadapt: radiology report summarization via lightweight domain adaptation of large language models,gpt-4-1106-preview,7,"The study discusses adaptation strategies for large language models, including 'discrete prompting', which is relevant to prompt engineering as it involves designing specific prompts to guide the model's performance on a task. While the main focus is on domain adaptation through pretraining and fine-tuning, the mention of discrete prompting shows that the methodology studied does intersect with prompt engineering, especially in how the prompts can affect RRS model effectiveness. Thus, the relevance is significant but not central to prompt engineering studies, which might have a broader scope beyond domain adaptation and parameter tuning."
-evaluating factual consistency of summaries with large language models,gpt-4-1106-preview,9,"The abstract addresses the evaluation of factual consistency in summaries using large language models and places a significant focus on the role of prompting methods. The relevance to prompt engineering is high, given that it explores various prompting methods including vanilla, chain-of-thought, and sentence-by-sentence, which are integral to the way LLMs are leveraged to perform tasks. This empirical study contributes to the understanding of how different prompts affect the performance of LLMs, which is a core aspect of prompt engineering. The rating is not a perfect 10 as the study is not exclusively on 'hard prefix prompts' (which was specified in the original prompt engineering study question), but the subject matter is very closely related."
-large language models are diverse role-players for summarization evaluation,gpt-4-1106-preview,8,"The provided abstract outlines a study focused on leveraging large language models (LLMs) for the evaluation of text summarization, which is relevant to the domain of prompt engineering. Although the study does not solely concentrate on 'hard prefix prompts', it does propose a framework that involves 'roleplayers prompting mechanism' and 'context-based prompting,' which are examples of prompt engineering techniques used to guide LLMs towards a specific task. The 'multi-roleplayer prompting technology' and 'integrating multiple outputs into the final evaluation results' are indicative of advanced prompt engineering methods to evaluate LLMs' performance on text summarization tasks. The study's high relevance comes from its methodological innovation in prompt engineering for LLM evaluation, but it falls slightly short of perfect relevance due to the absence of a direct focus on 'hard prefix prompts.'"
-can chatgpt detect intent? evaluating large language models for spoken language understanding,gpt-4-1106-preview,8,"The paper in question focuses on the ability of language models like ChatGPT to understand and classify intent in spoken language, which is closely related to prompt engineering. In-context learning and prompting are integral parts of language model interactions in natural language understanding tasks. Even though the study does not directly address 'hard prefix prompts,' it discusses the broader context of using prompts to elicit specific model behaviors and understandings, such as intent classification, which is a fundamental part of prompt engineering. The rating is not a full 10 because the study does not specifically focus on 'hard prefix prompts,' but it is highly relevant for anyone studying how prompting affects large language models' abilities."
-complexity-based prompting for multi-step reasoning,gpt-4-1106-preview,9,"The given abstract discusses the concept of complexity-based prompting as a method for improving the multi-step reasoning capabilities of large-scale language models. This is highly relevant to prompt engineering because it explores how the complexity of prompts affects the performance of models like GPT-3 and Codex on reasoning tasks. The study directly relates to the process of crafting prompts that elicit better responses from language models, thus contributing to the field of prompt engineering. The systematic assessment of how prompt complexity influences the quality of model-generated reasoning chains is a specific aspect of prompt engineering, making the study pertinent though it doesn't focus on 'hard prefix prompts' as a specific type of prompt construction method."
-"""according to ..."" prompting language models improves quoting from pre-training data",gpt-4-1106-preview,9,This study is highly relevant to prompt engineering as it explores a specific technique (according to prompting) aimed at improving the accuracy and reliability of Large Language Models by directing them to reference their pre-training data. The introduction of a novel evaluation metric (QUIP-Score) to measure grounding in underlying text corpora is also a significant contribution to the field. The focus on grounding responses and the empirical evidence showing the impact of different prompts on model output are central to the discipline of prompt engineering.
-prompting for a conversation: how to control a dialog model?,gpt-4-1106-preview,9,"The paper directly addresses the challenge of prompt engineering by discussing a method to condition prompts on specific queries, which is a key issue in the field of dialog model control. Exploring alternatives to fine-tuning with this form of prompt engineering has direct implications on how to effectively influence the behavior of language models without compromising their diversity and expressiveness. The relevance to prompt engineering is very high because it contributes to the understanding and application of prompting techniques to guide dialog models. The paper's findings on improved BLEU scores and response diversity are valuable metrics when evaluating the performance of prompt-based methods. The only aspect keeping this from a perfect score may be the specificity of the application in dialogue systems, which, while still under the umbrella of prompt engineering, could be seen as a subset of larger prompt engineering challenges."
-scaling instruction-finetuned language models,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it focuses on the effects of finetuning language models with instructions, which is a key method for improving the performance of language models on task-specific prompts. However, the study does not directly address 'hard prefix prompts', which may suggest specific, fixed prompts that are difficult for models to interpret, rather than the general approach of instruction finetuning. While the study has a strong connection to the field of prompt engineering by demonstrating the benefits of instruction-based finetuning on various models and benchmarks, the absence of a direct focus on 'hard prefixes' warrants a slightly lower rating."
-multi-stage prompting for knowledgeable dialogue generation,gpt-4-1106-preview,8,"The paper is relevant to prompt engineering study as it focuses on improving dialogue generation by proposing a multi-stage prompting approach with a pretrained language model. This methodology directly relates to the design and refinement of prompts to enhance the model's performance in generating knowledgeable responses. Although the title suggests a dialogue system rather than an explicit 'hard prefix prompt' structure, the concepts of controlling and structuring prompts to improve output are central to prompt engineering. The high relevance score reflects the significance of multi-stage prompting within the broader scope of prompt engineering techniques."
-react: synergizing reasoning and acting in language models,gpt-4-1106-preview,6,"The paper discusses an integrated approach where reasoning and acting are combined within LLMs, which is related to prompt engineering in the sense that it explores how to effectively prompt models to engage in both cognitive processes. Although it doesn't directly address 'hard prefix prompts,' it does deal with the broader topic of prompting LLMs to improve performance, suggesting some relevance. However, its focus on the 'ReAct' system's development and evaluation on specific tasks may not provide in-depth insights into the particular strategies used for engineering prompts, hence the rating isn't higher."
-unnatural instructions: tuning language models with (almost) no human labor,gpt-4-1106-preview,9,"The described study is highly relevant to prompt engineering as it involves the creation of a large dataset of instructions for fine-tuning language models, which is a core facet of prompt engineering. The method of using language models to generate additional prompts and then employing these prompts for subsequent model training directly pertains to techniques in prompt engineering. The effectiveness of using generated prompts to achieve comparable or superior performance to human-curated datasets provides valuable insights into prompt engineering methodologies and their potential efficiencies. The point deduction is due to the abstract not addressing 'hard prefix prompts' directly, which may indicate the study doesn't focus specifically on that aspect of prompt engineering."
-language models are multilingual chain-of-thought reasoners,gpt-4-1106-preview,7,"The content is relevant to prompt engineering because it discusses the use of prompts (chain-of-thought prompting) to evaluate the reasoning abilities of language models in a multilingual context. Although the focus is on the reasoning abilities and multilingual capabilities of the models rather than on the engineering of prompts per se, the effectiveness of different types of prompts, especially those encouraging a chain of thought, is an essential aspect of prompt engineering. Hence, the study indirectly contributes valuable insights to the field of prompt engineering by showcasing the impact of prompt types on the performance of language models across various languages."
-teaching small language models to reason,gpt-4-1106-preview,7,"The abstract is highly relevant to the field of prompt engineering as it discusses the teaching of reasoning capabilities to smaller language models via knowledge distillation from larger models. Even though it does not specifically mention 'hard prefix prompts', it is related to the concept of improving model performance through advanced prompting strategies like chain-of-thought prompting. The study's outcome indicates that refined prompting techniques can transfer complex reasoning skills to smaller models, which is a significant aspect of prompt engineering."
-instruction induction: from few examples to natural language task descriptions,gpt-4-1106-preview,9,"The provided title and abstract are highly relevant to prompt engineering study as they explicitly discuss the ability of large language models to generate natural language instructions from a few examples. This ability is directly related to the engineering of prompts, as it involves designing prompts that help the model infer the desired task. The systematic exploration and evaluation of this ability are fundamental to understanding and improving prompt engineering strategies. The mention of a novel evaluation metric and differentiation between models based on their alignment with instructions also suggests a nuanced approach to prompt engineering that may yield insights for the systematic review on hard prefix prompts."
-weakly supervised data augmentation through prompting for dialogue understanding,gpt-4-1106-preview,8,"The study presented in the prompt directly engages with prompt engineering as it discusses the use of 'prompting' with large pre-trained language models for data augmentation in dialogue understanding tasks, which is a subset of prompt engineering. The relevance is high because it examines the iterative improvement of prompts through weakly-supervised techniques, although it may not focus exclusively on 'hard prefix prompts' but rather on the broader context of prompts for few-shot learning and augmentation. Given that it deals with prompts and language models and their application in a practical task, the study is substantially related to the field of prompt engineering."
-pointclip v2: prompting clip and gpt for powerful 3d open-world learning,gpt-4-1106-preview,7,"The study discusses utilizing both CLIP and GPT models in unison to enhance 3D open-world learning, with specific emphasis on zero-shot learning capabilities in classification, segmentation, and detection tasks. The relevance to prompt engineering is evident in the methodology where the authors design prompts for both the visual (CLIP) and textual (GPT) components to align 3D data with the pre-trained language knowledge. This indicates an element of prompt engineering to facilitate the interface between visual and language models for processing 3D point cloud data. Nevertheless, the study appears to be more focused on the application of these models in the 3D domain rather than specifically on the engineering of prompts. Hence, while prompt engineering is a component of the paper, it is not the core focus, which is why the rating is not higher."
-milan: masked image pretraining on language assisted representation,gpt-4-1106-preview,4,"The abstract describes an approach to masked image pretraining using language-assisted representation, rather than directly involving 'hard prefix prompts' in the conventional sense associated with language models or text-based prompting. However, the mention of 'prompting decoder architecture' suggests an involvement of prompting mechanisms, which could be related to the topic of prompt engineering in a broader sense. Hence, it receives a moderate relevance rating due to the possible connection through the architecture design that involves prompts in the image-language pretraining context, but it does not directly focus on prompt engineering studies for text-based models or the specific concept of 'hard prefix prompts'."
-errors are useful prompts: instruction guided task programming with verifier-assisted iterative prompting,gpt-4-1106-preview,7,"The relevance of the provided abstract to prompt engineering is fairly high, as the paper focuses on a method, CLAIRIFY, that uses iterative prompting combined with program verification. These techniques are critical for refining the interaction between humans and AI to generate accurate outputs, which is a central theme in prompt engineering. While the study is not about 'hard prefix prompts' specifically, it contributes to prompt engineering by exploring error utilization and iterative prompting to improve task programming, which could be applied in the broader context of prompt engineering studies. Therefore, a rating of 7 seems appropriate, given it may indirectly inform methodologies within prompt engineering but is not wholly centered on the specific concept of hard prefix prompts."
-language is not all you need: aligning perception with language models,gpt-4-1106-preview,7,"While the provided abstract does not directly discuss 'hard prefix prompts' or 'prompt engineering,' it details the capabilities of Kosmos-1, a Multimodal Large Language Model (MLLM), which is relevant to the field of prompt engineering. The ability of Kosmos-1 to learn in context and follow instructions, including zero-shot and few-shot settings, as well as its evaluation in multimodal chain-of-thought prompting, relates closely to how prompts can be engineered and optimized to interact with language models. Moreover, the cross-modal knowledge transfer mentioned is a component of understanding how prompts can be designed to leverage language in multimodal environments. However, since the focus is primarily on the model's capabilities rather than on the study of prompts themselves, the relevance rating is not a maximal score."
-improving factuality and reasoning in language models through multiagent debate,gpt-4-1106-preview,8,"The paper described is highly relevant to prompt engineering as it discusses a novel method for improving language model responses through a multiagent debate system. Although it does not specifically mention a 'hard prefix prompt', the techniques involved in creating prompts that facilitate a debate among language models are closely linked to advanced prompt engineering strategies. The 'society of minds' approach likely involves intricate prompting mechanisms to orchestrate the debate process. This has a direct bearing on the study and advancement of prompting methods, making the paper's content pertinent to the field. However, the rating is not a full 10 due to the lack of explicit mention of 'hard prefix prompts', which are the specific focus of the prompt engineering study mentioned."
-orca: interpreting prompted language models via locating supporting data evidence in the ocean of pretraining data,gpt-4-1106-preview,8,"The abstract discusses a novel method named ORCA for interpreting how prompted language models such as BERT perform tasks by locating supporting data from pretraining, which is highly relevant to studies on 'prompt engineering.' Understanding how models relate to pretraining data when generating responses to prompts is a crucial aspect of prompt engineering. It informs how models process prompts and can lead to designing better prompts that leverage the model's knowledge effectively. However, the focus on 'hard prefix prompts' hasn't been explicitly mentioned, which might slightly reduce its relevance to that specific field of study."
-are hard examples also harder to explain? a study with human and model-generated explanations,gpt-4-1106-preview,6,"The study touches on aspects of prompting when it investigates the size and hardness of the test samples and their impact on the quality of explanations generated by both humans and GPT-3. This is indirectly related to prompt engineering, as the quality of outputs generated by LLMs may depend on the prompt's difficulty level, which can inform prompt engineering strategies. However, the study's primary focus is on the explainability and the comparison of human and model-generated explanations, not specifically on engineering prompts to improve LLM performance."
-prefix-tuning: optimizing continuous prompts for generation,gpt-4-1106-preview,8,"The paper discusses 'prefix-tuning,' which is highly relevant to the field of prompt engineering as it involves optimizing task-specific vectors (prefixes) to improve performance on natural language generation tasks without the need to fine-tune all parameters of a language model. While the term 'hard prefix prompts' isn't explicitly used, the concept of prefix-tuning relies on a similar principle of using prompts (in this case, a trainable prefix) to guide the behavior of a language model. This is pertinent to the study of how prompts affect model performance and behavior, thus earning a high relevance rating. However, it's not a perfect match because the prompt specified a 'hard prefix prompts' review, and this paper focuses on a subset of prompt engineering that is not strictly the 'hard prefix.'"
-a comprehensive survey on pretrained foundation models: a history from bert to chatgpt,gpt-4-1106-preview,5,"The provided abstract and TLDR are related to Pretrained Foundation Models (PFMs) and cover a wide range of aspects including their history, applications, and challenges across different data modalities. While prompt engineering is not directly mentioned, the study's focus on 'zero shot or few shot prompting' used by models like ChatGPT suggests an indirect relation to the topic. Thus, the paper may contain insights relevant for understanding the broader context of prompt engineering, particularly in how PFMs accommodate prompt-based interactions. However, since prompt engineering is a more specific discipline focused on the design and optimization of prompts to effectively leverage models like GPT or BERT, and the summary does not explicitly address hard prefix prompts or prompt engineering techniques, it receives a moderate rating of relevance."
-segment everything everywhere all at once,gpt-4-1106-preview,8,"The abstract provided describes the creation of an interactive and promptable model (SEEM) for image segmentation tasks that is inspired by the mechanism of large language models (LLMs). Since prompt engineering refers to the design and refinement of prompts to effectively interact with models, such as LLMs, the study of SEEM's novel decoding mechanism that allows for diverse prompting is relevant to the field of prompt engineering. SEEM's ability to handle different types of dynamic prompts and its focus on a joint visual-semantic space are aspects that can provide valuable insights into how prompts can be optimized for better interaction with models across various domains. The work also touches on compositionality and semantic-awareness, both of which are key concepts in prompt engineering. While the focus is on image segmentation, the principles of designing prompts for interactive and semantic tasks align closely with prompt engineering methodologies. Therefore, the relevance rating is high but not maximum because the primary application is in the domain of image segmentation rather than text-based models, which are more commonly associated with prompt engineering."
-verify-and-edit: a knowledge-enhanced chain-of-thought framework,gpt-4-1106-preview,8,"The abstract describes a method for improving the performance of large language models by addressing the factuality of generated content through a Verify-and-Edit framework in the context of Chain-of-Thought prompting. This is highly relevant to prompt engineering as it presents a new technique for refining prompts to enhance model factuality and trustworthiness. Although it does not directly address 'hard prefix prompts,' it contributes to the broader field of prompt engineering by presenting a strategy to improve output quality, which is a crucial aspect of the study of prompts and their optimizations. Therefore, it scores high on relevance, but not the maximum due to its specific focus on factuality rather than prompt types."
-graphprompt: unifying pre-training and downstream tasks for graph neural networks,gpt-4-1106-preview,8,"The paper discusses a novel framework called GraphPrompt, which is directly related to prompt engineering in the context of graph neural networks (GNNs). While the study's focus is on the application of prompts to GNNs rather than text-based models traditionally associated with prompt engineering, it still contributes to the overall field of prompt engineering by extending its principles to another domain of artificial intelligence. The relevance to prompt engineering is high as it involves the development of a learnable prompt to bridge the gap between pre-training and downstream tasks, which is a core concept in prompt engineering studies."
-symbolic chain-of-thought distillation: small models can also “think” step-by-step,gpt-4-1106-preview,9,"The abstract describes a method called Symbolic Chain-of-Thought Distillation (SCoTD) that directly relates to prompt engineering, as it involves training smaller language models on the rationalizations produced by larger models. This process is a form of prompt engineering since it deals with enhancing the ability of smaller models to sequentially reason through problems, akin to crafting effective prompts that guide model reasoning. The high relevance rating is due to the focus on improving model performance through engineered prompts (chain-of-thought prompting), which is central to prompt engineering studies. However, the rating is not a full 10 because the abstract does not explicitly mention 'hard prefix prompts' or a systematic review, which is specifically noted in the prompt."
-towards revealing the mystery behind chain of thought: a theoretical perspective,gpt-4-1106-preview,8,"The provided title and abstract discuss the effectiveness of Chain-of-Thought (CoT) prompting in improving the performance of Large Language Models (LLMs), particularly for complex tasks. While the study does not explicitly mention 'hard prefix prompts,' it is closely related to prompt engineering, as CoT is a form of prompting strategy used to enhance the problem-solving capabilities of LLMs. The relevance to prompt engineering is high because the theoretical perspective on the mechanism of CoT can contribute significantly to the understanding and development of advanced prompt engineering techniques. However, the rating is not a full 10 because the explicit focus is not on hard prefix prompts but rather on a broader category of CoT prompting strategies."
-zeroshotdataaug: generating and augmenting training data with chatgpt,gpt-4-1106-preview,8,"This paper is highly relevant to prompt engineering study as it directly explores the generation of synthetic data using task-specific prompts with ChatGPT. The study delves into the principles of prompt engineering by designing appropriate prompts that lead to superior performance in data augmentation for low resource scenarios. While the paper does not specifically mention 'hard prefix prompts' and the focus is more on data augmentation rather than the core concept of prompt engineering, the underlying premise involves crafting effective prompts to elicit desired outputs from a language model, which is a central aspect of prompt engineering."
-reading comprehension quiz generation using generative pre-trained transformers,gpt-4-1106-preview,4,"The study is related to the application of AI in the educational domain, specifically using a pre-trained transformer model (GPT-3) for quiz generation which is a type of prompt engineering. However, it does not specifically focus on 'hard prefix prompts' but rather on the general capability of transformer models to generate educational content. The relevance to prompt engineering is present since quiz generation can be considered a form of prompt design, yet it is not focused on the systematic review of prompts or their optimization, which would make it highly relevant to prompt engineering studies."
-meet your favorite character: open-domain chatbot mimicking fictional characters with only a few utterances,gpt-4-1106-preview,8,"The paper presents a method, Pseudo Dialog Prompting (PDP), which is highly relevant to prompt engineering study as it directly involves designing prompts to induce specific behaviors from a language model (mimicking fictional characters). This directly contributes to the broader field of prompt engineering by exploring how to effectively use limited data (a few utterances) to shape the output of a language model. It might not cover 'hard prefix prompts' in the systematic review sense but provides practical insights into the application of prompt engineering for conversational AI."
-towards facet-driven generation of clarifying questions for conversational search,gpt-4-1106-preview,8,"The study described in the provided title and abstract demonstrates relevance to prompt engineering as it involves generating clarifying questions in response to user queries using a fine-tuned GPT-2 language model. This is closely related to prompt engineering as it requires careful design of prompts, or inputs, to the language model to ensure that the generated questions are coherent, relevant, and useful in the context of conversational search.
While the main focus of the paper seems to be on the generation of clarifying questions rather than on hard prefix prompts specifically, the techniques and findings are likely applicable to prompt engineering studies, especially those concerned with improving interaction patterns with AI systems through conversational interfaces. The only reason the rating isn't higher is because 'hard prefix prompts' isn't explicitly mentioned, but the methodology and goals are nevertheless aligned with the principles of prompt engineering." -codegen: an open large language model for code with multi-turn program synthesis,gpt-4-1106-preview,6,"The abstract describes research on program synthesis using large language models, particularly focusing on a new model called CODEGEN. The relevance to prompt engineering is moderate because it touches on the use of prompts specifying subproblems in a multi-step paradigm for program synthesis. This suggests that different prompt structures (such as multi-turn prompts) can significantly affect the performance of code generation tasks, which is a part of the broader area of prompt engineering. However, the abstract does not specifically discuss 'hard prefix prompts' or provide a systematic review of prompt engineering, so it is only partially relevant to the specified topic of a comprehensive systematic review on hard prefix prompts." -promptchainer: chaining large language model prompts through visual programming,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it involves creating complex tasks by sequencing multiple prompt-driven interactions with a Large Language Model (LLM). While it doesn't specifically mention 'hard prefix prompts,' it approaches the broader topic of prompt design and chaining, which is a subset of prompt engineering. It also focuses on the user-interface side of prompt engineering through the PromptChainer tool, making it relevant for researchers and practitioners interested in optimizing the human-model interaction process. However, the rating is not a full 10 because the study does not directly focus on 'hard prefix prompts' specifically, which is the exact topic of interest." -"grips: gradient-free, edit-based instruction search for prompting large language models",gpt-4-1106-preview,9,"The article describes an innovative approach to prompt engineering specifically designed for large language models, which is directly relevant to the prompt engineering study. The 'Gradient-free Instructional Prompt Search (GrIPS)' is highly relevant as it directly addresses the challenge of improving language model performance through prompt optimization without the need for computationally expensive gradient-based methods. The relevance is slightly below 10 because the systematic review is not solely focused on hard prefix prompts, but on a broader method of prompt improvement. Nevertheless, the study's contributions to the field of prompt engineering are substantial and directly applicable to the systematic review topic." -ai chains: transparent and controllable human-ai interaction by chaining large language model prompts,gpt-4-1106-preview,8,"The study addresses a novel approach to interacting with large language models through 'Chaining LLM steps', indicating a clear relevance to the field of prompt engineering. Chaining can be viewed as an advanced form of prompt engineering where prompts are not static but follow a dynamic, modular process. 
Although the study does not directly discuss 'hard prefix prompts,' it explores the controllability and transparency of LLMs, which are crucial aspects in designing effective prompts. The relevance rating is not a full 10 because the study's focus is on chaining mechanisms rather than the specific concept of 'hard prefix prompts.'" -craft an iron sword: dynamically generating interactive game characters by prompting large language models tuned on code,gpt-4-1106-preview,7,"The abstract indicates a study that involves using example conversational prompts with a language model to enhance NPC interactions in games. While the main focus seems to be on generating natural language and code for game development purposes, the underlying premise is that these prompts are essential in directing the behavior of the language model. This relates to the subject of prompt engineering, as the quality and design of the prompts directly affect the output and capabilities of the conversational agent. However, the study does not appear to focus primarily on the systematic review of 'hard prefix prompts' specifically, hence the rating is not a perfect 10. The findings could still contribute valuable insights into prompt engineering as it relates to practical applications in game design and NPC character development." -in-context autoencoder for context compression in a large language model,gpt-4-1106-preview,6,"The abstract describes a method for compressing long contexts into compact memory slots, which although not directly related to 'hard prefix prompts', it does pertain to the broader field of prompt engineering by allowing for more efficient handling of prompts in large language models. This efficiency can impact how prompts are created, managed, and used in large language models. The connection to prompt engineering lies in the fine-tuning on instruction data, which likely would involve crafting prompts to generate desirable responses. However, since the primary focus appears to be context compression rather than prompt engineering techniques or the study of 'hard prefix prompts' specifically, the rating is not higher." -learning to prompt clip for monocular depth estimation: exploring the limits of human language,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering as it explores the efficiency of CLIP—a model trained on language and vision inputs—when prompted for a specialized task like Monocular Depth Estimation. The research discusses replacing human-language prompts with continuous learnable tokens, which directly pertains to prompt engineering by investigating alternative ways to communicate with AI models. It demonstrates how prompt design can influence performance and understanding of AI models, which is a central concern of prompt engineering studies. The fact that it also touches upon the limitations of human language in prompts and investigates non-linguistic tokens is a novel contribution to the field." -decomposed soft prompt guided fusion enhancing for compositional zero-shot learning,gpt-4-1106-preview,4,"The abstract presents a study on a method for compositional zero-shot learning (CZSL) using a framework called Decomposed Fusion with Soft Prompt (DFSP). While it is related to engineering prompts in the context of vision-language models, which indeed falls under the broader category of prompt engineering, it isn't directly focused on hard prefix prompts as mentioned in the initial request. 
The paper is therefore only tangentially relevant, since it involves the construction of vector combinations of learnable soft prompts, which can be considered part of prompt engineering. However, the method described diverges from the original topic of 'hard prefix prompts,' which typically implies a non-modifiable text input for models, as opposed to the learnable prompts discussed here." -zero-shot rumor detection with propagation structure via prompt learning,gpt-4-1106-preview,8,"The abstract discusses a new approach to rumor detection using a prompt learning framework, which is directly relevant to the field of prompt engineering. The study addresses the design of prompts and their integration with data representations and structural features, which are core considerations for prompt engineering. However, the study is more focused on the application of prompt learning for rumor detection rather than the general study of 'hard prefix prompts', so it may not fully cover the systematic review aspect that the hypothetical study on hard prefix prompts suggests." -efficiently enhancing zero-shot performance of instruction following model via retrieval of soft prompt,gpt-4-1106-preview,8,"The described study focuses on the use of soft prompts to improve the zero-shot performance of instruction-following models, specifically mentioning the assistance of these soft prompts to hard prompts. This is relevant to prompt engineering as the research is exploring an innovative approach to optimize how prompts are used, which lies at the core of prompt engineering. The relevance is not maximized (10 out of 10) because the study does not directly focus on 'hard prefix prompts' as specified in the original query but is sufficiently related as it investigates the conjunction of soft and hard prompts in the context of model tuning and performance enhancement. Therefore, it contributes valuable insights to the broader field of prompt engineering studies." -enhancing class understanding via prompt-tuning for zero-shot text classification,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering as it proposes a method that explicitly uses prompts to enhance semantic understanding in zero-shot text classification tasks. This approach falls within the scope of prompt engineering as it involves the generation of discriminative words (presumably prompts) and a matching model conditioned on prompts. The study focuses on enhancing class understanding, which is a key aspect of prompt-based models, although it does not specifically mention 'hard prefix prompts', which was the focus of the original prompt." -prompt-based zero-shot relation classification with semantic knowledge augmentation,gpt-4-1106-preview,9,"The abstract describes a study focused on leveraging prompt-based approaches along with semantic knowledge to address the challenge of relation classification, especially for unseen relations under a zero-shot setting. The methodology described involves creating prompts that incorporate semantic knowledge from an external knowledge graph and using these to train a model. This aligns closely with the field of prompt engineering as it specifically addresses the development and use of prompts to guide model performance in a challenging AI task. The reason for not giving a full 10 is due to the absence of specific mention of 'hard prefix prompts,' which may indicate this study does not focus exclusively on that aspect of prompt engineering."
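Several of the ratings above turn on the distinction between hard prefixes (fixed, human-readable text prepended to the input) and soft prompts (trainable embeddings with no textual form). A minimal sketch of that distinction, assuming the Hugging Face transformers API; the GPT-2 model choice and prefix length are purely illustrative:

```python
# Sketch of the hard- vs. soft-prefix distinction discussed in the records
# above. Model choice and prefix length are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hard prefix: a fixed, auditable string prepended to every input.
hard_prefix = "Classify the sentiment of the following review:"
inputs = tokenizer(hard_prefix + " The movie was wonderful.", return_tensors="pt")
hard_out = model(**inputs)

# Soft prefix: trainable vectors with no textual form (cf. prefix-tuning).
n_virtual_tokens = 10  # hypothetical prefix length
soft_prefix = torch.nn.Parameter(torch.randn(1, n_virtual_tokens, model.config.n_embd))
token_embeds = model.transformer.wte(inputs["input_ids"])
soft_out = model(inputs_embeds=torch.cat([soft_prefix, token_embeds], dim=1))
```

A hard prefix can be copied verbatim between models and inspected by a reviewer, while a soft prefix is only meaningful to the checkpoint it was trained with, which is why the records treat the two as distinct lines of work.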
-matching exemplar as next sentence prediction (mensp): zero-shot prompt learning for automatic scoring in science education,gpt-4-1106-preview,8,"The abstract describes a study that investigates the use of a zero-shot approach to automatically score student responses in science education using a novel method called Matching Exemplars as Next Sentence Prediction (MeNSP). This approach is highly relevant to the field of prompt engineering, as it involves the use of prompts to align with a scoring procedure without the need for fine-tuning. While the abstract does not explicitly mention 'hard prefix prompts', it does discuss prompt-based techniques for language model adaptation, which falls under the broader umbrella of prompt engineering. Therefore, the rating is an 8, indicating high relevance due to the innovative application of prompt-related methods in an educational context, but not a perfect score as the specific term 'hard prefix prompts' was not discussed." -controlling personality style in dialogue with zero-shot prompt-based learning,gpt-4-1106-preview,9,"The abstract describes a study focused on 'prompt-based learning' for controlling both personality and semantic accuracy in natural language generation, which is highly relevant to the field of prompt engineering. The experimentation with different classes of prompts and their effects on NLG performance directly pertains to how prompts can be engineered to achieve specific outcomes. The high rating acknowledges the direct relevance to prompt engineering studies, especially within the context of controlling specific attributes in generated text, which is a crucial aspect of prompt engineering. The only reason it does not receive a full score might be because it does not explicitly address 'hard prefix prompts' but rather prompt-based learning in general." -structured prompt interrogation and recursive extraction of semantics (spires): a method for populating knowledge bases using zero-shot learning,gpt-4-1106-preview,8,"The given abstract describes a method, SPIRES, for populating knowledge bases using Large Language Models (LLMs) through zero-shot learning and prompt interrogation. As prompt engineering involves the design and refinement of prompts to effectively communicate with AI models, this abstract is highly relevant, as it suggests a structured way to use prompts to extract information and populate databases, a task that directly pertains to how prompts are constructed and their effectiveness. The rating is not a perfect 10 as the abstract specifically focuses on knowledge extraction and ontologies, which is a subset of prompt engineering." -prompt-guided zero-shot anomaly action recognition using pretrained deep skeleton features,gpt-4-1106-preview,4,"The study is somewhat related to prompt engineering as it incorporates user prompt-guided zero-shot learning, which hints at the use of prompts to guide the anomaly detection model. However, the focus is primarily on skeleton-based anomaly detection and the use of prompts seems to be part of the overall anomaly score calculation rather than the core study of different prompt engineering techniques or hard prefix prompts. Therefore, the relevance is moderate." -bayesian sharpness-aware prompt tuning for cross-domain few-shot learning,gpt-4-1106-preview,8,"The paper presents a novel approach to prompt tuning, specifically Bayesian Sharpness-Aware Prompt Tuning (BSAPT), within the context of few-shot learning and domain adaptation.
This is highly relevant to prompt engineering as it directly focuses on enhancing the method through which prompts are constructed and tuned, a core aspect of prompt engineering studies. The application to cross-domain few-shot learning demonstrates an advanced utilization of prompt engineering techniques. The rating is not a full 10 because the abstract suggests a specific application of prompt engineering rather than a comprehensive study of hard prefix prompts in general." -zero-shot generative model adaptation via image-specific prompt learning,gpt-4-1106-preview,7,"The provided abstract discusses Image-specific Prompt Learning (IPL), a methodology related to adapting generative models using text-based prompts, which is highly relevant to the field of prompt engineering. Although the text does not directly address 'hard prefix prompts', it does tackle the use of text prompts in controlling and improving the output of generative models, thus making significant contributions to the broader topic of prompt engineering. The connection to prompt engineering is substantial as IPL is an innovative way of providing domain-specific textual directions to a generative model, which aligns with the disciplines involved in studying how prompts affect the behavior of AI models. However, it does not fully align with a 'comprehensive systematic review on hard prefix prompts' as the abstract seems to focus on a specific application rather than a broad review. Hence, the rating is not a perfect score." -relationprompt: leveraging prompts to generate synthetic data for zero-shot relation triplet extraction,gpt-4-1106-preview,9,"The study directly addresses prompt engineering by exploring how prompts can be used to generate synthetic data for a Zero-Shot Relation Triplet Extraction task. It presents a novel method of leveraging language model prompts in conjunction with structured text approaches to create relation samples, which is a significant contribution to prompt engineering literature. The fact that they also designed a novel decoding method to work with their prompting strategy further emphasizes its high relevance to the field of prompt engineering." -decoupling knowledge from memorization: retrieval-augmented prompt learning,gpt-4-1106-preview,9,"The presented abstract is highly relevant to prompt engineering study as it directly addresses the concept of prompt learning, which is a cornerstone of prompt engineering. It proposes a novel method, RetroPrompt, which aims to enhance the general learning capabilities of language models by decoupling knowledge from memorization. This pertains to an advanced area within prompt engineering that targets improvements in model generalization and few-shot learning abilities, both of which are critical metrics in evaluating the effectiveness of prompts. Although it does not explicitly mention 'hard prefix prompts,' the subject matter is closely related to the broader field of prompt design and optimization." -zero-shot video captioning with evolving pseudo-tokens,gpt-4-1106-preview,7,"The abstract describes a method for zero-shot video captioning that involves a form of prompt engineering by optimizing part of the prompt during the generation process. This relates to the prompt engineering study as it includes the manipulation of prompts to improve language model outputs. Although it does not specifically mention 'hard prefix prompts,' the concept of evolving pseudo-tokens could potentially fall under a broader interpretation of prompt engineering. 
Therefore, the relevance is fairly high but not completely aligned, as the central focus is on video captioning rather than prompt engineering in isolation." -socratic models: composing zero-shot multimodal reasoning with language,gpt-4-1106-preview,5,"The abstract discusses the use of Socratic Models (SMs) for zero-shot multimodal reasoning which relates to the field of prompt engineering in that it involves effective prompting to enable communication and information exchange between models. While it's not explicitly focused on 'hard prefix prompts' as mentioned in the study prompt, the concept of multimodal-informed prompting falls within the broader scope of prompt engineering. Therefore, the relevance to prompt engineering is moderate but not directly aligned with the specific topic of hard prefix prompts." -improving few-shot performance of language models via nearest neighbor calibration,gpt-4-1106-preview,7,"The study targets the optimization of in-context learning for pre-trained language models (PLMs), which is closely related to prompt engineering, as it deals with the arrangement and selection of prompts to enhance few-shot learning performances. The introduction of a nearest-neighbor calibration framework addresses the effectiveness of prompts. Even though the study does not explicitly mention 'hard prefix prompts', the principles and methodologies used for calibration and enhancement of few-shot learning may be applicable to the systematic review and improvement of hard prefix prompts. Hence, the study is relevant but not fully focused on hard prefix prompts, leading to a rating of 7." -few-shot fine-grained entity typing with automatic label interpretation and instance generation,gpt-4-1106-preview,7,"The abstract discusses a novel framework for few-shot Fine-grained Entity Typing (FET) that utilizes prompt-based tuning, which is directly related to the concept of prompt engineering. It addresses the challenge of how to effectively design prompts (verbalizers) automatically, considering the target corpus and label hierarchy, which is a core problem in prompt engineering studies. Moreover, it also introduces a generation aspect to create new instances, hinting at iterative prompt improvement or instance augmentation, which could be relevant for generating more effective prompts. However, the study seems to focus more on entity typing within a few-shot learning framework rather than on hard prefix prompts specifically or prompt engineering more broadly, which may include a variety of other techniques and applications. Therefore, the rating is not a full 10 but still significant due to its partial relevance." -natural language inference prompts for zero-shot emotion classification in text across corpora,gpt-4-1106-preview,9,"The paper is highly relevant to prompt engineering as it examines the effects of different prompt formulations on the performance of a natural language inference-based zero-shot-learning classifier. This is directly related to the field of prompt engineering, which involves studying how the design of prompts influences the behavior and output of language models. The study's focus on tailoring prompt selection to fit specific language corpora aligns well with prompt engineering objectives, which seek to optimize interactions with language models for various tasks, including emotion classification mentioned in the abstract." 
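The emotion-classification record directly above is one of the few entries whose technique has a widely used off-the-shelf implementation. A runnable sketch with the Hugging Face zero-shot pipeline, where the model and hypothesis templates are illustrative choices rather than the paper's exact setup, shows how the prompt formulation alone changes the classifier's output:

```python
# Sketch of NLI-based zero-shot emotion classification. The hypothesis
# template is the "prompt" whose formulation the study above varies.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

text = "I can't believe they cancelled the concert."
emotions = ["anger", "sadness", "joy", "fear"]

# Different templates can rank the same labels differently on the same text.
for template in ["This text expresses {}.", "The writer feels {}."]:
    result = classifier(text, emotions, hypothesis_template=template)
    print(template, "->", result["labels"][0], round(result["scores"][0], 3))
```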
-clinical prompt learning with frozen language models,gpt-4-1106-preview,8,"The abstract discusses the application of prompt learning within the specialized domain of clinical texts, comparing its effectiveness to traditional fine-tuning methods. While it doesn't focus exclusively on 'hard prefix prompts', prompt learning is a closely related aspect of prompt engineering. It's highly relevant to a study on prompt engineering, particularly due to the exploration of efficiency and domain-specific challenges, which are key considerations in the field. However, the absence of a specific mention of 'hard prefix prompts' precludes a perfect score." -language models as zero-shot planners: extracting actionable knowledge for embodied agents,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering as it explores the use of language models to interpret and execute high-level tasks by breaking them down into actionable steps. This indicates a level of prompt engineering where the model is not only responding to prompts but is being evaluated on its ability to translate prompts into a sequence of actions in a simulated environment. Although the title does not explicitly mention 'hard prefix prompts', the concept of prompt engineering is central to the study as it requires effective prompts to guide the model in generating plans that can map to executable actions. The study's focus on grounding tasks and improving the executability of plans derived from language models is at the core of advanced prompt engineering techniques." -few-shot table-to-text generation with prefix-controlled generator,gpt-4-1106-preview,8,"The study presents a prompt-based approach, specifically the Prefix-Controlled Generator, which is highly relevant to the field of prompt engineering. It addresses the challenge of few-shot table-to-text generation by pre-pending task-specific prompts to improve the ability of Pre-trained Language Models to handle structured data like tables. The focus on controlling the output through hard prefixes is directly applicable to prompt engineering. The two-point deduction from a perfect score acknowledges that the paper might be tangentially related to a 'systematic review on hard prefix prompts' since it appears to be a novel methodology rather than a review. However, the proposed method's successful application in a few-shot learning context and control over PLM outputs keeps it highly relevant to the study of engineering prompts for language models." -p3 ranker: mitigating the gaps between pre-training and ranking fine-tuning with prompt-based learning and pre-finetuning,gpt-4-1106-preview,8,"The abstract provided discusses the utilization of prompt-based learning in the context of adapting pre-trained language models for search ranking tasks. This approach aligns closely with prompt engineering, which focuses on designing prompts that effectively guide models to perform specific tasks or understand particular contexts. The P3 Ranker's emphasis on converting the ranking task to fit a pre-training schema using prompts directly relates to the study of prompt engineering, justifying a high relevance rating. Although the paper specifically targets the search ranking domain and may not address hard prefix prompts directly, the principles of prompt-based learning discussed are central to prompt engineering studies." 
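The P3 Ranker record above closes on the idea of recasting ranking as a prompted, pre-training-style task. A toy sketch of that general recipe follows; the cloze template, the yes/no verbalizers, and the BERT model are assumptions for illustration, not the paper's actual schema:

```python
# Toy sketch: relevance ranking recast as a cloze-style prompt scored by a
# masked language model. Template and verbalizers are illustrative only.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

def relevance_score(query: str, doc: str) -> float:
    # The model fills the masked slot; p("yes") serves as the ranking score.
    prompt = f"Query: {query} Document: {doc} Is the document relevant? [MASK]."
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0]
    logits = model(**inputs).logits[0, mask_pos.item()]
    yes_id = tokenizer.convert_tokens_to_ids("yes")
    no_id = tokenizer.convert_tokens_to_ids("no")
    return torch.softmax(logits[[yes_id, no_id]], dim=-1)[0].item()

docs = ["A guide to prompt engineering.", "A recipe for banana bread."]
print(sorted(docs, key=lambda d: relevance_score("prompt engineering", d), reverse=True))
```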
-prompt tuning with soft context sharing for vision-language models,gpt-4-1106-preview,9,The paper presents research directly relevant to prompt engineering by discussing a novel methodology for prompt tuning in vision-language models. The primary focus on fine-tuning models for few-shot tasks using a shared meta network for prompt generation aligns closely with advanced techniques in prompt engineering. The relevance is only slightly less than maximum because it is specifically about vision-language models and may not cover the broader aspects or methods used in all types of models related to 'prompt engineering.' -partslip: low-shot part segmentation for 3d point clouds via pretrained image-language models,gpt-4-1106-preview,5,"While the abstract indicates the use of a pretrained image-language model, GLIP, in the context of 3D part segmentation leveraging multi-view priors and few-shot prompt tuning, it does not directly address prompt engineering study or the investigation of hard prefix prompts. However, the mention of 'few-shot prompt tuning' suggests a relevant connection to the disciplines of prompt engineering and the model's ability to interpret and process language-based inputs, which may overlap with the interests of those studying prompt design and effectiveness. Thus, the relevance is moderate as it sits at the intersection of neural language models and their application in visual tasks, without focusing explicitly on the study of prompt engineering." -prompt-tuning can be much better than fine-tuning on cross-lingual understanding with multilingual language models,gpt-4-1106-preview,8,"The abstract discusses the effectiveness of prompt-tuning compared to fine-tuning in multilingual language models for natural language understanding tasks. The relevance to prompt engineering is significant, as prompt-tuning is a method of prompt engineering that modifies the input prompt to improve model performance, without extensive retraining. This is particularly applicable to the engineering study of 'hard prefix prompts' as it provides empirical evidence of how different prompting strategies can impact cross-lingual understanding and transferability of language models. The reason why it is not a full 10 is that it does not specifically discuss 'hard prefix prompts,' but rather prompt tuning in a general sense, and thus, it is not exclusively focused on the prompt engineering aspect described in the original query." -few-shot anaphora resolution in scientific protocols via mixtures of in-context experts,gpt-4-1106-preview,6,"The study presents MICE, a method for few-shot anaphora resolution using in-context learning, which is relevant to prompt engineering in that it involves conditioning language models on specific inputs for desired outputs. The focus on in-context learning and efficiency in handling long sequences could inform strategies in prompt engineering, especially for complex tasks like anaphora resolution. However, the study is not directly focused on designing or optimizing prompts (i.e., 'hard prefix prompts'), but rather on a specific application of in-context learning. As such, the relevance is moderate but not high." -exploiting domain-slot related keywords description for few-shot cross-domain dialogue state tracking,gpt-4-1106-preview,7,"The paper describes an approach to enhancing dialogue state tracking by using domain-slot related descriptions which act as prompts to identify slot information. 
This is relevant to prompt engineering because the paper discusses a method of designing and utilizing prompts (in the form of domain-slot descriptions) to improve the performance of an NLP model. Furthermore, the results indicate that these engineered prompts (domain-slot descriptions) help the model outperform other methods. While the focus is on dialogue state tracking rather than on prompt engineering directly, the usage of customized descriptions to improve model performance does partially fall under the broader umbrella of prompt engineering." -decorate the examples: a simple method of prompt design for biomedical relation extraction,gpt-4-1106-preview,9,"The title and abstract indicate that the paper directly addresses prompt design, an essential aspect of prompt engineering, specifically for the task of biomedical relation extraction. The use of a systematic method to generate prompts and the evaluation of their effectiveness in the context of fine-tuning and few-shot learning are highly relevant to studying prompt engineering. Furthermore, the concrete results showing improved performance by using prompts suggest practical significance in the field. The only reason for not giving a full score of 10 is that the paper focuses on a specific domain (biomedical), which may slightly limit the breadth of its relevance to prompt engineering in general, even though the methodology may be applicable across different domains." -slot dependency modeling for zero-shot cross-domain dialogue state tracking,gpt-4-1106-preview,8,"The study's focus on utilizing slot prompt combinations in dialogue state tracking is highly relevant to prompt engineering due to its emphasis on prompt construction for capturing dependencies and domain knowledge in natural language processing tasks. Although it is not directly focused on 'hard prefix prompts', the principles of designing and utilizing prompts for zero-shot learning are closely related to prompt engineering, hence the high relevance rating." -pre-trained language models can be fully zero-shot learners,gpt-4-1106-preview,8,"The abstract is highly relevant to prompt engineering as it discusses a method (NPPrompt) for zero-shot language understanding that relies on pre-trained language models without the need for labeled data, fine-tuning, or human-constructed prompts. This directly pertains to the study of prompting since it tackles the challenge of leveraging the underlying knowledge of PLMs for various NLP tasks using a novel prompting technique. While it doesn't specifically mention 'hard prefix prompts,' it falls within the domain of research that advances the understanding of how to use prompts effectively with PLMs. The rating is not a full 10 because the direct relevance to 'hard prefix prompts' is not explicit, which might be specifically addressed in a comprehensive systematic review on that sub-topic." -multitask prompted training enables zero-shot task generalization,gpt-4-1106-preview,9,"The provided abstract discusses the development of a system for mapping natural language tasks into a prompted form and explicitly training a model on a diverse set of prompts. This is highly relevant to prompt engineering as it explores the creation and use of different prompts to achieve zero-shot task generalization. The focus on prompted datasets is directly tied to the study of how prompts affect language model behavior, a core aspect of prompt engineering.
The relevance is not a full 10 because the abstract does not specifically mention 'hard prefix prompts', which could be a more narrow subtopic within prompt engineering." -tess: zero-shot classification via textual similarity comparison with prompting using sentence encoder,gpt-4-1106-preview,8,"The mentioned study on the TeSS (Text Similarity Comparison using Sentence Encoder) framework is highly relevant to prompt engineering because it focuses on a method where label assignment in zero-shot classification is achieved through the comparison of embeddings from text input and label prompts. This process is integral to prompt engineering as it relies on the design and utilization of prompts that can effectively represent the semantic space for classification tasks. The use of external corpora to enhance the descriptive power of label prompts (TeSS-R) is particularly pertinent to prompt engineering research. However, the study did not explicitly focus on 'hard prefix prompts,' which would encompass a specific subset of prompting techniques and strategies, hence the rating of 8 rather than a perfect 10." -zero-shot program representation learning,gpt-4-1106-preview,7,"The abstract discusses 'Zecoler', which utilizes the concept of inserting trainable prompts into code to elicit knowledge from pre-trained models in the context of code representation learning tasks. This approach is relevant to prompt engineering study because it involves optimizing the input to a pre-trained model through trainable prompts, which is akin to hard prompting strategies. The concept of transforming downstream tasks into the form of pre-training tasks using prompts is central to prompt engineering. However, the focus on code intelligence tasks and domain-specific applications like Solidity reduces the relevance slightly, as a comprehensive systematic review on hard prefix prompts may encompass a broader range of tasks and domains beyond code representation learning." -queryform: a simple zero-shot form entity query framework,gpt-4-1106-preview,7,"The study presents a zero-shot transfer learning framework called QueryForm, which includes a 'dual prompting mechanism.' Although the paper does not focus specifically on 'hard prefix prompts' as a separate study area, the concept of using prompts to extract information from a model without task-specific training data is a form of prompt engineering. The relevance to prompt engineering lies in the framework's ability to influence a model's behavior with carefully constructed queries (prompts). However, the paper discusses prompting within the context of a specific document understanding task rather than a wider exploration of various prompt engineering techniques. The rating reflects relevance in terms of prompting mechanisms and their application, but it is not a direct study of hard prefix prompts in a comprehensive manner." -what language model architecture and pretraining objective work best for zero-shot generalization?,gpt-4-1106-preview,5,"While the abstract provided does not directly address prompt engineering or the study of hard prefix prompts specifically, it discusses related aspects of language model performance such as zero-shot generalization, model architectures, and pretraining objectives. Understanding how different architectures and objectives contribute to a model's ability to understand and process prompts is relevant to prompt engineering. 
However, since the focus is not on prompt engineering itself or on systematic reviews of prompts, the relevance is moderate." -generating variable explanations via zero-shot prompt learning,gpt-4-1106-preview,8,"The abstract addresses the use of 'zero-shot prompt learning' as a central method in generating explanations for variables in programming, which is relevant to the field of prompt engineering. Prompt engineering typically involves designing and refining prompts to improve interaction with AI models, and the study’s focus on leveraging prompts in a zero-shot context to enhance program comprehension is closely related. However, it does not specifically address 'hard prefix prompts' which would be more directly related to the exact terminology in the prompt engineering study. Hence, a couple of points are deducted for the specialized focus on variable explanations rather than the actual construction or analysis of prompt formats or their impacts in broader applications." -prompt gating: a parameter efficient tuning method for zero-shot multi-source translation,gpt-4-1106-preview,8,"The paper introduces 'Prompt Gating', a method that appends prompts to model inputs, which is directly related to prompt engineering as it involves manipulating prompts to achieve better performance in a machine learning task. The study's relevance to prompt engineering is high because it deals with the integration of prompts into translation models and discusses their impact. The fact that it is applied to machine translation, however, makes it slightly less relevant than if it would have been a study solely focused on prompt engineering for a broader range of applications." -peinet: joint prompt and evidence inference network via language family policy for zero-shot multilingual fact checking,gpt-4-1106-preview,8,"Although the title and abstract do not specifically mention 'hard prefix prompts', they discuss the concept of using joint prompt and evidence inference for zero-shot multilingual fact-checking. This is relevant to prompt engineering as it involves the design of prompts (in this case, for understanding and verifying multilingual claims) and how these prompts interact with an AI model to achieve better performance in a specific task. The novel approach of combining prompts with a mechanism for evidence aggregation aligns with prompt-based methodologies. Hence, the paper is quite relevant to the study of prompt engineering, although it is not directly focused on 'hard prefix prompts,' which might be a specific subset of prompt engineering." -an exploration of prompt-based zero-shot relation extraction method,gpt-4-1106-preview,8,"The relevance to prompt engineering is high because the work involves prompt-tuning, a technique directly related to prompt engineering. It suggests optimizing a model for zero-shot relation extraction by utilizing prompts which influence the model's predictions. Although it's not specifically about 'hard prefix prompts' as the original prompt indicates, prompt-tuning is a subset of prompt engineering and thus highly relevant to studies of prompts and their impact on model performance. The rating is not a full 10 due to the abstract being unavailable ('nan'), which limits the ability to fully assess the relevance, and the absence of direct mention of 'hard prefix prompts', which the original study prompt seems to specify." 
-prompt-guided scene generation for 3d zero-shot learning,gpt-4-1106-preview,7,"The paper presents an application of prompt engineering in the context of 3D zero-shot learning, where prompts are used to guide scene generation and are integral to the architecture of the learning model. Although prompt engineering is usually discussed in relation to natural language processing, this study adapts the concept for a novel application in 3D data augmentation and model training. It is relevant to the broader field of prompt engineering in that it showcases its adaptability and potential in different areas of AI. However, it might not be considered a pure study of prompt engineering in the textual or linguistic sense, hence the rating is not a full 10." -a topic-based prompt learning method for zero-shot stance detection,gpt-4-1106-preview,4,"While the study uses prompts for stance detection, it focuses more on the classification and processing of language for that task, rather than the creation or systematic review of hard prefix prompts in the context of prompt engineering. Since prompt engineering typically refers to methods for improving language model responses, and this paper seems to touch on related concepts without being squarely focused on prompt engineering, it receives a moderate rating." -prompt-based zero-shot relation extraction with semantic knowledge augmentation,gpt-4-1106-preview,8,"The paper discusses a prompt-based model, which is highly relevant to the field of prompt engineering, particularly in the context of zero-shot learning. The focus on generating prompts with semantic knowledge integration touches on a core area of how prompts can be engineered to improve task performance in natural language processing. The relevance score is not a full 10 because the study seems to emphasize the zero-shot relation extraction aspect alongside prompt engineering, rather than being exclusively focused on the methodologies for creating and optimizing prompts (i.e., hard prefix prompts). Nevertheless, the paper still offers substantial insight into the application of prompt engineering concepts." -from visual prompt learning to zero-shot transfer: mapping is all you need,gpt-4-1106-preview,8,"The article discusses a novel approach to adapting large-scale pre-trained models to new tasks using a technique called SeMap, which aligns semantic knowledge for visual prompt learning. The relevance to prompt engineering is high because the research deals with the optimization and creation of prompts that facilitate the use of pre-trained models in new tasks without fine-tuning (zero-shot transfer). This is closely related to the concept of hard prefix prompts in prompt engineering, where the goal is to improve the interaction with a model to produce better performance on target tasks. However, since the main focus is on visual prompt learning rather than hard prefix prompts specifically, the rating is not a full 10." -layout and task aware instruction prompt for zero-shot document image question answering,gpt-4-1106-preview,7,"The relevance to prompt engineering is moderately high because the paper discusses the use of instruction-tuning language models and emphasizes the understanding of layout via spaces and line breaks, which relates to generating prompts that are layout-aware.
The proposed LATIN-Prompt and LATIN-Tuning are direct applications of modifying prompts to include layout information and improve task performance, which is a form of prompt engineering. However, the paper is more focused on the interaction between layout awareness and zero-shot learning, rather than on hard prefix prompts specifically. Therefore, while the study is relevant to prompting techniques and their optimizations in the context of language models, it does not directly address the systematic review of hard prefix prompts." -navigating prompt complexity for zero-shot classification: a study of large language models in computational social science,gpt-4-1106-preview,9,"The study directly addresses the role of different prompting strategies in the performance of large language models on classification tasks, which is a core component of prompt engineering. The exploration of how prompt complexity and modifications affect model performance is highly relevant to understanding the mechanisms by which prompts can be engineered for better outcomes in natural language processing tasks. Although the study does not specifically mention 'hard prefix prompts,' it does analyze the influence of variations in prompts, which is closely related to the concept of prompt engineering." -injecting commonsense knowledge into prompt learning for zero-shot text classification,gpt-4-1106-preview,8,"The provided abstract is relevant to prompt engineering to a significant extent. The research discusses enhancing prompt learning for NLP tasks in scenarios with limited data by injecting commonsense knowledge from a Knowledge Graph (KG) into Pre-trained Language Models (PLMs). While this does not directly reference 'hard prefix prompts', it does focus on the improvement of prompts (referred to as verbalizer) used in NLP models. Since prompt engineering generally deals with methods for designing and improving prompts to make them more efficient for language models, this research contributes to the wider field of study by proposing a method to enrich prompts with commonsense knowledge for better performance in zero-shot text classification." -knowledge-embedded prompt learning for zero-shot social media text classification,gpt-4-1106-preview,7,"The title and abstract detail a study that focuses on prompt learning which is an aspect of prompt engineering, specifically within the context of zero-shot text classification for social media. While it does not explicitly mention 'hard prefix prompts', it does discuss embedding knowledge within the prompts, which suggests a degree of specificity and deliberation in prompt design that is relevant to the field of prompt engineering. The method seems to enhance the model's performance without large datasets by using prompts effectively, which is a core concern in prompt engineering studies. Therefore, the relevance to prompt engineering is fairly high, but it might be less relevant to a systematic review specifically focused on 'hard prefix prompts'." -zero-shot continuous prompt transfer: generalizing task semantics across language models,gpt-4-1106-preview,9,"The presented study is highly relevant to prompt engineering as it directly addresses an advanced application of prompt tuning—namely, the transferability of continuous prompts between different language models. 
The zero-shot learning aspect and the focus on preserving 'task semantics' when transferring prompts make the research important for the broader understanding of how prompt engineering can be applied across various models. It does not, however, directly address 'hard prefix prompts,' but is still substantially connected to the field of prompt engineering." -spteae: a soft prompt transfer model for zero-shot cross-lingual event argument extraction,gpt-4-1106-preview,8,"The abstract discusses 'SPTEAE', a model which utilizes tunable vectors as prompts, indicating a level of relevancy to prompt engineering. The focus on soft prompts and the mechanism of transferring knowledge from a source language to a target language via prompts are of particular interest to prompt engineering studies, especially in the context of zero-shot cross-lingual tasks. Although the study does not deal with hard prefix prompts directly, the concept of prompt transfer and the use of event type prompts are relevant to the broader field of prompt engineering. The rating is not a full 10 as the specific emphasis of the study is on zero-shot cross-lingual event argument extraction rather than a general exploration of prompt engineering or hard prefix prompts." -prompt-based zero-shot text classification with conceptual knowledge,gpt-4-1106-preview,8,"The paper described seems highly relevant to prompt engineering as it directly discusses the use of prompts for text classification in a zero-shot learning context. The incorporation of conceptual knowledge into prompt-based systems is closely aligned with the study of how different prompt formulations can impact AI performance. While the study's focus on zero-shot learning is slightly broader than prompt engineering alone, its relevance is still significant since prompt engineering is a major component of zero-shot learning approaches." -prompt-ner: zero-shot named entity recognition in astronomy literature via large language models,gpt-4-1106-preview,8,"The study described in the title and abstract is highly relevant to prompt engineering as it proposes and evaluates a prompt-based strategy (Prompt-NER) for enhancing zero-shot Named Entity Recognition (NER) using Large Language Models (LLMs). Although the application is specific to astronomy literature, the methodology and findings can contribute valuable insights to the broader field of prompt engineering, especially in the development and application of prompts for domain-specific zero-shot learning tasks." -"synthesize, prompt and transfer: zero-shot conversational question generation with pre-trained language model",gpt-4-1106-preview,7,"The paper presents a multi-stage knowledge transfer framework (SPARTA) that involves a prompt-based approach for conversational question generation in a zero-shot setting. While it is not explicitly focused on 'hard prefix prompts' in prompt engineering study, the utilization of prompts in the training process to facilitate knowledge transfer from single-turn instances to conversational question generation does relate to prompt engineering. Therefore, it holds relevance for those studying the broader field of prompt engineering, though the exact technique may differ from hard prefix prompting." 
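The SPARTA record above, like several of its neighbors, relies on prompting a pre-trained model to generate rather than classify. A minimal sketch of that prompting style; the instruction wording, the passage, and the FLAN-T5 model choice are all illustrative assumptions, not the SPARTA pipeline:

```python
# Sketch of prompt-based zero-shot conversational question generation.
# The template and model are illustrative, not the paper's actual setup.
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-base")

passage = "The James Webb Space Telescope launched in December 2021."
history = "Q: What is the JWST? A: A large infrared space telescope."

# The prompt packs instruction, grounding passage, and dialogue history into
# one input; no fine-tuning is involved.
prompt = (
    "Ask a follow-up question about the passage, given the conversation.\n"
    f"Passage: {passage}\nConversation: {history}\nFollow-up question:"
)
print(generator(prompt, max_new_tokens=32)[0]["generated_text"])
```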
-weakly supervised few-shot and zero-shot semantic segmentation with mean instance aware prompt learning,gpt-4-1106-preview,8,"The abstract describes a novel approach in semantic segmentation that leverages language-guided segmentation techniques, which is directly related to prompt engineering as it involves learning from class prompts. However, the focus seems to be more on the application of prompt learning for weakly supervised few-shot and zero-shot semantic segmentation rather than a comprehensive study of hard prefix prompts. The relevance is high as prompt engineering is essential to the proposed MIAPNet system, but it is not a systematic review of hard prefix prompts."
-"entities, dates, and languages: zero-shot on historical texts with t0",gpt-4-1106-preview,8,"This abstract is highly relevant to prompt engineering as it directly discusses using prompts to achieve zero-shot Named Entity Recognition with the T0 model on historical texts in various languages. It indicates an exploration of prompt-based methods and their efficacy in a challenging domain, which is central to prompt engineering studies. However, the paper does not focus solely on 'hard prefix prompts' but also addresses broader topics such as zero-shot learning and Named Entity Recognition, hence the rating of 8 instead of a perfect 10."
-pesco: prompt-enhanced self contrastive learning for zero-shot text classification,gpt-4-1106-preview,8,"The abstract describes PESCO, a framework that uses prompts as part of its contrastive learning approach for zero-shot text classification, which is relevant to the field of prompt engineering. Although it does not focus exclusively on 'hard prefix prompts,' the use of prompts to enhance label retrieval is a direct application of prompt engineering techniques. Therefore, the relevance is high, but not perfect since the abstract does not specify 'hard prefix prompts' as its primary subject."
-prompt to be consistent is better than self-consistent? few-shot and zero-shot fact verification with pre-trained language models,gpt-4-1106-preview,7,"The paper's focus on a novel method called ProToCo, which stands for 'Pro' to 'Co'nsistent, involves prompt engineering as it seeks to improve the accuracy of pre-trained language models (PLMs) for fact verification by generating multiple prompt variants and using consistency as a constraint. This method is directly related to prompt engineering as it involves crafting prompts that can effectively query PLMs. However, the paper does not seem to concentrate specifically on 'hard prefix prompts' but on prompting techniques in general to enforce consistency in predictions. Therefore, while it is relevant, it might not directly address the specifics of hard prefix prompt engineering as indicated by your query but still offers significant insights into the broader field of prompt engineering for PLMs."
-hierarchical prompt learning for compositional zero-shot recognition,gpt-4-1106-preview,7,"The paper appears to address the concept of prompt engineering by exploring hierarchical prompt learning within the context of Compositional Zero-Shot Learning (CZSL). While it is not a comprehensive systematic review of hard prefix prompts as such, it does contribute to the field of prompt engineering by proposing a novel approach to learning prompts hierarchically, and is thus relevant. The use of prefixed prompts to improve the performance of a vision-language model like CLIP could be considered a form of prompt engineering. However, the rating is not a full 10 because the study is not specifically a systematic review of hard prefix prompts, which was the exact topic requested."
-anomalyclip: object-agnostic prompt learning for zero-shot anomaly detection,gpt-4-1106-preview,7,"The abstract describes AnomalyCLIP, a novel approach to adapting the CLIP model for zero-shot anomaly detection by learning object-agnostic text prompts. Although the main focus is on improving anomaly detection, the method involves prompt engineering specifically designed to capture generic concepts of normality and abnormality in images, which is relevant to the study of prompt design and effectiveness. The rating is not a full 10 because the primary application is anomaly detection rather than prompt engineering itself, but the method provides valuable insights into prompt engineering within the context of zero-shot learning."
-enhancing zero-shot crypto sentiment with fine-tuned language model and prompt engineering,gpt-4-1106-preview,8,"The abstract provided focuses on the enhancement of sentiment analysis for cryptocurrencies using fine-tuned language models and an investigation into the efficacy of different instruction-based fine-tuning methods. The relevance to prompt engineering lies in the part of the study that examines instruction tuning, which is a form of prompt engineering, as it entails optimizing the instructions given to the model to improve its performance on unseen tasks. Also, it discusses the impact of short and simple versus long and complex instructions on the performance of language models. However, it doesn't explicitly mention the term 'hard prefix prompts,' which suggests that the paper might not delve into that specific area of prompt engineering, instead covering a broader range of instruction-based fine-tuning strategies. Therefore, the relevance is high but not complete, as the connection to 'hard prefix prompts' is not clearly established."
-zero-shot domain adaptation for neural machine translation with retrieved phrase-level prompts,gpt-4-1106-preview,9,"The paper is highly relevant to prompt engineering as it investigates a prompt-based method for domain adaptation in neural machine translation, which is a novel approach within the field of machine learning and specifically relates to the engineering of prompts. It does not focus on 'hard prefix prompts' specifically, but the usage of bilingual phrase-level prompts for domain adaptation suggests a strong connection to the concept of engineering prompts to improve the performance of a language model. The improvement in BLEU scores and translation accuracy further attests to the effectiveness of the prompt-based method, highlighting its potential relevance in the study of prompt engineering."
-"electra is a zero-shot learner, too",gpt-4-1106-preview,8,"The provided abstract primarily relates to prompt engineering as it discusses a novel prompt-based learning method using ELECTRA for zero-shot learning tasks. Prompt engineering is explicitly mentioned as part of the new 'pre-train, prompt, and predict' paradigm. Even though it does not specifically discuss 'hard prefix prompts,' the focus on prompt-based approaches and their effectiveness in improving model performance is highly relevant to studies of prompt design and implementation in NLP models."
-empowering sentence encoders with prompting and label retrieval for zero-shot text classification,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering as it addresses the enhancement of sentence encoders using prompted label candidates. Additionally, the incorporation of retrieval-based methods to refine the label prompts directly relates to the concept of hard prompts in prompt engineering. Although the study does not exclusively focus on 'hard prefix prompts', the general exploration of leveraging prompts in the context of zero-shot text classification closely aligns with the topic of prompt engineering. The retrieval-augmented approach (RaLP) presented in the study exemplifies a practical application of prompt engineering in improving model performance without the need for fine-tuning on specific tasks. The only reason it does not receive a full score is that it doesn't focus solely on 'hard prefix prompts', but instead encompasses a broader range of prompting techniques."
-evaluating prompts across multiple choice tasks in a zero-shot setting,gpt-4-1106-preview,8,"This abstract describes a study focused on the evaluation of natural language prompts across multiple choice tasks in a zero-shot setting, which is highly relevant to the field of prompt engineering. It seeks to understand the impact of prompt qualities on model performance, aligning well with the interests of prompt engineering research. The study’s goal to standardize prompts for tasks they were not initially designed for and the quantitative analysis of prompt attributes is significant for the design of effective prompts. Although the study does not explicitly mention 'hard prefix prompts', it contributes to the broader context of prompt engineering, thus the rating of 8 rather than a perfect 10."
-zerotop: zero-shot task-oriented semantic parsing using large language models,gpt-4-1106-preview,8,"The paper presents a novel application of large language models (LLMs) for zero-shot semantic parsing, which is indirectly related to prompt engineering. Prompt engineering involves crafting inputs to LLMs in a way that optimizes their performance on a given task, and the study's focus on decomposing the semantic parsing problem into a series of QA problems is a form of prompt engineering. They are effectively engineering prompts to elicit specific types of information from an LLM in a structured format. However, the paper is more about the application of LLMs in a zero-shot learning setting than about the systematic study of prompt engineering techniques. Therefore, the relevance is rated high but not perfect."
-go-tuning: improving zero-shot learning abilities of smaller language models,gpt-4-1106-preview,4,"The abstract discusses a method to improve zero-shot learning of smaller language models, which indirectly pertains to prompt engineering, as it may influence the way prompts are designed to interact with these models. However, the focus is on the self-supervised learning approach and the update of language models rather than the systematic study or design of hard prefix prompts specifically."
-"how to prompt llms for text-to-sql: a study in zero-shot, single-domain, and cross-domain settings",gpt-4-1106-preview,9,"The abstract describes a study focused on the effectiveness of different prompt constructions in the context of using large language models for the text-to-SQL task. This directly relates to prompt engineering as it explores how varying prompts influence the performance of language models in specific language processing tasks. The study's investigation into the impact of different prompts and its goal to provide insights for future work is highly relevant to the field of prompt engineering, although it is more specialized towards text-to-SQL rather than hard prefix prompts specifically."
-dialogue state tracking with zero-shot and few-shot learning for generalization: a review,gpt-4-1106-preview,7,"The paper's abstract suggests that one of the categories reviewed in the study is 'DST using a prompt,' which directly relates to prompt engineering as it likely involves the use of prompts to improve the performance of dialogue state tracking models. The relevance to prompt engineering is significant since the study appears to include a systematic review of this method among others. However, the abstract does not focus solely on 'hard prefix prompts' as specified in the initial query, indicating that while relevant, it may not cover the full scope of 'hard prefix prompts.' Therefore, the rating is not a full 10."
-malm: mixing augmented language modeling for zero-shot machine translation,gpt-4-1106-preview,7,"The abstract discusses the usage of large pre-trained language models and their effectiveness in avoiding off-target language errors for zero-shot machine translation when conditioned with prompts. This suggests that the study delves into prompt engineering to some extent, particularly with regard to its influence on language model behavior in translation tasks. However, the core focus seems to be on zero-shot translation and multilingual model performance rather than exclusively on prompt engineering, so the relevance is significant but not complete."
-zero-shot domain-sensitive speech recognition with prompt-conditioning fine-tuning,gpt-4-1106-preview,8,"The study described is highly relevant to prompt engineering as it involves fine-tuning a pre-trained model using text prompts to achieve domain sensitivity and adaptation in speech recognition tasks. Such conditioning on prompts is a direct application of prompt engineering principles to improve model performance on specific domains, showcased by the significant Word Error Rate reductions. However, it is focused specifically on speech recognition and does not cover a broader spectrum of 'hard prefix prompts', which might include other areas beyond speech recognition, hence the rating is not a full 10."
-kbpt: knowledge-based prompt tuning for zero-shot relation triplet extraction,gpt-4-1106-preview,7,"Despite the absence of an abstract or TLDR, the title indicates the study is related to 'knowledge-based prompt tuning,' which falls under the broader scope of prompt engineering. The application of prompt tuning for zero-shot relation triplet extraction suggests an advanced use of prompts to improve model performance without extra training data which is relevant to prompt engineering. However, without additional information on the study's methodology or results, a full assessment of relevance cannot be completed, thus the rating cannot be maximized."
-zero-shot recommendation as language modeling,gpt-4-1106-preview,6,"The abstract indicates a recommendation system that operates using pre-trained language models and unstructured text corpora, which is tangentially related to prompt engineering as it involves using language models in an innovative application. However, the focus on recommendation systems and matrix factorization suggests that the study does not directly address the creation or manipulation of prompts (i.e., the 'hard prefix prompts' mentioned in the original prompt). Therefore, the relevance is moderate because while it deals with language models, it may not directly contribute to our understanding of prompt engineering in the context of a comprehensive systematic review."
-zero-shot learning by generating task-specific adapters,gpt-4-1106-preview,7,"The relevance to prompt engineering is fairly high as the abstract describes a novel approach to zero-shot learning that includes utilizing task descriptions as prompts, which could be seen as related to 'hard prefix prompts' in the context of designing inputs that guide the model's predictions. The study focuses on improving the model's ability to generalize to new tasks through a meta-learning framework, which aligns with the concept of improving the effectiveness of prompts in a zero-shot learning setting. However, it does not explicitly address 'hard prefix prompts' in any systematic review manner, which would be necessary for a 10 rating. Nonetheless, the connection to prompt engineering is clear enough to warrant a relatively high rating."
-domain-aware continual zero-shot learning,gpt-4-1106-preview,7,"The abstract indicates that the study involves a 'class-wise learnable prompt' which is relevant to prompt engineering as it relates to the generation of text representations for facilitating zero-shot learning. However, the focus of the study seems to be more on addressing challenges of domain awareness and continual learning in the context of zero-shot learning, rather than on hard prefix prompts specifically. Therefore, while it is relevant due to its inclusion of a learnable prompt component for class representation, it does not appear to be a comprehensive systematic review or focus directly on hard prefix prompts in prompt engineering, hence the rating of 7 instead of a full 10."
-sam.md: zero-shot medical image segmentation capabilities of the segment anything model,gpt-4-1106-preview,5,"The title and abstract provided discuss a model that utilizes prompting (SAM) for image segmentation tasks, which is relevant to the concept of prompt engineering as it involves the use of prompts to direct the behavior of AI models. However, the focus is mainly on the zero-shot learning capabilities of SAM in medical image segmentation, rather than a systematic review of 'hard prefix prompts' in a broader context. The relevance to prompt engineering is moderate because it showcases an application of prompts in a specialized domain but does not address prompt engineering study in a comprehensive manner."
-zero-shot clinical entity recognition using chatgpt,gpt-4-1106-preview,8,"The abstract indicates that the study investigates the use of different prompt strategies for enhancing the performance of ChatGPT in a zero-shot clinical entity recognition task. It directly tackles prompt engineering by comparing the effectiveness of prompts in a specialised application (clinical NER), which is highly relevant to the study of how prompts affect AI behavior. However, it doesn't specify that it focuses on 'hard prefix prompts,' which would be essential for a 'comprehensive systematic review on hard prefix prompts,' hence not a perfect score."
-a preliminary evaluation of chatgpt for zero-shot dialogue understanding,gpt-4-1106-preview,7,"The paper's relevance to prompt engineering is notable due to the exploration of ChatGPT's capabilities in zero-shot dialogue understanding tasks, which inherently involves crafting prompts that can elicit the desired outcomes without task-specific training. The mention of 'multi-turn interactive prompt' within the dialogue state tracking (DST) task highlights an aspect of prompt engineering. Understanding how ChatGPT responds to different kinds of prompts, especially in zero-shot scenarios, is crucial for developing better prompt-engineering strategies. However, the study does not focus primarily on the 'hard prefix prompts' which is specific to the systematic review in question, hence the rating is not a full 10."
-"clip for all things zero-shot sketch-based image retrieval, fine-grained or not",gpt-4-1106-preview,7,"The abstract discusses the application of prompt learning specifically tailored to the sketch community and its impact on zero-shot sketch-based image retrieval. While it does not explicitly focus on 'hard prefix prompts,' it does mention the implementation of a prompt learning setup, and designing sketch-specific prompts which are relevant to prompt engineering. The substantial performance gains reported indicate the relevance and effectiveness of prompt tuning in this domain. However, the focus seems to be more on the application of prompts in conjunction with the CLIP model rather than a comprehensive study of prompts engineering itself, hence the rating is not a perfect 10."
-rapgen: an approach for fixing code inefficiencies in zero-shot,gpt-4-1106-preview,8,"The abstract describes a method called Retrieval-Augmented Prompt Generation (RAPGen) that involves the construction and utilization of prompts to fix performance issues in code. Although it specifically targets performance bugs and uses a pre-constructed knowledge-base intended for this purpose, the basic principles of constructing and using prompts for a language model are at the core of both tasks. Therefore, this paper is highly relevant to the study of prompt engineering because it explores a novel, prompt-based method to interact with a language model to solve a specific problem."
-zero-shot cross-lingual summarization via large language models,gpt-4-1106-preview,7,"The reported study is directly related to prompt engineering as it involves using prompts to guide Large Language Models in the task of zero-shot cross-lingual summarization. The relevance is high because it assesses how well prompts can improve the performance of LLMs in a complex task that combines translation and summarization. Nonetheless, the study's primary focus is on cross-lingual summarization rather than on the depth of prompt engineering mechanisms like hard prefix prompts, which reduces the relevance rating slightly."
-enabling calibration in the zero-shot inference of large vision-language models,gpt-4-1106-preview,4,"The abstract presents a study focused on the calibration of vision-language models, particularly CLIP, in the context of zero-shot inference. While the research addresses aspects such as prompt choice, its core contribution lies in proposing a modified temperature scaling method for calibrating the models rather than in-depth analysis or methodology development for 'prompt engineering' itself. The mention of prompt as one of the variables does increase the relevance to 'prompt engineering,' yet since it is not the main focus of the study, the relevance is moderate."
-clipn for zero-shot ood detection: teaching clip to say no,gpt-4-1106-preview,8,"The abstract reveals that the study involves designing a 'learnable no prompt' and a 'no text encoder' to capture negation semantics within images, which is directly related to prompt engineering as it focuses on developing prompts that enable a language-image model to understand and respond with negation, a nuanced language feature. This development aligns with engineering prompts that can enhance model performance in specific tasks, such as OOD detection in this case. Although the emphasis is on OOD detection rather than on prompt engineering itself, the methodology is highly relevant to the study of prompt engineering techniques."
-zero-shot information extraction for clinical meta-analysis using large language models,gpt-4-1106-preview,8,"The abstract describes a study that employs large language models for zero-shot prompt-based information extraction in the medical field, which is directly related to the concept of prompt engineering. The investigation of zero-shot performance implicates the design and structuring of prompts to elicit accurate information from language models without any training examples, which is a subset of prompt engineering. While the study focuses on a specialized application in clinical meta-analysis rather than a broad systematic review of hard prefix prompts, it does contribute to the overall knowledge of prompt engineering effectiveness and challenges. Therefore, the relevance is high, but not absolute given the specialized context."
-vision-language models are zero-shot reward models for reinforcement learning,gpt-4-1106-preview,6,"The abstract describes the use of vision-language models (VLMs) as zero-shot reward models in reinforcement learning, which includes a component of prompt engineering by providing text prompts to specify tasks. Although the main focus is on reinforcement learning and the efficacy of VLMs in this context, the mention of using 'minimal prompt engineering' indicates that there is a relevance to the study of crafting prompts. However, the primary emphasis is not on the systematic review of 'hard prefix prompts' or the intricacies of prompt engineering methods, which would be required for a higher relevance score."
-zero-shot text classification via self-supervised tuning,gpt-4-1106-preview,6,"The abstract discusses a novel approach to zero-shot text classification using self-supervised learning, which includes an alternative prompting method where the model learns to predict the first sentence of a paragraph. This is relevant to prompt engineering as it touches on the use of prompts to improve language model performance without relying on large-scale annotated data. However, the focus is more on the self-supervised learning aspect and the specific learning objective, rather than a deep dive into prompt engineering or hard prefix prompts specifically. Therefore, the relevance is moderate."
-harnessing the zero-shot power of instruction-tuned large language model in end-to-end speech recognition,gpt-4-1106-preview,6,"The abstract deals with the utilization of an instruction-tuned large language model within the context of ASR, which relates to prompt engineering in the sense that precise instructions are used to guide the LLM. However, the focus is more on the application of LLMs for improving ASR rather than on the study or optimization of the prompts themselves (i.e., hard prefix prompts or prompt engineering techniques). The relevance is moderate because it showcases an implementation of prompt-instructed LLMs, but it does not directly address a systematic review or study on prompt engineering."
-align your prompts: test-time prompting with distribution alignment for zero-shot generalization,gpt-4-1106-preview,9,"The provided abstract is highly relevant to prompt engineering study, especially in the context of zero-shot generalization and prompt tuning to align feature distributions between source and test data, which are key components of prompt engineering. The paper discusses a specific method of prompt tuning that takes distribution shift into account, a topic that is directly related to the engineering and optimization of prompts for better performance in unseen domains. The only reason it doesn't receive a full 10 is that it doesn't specifically mention 'hard prefix prompts', which was the specific focus mentioned in the initial prompt, but it still seems to represent a significant contribution to the field of prompt engineering broadly."
-towards realistic zero-shot classification via self structural semantic alignment,gpt-4-1106-preview,7,"The relevance of the text to prompt engineering is moderate to high. The paper discusses a Self Structural Semantic Alignment (S^3A) framework that involves generating discriminative prompts using large language models, which is directly related to the field of prompt engineering. The fact that the S^3A framework includes a component where prompts are generated to discern confusing candidates demonstrates the application of prompt engineering in the paper. However, the overarching goal of the paper is zero-shot classification using Vision Language Models, and prompt engineering is only one aspect of the complex methodology being proposed. The rating is not higher because the main focus is not solely on prompt engineering; instead, it's a part of a larger framework designed for a specific application in machine learning."
-instruction distillation makes large language models efficient zero-shot rankers,gpt-4-1106-preview,8,"The abstract discusses the instruction distillation method as a means of improving efficiency and performance in zero-shot relevance ranking by LLMs, which is directly related to prompt engineering. This research tackles the issues of complexity and inefficiency in typical prompt-based ranking methods by simplifying instructions. However, it does not focus solely on 'hard prefix prompts,' but rather on instruction distillation for overall efficiency and performance enhancement in a broader context. Thus, the relevance is high but not entirely focused on the specific subtopic of hard prefix prompts."
-locally differentially private document generation using zero shot prompting,gpt-4-1106-preview,8,"The abstract discusses the use of 'zero-shot prompting' with pretrained language models to address privacy concerns, which is relevant to prompt engineering. The introduction of DP-Prompt as a mechanism relies on the strategic use of prompts to enhance privacy while maintaining utility. Although the focus is more on privacy preservation than on prompt engineering in itself, the application of zero-shot prompting techniques is at the core of the study, earning a high relevance rating. However, it isn't exclusively focused on 'hard prefix prompts' or a comprehensive systematic review of such prompts, therefore the rating is not a full 10."
-zero-shot relation triple extraction with prompts for low-resource languages,gpt-4-1106-preview,8,"The study directly deals with prompt engineering as it involves creating and using prompts to guide a language model for relation extraction. The work focuses on zero-shot learning for low-resource languages, specifically using prompts to generate structured texts that facilitate the extraction of relation triplets. The structured relation prompt template mentioned also indicates a direct manipulation of prompts to improve model performance. However, the use of the term 'hard prefix prompts' is not specifically mentioned, so the study may not align perfectly with a systematic review on hard prefix prompts but still is highly relevant to the field of prompt engineering."
-instance needs more care: rewriting prompts for instances yields better zero-shot performance,gpt-4-1106-preview,9,"The abstract describes a study that directly involves prompt engineering, focusing on improving large language model (LLM) performance in zero-shot tasks by customizing prompts for individual test instances. The approach aligns closely with prompt engineering as it involves the strategic rewriting of prompts to enhance model understanding and performance, which is central to the study of prompt engineering. The high relevance is due to the proposed method's focus on the construction and optimization of prompts for better task execution by LLMs, although the study seems to be more practical and application-oriented rather than theoretical, as implied by the term 'systematic review' in the original query."
-zyn: zero-shot reward models with yes-no questions,gpt-4-1106-preview,8,"The abstract describes a method of using yes-no questions as prompts to guide the behavior of a language model without additional labeled data, which is highly relevant to prompt engineering. It addresses the use of prompts to achieve zero-shot learning and align a model's output with user preferences, which are core areas of interest in the study of prompts. However, it is not focused specifically on 'hard prefixes,' but on a broader application of prompts, so the rating is not a full 10."
-random word data augmentation with clip for zero-shot anomaly detection,gpt-4-1106-preview,8,"The paper presents a method that uses CLIP, a visual-language model, and involves prompt-guided classification which is clearly related to prompt engineering. Although the focus is on zero-shot anomaly detection and data augmentation, the use of prompts to guide the CLIP model's text encoder for generating data brings it within the domain of prompt engineering studies. The prompts are crucial for the generation of text embeddings which are subsequently used to train the anomaly detection model, significantly impacting the performance of the system. The paper does not focus on 'hard prefix prompts' specifically, so it may not align completely with a comprehensive review of that exact topic, but it certainly provides relevant information about prompt usage in the context of AI-powered anomaly detection."
-supplementary - i2mvformer: large language model generated multi-view document supervision for zero-shot image classification,gpt-4-1106-preview,7,"The abstract discusses the use of a large language model (LLM) for prompting strategy in the context of zero-shot image classification. Although it does not directly reference 'hard prefix prompts' or a 'systematic review', the mention of LLM prompting strategies and the analysis of their robustness is relevant to the broader field of prompt engineering. The abstract suggests an investigation into the effectiveness of different prompts, which is a central concern of prompt engineering studies. Therefore, the relevance rating is moderately high, as the content could provide valuable insights for those studying how prompts can affect the performance of AI models, even though it is not a direct match for a study focused specifically on 'hard prefix prompts'."
-a setwise approach for effective and highly efficient zero-shot ranking with large language models,gpt-4-1106-preview,8,"The abstract details a study on zero-shot ranking with Large Language Models (LLMs) through the use of different prompting approaches (Pointwise, Pairwise, Listwise, and a novel Setwise approach). Although the study does not specifically mention 'hard prefix prompts,' it does deeply engage with prompt engineering for zero-shot tasks in LLMs. Since prompt engineering is essential in operationalizing these models for specific tasks, and the study clearly contributes to understanding and innovating in this field, it has high relevance to prompt engineering study. However, it does not directly address 'hard prefix prompts,' hence the rating is not a perfect 10."
-reducing negative effects of the biases of language models in zero-shot setting,gpt-4-1106-preview,7,"The paper is relevant to prompt engineering as it addresses the issue of biases in language models, particularly GPTs, which is a key concern when engineering prompts for zero-shot settings. By proposing a method to reduce bias through the use of probing samples and a Calibration Adapter, the study is relevant to the prompt engineering field as it contributes to the development of more fair and balanced prompting strategies. However, the primary focus seems to be on model calibration rather than on designing or structuring prompts, hence the rating is not a perfect 10."
-model-generated pretraining signals improves zero-shot generalization of text-to-text transformers,gpt-4-1106-preview,7,"The paper is relevant to prompt engineering, particularly in the exploration of training strategies that could impact how effectively models respond to prompts. Although the main focus is on zero-shot generalization of text-to-text Transformers and pretraining strategies (e.g., using model-generated signals), the fact that it includes prompt-finetuning on a mixture of NLP tasks indicates relevance. The creation of METRO-T0, which competes with state-of-the-art models on prompted NLP benchmarks, underscores the potential impact of pretraining on prompt-based tasks. However, the paper does not seem to focus specifically on 'hard prefix prompts' but rather on a broader approach to pretraining and finetuning."
-interaction-aware prompting for zero-shot spatio-temporal action detection,gpt-4-1106-preview,6,"The study describes the use of prompting as a mechanism to obtain more appropriate text features for zero-shot spatio-temporal action detection, which falls under the broader scope of prompt engineering. However, the context is very specialized and focuses more on the application to a specific domain (video processing and action detection) rather than the study of hard prefix prompts in general. The relevance is moderate because it deals with an application of prompts in a machine learning system, but it does not directly address a 'comprehensive systematic review on hard prefix prompts' as the original query specifies."
-beyond yes and no: improving zero-shot llm rankers via scoring fine-grained relevance labels,gpt-4-1106-preview,9,"The paper discusses improving zero-shot text rankers by refining the prompting mechanism used in large language models (LLMs), specifically by introducing fine-grained relevance labels instead of binary ones. This is highly relevant to prompt engineering as it directly involves optimizing the way prompts are structured to achieve better performance in text ranking tasks. The incorporation of more nuanced labels is a method of prompt engineering aimed at enhancing the model's capability to assess relevance. The study's focus on prompting strategies and its impact on the model's output makes it pertinent to the field of prompt engineering study, hence the high score."
-exploring grounding potential of vqa-oriented gpt-4v for zero-shot anomaly detection,gpt-4-1106-preview,7,"The abstract details a study focused on the application of a Large Multimodal Model (GPT-4V) for anomaly detection using the Visual Question Answering paradigm, which includes an aspect of 'Prompt Designing' as one component of the proposed framework. This directly relates to prompt engineering as it involves designing prompts to effectively interact with AI models. However, the study's primary focus seems to be on the application of the model to anomaly detection rather than the intricacies or methodologies behind prompt engineering. Therefore, while prompt engineering is a component of the study, it is not the central theme, which is why the relevance is rated as a 7 rather than a full 10."
-zero-shot learning for named entity recognition in software specification documents,gpt-4-1106-preview,8,"The abstract discusses the application of zero-shot learning to Named Entity Recognition (NER) in the context of software specification documents. One of the two zero-shot approaches mentioned employs prompt engineering, achieving a high accuracy of 93%. The relevance to prompt engineering is high because the study specifically involves the use of prompt engineering techniques in an NER task, which is a significant part of language model application. However, the relevance is not rated as a full 10 because the abstract also describes a second approach that diverts from prompt engineering and is based on transforming the problem into a question-answering task. Therefore, while prompt engineering is a central theme, it is not the exclusive focus of the study."
-zero-textcap: zero-shot framework for text-based image captioning,gpt-4-1106-preview,4,"The abstract discusses the Zero-TextCap model for text-based image captioning. It touches on prompt engineering indirectly by mentioning the generation of candidate sentences from the prompt 'Image of' and the refinement process for improving caption quality and diversity. However, the main focus is on image captioning and OCR technology, rather than prompt engineering. The relevance to prompt engineering study is moderate because it deals with a specific use of prompts within a different field of study, i.e., text-based image captioning. The study is more relevant to the fields of computer vision and natural language processing than to the study of prompt engineering in general."
-are soft prompts good zero-shot learners for speech recognition?,gpt-4-1106-preview,6,"The abstract discusses 'soft prompts' in the context of automatic speech recognition and zero-shot learning, which is related to the field of prompt engineering, as it involves the manipulation of prompts to enhance model performance. However, the prompt specifically asks about 'hard prefix prompts,' and this study focuses on 'soft prompts,' not 'hard' ones. Therefore, the study is relevant to the broader field of prompt engineering but not directly relevant to the specified subset of 'hard prefix prompts.' The relevance rating acknowledges the connection to prompt engineering while also recognizing the divergence from the specified topic of 'hard prefix prompts'."
-blended-nerf: zero-shot object generation and blending in existing neural radiance fields,gpt-4-1106-preview,5,"The presented work, Blended-NeRF, involves some aspects of prompt engineering, such as the use of text prompts to guide the editing of 3D scenes. This suggests a connection to natural language processing and the translation of text instructions to visual modifications. However, the focus seems to be more on the application of 3D neural radiance fields and the integration of new objects in existing scenes rather than on the detailed study of prompt engineering itself. Therefore, the relevance to prompt engineering as a primary study objective appears to be moderate."
-global constraints with prompting for zero-shot event argument classification,gpt-4-1106-preview,9,"The abstract describes a novel approach that leverages prompting techniques, specifically prefix prompts, in the context of event argument classification which is highly relevant to prompt engineering. The study's focus on how prompts can be used to improve performance in a zero-shot learning scenario indicates a significant contribution to the area of natural language processing related to prompt engineering. Although the work is not solely about hard prefix prompts in general, the application and development of new prompt templates for a specific task align closely with prompt engineering studies. The only reason it does not receive a full 10 is that it does not address a 'comprehensive systematic review' on prompts but rather presents a specific applied use-case of prompt engineering."
-large language models are frame-level directors for zero-shot text-to-video generation,gpt-4-1106-preview,7,"The provided abstract discusses the use of large language models (LLMs) to generate frame-by-frame descriptions for text-to-video generation, which is relevant to prompt engineering. While the primary focus seems to be on video generation, the role of LLMs in interpreting and directing user prompts aligns with the study of designing and improving prompts to achieve specific outcomes. The framework's ability to translate user prompts into separate and temporally consistent frame prompts demonstrates an application of prompt engineering techniques. Therefore, the approach of dissecting abstract prompts into frame-level instructions can be viewed as a form of prompt engineering. The rating is not a full 10 because the abstract does not explicitly focus on the study of prompt engineering in general but rather its application within a specific context of video generation."
-sc vall-e: style-controllable zero-shot text to speech synthesizer,gpt-4-1106-preview,7,"The title of the study 'SC VALL-E: Style-Controllable Zero-Shot Text to Speech Synthesizer' indicates a research focus on text to speech (TTS) synthesis with style control, which is tangentially relevant to prompt engineering. Although prompt engineering typically involves refining input prompts to achieve better performance in language models, the abstract describes a system that takes text and prompt audio as input to control speech attributes like emotion and pitch. This relates to a form of prompt engineering where the prompt is not just textual but also auditory. The mention of 'tokens in the style embedding matrix' also suggests a relationship with prompt engineering as it implies the manipulation of specific elements to guide the model's output. However, the primary focus on TTS synthesis and lack of explicit discussion on prompt engineering in language models warrants a rating that isn't at the highest relevance."
-language models as zero-shot trajectory generators,gpt-4-1106-preview,4,"While the abstract discusses the usage of Large Language Models for trajectory generation in robotics, which would require careful crafting of prompts to interact with the model effectively, the focus on 'hard prefix prompts' in the context of a comprehensive systematic review is not directly addressed. Although the principles of prompt engineering could be applied to formulate the inputs for GPT-4 in this study, the abstract does not specifically mention or concentrate on 'hard prefix prompts', nor does it suggest a systematic review of such prompts. Therefore, the relevance is moderate as the concept of prompting is involved, but not specific to the requested area of study."
-zeroprompt: streaming acoustic encoders are zero-shot masked lms,gpt-4-1106-preview,5,"The study presents a technique called ZeroPrompt that is applied to streaming acoustic encoders, which is tangentially relevant to 'prompt engineering' since it involves what can be described as a prompting strategy. However, the core of the study focuses on streaming ASR (Automatic Speech Recognition) models and improving their latency, which is not directly related to the systematic review of 'hard prefix prompts' in the traditional sense of prompt engineering for language models. Therefore, the relevance is moderate as it deals with prompts in a different context than what 'prompt engineering study' might typically imply, which is often associated with text-based language model prompting."
-zero-shot text-driven physically interpretable face editing,gpt-4-1106-preview,4,"The paper discusses text-driven face editing and involves the use of text prompts to guide the image editing process. Its relevance to prompt engineering is in the use of the CLIP model which involves understanding and correlating text descriptions to visual content. However, the primary focus of the paper seems to be on face editing using a novel method rather than on the study or improvement of prompt engineering techniques themselves. Therefore, it has some relevance due to the application of text prompts, but it is not a direct study on prompt engineering."
-applenet: visual attention parameterized prompt learning for few-shot remote sensing image generalization using clip,gpt-4-1106-preview,7,"The provided abstract demonstrates relevance to prompt engineering as it discusses the development of a novel approach to prompt learning, which is central to adapting language models to specific tasks. The Visual Attention Parameterized Prompts Learning Network (APPLeNet) incorporates visual tokens combined with textual tokens, indicating that it deals with the intersection of language (through prompts) and vision, which is a component of prompt engineering. Additionally, the TLDR section reinforces the focus on prompt learning strategies. However, the application is specifically for remote sensing image generalization, which is a niche area within the broader scope of prompt engineering studies. Hence, the rating is not a full 10, because while it does contribute to the field, it does so in a specific context rather than addressing hard prefix prompts in a broad sense."
-schema-aware reference as prompt improves data-efficient relational triple and event extraction,gpt-4-1106-preview,9,"The abstract presents research on a novel approach for prompt-based information extraction using pre-trained language models, which directly relates to the study of engineering prompts for better performance in language understanding tasks. As the study introduces a schema-aware mechanism to improve the efficiency of prompts by leveraging global training data and knowledge, it is highly relevant to the concept of 'hard prefix prompts' in the prompt engineering field. The approach is designed to overcome the semantic gap and representation learning limitations, which are critical considerations in prompt engineering. The only reason it does not receive a 10 is because the abstract does not explicitly mention 'hard prefix prompts', but the content is otherwise highly relevant."
-prompt combines paraphrase: teaching pre-trained models to understand rare biomedical words,gpt-4-1106-preview,8,"The abstract describes an approach to prompt-based fine-tuning tailored towards the biomedical domain, which is relevant to the field of prompt engineering. It focuses on helping models learn and understand rare biomedical terminology, a challenge unique to this specialized area. The approach is directly related to improving the capabilities of pre-trained models with prompt engineering in a specific and practical instance, which can be beneficial for the broader study of prompts in different contexts. However, the abstract does not discuss 'hard prefix prompts' specifically, which may slightly reduce its relevance to the precise topic of a systematic review on such prompts. Therefore, while it is highly relevant to prompt engineering overall, it is not a perfect match for the subject of 'hard prefix prompts', which is why the rating is not a perfect 10."
-domain prompt learning for efficiently adapting clip to unseen domains,gpt-4-1106-preview,9,"The abstract describes Domain Prompt Learning (DPL) as a novel approach for domain inference through the generation of conditional prompts. This is highly relevant to prompt engineering as it explicitly deals with the creation of prompts to improve the performance of a foundation model in domain generalization. The approach's focus on prompt generation and its impact on model accuracy makes it a significant contribution to the field of prompt engineering."
-amortized prompt: lightweight fine-tuning for clip in domain generalization,gpt-4-1106-preview,7,"The abstract discusses the use of prompt generation as a novel approach for domain inference with an emphasis on improving domain generalization in image classification using the CLIP model. This is relevant to prompt engineering, as it describes developing a method (Amortized Prompt) related to creating and utilizing prompts to enhance model performance without fine-tuning. Although the study appears to focus more broadly on domain generalization and does not specifically address 'hard prefix prompts,' the concept of prompt generation within this context is still within the domain of prompt engineering, hence the rating of 7. The absence of a direct mention of 'hard prefix prompts' means it is not entirely focused on that specific aspect of prompt engineering, thus not receiving a full score."
-feature normalization and cartography-based demonstrations for prompt-based fine-tuning on emotion-related tasks,gpt-4-1106-preview,8,"The relevance to prompt engineering is high because the paper discusses a novel approach to prompt-based fine-tuning, which is a method within prompt engineering. It focuses on improving the performance of language models on NLP tasks through feature normalization and the introduction of training dynamics to select informative samples for prompts. The paper's central theme revolves around optimizing the input context for prompt-based models, which is directly relevant to prompt engineering. However, it does not specifically address 'hard prefix prompts,' but rather the broader concept of prompt-based fine-tuning. Hence the reasoning for not giving a full score of 10."
-understanding prompt engineering may not require rethinking generalization,gpt-4-1106-preview,8,"The provided abstract directly involves the study of prompt engineering within the context of zero-shot learning and vision-language models. It discusses the impact of manual prompt crafting on generalization performance and how classical PAC-Bayes bounds can explain the success of such methods. Although the specific term 'hard prefix prompts' is not mentioned, the abstract's focus on the structural aspects of prompt design and their implications for model performance is highly relevant to the field of prompt engineering. The TLDR further emphasizes the significance of the discrete nature of prompts and language model priors in maintaining tight generalization bounds, which are central considerations in prompt engineering studies."
-few shot learning approaches to essay scoring,gpt-4-1106-preview,8,"The abstract provided discusses few-shot learning methods, specifically the use of a prompt-based few-shot learning method (PET) in the context of automated essay scoring. Although the primary focus is on AES, the implementation of prompt-based learning is highly relevant to the study of prompt engineering, as PET is a methodology that relies on engineering prompts to improve model performance with limited training data. Therefore, the study is substantially relevant to prompt engineering, specifically within the field of NLP and machine learning. The deduction in the rating arises because the prompt engineering for AES may not cover the entire scope of 'hard prefix prompts' but is nevertheless significant in demonstrating the application and impact of prompt engineering techniques."
-byoc: personalized few-shot classification with co-authored class descriptions,gpt-4-1106-preview,8,"The study presents a novel approach to few-shot text classification with the involvement of an LLM and interaction with users to generate class descriptions. This is highly relevant to prompt engineering, as the method relies on creating effective prompts that enable the LLM to categorize texts with minimal training data. Although the research focuses specifically on text classification and user interaction for class description generation, rather than hard prefix prompts exclusively, the process of prompt construction and its role in model performance is central to the field of prompt engineering. Therefore, the study contributes valuable insights to prompt engineering by exploring interactive ways to enhance LLM understanding and classification accuracy."
-prompt sketching for large language models,gpt-4-1106-preview,9,"The provided abstract for 'prompt sketching for large language models' discusses an innovative prompting strategy that involves generating a template with variables that the LLM predicts values for, which directly relates to engineering better prompts for LLMs. The approach aims to address issues with current prompting strategies that result in disconnected and verbose responses by proposing a more structured interaction with the model via templated prompts. The abstract mentions the improvement in performance on various benchmarking tasks, indicating a substantial contribution to the study of prompt engineering. The paper's focus on optimizing the generation process and providing control over the model's output through a novel prompting paradigm makes it highly relevant to the field. It is rated slightly less than 10 because the prompt specifically asks for a review on 'hard prefix prompts', and it is not explicitly clear from this abstract whether prompt sketching falls into that category. However, the general relevance to prompt engineering study is evident."
-strength in numbers: estimating confidence of large language models by prompt agreement,gpt-4-1106-preview,9,"The paper discusses a method to improve confidence estimates for language model predictions by using a variety of prompts, which is highly relevant to the field of prompt engineering. The study focuses on the generation of multiple prompts to enhance the reliability of large language model outputs, which directly pertains to the design and usage of prompt strategies to elicit more accurate responses from these models. The relevance is not a full 10 only because it does not specifically mention 'hard prefix prompts' but rather the broader concept of improving confidence estimation through the use of diverse prompts."
-the language of prompting: what linguistic properties make a prompt successful?,gpt-4-1106-preview,9,"The described study directly relates to prompt engineering because it investigates how linguistic properties of prompts affect the performance of language model tasks. It focuses on the nuances of prompt design, which is a core aspect of prompt engineering, aiming to understand what makes a prompt effective. This is highly relevant as it contributes to the development of guidelines and standards for prompt creation, essential for refining the prompt engineering process. The only reason it does not receive a perfect score is that it does not specify 'hard prefix prompts' but prompts in general, which could include a variety of types beyond the hard prefix category."
-hard sample aware prompt-tuning,gpt-4-1106-preview,9,"The provided abstract describes research directly related to prompt-tuning, specifically addressing challenges in differentiating between informative hard samples and misleading samples during few-shot learning for NLP tasks. The relevance to prompt engineering is high, considering that the study introduces a 'Hard Sample Aware Prompt-Tuning framework (HardPT)' to improve the effectiveness of prompts in machine learning models by using advanced techniques such as reinforcement learning and contrastive learning. These methodologies directly contribute to the field of prompt engineering by enhancing the model's ability to learn from limited data. The only reason for not giving a perfect score is the focus on 'hard sample' differentiation may be considered a specific subset within the broader domain of prompt engineering."
-multi-view vision-prompt fusion network: can 2d pre-trained model boost 3d point cloud data-scarce learning?,gpt-4-1106-preview,4,"The abstract discusses the fusion of 2D pre-trained models with 3D point cloud data through a novel network (MvNet) for few-shot 3D classification, which includes aspects of prompt learning inspired by NLP. Although the application is primarily for 3D classification in computer vision and not for prompt engineering in a textual context, the inspiration from prompt learning and the mention of using prompts to describe prior knowledge for image models suggests some relevance to the topic of prompt engineering study. However, since the primary focus is not on textual or linguistic prompts but on prompts that bridge 3D and 2D model data, the relevance is moderate but not high."
-voucher abuse detection with prompt-based fine-tuning on graph neural networks,gpt-4-1106-preview,8,"The study presents a novel application of prompt-based fine-tuning, albeit in the domain of graph neural networks for voucher abuse detection rather than natural language processing. The focus on designing a prompting function to better align the pre-training and fine-tuning tasks shows relevance to prompt engineering, as it involves creating effective prompts to improve machine learning models’ performance. The improvement in performance with this method demonstrates the potential effectiveness of prompt engineering strategies in various domains, which is relevant for the broader field of study. However, the specificity to graph neural networks slightly reduces its direct applicability to studies focused exclusively on text-based prompt engineering."
-modal interaction-enhanced prompt learning by transformer decoder for vision-language models,gpt-4-1106-preview,9,"The title suggests that the study introduces a prompt tuning method specifically designed for improving the performance of transformer decoders in vision-language models. This is highly relevant to prompt engineering as it deals with enhancing model interaction with prompts. Although the term 'hard prefix prompts' from the original query is not explicitly mentioned, the nature of the study seems to be closely related to developing and enhancing prompting strategies. Hence, the relevance rating is high. The abstract being 'nan' does not provide additional information, but the TLDR suggests that the method being proposed has shown improved performance over a baseline model, indicating that this research contributes valuable insights to the field of prompt engineering."
-stabilized in-context learning with pre-trained language models for few shot dialogue state tracking,gpt-4-1106-preview,8,"The study addresses designing prompts for complex tasks like dialogue state tracking (DST) and discusses techniques to stabilize in-context learning performance with pre-trained language models. As prompt engineering involves both the creation of effective prompts and the stability of model performance when using those prompts, this study is highly relevant to the field. However, it specifically focuses on few-shot learning techniques and dialogue tasks, which may not fully cover the broad spectrum of prompt engineering topics such as hard prefix prompts. Thus, it does not merit a perfect score, but it is still significantly pertinent."
-does gpt-3 generate empathetic dialogues? a novel in-context example selection method and automatic evaluation metric for empathetic dialogue generation,gpt-4-1106-preview,8,"The provided abstract directly relates to prompt engineering as it discusses the exploration of GPT-3's ability to generate empathetic dialogues through prompt-based in-context learning, which is a part of the field of prompt engineering. The study's investigation of novel in-context example selection methods and the introduction of a new automatic evaluation metric are also relevant to the development and optimization of prompts, which are essential for fine-tuning the performance of language models in specific tasks. Although it doesn't mention hard prefix prompts specifically, the focus on in-context learning and prompt-based methods makes it highly relevant to the broader field of prompt engineering in the context of empathetic dialogue generation."
-emotionprompt: leveraging psychology for large language models enhancement via emotional stimulus,gpt-4-1106-preview,8,"The presented abstract is highly relevant to prompt engineering, as it specifically addresses the enhancement of large language models (LLMs) through 'EmotionPrompt', which is essentially an innovative technique in prompt engineering involving emotional stimuli. Although the focus on 'hard prefix prompts' is not directly mentioned, the research could be considered adjacent or complementary due to its emphasis on improving the interaction between humans and LLMs by refining the way prompts are engineered. Hence, the relevance to prompt engineering is significant, warranting a high rating. Nonetheless, the specificity to 'hard prefix prompts' is not clearly stated, which is why the rating is not a full 10."
-scone: benchmarking negation reasoning in language models with fine-tuning and in-context learning,gpt-4-1106-preview,7,"The abstract describes a study focusing on negation reasoning in language models, particularly in the context of NLI (Natural Language Inference) and sentence completion tasks. Although the study is not directly about 'hard prefix prompts', prompt engineering is inherent in the design of tasks for language models to assess their abilities. The construction of the ScoNe-NLG and the insights from testing different prompt strategies with InstructGPT are relevant to prompt engineering, as they can inform how prompts can be optimized for better model performance, especially in handling negations. Therefore, the study is moderately relevant to prompt engineering, even if the primary focus is not on prompt construction itself."
-enabling classifiers to make judgements explicitly aligned with human values,gpt-4-1106-preview,9,"The abstract describes a study that is highly relevant to prompt engineering. It discusses how prompt-based few-shot learning is used to generate training data from large-scale language models, which is a key aspect of prompt engineering. The focus on value alignment and the construction of classifiers based on explicit human input also reflects on the prompt's ability to direct model behavior in a specific way, showcasing an advanced application of prompt engineering. The only reason it doesn't receive a perfect score is that it does not exclusively deal with 'hard prefix prompts', which the study request specifically asks for, but addresses a broader topic of prompt-based few-shot learning and classifier fine-tuning."
-bits of grass: does gpt already know how to write like whitman?,gpt-4-1106-preview,7,"The study is relevant to prompt engineering insofar as it examines how generative language models like GPT-3.5 and GPT-4 respond to zero-shot and many-shot prompts without fine-tuning. It evaluates the model's ability to generate poetry in a specific style, which is closely related to the effectiveness of the prompts used. It does not, however, specifically address 'hard prefix prompts,' but rather the broader concept of prompt effectiveness in generating author-specific language patterns. Therefore, the relevance is high but not entirely focused on the specific aspect of 'hard prefix prompts'."
-do prompts solve nlp tasks using natural language?,gpt-4-1106-preview,9,"The given title and abstract are highly relevant to prompt engineering as they discuss the effectiveness of different types of prompts in NLP tasks, a core issue in the study of prompt engineering. The research specifically evaluates human-designed prompts, schema prompts, and null prompts, which are directly related to the process of engineering and optimizing prompts for language models. However, it might not be a 'comprehensive systematic review' as the prompt specifies, which is why it doesn't receive a full 10 rating."
-cliptexture: text-driven texture synthesis,gpt-4-1106-preview,8,"The abstract discusses a texture synthesis framework that utilizes language-based controls to guide the synthesis process, which is relevant to prompt engineering. The use of text prompts to influence the output of an AI model aligns closely with prompt engineering principles, where the goal is to effectively communicate an intended outcome to the model through language. However, this paper specifically focuses on texture synthesis in images rather than prompt engineering as a broader field of study, hence the rating is not a perfect 10."
-bertnet: harvesting knowledge graphs with arbitrary relations from pretrained language models,gpt-4-1106-preview,7,"The research is highly relevant to prompt engineering as it involves using prompts to interrogate pretrained language models for extracting knowledge graph relationships. While the study does not focus on 'hard prefix prompts' specifically, the concept of designing prompts to elicit specific types of knowledge from language models is central to prompt engineering. Therefore, the use of prompts to define relations and the subsequent extraction process aligns with studying the effectiveness and methodology of prompt engineering, despite not directly addressing the systematic review topic on 'hard prefix prompts'."
-learning disentangled prompts for compositional image synthesis,gpt-4-1106-preview,9,"The abstract describes a study highly relevant to prompt engineering, focusing on a specific application in image synthesis. The research introduces a framework for learning disentangled prompts that separate semantic and domain information, which is a concept closely associated with constructing effective prompts in generative models. The ability to control these aspects and the application to zero-shot domain adaptation show a direct relevance to the field of prompt engineering. However, the focus is specific to image synthesis rather than a broad range of applications or a purely theoretical exploration, hence the rating is not a full 10."
-language models as black-box optimizers for vision-language models,gpt-4-1106-preview,9,"The provided abstract describes research into a novel fine-tuning approach for vision-language models (VLMs) using natural language prompts, which is highly relevant to prompt engineering. The study's focus on refining prompts using large language models and without requiring white-box access aligns with the core principles of prompt engineering. The research advances the understanding of how effective prompts can be generated and optimized, which is a fundamental aspect of prompt engineering. The deduction of one point is due to the specificity of the application to vision-language models and not to the broader spectrum of prompt engineering, but it still remains a significant contribution to the field."
-effects of target words and their locations in prompts,gpt-4-1106-preview,9,"The researched document is highly relevant to the field of prompt engineering as it directly investigates the effects of target words and their placement within prompts, which are critical components in constructing effective prompts for language models. The study's examination of different prompt structures and their outcomes on model performance, as well as comparisons between models that are instruction tuned (T0) and those that are not (ALBERT), provide valuable insights into prompt design strategies. The focus on varying difficulties and tasks, including NLI, coreference resolution, sentence completion, and multiple choice Q&A, further underscores the study's comprehensive approach to understanding prompt engineering. Although the title does not specifically mention 'hard prefix prompts,' the abstract indicates a thorough examination of prompt-related factors which are indeed pertinent to the study of prompt engineering. The only reason it's not a full 10 is that the thesis does not seem to exclusively focus on 'hard prefix prompts,' which could be construed as a specific type of prompt from the title of the systematic review."
-weak supervision for question type detection with large language models,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it investigates the use of rules as an alternative to manual prompts for leveraging large pre-trained language models in a specific NLP task, which is question type detection in dialogue. This aligns with prompt engineering by exploring how to effectively communicate with LLMs to produce desired outputs. The systematic review aspect is not directly mentioned, but given that the work compares different models and addresses the design of prompts versus rules, it reflects an understanding of the prompt engineering landscape, which is essential for a systematic review."
-b-pet: the pet model with parameter-efficient learning,gpt-4-1106-preview,4,"The abstract provided discusses the B-pet model, which focuses on few-shot learning (FSL), parameter efficiency, and storage reductions for model training and deployment. This involves the concept of 'prompt learning' as a component of the PET model, indicating some relevance to prompt engineering. However, the main content is centered on fine-tuning efficiency and parameter freezing, not directly on the systematic study or development of prompting methods. Consequently, relevance is limited to the aspect of 'prompt learning' in the context of the broader FSL and model efficiency discussions."
-automatic data transformation using large language model: an experimental study on building energy data,gpt-4-1106-preview,8,"The study presents a framework that includes a prompt generator for large language models, which is highly relevant to the field of prompt engineering. The iterative prompt optimization mechanism for flaw detection aligns well with advanced prompt engineering techniques. Although the focus is on building energy data and SQL code transformation, the core concept of utilizing LLMs with a prompt-based interface has broad implications for prompt engineering. The study emphasizes the integration of domain knowledge and adaptive learning, which are crucial components of prompt engineering. The reason for not rating it a full 10 is that the primary application is data transformation rather than a broad analysis of 'hard prefix prompts' in general."
-grass: unified generation model for speech-to-semantic tasks,gpt-4-1106-preview,4,"The paper is relevant to prompt engineering to some extent as it involves generating target text conditioned on a task-related prompt for audio data. Although it does focus on utilizing prompts for refining the production of target text, which is an aspect of prompt engineering, it specifically addresses speech-to-semantic tasks rather than hard prefix prompts within a text-input domain. Therefore, while it has some relevance due to the usage of prompts in the model's training and task execution, it is not a direct study on hard prefix prompts, reducing its relevance to the specific area of prompt engineering under review."
-leveraging vision-language foundation models for fine-grained downstream tasks,gpt-4-1106-preview,7,"The abstract mentions developing a multitask fine-tuning strategy based on a positive/negative prompt formulation to improve the performance of vision-language foundation models on fine-grained attribute detection and localization tasks. This indicates a utilization of prompt engineering for improving model accuracy on specific tasks. While it is not specifically about 'hard prefix prompts' which could be more related to text-based tasks, the concept of using prompt strategies to finetune models, even in the vision-language domain, is related to the broader field of prompt engineering. Hence, the relevance is moderately high but not entirely direct with respect to the specific topic of hard prefix prompts."
-towards expert systems for improved customer services using chatgpt as an inference engine,gpt-4-1106-preview,8,"The abstract indicates that the paper discusses an iterative procedure that involves prompt engineering as part of the process to develop ChatGPT-powered expert systems for customer services. Since it addresses the design of descriptive knowledge and few-shot prompts, which are key components of prompt engineering for AI models, it is relevant to the study of prompt engineering. The relevance is not at the maximum since the abstract suggests that the paper covers a broader range of topics within the AI application in customer service, and prompt engineering is only one part of the study."
-generating domain-specific programs for diagram authoring with large language models,gpt-4-1106-preview,8,"The study addresses the concept of engineering prompts specifically for one-shot learning with Large Language Models (LLMs) to generate domain-specific language (DSL) programs, which is relevant to prompt engineering. Developing structured prompts that can effectively guide LLMs, like the study's use of LLMs for Penrose diagram creation from prose, illustrates a practical application of prompt engineering. This process is central to optimizing LLM performance in specific tasks, thus the high relevance. However, the provided title and abstract do not mention 'hard prefix prompts' as a focused subject within the realm of prompt engineering, which would align directly with the systematic review of hard prefix prompts. Instead, it discusses prompt structures for DSL program creation in general, which may not comprehensively cover all aspects of prompt engineering or the specific topic of hard prefix prompts, leading to a rating slightly less than perfect."
-rewriting math word problems with large language models,gpt-4-1106-preview,9,"The abstract provided talks about a study where Large Language Models, specifically GPT-4, were used to rewrite math word problems, following the same guidelines as human authors. It directly relates to prompt engineering as it involves developing and comparing different prompting strategies like zero-shot, few-shot, and chain-of-thought. Furthermore, it discusses the process of encoding mathematical components using GPT's capacity to write Python code, which is an essential aspect of prompt engineering when dealing with specialized tasks such as math word problems. Although the primary focus is on improving learning outcomes rather than prompt optimization, the process of refining the prompts to achieve high-quality rewrites is squarely within prompt engineering methodology. The reason for not rating it a full 10 is because the primary outcome seems to be focused on educational efficacy rather than the refinement of the prompt engineering itself."
-eliciting knowledge from language models for event extraction,gpt-4-1106-preview,8,"The paper is clearly relevant to prompt engineering as it discusses the use of prompt-based learning to elicit knowledge from language models for a complex NLP task like event extraction. Designing such prompts is closely related to the concept of prompt engineering, which involves crafting inputs that help elicit desired responses from the model. Although the paper might not focus solely on 'hard prefix prompts' as per the original systematic review topic, it pertains to the general field of study of how prompts can be engineered to improve the extraction of information from language models. The deduction of two points in rating reflects that while it is highly relevant, it might not cover 'hard prefix prompts' specifically if that were the exclusive focus of the review."
-ccprompt: counterfactual contrastive prompt-tuning for many-class classification,gpt-4-1106-preview,9,"The provided abstract relates to the development and analysis of a specific type of prompt-tuning approach named 'Counterfactual Contrastive Prompt-Tuning (CCPrompt)' which is highly relevant to the field of prompt engineering. Prompt engineering involves the design and optimization of prompts to improve the performance of neural language models on various tasks. The described CCPrompt method focuses on enhancing many-class classification by identifying contrastive attributes and using them to construct elaborate prompts, which is a direct application of prompt engineering techniques. The high relevance rating is supported by the abstract's discussion on the method's effectiveness for different NLP tasks and the use of prompts as a core element of the model. The rating is not a perfect 10 primarily because it does not cover a 'systematic review' of hard prefix prompts but instead introduces a novel approach within prompt engineering."
-what does a platypus look like? generating customized prompts for zero-shot image classification,gpt-4-1106-preview,8,"The abstract describes research on generating prompts to improve the performance of open-vocabulary image classification models, which is a significant contribution to the field of prompt engineering, particularly in the realm of zero-shot learning. While the study focuses on image classification and doesn't specifically mention 'hard prefix prompts', it does address the creation and optimization of prompts to improve task performance, which is relevant to the general area of prompt engineering."
-autographex: zero-shot biomedical definition generation with automatic prompting,gpt-4-1106-preview,8,"The abstract discusses a zero-shot definition generation model that leverages prompting with pre-trained language models, specifically in the context of biomedical terminology. While it does not explicitly mention 'hard prefix prompts', it does relate to prompt engineering as it involves automatically generating prompts to facilitate knowledge elicitation from language models. This is highly relevant to studies exploring various aspects of prompt engineering, although it may not address the 'hard prefix prompts' directly. The high relevance is due to the focus on automatic prompting which is a subset of prompt engineering. The rating is not a full 10 as the abstract does not cover the full breadth of prompt engineering, specifically not mentioning the term 'hard prefix prompts'."
-better zero-shot reasoning with role-play prompting,gpt-4-1106-preview,9,"The study's theme is highly relevant to prompt engineering as it focuses on advanced techniques of prompting, specifically role-play prompting, and its impact on the performance of large language models (LLMs). Prompt engineering is crucial for the effective utilization of LLMs, and this research delves into the significant aspect of how different prompting methods, like role-play, can enhance a model's reasoning abilities in zero-shot scenarios across a variety of benchmarks. Although the study is not specifically about 'hard prefix prompts,' the broader category of prompt engineering still applies, thus the high relevance rating."
-zero-shot slot filling with slot-prefix prompting and attention relationship descriptor,gpt-4-1106-preview,8,"The described paper introduces a novel prompting scheme specifically designed for zero-shot slot filling, which is directly related to prompt engineering. Prompt engineering involves creating effective prompts to guide models' behavior without extensive training, and this paper's approach to including learnable tokens and slot names fits within that scope. The use of attention values to enhance the prompts further ties it to advancements in the methodology of how prompts are constructed and their relationship to the model's attention mechanisms. The rating is not a perfect 10 because the paper is more focused on slot filling and attention features rather than a broad study on prompt engineering, but it still offers significant insights into the field."
-relational representation learning for zero-shot relation extraction with instance prompting and prototype rectification,gpt-4-1106-preview,7,"The paper's focus on Instance Prompting as a method to bridge the gap between pre-training and fine-tuning for relation extraction aligns with techniques used in prompt engineering, particularly in the context of tailoring model outputs to specific tasks without extensive additional training data (zero-shot learning scenarios). Additionally, the mechanism of guiding pre-trained models to generate more task-specific representations is akin to the notion of constructing prompts to elicit desired responses from a model. However, the paper does not explicitly address 'hard prefix prompts' or the systematic review of prompt engineering as a broader field, thereby receiving a moderate score instead of a higher one for full relevance."
-distilling hypernymy relations from language models: on the effectiveness of zero-shot taxonomy induction,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it discusses the extraction of structured knowledge from language models via prompting techniques, which is a core aspect of prompt engineering. Although it specifically focuses on taxonomy learning, prompt engineering is central to the methodology, making the paper relevant to the field. However, the exact match for 'hard prefix prompts' is not indicated, so the paper might not address that specific aspect of prompting, hence the rating is not a full 10."
-prompting scientific names for zero-shot species recognition,gpt-4-1106-preview,7,"The study is relevant to prompt engineering because it explores how different forms of prompts (using scientific names vs. common English names) can affect the performance of Vision-Language Models like CLIP in zero-shot species recognition tasks. Although it doesn't focus specifically on 'hard prefix prompts,' it directly examines the impact of prompt design on model accuracy, which is a significant aspect of prompt engineering. The study’s findings that common names yield better results than scientific names for prompts provide insight into effective strategies for prompt creation, thus contributing to the field of prompt engineering."
-zero-shot next-item recommendation using large pretrained language models,gpt-4-1106-preview,8,"The abstract describes the process of using prompting strategies for LLMs to conduct next-item recommendations, which is directly related to prompt engineering. The study details a prompting approach specific for improving the performance of LLMs in a zero-shot recommendation task. While the focus is on the application of prompts in recommender systems, rather than on the study of 'hard prefix prompts' more generally, it contributes valuable insights into how prompts can be engineered and utilized to enhance the capabilities of LLMs in a practical scenario. This aligns with the broader field of prompt engineering, hence the high relevance rating."
-selfcheck: using llms to zero-shot check their own step-by-step reasoning,gpt-4-1106-preview,7,"While the study described in the abstract is not directly related to prompt engineering in terms of developing or enhancing hard prefix prompts, it does address an important aspect of how LLMs (Large Language Models) can be improved in processing and verifying their reasoning, which can indirectly benefit prompt engineering. The ability of an LLM to self-check its reasoning is valuable for prompt engineering as it can lead to more effective prompting strategies that rely on the model's self-assessment of its reasoning process. Specifically, if an LLM can recognize errors in its own reasoning and adjust accordingly, this can inform the development of more advanced prompting techniques. The study is relevant to the field of prompt engineering, but it's not a direct study on prompt engineering itself, hence the rating of 7."
-c3: zero-shot text-to-sql with chatgpt,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering because it focuses on a method that involves 'Clear Prompting' which is essentially a form of prompt engineering. It must strategically craft inputs to guide the ChatGPT model to generate correct SQL queries without previous training (zero-shot capability). Although the main focus is on Text-to-SQL, the principles and methods applied are directly related to prompt engineering as they deal with how to effectively prompt a language model to achieve a specific task."
-tab-cot: zero-shot tabular chain of thought,gpt-4-1106-preview,8,"The abstract describes Tab-CoT, a novel prompting method that enhances the structure and explicit detailing of the reasoning process for complex tasks in a tabular format. This is highly relevant to prompt engineering, particularly as it relates to refining the interventions used to elicit specific and structured responses from AI systems. However, it is specifically tailored for tabular data and reasoning tasks, so it might not cover all aspects of prompt engineering study which can include other types of data and tasks. Hence the rating is not a perfect 10."
-the benefits of label-description training for zero-shot text classification,gpt-4-1106-preview,7,"The abstract describes a method to improve zero-shot text classification accuracies by using data that describes labels, which aligns with prompt engineering efforts that involve describing tasks or labels to better inform the model's predictions. Although it doesn't explicitly address 'hard prefix prompts', the concept of using label descriptions can be relevant to designing more effective prompts. Thus, the relevance to prompt engineering is substantial but not direct, hence the rating of 7."
-self-icl: zero-shot in-context learning with self-generated demonstrations,gpt-4-1106-preview,7,"The abstract describes a novel approach to in-context learning (ICL) with language models, which is indeed relevant to the study of prompt engineering as it focuses on generating and utilizing prompts to improve the performance of models without the need for additional demonstrations. The concept of Self-ICL generates pseudo-inputs and pseudo-labels as part of the prompting process, which aligns with the techniques used in prompt engineering. The relevance is not a perfect 10 because the study doesn't specifically address 'hard prefix prompts' as mentioned in the original query, but it is still highly relevant to the broader field of prompt engineering and the design of prompting strategies to improve language model outcomes in a zero-shot setting."
-ontotype: ontology-guided zero-shot fine-grained entity typing with weak supervision from pre-trained language models,gpt-4-1106-preview,6,"The paper discusses a method which leverages pre-trained language models (PLMs) for fine-grained entity typing (FET) and specifically mentions how it ensembles multiple PLM prompting results, suggesting a novel use of prompts in model processing. While the main focus of the study is on FET and it introduces OntoType, a zero-shot ontology-guided FET method, the paper still has relevance to prompt engineering since it deals with generating and refining prompts for PLMs to improve typing resolution. The significance of prompt engineering is not the central theme of the paper, but prompts play a significant role in the described methodology, which aligns with how prompts can be engineered to work with ontological structures. Therefore, the paper is somewhat relevant to prompt engineering but not directly focused on it."
-jack-ryder at semeval-2023 task 5: zero-shot clickbait spoiling by rephrasing titles as questions,gpt-4-1106-preview,7,"The paper addresses the use of pre-trained models to manipulate and interact with prompts by rephrasing clickbait titles into questions to optimize the models' response towards the task of clickbait spoiling. Although not directly focusing on 'hard prefix prompts', this study is relevant to the broader field of prompt engineering, as it involves the strategic alteration of prompts to suit the capabilities of pre-trained QA models and to achieve specific outcomes without task-specific training. The rephrasing technique and optimization strategy for better alignment with pre-trained models' strengths are of interest in prompt engineering research."
-anovl: adapting vision-language models for unified zero-shot anomaly localization,gpt-4-1106-preview,7,"The abstract discusses the adaptation of CLIP models for zero-shot anomaly localization which involves designing specialized prompts for text supervision, a key aspect of prompt engineering. The introduction of a unified domain-aware contrastive state prompting template is directly related to the study of how prompts influence model performance, which is a subset of prompt engineering. The focus on aligning text with specific visual representations indicates relevance as it showcases a practical application of prompt engineering in the field of computer vision and anomaly detection. However, the paper's primary focus is on anomaly localization rather than prompt engineering itself, which is why the rating is not closer to 10."
-lt at semeval-2023 task 1: effective zero-shot visual word sense disambiguation approaches using external knowledge sources,gpt-4-1106-preview,6,"The paper abstract is partially relevant to prompt engineering study as it discusses different textual prompting strategies as they relate to multi-modal machine learning and zero-shot capabilities. However, the main focus seems to be on Visual Word Sense Disambiguation (VWSD) using pre-trained visiolinguistic models and external knowledge sources, rather than a direct emphasis on hard prefix prompts or a comprehensive analysis of prompt engineering. The relevance rating of 6 reflects that prompt engineering is a supporting concept in the study rather than the primary focus."
-instruction tuning with lexicons for zero-shot style classification,gpt-4-1106-preview,7,"The abstract discusses the use of lexicons for instructing language models in style classification without the need for fine-tuning. This study is relevant to prompt engineering, as it explores how specific language structures (style lexicons) can be used to guide pre-trained language models to perform new tasks without additional training. The concept of using lexical cues fits within the larger framework of prompt engineering, which seeks to optimize prompts to elicit desired outputs from language models. However, the focus on 'style classification' and 'zero-shot performance' is slightly tangential to prompt engineering's central theme of crafting and testing various prompts, hence the rating is not a full 10."
-zero-shot faithfulness evaluation for text summarization with foundation language model,gpt-4-1106-preview,8,"The paper's relevance to prompt engineering study is high, since it investigates the use of a new metric FFLM, which involves prefixing text to evaluate faithfulness in text summarization. This approach is directly related to how prompts, including hard-coded prefixes, can be engineered to improve the predictions of a language model. Although the main focus is on faithfulness evaluation rather than the study of prompts in general, the use of prefixes is a significant component of prompt engineering techniques."
-the art of socratic questioning: zero-shot multimodal reasoning with recursive thinking and self-questioning,gpt-4-1106-preview,7,"The study introduces Socratic Questioning as a method to improve problem-solving in large-scale language models, which is closely related to prompt engineering as it informs how prompts can be structured to facilitate more complex reasoning in AI. The emphasis on recursive thinking and self-questioning aligns with designing prompts that elicit more detailed and nuanced responses. However, it slightly diverges from the specific topic of 'hard prefix prompts' as it discusses a broader technique rather than focusing solely on the effects of hard prefixes in prompts."
-zero-shot refinement of buildings' segmentation models using sam,gpt-4-1106-preview,8,"The abstract discusses the adaptation of foundation models using prompting strategies, which is relevant to prompt engineering. Specifically, it mentions the use of prompts to augment a Segment Anything Model (SAM) with recognition abilities. This is a direct application of prompt engineering to improve the performance of AI models. The focus is not on a 'hard prefix prompt' as outlined in the initial request, which would fit the definition of prompt engineering more closely, but the use of prompts to refine the SAM model's capabilities suggests a strong relevance to the field."
-mm-react: prompting chatgpt for multimodal reasoning and action,gpt-4-1106-preview,7,"The title and abstract of the study discuss 'MM-REACT,' a system designed to enhance the capabilities of language models like ChatGPT by integrating them with vision experts for multimodal reasoning and action. The relevance to prompt engineering study is significant given that MM-REACT involves designing textual prompts that can facilitate multimodal information processing. Although the study does not exclusively focus on 'hard prefix prompts,' the concept of textual prompt design lies at the core of prompt engineering, hence the relevance. This system demonstrates an application of prompt engineering principles in the context of multimodal reasoning, which is a subset of the broader field of prompt engineering."
-can an embodied agent find your “cat-shaped mug”? llm-guided exploration for zero-shot object navigation,gpt-4-1106-preview,7,"The abstract describes 'Language-guided Exploration' (LGX), which is a novel algorithm that uses Large Language Models (LLMs) to assist an embodied agent in zero-shot object goal navigation. The relevance to prompt engineering is significant in that it involves leveraging LLMs and employing various prompting strategies to improve sequential navigational decisions. The study of different prompting strategies directly pertains to prompt engineering, as it impacts how the language model guides the agent. While the primary focus of the study seems to be on robot navigation and object detection, the aspects where LLMs are being utilized and prompting strategies are analyzed contribute to the field of prompt engineering studies, hence the rating of 7. However, it's not exclusively focused on hard prefix prompts or a comprehensive systematic review of such prompts in prompt engineering, which would have resulted in a higher rating."
-the art of prompting: event detection based on type specific prompts,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering as it explores the effectiveness of type-specific prompts for event detection in various scenarios, including few-shot and zero-shot learning. It directly addresses how the construction and application of prompts can affect model performance, a crucial aspect of prompt engineering."
-is evalita done? on the impact of prompting on the italian nlp evaluation campaign,gpt-4-1106-preview,8,"The provided title and abstract directly relate to prompt-based learning, a key component of prompt engineering. The study assesses the efficacy of these prompts in Italian NLP tasks, which contributes to the understanding of prompt-based learning within a specific linguistic context. Although the study is more focused on the applications and implications for evaluation campaigns, rather than the methodological exploration of 'hard prefix prompts', it remains significantly relevant to the field of prompt engineering, especially in demonstrating the practical implications and current challenges in the field."
-clip also understands text: prompting clip for phrase understanding,gpt-4-1106-preview,8,"The paper explores the use of the text encoder of CLIP for phrase understanding, which relates directly to prompt engineering as it involves designing effective prompts to leverage the model's capabilities. The comparison with other language models like BERT underlines the importance of how prompts are formulated in model performance. This research contributes to the understanding of how different prompting strategies can impact the outcome of language understanding tasks. Although it doesn't focus on 'hard prefix prompts' as specified, the study is highly relevant to the broader field of prompt engineering and how prompts can be optimized for model understanding."
-odor descriptor understanding through prompting,gpt-4-1106-preview,6,"The study addresses a niche aspect of prompt engineering by focusing on generating word embeddings specific to olfactory descriptors, which implies a form of prompt optimization for a specialized application. The relevance to prompt engineering is moderate because it deals with improving the interaction between an NLP model and domain-specific language, which is an important aspect of prompt engineering. However, the paper does not seem to offer a broad investigation into hard prefix prompts or their systematic review, but rather presents practical methods for a specific type of prompting to improve performance in a specialized benchmark."
-arggen: prompting text generation models for document-level event-argument aggregation,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering since it discusses the use of prompt-based methods for text generation in Information Extraction tasks, specifically for document-level event-argument aggregation. This demonstrates a practical application of prompt engineering in natural language understanding and reasoning, which aligns with the broader topic of prompt engineering study. However, it may not directly address the systematic review of 'hard prefix prompts,' hence the rating is not a full 10."
-on the evaluations of chatgpt and emotion-enhanced prompting for mental health analysis,gpt-4-1106-preview,8,"The study evaluates how different prompting strategies, specifically those with emotional cues, affect the performance of a large language model like ChatGPT in the context of mental health analysis. Since prompt engineering involves the design of inputs that can effectively guide AI models to produce desired outputs, the research's focus on the impact of prompts enhanced with emotional information is highly relevant to the field of prompt engineering. The study's analysis of the efficacy of these prompts directly contributes to understanding and optimizing prompt design, which is a central concern in prompt engineering. However, the score is not a perfect 10 because the study is not exclusively dedicated to prompt engineering — it also delves into the broader scope of mental health analysis performance of language models."
-reasoning implicit sentiment with chain-of-thought prompting,gpt-4-1106-preview,8,"The study addresses advanced prompt engineering techniques for implicit sentiment analysis (ISA) using chain-of-thought prompting, introducing a Three-hop Reasoning (THOR) framework. This is highly relevant to the field as it demonstrates prompt engineering's applicability in complex reasoning tasks and shows how to structure prompts to induce reasoning steps. The relevance is not rated a perfect 10 since the study focuses more on the reasoning aspect than on prompt engineering itself, but it is nonetheless a significant contribution to the area of prompt construction and optimization."
-pearl: prompting large language models to plan and execute actions over long documents,gpt-4-1106-preview,9,"The study introduces PEARL, a framework specifically designed for prompting large language models (LLMs) that enhances their capability to process and reason over lengthy texts. This is highly relevant to prompt engineering as it directly tackles challenges in designing prompts that assist LLMs in managing complex tasks such as decomposing questions, planning, and executing a sequence of actions to generate accurate responses. The successful application of PEARL over challenging datasets and its comparison with other prompting methods like zero-shot and chain-of-thought demonstrates a significant advancement in the field of prompt engineering, particularly for tasks involving extensive reasoning. It only falls short of a perfect rating because it addresses a specific subset of prompt engineering focused on long documents rather than the entire breadth of prompt engineering."
-multimodal procedural planning via dual text-image prompting,gpt-4-1106-preview,9,"The provided abstract discusses a dual-modality prompting method involving text and image prompts to guide procedural planning, which is highly relevant to the field of prompt engineering since it directly deals with how prompts can be engineered and optimized for multi-modal tasks. The method described leverages the capabilities of large language models and text-to-image generation, which are both core technologies relevant to prompt engineering. The relevance isn't a perfect 10 due to the specific focus on the generation of text-image pairs for task completion, rather than on the hard prefix prompts mentioned in the initial query, but the study still contributes significantly to the broader topic of how prompts can be structured and used effectively."
-federated prompting and chain-of-thought reasoning for improving llms answering,gpt-4-1106-preview,7,"The study appears to address question handling and improving response accuracy in Large Language Models through techniques that could be considered part of prompt engineering, namely the Self-Consistency (SC) and Chain-of-Thought (CoT) techniques. Prompt engineering often involves strategies to enhance the model's understanding and output, and these techniques align with such goals. While the study does not directly mention 'hard prefix prompts', it engages with the broader area of prompts and their optimization, therefore the relevance is moderate to high."
-code prompting: a neural symbolic method for complex reasoning in large language models,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it explores advanced prompting methods (code prompting) in the context of improving the performance of large language models in complex reasoning tasks. This directly pertains to the development and evaluation of new prompting techniques, which is a core aspect of prompt engineering. The abstract indicates significant experimental work and analysis that can contribute to the field, such as comparing code prompting with the existing chain-of-thought (CoT) prompting. However, the study seems to focus on a specific type of prompting (neural symbolic prompting with code), rather than a comprehensive systematic review. Hence, the rating is not a full 10, but it's still high because of the clear relevance and potential impact on the study of prompting methods."
-set-of-mark prompting unleashes extraordinary visual grounding in gpt-4v,gpt-4-1106-preview,7,"The relevance of this study to prompt engineering lies in its methodology of using a visual prompting method (Set-of-Mark or SoM) to improve the performance of a language model with visual capabilities (GPT-4V). Although the study is focused on enhancing the visual grounding aspects of multimodal models, it does indirectly relate to the broader concept of prompt engineering by demonstrating a specific way to structure input (in this case, visual input) to achieve better performance on tasks that require understanding and interpreting visual information. Thus, the study is somewhat relevant as it expands the scope of prompt engineering into the multimodal domain, demonstrating that the structuring of prompts is important not just in text but also in how models interact with and interpret visual data."
-legal syllogism prompting: teaching large language models for legal judgment prediction,gpt-4-1106-preview,8,"The paper focuses on a specific application of prompt engineering in the context of legal judgment prediction using a technique named 'legal syllogism prompting'. Although it is not about 'hard prefix prompts' per se, it explores a similar area by using prompts to direct the response of large language models. This is relevant to prompt engineering as it demonstrates the application of custom prompts to structure logical reasoning in AI, which is in line with the broader study of how prompts can be designed to elicit specific types of responses from language models. The systematic review on hard prefix prompts would likely cover various approaches in prompt engineering including such domain-specific applications; hence, the paper could offer valuable insights into this niche but relevant application within the field."
-ramp: retrieval and attribute-marking enhanced prompting for attribute-controlled translation,gpt-4-1106-preview,9,"The study presents 'Retrieval and Attribute-Marking enhanced Prompting (RAMP)', a method that modifies and enhances the standard prompting approach in the context of machine translation, specifically for attribute-controlled translation. The inclusion of attribute annotations and the use of a semantic retrieval component are innovative strategies within prompt engineering. This approach is relevant to prompt engineering as it directly involves manipulating and engineering prompts to improve performance on a language task. It is particularly focused on prompting in the context of large language models, which is a core area of interest in prompt engineering studies. Although the paper is focused on translation tasks, the techniques and concepts discussed may be applicable to prompt engineering in broader contexts as well."
-pieclass: weakly-supervised text classification with prompting and noise-robust iterative ensemble training,gpt-4-1106-preview,8,"The paper discusses PIEClass, which includes a pseudo label acquisition module utilizing zero-shot prompting of pre-trained language models (PLMs). This is relevant to prompt engineering because it involves using prompts to facilitate text classification in the absence of extensive datasets. It shows an application of prompt engineering in enhancing understanding beyond static keyword matching, which is a core challenge in the field. The iterative ensemble training module, while interesting as an approach to classifier training, is less directly related to prompt engineering. Hence the score is an 8 instead of a perfect 10, as the relevance is strong but not exclusively focused on prompt engineering."
-self-explanation prompting improves dialogue understanding in large language models,gpt-4-1106-preview,8,"The study focuses on a novel 'Self-Explanation' prompting strategy specifically designed to improve Large Language Models' (LLMs) understanding in task-oriented dialogues, which falls under the broader category of prompt engineering. Although it does not deal with 'hard prefix prompts' per se, the research is highly relevant to the field of prompt engineering because it explores new methods for improving the performance of LLMs in processing complex dialogue contexts. The relevance rating is not a full 10 because the study is not directly about 'hard prefix prompts,' but it is significant due to its contribution to the overarching goal of optimizing prompts to enhance model comprehension." -fixed input parameterization for efficient prompting,gpt-4-1106-preview,10,"The abstract provided discusses the Fixed Input Parameterization (FIP) problem in the context of prompt engineering and how it aims to make the use of fixed prompts more efficient by integrating them into the parameters of a Language Model (LM). This is highly relevant to prompt engineering study as it tackles the optimization of prompt usage, which is a core aspect of prompt engineering in language models. The efficiency improvements and the exploration of methodologies for FIP in specific tasks such as persona-dependent conversation, semantic parsing, and zero-shot learning with task instructions offer direct insights into prompt engineering. Therefore, the content of this abstract is directly related to the field of prompt engineering, addressing both the technical and application aspects of the topic." -map: low-data regime multimodal learning with adapter-based pre-training and prompting,gpt-4-1106-preview,7,"The study discusses the use of prompting in the context of vision-language multimodal learning, which is pertinent to prompt engineering. The focus on a moderate-size model (MAP) that leverages adapter-based pretraining and prompting for efficient transfer learning in a low-data regime demonstrates the application of prompting strategies. While the specifics of 'hard prefix prompts' are not mentioned, the concept of prompting is central to the paper, thereby making it relevant to the broader field of prompt engineering studies. However, the relevance is not maximal since the primary focus seems to be on the application of prompting within multimodal learning and not on the systematic review of the prompt engineering itself." -p5: plug-and-play persona prompting for personalized response selection,gpt-4-1106-preview,8,"The presented paper is highly relevant to prompt engineering due to its focus on using prompt sequences for personalized response selection in chatbots, which is a specific application of prompt engineering. The proposed method integrates the use of prompts to manage conversation flow based on persona, and it directly pertains to the engineering of prompts that help personalize chatbot responses. However, the paper is not exclusively about 'hard prefix prompts' (a term often related to the fixed instruction or text added to input data in language models to steer the response), which might have been implied in the phrase 'comprehensive systematic review on hard prefix prompts' in the original prompt. The paper focuses on persona prompting, which is a subset of prompt engineering but does not represent a broad overview or systematic review of hard prefix prompts in general. 
Therefore, while very relevant, the rating is not a full 10." -cof-cot: enhancing large language models with coarse-to-fine chain-of-thought prompting for multi-domain nlu tasks,gpt-4-1106-preview,8,"The presented work introduces the Coarse-to-Fine Chain-of-Thought (CoF-CoT) approach as a form of prompt engineering which is highly relevant to the field. It focuses on enhancing the reasoning capabilities of Large Language Models in Natural Language Understanding tasks. While the study might not directly address 'hard prefix prompts,' it proposes a novel way of structuring prompts that allow for a breakdown of tasks into multiple reasoning steps. This is inherently connected to the concept of prompt engineering, as it involves designing prompts that guide the model through a reasoning process, thus fitting well within the scope of prompt engineering studies. The reason for not rating it a 10 is because it doesn't explicitly state a focus on 'hard prefix prompts,' which the original query specified, but it is nonetheless substantially relevant." -a communication theory perspective on prompting engineering methods for large language models,gpt-4-1106-preview,9,"The provided title and abstract offer a high level of relevance to the field of prompt engineering study as it directly discusses prompting methods for large language models, an essential component of prompt engineering. It suggests a novel perspective by framing the review within communication theory, which is crucial for understanding the interactions between humans and AI in the PE context. Additionally, the abstract references practical use-cases in the form of typical tasks and discusses the future developments in PE methodologies, all of which are core to the study of prompt engineering. The only reason it doesn't receive a full score is due to the lack of specific detail on 'hard prefix prompts', which is mentioned in the prompt. However, the general connection to PE is strong, justifying the high rating." -winning solution for the cvpr2023 visual anomaly and novelty detection challenge: multimodal prompting for data-centric anomaly detection,gpt-4-1106-preview,6,"The study discusses the use of multimodal prompts within the context of zero-shot anomaly segmentation, which is related to the field of prompt engineering due to the involvement of customized prompts for model adaptation. While the study may not directly address 'hard prefix prompts', the concept of utilizing expert knowledge and context to create prompts for foundation models exhibits a component of prompt-engineering techniques. This relevance is not direct, as prompt engineering typically involves text-based language prompts for natural language models as opposed to prompts for visual anomaly detection; hence, the rating is above the midpoint but not fully aligned with the focus on prompt engineering." -2nd place winning solution for the cvpr2023 visual anomaly and novelty detection challenge: multimodal prompting for data-centric anomaly detection,gpt-4-1106-preview,7,"The technical report describes a methodology for zero-shot anomaly segmentation using multi-modal prompts, which falls under the broader category of prompt engineering. Multimodal prompting constitutes a form of prompt engineering as it involves designing and utilizing prompts that can effectively guide machine learning models, specifically foundation models, for particular tasks such as anomaly detection. 
This is relevant to prompt engineering study as it includes the formulation and application of prompts; however, the focus on 'hard prefix prompts' is not explicitly stated. Therefore, the relevance is significant but not complete in the context of a systematic review on hard prefix prompts in prompt engineering." -prompting segmentation with sound is generalizable audio-visual source localizer,gpt-4-1106-preview,8,"The abstract describes the use of a novel 'encoder-prompt-decoder' paradigm which directly relates to prompt engineering, as it involves constructing Semantic-aware Audio Prompts (SAPs) to improve model performance. This approach aims to enable pre-trained models to focus on sounding objects and deal with data scarcity and varying distributions, both of which are significant concerns in prompt engineering. Although the study focuses specifically on the audio-visual domain and not directly on general prompt engineering methodologies, its innovative use of prompts to bridge the semantic gap between modalities indicates its relevance to the field of prompt engineering. Therefore, it receives a high relevance rating." -aspiro: any-shot structured parsing-error-induced reprompting for consistent data-to-text generation,gpt-4-1106-preview,7,"The presented abstract details a novel approach (ASPIRO) for structured data verbalization which utilizes prompt engineering techniques such as re-prompting LLMs based on parsing checks. However, the focus appears to be more on reducing parsing errors and improving data-to-text generation consistency than on the study of hard prefix prompts specifically. Therefore, it is moderately relevant to the broader topic of prompt engineering but does not focus on a 'comprehensive systematic review on hard prefix prompts.'" -prompting strategies for citation classification,gpt-4-1106-preview,8,"The paper directly addresses prompt engineering by investigating the effectiveness of various prompting strategies for a specific NLP task – citation classification. This is highly relevant to the study of prompt engineering as it explores how different prompting methods can influence the performance of language models. Although it doesn't specifically mention 'hard prefix prompts', the mention of 'Fixed-prompt LM tuning' suggests it touches on the subject of static prompts, which could be related. The research's systematic approach to comparing these strategies and the inclusion of newly proposed methods indicate a substantial contribution to the understanding of how prompting affects language model performance, making it fairly relevant to the field of prompt engineering." -can large language models transform computational social science?,gpt-4-1106-preview,8,"The research discussed in the title clearly has implications for prompt engineering, as it talks about using Large Language Models (LLMs) for Computational Social Science (CSS) tasks. The abstract mentions 'prompting best practices,' indicating that the study likely delves into how to formulate prompts to optimize LLM performance in CSS applications. While the study might not focus exclusively on 'hard prefix prompts' but rather on a broader range of prompting techniques, the findings would still be highly relevant to the field of prompt engineering since they contribute to understanding how to effectively employ prompts in complex analysis tasks, such as CSS. 
The relevance is not rated as a full 10 because the study’s primary focus seems to be on broad LLM application in CSS rather than focused on prompt engineering alone." -solving challenging math word problems using gpt-4 code interpreter with code-based self-verification,gpt-4-1106-preview,8,"The abstract describes a study focusing on the development of a prompting strategy (explicit code-based self-verification) to enhance the performance of the GPT-4 Code Interpreter in solving math problems. Although this study is centered on prompting methods, it is specifically tailored to mathematical reasoning and involves verification of the model's output. It is highly relevant to the field of prompt engineering in that it presents a novel approach to using prompts to improve the accuracy of a language model's responses. The reason for not giving a full score of 10 is that the study is particularly focused on math word problems, which is just one aspect of prompt engineering." -a fine-grained comparison of pragmatic language understanding in humans and language models,gpt-4-1106-preview,6,"The study addresses a comparison of pragmatic language understanding in both humans and language models, which indirectly relates to prompt engineering since the effectiveness of prompts can be influenced by a model's ability to deal with pragmatics and non-literal meanings. However, the study does not focus specifically on prompt engineering or on the design, structure, or optimization of prompts ('hard prefix prompts'), therefore the relevance is not direct and merits a mid-range rating." -knowledge-in-context: towards knowledgeable semi-parametric language models,gpt-4-1106-preview,4,"While the discussed paper presents a novel semi-parametric language model architecture that is closely related to enhancing the performance of language models, it does not directly address prompt engineering, especially with regards to 'hard prefix prompts.' The architecture indeed involves prompting in the broader sense, as it uses prompts to generate output answers, but the main focus of the study lies in knowledge integration and model efficiency, rather than on the design or study of prompts themselves. Therefore, the relevance to prompt engineering is somewhat tangential and not the central theme of the paper." -learning to decompose visual features with latent textual prompts,gpt-4-1106-preview,8,"The abstract provided discusses an innovation in prompt engineering, specifically within the domain of vision-language models. The study introduces Decomposed Feature Prompting (DeFo), which utilizes textual prompts as part of the learning process, aligning with the concept of prompt engineering. The relevance to prompt engineering is high because it directly involves the use of textual inputs to improve the feature extraction in a dual-model architecture. However, it does not address 'hard prefix prompts' specifically, which suggests that the content is more general in the realm of prompt engineering rather than focused on a comprehensive systematic review of hard prefix prompts." -xricl: cross-lingual retrieval-augmented in-context learning for cross-lingual text-to-sql semantic parsing,gpt-4-1106-preview,7,"The abstract describes a system (XRICL) that involves constructing prompts to improve cross-lingual Text-to-SQL semantic parsing, which is relevant to the field of prompt engineering as it deals with the creation and optimization of prompts for language models. 
However, the focus on retrieval-augmented in-context learning and the cross-lingual aspect mean it is not entirely centered on 'hard prefix prompts,' a term that denotes a subset of prompt engineering focused on rigid or inflexible prompts. The study still contributes valuable insights to the broader domain of prompt engineering, hence the moderately high relevance rating." -multidimensional evaluation for text style transfer using chatgpt,gpt-4-1106-preview,7,"The paper's relevance to prompt engineering study is moderate to high because it investigates the use of ChatGPT as an evaluator for text style transfer, which involves prompt engineering to some extent. Getting ChatGPT to perform a zero-shot evaluation entails designing prompts that effectively convey the evaluation task to the model. Therefore, the study indirectly contributes to understanding how different prompts affect the performance of large language models in generating or evaluating stylized text. However, the paper primarily focuses on the application of ChatGPT as an evaluator and correlates its performance with human judgments, rather than explicitly studying hard prefix prompts or the mechanics of prompt construction, hence the rating is not a full 10." -yes but.. can chatgpt identify entities in historical documents?,gpt-4-1106-preview,7,"The abstract indicates that the study explores ChatGPT's ability to recognize entities within historical documents, specifically addressing the specificity of prompting, which is an integral aspect of prompt engineering. Although the core focus seems to be on entity recognition and classification, the mention of 'the specificity of prompting' suggests that the study does delve into how different prompts affect ChatGPT's performance in a task relevant to natural language processing. Therefore, while it is not entirely focused on 'prompt engineering' as a primary subject area, it is relevant due to its examination of prompts' effectiveness, which is a significant component of prompt engineering studies." -chain of thought prompt tuning in vision language models,gpt-4-1106-preview,7,"The document discusses 'chain of thought prompt tuning in vision language models,' which is a specific method within prompt engineering that aims at improving the reasoning process of AI models in image-related tasks. While the topic is closely related to the concept of prompt engineering, it is more narrowly focused on vision-language models and does not directly touch on 'hard prefix prompts' which seems to be the focus of the initial inquiry. The relevance is rated as 7 since the technique of chain of thought prompting falls under the wider umbrella of prompt engineering strategies and contributes to the field, even if it is not a direct study on hard prefix prompts." -speechx: neural codec language model as a versatile speech transformer,gpt-4-1106-preview,5,"While the abstract does discuss the use of audio-text prompts for speech generation and how SpeechX leverages task-dependent prompting for various speech tasks, it does not specifically address hard prefix prompts in the context of prompt engineering within the text generation domain, which is generally implied by prompt engineering. There is relevance to prompts and task-dependent prompting, but not directly to the study of hard prefix prompts in a systematic review sense, hence the intermediate score." -is chatgpt a good personality recognizer?
a preliminary study,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it involves evaluating ChatGPT's abilities in a specific natural language processing task using various prompting strategies, including the 'level-oriented' strategy, which is a type of hard prompt engineering tailored to guide the AI's reasoning. Although the primary focus is on personality recognition, the methodology and implications of different prompting strategies, including zero-shot chain-of-thought, directly contribute to the knowledge and optimization of prompt engineering. Hence, the relevance rating is high but not maximum, as the study does not exclusively concentrate on prompt engineering but also includes the application of the derived prompts in various downstream tasks." -let's do a thought experiment: using counterfactuals to improve moral reasoning,gpt-4-1106-preview,8,"The provided abstract discusses a new prompting framework, 'Thought Experiments,' which involves the engineering of prompts to teach language models improved moral reasoning using counterfactuals. While the study itself is not directly focused on 'hard prefix prompts,' it is highly relevant to the field of prompt engineering, as it explores the design of specialized prompts to enhance the performance of language models in a specific type of reasoning task. Therefore, the relevance is quite high for those interested in the broader topic of how different prompting approaches can impact model performance. However, it doesn't address 'hard prefix prompts' explicitly, hence the rating is not a perfect 10." -symbolic math reasoning with language models,gpt-4-1106-preview,7,"The abstract provided discusses the use of large language models (LLMs) such as OpenAI's GPT-3 for solving math word problems and explores their reasoning capabilities. Although the primary focus is on these models' ability to solve mathematical problems symbolically and numerically, it does mention the role of specific prompting techniques and their influence on the model's problem-solving process. Therefore, while the abstract is not directly focused on a review of 'hard prefix prompts,' it does pertain to prompt engineering in the broader context of eliciting reasoning and explanations from a language model. This justifies a moderate-to-high relevance rating, as the paper could potentially contribute valuable insights into the efficacy of prompting strategies in complex problem-solving tasks with language models." -instructexcel: a benchmark for natural language instruction in excel,gpt-4-1106-preview,7,"The provided abstract describes a study involving the creation of a benchmark for assessing Large Language Models' (LLMs) capability to interpret natural language instructions and generate Excel-related code. This directly relates to the field of prompt engineering, as it concerns the design and testing of prompts that efficiently guide a language model to perform domain-specific tasks. However, the study does not explicitly mention 'hard prefix prompts' or a 'systematic review' of such prompts, but rather it is an example of applied prompt engineering in a practical, task-oriented context. Therefore, the relevance is high but not absolute, hence a rating of 7." -an investigation of llms' inefficacy in understanding converse relations,gpt-4-1106-preview,5,"The abstract presents a study on how LLMs process and understand converse relations, which relates to their semantic understanding capabilities. 
While this does touch on the issue of understanding structured semantics and could have indirect implications for prompt engineering (e.g., designing prompts that account for the converse relations might improve LLMs' performance), the study is not directly focused on prompt engineering or the effectiveness of hard prefix prompts. Thus, the relevance is moderate as the findings might inform prompt engineering strategies indirectly, but it is not the central theme of the study." -zero-shot generalization in dialog state tracking through generative question answering,gpt-4-1106-preview,6,"The abstract discusses the use of a generative question-answering framework with a conditional language model for improving dialog state tracking, which indirectly relates to prompt engineering in that it deals with the generation of language model queries (which can be considered as prompts) for unseen constraints and slots. The system is designed to interpret natural language queries, akin to how prompts are used to extract information from language models. However, the specific focus of the study is not on prompt engineering itself or on the systematic review of 'hard prefix prompts', but rather on the application of a generative language model to dialog systems for zero-shot adaptation. Therefore, while the study is relevant to the general field of language model applications (and thus has some relevance to prompt engineering), it does not directly address the subject of prompt engineering in relation to hard prefix prompts." -improving zero-shot generalization and robustness of multi-modal models,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it explicitly addresses the issue of improving the performance of multi-modal models by refining how text prompts are used. The research investigates how ambiguity in text prompts can lead to a performance gap in zero-shot tasks and proposes a methodology to enhance the accuracy by leveraging semantic label hierarchies in prompts. While the study does not focus on 'hard prefix prompts' per se, it does contribute to the overall understanding of how prompt design influences model predictions, making it relevant to the field of prompt engineering." -enhancing cross-lingual natural language inference by soft prompting with language-independent knowledge,gpt-4-1106-preview,7,"The abstract discusses 'Soft prompt learning framework' and its application in cross-lingual natural language inference, which is relevant to prompt engineering as it deals with a form of prompts—soft prompts. Although it does not specifically address 'hard prefix prompts,' which the original prompt inquires about, the study of soft prompts is related and contributes to the broader field of prompt engineering. It would be more relevant if the specifics of 'hard prefix prompts' were examined, therefore it doesn't receive a full score." -precise zero-shot dense retrieval without relevance labels,gpt-4-1106-preview,7,"The relevance to prompt engineering is fairly high, as the abstract describes a process where a language model is prompted to generate a hypothetical document in a zero-shot context, which is clearly a form of prompt engineering. However, the focus of the study seems to be more on dense retrieval and encoding relevance rather than on the detailed study of prompt engineering or the effects of different prompting techniques. Thus, while relevant, the study may not be addressing prompt engineering in a direct or comprehensive manner as a primary focus." 
-seqzero: few-shot compositional semantic parsing with sequential prompts and zero-shot models,gpt-4-1106-preview,7,"The paper presents a novel approach in few-shot learning and semantic parsing, which directly relates to improving the performance of language models with limited data. Prompt engineering is an aspect of tuning language models to better interpret and respond to prompts. Since SeqZero involves creating sequential prompts that aid in generating outputs for sub-problems in semantic parsing, this study is relevant to prompt engineering as it pertains to the construction and optimization of prompts for improved model performance. However, the study's primary focus is not on the prompt engineering process itself, but rather on how prompts are utilized within a specific application of semantic parsing to achieve state-of-the-art results. Therefore, it is relevant, but not exclusively focused on the prompt engineering aspect." -from images to textual prompts: zero-shot visual question answering with frozen large language models,gpt-4-1106-preview,9,"The abstract describes a method (Img2LLM) involving the generation of prompts that effectively allow large language models (LLMs) to perform zero-shot visual question-answering (VQA) tasks. This is highly relevant to prompt engineering because Img2LLM essentially acts as a prompt engineering tool, transforming image content into textual prompts that enable LLMs to understand and respond to visual data without the need for end-to-end training. It directly involves the design and application of effective prompts to improve the utility of LLMs in a cross-modality context. The only reason it does not receive a full 10 rating is because it specifically pertains to visual data and VQA, whereas prompt engineering can also encompass other forms of data and tasks." -rethinking the role of demonstrations: what makes in-context learning work?,gpt-4-1106-preview,8,"The presented paper is highly relevant to prompt engineering as it delves into the mechanics of in-context learning, which is a core aspect of prompt engineering for large language models. Understanding the role of demonstrations and the impact of various aspects of those demonstrations informs how prompts should be designed. While the paper does not directly address 'hard prefix prompts,' it does explore the components of demonstrations that influence a model's performance, which can be directly applied to the design and optimization of prompts (including hard prefixes) to improve model behavior. Therefore, the findings of this study are important for advancing the science of prompt engineering, though not exclusively focused on 'hard prefix prompts.'" -a survey for in-context learning,gpt-4-1106-preview,7,"The survey deals with in-context learning (ICL), which is closely related to prompt engineering, as ICL often involves using prompts to deliver the training examples to language models. Although hard prefix prompts, which are more specific in their constructions, are not mentioned explicitly, prompting strategies in general are an integral part of ICL. The survey's focus on the broader aspects of prompting strategies makes it relevant to the field of prompt engineering. However, a more direct discussion on hard prefix prompts would be required to make the paper fully applicable to a comprehensive systematic review on that specific topic." 
-transformers as algorithms: generalization and stability in in-context learning,gpt-4-1106-preview,6,"The study examines in-context learning and generalization in transformer models, which is relevant to prompt engineering as it relates to how these models use input prompts to infer outputs. However, the focus on 'hard prefix prompts' is not specifically addressed, meaning the study might offer insights related to prompt engineering more broadly rather than hard prefix prompts in particular. Therefore, the relevance is moderate." -differentially private in-context learning,gpt-4-1106-preview,6,"The study touches on the deployment of large language models (LLMs) and their adaptation to new tasks, which relates to prompt engineering in the broader sense of preparing LMs for specific applications. However, the focus is primarily on maintaining privacy via Differentially Private In-context Learning (DP-ICL), and not on the prompt engineering techniques such as 'hard prefix prompts'. Although prompt engineering may rely on data privacy principles when integrating private data, the abstract lacks a direct mention or analysis of 'hard prefix prompts', yielding a moderate relevance score." -what can transformers learn in-context? a case study of simple function classes,gpt-4-1106-preview,7,"The abstract discusses 'in-context learning' which is a key aspect of prompt engineering as it deals with the ability of models to learn from the information provided in a prompt. The study's focus on how transformers can learn from in-context examples to perform tasks is relevant to understanding and improving prompt-based learning mechanisms, albeit it focuses more specifically on function classes rather than hard prefix prompts. It does not directly address prompt engineering as a systematic review but is certainly related to the broader category of how models respond to prompts. Therefore, it receives a high but not maximum relevance score." -what makes good in-context examples for gpt-3?,gpt-4-1106-preview,9,"The abstract describes a study focused on optimizing the selection of in-context examples for GPT-3's prompt generation, which is highly relevant to the field of prompt engineering. The research aims to improve GPT-3's performance by retrieving semantically-similar examples to the test query, which directly involves engineering better prompts for the model. The significant improvements reported in the benchmarks further underscore the relevance of this study to prompt engineering. The only reason it does not receive a perfect score is that it is focused on GPT-3, and prompt engineering can also involve other models or broader methodologies." -developing prompts from large language model for extracting clinical information from pathology and ultrasound reports in breast cancer,gpt-4-1106-preview,9,"The abstract presents a focused application of prompt engineering to improve data extraction from medical records using a large language model, which is highly relevant to prompt engineering studies. The study evaluates the effectiveness of specialized prompts for the task and discusses their development cost and accuracy, providing concrete data about prompt engineering in a real-world context. It doesn't directly address 'hard prefix prompts', but it's substantially related to engineering prompts for specific purposes." 
-swectrl-mini: a data-transparent transformer-based large language model for controllable text generation in swedish,gpt-4-1106-preview,8,"The relevance to prompt engineering study is high because the abstract describes the 'SweCTRL-Mini' model, which utilizes special tokens in generation prompts to control the genre of the generated text. This capability is directly related to prompt engineering, where prefixes or special tokens are crafted to steer the output of language models. While the abstract does not specifically focus on a 'systematic review on hard prefix prompts,' it does highlight the use of controlled prompts which is a significant aspect of prompt engineering. Therefore, the rating is slightly lowered because the paper does not explicitly cover a systematic review but is substantially related to the concept of hard prompts in controlling text generation." -reward modeling for mitigating toxicity in transformer-based language models,gpt-4-1106-preview,4,"While the study focuses on mitigating toxicity in language models, which is related to improving AI behavior and output quality, it is tangential to the specific topic of 'prompt engineering', particularly 'hard prefix prompts'. Prompt engineering involves crafting inputs to guide AI models more effectively, whereas this study seems centered on a method (Reinforce-Detoxify) for reducing toxicity. Although related, it is not a direct study of prompt engineering techniques, thus the moderate rating reflects this indirect relevance." -optimizing continuous prompts for visual relationship detection by affix-tuning,gpt-4-1106-preview,7,"This abstract details a novel method involving affix-tuning transformers for optimizing visual relationship detection. While it does not explicitly use the term 'hard prefix prompts,' it does discuss the concept of 'affix-tuning,' which could be seen as a form of prompt engineering where a 'continuous task-specific vector' is optimized. This is somewhat relevant to prompt engineering as it relates to the training and utilization of model parameters in a task-specific manner. The approach of using 'prompt template' also indicates work in the direction of designing inputs that can influence model behavior, which is central to prompt engineering. However, the main focus appears to be on visual relationship detection rather than on the study or characterization of prompts (textual) in NLP tasks, hence not a perfect fit, but still relevant." -contextual transformer for offline meta reinforcement learning,gpt-4-1106-preview,8,"The presented abstract is relevant to prompt engineering as it discusses the use of prompts to improve sequence modeling-based offline reinforcement learning algorithms. The concept of prompt tuning is central to the study, and the introduction of the Contextual Meta Transformer (CMT) shows an innovative application of prompts in guiding the model towards desired outcomes and improving generalization on unseen tasks. The relevance is high since prompt engineering is explicitly mentioned and is a key part of the methodology. However, it focuses specifically on RL contexts and may not cover other aspects or domains of prompt engineering, hence the rating is not a full 10." -context-based narrative generation transformer (ngen-transformer),gpt-4-1106-preview,6,"The abstract indicates that the paper discusses a text generation model, the NGen-Transformer, which is relevant to natural language processing and prompt engineering to some extent. 
It emphasizes the context assimilation capabilities of the architecture, which aligns with the concept of prompt engineering as it involves providing context or prompts for generating text. Although the paper appears to focus more on the model's architecture for story generation rather than on the systematic study of hard prefix prompts, the use of prompts (in the form of user-defined context) and performance evaluation on a prompt-based dataset (WritingPrompts) makes it moderately relevant to prompt engineering studies." -chinese text paraphrase recognition based on openprompt introducing hybrid prompts,gpt-4-1106-preview,9,"The abstract discusses the use of hybrid prompts, which are directly related to prompt engineering, offering a method to enhance the knowledge extraction from pretrained language models for paraphrase recognition tasks. It demonstrates a practical application of prompt engineering in the form of OpenPrompt and hybrid prompts, providing relevant outcomes like the improvement in F1 score and accuracy when using such prompts. This study helps in understanding prompt-based methods, hence the high relevance rating to prompt engineering. Only a full read-through could confirm if it tackles 'hard prefix prompts' specifically, but the mention of hybrid prompts with [mask] slots strongly suggests relevance to the field of prompt engineering." -fp-detr: detection transformer advanced,gpt-4-1106-preview,5,"The paper 'fp-detr: detection transformer advanced' mentions the use of a concept analogous to prompts in NLP, where query positional embeddings serve as 'visual prompts' to assist in object detection. While this indicates a potential crossover concept with prompt engineering, it's specific to the visual domain rather than the textual domain typically associated with prompt engineering in NLP studies. Therefore, the relevance is moderate as it offers insight into how the idea of prompts can be applied in different contexts, but it does not address hard prefix prompts or their systematic review in NLP applications specifically." -learning to compress prompts with gist tokens,gpt-4-1106-preview,9,"The abstract describes a method directly related to prompt engineering, focusing on the efficiency of using prompts with language models. The introduction of 'gisting' to compress prompts into 'gist' tokens falls within the field of prompt engineering as it aims to optimize the use of prompts in terms of computational resources. The mentioned benefits, such as compute efficiency, compression ratios, and minimal loss in output quality, are highly relevant to the study of prompt engineering. The relevance is not rated as a perfect 10 because the specific context of 'hard prefix prompts' is not directly addressed, but the overall subject is still substantially pertinent to the field." -zero-shot entity and tweet characterization with designed conditional prompts and contexts,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it involves the use of 'hard prefix prompts' which are a form of prompt construction. It explores the capabilities of GPT-2 in zero-shot settings, which is an important aspect of prompt engineering, particularly when it comes to designing prompts that guide the model to perform specific tasks without prior task-specific training. The focus on human psychology-inspired and logical conditional prefixes is directly related to engineering prompts to produce desired outputs. 
However, the research is not exclusively focused on the systematic review of hard prefix prompts but rather on the application of these prompts for a specific task, which is why it does not receive a full score." -instruction-vit: multi-modal prompts for instruction learning in vit,gpt-4-1106-preview,8,"The paper presents an application of prompt engineering in the context of visual transformers, focusing on multi-modal prompts for instruction learning, which is highly relevant to prompt engineering. Although it primarily discusses visual transformer models and their application to image classification tasks, the concept of using text or image prompts to improve model performance is directly connected to the field of prompt engineering. The review on 'hard prefix prompts' might have a different focus compared to multi-modal prompts in visual transformers, but both share the overarching theme of enhancing model capabilities through prompts. Hence, the relevance is high, although not exact, and thus not a perfect score of 10." -clinical decision transformer: intended treatment recommendation through goal prompting,gpt-4-1106-preview,7,"The relevance of the study titled 'clinical decision transformer: intended treatment recommendation through goal prompting' to prompt engineering is moderately high. The concept of 'goal prompting' directly connects to the practice of designing prompts to achieve specific outputs in a natural language processing context. Although this paper is primarily focused on a medical application, the technique of formulating prompts to guide the decision-making output of an AI model is a key aspect of prompt engineering. The concept could potentially be applied to other areas in AI where prompt design is crucial. However, the specificity to clinical recommendations and the absence of a direct focus on hard prefix prompts or a broad range of prompt engineering applications slightly reduce its overall relevance." -adversarial transformer language models for contextual commonsense inference,gpt-4-1106-preview,8,"The paper discusses the use of both hard prompts (specific words) and soft prompts (virtual learnable templates) in the context of language model prompting to control the generation of commonsense assertions, which is directly related to prompt engineering. Although the paper's primary focus is on commonsense inference, the technique of 'hinting' as described involves engineering prompts to guide the language model, which is relevant to the study of prompt engineering." -"tryage: real-time, intelligent routing of user prompts to large language models",gpt-4-1106-preview,4,"While the described paper, 'tryage: real-time, intelligent routing of user prompts to large language models,' indirectly relates to the field of prompt engineering by addressing optimal model selection based on input prompts, it does not explicitly focus on 'hard prefix prompts' or the systematic review of these prompts. Prompt engineering generally refers to the design of input prompts to achieve better performance or more relevant responses from language models. The paper's relevance to prompt engineering is in its ability to select the best-suited model for a given prompt, which could be a component of a larger prompt engineering strategy. However, the absence of specific focus on 'hard prefix prompts' or systematic review thereof limits the relevance score."
-vima: robot manipulation with multimodal prompts,gpt-4-1106-preview,8,"The study described in the abstract illustrates a novel application of prompt-based learning in the domain of robotics rather than just natural language processing. The use of 'multimodal prompts' that includes both textual and visual tokens is directly related to the concept of prompt engineering, as it involves crafting prompts that a machine learning model interprets to perform various tasks. Although it does not explicitly address the engineering of 'hard prefix' prompts, the systematic development of multimodal prompts for robot manipulation is a significant contribution to prompt engineering research. The study's relevance is slightly lessened only due to the lack of a specific focus on 'hard prefix' prompts, which the original query stipulates." -prompt-based tuning of transformer models for multi-center medical image segmentation of head and neck cancer,gpt-4-1106-preview,7,"The paper describes the use of prompts in the form of 'learnable parameters' for fine-tuning pre-trained vision transformer models in medical image segmentation tasks, which is relevant to the concept of prompt engineering. This kind of study could potentially contribute to the field of prompt engineering as it explores how altering input prompts (in this case, learnable parameters) can adapt a model to new data. However, the focus here is on medical image segmentation and not on textual data or NLP models which are more common areas for prompt engineering. Thus, the relevance is significant but not entirely direct to studies narrowly focused on hard prefix prompts for NLP applications." -tempo: prompt-based generative pre-trained transformer for time series forecasting,gpt-4-1106-preview,5,"The relevance to prompt engineering study is moderate. The described TEMPO framework does incorporate 'selection-based prompts' which indicates some element of prompt engineering. However, the core focus is on time series forecasting using generative transformers rather than the systematic review or study of hard prefix prompts in general. Therefore, the relevance is partial as it pertains to adapting prompts for time series tasks specifically rather than prompt engineering as a broader field." -prompt-engineering and transformer-based question generation and evaluation,gpt-4-1106-preview,8,"The study presented involves the application of prompt engineering to improve the performance of a transformer-based question generation model. Since prompt engineering is integral to this research, with the effectiveness of various prompts being directly assessed and compared, it shows high relevance to the field of prompt engineering. However, it does not focus solely on 'hard prefix prompts' specifically, which may be a more nuanced subtopic within prompt engineering. Therefore, the relevance rating is not a full 10." -prompt guided transformer for multi-task dense prediction,gpt-4-1106-preview,7,"The presented abstract describes a research paper regarding a model called Prompt Guided Transformer (PGT), which explicitly utilizes task-specific prompts within its architecture. The use of prompts is integral to the model's operation, making it highly relevant to studies on prompt engineering. However, it seems to focus more on parameter efficiency and architecture design for multi-task learning rather than the systematic review of 'hard prefix prompts' or broad prompt engineering strategies, hence the rating does not reach the maximum." 
-efficient model personalization in federated learning via client-specific prompt generation,gpt-4-1106-preview,8,"The abstract describes a methodology for personalizing machine learning models in a federated learning context using client-specific prompt generation. Although it does not explicitly mention 'hard prefix prompts', it is highly relevant to prompt engineering as it discusses the generation and adaptation of prompts to improve model performance on distributed client-specific data. This is a crucial aspect of prompt engineering, which typically involves optimizing inputs to pre-trained models to achieve better customization and efficiency. Therefore, the relevance of the paper to prompt engineering is high, although it may not directly focus on the specific subset of 'hard prefix prompts'." -kosmos-2.5: a multimodal literate model,gpt-4-1106-preview,8,"The abstract describes a model that uses task-specific prompts to achieve its multimodal literate capabilities, which is highly relevant to the study of prompt engineering. The ability to adapt the model for various text-intensive image understanding tasks with different prompts through supervised fine-tuning underscores the relevance of prompt engineering to the model's functionality. Although the main focus of Kosmos-2.5 is on machine reading of text-intensive images, the mention of flexible text representations and task-specific prompts indicates that prompt engineering is a significant component of the research. The rating is not a full 10 because the primary focus seems to be on the model's multimodal capabilities rather than exclusively on prompt engineering." -automated reading passage generation with openai's large language model,gpt-4-1106-preview,7,"The study is relevant to prompt engineering as it involves using 'carefully engineered prompts' to guide GPT-3 in generating reading passages that are appropriate for a specific educational level and style. The engineering aspect of the prompts plays a crucial role in the automated item generation process mentioned in the abstract, ensuring that the AI-generated text conforms to certain standards and matches original content in terms of structure and difficulty. While the focus is on AIG and not specifically on the study of 'hard prefix prompts,' the research contributes valuable insights into how tailored prompts can be used to guide the output of a language model to meet predefined criteria. Therefore, it has a significant relevance to the field of prompt engineering, even though it might not directly address the concept of hard prefix prompts in systematic review terms." -prompt tuning gpt-2 language model for parameter-efficient domain adaptation of asr systems,gpt-4-1106-preview,8,"The abstract discusses the use of 'domain-prompts,' which seems to be a technique closely related to prompt engineering, as it involves training domain-specific embeddings to adapt a language model to new domains. This method resembles hard prompt tuning where prompts are fixed and designed to prime the model for a specific task or domain. The study's relevance is high for prompt engineering research, particularly within the context of ASR systems and parameter-efficient adaptations. However, it doesn't discuss 'hard prefix prompts' specifically; it mentions 'domain-prompts' which may or may not be exactly the same concept. Hence, the rating is not a full 10, reflecting this small uncertainty." 
-phenaki: variable length video generation from open domain textual description,gpt-4-1106-preview,4,"The abstract describes a model, Phenaki, which deals with generating videos from textual descriptions using a novel representation and learning approach. This is relevant to prompt engineering to the extent that it involves creating prompts (textual descriptions) that are used to generate content (videos). However, the focus of the study appears to be more on video synthesis and representation learning rather than on the design or optimization of the textual prompts themselves ('hard prefix prompts'). Therefore, the relevance is moderate, indicating a tangential connection to prompt engineering, particularly in how text prompts are used to generate complex media like videos, rather than a direct study on the engineering of prompts." -llama-adapter: efficient fine-tuning of language models with zero-init attention,gpt-4-1106-preview,7,"The abstract describes the development of a method for fine-tuning language models using a set of learnable adaption prompts, which is relevant to prompt engineering, particularly in the context of instruction-following models. The integration of these prompts into the higher transformer layers is a technique related to prompt engineering as it involves modifying the input sequence to achieve a desired behavior from the model. However, the study seems to be more focused on an efficient fine-tuning mechanism rather than on the specifics of designing prompts (hard prefixes), so it is not a perfect match to prompt engineering studies that focus exclusively on hard prefix prompts. Therefore, the rating acknowledges the relevance of the learnable adaption prompts but is not a full 10 due to the broader scope of the study." -in-context learning of large language models explained as kernel regression,gpt-4-1106-preview,7,"The study presents an analysis of in-context learning in large language models (LLMs), a concept closely related to prompt engineering since in-context learning involves providing LLMs with carefully crafted prompts (examples) to shape their output without updating the models' parameters. Understanding the mechanism behind LLMs' in-context learning capabilities could contribute valuable insights into the design of effective prompts, potentially improving prompt engineering strategies. However, the study does not directly focus on 'hard prefix prompts,' which are specific types of prompts, or on a systematic review of prompt engineering studies, so the relevance is substantial but not complete." -prompt tuning of deep neural networks for speaker-adaptive visual speech recognition,gpt-4-1106-preview,8,"The study presents prompt tuning methods for speaker-adaptive Visual Speech Recognition (VSR), which parallels prompt tuning in Natural Language Processing (NLP). Though the context is VSR rather than text-based models, the principles of prompt engineering (e.g., fine-tuning prompts for adaptation without changing the entire pre-trained model) are highly relevant to the prompt engineering study. As such, the techniques and results from this study could inform prompt engineering practices, especially those that deal with adaptation to new data or domains using small amounts of adaptation data. This makes it significantly relevant, though slightly less if the focus of the prompt engineering study is strictly on text-based NLP models." 
-à-la-carte prompt tuning (apt): combining distinct data via composable prompting,gpt-4-1106-preview,9,"The abstract discusses 'À-la-carte Prompt Tuning (APT)', which is directly related to prompt engineering as it deals with the methodology of tuning and composing prompts for transformer-based models. The approach of training individual prompts and composing them based on user-defined criteria is highly relevant to the study of prompt engineering. This could offer insights into the mechanics of prompt tuning and its practical applications in customizing machine learning models to specific data sets or user preferences. The only reason it doesn't score a perfect 10 is that the description does not explicitly mention 'hard prefix prompts', thus it may not cover the entire scope of the prompt engineering study mentioned in the prompt." -proof of concept: using chatgpt to teach emergency physicians how to break bad news,gpt-4-1106-preview,7,"The abstract highlights the use of detailed prompts to create realistic clinical scenarios and provide feedback, which directly relates to the concept of prompt engineering. The study illustrates the impact of carefully designed prompts on the AI's performance in a specific application (medical training), which is relevant to the field of prompt engineering. However, the focus is not solely on the theoretical or systematic aspects of prompt engineering but rather on its practical implementation in a medical training context, which may not cover the depth or breadth of a 'comprehensive systematic review on hard prefix prompts' as the original query suggests." -promptonomyvit: multi-task prompt learning improves video transformers using synthetic scene data,gpt-4-1106-preview,7,"The relevance of this study to prompt engineering is moderate to high because it introduces the concept of 'task prompts' within video transformers, which are specialized parameters used for enhancing performance on different video understanding tasks. 'Promptonomy' is essentially an application of prompt engineering in the context of video transformers, where prompts are designed to model task-specific structure and improve machine learning model aptitude. While the study does not explicitly cover 'hard prefix prompts' or their systematic review, it does involve the creation and utilization of prompts in a learning context, thus contributing to the broader field of prompt engineering. However, the main focus is on the usage of synthetic scene data and improving video transformers, so it is not entirely centered on the theory or methodology of prompt engineering itself." -can language models automate data wrangling?,gpt-4-1106-preview,5,"The content seems to address the utilization of language models for data wrangling tasks, and while it does imply a certain level of task design and user interaction with language models (which could be related to prompt engineering), the focus on data wrangling rather than prompt design specifically for eliciting desired outputs from a language model suggests that this isn't a comprehensive study on hard prefix prompts. There is potential crossover in terms of understanding how prompts work in the context of data wrangling, but it is not directly about prompt engineering."
-textdiffuser: diffusion models as text painters,gpt-4-1106-preview,4,"While the study introduces TextDiffuser, which involves generating images from text prompts and might have indirect applications in understanding and improving how models handle text prompts, the main focus is on image generation and enhancing text coherence within visual content. The mention of prompts relates more to the input for image generation rather than the study of prompt engineering itself. Therefore, the relevance to prompt engineering study is moderate as the techniques developed could be tangentially useful, but it is not the central theme of the research." -language prompt for autonomous driving,gpt-4-1106-preview,8,"The abstract describes a study focused on the intersection of natural language prompts and autonomous driving technology, which involves prompt engineering to some extent. Although the primary application is within the domain of computer vision and autonomous driving, the creation of the object-centric language prompt set and the formulation of a new prompt-based driving task indicates a substantial involvement of prompt engineering. The study's goal to predict object trajectories based on language descriptions necessitates understanding and engineering of prompts to be suitable for machine comprehension within a driving context. This is highly relevant to prompt engineering as it deals with generating and utilizing prompts to guide AI models. However, the rating is not a perfect 10 as the core application differs from general prompt engineering studies and focuses specifically on driving scenarios." -vector representations of idioms in conversational systems,gpt-4-1106-preview,5,"The study is partially relevant to prompt engineering as it touches on how training on specific language constructs (in this case, idioms) improves the performance of conversational systems. While it does not directly address 'hard prefix prompts' or the systematic review of such prompts, understanding how idiomatic expressions are handled by NLP systems can inform prompt design strategies and might be leveraged in the creation of more sophisticated prompts. This relevance is more tangential than direct to the focus of prompt engineering, thus warranting a mid-range rating." -generalization properties of retrieval-based models,gpt-4-1106-preview,4,"While the abstract discusses retrieval-based models and their generalization properties, which are relevant to the broader field of machine learning and could potentially be applied to prompt engineering with respect to selecting the best prompts in a retrieval-based manner, it does not directly address 'hard prefix prompts' or prompt engineering specifically. Therefore, its relevance to a systematic review on hard prefix prompts in prompt engineering studies is tangential rather than central." -clinical prompt learning with frozen language models.,gpt-4-1106-preview,8,"The abstract discusses the application of prompt learning in a clinical context, which is a subset of prompt engineering. It highlights the advantages of prompt learning over traditional fine-tuning, such as fewer trainable parameters, less training time, and lower computational resources, all of which are key considerations in prompt engineering. Although it does not explicitly mention 'hard prefix prompts,' the focus on prompt learning's efficiency and effectiveness is highly relevant to the overarching field of prompt engineering. 
The reason for not giving a full score of 10 is that the study is specific to clinical applications rather than being a broad systematic review of hard prefix prompts in general." -fedyolo: augmenting federated learning with pretrained transformers,gpt-4-1106-preview,7,"The abstract discusses modularity in the context of using modules such as prompts for adapting large pretrained transformer models in federated learning setups. While it does not specifically focus on 'hard prefix prompts,' it does touch on the general relevance of prompts (or similar kinds of modules) for model adaptation. This relevance is given a rating of 7 because the study could provide useful insights into the applications of prompt engineering within federated learning, even though it does not directly focus on a comprehensive systematic review of hard prefix prompts." -chinese asr and ner improvement based on whisper fine-tuning,gpt-4-1106-preview,7,"The abstract indicates that the paper explores how to improve Chinese ASR and NER tasks by fine-tuning Whisper, touching on the aspect of designing different prompts for various generative tasks, which is closely related to prompt engineering. While the main focus seems to be on improving ASR and NER performance, the inclusion of prompt design as a part of the fine-tuning process makes it relevant to the study of prompt engineering. However, the mention of prompts is not the central focus of the paper, which suggests that although prompt engineering is covered, it is not the primary subject matter, hence the rating of 7." -prores: exploring degradation-aware visual prompt for universal image restoration,gpt-4-1106-preview,8,"The abstract discusses the use of degradation-aware visual prompts within a universal image restoration model, which is a form of prompt engineering applied to visual tasks rather than language tasks. It touches on the principle of encoding information (degradation types) into prompts to guide the behavior of a model (Vision Transformer), a concept parallel to hard prefix prompts in NLP. While the paper does not deal directly with linguistic prompt engineering, the underlying ideas of customizing prompts to steer model behavior are highly relevant to the study of prompt engineering as a broader concept. Hence, a lower rating would be given if the question strictly asked for relevance to text-based prompts, but since it outlines the foundation of 'prompt engineering' which can extend beyond just language models, a higher rating is appropriate." -attention satisfies: a constraint-satisfaction lens on factual errors of language models,gpt-4-1106-preview,4,"The abstract provides insights into the internal mechanisms of Large Language Models (LLMs) related to factual accuracy, which is peripherally relevant to prompt engineering. Prompt engineering often involves crafting prompts to elicit accurate and reliable responses from a model. The study's focus on the attention mechanism and factual accuracy can be indirectly useful in understanding how prompts might be structured to improve the likelihood of factually correct outputs. However, the study does not directly investigate hard prefix prompts or prompt engineering techniques, therefore the rating isn't higher." -making humanoid robots teaching assistants by using natural language processing (nlp) cloud-based services,gpt-4-1106-preview,7,"The study involves using NLP and GPT language models, which are relevant to prompt engineering.
The research is focused on fine-tuning GPT models with prompts derived from environmental context and robot actions, directly linking to the construction of prompts for language models. The rating is not a full 10 because the main application is on human-robot interaction and the deployment of these models, rather than on the systematic review of 'hard prefix prompts' or the discipline of prompt engineering itself." -prompt generation networks for input-based adaptation of frozen vision transformers,gpt-4-1106-preview,9,"The abstract describes a novel approach to adapt frozen vision transformers via visual prompt learning, which is highly relevant to prompt engineering as it deals with generating and optimizing prompts that can be input-dependent. Although the study focuses on the visual domain, the techniques and concepts of prompt generation, learning, and the mentioned 'prompt inversion' trick are applicable and insightful for prompt engineering for different modalities. It achieves adaptation without modifying the model and is part of the broader discussion on how to efficiently use large-scale models, a significant aspect of prompt engineering. The relevance is slightly less than perfect because the specific focus on vision transformers and input-dependent prompts may not cover the entire scope of hard prefix prompts directly, but the principles are closely related." -optimizing language models for argumentative reasoning,gpt-4-1106-preview,8,"The provided abstract details an investigation into optimizing a language model for argumentative reasoning tasks, which includes an evaluation of different optimization strategies such as prompt programming. Prompt engineering, which refers to the design and usage of prompts to guide language models, is closely related to the study's focus on prompt programming as one of the optimization strategies. Although the term 'hard prefix prompts' is not explicitly mentioned, prompt programming is a technique that often involves the use of hardcoded prompts (which could be considered 'hard prefix prompts') to direct a model's output. Therefore, this study is highly relevant to the broader field of prompt engineering; however, the relevance is slightly lower as the study does not solely concentrate on hard prefix prompts but also considers other optimization strategies." -prompt enhanced generative mrc framework for pancreatic cancer ner,gpt-4-1106-preview,7,"The paper directly engages with prompt engineering through its introduction of continuous prompts to improve the performance of a generative NER task within the context of medical document analysis. The use of prompts in the self-attention mechanism of the Transformer model is relevant to the study of how prompts can be optimized to facilitate better understanding and generation of responses by the model. While the focus is not exclusively on 'hard prefix prompts' and it is more application specific (medical NER), it does contribute to the broader understanding of prompt engineering in NER tasks." -bootstrapping vision-language learning with decoupled language pre-training,gpt-4-1106-preview,8,"The paper describes the use of a model (P-Former) to predict ideal prompts within the context of vision-language learning, by focusing on language component optimization. This relates closely to prompt engineering, as the research aims to determine how best to elicit desired responses from language models, which is a fundamental aspect of prompt engineering. 
The methodology of prompt prediction is directly relevant to the art of crafting effective prompts. However, the specific application to vision-language learning might be slightly tangential to more general prompt engineering studies that might not focus on multimodal contexts. Despite that, the principles discussed could nonetheless provide valuable insights into prompt engineering for LLMs in general." -prompt-based ingredient-oriented all-in-one image restoration,gpt-4-1106-preview,7,"The abstract describes a novel technique for image restoration that uses 'prompt-based learning' as part of its methodology. This indicates some relevance to prompt engineering as it pertains to the use of prompts to guide the decoder in image processing tasks. However, the term 'prompt-based learning' in this context is more related to the domain of image restoration rather than to the development and study of textual or linguistic prompts in AI and machine learning. Even though the technique involves 'prompts' in some form, it may not specifically address the systematic review of 'hard prefix prompts' as one might expect in the study of AI or natural language processing. Therefore, the relevance is moderate since it's within the area of prompts as a concept but not directly focused on the linguistic aspect of prompt engineering." -hierarchical prompt tuning for few-shot multi-task learning,gpt-4-1106-preview,9,"The paper is highly relevant to prompt engineering as it discusses a novel approach to prompt tuning, which is a key aspect of prompt engineering. The hierarchical prompt tuning model addresses the need for effective prompts in multi-task learning, especially in few-shot scenarios. The introduction of shared prompts, auto-adaptive prompts, and task-specific prompts directly pertains to the methodology of engineering prompts to enhance performance. Although the study is not specifically about 'hard prefix prompts', the relevance to prompt engineering is strong because the paper contributes to the broader understanding of how to construct and implement prompts in complex, multi-layer neural networks such as Transformers." -pm-detr: domain adaptive prompt memory for object detection with transformers,gpt-4-1106-preview,8,"The document describes the use of prompts (though in a different context from language models) to improve the domain adaptability of object detection models. It focuses on prompt-based strategies to bridge the gap between different data distributions. The concept of 'prompt memory' is relevant to prompt engineering, as it involves using prompts to encode domain-specific knowledge which can then influence the behavior of a model. However, the application of prompts here differs from their use in language models, where the term 'prompt engineering' is often used to describe the process of crafting inputs that elicit desired outputs. In this context, prompts are aiding domain adaptation of object detection systems rather than natural language processing tasks. Nonetheless, the use of prompts as a technique to improve machine learning models is relevant to the broader field of prompt engineering study." -visual prompt flexible-modal face anti-spoofing,gpt-4-1106-preview,7,"The abstract discusses the development of a visual prompt-based approach for improving the robustness of face anti-spoofing systems, which is indirectly related to prompt engineering. 
Although prompt engineering is primarily associated with natural language processing and the use of textual prompts in language models, the abstract suggests an adaptation of prompt learning principles to the domain of computer vision and multimodal learning. The concept of 'visual prompts' and their application in a flexible-modal face anti-spoofing task is relevant to the study of how prompts can be engineered and utilized in AI models, extending beyond textual inputs to visual and multimodal contexts. The work is not a direct match to 'hard prefix prompts,' as it extends the concept of prompts to a different domain, so the rating does not reach the maximum." -harnessing gpt-3.5-turbo for rhetorical role prediction in legal cases,gpt-4-1106-preview,9,"The provided abstract discusses the implementation of prompting strategies in GPT-3.5-turbo for a specialized task within the legal domain. The focus on one-stage elicitation techniques, the influence of different prompting strategies such as zero-shot learning, task specification, and the exploration of hard prefix prompts (detailed in the mention of the textual context, number of examples, and label definitions) are highly relevant to prompt engineering. Although it doesn't exclusively concentrate on 'hard prefix prompts,' the exploration and systematic review of prompting strategies contributing to performance improvement are central to prompt engineering. The slight deduction in the rating acknowledges that the study is about prompt engineering as a whole rather than solely about 'hard prefix prompts.'" -on the relationship between skill neurons and robustness in prompt tuning,gpt-4-1106-preview,9,"The paper discusses Prompt Tuning, which is highly relevant for prompt engineering as it studies how prompt tuning affects the robustness and transferability of pre-trained language models for specific tasks. Although it does not directly address 'hard prefix prompts', the concept of 'skill neurons' and their role in prompt tuning is crucial for understanding and engineering effective prompts. It hints at an underlying mechanism that could influence the construction and refinement of prompts, potentially making this area of study valuable for those engaged in prompt engineering." -efficient domain adaptation of language models in asr systems using prompt-tuning,gpt-4-1106-preview,8,"The abstract presents research on using prompt-tuning, a form of prompt engineering, for domain adaptation in ASR systems. Although the focus is on ASR systems and not specifically on 'hard prefix prompts', prompt-tuning is related to prompt engineering studies. The research seems to involve adapting language models to specific domains using prompts, which is a core aspect of prompt engineering. The methodology could be highly relevant to those interested in tailoring LMs for specific applications without the costs associated with maintaining multiple domain-specific models. However, it falls short of a perfect score because it does not address hard prefix prompts specifically, but rather the broader application of prompt-tuning for domain adaptation." -all birds with one stone: multi-task learning for inference with one forward pass,gpt-4-1106-preview,8,"The focus on utilizing a prompt-sharing module to enable a model to handle multiple tasks with a single forward pass is highly relevant to prompt engineering, as it directly pertains to the design and efficiency of prompts in multi-task learning.
Although the abstract does not specifically mention 'hard prefix prompts,' the concept of prompt design for task efficiency and model performance improvement is central to the topic of prompt engineering. Therefore, the relevance rating is relatively high, with a couple points deducted for not mentioning the specific aspect of 'hard prefix prompts.'" -medical intervention duration estimation using language-enhanced transformer encoder with medical prompts,gpt-4-1106-preview,7,"The study describes a framework that integrates medical prompts within a transformer encoder to improve the estimation of medical intervention durations. While this approach does utilize 'prompts' in the form of medical queries to improve the model's understanding of free-text EHR data, these prompts do not appear to be 'hard prefix prompts' in the context of prompting techniques typically discussed in natural language processing (NLP). The focus of the study is not on exploring the design or effectiveness of various prompts but rather on the application of medical prompts to harmonize different data modalities for medical predictions. Therefore, while prompts are relevant to the system being developed, the study does not seem to primarily address 'prompt engineering' as it would pertain to the generation or optimization of prompts themselves. This results in a moderate rating of relevance." -ctrl: a conditional transformer language model for controllable generation,gpt-4-1106-preview,8,"The referenced paper describes a language model (CTRL) designed to incorporate control codes that can direct the generation of text according to specified attributes, which is highly relevant to the field of prompt engineering. Although the paper does not directly discuss 'hard prefix prompts,' it is nonetheless pertinent because control codes essentially function as a form of prompts to guide the model output. The ability to use these codes aligns with the broader goal of prompt engineering, which is to control and guide the behavior of language models. Therefore, the paper is quite relevant to the study of prompting methods in AI, even if it doesn't address 'hard prefix prompts' specifically." -exploring visual prompts for adapting large-scale models,gpt-4-1106-preview,8,"The abstract indicates the study focuses on 'visual prompting' to adapt large-scale models in vision, which is a form of prompt engineering. While 'hard prefix prompts' are not directly mentioned, the concept of adapting models by using prompts (here, visual) is central to the discussed approach, thus making it relevant to the field of prompt engineering. The study’s relevance could be even higher if it specifically related to textual prompts and hard prefixes, but its focus on a related concept in the visual domain still provides valuable insights that could be transferable to other forms of prompt engineering." -gpt4mia: utilizing geneative pre-trained transformer (gpt-3) as a plug-and-play transductive model for medical image analysis,gpt-4-1106-preview,4,"The relevance of the paper to prompt engineering is tangential rather than direct. It discusses using GPT-3 as a tool for medical image analysis, which implies a level of prompt engineering in structuring the interaction between the language model and the image analysis tasks. The paper's focus on technical treatments for efficiency and effectiveness might involve innovative prompt design strategies, which is pertinent to prompt engineering. 
However, because the central theme is the application of GPT-3 to medical image analysis rather than prompt engineering itself, the relevance is moderate." -pretraining data mixtures enable narrow model selection capabilities in transformer models,gpt-4-1106-preview,5,"The study relates indirectly to prompt engineering, as it deals with the ability of transformer models to perform in-context learning and adapt to new tasks based on their pretraining data mixture. Prompt engineering typically involves designing prompts to elicit desired behaviors or responses from LLMs. The relevance lies in understanding how different pretraining data affects the model's response to prompts, which is crucial for prompt engineers. However, the study does not explicitly focus on 'hard prefix prompts' which would be more directly aligned with prompt engineering, thus the rating reflects a moderate relevance." -domain prompts: towards memory and compute efficient domain adaptation of asr systems,gpt-4-1106-preview,8,"The abstract is highly relevant to prompt engineering as it discusses domain-prompts, which is a form of prompt engineering for adapting transformer-based language models to specific domains with minimal additional parameters. While it focuses specifically on ASR systems, the concept of domain adaptation through prompts is applicable to wider studies of prompt engineering. The rating is not a full 10 because the paper does not address 'hard prefix prompts' specifically, but rather uses the concept of domain-specific prompts generally." -planning with learned entity prompts for abstractive summarization,gpt-4-1106-preview,8,"The study discusses the use of entity chains as prompts to improve the quality of abstractive summarization, which is a form of prompt engineering. The research directly involves engineering prompts (entity chains) to guide a model's generation process, making it highly relevant to the subject of prompt engineering. However, it is not solely focused on 'hard prefix prompts', as it encompasses a broader scope of learned entity prompts for content planning in summarization tasks." -a survey of controllable text generation using transformer-based pre-trained language models,gpt-4-1106-preview,7,"The provided abstract discusses the controllability of text generation using Transformer-based pre-trained language models, which is relevant to the field of prompt engineering since it deals with methods to direct language models in generating text that fulfills certain constraints. However, the abstract does not specifically mention 'hard prefix prompts' or delve into the topic of prompt engineering within controllable text generation. Therefore, while the survey has relevance due to its focus on control mechanisms, which could encompass prompt engineering techniques, it is not a perfect match for a study specifically on 'hard prefix prompts.' The rating reflects the general relevance but indicates that the document is not exclusively focused on the narrower subject of prompt engineering, especially centered around hard prefix prompts." -promptcal: contrastive affinity learning via auxiliary prompts for generalized novel category discovery,gpt-4-1106-preview,8,"The abstract discusses advancements in semi-supervised learning through the use of auxiliary visual prompts and contrastive learning methods. Though not explicitly centered on 'hard prefix prompts,' the research explores the usage of prompts (in the form of visual cues) to improve semantic clustering and discover novel classes. 
This is closely related to 'prompt engineering,' as it deals with the optimization of prompts to enhance model performance. Therefore, it is quite relevant to the field of prompt engineering, though it may not directly address the systematic review aspect of hard prefix prompts mentioned in the initial study description." -text style transfer between classical and modern chinese through prompt-based reinforcement learning,gpt-4-1106-preview,8,"The text discusses the use of an unsupervised prompt-based reinforcement learning (PBRL) framework for style transfer in text, which is highly relevant to prompt engineering as it involves the use of prompts to guide the learning process. While the application is specific to style transfer between classical and modern Chinese, the underlying technique is applicable to prompt engineering broadly. It does not directly study 'hard prefix prompts' as the original study query suggests, but it does contribute to the overall field of prompt engineering." -matchprompt: prompt-based open relation extraction with semantic consistency guided clustering,gpt-4-1106-preview,8,"The text describes a novel approach to open relation extraction using a prompt-based framework, which directly relates to the field of prompt engineering as it entails designing and utilizing prompts to train models with a small amount of pre-defined relational instances. This suggests innovation in the area of using prompts for machine learning tasks, which is relevant to the prompt engineering study. It is not a 'comprehensive systematic review on hard prefix prompts', but it is a practical application of prompt engineering principles, thus the relevance rating is 8 rather than 10." -dynamic visual prompt tuning for parameter efficient transfer learning,gpt-4-1106-preview,8,"The paper describes a method of parameter efficient transfer learning through the creation of dynamic, instance-wise tokens or 'prompts' for each image in visual tasks. While it is not directly related to 'hard prefix prompts', it discusses 'prompt tuning', which falls under the broader category of prompt engineering. The proposed method aims to adapt pre-trained models to new tasks more efficiently, which is relevant to the study of how prompts can be engineered to improve model performance. The high relevance score is given because the paper's core focus on dynamic visual prompts is closely aligned with the principles of prompt design and optimization, which are essential concepts in prompt engineering studies." -efficiently aligned cross-lingual transfer learning for conversational tasks using prompt-tuning,gpt-4-1106-preview,9,"The abstract discusses the use of 'prompt-tuning-based method for learning alignment prompts' which is directly related to prompt engineering. Specifically, it addresses the development of prompts that facilitate cross-lingual transfer learning, a key component of prompt engineering in the context of creating efficient language models for conversational tasks. The systematic review might explore various prompt techniques, including this efficient prompt-tuning method, making it highly relevant to the study. The reason it's not a perfect 10 is that the focus is also on the creation of a multilingual dataset and cross-lingual transfer learning, which, while related, are broader topics than prompt engineering alone." 
-adpl: adversarial prompt-based domain adaptation for dialogue summarization with knowledge disentanglement,gpt-4-1106-preview,9,"The paper presents an Adversarial Disentangled Prompt Learning (ADPL) model, which is relevant to the study of prompt engineering as it involves the creation and utilization of prompts (domain-invariant, domain-specific, and task-oriented) to improve domain adaptation in dialogue summarization. The focus on prompt-based methods for zero-shot learning in this context is highly pertinent to understanding how prompts can be engineered to enhance the performance of language models on specific tasks. Despite not focusing exclusively on 'hard prefix prompts', which the original query asks about, its contribution to prompt engineering methods warrants a high relevance score."
-clinical concept and relation extraction using prompt-based machine reading comprehension,gpt-4-1106-preview,7,"The described study makes significant use of a prompt-based machine reading comprehension (MRC) architecture in the context of natural language processing for clinical data, which is directly related to the use of prompts in AI systems. Prompt engineering is central to designing the MRC architecture that can comprehend and extract relevant information from clinical texts. The fact that different prompting strategies were examined for their effects on MRC model performance bolsters its relevance to prompt engineering. However, the focus on clinical concept and relation extraction may mean that the specific prompt engineering details relevant to other domains or applications of prompt engineering are not explored in the abstract provided. Thus, the content is relevant due to its reliance on prompts and their optimization in an MRC system, but it is not exclusively focused on the concept of 'hard prefix prompts' as might be expected in a systematic review specifically dedicated to that subject."
-is prompt-based finetuning always better than vanilla finetuning? insights from cross-lingual language understanding,gpt-4-1106-preview,8,"The abstract provided discusses the comparison of prompt-based fine-tuning versus vanilla fine-tuning in the context of cross-lingual language understanding tasks. This is highly relevant to the field of prompt engineering, as it studies the effectiveness of prompt-based approaches in model training. It may not be a perfect match to 'hard prefix prompts' specifically, but the exploration of prompt-based fine-tuning methods, such as the proposed ProFiT pipeline, contributes to the broader understanding of prompt efficacy in different scenarios, including multilingual tasks, and hence holds substantial relevance to studies in prompt engineering."
-knowledge transfer with visual prompt in multi-modal dialogue understanding and generation,gpt-4-1106-preview,7,"The study described involves the use of prompts in the context of multi-modal data fusion and dialogue generation, which is relevant to prompt engineering in terms of developing methods to maximize the efficacy of prompts. However, the term 'hard prefix prompts' is not mentioned, suggesting that while the study is within the domain of prompting (visual prompts in this case), it may not directly address the particular area of 'hard prefix prompts'. Therefore, the relevance is notable but not complete, hence a rating of 7."
-plato-ad: a unified advertisement text generation framework with multi-task prompt learning,gpt-4-1106-preview,4,"The abstract discusses PLATO-Ad, a framework for online advertisement text generation that incorporates multi-task prompt learning, which is related to prompt engineering. However, it does not focus specifically on 'hard prefix prompts' or systematic reviews of them, but rather the application of prompt learning to advertisement text generation. The relevance to prompt engineering exists due to the implementation of prompts in the learning process, but because the focus is on a specific application and not on a broad study or review of prompts, the rating is not higher."
-motif-based prompt learning for universal cross-domain recommendation,gpt-4-1106-preview,7,"The abstract describes a motif-based prompt learning framework aimed at enhancing cross-domain recommendation systems. Although the study focuses primarily on recommendations, the use of 'motif-based prompt learning' relates closely to prompt engineering, especially in the context of adapting machine learning models to respond to different kinds of data inputs or prompts. Prompt engineering is about designing prompts that help models perform better on specific tasks. The paper's mention of 'adaptable prompt parameters' and the integration of these into pre-training and fine-tuning paradigms indicates that it deals with adjusting how models interact with prompts. However, it does not strictly focus on 'hard prefix prompts' as the study prompt requests, thus the relevance rating is not a full 10."
-pro-cs : an instance-based prompt composition technique for code-switched tasks,gpt-4-1106-preview,8,"The abstract discusses a prompt composition technique for code-switched tasks, which is highly relevant to prompt engineering, as it directly pertains to designing prompts that effectively interact with language models on code-switched data. The fact that it compares its approach to both prompt-tuning and fine-tuning indicates an in-depth analysis of prompts in the context of significant efficiency in parameter use. The relevance is not rated a full 10 because the abstract does not explicitly mention 'hard prefix prompts,' which could be a more specific aspect of prompt engineering, but the overall content is very relevant to the broader field of prompt engineering study."
-"continually detection, rapidly react: unseen rumors detection based on continual prompt-tuning",gpt-4-1106-preview,8,"The paper is highly relevant to the field of prompt engineering due to its focus on the 'Continual Prompt-Tuning RD (CPT-RD)' framework, which relates directly to the engineering and optimization of prompts in the context of rumor detection. The study addresses challenges such as catastrophic forgetting and knowledge transfer in prompt-tuning, which are central to improving the utility of prompts in continual learning scenarios. The deduction of two points is due to the paper not directly addressing 'hard prefix prompts' specifically, but the broader context of prompt-tuning is still substantially relevant to the study of prompt engineering."
-"continuous detection, rapidly react: unseen rumors detection based on continual prompt-tuning",gpt-4-1106-preview,9,"This paper is highly relevant to prompt engineering as it presents a framework for 'Continual Prompt-Tuning' specifically designed to tackle rumor detection. It directly deals with the optimization and storage of task-specific soft-prompts, which are central to the concept of prompt engineering within the context of language models. It also introduces strategies for knowledge transfer and a hypernetwork approach, both of which could influence future work in prompt engineering for continual learning scenarios. The only reason it is not a 10 is that it is specific to the context of rumor detection and the systematic review aspect might not be covered comprehensively."
-soft prompt guided joint learning for cross-domain sentiment analysis,gpt-4-1106-preview,8,"The abstract discusses a 'soft prompt-based joint learning method', which is highly relevant to the topic of prompt engineering, particularly in the context of transfer learning and aspect term extraction. It explores how learnable vectors, as soft prompts, can be used to bridge domain differences and enhance model performance. While not focused exclusively on hard prefix prompts, the concept of soft prompts is intrinsically linked to prompt engineering, thus the study can contribute valuable insights to the broader field of prompt engineering research."
-adaptive prompt learning with distilled connective knowledge for implicit discourse relation recognition,gpt-4-1106-preview,9,"The abstract describes a novel approach in the area of prompt engineering, focusing on the development of an advanced prompt learning framework called AdaptPrompt, which uses continuous prompts and connective knowledge distillation. This is highly relevant to the field of prompt engineering because it addresses a common challenge in the manual design of prompts and offers a solution that could be broadly applicable to other prompt engineering tasks. Although the study is specifically applied to implicit discourse relation recognition, the methods and findings are likely to have implications for prompt engineering in general, making it a valuable study within this domain. The only reason the rating is not a perfect 10 is that it focuses on a specific usage of prompt engineering within the context of discourse relation recognition, which may not explicitly cover all aspects of prompt engineering studies, such as hard prefix prompts."
-prompt learning with knowledge memorizing prototypes for generalized few-shot intent detection,gpt-4-1106-preview,7,"The abstract mentions the use of 'prompt learning' as a technique within a two-stage learning framework for the purpose of Few-Shot Intent Detection. Prompt learning is relevant to prompt engineering as it involves designing and utilizing prompts to teach models specific tasks. However, the focus on 'knowledge memorizing prototypes' and issues specifically connected with intent detection makes it less directly relevant to the broader field of prompt engineering study. The use of prompts is a significant aspect of the research, but the particulars seem more narrowly focused on a specific application (intent detection) rather than on hard prefix prompts in general."
-rgb-t tracking via multi-modal mutual prompt learning,gpt-4-1106-preview,6,"The study uses the concept of 'prompt learning' in the context of computer vision, specifically for RGB-T tracking, which demonstrates an application of prompt engineering to improve the interaction between different data modalities (visible and thermal images) and enhance the model's performance. The relevance to prompt engineering is evident in the design of the 'lightweight prompter' and the use of attention mechanisms as a form of information transfer, which can be seen as a specialized application of prompts in machine learning. However, the study does not directly focus on hard prefix prompts or their systematic review, which limits its relevance to the specific area of prompt engineering referred to in the original query. It is more related to the application and implementation of prompts in a practical task rather than the study of prompt engineering itself."
-visual-attribute prompt learning for progressive mild cognitive impairment prediction,gpt-4-1106-preview,7,"The title suggests the study involves a machine learning model using prompts to predict progressive mild cognitive impairment (pMCI), indicating that prompt engineering is a fundamental part of the research. Specifically, the mention of a 'prompt learning model' and 'global prompt token' implies an exploration into how prompts interact with the model to improve performance. This is relevant to prompt engineering as it relates to designing and utilizing prompts to guide machine learning models effectively. However, it does not explicitly mention 'hard prefix prompts' and seems to focus on a specific application rather than a broad systematic review, so it may not be entirely comprehensive in the context of prompt engineering studies."
-can unsupervised knowledge transfer from social discussions help argument mining?,gpt-4-1106-preview,7,"The abstract describes a study focused on argument mining that utilizes a novel prompt-based strategy for inter-component relation prediction, which is relevant to the concept of prompt engineering. The use of finetuned language models in conjunction with prompt-based techniques to leverage discourse context indicates a level of innovation and practical application in the realm of prompt engineering, warranting a rating of 7. The relevance is not at the maximum because the study is not exclusively concentrated on hard prefix prompts or comprehensive systematic review, but it does provide insights into the domain of prompt engineering within the context of argument mining."
-hetgpt: harnessing the power of prompt tuning in pre-trained heterogeneous graph neural networks,gpt-4-1106-preview,7,"While the title and abstract describe a study related to prompt engineering, the context differs from what's typically associated with 'hard prefix prompts' in prompt engineering study, which is usually referenced in the field of Natural Language Processing (NLP). Here, the concept of 'prompting' is being applied to the domain of heterogeneous graph neural networks (HGNNs) and their pre-training routines. Although it does deal with prompts in an abstract sense, and may be relevant to the broader discussion on the utility of prompt-like methods in AI model training, it is not specifically about prompt engineering in the context of language models or text-based neural networks. Therefore, it is tangentially relevant, hence the rating of 7."
-approximated prompt tuning for vision-language pre-trained models,gpt-4-1106-preview,8,"The abstract provided discusses prompt tuning, which is a technique relevant to prompt engineering studies. The focus on approximating the impact of soft prompt tokens and proposing a method for reducing computational complexity directly impacts the efficiency of prompt engineering for vision-language pre-trained (VLP) models. The fact that it explores a novel Approximated Prompt Tuning (APT) approach and demonstrates the performance and efficiency improvements through experiments makes it quite relevant to the field. However, it does not specifically mention 'hard prefix prompts,' which was the focus of the initial request. Therefore, the rating is not a perfect 10."
-p3o: transferring visual representations for reinforcement learning via prompting,gpt-4-1106-preview,7,"The study focuses on the transfer of learned policies in deep reinforcement learning using a process called 'prompting', which aligns with the concept of 'prompt engineering'. While the prompting here is specific to visual representation and policy optimization in DRL, it shows an application of prompts to modify the behavior of a model without full retraining. This is relevant to prompt engineering as it demonstrates how prompts can be employed to adapt models to new situations. However, the study does not discuss 'hard prefix prompts' or explore the general space of natural language processing, which are commonly associated with prompt engineering, hence the relevance is not maximum."
-icpc: instance-conditioned prompting with contrastive learning for semantic segmentation,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering for a couple of reasons. Firstly, it deals directly with designing prompts for semantic segmentation, which is part of the broader spectrum of prompt engineering studies. The study focuses on dynamic prompting as opposed to static prompts, which is a notable aspect of prompt design. Secondly, the paper proposes an align-guided contrastive loss to refine the vision and text embeddings' alignment, which is an advanced technique in prompt tuning for multimodal models. The only reason it does not score a perfect 10 is that it is applied to semantic segmentation specifically, rather than prompt engineering in general. Nevertheless, the methods developed could potentially influence or be part of prompt engineering techniques in a broader context."
-gradient-based automated iterative recovery for parameter-efficient tuning,gpt-4-1106-preview,8,"The paper discusses the use of gradient-based explainability methods like TracIn for improving model performance, specifically mentioning 'prompt-tuning', which is a form of prompt engineering. It shows the process of recovering performance in the context of parameter-efficient tuning (PET), a concept closely related to optimizing prompts for language models. While the paper does not focus exclusively on prompt engineering, the application of TracIn in the PET context suggests significant relevance to the study of how prompts can be engineered and debugged effectively."
-prefixmol: target- and chemistry-aware molecule design via prefix embedding,gpt-4-1106-preview,6,"The provided title and abstract refer to a generative model using 'prefix embeddings,' which can be seen as a form of prompt engineering, albeit in a different domain (molecular design rather than text generation). The concept of prefix embeddings as contextual prompts shares a conceptual similarity with prefix prompts in text-based models, as they both aim to guide the generation process under specific conditions. However, the application is quite niche and specific to chemistry and drug design, which means the focus is not on prompt engineering in the general sense but is instead applied in a specialized context. Therefore, the relevance to prompt engineering studies is moderate but not direct, as it uses similar concepts in a domain-specific application."
-extracting latent steering vectors from pretrained language models,gpt-4-1106-preview,8,"The work discussed in the abstract is highly relevant to prompt engineering since it deals with controlling language models to produce desired outputs, which is a core aspect of prompt engineering. The idea of extracting latent steering vectors aligns with engineering prompts to manipulate model behavior. However, it's not centered on hard prefix prompts specifically but rather on a broader control mechanism within the language model, thus not warranting a full 10 rating."
-rethinking efficient tuning methods from a unified perspective,gpt-4-1106-preview,7,"The abstract discusses parameter-efficient transfer learning (PETL), where tuning methods such as prompt, prefix, and adapter are briefly mentioned. Although the focus is on the development of a unified framework called U-Tuning, it is relevant to prompt engineering study as it involves task-specific lightweight adjustments and potentially new approaches for parameter-efficient transfer learning, which could include improvements in prompt engineering techniques. However, the abstract does not solely concentrate on 'hard prefix prompts' but rather a broader range of PETL methods, hence the 7 out of 10 rating for relevance."
-retrieval-augmented generative question answering for event argument extraction,gpt-4-1106-preview,7,"The relevance of the study to prompt engineering is significant as it discusses the augmentation of prompts with retrieved QA pairs to improve event argument extraction. Such a retrieval-augmented approach is directly related to prompt engineering because it involves the strategic manipulation of prompts to enhance model performance. While the primary focus of the study appears to be on augmenting prompts for the specific task of argument extraction, the underlying principles and methods could be widely applicable to other areas of prompt engineering. Therefore, the study could contribute valuable insights into the prompt engineering domain, even though it may not address hard prefix prompts specifically."
-integrated parameter-efficient tuning for general-purpose audio models,gpt-4-1106-preview,7,"The abstract of the study discusses the use of a 'prompt-based learning approach' as part of the proposed Integrated Parameter-Efficient Tuning (IPET) framework, indicating that prompt engineering is relevant to the framework's methodology. The embedding prompt, as one of its components, suggests that the study investigates a form of prompt engineering within the context of audio model adaptation. Although the study is specific to the audio domain and does not directly address the broader concept of hard prefix prompts in general, the inclusion of a prompt-based learning approach within the IPET framework and its application to pre-trained models is indeed relevant to prompt engineering techniques. Therefore, the study would likely be of interest to those researching prompt engineering in specific applications, albeit with a specific focus on audio tasks rather than a comprehensive systematic review on hard prefix prompts."
-virtual node tuning for few-shot node classification,gpt-4-1106-preview,7,"The abstract discusses 'Virtual Node Tuning (VNT),' which involves injecting virtual nodes as 'soft prompts' in the embedding space that can be optimized for few-shot node classification tasks. While this does not directly address 'hard prefix prompts,' it does pertain to the usage of prompts (in this case, soft ones) in the context of machine learning. The technique is a form of prompt engineering but applied within a graph representation learning task rather than natural language processing. This alternative application of prompts in a learning framework is relevant to the broader field of prompt engineering as it provides insight into how prompts can be used to improve performance in tasks with limited labeled data. However, its relevance is somewhat indirect since it does not address hard prefix prompts explicitly or delve into systematic reviews of prompt engineering, thus the rating of 7."
-alexander knox at semeval-2023 task 5: the comparison of prompting and standard fine-tuning techniques for selecting the type of spoiler needed to neutralize a clickbait,gpt-4-1106-preview,8,"The study directly compares prompt engineering with standard fine-tuning techniques, which is highly relevant to prompt engineering research. Its focus on the application of prompt engineering for a specific NLP problem—clickbait neutralization—demonstrates the practical implications of prompt-based approaches and allows for insights into their effectiveness when contrasted with traditional fine-tuning. While the study is not exclusively about prompt engineering and also encompasses fine-tuning methods, its comparative analysis of the two techniques makes it significant for researchers interested in the area of prompt engineering."
-pcbert: parent and child bert for chinese few-shot ner,gpt-4-1106-preview,8,"The abstract discusses 'prompt-tuning', a method within prompt engineering that is applied to Chinese few-shot Named Entity Recognition (NER). While the specific term 'hard prefix prompts' is not mentioned, the concept of prompt-based techniques, which are at the heart of prompt engineering, is central to the study described in the paper. This suggests that the paper's focus on using prompt-based methods for improving model performance in low-resource settings makes it highly relevant to the field of prompt engineering."
-auto-prompting sam for mobile friendly 3d medical image segmentation,gpt-4-1106-preview,8,"The abstract discusses the development of an 'AutoSAM Adapter' that automatically generates prompts for 3D medical image segmentation, which is a specific application of prompt engineering. While it does not generalize to all forms of prompt engineering, this study focuses on automatic prompt generation to improve the performance of a segmentation model. Therefore, it is highly relevant to the study of prompt engineering, particularly in the field of medical image analysis using machine learning models. The deduction of two points is due to the specialized application rather than a broad, comprehensive review of techniques across different domains."
-transferring pre-trained multimodal representations with cross-modal similarity matching,gpt-4-1106-preview,7,"The abstract and the TLDR mention designing context-based prompt augmentation (CPA), which indicates a direct relevance to prompt engineering as it pertains to refining the text prompts for improved performance in multimodal models. Although the main focus is on representation transfer and not on prompt engineering per se, the use of prompts to achieve cross-modal similarity matching shows that prompts are a noteworthy aspect of the proposed method's overall framework and application, thus suggesting moderate relevance to prompt engineering studies."
-srcb at the ntcir-16 real-mednlp task,gpt-4-1106-preview,8,"The abstract indicates the use of prompt learning as part of the approach for tackling Named Entity Recognition and Adverse Drug Event detection tasks, which are directly related to natural language processing challenges in computational linguistics. The involvement in prompt learning suggests that the paper includes discussion or experimentation with the implementation or optimization of prompts, which is relevant to the study of prompt engineering. However, the abstract does not provide details specifically about 'hard prefix prompts', which might be a variation of specific interest within prompt engineering. Therefore, the relevance is high but not complete with respect to the specified topic of 'hard prefix prompts'."
-controllable generation of dialogue acts for dialogue systems via few-shot response generation and ranking,gpt-4-1106-preview,9,"The article presents a novel approach for controllable generation of dialogue acts (DAs) in dialogue systems through a few-shot learning and ranking method, which is highly relevant to prompt engineering. The use of few-shot prompts and the creation of methods for ranking generated responses based on their semantic accuracy and adherence to specific DAs are directly related to improving and refining the efficacy of prompts in generation tasks. The research aims to control the output of language models using prompt-based learning, a core aspect of prompt engineering."
-adapting pre-trained language models to vision-language tasks via dynamic visual prompting,gpt-4-1106-preview,8,"The abstract discusses 'Dynamic Visual Prompting (DVP)', which is a novel approach to adapt pre-trained language models to vision-language tasks. While the focus is on bridging the gap between single- and multi-modal learning, the relevance to prompt engineering study lies in the exploration and implementation of prompts as a transfer learning approach. DVP as a means to reduce redundancy and optimize the placement of prompt tokens in the context of visual features directly pertains to prompt engineering, particularly in the way it demonstrates prompt effectiveness and modification techniques. Although the study is not exclusively about 'hard prefix prompts', it contributes to the broader field of prompt engineering by showing how prompts can be dynamically integrated with pre-trained models for enhanced performance in multi-modal tasks. The rating is given an 8 instead of a 10 because the study's primary focus is not on the comprehensive systematic review of hard prefix prompts, but rather on a particular application of prompts in vision-language tasks."
-eco: ensembling context optimization for vision-language models,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering, as it discusses improving image classification in vision-language models by engineering or learning textual prompts to optimize performance. The ensemble of prompts strategy directly ties to the manipulation and optimization of prompts, which is the essence of prompt engineering. Although the prompt engineering in question is utilized for vision-language scenarios rather than the 'hard prefix prompts' mentioned, the principles and goals appear to be closely aligned. Hence, the paper is not entirely focused on 'hard prefix prompts' but is still within the broader domain of prompt engineering."
-generalizing few-shot named entity recognizers to unseen domains with type-related features,gpt-4-1106-preview,8,"The paper presents a framework (PLTR) that involves a form of prompt engineering by generating unique prompts for unseen examples using type-related features. This is highly relevant to prompt engineering as it directly involves the creation and optimization of prompts for improving the model's performance on few-shot named entity recognition tasks. The reason the rating is not a full 10 is that the study focuses specifically on the NER task and the use of type-related features, which may not cover the broader concept of hard prefix prompts in the context of prompt engineering more generally."
-towards a unified view on visual parameter-efficient transfer learning,gpt-4-1106-preview,4,"While the study presents a framework in parameter efficient transfer learning (PETL) and investigates prefix-tuning, it does so in the context of vision models rather than language models, which is the primary domain for prompt engineering. The concept of a 'hard prefix prompt' isn't directly addressed, but the methods and findings could be considered somewhat relevant for those interested in the extension of prompt engineering concepts to the vision domain. Thus, relevance is moderate but not directly aligned with the specific focus of hard prefix prompts in prompt engineering studies."
-enhancing cross-lingual natural language inference by prompt-learning from cross-lingual templates,gpt-4-1106-preview,8,"The abstract is highly relevant to prompt engineering as it discusses a prompt-learning based framework to enhance cross-lingual natural language inference (XNLI), which is a direct application of prompt engineering techniques. The use of cloze-style questions constructed from cross-lingual templates is an example of hard prefix prompts, which fits within the broader category of prompt engineering studies. The significance of the research is supported by experimental results on benchmark datasets, although it focuses specifically on the XNLI task rather than prompt engineering in general, which prevents it from receiving a full 10."
-nlpbench: evaluating large language models on solving nlp problems,gpt-4-1106-preview,8,"The abstract and TLDR describe a study focused on evaluating the performance of large language models on NLP problems using a new benchmarking dataset. Prompting strategies like chain-of-thought (CoT) and tree-of-thought (ToT) are an integral part of this performance evaluation. These strategies are directly related to prompt engineering as they involve devising ways to present problems to LLMs in a manner that leverages their strengths. Although the abstract does not specifically mention 'hard prefix prompts,' the discussion of prompting strategies is closely related to the field of prompt engineering, and the study appears to contribute to our understanding of how LLMs can be more effectively prompted. The rating is not a full 10 because the provided abstract doesn't focus exclusively on prompt engineering but rather on a wider scope of NLP problem-solving capabilities."
-retuyt-inco at bea 2023 shared task: tuning open-source llms for generating teacher responses,gpt-4-1106-preview,8,"This paper is highly relevant to prompt engineering as it discusses the fine-tuning of Open-Source Large Language Models (LLMs) for a specific application, which is the generation of teacher responses in educational dialogues. The exploration of different prompting strategies, such as Few-Shot and Chain-of-Thought, directly pertains to the field of prompt engineering. While the paper does not focus solely on 'hard prefix prompts,' which the original question inquires about, it examines relevant techniques that would influence the design and implementation of effective prompts for LLMs. The deduction of two points accounts for the absence of a direct focus on 'hard prefix prompts,' but overall, the study presents material that would be of significant interest to anyone researching prompting methods."
-aligning large language models for clinical tasks,gpt-4-1106-preview,7,"The abstract discusses the alignment of Large Language Models (LLMs) for clinical tasks, focusing on strategies such as 'expand-guess-refine' for question-answering applications. Although it does not directly mention 'hard prefix prompts' or conduct a comprehensive systematic review on them, the alignment strategy includes in-prompt strategies like few-shot and chain-of-thought prompting, which are related to prompt engineering. Therefore, while it is not wholly focused on prompt engineering, it is still relevant due to the discussion of prompt-based techniques for improving LLM performance in a specific domain."
-naisteacher: a prompt and rerank approach to generating teacher utterances in educational dialogues,gpt-4-1106-preview,9,"The paper is highly relevant to prompt engineering as it specifically deals with the generation of teacher responses using a prompt-based approach with GPT-3.5-turbo and involves reranking, which is an advanced form of prompt engineering. The only reason it does not receive a full score is that it may not directly address 'hard prefix prompts,' assuming 'hard prefix prompts' refers to a specific sub-category or method within prompt engineering."
-evaluating adaptive pedagogical agents' prompting strategies effect on students' emotions,gpt-4-1106-preview,5,"The relevance to prompt engineering is moderate, as the study examines the impact of different prompting strategies on students' emotions within an Intelligent Tutoring System (ITS). While not directly focused on 'hard prefix prompts' or prompt engineering in the AI language model sense, the research does explore how different types of prompts can influence user experience and engagement, which parallels how prompts are engineered to guide AI behavior. However, the specific connection to 'hard prefix prompts' in prompt engineering is not made, which limits the direct relevance to the topic."
-visual prompting via image inpainting,gpt-4-1106-preview,8,"The abstract presents a study relevant to prompt engineering in the context of visual models rather than textual ones. It discusses a method analogous to prompting in NLP but applied to image processing tasks using image inpainting. Even though it doesn't involve 'hard prefix prompts' directly and focuses on the visual domain, the concept of adapting pre-trained models to new tasks with example-based prompts is closely related to the principles of prompt engineering. Therefore, the relevance is high, but not absolute, as this study does not directly discuss textual prompt engineering or hard prefix prompts specifically."
-can adaptive pedagogical agents' prompting strategies improve students' learning and self-regulation?,gpt-4-1106-preview,7,"The study addresses prompting strategies in the context of adaptive pedagogical agents, which can be considered a form of prompt engineering as it relates to optimizing the prompts for better learning and self-regulation outcomes. Although it does not directly address 'hard prefix prompts' in a systematic review manner, the concept of a 'fading prompting strategy' is related to how prompts are engineered for effectiveness over time, which could be relevant in the broader scope of prompt engineering study."
-impact of different pedagogical agents' adaptive self-regulated prompting strategies on learning with metatutor,gpt-4-1106-preview,5,"The study focuses on the effect of prompting strategies on learning outcomes within an educational tool, which marginally relates to prompt engineering as it deals with the design and effectiveness of prompts. Prompt engineering specifically pertains to the construction and optimization of prompts to improve the performance of artificial intelligence systems. While the study on pedagogical agents' prompting strategies is adjacent to this domain, its direct application to prompt engineering in AI is not clear. Therefore, the relevance is moderate."
-low-resource ner by data augmentation with prompting,gpt-4-1106-preview,8,"The mentioned paper is highly relevant to prompt engineering study, especially considering its use of prompting strategies to elicit knowledge from a language model (BERT) for named entity recognition (NER) in a low-resource setting. The relevance score is not a perfect 10 because the focus is on data augmentation for NER and not solely on hard prefix prompts, which are a subset of prompt engineering techniques. Furthermore, the emphasis on label-conditioned word replacement and generation of new training data via QA prompting demonstrates a practical application of prompt engineering within a specific NLP task, underscoring its importance and relevance to the field."
-smart homes for people with alzheimer's disease: adapting prompting strategies to the patient's cognitive profile,gpt-4-1106-preview,6,"The study's relevance to prompt engineering is moderate. While it does not directly address 'hard prefix prompts' in the context of machine learning or computational prompt engineering, it deals with the adaptation of prompts (cognitive assistance) to users' needs, which parallels the customization aspect of prompt engineering. Furthermore, the development of guidelines for effective prompting strategies and an experimental protocol has some commonalities with the principles of designing and testing prompts in AI systems. However, the application is specific to smart homes and Alzheimer's patients and may not fully translate to the broader field of prompt engineering study."
-this joke is [mask]: recognizing humor and offense with prompting,gpt-4-1106-preview,8,"The study described in the title and abstract focuses on the effectiveness of prompting, which is a technique used in NLP and directly relevant to prompt engineering. The investigation of humor recognition through prompts falls within the scope of prompt engineering studies, as it explores how prompts can be designed and utilized to achieve a specific task (humor recognition in this case). The fact that the paper compares prompting to fine-tuning and looks at low-resource scenarios also adds to its relevance. However, the specificity to humor and offense slightly limits the rating as prompt engineering can encompass a broader range of tasks beyond these topics."
-demonstrate-search-predict: composing retrieval and language models for knowledge-intensive nlp,gpt-4-1106-preview,7,"The abstract provided discusses an advanced technique in the domain of natural language processing that could clearly relate to prompt engineering. The Demonstrate-Search-Predict (DSP) framework integrates language models (LM) and retrieval models (RM) in a complex pipeline to improve performance on knowledge-intensive tasks. While this does not directly reference 'hard prefix prompts', it aligns with the broader field of prompt engineering due to its focus on improving the interaction between models for better information retrieval and processing. Prompt engineering is crucial in designing the inputs to such systems to ensure the most relevant and accurate outputs. However, without explicit mention of 'hard prefix prompts', the relevance is not a perfect fit; hence, a rating of 7 is assigned to indicate its substantial relevance but not a direct match to the specific topic of prompt engineering study."
-error analysis prompting enables human-like translation evaluation in large language models: a case study on chatgpt,gpt-4-1106-preview,9,"The study specifically focuses on the development and refinement of a prompting method, namely Error Analysis Prompting (EAPrompt), which is a direct application of prompt engineering. The use of prompts in this context is to enhance the capability of generative LLMs, such as ChatGPT, to evaluate machine translation quality more effectively. This falls within the domain of prompt engineering, as it involves designing prompts to elicit desired behaviors from a language model. However, it does not directly address 'hard prefix prompts' as mentioned in the initial request, but it is highly relevant to the overall field of prompt engineering."
-explicit visual prompting for low-level structure segmentations,gpt-4-1106-preview,8,"The relevance to prompt engineering is significant as the study adapts the concept of prompt tuning from natural language processing (NLP) to the visual domain, which is a novel application of prompt engineering principles. Prompt tuning is a core area of study within prompt engineering, and the paper's proposition of a new visual prompting model called 'Explicit Visual Prompting (EVP)' shows direct influence from NLP prompt tuning methods, indicating that the findings could be beneficial to the field. Although EVP is tailored for image-based tasks and not textual prompt engineering, the conceptual crossover and potential implications for the development of similar strategies in NLP make this study relevant. The rating is not a perfect 10 because the study does not directly address textual prompt engineering but rather adapts its concepts to a different domain."
-pushing the limits of chatgpt on nlp tasks,gpt-4-1106-preview,9,"The abstract presents research that directly involves the optimization of prompts and input strategies for improving ChatGPT's performance on a variety of NLP tasks. Techniques such as 'one-input-multiple-prompts' and the development of modules to address specific issues inherent in language model tasks are inextricably linked to prompt engineering. Although the study's title does not explicitly mention 'hard prefix prompts,' the body of work encompasses strategies that likely include or are related to prompt engineering concepts. Therefore, the study is highly relevant to prompt engineering, meriting a rating of 9 out of 10. It loses one point because it does not specifically mention the systematic review on 'hard prefix prompts,' which might be considered a subset or particular aspect of prompt engineering that the inquiry could be asking about."
-all in one: multi-task prompting for graph neural networks,gpt-4-1106-preview,8,"The paper focuses on the adaptation of prompt learning from NLP to graph tasks, seeking to bridge the gap between pre-trained models and diverse graph tasks by proposing a novel multi-task prompting method. This is highly relevant to prompt engineering as it explores the concept of prompts, albeit in the domain of graph models. The integration of NLP prompting techniques into a different domain suggests a broader potential application of prompt engineering principles. The rating is not a full 10 due to the specific focus on graph models rather than a general prompt engineering approach."
-diffusion-nat: self-prompting discrete diffusion for non-autoregressive text generation,gpt-4-1106-preview,7,"The abstract discusses the integration of discrete diffusion models with non-autoregressive text generation and the improvement of this integration via a novel strategy called 'iterative self-prompting.' While it does not directly mention 'hard prefix prompts,' the concept of self-prompting is related to prompt engineering because it involves the manipulation of prompts to improve the text generation process. This means that the study contributes to the field of prompt engineering, even if it doesn't directly address the specific topic of hard prefix prompts. Therefore, it has relevance to the broader field of prompt engineering but is not a perfect match for a systematic review focused exclusively on hard prefix prompts."
-parafuzz: an interpretability-driven technique for detecting poisoned samples in nlp,gpt-4-1106-preview,8,"The relevance to prompt engineering is quite high in this study. The abstract mentions the formulation of the trigger-removal task as a prompt engineering problem, indicating a direct engagement with prompt engineering techniques. Furthermore, the application of 'fuzzing' to discover optimal paraphrase prompts for the purpose of maintaining input semantics while eliminating backdoor triggers in NLP models is aligned with innovative practices within prompt engineering. Although the primary focus is on the detection of poisoned samples and ensuring interpretability, the use of prompt engineering as a method to achieve these aims supports the rating of 8 out of 10."
-self-diagnosis and self-debiasing: a proposal for reducing corpus-based bias in nlp,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering as it addresses the critical aspect of bias mitigation in NLP models, which is an essential consideration when designing prompts. The concept of 'self-diagnosis' is particularly pertinent, as it implies that models can detect undesirable biases in response to prompts. Similarly, 'self-debiasing', where the model actively avoids generating problematic outputs based on the prompt description, is a direct application of prompt engineering principles. The techniques discussed could be employed in designing prompts that encourage models to produce less biased content. Although the paper does not directly elaborate on 'hard prefix prompts,' it does contribute to the overarching field of prompt engineering by exploring decoding algorithms and model behavior in response to prompts and bias management."
-large language models (llms) for natural language processing (nlp) of oil and gas drilling data,gpt-4-1106-preview,7,"The abstract mentions the use of various prompt engineering strategies as part of the methodology to handle downstream text tasks in oil and gas drilling data using large language models. Although the study primarily focuses on the application of LLMs in a specific domain (oil and gas), the inclusion of prompt engineering in the process indicates a significant relevance to the field of prompt engineering study. However, a perfect relevance score is not given because the primary focus of the study is not purely on prompt engineering, but rather on the domain-specific application of large language models which includes prompt engineering as a part of the process."
-automatically correcting large language models: surveying the landscape of diverse self-correction strategies,gpt-4-1106-preview,8,"The paper's focus on 'self-correction strategies' for large language models (LLMs) is highly relevant to prompt engineering study, as prompt engineering often involves designing prompts to elicit the desired behavior or correct the output of an LLM. The detailed review of automated feedback methods can be crucial for advancing the prompt engineering field, especially in the context of minimizing the necessity for human intervention in generating effective prompts. However, the paper may not be specifically centered on 'hard prefix prompts,' which the original prompt suggested, hence not a perfect 10."
-adversarial attacks on large language model-based system and mitigating strategies: a case study on chatgpt,gpt-4-1106-preview,9,"The abstract details a study that is highly relevant to prompt engineering as it focuses on using prefix prompts as a mitigating strategy against adversarial attacks on language models, directly impacting how prompts are engineered for safety and robustness. Evaluating and enhancing the security of language models like ChatGPT with prefix prompts falls within the scope of prompt engineering research. Although the study may not solely concentrate on the engineering of hard prompts, the development of a 'training-free prefix prompt mechanism' indicates a significant contribution to the field of prompt design and mitigation strategies, which is a crucial aspect of prompt engineering."
-evaluating tuning strategies for sequence generation with protein language models,gpt-4-1106-preview,8,"The response evaluates a study that involves adapting NLP models for use in generating artificial protein sequences, with a focus on prompt tuning as an alternative to fine-tuning. Although the study is not directly examining 'hard prefix prompts,' it is investigating the efficiency and effectiveness of tuning strategies, particularly prompt tuning, within the context of a language model adapted for a specialized domain. This makes the study highly relevant to prompt engineering as it explores adaptable methodologies for model tuning, which can include prompt engineering strategies. The study's results and the discussion of the quality assessment tools also contribute valuable insights for future developments in prompt engineering, despite not specifically addressing 'hard prefix prompts.'"
-from humans to machines: can chatgpt-like llms effectively replace human annotators in nlp tasks?,gpt-4-1106-preview,7,"The abstract discusses the potential use of large language models (LLMs) like ChatGPT for NLP tasks, which is relevant to prompt engineering in the sense that prompt engineering could be vital for directing such models to perform annotation tasks. The ability of LLMs to understand and respond to prompts effectively would be central to their use as annotators. Although the focus here is more on annotation than prompt engineering directly, the quality and nature of prompts would inherently affect the success of such an application. Therefore, the study indirectly addresses issues that are significant to the field of prompt engineering."
-understanding the effect of in-video prompting on learners and instructors,gpt-4-1106-preview,4,"While the abstract discusses in-video prompting, which is a form of engagement tactic within an educational context, it does not specifically mention or focus on 'hard prefix prompts' or the systematic review of prompt engineering. The study is relevant to the broader context of prompt design and use in learning environments but does not directly address the topic of a comprehensive review of hard prefix prompts in prompt engineering. Therefore, it has some relevance due to its focus on the effects of prompts in an instructional setting but falls short of directly addressing the specified topic of hard prefix prompts."
-effects of a progressive prompting-based educational game on second graders' mathematics learning performance and behavioral patterns,gpt-4-1106-preview,6,"The study focuses on the use of prompting strategies within a game-based learning environment, which is tangentially related to the broader concept of 'prompt engineering' in that it involves the design of prompts to guide users (learners) towards specific outcomes. However, 'prompt engineering' typically refers to designing prompts to interact with AI systems or computer models, rather than human students. Therefore, while the educational prompting strategy is a form of prompt design and may share underlying principles with prompt engineering for AI, it is not a direct study on 'hard prefix prompts' as the context differs. The relevance is moderate because the skills and insights from designing effective prompts for education might be applicable to prompt engineering for AI in developing user instructions or interactions."
-iie-nlp-nut at semeval-2020 task 4: guiding plm with prompt template reconstruction strategy for comve,gpt-4-1106-preview,9,"The paper is highly relevant to prompt engineering because it discusses a prompt template reconstruction strategy within the context of a natural language processing task (i.e., SemEval Task4). The use of prompt templates to guide pre-trained language models (PLMs) for specific tasks like commonsense validation and explanation is a direct application of prompt engineering. Even though the study does not seem to be a systematic review on 'hard prefix prompts', the introduction of an input reconstruction strategy with prompt templates is closely related to the engineering and structuring of prompts to improve the performance of language models, which is a key aspect of prompt engineering. Therefore, the paper's content aligns well with the field of study."
-considering student choice when selecting instructional strategies: a comparison of three prompting systems.,gpt-4-1106-preview,6,"The study touches upon the effectiveness of prompting systems in educational settings, which is tangentially relevant to prompt engineering as it involves the use of prompts to enhance learning outcomes. However, prompt engineering typically focuses on improving the interaction with AI models and systems, rather than instructional strategies for human learning. Despite the different context, principles from studying human response to prompts could be insightful for designing AI prompts, thus earning a moderate relevance rating."
-news summarization and evaluation in the era of gpt-3,gpt-4-1106-preview,8,"The paper is highly relevant to prompt engineering as it directly involves prompting a large language model (GPT-3) and studying its performance in a specific NLP task - news summarization. Although it does not focus exclusively on 'hard prefix prompts', the mentioned concept of 'task description' prompting is a critical element of prompt engineering. The examination of how effectively GPT-3 can generate summaries with only a task description highlights the importance of designing prompts to elicit desired responses from AI models. The relevance to prompt engineering study is not rated a perfect 10 because the paper seems to cover broader aspects of model evaluation and summarization tasks rather than focusing solely on the detailed structure and impact of prompts."
-a progressive prompting approach to conducting context-aware learning activities for natural science courses,gpt-4-1106-preview,7,"The relevance to prompt engineering lies in the exploration of a progressive prompt-based approach to enhance learning outcomes, which is conceptually similar to designing prompts to improve interaction with AI or learning systems. However, the study is situated in the context of mobile learning in natural science courses, not specifically within prompt engineering for AI or computational systems. Nevertheless, the methodologies and findings could have implications for the practice of prompt engineering, particularly in creating adaptive and context-aware prompts for various applications."
-opt-iml: scaling language model instruction meta learning through the lens of generalization,gpt-4-1106-preview,8,"The study pertains to the broader field of instruction-tuning, which is closely related to prompt engineering, as it involves optimizing language models to understand and execute instructions from prompts more effectively. Although the specific term 'hard prefix prompts' is not mentioned, the principles and findings from such instruction-tuning experiments can be highly relevant and applicable to prompt engineering, including the development and assessment of hard prefix prompts."
-how good are gpt models at machine translation? a comprehensive evaluation,gpt-4-1106-preview,7,"The relevance of the presented paper to prompt engineering is significant, mainly due to the examination of the 'effect of prompting strategies' on the performance of GPT models in machine translation. Prompt engineering is crucial for optimizing the model's output, and this paper's exploration of how GPT models respond to different prompts could provide valuable insights for the field.
Although the study's primary focus is on machine translation, the inclusion of prompting strategies as one of the evaluated aspects means that the findings could potentially contribute to a better understanding of prompt engineering. Therefore, the rating acknowledges the indirect but important relation to prompt engineering within the context of machine translation." -can chatgpt understand too? a comparative study on chatgpt and fine-tuned bert,gpt-4-1106-preview,6,"The abstract addresses the understanding ability of ChatGPT as compared to fine-tuned BERT models and mentions the use of advanced prompting strategies to improve ChatGPT's understanding. While the main focus is on the comparative analysis of model performance, the mention of prompting strategies implies some relevance to prompt engineering. However, the abstract does not offer a detailed exploration or direct focus on hard prefix prompts or their systematic review, which reduces its direct relevance to the specified topic of prompt engineering study." -enabling large language models to generate text with citations,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it directly addresses the construction of prompts to enable large language models to generate text that includes citations. This requires the development of novel prompting strategies that guide the model not just to produce answers, but also to provide evidence through citations. While the study is not solely focused on 'hard prefix prompts,' it falls within the broader field of prompt engineering and is very relevant due to its focus on the performance and verification of information produced by LLMs. Prompt engineering is a critical component in achieving the goals outlined in the study." -diagnostic reasoning prompts reveal the potential for large language model interpretability in medicine,gpt-4-1106-preview,9,The paper is highly relevant to prompt engineering as it specifically focuses on the development and use of 'diagnostic reasoning prompts' designed to investigate the ability of LLMs (like GPT-4) to replicate clinical reasoning processes. This research directly contributes to the field of prompt engineering by demonstrating that prompts can be designed in a way that not only elicits specific types of reasoning from LLMs but can also do so with a level of interpretability that aligns with the cognitive processes of professionals in the field of medicine. The study's aim to enhance understanding and trust in LLMs through better-designed prompts is squarely within the goals of prompt engineering. -make llm a testing expert: bringing human-like interaction to mobile gui testing via functionality-aware decisions,gpt-4-1106-preview,8,"The abstract describes the use of Large Language Models (LLMs) like ChatGPT in automated GUI testing, which involves a novel application of prompt engineering. By formulating the problem as a Q&A task and introducing a functionality-aware prompting mechanism, the study essentially deals with the design and utilization of prompts to enable the LLM to generate useful outputs for testing purposes. This showcases an implementation of prompt engineering to improve the performance of an AI model in a domain-specific task. However, it doesn't directly study the prompt engineering process in a broader context, and therefore doesn't merit a perfect score." 
-boosting static resource leak detection via llm-based resource-oriented intention inference,gpt-4-1106-preview,6,"The provided abstract outlines research on 'InferROI,' a system designed to detect resource leaks in code using large language models (LLMs) for intention inference. Though this approach employs prompts to guide the LLM toward inferring intentions from code snippets, it is only indirectly relevant to prompt engineering. The use of prompts is in the context of static analysis in software engineering, while prompt engineering generally refers to designing prompts to accurately elicit specific responses from language models. Since this research involves instructing an LLM via prompts, it could offer some insights into prompt design and effectiveness; hence, it is given a moderate relevance rating. However, it does not focus on prompt engineering as a primary study area, which is why the rating is not higher." -llm-funcmapper: function identification for interpreting complex clauses in building codes via llm,gpt-4-1106-preview,8,"The abstract describes the use of a large language model (LLM) to interpret complex regulatory texts, which is relevant to prompt engineering study as it involves the development of a prompt template with chain-of-thought reasoning. While the study isn't focused on 'hard prefix prompts' specifically, the creation of this tailored template and its adjustment using a classification-based tuning strategy are key examples of prompt engineering. The approach of identifying functions and utilizing an LLM for understanding complex clauses is closely related to how prompts are engineered to improve the performance of language models on specific tasks. The rating is not a full 10 because the research is not exclusively centered on prompt engineering, but rather on the application of LLMs in the context of interpreting building codes; nonetheless, the methodology includes relevant elements of prompt engineering." -resolving the imbalance issue in hierarchical disciplinary topic inference via llm-based data augmentation,gpt-4-1106-preview,7,"The paper discusses the use of large language models for data augmentation in order to tackle the problem of data imbalance in the context of hierarchical disciplinary topic inference. This is relevant to the field of prompt engineering because designing effective prompts is essential for guiding language models like Llama V1 to generate meaningful and well-aligned augmented text data. The study's emphasis on prompt design for keyword-based research proposal generation is a significant aspect of prompt engineering. However, the primary focus appears to be on addressing data imbalances in the machine learning system, rather than the nuances of prompt engineering itself. Therefore, while prompt engineering is undoubtedly a component of the study, it is not the singular focus." -workshop on large language models' interpretability and trustworthiness (llmit),gpt-4-1106-preview,8,"The abstract discusses the significance of context (prompts) and the need for research on the effects of inputs on Large Language Models (LLMs) and their outputs. It directly relates to prompt engineering, as it addresses the importance of understanding how small changes in prompts can significantly alter the behavior of LLMs (a key issue in prompt engineering). 
However, it doesn't explicitly mention 'hard prefix prompts' or a systematic review on prompt engineering, hence it doesn't fully match the comprehensive systematic review aspect of the prompt engineering study specified." -improving zero-shot visual question answering via large language models with reasoning question prompts,gpt-4-1106-preview,8,"The title and abstract describe a study focused on improving the effectiveness of Large Language Models (LLMs) for zero-shot Visual Question Answering tasks by using 'Reasoning Question Prompts'. This is relevant to prompt engineering as it involves the strategic design of prompts to enhance the performance of LLMs in interpreting and answering questions without any prior specific training on the task. Although the study does not specifically mention 'hard prefix prompts,' it nonetheless pertains to the broader field of crafting prompts to guide the LLMs towards better comprehension and response generation. Therefore, the relevance to prompt engineering is high, but not the maximum as the study doesn't directly address the concept of 'hard prefix prompts'." -prompts of large language model for commanding power grid operation,gpt-4-1106-preview,8,"The abstract describes a study that is focused on redefining the interaction between humans and a power grid operation system through the use of specifically engineered prompts for a Large Language Model. Given that prompt engineering is central to the process of adapting the LLM to interpret and execute natural language commands in the context of power grid operations, the study is highly relevant to the field. The rating is an 8 instead of a perfect score because, while it is about prompt engineering, the application is very specific to power grid operations and might not cover all aspects of prompt engineering, which could also include a broader range of topics beyond this specific use case." -psychologically-informed chain-of-thought prompts for metaphor understanding in large language models,gpt-4-1106-preview,9,"The study presents the application of chain-of-thought prompts to large language models in order to incorporate structured reasoning, similar to probabilistic models, particularly focusing on metaphor understanding. Although it does not specifically address 'hard prefix prompts,' it does fall within the broader category of prompt engineering, which involves designing prompts to elicit specific behaviors or capabilities in language models. The emphasis on structured reasoning through prompts and the reference to improving performance on a specific language task, metaphor paraphrase selection, make it highly relevant to studies in prompt engineering. The only reason it does not receive a full 10 is that it is not exclusively centred on 'hard prefix prompts' as the original term suggests." -can large language models explain themselves? a study of llm-generated self-explanations,gpt-4-1106-preview,8,"The abstract addresses the concept of 'self-explanations' generated by LLMs like ChatGPT, which directly pertains to one aspect of prompt engineering—eliciting detailed and insightful explanations from the model. Even though the abstract does not explicitly mention 'hard prefix prompts,' it discusses the broader area of how to effectively prompt LLMs for specific types of outputs, in this case, self-explanations. Since the study contributes to the understanding of how LLMs can be guided to provide explanations, it is relevant to the study of prompt engineering. 
However, the rating is not a full 10 because the abstract does not focus specifically on hard prefix prompts but rather on the general capability of LLMs to explain their reasoning." -fake news in sheep's clothing: robust fake news detection against llm-empowered style attacks,gpt-4-1106-preview,6,"The relevance to prompt engineering, in the context of 'hard prefix prompts,' is moderate, as the abstract describes the use of 'style-oriented reframing prompts,' which are a form of prompt used to engage with Language Models (LLMs). Although the main focus is on fake news detection and style-agnostic approaches to improve robustness against camouflage attempts by LLMs, the application of prompts is directly related to the mechanics of how LLMs are manipulated or interacted with to produce or detect certain styles of content. Therefore, while the primary topic is not a comprehensive systematic review of hard prefix prompts, the paper relates to one aspect of prompt engineering—using prompts to reframe content style to train a more robust detection model." -towards llm-based fact verification on news claims with a hierarchical step-by-step prompting method,gpt-4-1106-preview,9,"The presented paper is highly relevant to prompt engineering study as it explores a novel prompting method, the Hierarchical Step-by-Step (HiSS), specifically for the task of fact verification of news claims using large language models (LLMs). This approach falls directly within the scope of prompt engineering, where the design of prompts is used to guide the LLMs to perform complex tasks such as dissecting claims into subclaims and verifying them, which is a more nuanced application of prompt engineering. The relevance is not rated a full 10 only because the abstract does not explicitly discuss the engineering of 'hard prefixes,' but the prompting methodology itself is a significant contribution to the field of prompt engineering." -cgsmp: controllable generative summarization via multimodal prompt,gpt-4-1106-preview,5,"The abstract discusses the use of a multimodal approach to reduce hallucination in Natural Language Generation (NLG) and improve the quality of abstractive summarization, relating to language model performance and prompt design to some extent. However, the focus here is on the use of multimodal (image and text) inputs rather than on the study of 'hard prefix prompts' specifically. While prompt engineering is a broader field that includes various methods to control language model outputs, this paper seems to address only a subset of that field related to multimodal interaction and controllability. Therefore, the relevance to prompt engineering study is moderate, as it could provide insights into one aspect of the field without directly focusing on hard prefix prompts." -unified human-scene interaction via prompted chain-of-contacts,gpt-4-1106-preview,7,"The relevance of the 'unified human-scene interaction via prompted chain-of-contacts' study to prompt engineering is significant, as it describes a system that uses language commands to control interactions within a virtual environment. This means that it requires engineered prompts to interpret human language and convert it into actionable commands, aligning closely with the concept of prompt engineering. 
Although the study focuses specifically on Human-Scene Interaction and does not explicitly discuss the process of designing prompts or the systematic review of hard prefix prompts, the usage of a Large Language Model (LLM) Planner to translate these commands indicates that prompt engineering is an integral part of the framework. Therefore, it is relevant to the study of prompt engineering but not entirely focused on it; hence, it receives a rating of 7." -learning profitable nft image diffusions via multiple visual-policy guided reinforcement learning,gpt-4-1106-preview,7,"The study focuses on generating Non-Fungible Token (NFT) images using a combination of language and image generation models, which relates to prompt engineering in that it involves generating detailed prompts to create specific visual attributes in NFTs. The use of a large language model (LLM) to enhance human input into more complex prompts is particularly relevant to prompt engineering. However, the study also diverges into optimization metrics and market value considerations, aspects that are less directly connected to traditional prompt engineering. Hence, the rating acknowledges the relevance of prompt generation and refinement while noting that not all aspects of the paper are centered on prompt engineering." -majority rule: better patching via self-consistency,gpt-4-1106-preview,8,"The abstract provided discusses an advanced application of prompting techniques in the specific context of software engineering problem-solving. While the focus is on a particular domain, the techniques used, such as few-shot prompts, chain of thought explanations, and the self-consistency method are directly related to prompt engineering. The paper's contribution to prompt engineering is substantial as it explores the effectiveness of particular prompting strategies (like using commit logs as explanations) that lead to state-of-the-art results. However, the research does not appear to be about 'hard prefix prompts' specifically, so it is not a perfect match for a 'comprehensive systematic review on hard prefix prompts.' Therefore, the rating is not a full 10." -llm-assisted content analysis: using large language models to support deductive coding,gpt-4-1106-preview,7,"The paper 'llm-assisted content analysis: using large language models to support deductive coding' is moderately relevant to prompt engineering studies. The study investigates the potential of Large Language Models like GPT-3.5 to assist with the labor-intensive process of deductive coding in qualitative research, which is a specific application of natural language processing. Although it does not directly focus on 'hard prefix prompts,' it does explore the broader realm of using prompts (or queries) to facilitate analysis with an LLM, and it examines how LLMs can be used to refine prompts for better deductive coding outcomes, which is a core part of prompt engineering. Therefore, the principles and findings regarding prompt optimization and evaluation in this research can be valuable for those studying prompt engineering, even if the primary focus of the study does not directly align with the construction or systematization of hard prefix prompts." -toolkengpt: augmenting frozen language models with massive tools via tool embeddings,gpt-4-1106-preview,7,"The abstract provided does pertain to the general field of prompt engineering, given it discusses an approach to augment large language models in a way that could enhance their use of prompts for tool execution. 
Although it doesn't specifically mention 'hard prefix prompts' or conduct a 'systematic review' on them, the description of ToolkenGPT and the concept of 'toolkens' are relevant to the field of prompting language models for specific tasks. The paper suggests a method for improving the interaction between language models and the tools they can utilize, which could be considered a form of advanced prompt engineering. Therefore, the rating is moderately high for relevance, but not a full score because it does not directly address a systematic review or the specific concept of 'hard prefix prompts.'" -revisiting the plastic surgery hypothesis via large language models,gpt-4-1106-preview,5,"The abstract describes how Large Language Models (LLMs) can be utilized for Automated Program Repair (APR) and discusses the relevance of the plastic surgery hypothesis in this context. The mention of 'prompting strategy' indicates some level of relevance to prompt engineering, as it suggests that the study explores how to effectively use prompts to improve model performance. However, the focus seems to be on the application of LLM-based APR rather than on the study of prompt engineering itself. Therefore, the relevance to prompt engineering is moderate, as the paper likely touches on elements of prompt engineering as part of APR, but is not centered on prompt engineering as its primary topic of investigation." -automatic calibration and error correction for generative large language models via pareto optimal self-supervision,gpt-4-1106-preview,7,"The abstract describes a methodology for improving the calibration and error correction of generative large language models, which is an important aspect of prompt engineering. Effective prompt engineering can benefit greatly from systems that are able to self-evaluate their confidence and error likelihood, providing insight into how prompts might be refined for better outcomes. While the study does not directly deal with 'hard prefix prompts', the proposed framework for self-supervision and dynamic prompting strategy is relevant to the field of prompt engineering as it touches on the calibration and adaptation of prompts based on model confidence. Therefore, the relevance to prompt engineering is significant, although not exclusively focused on 'hard prefix prompts' but rather on the broader issues of model response calibration and error correction." -instructprotein: aligning human and protein language via knowledge instruction,gpt-4-1106-preview,6,"The abstract describes InstructProtein, a large language model trained for bidirectional human and protein language comprehension, which involves specialized prompt engineering to facilitate this unique form of language alignment. Prompt engineering is relevant here, as it is necessary to construct instructions that enable the model to translate between human language and protein sequences. The knowledge graph-based instruction framework mentioned can be seen as an advanced form of prompt engineering, designed to overcome issues of annotation imbalance and instruction deficits. However, the content is more focused on the application within a bioinformatics context rather than prompt engineering as a standalone subject. Therefore, while prompt engineering is a component of the research, the paper is not primarily about prompt engineering in the broader sense but rather a specific application of it." 
-self-detoxifying language models via toxification reversal,gpt-4-1106-preview,9,"The abstract is highly relevant to prompt engineering study because it directly involves the process of manipulating prompts to achieve a desired behavior in a pretrained language model (PLM). The concept of 'self-detoxification' by reversing the toxification direction is an application of prompt engineering where the input prompt's design has a pivotal role. While it doesn't focus on 'hard prefix prompts' explicitly, it aligns with the core principles of prompt engineering—altering the prompts to influence the model's outputs." -enhance reasoning ability of visual-language models via large language models,gpt-4-1106-preview,8,"The provided abstract is relevant to prompt engineering study because it describes a method (TReE) for enhancing the reasoning ability of visual-language models by using prompts derived from a large language model. This is particularly applicable to hard prefix prompts, as it involves structuring input to the models in a way that guides them through a multi-stage reasoning process. Although the abstract may not explicitly state 'hard prefix prompts', the thinking and re-thinking stages likely involve constructing prompts that carefully direct the model's reasoning, a key concept in prompt engineering." -violation of expectation via metacognitive prompting reduces theory of mind prediction error in large language models,gpt-4-1106-preview,7,"The abstract describes a study on the application of a metacognitive prompting framework in the context of LLMs and their ability to perform Theory of Mind tasks. These tasks are directly related to the prediction capabilities and interpretation strategies of the models, which are essential elements in the broader scope of prompt engineering. Though the concept of 'hard prefix prompts' as specified in the initial request is not addressed directly, the nature of modifying LLM behavior through specific prompting techniques (metacognitive prompting) is highly relevant to enhancing the understanding of how prompts affect model performance and behavior. Therefore, the study is considerably relevant as it focuses on systematic approaches to improve interaction quality between humans and AI via prompts, which could indirectly contribute to the understanding and development of hard prefix prompts in prompt engineering." -automatic hallucination assessment for aligned large language models via transferable adversarial attacks,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it explores the creation of prompts (in this case, adversarial attacks) that influence language model performance. This involves understanding how prompting affects LLM behavior and assessing the models' reliability, which is a core aspect of prompt engineering. The use of prompt chaining is directly related to the design and engineering of prompts that can manipulate or test the behavior of LLMs. Although the study's focus is on hallucination and the generation of evaluation data, the methods used are a part of prompt engineering practices." -improving few-shot generalization of safety classifiers via data augmented parameter-efficient fine-tuning,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it explores the use of prompt-tuning (a form of prompt engineering) combined with data augmentation to improve the performance of language models on safety classification tasks. 
This work directly pertains to the field of prompt engineering, as it aims to enhance model generalization using techniques that modify the input prompt structure to better guide the model in few-shot learning scenarios. The approach mentioned, similarity-based data-augmentation + prompt-tuning (DAPT), is a specific instance of prompt engineering, thus making the study quite relevant. Despite the focus on domain-generalized few-shot learning for safety applications and not solely on 'hard prefix prompts', the paper’s exploration of prompt-tuning in practice warrants a high relevance score." -evaluation of gpt-3.5 and gpt-4 for supporting real-world information needs in healthcare delivery,gpt-4-1106-preview,6,"The abstract highlights the need for further research in prompt engineering to improve the performance of large language models (LLMs) in healthcare settings. It mentions the variability in the quality of responses by GPT-3.5 and GPT-4 to specific information needs, which implies that there is room for improvement in how prompts are designed to achieve better results. This is relevant to the study of prompt engineering since it suggests that better-designed prompts could potentially lead to more accurate and useful responses from LLMs. However, the abstract does not directly focus on 'hard prefix prompts' but rather on the broader application of LLMs in healthcare. Therefore, it is somewhat relevant but not fully focused on prompt engineering, hence the rating of 6." -tempera: test-time prompt editing via reinforcement learning,gpt-4-1106-preview,9,"The paper's abstract indicates that the work is highly relevant to prompt engineering as it presents a novel method (TEMPERA) which focuses on editing prompts using reinforcement learning. This directly aligns with innovations and advancements in prompt design strategies for large language models, which is at the heart of prompt engineering studies. The only reason the rating is not a full 10 is that the relevance might be slightly more specific to reinforcement learning techniques in prompt engineering rather than a broad systematic review on 'hard prefix prompts'. However, the contributions to optimizing prompts and improving sample efficiency are very pertinent to the field." -gpt-4 is too smart to be safe: stealthy chat with llms via cipher,gpt-4-1106-preview,8,"The relevance of this study to prompt engineering is high because it directly investigates the interaction dynamics between humans and LLMs (Large Language Models) by introducing a novel method of communication—CipherChat. This approach challenges existing safety alignment techniques, which are crucial for prompt engineering as they ensure that model responses align with intended outcomes and ethical guidelines. The use of ciphers as a tool to test and potentially enhance LLMs' interpretative faculties aligns with prompt engineering strategies that seek to refine how models understand and generate language-based responses. Furthermore, the discovery of a 'secret cipher' within LLMs and the development of a SelfCipher method pertains to advanced prompt engineering, where understanding model behavior in non-natural languages can lead to more sophisticated and safer human-AI interactions. However, because the study primarily focuses on safety alignment and communication in ciphers, which are a subset of prompt engineering tasks, it does not fully encompass the breadth of prompt engineering studies. Hence, the rating falls short of a perfect score." 
-chatspot: bootstrapping multimodal llms via precise referring instruction tuning,gpt-4-1106-preview,4,"The study primarily focuses on improving human-AI interactivity within multimodal large language models by introducing a more sophisticated method of instruction via referring prompts. While this does involve some form of prompt engineering, specifically in relation to how the model receives and understands instructions, it is not strictly concerned with 'hard prefix prompts' as it seems to combine multiple input modalities (language, clicks, drag-and-drop, drawings). The relevance is thus moderate because it does intersect with the concept of prompt design and efficacy but does not explicitly address the engineering of hard-coded text prompts within a linguistic context." -ask an expert: leveraging language models to improve strategic reasoning in goal-oriented dialogue models,gpt-4-1106-preview,8,"The study focuses on incorporating strategic reasoning into dialogue models through the use of specialized prompts, which is related to prompt engineering. Although the 'hard prefix prompt' is not explicitly mentioned, the concept of structured prompts guiding dialogue systems is fundamental to prompt engineering and is reflected in the 'Ask an Expert' framework. This framework relies on pre-specified prompts to direct the conversation, which is a core aspect of prompt engineering. The relevance to prompt engineering is high, but the rating is not a full 10 due to the absence of a direct focus on 'hard prefix prompts' specifically." -zero-shot visual relation detection via composite visual cues from large language models,gpt-4-1106-preview,9,"The described study's focus on using language model-generated description-based prompts, referred to as 'Composite Description prompts', to improve zero-shot visual relation detection directly relates to the field of prompt engineering. The systematic review of 'hard prefix prompts' could encompass studies that explore innovative ways of combining language models with vision tasks, including the generation of prompts to guide visual recognition. Furthermore, the introduction of a chain-of-thought method to prompt language models for weight generation aligns with strategic prompt design to elicit specific model behaviors. Thus, the relevance is high, though not a perfect 10 as the primary focus is on visual relation detection rather than prompt engineering exclusively." -chill: zero-shot custom interpretable feature extraction from clinical notes with large language models,gpt-4-1106-preview,6,"The described study focuses on using expert-crafted queries to generate interpretable features from health records, which indirectly relates to prompt engineering since it involves crafting queries (prompts) for a model to generate useful outputs. However, the study applies the technique for feature extraction from clinical notes rather than the systematic review of 'hard prefix prompts,' which is more specific to improving prompt engineering methods or understanding their efficacy. Therefore, the relevance is moderate but not directly focused on the prompt engineering field as defined by the initial prompt." -can llms keep a secret? testing privacy implications of language models via contextual integrity theory,gpt-4-1106-preview,7,"The study discusses the implications of information handling by large language models (LLMs), which relates to how these models process and output information based on the instructions (prompts) they receive. 
While it does not directly address 'hard prefix prompts,' it touches on the broader topic of prompt design and its influence on model behavior, particularly regarding privacy. It is relevant to prompt engineering since understanding and improving the privacy reasoning capabilities of LLMs can lead to the development of better prompts that protect user privacy. The rating is not a perfect 10 because the study's focus is on privacy and not explicitly on the structure or format of the prompts themselves, which would be a central aspect of a study dedicated entirely to prompt engineering." -interleaving pre-trained language models and large language models for zero-shot nl2sql generation,gpt-4-1106-preview,4,"The abstract discusses the development of a framework (ZeroNL2SQL) that involves using prompts to guide language models for a specialized task (NL2SQL generation). Although the specific term 'hard prefix prompts' is not used, the concept of using prompts to direct language model behavior is central to the study. This indicates some relevance to the study of prompt engineering but not directly focused on hard prefix prompts or a systematic review of them. Therefore, it is somewhat relevant to prompt engineering but not fully aligned with a comprehensive systematic review on that specific topic." -distinguish before answer: generating contrastive explanation as knowledge for commonsense question answering,gpt-4-1106-preview,8,"The abstract describes CPACE, a model that uses explanation prompts to generate contrastive explanations from symbolic knowledge, which is particularly relevant to the field of prompt engineering. The use of prompts to guide the generation of explanations indicates that this research is focused on enhancing the interpretability and effectiveness of a question answering system through careful design of prompts. While not exclusively focused on 'hard prefix prompts', the study emphasizes the use of prompts in an AI model, which aligns with studies in prompt engineering. The relevance rating is not the maximum because the connection to 'hard prefix prompts' is not direct, yet the concept of using prompts to drive AI behavior is central to the research presented." -using global land cover product as prompt for cropland mapping via visual foundation model,gpt-4-1106-preview,7,"The abstract discusses leveraging the 'Pretrain+Prompting' paradigm, which is relevant to prompt engineering as it involves designing prompts to aid in domain adaptation for cropland mapping. The introduction of the auto-prompting (APT) method aligns with prompt engineering by using prompts to modify the behavior of pre-trained models on specific tasks. However, the direct focus on cropland mapping and the use of visual foundation models means it is not exclusively centered on prompt engineering but rather its application in a specific domain. Thus, it is moderately relevant but not a comprehensive systematic review on hard prefix prompts." -epa: easy prompt augmentation on large language models via multiple sources and multiple targets,gpt-4-1106-preview,8,"The paper describes a method called EPA (Easy Prompt Augmentation) which is directly related to prompt engineering. It improves the performance of large language models by augmenting task prompts with paraphrased demonstrations, reducing the user's effort in creating effective prompts. Since the study is about a technique to enhance prompt efficacy for NLP tasks, it has high relevance to the field of prompt engineering. 
However, the information provided does not explicitly mention 'hard prefix prompts', which was the specific topic of interest mentioned in the original inquiry; thus, the rating is not a full 10." -expclip: bridging text and facial expressions via semantic alignment,gpt-4-1106-preview,7,"The abstract describes a research study that focuses on using natural language prompts to control the style of facial expressions in speech-driven animation, which is relevant to prompt engineering in the context of using language prompts for specific tasks. However, the primary application is in the domain of facial animation rather than prompt engineering for text generation or data processing tasks. Nevertheless, the study's use of a CLIP-based model and the development of a Text-Expression Alignment Dataset (TEAD) suggest significant overlap with prompt engineering methodologies, as it involves the alignment of text prompts with emotional expressions. The relevance is not complete as the scope of prompt engineering can be more extensive, but the techniques and mechanisms such as automatic annotation with LLMs and Expression Prompt Augmentation (EPA) are of interest to the field of prompt engineering." -divknowqa: assessing the reasoning ability of llms via open-domain question answering over knowledge base and text,gpt-4-1106-preview,4,"The study focuses on the retrieval capabilities of Large Language Models and how they can be grounded on heterogeneous knowledge sources for better question-answering performance. While it relates to prompt engineering in the broader context of machine learning and enhancing LLMs' interactions with external data, the study's primary concern is not with hard prefix prompts directly but rather with improving the information retrieval process, which is a component of the system that supports effective prompting. Therefore, its relevance to prompt engineering, specifically to a systematic review of hard prefix prompts, is tangential rather than central." -tailoring personality traits in large language models via unsupervisedly-built personalized lexicons,gpt-4-1106-preview,7,"The study described in the abstract addresses the manipulation of language models' outputs by tailoring personality traits, which is related to prompt engineering in the sense that it involves guiding the language model to generate text with certain characteristics. Although the main focus is on personality traits via lexical choices rather than 'hard prefix prompts,' it still falls within the broader scope of controlling language model behavior, which is a key aspect of prompt engineering. Thus, the relevance is significant but not directly aligned with hard prefix prompts, hence the rating is not a full 10." -denevil: towards deciphering and navigating the ethical values of large language models via instruction learning,gpt-4-1106-preview,9,"The described paper is highly relevant to prompt engineering, as it develops a novel prompt generation algorithm (DeNEVIL) that interacts with large language models to explore and expose their ethical value alignment through instructions. Although not directly labeled as 'hard prefix prompts,' the concept of generating prompts to induce model behavior aligns with studies concerning prompt design and efficacy. The focus on ethical considerations adds a dimension of value-based prompt engineering, which is a specialized and relevant aspect of the broader field of prompt engineering studies." 
-pitl: cross-modal retrieval with weakly-supervised vision-language pre-training via prompting,gpt-4-1106-preview,8,"The study is highly relevant to the field of prompt engineering as it describes a method to improve the performance of vision-language pre-training models by using prompts to elicit knowledge from large language models. The method, called Prompts-in-The-Loop (PiTL), uses prompts to generate language counterparts for images, which reduces the need for paired image-text data and is a direct application of prompt engineering techniques. Although the study does not specifically focus on 'hard prefix prompts', it is still related to the broader area of prompt engineering, hence the rating of 8." -vision-language interpreter for robot task planning,gpt-4-1106-preview,7,"The study discussed in the abstract is moderately relevant to prompt engineering, as it deals with the generation of problem descriptions (PDs) from language instructions, which is a component of prompt engineering. In prompt engineering, one must design prompts that effectively communicate tasks to language models, and here, the model is interpreting language to create PDs for robot task planning. Although the study focuses on robot planning and multimodal inputs, the underlying principle of translating natural language into machine-readable formats aligns with the techniques and goals of prompt engineering. The interdisciplinary nature of this research, combining language models with symbolic planners, reflects the complexity encountered in prompt engineering scenarios. However, it does not directly address 'hard prefix prompts,' which suggests it is not fully specialized in the field of prompt engineering but is nonetheless relevant." -prompting with pseudo-code instructions,gpt-4-1106-preview,8,"The paper directly addresses the concept of 'prompt engineering' by exploring the use of pseudo-code as a form of prompt style for improving the performance of pre-trained language models. It compares pseudo-code prompts with natural language prompts and presents empirical results showing the effectiveness of pseudo-code, which includes structural elements pertinent to the field of prompt engineering. The improvement in performance metrics like F1 scores for classification and ROUGE-L scores for generative tasks indicates a significant relevance to the area of study. However, it focuses specifically on pseudo-code prompting rather than a broader range of hard prefix prompts, which is why the rating is not a full 10." -towards general visual-linguistic face forgery detection,gpt-4-1106-preview,8,"The abstract describes a study that centers on using 'fine-grained sentence-level prompts' for more effective face forgery detection. Prompt engineering is directly related to the design of these fine-grained prompts, making it highly relevant to the stated topic. The use of prompts within a Visual-Linguistic Face Forgery Detection system to improve semantic information and interpretability aligns with the study of hard prefix prompts which are designed for better interaction between language and models. The rating isn't a full 10 because the study focuses on a specific application of prompts in face forgery detection rather than a broad systematic review of hard prefix prompts across various domains." 
-forgetful large language models: lessons learned from using llms in robot programming,gpt-4-1106-preview,9,"The abstract indicates a study focused on reducing errors in the execution of robotic programming tasks by employing language models with prompts. Although it concentrates on the 'forgetfulness' of LLMs and proposes solutions through prompt engineering tactics, it doesn't strictly cover 'hard prefix prompts' as the original study question suggests. However, the relevance is quite high as the paper seems to be a direct application of prompt engineering to improve task performance. Only the specific focus on prefix prompts is not stated, which slightly reduces the rating." -interpretable unified language checking,gpt-4-1106-preview,8,"The abstract mentions the use of a 'simple, few-shot, unified set of prompts' for improving the performance of large language models (LLMs) on a variety of language checking tasks. This indicates that the research involved studies on how prompt engineering can enhance the capabilities of LLMs in detecting misinformation, stereotypes, and hate speech. Although the focus is not solely on 'hard prefix prompts,' the relevance to prompt engineering is clear because the study explores how different kinds of prompts can affect the performance of LLMs on specific language tasks. The rating is not a full 10 because the abstract does not focus exclusively on a systematic review of prompt engineering or on 'hard prefix prompts', which are specific types of prompts used to control the behavior of language models." -genrec: large language model for generative recommendation,gpt-4-1106-preview,7,"The abstract indicates the use of 'specialized prompts' to improve the ability of a Large Language Model (LLM) to understand recommendation tasks, which implies a form of prompt engineering. Since prompt engineering is essential for fine-tuning LLMs to perform specific tasks such as generative recommendation, and this paper discusses formulating these prompts, it has a substantial relevance to prompt engineering study. However, the focus of the abstract seems more on the application of large language models for recommendation systems rather than the detailed study of hard prefix prompts, which prevents a perfect score." -"a multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity",gpt-4-1106-preview,8,"The abstract describes an evaluation framework that specifically includes assessments of ChatGPT's capabilities in a 'multi-turn ""prompt engineering"" fashion,' indicating that the study examines and utilizes prompt engineering as a part of the evaluation process. Since prompt engineering is integral to optimizing the performance of ChatGPT in various tasks as mentioned in the abstract, it is highly relevant to the study of prompt engineering. However, it is not entirely focused on 'hard prefix prompts,' which would be explicitly tailored cues designed to guide the language model's responses; therefore, the rating is not a full 10." -chain-of-thought prompt distillation for multimodal named entity and multimodal relation extraction,gpt-4-1106-preview,8,"This abstract describes a study focused on prompt distillation, which is a technique related to prompt engineering. The core concept of prompt engineering is leveraged here, as it involves crafting prompts to extract reasoning abilities from large language models and effectively transfer this knowledge to smaller models. 
This research is relevant to the field of prompt engineering, specifically concerning the design of 'chain-of-thought' prompts to facilitate multimodal understanding. Although the study doesn't explicitly focus on 'hard prefix prompts,' it aligns closely with the larger domain of prompt engineering, thus meriting a high relevance rating." -fedlogic: interpretable federated multi-domain chain-of-thought prompt selection for large language models,gpt-4-1106-preview,9,"The relevance of the paper 'FedLogic: Interpretable Federated Multi-domain Chain-of-Thought Prompt Selection for Large Language Models' to prompt engineering is high. It directly addresses the challenge of prompt selection in LLMs, aiming to improve both the precision of responses and the interpretability of the prompting process. The focus on Chain-of-Thought reasoning, a method that has shown promise for enhancing the quality of LLM outputs, further emphasizes its relevance to the current landscape of prompt engineering. The introduction of FedLogic to navigate the complexities of multi-domain prompt selection and its emphasis on a theoretical framework and constraint incorporation suggests significant contributions to the field of prompt engineering. The only reason it doesn't score a full 10 is that the abstract does not mention 'hard prefix prompts,' which might be understood as a subset or a particular method within prompt engineering; the paper seems to focus more broadly on CoT prompts." -robust preference learning for storytelling via contrastive reinforcement learning,gpt-4-1106-preview,7,"The abstract describes an approach to controlled automated story generation that involves a level of prompt engineering, particularly in the fine-tuning phase using prompt-learning techniques. This suggests relevance to the study of prompt engineering, especially in the context of enhancing the robustness of a generative model's outputs with respect to user preferences. However, the focus of the study is on contrastive reinforcement learning rather than exclusively on hard prefix prompts or a detailed dissection of prompt engineering approaches. The relevance is therefore notable but not comprehensive concerning prompt engineering as a broad field." -using natural language explanations to rescale human judgments,gpt-4-1106-preview,7,"The abstract describes a study involving the use of large language models (LLMs) to rescale human judgment annotations based on natural language explanations. This is relevant to prompt engineering as it directly pertains to the optimization of LLM outputs through the integration of human feedback. Specifically, feeding Likert ratings and explanations into an LLM to homogenize ratings across annotators is a form of prompt design that guides the model to generate more consistent and possibly more reliable numeric scores. The technique is studied within the context of a specific NLP task (document-grounded question answering), and it addresses challenges inherent in subjective human evaluations which are critical for training and evaluating LLMs. The relevance is not rated higher because the study is more focused on the annotation process and the rescaling of human judgments rather than the construction of hard prefix prompts specifically." 
-chain-of-thought prompt distillation for multimodal named entity recognition and multimodal relation extraction,gpt-4-1106-preview,8,"The abstract discusses leveraging the 'chain of thought' (CoT) as an intermediate reasoning process for distilling knowledge from large language models to a student model, which is highly relevant to prompt engineering. This process directly involves designing prompts to elicit reasoning steps, indicating how the model should approach a problem, thus involving prompt engineering. However, the focus is primarily on multimodal named entity recognition and relation extraction, so it is not entirely within the realm of hard prefix prompts in a strict sense, hence the rating is not a full 10." -litsumm: large language models for literature summarisation of non-coding rnas,gpt-4-1106-preview,9,"The abstract discusses the use of large language models (LLMs) with a series of prompts and checks to automatically generate summaries of literature for non-coding RNAs, which is highly relevant to prompt engineering. The study highlights the importance of prompt design in achieving high-quality output from LLMs. It illustrates a practical application of prompt engineering within the context of automating curation processes in the life science field. This aligns closely with the concept of 'hard prefix prompts' in prompt engineering studies, as it emphasizes the effectiveness of structured input (prompts) in guiding the language model toward the desired task. The sole reason for not rating it a perfect 10 is that the abstract does not focus exclusively on the theory or mechanics of prompt engineering itself, but rather on the application of prompt engineering techniques in a specific domain." -who wrote it and why? prompting large-language models for authorship verification,gpt-4-1106-preview,9,"The abstract outlines a study that uses engineered prompts, specifically 'step-by-step stylometric explanation prompts,' as a key component of their proposed method (PromptAV) for authorship verification. This directly falls into the scope of prompt engineering studies as it involves designing prompts that enable a Large-Language Model to perform a specific task more effectively. The work not only engages with prompt design but also tackles the challenges of data efficiency and model interpretability, which are pertinent to the development and assessment of prompts in language models. The one point deduction is due to the possibility that the study may not encompass a 'comprehensive systematic review' on the topic, but rather presents a novel approach within the field." -alltogether: investigating the efficacy of spliced prompt for web navigation using large language models,gpt-4-1106-preview,7,"The study addresses the concept of prompt engineering by introducing 'AllTogether,' a prompt template aimed at improving the performance of Large Language Models in web navigation tasks, which is a specialization within prompt engineering. Though the study's focus is not on 'hard prefix prompts' specifically, it is still relevant to the broader domain of prompt engineering because it explores how to optimize prompts to enhance LLMs' understanding of tasks. As such, while it does not cover the full breadth of prompt engineering, especially with regards to systematic reviews of hard prefix prompts, it does contribute to the field by investigating prompt efficacy and template standardization." 
-wordart designer: user-driven artistic typography synthesis using large language models,gpt-4-1106-preview,8,"The paper describes a framework for artistic typography synthesis that centrally involves the use of Large Language Models (LLMs) to interpret user inputs and generate actionable prompts, which is directly related to prompt engineering. While the title does not explicitly mention 'hard prefix prompts', the 'LLM Engine' described operates with some form of prompt that guides the generation process. This indicates that the study does indeed involve an aspect of prompt engineering, particularly as it pertains to the synthesis of graphic designs. However, as the prompt in question specifically asks for a 'comprehensive systematic review on hard prefix prompts,' an approach or a model that is not the primary subject of this paper, the relevance is not maximal. Therefore, the rating reflects high relevance to prompt engineering in general but not a perfect match to the exact subject of 'hard prefix prompts.'"
-context-aware prompt tuning for vision-language model with dual-alignment,gpt-4-1106-preview,9,"The abstract describes the development and application of a method called Dual-Aligned Prompt Tuning (DuAl-PT) in the context of vision-language models, which is highly relevant to the field of prompt engineering. Prompt engineering is a critical aspect of adapting large models to specific tasks, and the introduction of a novel method that utilizes both pre-trained language models and alignment techniques directly pertains to advancements in prompt engineering. The high relevance is underscored by the explicit focus on improving the efficiency and context-awareness of prompts, which are key goals in prompt engineering. The reason for not giving a perfect 10 is that the abstract does not focus on 'hard prefix prompts' specifically but rather on prompt learning methods in general, which encompasses a wider field than the specified study area."
-de-diffusion makes text a strong cross-modal interface,gpt-4-1106-preview,7,"The title and abstract suggest that the study focuses on encoding images as text for use in a cross-modal interface, which has relevance to prompt engineering considering that prompts are a form of text input. The approach allows for the use of natural language as an interface to interact with images and demonstrates the potential to prompt large language models for multi-modal tasks. The relevance to prompt engineering is significant due to the generation of text representations that can serve as prompts and the improvement in interfacing with text-to-image tools. However, the paper is more focused on the cross-modal exchange and image representation than on the design or optimization of prompts themselves, which are typically the main focus in prompt engineering studies."
-llamarec: two-stage recommendation using large language models for ranking,gpt-4-1106-preview,7,"The abstract describes a use of large language models (LLMs) in a two-stage recommendation framework, which includes the use of prompt templates for inputting user interaction history and candidate items into the LLM. Prompt engineering is relevant here because the design of prompt templates can be considered a form of engineering prompts to improve the performance of the LLM in the task of ranking-based recommendation. However, the study does not seem to focus primarily on the 'hard prefix prompts' aspect, but rather on the overall framework of using LLMs for recommendation, which includes prompt engineering as a component. Therefore, the relevance is significant but not exclusive to prompt engineering study."
-scpatcher: mining crowd security discussions to enrich secure coding practices,gpt-4-1106-preview,6,"The paper discusses SCPatcher, a tool that uses Prompt Learning with a Large Language Model to improve secure coding practices by mining crowd security discussions. Although the primary focus is on enhancing secure coding, the use of Prompt Learning is relevant to the study of prompt engineering. However, the paper does not specifically focus on 'hard prefix prompts' as implied by the term 'prompt engineering study.' Therefore, the relevance to prompt engineering is secondary and not central to the main objective of the paper, resulting in a moderate rating of relevance."
-mind the instructions: a holistic evaluation of consistency and interactions in prompt-based learning,gpt-4-1106-preview,9,The abstract describes research that directly relates to prompt engineering by analyzing the stability and consistency of language model predictions in response to different prompting setups. This type of investigation is crucial for understanding how different prompt designs affect model performance and is a core aspect of prompt engineering. The high relevance rating is due to the focus on prompt-based learning and the systematic review of factors that influence the behavior of language models in response to prompts.
-collaborative large language model for recommender systems,gpt-4-1106-preview,7,"The abstract mentions the development of CLLM4Rec, which incorporates a 'soft+hard prompting strategy' during the pretraining stage for language modeling on recommendation system-specific corpora. The mention of hard prompts directly ties to prompt engineering, particularly within the context of integrating these prompts to improve the performance of a recommender system driven by a large language model. Given that the paper appears to specifically address and include prompt engineering strategies, it is relevant to studies of prompt engineering albeit focused more on the application within recommender systems rather than a general discussion or a review of hard prefix prompts in a wide array of domains. The rating is not a full 10 because the primary focus is on the recommender systems with prompt engineering being an element of the solution rather than the main subject of the paper."
-can large language models be good path planners? a benchmark and investigation on spatial-temporal reasoning,gpt-4-1106-preview,6,"The title and abstract indicate research dealing with how large language models can handle tasks requiring spatial-temporal reasoning, which includes the analysis of few-shot prompting methodologies. These methodologies are a subset of prompt engineering, as they explore how to design prompts that enable language models to perform spatial reasoning tasks. While the focus is not explicitly on 'hard prefix prompts' as the prompt engineering study may suggest, few-shot prompting, as part of prompt engineering, is relevant because it discusses the effectiveness of different prompting techniques. Therefore, the study is indirectly related to the broader field of prompt engineering but does not directly address the comprehensive systematic review on hard prefix prompts."
-automated assessment of comprehension strategies from self-explanations using llms,gpt-4-1106-preview,8,"The study's focus on leveraging open-source Large Language Models for the assessment of comprehension strategies is highly relevant to prompt engineering given that it employs the technique of fine-tuning LLMs and providing examples via prompts to improve performance. This is particularly pertinent to the field of prompt engineering as it directly involves strategies for optimizing the interaction with LLMs to achieve better outcomes in understanding and generating text. Although the study does not specifically mention 'hard prefix prompts', the practice of providing examples via the prompt and the implicit structuring of input to elicit specific types of responses are at the core of prompt engineering studies. Hence, the relevance to prompt engineering is quite significant, but not entirely focused on the 'hard prefix prompts' aspect, leading to a rating of 8."
-amortizing intractable inference in large language models,gpt-4-1106-preview,4,"The provided abstract discusses the use of amortized Bayesian inference to sample from intractable posterior distributions in autoregressive large language models (LLMs) and touches on chain-of-thought reasoning as a latent variable modeling problem. While this research is related to the functioning and fine-tuning of LLMs, it does not directly address 'hard prefix prompts' or any aspect of prompt engineering. However, the methods developed in this work for fine-tuning LLMs could indirectly benefit prompt engineering by enabling more efficient adaptation of models to specific tasks, which is why the relevance rating is not at the lowest end of the scale."
-fabricator: an open source toolkit for generating labeled training data with teacher llms,gpt-4-1106-preview,4,"The relevance to prompt engineering is moderate. The abstract discusses the use of LLMs (Large Language Models) to generate labeled data for training other NLP models, which does involve prompting the LLM to produce specific outputs. The process of designing these prompts to effectively direct the LLM's output towards useful labeled data creation is related to 'prompt engineering.' However, the abstract does not specifically mention 'hard prefix prompts' nor does it focus on a comprehensive systematic review of such. Therefore, while the topic is related to prompt engineering, it does not fully align with a 'comprehensive systematic review on hard prefix prompts.' Thus, the given rating is moderately relevant, but not directly on point."
-scalable multi-robot collaboration with large language models: centralized or decentralized systems?,gpt-4-1106-preview,6,"The abstract describes research on planning frameworks for pre-trained large language models (LLMs) in multi-robot task scenarios, addressing token efficiency which relates to the token budget and potentially the prompt constructions. While the study isn't focused specifically on 'hard prefix prompts', it does engage with prompt engineering in the context of task planning for robots using LLMs. The relevance to prompt engineering is indirect through its exploration of token-efficient LLM frameworks and mention of prompting techniques, which could include prompt design or optimization. However, the core focus is on the application within robotics rather than the systematic study or review of prompt engineering itself."
-distilled language models are economically efficient for the enterprise. ...mostly.,gpt-4-1106-preview,7,"The abstract discusses the comparison of three strategies to specialize a Large Language Model (LLM) for enterprise use in assisting customer service agents, one of which is prompt engineering. While the main focus appears to be on the economic efficiency of using distilled language models, prompt engineering is directly mentioned as one of the methods assessed. Therefore, it is relevant from the perspective of comparing the effectiveness and costs of different methods of leveraging LLMs, including prompt engineering. However, the complete focus on prompt engineering is not evident, thus not deserving a full score."
-curriculum prompt learning with self-training for abstractive dialogue summarization,gpt-4-1106-preview,8,"The paper presents a curriculum-based prompt learning method which is highly relevant to the field of prompt engineering. The method's gradual increase in prompt perturbation is particularly pertinent to the study of hard prefix prompts, as it deals with enhancing the model's understanding through strategically structured prompts. However, it doesn't focus exclusively on 'hard prefix prompts' but rather on prompt learning in general within the specific application of dialogue summarization. Thus, while the paper is relevant due to its focus on innovative prompt engineering techniques, the relevance is not perfect as the study does not solely center on hard prefix prompts per se."
-the impact of scaffolding prompts on the collaborative problem solving of ill-structured tasks by undergraduate engineering student groups,gpt-4-1106-preview,5,"The study seems to focus on scaffolding prompts in the context of collaborative problem solving for ill-structured tasks, rather than hard prefix prompts specifically. Nonetheless, the research is relevant to the field of prompt engineering to some extent because it explores how certain types of prompts can affect the problem-solving abilities of engineering students. This could indirectly inform studies or practices within prompt engineering, especially concerning the design of prompts that facilitate learning and problem-solving in educational settings."
-meta-augmented prompt tuning for better few-shot learning,gpt-4-1106-preview,7,"The study mentioned in the abstract addresses issues related to prompt tuning, particularly in the context of few-shot learning. While prompt tuning is directly relevant to prompt engineering, the study focuses on soft prompts rather than hard prefix prompts. The proposed SUMMER framework seeks to improve the initialization and generalizability of soft prompts, which is suggestive of techniques that could potentially be applicable to a broader set of prompt engineering challenges. However, since the study is not specifically about hard prefix prompts, the relevance is significant but not direct, leading to a rating of 7."
-exploring the impacts of cognitive and metacognitive prompting on students’ scientific inquiry practices within an e-learning environment,gpt-4-1106-preview,4,"While the study focuses on the use of prompts to enhance scientific inquiry in an educational context, and thus tangentially touches upon the concept of prompting, it does not directly address prompt engineering related to natural language processing or AI prompt design. The relevance lies in the investigation of prompts effectiveness, which could be conceptually extended to prompt engineering for AI systems. However, the study's primary focus on educational cognitive and metacognitive prompts limits its direct applicability to prompt engineering study, specifically regarding hard prefix prompts."
-autotutor: a tutor with dialogue in natural language,gpt-4-1106-preview,4,"While the 'autotutor' paper focuses on a system that uses dialogue in natural language, which is tangentially related to prompt engineering in the sense that it deals with natural language processing and potentially the design of prompts for tutorial purposes, it does not directly address 'hard prefix prompts' or systematic reviews related to prompt engineering studies. The connection to prompt engineering is more incidental as it relates to dialogue patterns and design, which might apply to the field but are not centrally concerned with the systematic approaches to hard prefix prompts specifically. Therefore, the relevance is moderate, as some of the underlying principles may be applicable, but the core subject of the study diverges from the specific focus on prompt engineering."
-combating the covid-19 infodemic using prompt-based curriculum learning,gpt-4-1106-preview,7,"The abstract suggests that the study involves a prompt-based curriculum learning method, which is connected to the field of prompt engineering, as it implies the use of prompts to extract reliable information from a text. This method seems to be focused on content verification, relevant to the application of prompt engineering in creating models that combat misinformation—a key aspect of information processing and decision-making for AI language models. However, the absence of specific details on 'hard prefix prompts' means the study may not be exclusively focused on the aspect of 'hard prefix prompts' in prompt engineering, thus not warranting a higher relevance score."
-taxonprompt: taxonomy-aware curriculum prompt learning for few-shot event classification,gpt-4-1106-preview,7,"The title suggests that the study involves 'taxonomy-aware curriculum prompt learning' which indicates a connection to 'prompt engineering', as it discusses designing prompts that are aware of a certain taxonomy. This seems relevant for prompt engineering studies since it likely deals with the creation and optimization of prompts for machine learning tasks. However, without an abstract or TLDR, it's difficult to determine the exact focus of the paper and its direct applicability to hard prefix prompts, hence the relevance is not rated higher."
-fpc: fine-tuning with prompt curriculum for relation extraction,gpt-4-1106-preview,9,The paper's focus on prompt-based fine-tuning aligns closely with the study of prompt engineering. It explores how prompts can be designed and utilized to improve the performance of relation extraction tasks by capturing the semantics of relation labels. The concept of a 'Prompt Curriculum' contributes to the field by addressing how to incrementally build up a model's capacity through prompts. This is highly relevant to prompt engineering as it deals with strategic prompt design and application in the context of fine-tuning pre-trained language models. The reason it is not a full 10 is because it is specific to relation extraction and may not cover every aspect of prompt engineering in a broader sense.
-learning to perform complex tasks through compositional fine-tuning of language models,gpt-4-1106-preview,7,"The abstract describes a method related to prompt engineering — compositional fine-tuning (CFT). While it does not directly address 'hard prefix prompts,' it does engage with the broader theme of structuring the interaction with language models to improve task performance. The work on CFT contributes to our understanding of how tasks can be decomposed and taught to language models, which is relevant to the study of how prompts can be designed and optimized. This is tangentially related to hard prefix prompts, as both are concerned with the efficacy of input structures for language models. However, the focus on CFT instead of hard prefix prompts directly means the relevance is significant but not complete."
-which prompts make the difference? data prioritization for efficient human llm evaluation,gpt-4-1106-preview,5,"The provided title and abstract describe a study focused on the optimization of human evaluations of large language models through data prioritization, which is indirectly relevant to prompt engineering. Prompt engineering typically involves constructing prompts to elicit specific outputs from language models but does not directly address the question of human evaluators in the loop. However, the study's implications for improving the efficiency of model-evaluation can influence the prompt engineering process indirectly by refining the human feedback loop connected to prompt tuning performances. This makes it somewhat relevant to prompt engineering, especially in the scope of human-in-the-loop evaluations and performance measurement. Nonetheless, the study does not seem to address 'hard prefix prompts' or any specific prompt engineering methodologies, which limits its direct relevance to the field of prompt engineering."
-"conversational challenges in ai-powered data science: obstacles, needs, and design opportunities",gpt-4-1106-preview,7,"The study addresses some core issues related to prompt engineering, such as formulating prompts for complex tasks and refining prompts iteratively. These topics are highly relevant to the field, as effective communication with LLMs is contingent upon constructing well-defined prompts. However, the study seems to focus more broadly on conversational challenges in AI within data science, rather than exclusively on 'hard prefix prompts' or systematic reviews on prompt engineering. Thus, while the content is relevant, it does not specifically target hard prefix prompts or provide a comprehensive systematic review, which the prompt specifically asks for."
-cona: a novel context-aware instruction paradigm for communication using large language model,gpt-4-1106-preview,8,"The abstract discusses CONA, a context-aware instruction paradigm designed for effective knowledge dissemination with GPT models, which certainly falls under the broader category of prompt engineering, as it explores new methods for communication and interaction with LLMs. Despite not addressing 'hard prefix prompts' specifically, it presents a framework that utilizes the mechanisms of prompt engineering to optimize interactions with LLMs. However, the connection to 'hard prefix prompts' is not explicit, hence the rating is not a full 10."
-exploring the design space of ai based code completion engines,gpt-4-1106-preview,8,"The abstract describes a thesis that has a significant focus on prompt engineering as it pertains to AI-based code completion tools like Github Copilot. It explicitly mentions the study of prompt engineering in the context of providing the AI model with the right context and assessing the impact of that context on the quality of the code suggestions. While the study seems more broadly focused on the overall design and factors affecting code completion tools, prompt engineering is indeed a crucial aspect of the thesis as it can greatly influence the AI model's performance. Therefore, it is highly relevant to the study of prompt engineering, though it might not focus solely on 'hard prefix prompts' as specified in the original prompt."
-incremental learning of humanoid robot behavior from natural interaction and large language models,gpt-4-1106-preview,7,"The study discusses the integration of Large Language Models (LLMs) into the behavior orchestration of a humanoid robot, focusing on natural-language interaction and incremental learning through feedback loops. While not directly focusing on hard prefix prompts, the concept of 'incremental prompt learning' is introduced, where the system learns and modifies its interactions based on human feedback. This relates to prompt engineering in the broader sense because it involves designing and refining prompts that the LLM uses to generate proper Python statements, which directly affect the robot's actions. However, the study does not appear to specifically address hard prefix prompts or a systematic review thereof, hence the score is not a full 10, reflecting its partial relevance to the specific area of prompt engineering mentioned in the initial query."
-tree-planner: efficient close-loop task planning with large language models,gpt-4-1106-preview,8,"The paper discusses an approach to task planning with Large Language Models that includes the use of prompts to generate plans, which is closely linked to the concept of prompt engineering. While the focus is on efficiency and error reduction in iterative actions rather than the study of hard prefix prompts specifically, the principles of designing effective prompts are implicitly a part of the paper due to the need for clear and structured input to guide the LLMs' plan generation and decision-making processes. Therefore, the study is relevant to the broader context of prompt engineering, although it does not directly address a comprehensive systematic review on hard prefix prompts."
-batchprompt: accomplish more with less,gpt-4-1106-preview,9,"The abstract describes research focused on improving the efficiency of large language model prompting through batching strategies, specifically 'BatchPrompt.' This is highly relevant to prompt engineering as it directly tackles the challenge of optimizing prompts for better performance in terms of processing time and resource consumption, which is a core aspect of prompt engineering. The introduction of strategies like Batch Permutation and Ensembling (BPE) and Self-reflection-guided Early Stopping (SEAS) to address performance issues associated with batching denotes a significant contribution to the field. The detailed experimental results showing comparative performance with traditional single-data prompting further underscore the relevance of this study to prompt engineering. The deduction of a point from a perfect score is due to the abstract slightly broader focus on overall efficiency rather than the fine-grained specifics of prompt crafting. However, the study's outcome directly impacts prompt engineering practices for large language models."
-modular and parameter-efficient multimodal fusion with prompting,gpt-4-1106-preview,7,"The paper discusses the use of prompt vectors to align modalities in multimodal fusion, which is relevant to the field of prompt engineering as it involves the use of prompts to achieve model efficiency and modularity. However, it may not directly address the exact concept of 'hard prefix prompts' as might be suggested by a 'comprehensive systematic review'. Nonetheless, the paper still contributes to the broader area of prompt engineering by exploring efficient alternatives to finetuning in multimodal pre-training, thus the rating is above average but not maximum."
-attempt: parameter-efficient multi-task tuning via attentional mixtures of soft prompts,gpt-4-1106-preview,9,"The abstract presents a novel approach to multi-task learning in language models that leverages soft prompts—small prefix embedding vectors—for efficient parameter tuning. Given that the study explicitly addresses prompt engineering through soft prompts and their application in multi-task learning and knowledge transfer, it is highly relevant to the field of prompt engineering. The approach's efficiency and effectiveness in comparison to other tuning methods underscore its significance within the realm of prompt engineering studies. The score is not a perfect 10 because the focus is specifically on 'soft' prompts rather than 'hard' prompts as mentioned in your inquiry, suggesting a slightly wider scope than just hard prefix prompts."
-effective structured prompting by meta-learning and representative verbalizer,gpt-4-1106-preview,8,"The provided abstract details the use of prompts in natural language processing with a focus on prompt tuning and the introduction of a new method called MetaPrompter. It relates directly to prompt engineering as it discusses the initialization of prompts, the use of meta-learning for task-specific prompts, and the creation of a more efficient system for prompt application in pre-trained MLMs. The relevance score is not a full 10 because the abstract does not specifically mention 'hard prefix prompts' which is the particular focus of the solicited comprehensive review. However, it discusses the broader field of prompt engineering and provides insights into the recent developments in prompt tuning techniques, which are pertinent to the study of hard prefix prompts."
-prompting classes: exploring the power of prompt class learning in weakly supervised semantic segmentation,gpt-4-1106-preview,8,"The provided abstract details a study that explores prompt tuning in the context of weakly supervised semantic segmentation (WSSS), which is a specific application of prompt engineering. The focus on how the modification of text prompts can impact the Class Activation Map (CAM) and the introduction of a novel PrOmpt cLass lEarning (POLE) strategy demonstrate a direct relevance to prompt engineering as it pertains to adapting language-vision models to downstream tasks. While the study is specific to WSSS and does not cover the broader topic of 'hard prefix prompts' comprehensively, the principles and findings can contribute valuable insights into the broader field of prompt engineering, hence the high relevance rating."
-rewoo: decoupling reasoning from observations for efficient augmented language models,gpt-4-1106-preview,7,"The study introduces ReWOO (Reasoning WithOut Observation) which aims to make Augmented Language Models more efficient by decoupling the reasoning process from knowledge retrieval. This approach could be highly relevant to prompt engineering, especially in complex systems that require prompt optimization to reduce computational costs and improve efficiency. Since the methodology addresses issues related to prompt redundancy and token optimization, it would contribute to the design of better-engineered prompts that effectively interact with external tools without unnecessary computational overhead. However, the study does not directly focus on 'hard prefix prompts' or the systematic review of various prompt types, therefore the relevance is notable but not absolute."
-efficient domain adaptation of language models via adaptive tokenization,gpt-4-1106-preview,7,"The study discussed in the title 'efficient domain adaptation of language models via adaptive tokenization' is relevant to prompt engineering study to a significant extent. While it does not directly address 'hard prefix prompts', it focuses on improving the adaptation of language models to new domains, which is a related aspect of prompt engineering. The process of optimizing tokenizer behavior for domain-specific understanding can enhance prompt responses by tailoring model input to better represent contextual nuances. This indirect relation to prompt construction and optimization reflects an underlying relevance to prompt engineering, as tokenization is a foundational component that influences the quality of prompts and their interpretation by language models. Nevertheless, the study does not directly tackle prompt engineering methodologies or the systematic review of 'hard prefix prompts', thus the relevance is not maximal."
-parameter-efficient low-resource dialogue state tracking by prompt tuning,gpt-4-1106-preview,9,"The abstract discusses the use of soft prompt token embeddings, which is a technique within the paradigm of prompt engineering. Although it does not discuss 'hard prefix prompts' specifically, it relates closely to the topic as prompt tuning is a key area within prompt engineering studies. The research aims to enhance dialogue state tracking by using prompts to tune language models with fewer parameters, which is a direct application of prompt engineering principles. Therefore, the rating is high because it is very relevant to the broader field of prompt engineering, but not a perfect score as it does not directly pertain to 'hard prefix prompts'."
-uniex: an effective and efficient framework for unified information extraction via a span-extractive perspective,gpt-4-1106-preview,4,"The abstract mentions the use of schema-based prompts within the UniEX framework for universal information extraction, which touches upon the aspect of utilizing prompts in AI tasks. However, it does not specifically address 'hard prefix prompts' or conduct a 'comprehensive systematic review' on prompt engineering. The focus appears to be on information extraction tasks and improving their efficiency through a unified extractive framework rather than on the study of prompt engineering itself. Consequently, it has some relevance due to the mention of prompts but is not centrally focused on prompt engineering studies."
-prompt-tuning in asr systems for efficient domain-adaptation,gpt-4-1106-preview,8,"The paper is highly relevant to the field of prompt engineering as it addresses the application of prompt-tuning, specifically within the context of domain adaptation for Automatic Speech Recognition (ASR) systems. The concept of training a small number of domain-specific token embeddings to adapt a transformer-based language model is a practical example of prompt engineering. By achieving significant performance improvements with a minimal increase in parameters, the study contributes to the field by demonstrating the effectiveness of prompt-based techniques for improving model performance in specialized domains. The lower than perfect score is due to the focus on ASR systems specifically, which is a subset of prompt engineering applications, rather than the entire breadth of prompt engineering."
-panda: prompt transfer meets knowledge distillation for efficient model adaptation,gpt-4-1106-preview,9,"The provided abstract and TLDR discuss research on prompt-tuning and prompt transfer (PoT) as methods for efficient model adaptation in the context of pretrained language models (PLMs), addressing the challenges with smaller PLMs and the innovation of a new approach named PANDA. Since prompt engineering studies how to design and use prompts to communicate effectively with language models, the mentioned techniques of prompt transfer and the novel PANDA approach are highly relevant to the field. It focuses on the optimization and enhancement of prompts, which is a core aspect of prompt engineering. The only reason the rating is not a 10 is because the study is narrower in scope, focusing on efficiency and specific techniques rather than a broader methodological investigation into prompt design or the theory behind prompt engineering."
-toward efficient language model pretraining and downstream adaptation via self-evolution: a case study on superglue,gpt-4-1106-preview,7,"The relevance of this study to prompt engineering is moderate to high as it discusses the 'prompt transfer technique' which is a form of prompt engineering. This technique involves transferring knowledge from one task to another, which is central to the idea of adapting language models to various downstream tasks using prompts. The study's focus on leveraging this technique to improve low-resource tasks indicates that it involves modifying or engineering prompts to enhance performance, which is pertinent to the study of prompt engineering. However, the report does not seem to specifically address 'hard prefix prompts,' which was the explicit focus mentioned in your query. Therefore, the study is relevant due to its inclusion of prompt-based techniques, but not as high as it would be if it were centered on hard prefix prompts specifically."
-degree: a data-efficient generation-based event extraction model,gpt-4-1106-preview,8,"The study appears highly relevant to prompt engineering as it involves the design of manual prompts to guide a data-efficient event extraction model, termed DEGREE. The model's dependency on these prompts for semantic guidance indicates that a significant portion of the research likely involves understanding and improving how prompts are constructed (prompt engineering) to better capture event arguments. Although the primary focus is event extraction, the reliance on manually designed prompts for model training and the discussion of prompt-encoded information suggest a substantial relevance to the field of prompt engineering."
-fedprompt: communication-efficient and privacy-preserving prompt tuning in federated learning,gpt-4-1106-preview,7,"The paper discusses prompt tuning within the context of federated learning, which directly relates to the broader field of prompt engineering. While it does not explicitly mention 'hard prefix prompts,' the study of prompt tuning techniques and their efficiency and privacy implications within federated learning frameworks adds to the understanding of how prompts can be optimized. Given that prompt engineering encompasses the exploration and application of prompts in various scenarios, the relevance is high. However, it is not rated a full 10 because the specific focus on communication efficiency and privacy in federated learning does not directly address the systematic review aspect of hard prefix prompts, which seems to be a more targeted area within the field of prompt engineering."
-prompt tuning for parameter-efficient medical image segmentation,gpt-4-1106-preview,8,"The abstract presents a study on the application of prompt tuning, a concept closely related to prompt engineering, in the context of medical image segmentation. Although the study focuses on a specific application (parameter-efficient adaptations for semantic segmentation in medical imaging), it explores the use of prompts (learnable prompt tokens) to adapt a neural network model to new tasks without full model fine-tuning. Since prompt engineering involves techniques for efficiently integrating prompts in order to steer model behavior, albeit typically in the context of language models, this work's investigation into prompts in the UNet architecture for medical imaging is relevant to the broader study of prompt engineering principles and methods. The rating is not a full 10 because the study is highly specialized and may not directly address 'hard prefix prompts' or the specificities of prompt engineering in natural language processing, which often is the primary focus of prompt engineering literature."
-rethinking visual prompt learning as masked visual token modeling,gpt-4-1106-preview,7,"The discussed paper is relevant to the study of prompt engineering, despite its focus on the vision domain rather than natural language processing (NLP). The paper introduces a method for visual prompt learning, which parallels the concept of prompt engineering in NLP by adapting pre-trained models to downstream tasks. The proposal of Visual Prompt learning as Masked visual Token Modeling (VPTM) to unify the form of pre-training and downstream tasks is conceptually similar to hard prompt methods in NLP that aim to bridge the gap between the two stages. Although the specific application to visual tasks might not directly correspond to textual 'hard prefix prompts,' the underlying principles of prompting and task reformulation involved in VPTM are relevant to the broader study of prompt engineering. The emphasis on consistency, robustness, and unified deployment also echoes concerns in prompt engineering research."
-parameter-efficient tuning helps language model alignment,gpt-4-1106-preview,7,"The given abstract presents a method for aligning language models with human preferences by using a technique called 'alignMEnt with parameter-Efficient Tuning (MEET)'. This involves optimizing control tokens using parameter-efficient tuning strategies such as prompting tuning and low-rank adaptation, which is highly relevant to prompt engineering. The reference to 'control tokens' and 'hand-crafted prompts' directly relates to the design and engineering of prompts for tuning model behavior. The focus on parameter-efficiency is also pertinent to prompt engineering because it relates to optimizing the input given to models without overhauling the entire model architecture. However, the abstract does not specifically address 'hard prefix prompts' which would be the focus of a comprehensive systematic review on that topic. For this reason, the relevance is not rated a full 10, as it is more broadly about language model alignment with control tokens rather than narrowly focused on hard prefix prompts in prompt engineering."
-multimodal prompting with missing modalities for visual recognition supplementary materials,gpt-4-1106-preview,8,"While the study is not specifically focused on 'hard prefix prompts', it does address the broader topic of prompt engineering in the context of multimodal learning and attention mechanisms. The research on the impact of prompt length and the layer at which the prompt is inserted is relevant to the understanding of how prompts can be optimized for improved performance in AI models. Therefore, the paper's relevance to prompt engineering is high, warranting a rating of 8. However, the exact match with the 'hard prefix prompts' focus may be lacking, hence not a full 10."
-prompting as multimodal fusing,gpt-4-1106-preview,7,"The abstract describes research on using visual prompts to improve the capability of a language model to perform multi-modal tasks, which is related to the field of prompt engineering. The concept of 'prompting' is central to the study. However, the focus on multimodal tasks and disentangling objectives for the vision encoder introduces specificity that is somewhat tangential to hard prefix prompts in text-based prompt engineering. While the principles of the study could potentially be applied or extended to text-based prompt engineering, the immediate relevance is somewhat indirect, hence the rating of 7."
-ddcot: duty-distinct chain-of-thought prompting for multimodal reasoning in language models,gpt-4-1106-preview,8,"While the abstract describes a study focused on multimodal reasoning and Chain of Thought (CoT) with language models, its relevance to prompt engineering lies in the novel DDCoT prompting approach which is presented. The notion of 'negative-space prompting' and tailoring prompts to encourage 'critical thinking' and proper distribution of tasks ('letting everyone do their jobs') within multimodal CoT reasoning are directly related to the design and engineering of effective prompts that enhance AI performance. Consequently, the abstract is highly relevant to the study of prompt engineering, particularly in the context of improving AI's multimodal reasoning capabilities. However, the rating is not a full 10 because it does not focus exclusively on 'hard prefix prompts' but rather on a broader set of techniques within multimodal CoT prompting, leaving some room for more specific relevance to the systematic review aspect of the provided prompt."
-prompting chatgpt in mner: enhanced multimodal named entity recognition with auxiliary refined knowledge,gpt-4-1106-preview,7,"The study presents a two-stage framework (PGIM) designed to improve Multimodal Named Entity Recognition (MNER) by using ChatGPT as an implicit knowledge base for generating auxiliary knowledge, which relates to prompt engineering as it involves creating and using prompts to guide ChatGPT in generating useful information for a specific task. However, the paper seems to focus more on improving MNER performance and leveraging implicit knowledge bases rather than on the underlying mechanisms of prompt engineering, such as prompt design or optimization techniques specifically. Therefore, the relevance is significant but not exclusively centered on prompt engineering."
-initial images: using image prompts to improve subject representation in multimodal ai generated art,gpt-4-1106-preview,7,"The paper addresses the utilization of image prompts to enhance subject representation in AI-generated art, which falls within the realm of prompt engineering as it involves guiding generative models to achieve desired outputs. Although the study focuses specifically on multimodal interactions (text and image prompts) rather than purely text-based 'hard prefix prompts,' the findings and design guidelines derived from the research could be informative for prompt engineering in a broader context. The aspects of conditioning models and evaluating their performance based on input prompts are directly relevant to the techniques and methodologies of prompt engineering, hence the relatively high relevance rating."
-multimodal multi-task stealth assessment for reflection-enriched game-based learning,gpt-4-1106-preview,4,"The study mentioned does not directly address 'hard prefix prompts' or 'prompt engineering' as it appears to be more focused on game-based learning environments and using a stealth assessment framework for educational purposes. The relevance comes from the use of in-game reflection prompts and the multifaceted assessment of student responses which tangentially touches upon the concept of prompts and reflection in learning. However, it does not engage with the specific study of engineering prompts in the context of AI systems or conversational models, which would be necessary for a higher relevance rating."
-visual chain of thought: bridging logical gaps with multimodal infillings,gpt-4-1106-preview,6,"The study introduces VCoT, which uses a form of prompt engineering by leveraging chain of thought prompting which is relevant to prompt engineering study. However, the focus is more on multimodal integration and recursive infillings to improve reasoning in sequential data, rather than on prompt engineering with hard prefixes specifically. The relevance is moderate because it does involve prompt engineering techniques, though it is not focused on the systematic review of hard prefix prompts."
-making a case for spatial prompting in human-robot communication,gpt-4-1106-preview,4,"This paper is somewhat relevant to prompt engineering in that it discusses communication strategies with robots, which could include developing prompts for human-robot interaction. However, it focuses on 'spatial prompting' and non-verbal communication cues, which is a different area than 'hard prefix prompts,' which are typically textual or verbal in nature and used in language model interactions. The study's relevance to prompt engineering is tangential and not directly aligned with the concept of hard prefix prompts in language models or more conventional prompting techniques."
-the smartweb corpora: multimodal access to the web in natural environments,gpt-4-1106-preview,4,"The description indicates that the chapter discusses a prompting scheme called SitPro, a recording technique, and properties of created corpora. While the mention of a prompting scheme suggests relevance to prompt engineering, there is no explicit mention of 'hard prefix prompts' or a systematic review approach. The relevance appears to be tangential rather than directly focused on prompt engineering as it pertains to pre-determined structured prompts. The rating reflects moderate relevance due to the connections to prompts and data acquisition which could be applicable to prompt engineering studies but lacks specificity regarding 'hard prefix prompts'."
-adaptive action prompting: a complementary aid to support task-oriented interaction in explorative user interfaces,gpt-4-1106-preview,7,"The abstract refers to 'Adaptive action prompting,' which is closely related to prompt engineering in that it involves the system generating suggestions or prompts based on various models. This concept aligns with prompt engineering, as it requires understanding how to design and adapt prompts for optimal user interaction. However, the study seems to focus more on user interface interaction rather than the specific linguistic or conversational design of prompts. Therefore, while relevant, it may not fully delve into the 'hard prefix prompts' aspect of the prompt engineering study."
-prompt tuning for generative multimodal pretrained models,gpt-4-1106-preview,8,"This abstract is quite relevant to prompt engineering as it discusses 'prompt tuning', which is a specific method within the broader area of prompt engineering. Prompt tuning is a new paradigm where prompts are specifically crafted or optimized to improve the performance of pretrained models on various tasks. The focus on generative multimodal pretrained models suggests that the study addresses complex scenarios where prompt engineering could be crucial for model tuning. Despite the high relevance, the rating is not a complete 10 because the study seems to be more focused on implementing prompt tuning as a lightweight alternative to full model finetuning, rather than a comprehensive systematic review of hard prefix prompts as the original prompt might suggest."
-opal: multimodal image generation for news illustration,gpt-4-1106-preview,7,"The paper's focus on a system named Opal that navigates the challenges of finding the right visual language for text prompts does relate to prompt engineering, particularly in multimodal AI contexts. Although the paper does not directly address 'hard prefix prompts,' it does deal with the structured creation of text prompts to guide AI in generating images, which is an essential part of prompt engineering. The relevance is high because prompt engineering is critical for effective human-AI co-creation, especially in text-to-image generation tasks. However, the paper centers more on the application of such a system for news illustrations rather than the theoretical or methodological aspects of prompt engineering study."
-draw your art dream: diverse digital art synthesis with multimodal guided diffusion,gpt-4-1106-preview,7,"The paper presented addresses the usage of multimodal prompts which involve feeding a model with inputs from different modalities such as text and image, which aligns with the concept of 'prompt engineering' that typically involves crafting inputs to guide a model’s output. Although not directly focused on 'hard prefix prompts', the concept of using complex, multimodal inputs for guiding a diffusion model in digital art synthesis demonstrates advanced prompt techniques and is indirectly related to the engineering of prompts to achieve desired outcomes in AI systems. Hence, there is a significant relevance to prompt engineering, but it is not a perfect match as the primary study is not about hard prefix prompts in the context of systematic reviews."
-promptmner: prompt-based entity-related visual clue extraction and integration for multimodal named entity recognition,gpt-4-1106-preview,8,"The presented work is highly relevant to prompt engineering as it discusses the utilization of entity-related prompts to improve multimodal named entity recognition. It specifically targets the extraction of visual clues with the help of prompts, which is a novel application of prompt engineering in the field of image processing and analysis. The 'prompt-based' method for extracting visual information addresses the central theme of prompt engineering. However, since the focus is also on modality-aware attention mechanisms and cross-modal fusion, the relevance is not solely on prompt engineering. Therefore, the rating is not a full 10."
-lvp-m3: language-aware visual prompt for multilingual multimodal machine translation,gpt-4-1106-preview,7,"The paper introduces a model LVP-M3 that utilizes visual prompts for the task of Multilingual Multimodal Machine Translation. While the study focuses primarily on translation and the integration of visual features for understanding context across multiple languages, the concept of 'visual prompts' does relate to the idea of 'prompt engineering' as it involves designing inputs to improve the machine's understanding and performance. Although these visual prompts are not 'hard prefix prompts' explicitly, the process of generating and utilizing prompts to enhance model performance overlaps with the broader theme of prompt engineering. Thus, the relevance is significant but not directly focused on the systematic study of hard prefix prompts, hence the rating of 7."
-few-shot multimodal sentiment analysis based on multimodal probabilistic fusion prompts,gpt-4-1106-preview,7,"The study addresses prompt engineering to some extent by introducing a novel method that includes the design of 'unified multimodal prompts' to decrease discrepancies between different modalities in the few-shot sentiment analysis. This involves engineering prompts that cater to more than just textual data, integrating multimodal data which is a unique and relevant approach to prompt engineering. Additionally, the concept of 'probabilistic fusion method to fuse output predictions from multiple diverse prompts' indicates an advanced level of prompt engineering where different prompts and their predictions are combined. However, the study focuses more specifically on multimodal sentiment analysis and few-shot learning, rather than solely on prompt engineering or 'hard prefix prompts' as stated in the initial topic. Therefore, it is not exclusively aligned with the concept of 'hard prefix prompts' in prompt engineering studies but still significantly contributes to the broader domain of prompt engineering."
-beyond bounding box: multimodal knowledge learning for object detection,gpt-4-1106-preview,5,"The paper deals with the use of language prompts for improving object detection in machine learning, indicating relevance to prompt engineering in that it involves designing prompts to facilitate learning. However, the study's primary focus is on multimodal knowledge learning in object detection, rather than on prompt engineering specifically. It discusses the creation and use of prompts as part of the method but does not center around designing or systematically reviewing hard prefix prompts, which would be more directly related to prompt engineering studies."
-meta learning to bridge vision and language models for multimodal few-shot learning,gpt-4-1106-preview,4,"The presented abstract discusses a multimodal meta-learning approach to bridge vision and language models, aiming to improve few-shot learning by automatically adapting to new tasks. The relevance to 'prompt engineering' is tangential since the abstract mentions induction of tasks without hand-engineering and could relate to auto-generating or tuning prompts in a broad sense. However, it deals more with meta-learning and the interplay between different modalities than the specific study of hard prefix prompts as described in the initial request. Therefore, it is only moderately related to prompt engineering as the focus of the paper is on model adaptation and few-shot learning rather than prompt design or engineering."
-towards multimodal computational humanities. using clip to analyze late-nineteenth century magic lantern slides,gpt-4-1106-preview,7,"Although the study does not solely focus on prompt engineering, it does discuss the impact of different textual prompts on the performance of the CLIP model and identifies the lack of effective prompt engineering techniques as an issue affecting the model's stability. Therefore, the paper is relevant to the field of prompt engineering to a noticeable extent, especially regarding the application and challenges of prompt engineering in multimodal learning within the computational humanities."
-π-tuning: transferring multimodal foundation models with optimal multi-task interpolation,gpt-4-1106-preview,7,"The abstract mentions compatibility with diverse types of parameter-efficient experts, including prompts, which implies that the study covers aspects of prompt engineering. However, the focus seems to be broader, targeting transfer learning methods in general rather than specifically on 'hard prefix prompts'. Thus, while it has relevance due to its inclusion of prompts within the scope of parameter-efficient transfer learning, it's not solely dedicated to prompt engineering, leading to a rating of 7."
-beyond text-to-image: multimodal prompts to explore generative ai,gpt-4-1106-preview,7,"The abstract and TLDR of 'Beyond Text-to-Image: Multimodal Prompts to Explore Generative AI' are relevant to prompt engineering because they discuss the development of workflows that facilitate the translation of abstract design goals into prompts for AI systems. This aligns with the principles of prompt engineering, which is concerned with the creation and optimization of prompts to effectively guide AI behavior. However, the study appears to focus on the broader context of multimodal interactions and integrating creator contributions rather than hard prefix prompts specifically. Hence, while it is relevant due to its focus on improving the AI prompting process, it does not directly address systematic reviews on hard prefix prompts, thus receiving a rating of 7."
-mass-producing failures of multimodal systems with language models,gpt-4-1106-preview,7,"The abstract describes a novel system, MultiMon, which involves in part the use of language models to identify and generate natural language descriptions of patterns of failures in multimodal systems. This bears relevance to the prompt engineering field since the process includes feeding certain inputs (prompts) to a language model to elicit descriptive outputs regarding the failures. However, the main focus appears to be on the identification of systematic failures in multimodal systems rather than the study of hard prefix prompts themselves. Thus, while related to prompt engineering in the context of multimodal system failure analysis, it is not entirely centered on a comprehensive study of prompts or their structures."
-open visual knowledge extraction via relation-oriented multimodality model prompting,gpt-4-1106-preview,7,"The abstract describes a novel approach to visual knowledge extraction that indirectly involves a form of prompt engineering, as it relies on prompting a multimodality model to generate knowledge. Although the primary focus is not on the engineering of text prompts for language models, the concept of 'model prompting' is closely related to prompt engineering, particularly in the context of multimodal models that process both visual and textual data. The mention of employing prompts for knowledge generation aligns with current interests in optimising prompts to improve model performance. However, the direct relevance to 'hard prefix prompts' may be limited, hence a full relevance rating is not given."
-a prompt-based multimodal tabular transformer encoder for medical intervention duration estimation,gpt-4-1106-preview,6,"The study introduces a prompt-based approach within a medical context, focusing on a multimodal deep learning framework for medical intervention duration estimation. While it does not directly address 'prompt engineering' in the broader sense, the use of prompts in conjunction with a pre-trained sentence encoder indicates an application of prompt engineering principles. Hence, the relevance is moderate, as it shows an example of how prompts can be interfaced with other machine learning components, but the study is specific to medical interventions and does not cover prompt engineering as a standalone subject."
-multimodal prompt learning in emotion recognition using context and audio information,gpt-4-1106-preview,7,"The study is relevant to prompt engineering due to its focus on improving language models' performance using prompt learning techniques. Although it primarily deals with multimodal sources (text and audio) rather than being strictly about hard prefix prompts, it addresses the aspect of how prompts are engineered to enhance a pre-trained model's ability to perform specific tasks, in this case, emotion recognition. The study proposes a method for prompt learning that considers the context and emotional information, which is a valuable insight into prompt engineering for specialized tasks. However, the relevance is not at the maximum because the study diverges from hard prefix prompts specifically to a broader application of prompts in multimodal learning."
-mpt: multimodal prompt tuning for event detection,gpt-4-1106-preview,5,"The presented abstract discusses a multimodal Prompt Tuning approach for event detection that incorporates both images and text, which pertains to prompt engineering in the context of modal feature aggregation and the use of soft prompts. However, the primary focus is on event detection using multimodal inputs rather than on hard prefix prompts specifically. The mention of 'prompt tuning' indicates relevance to prompt engineering, but the absence of discussion about hard prefix prompts specifically results in a moderate relevance rating."
-what matters in training a gpt4-style language model with multimodal inputs?,gpt-4-1106-preview,5,"The abstract discusses various factors that affect the training of a GPT4-style multimodal language model, among which the influence of diversified prompts on the instruction-following ability of the trained models is mentioned. This indicates some relevance to prompt engineering, as understanding how prompts affect model performance is a subset of prompt engineering. However, the focus of the study includes a broader range of topics such as network structures, training data, and benchmarks, which are not exclusively concerned with prompt engineering. Hence, the rating is at the midpoint to reflect this partial relevance."
-multimodal prompt transformer with hybrid contrastive learning for emotion recognition in conversation,gpt-4-1106-preview,6,"The abstract provided references the use of a 'Multimodal Prompt Transformer' which may imply some relevance to prompt engineering, particularly as it pertains to encoding textual features and facilitating multimodal fusion in the context of emotion recognition. However, the primary focus of the study is on emotion recognition in conversation rather than on hard prefix prompts or prompt engineering in a broader sense. Prompt engineering typically involves the strategic design of input prompts to produce desired outputs from AI models, which is a tangential aspect of the described research. Therefore, while there is some relevance, it is not the core subject of the study."
-multimodal parameter-efficient few-shot class incremental learning,gpt-4-1106-preview,7,"The abstract mentions the use of 'learnable prompts for both the language and vision encoders' in the proposed Continual Parameter-Efficient CLIP (CPE-CLIP) model, which directly relates to prompt engineering. While the main focus is on Few-Shot Class Incremental Learning (FSCIL) and the use of CLIP for transfer learning across sessions, the mention of learnable prompts indicates that prompt engineering is a component in the study's approach to improve performance in learning tasks. However, since prompt engineering is not the central theme but rather a part of the methodology, the relevance rating is a 7."
-multitask instruction-based prompting for fallacy recognition,gpt-4-1106-preview,8,"The abstract describes a study on how instruction-based prompting in a multitask setup can improve the recognition of fallacies by computational models. This is highly relevant to prompt engineering as it explores the construction and optimization of prompts to enhance model performance. The use of a multitask setup indicates a sophisticated approach to prompt engineering which is likely to be of interest to those studying prompt design. However, the focus on fallacy recognition means the research is specialized and may not cover all areas of interest within the broader field of prompt engineering."
-when do you need chain-of-thought prompting for chatgpt?,gpt-4-1106-preview,8,"The abstract discusses the performance and challenges of Chain-of-Thought prompting for ChatGPT, which is directly related to the field of prompt engineering. It explores the limitations and potential of CoT instructions in improving LLM output, providing insights into instruction-based finetuning. The analysis of instruction memorization and potential dataset leakage is crucial for understanding how to engineer prompts effectively for different tasks. Despite not focusing specifically on 'hard prefix prompts,' the study provides valuable information for prompt engineering in a broader sense, which is why it does not receive a perfect score."
-a comparison of prompt delays with trial-and-error instruction in conditional discrimination training,gpt-4-1106-preview,4,"The study focuses on 'prompt delays' within the context of conditional discrimination training, which is relevant to learning processes and instruction strategies but does not directly address 'prompt engineering' as related to computational models or hard prefix prompts. However, considering that 'prompt delays' could potentially be related to the timing and delivery aspects of prompts in computational terms, the study might offer some indirect insights useful for prompt engineering, especially in the nuanced aspects of timing and response effectiveness. Therefore, a moderate relevance rating is provided."
-coder reviewer reranking for code generation,gpt-4-1106-preview,8,"The abstract describes an advanced technique in prompt engineering where two models are used in tandem for code generation – a 'Coder' model to generate programs and a 'Reviewer' model to evaluate these programs. This process of generating and reranking outputs based on prompt-engineered models is clearly relevant to the study of prompt engineering. The methodology explores optimizing the interaction between these models to produce better results, which is a critical part of prompt engineering – refining inputs and evaluating outputs to improve performance. The reason why the rating is not a full 10 is because the abstract focuses on the application of prompt engineering to code generation, which may be a subset of the broader prompt engineering field. However, the principles and techniques exemplified are directly applicable to prompt engineering studies."
-an automated prompting system for smart environments,gpt-4-1106-preview,4,"While the document seems to deal with automation and smart systems, which could involve some form of prompt engineering, the focus on 'hard prefix prompts' is unclear without further content. A 'fully automating prompting system' suggests relevance to automated prompt generation, but the extent to which this aligns with 'hard prefix prompts' is not specified. The relevance rating could be higher if the paper's approach to prompting systems includes or overlaps with the structured method of prompt engineering implied by hard prefix prompts."
-generating instruction automatically for the reading strategy of self-questioning,gpt-4-1106-preview,7,"The relevance to prompt engineering is significant since the paper focuses on generating instructional content automatically, which aligns with the creation of prompts for educational purposes. Specifically, breaking down the instruction into describing, modeling, scaffolding, and prompting is similar to the process of designing prompts that are effective in prompting the strategy. The paper also touches upon automatic generation of prompts, which is a core task in prompt engineering. However, the primary objective of the paper is centered around self-questioning in reading comprehension rather than the broader scope of hard prefix prompts or prompt engineering in general, which justifies a rating of 7 instead of a perfect score."
-short-term versus long-term effects of cognitive and metacognitive prompts in writing-to-learn,gpt-4-1106-preview,7,"The study is moderately relevant to prompt engineering because it investigates the effects of cognitive and metacognitive prompts on learning and writing. This is related to understanding how prompts can influence cognitive processes and outcomes, which is a key part of prompt engineering. However, as the focus is on educational contexts and long-term effects rather than computational systems or machine learning, it is not directly focused on prompt engineering for language models or other AI systems, hence the rating isn't higher."
-dualprompt: complementary prompting for rehearsal-free continual learning,gpt-4-1106-preview,8,"The content of the abstract is highly relevant to prompt engineering study because it discusses a novel framework called DualPrompt, which involves learning a tiny set of parameters (prompts) that instruct a pre-trained model on handling new tasks sequentially without the need for rehearsing previous tasks. This approach to prompt engineering is significant as it addresses the challenge of catastrophic forgetting in continual learning models and does so without the need for storing old examples, hence respecting privacy and memory constraints. The abstract focuses on the application of prompt learning in the context of continual learning models, which is a subset of the broader prompt engineering field. The rating is not a full 10 because the study is specific to the continual learning application and may not cover all possible aspects or methodologies of prompt engineering, especially those outside the scope of continual learning."
-editeval: an instruction-based benchmark for text improvements,gpt-4-1106-preview,7,"The provided abstract discusses 'EditEval', which is an evaluation suite for text generation models, specifically focusing on their editing capabilities. While it does not directly address 'hard prefix prompts' or 'prompt engineering', its core concept of evaluating and optimizing text generation models is relevant to the field. The study examines InstructGPT and PEER models in the context of editing tasks and acknowledges the challenges in prompt optimization. This can inform prompt engineering studies by providing insights into how models respond to instructions and the issues with current metrics, therefore facilitating the creation of better prompts for model evaluations. However, the direct application to hard prefix prompts is tangential and not the central focus of the study, which affects the overall relevance rating."
-promptsource: an integrated development environment and repository for natural language prompts,gpt-4-1106-preview,9,"The paper describes 'PromptSource', a system designed specifically for creating, sharing, and using natural language prompts, which is central to the concept of prompt engineering. The discussion of a templating language, a user interface for prompt development, and community-driven guidelines directly concerns the practice of prompt engineering. Although the article does not specifically address 'hard prefix prompts' but rather prompts in general, its relevance to the broader field of prompt engineering is significant and should be highly informative for those studying various aspects of prompt design and usage in natural language processing (NLP). Therefore, it receives a high relevance rating of 9."
-adversarial soft prompt tuning for cross-domain sentiment analysis,gpt-4-1106-preview,7,"The study presents advancements in prompt tuning, specifically Adversarial Soft Prompt Tuning for cross-domain sentiment analysis, which is relevant to the field of prompt engineering, as it involves learning to use prompts effectively with language models. Although the study focuses on soft prompts rather than hard prefix prompts, the underlying principles of prompt design and its impact on model performance are highly pertinent to the broader topic of prompt engineering. The approach of using separate prompts for different domains connects to the customization and optimization of prompts for specific tasks. However, the relevance is not rated higher because the prompt mentioned here is 'soft', while the systematic review in question specifically targets 'hard prefix prompts'. Therefore, there is a slight mismatch, but the study still holds value for those exploring the varying applications and methodologies of prompt tuning in language models."
-prompt-based rule discovery and boosting for interactive weakly-supervised learning,gpt-4-1106-preview,8,"The paper discusses a method for iteratively discovering novel labeling rules via prompts in the context of weakly-supervised learning. While not directly focused on 'hard prefix prompts', it does revolve around the use of prompts for generating rules and improving models, which is a vital component of prompt engineering. The study is relevant because it deals with the automated generation and refinement of prompts, which is closely related to the analysis and application of prompt effectiveness and efficiency, key considerations in prompt engineering studies. The rating is not a full 10, as the paper's abstract does not specify a focus on 'hard prefix prompts' specifically, but rather on a broader application of rule discovery using prompts."
-hpt: hierarchy-aware prompt tuning for hierarchical text classification,gpt-4-1106-preview,8,"The given title and abstract provide information about a technique called Hierarchy-aware Prompt Tuning (HPT) for hierarchical text classification. Although this method is focused on a specific task - hierarchical text classification - rather than prompt engineering in general, the concept of 'prompt tuning' is highly relevant to the broader field of prompt engineering. HPT involves constructing dynamic virtual templates and label words as soft prompts, which are essentially a form of prompt engineering tailored to incorporate hierarchical information into the learning process of a PLM. Therefore, the study is quite pertinent to prompt engineering, particularly within the domain of improving model performance for complex classification tasks involving label hierarchies. It doesn't address a 'hard prefix prompt' specifically, which would be an exact match to the search query, but still has significant relevance due to its focus on prompt tuning methodologies."
-connprompt: connective-cloze prompt learning for implicit discourse relation recognition,gpt-4-1106-preview,8,"The paper presents an approach that leverages the prompt engineering paradigm for Implicit Discourse Relation Recognition (IDRR), specifically developing a novel Connective-cloze Prompt (ConnPrompt) which includes Prefix-cloze Prompt (PCP) to improve task performance. This is highly relevant to prompt engineering as it demonstrates an innovative application of prompt-based methods to a natural language processing (NLP) task. The rating is not a full 10 because the study focuses on a specific application of prompt engineering within the IDRR context, rather than on prompt engineering in a more general sense, which may limit its broader relevancy to the field at large."
-ptau: prompt tuning for attributing unanswerable questions,gpt-4-1106-preview,8,"The presented study 'ptau: prompt tuning for attributing unanswerable questions' is highly relevant to prompt engineering as it directly deals with the development of a system that leverages the concept of prompt tuning. The introduction of a cause-oriented template module for constructing continuous templates in a high-dimensional space and a semantics-aware label module through contrastive learning are indicative of advanced techniques in prompt engineering. Although the study's primary focus is question answering systems and their ability to identify unanswerable questions, the methods used for prompt tuning are applicable and insightful for the broader field of prompt engineering."
-prompt-learning for short text classification,gpt-4-1106-preview,9,"The provided abstract describes a study on prompt-learning, specifically for the task of short text classification which directly relates to the field of prompt engineering. The approach of using knowledgeable expansion and the incorporation of knowledge graphs into the prompt-learning process are advanced techniques in the area, suggesting that the paper provides detailed insights into the engineering of prompts for language models. The outstanding improvement in accuracy mentioned in the abstract and TLDR indicates a significant contribution to the field. The reason it is not a full 10 is because it doesn't specifically mention 'hard prefix prompts', but it does deal with prompt-learning methods in general, which makes it highly relevant to prompt engineering studies."
-continuous prompt tuning based textual entailment model for e-commerce entity typing,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it discusses a novel application of continuous prompt tuning, which is a subset of prompt engineering, in the context of e-commerce entity typing. The approach of reformulating entity typing into a textual entailment problem with the use of prompts indicates a significant contribution towards the field of prompt engineering. The automatic generation of hypotheses using prompt tuning is particularly pertinent, although the study's focus is more narrowly on textual entailment in the e-commerce domain rather than hard prefix prompts in general. Nonetheless, since prompt engineering techniques are pivotal in the study, it merits a relatively high score."
-taxoprompt: a prompt-based generation method with taxonomic context for self-supervised taxonomy expansion,gpt-4-1106-preview,8,"The paper presents 'TaxoPrompt,' a framework for taxonomy expansion leveraging prompt tuning, which is directly related to prompt engineering. Although the focus is more specifically on incorporating taxonomic context rather than hard prefix prompts in a broad sense, the methodological approach to enhancing prompt templates and its use in a hierarchical classification context mean that the paper offers relevant insights into the application and development of prompt-engineering techniques."
-bi-directional iterative prompt-tuning for event argument extraction,gpt-4-1106-preview,9,"The given abstract is highly relevant to prompt engineering study as it directly pertains to the development of a new prompt-tuning method for a specific NLP task, which is event argument extraction (EAE). The bi-directional iterative prompt-tuning approach uses cloze-style tasks and entity information, both key elements in the prompt engineering process. Moreover, the focus on improving interaction with pre-trained language models (PLMs) by considering the context of entities and the roles of arguments during prompt construction are advancements directly applicable to the field of prompt engineering. The only reason it did not receive a 10 is that it is specialized towards EAE rather than prompt engineering in general."
-knowledge base construction from pre-trained language models by prompt learning,gpt-4-1106-preview,7,"The abstract describes a study that falls within the domain of prompt engineering as it involves designing prompts to extract factual knowledge from pre-trained language models. The relevance to prompt engineering is clear as the authors design prompt templates and explore strategies for generating responses using these models. However, the mention of 'hard prefix prompts' is not explicitly referenced, suggesting this work may not be fully centered on that specific aspect of prompt engineering. Therefore, while the study is related to prompt engineering, its relevance to the specific concept of 'hard prefix prompts' cannot be determined from the abstract alone."
-schema-aware reference as prompt improves data-efficient knowledge graph construction,gpt-4-1106-preview,9,"The abstract discusses a new approach to improve data-efficient knowledge graph construction through the use of 'schema-aware Reference As Prompt (RAP)' which directly concerns the engineering of prompts to bridge the gap between natural language and structured knowledge. This is highly relevant to prompt engineering study as it proposes a method that advances the way prompts can be utilized in a practical application, namely knowledge graph construction. The only reason it is not a perfect 10 is that it does not cover the broader scope of prompt engineering but rather focuses on a specific application within the field."
-prompt tuning for multi-label text classification: how to link exercises to knowledge concepts?,gpt-4-1106-preview,9,"The abstract describes the development and application of a prompt tuning method specifically for multi-label text classification, which is highly relevant to the field of prompt engineering. Prompt tuning is a technique within natural language processing that is used to adapt language models to specific tasks without the need for extensive training data. Since the study explores the use of prompt tuning to connect exercises to knowledge concepts, it contributes directly to advancing the methodologies within the area of prompt engineering. The high relevance score reflects the direct applicability of the findings to the study of prompt engineering, albeit the study doesn't focus on 'hard prefix prompts' specifically but on prompt tuning for a related task."
-a prompt based approach for euphemism detection,gpt-4-1106-preview,8,"The abstract describes a study that involves developing prompts and verbalizers for euphemism detection, which is directly connected to prompt engineering. Prompt tuning is a subset of prompt engineering, and the use of templates indicates that the study engages in engineering prompts to elicit specific responses from a language model. However, the study is focused more on the specific application of euphemism detection rather than the broader topic of 'hard prefix prompts', so it may not cover all aspects of prompt engineering study, thus not receiving a perfect score."
-scene-aware prompt for multi-modal dialogue understanding and generation,gpt-4-1106-preview,7,"The abstract discusses the use of a 'scene-aware prompt' in the context of multi-modal dialogue understanding and generation, which falls under the broader domain of prompt engineering as it pertains to enhancing AI's interaction with multi-modal data. Although it does not specifically address 'hard prefix prompts'—a more nuanced aspect of prompt design often associated with transformer-based language models—it does relate to the application and structuring of prompts for improved AI performance in a given task. Therefore, the relevance is moderate because it demonstrates an application of prompt engineering in a specific NLP contest, however, it is not directly focused on the study of prompt engineering as a standalone subject."
-label prompt for multi-label text classification,gpt-4-1106-preview,8,"The abstract describes a model for multi-label text classification that uses a form of prompt learning for pre-trained language models. The relevance to prompt engineering is high because it involves designing templates (prompts) that integrate labels into the input of a pre-trained language model and optimizes it using Masked Language Models (MLM), which is a technique related to prompt engineering. The mention of designing a set of templates directly relates to the construction of prompts, which is a core aspect of prompt engineering. The rating isn't a full 10 because the information provided does not indicate if the study includes a 'comprehensive systematic review' or a focus on 'hard prefix prompts' specifically, as mentioned in the study topic."
-improving sentence classification in abstracts of randomized controlled trial using prompt learning,gpt-4-1106-preview,8,"The study focuses on the application of Prompt Learning (PL) for sentence classification within the context of Randomized Controlled Trial (RCT) abstracts, which is highly relevant to the field of prompt engineering as it entails creating and utilizing prompt templates to guide models in performing specific tasks effectively. Although 'hard prefix prompts' are not specifically mentioned, the deployment of manual templates in PL is closely related to designing effective prompts for language models. The relevance of the study to prompt engineering is not at the maximal score because it does not directly address 'hard prefix prompts' but rather addresses prompt learning in a broad sense."
-mtpl-g2t: graph-to-text generation task based on mixed template prompt learning,gpt-4-1106-preview,8,"The abstract discusses an approach to text generation that involves prompt learning, which is a method to guide pre-trained models to perform specific tasks without extensive fine-tuning. It also compares the effectiveness of different prompt templates, including mixed prompt templates. This is relevant to the study of 'hard prefix prompts,' a type of prompt engineering. However, the abstract does not specifically mention 'hard prefix prompts' but discusses prompt learning in a broader context. Therefore, it is highly relevant but not entirely focused on 'hard prefix prompts,' which results in a rating of 8."
-masked prompt learning for formal analogies beyond words,gpt-4-1106-preview,9,"The paper's focus on the development of a generative model for analogies using prompt-based fine-tuning within the context of a pre-trained language model (PLM) is highly relevant to the study of prompt engineering. The exploration of masked prompt learning and the systematic approach to handling analogies by reformulating them using prompts deeply contribute to the field of prompt engineering. It addresses how different prompting techniques can enhance language models' ability to generalize beyond simple word-level tasks. The relevance rating is not a full 10 only because the study seems to be specifically tailored to the analogy task, whereas prompt engineering broadly covers a wider range of applications."
-promptrgd: prompt learning with relation-aware gradient denoising for low-resource relation extraction,gpt-4-1106-preview,8,"The abstract discusses a framework for semi-supervised prompt learning for relation extraction. Since prompt engineering is about designing and implementing prompts to effectively interact with a model or a system, the paper's focus on 'prompt template construction' and 'relation-aware gradient denoising' directly relates to the design and optimization of such prompts, especially in low-resource settings. The relevance rating is not a perfect 10 because although it deals with prompt engineering, the paper centers more on a specific aspect of relation extraction rather than a comprehensive study of hard prefix prompts in a broader context."
-prompt learning for multi-modal covid-19 diagnosis,gpt-4-1106-preview,7,"The paper presents a novel approach that utilizes prompt-based methods for COVID-19 diagnosis, which is relevant to the study of prompt engineering. Prompt learning, a key aspect of prompt engineering, is central to the paper's methodology where a cloze prompt template and label word set are constructed to redefine the diagnosis task. However, the specificity to the 'hard prefix prompts' is not mentioned, which may or may not be within the scope of the presented methods. The relevance is rated moderately high due to the application of prompt learning concepts, but not the maximum score given the potential difference in prompt types being studied."
-uper: boosting multi-document summarization with an unsupervised prompt-based extractor,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering, as the core of this research involves creating 'prompting templates' to harness the knowledge within a Pre-trained Language Model (PLM) for determining the semantic relevance of documents in a multi-document summarization task. This innovative approach leverages prompt engineering to improve document salience assessment and abstract generation. The rating is not a perfect 10 only because the application is specific to multi-document summarization and the details on the 'hard prefix prompts' specifically are not provided, which may not cover all aspects of prompt engineering studied in a comprehensive systematic review on the topic."
-graphprompt: biomedical entity normalization using graph-based prompt templates,gpt-4-1106-preview,8,"The paper introduces 'GraphPrompt', which is a prompt-based learning approach that operates within the domain of prompt engineering. It specifically creates prompt templates according to graph structures, which is directly related to engineering prompts to improve biomedical entity normalization. While the study is not about 'hard prefix prompts' in a general sense, the design and utilization of prompts is core to the paper, hence the high relevance score. The focus on a specific application (biomedical entity normalization) and the lack of a direct mention of 'hard prefix prompt' impacts the relevance rating mildly, preventing a full score."
-a cueing strategy for prompt tuning in relation extraction,gpt-4-1106-preview,7,"The abstract describes a modified approach for utilizing prompt tuning in the context of relation extraction by incorporating task-specific cues. This relates to the concept of prompt engineering because it involves the design and use of prompts to guide pre-trained language models to understand and perform specific tasks more effectively. However, the relevance is not a perfect 10 because the abstract specifically addresses relation extraction and introduces a cueing strategy, rather than discussing 'hard prefix prompts' or providing a systematic review on prompts in general. 'Prompt engineering' covers a broader range of applications and methodologies, including but not limited to the cueing strategy mentioned."
-"promptaid: prompt exploration, perturbation, testing and iteration using visual analytics for large language models",gpt-4-1106-preview,9,"The provided title and abstract describe a visual analytics system, PromptAid, aimed at assisting users in the creation, refinement, and testing of prompts for Large Language Models. The systems focus on interactive prompt exploration, perturbation, and iteration, which are central to the process of prompt engineering. The relevance to prompt engineering is high, as the paper's aim is to directly address challenges involved in crafting and refining prompts. Despite not specifically mentioning 'hard prefix prompts', the broad nature of the study on modifying prompts to improve task performance and its attention to the usability by non-experts make it highly relevant. Nevertheless, the rating is not a full 10 as the information provided does not indicate if hard prefix prompts were specifically considered or the primary focus of the study."
-discourse-aware prompt for argument impact classification,gpt-4-1106-preview,8,"The abstract indicates that the paper is about developing a learnable continuous prompt that integrates discourse markers to improve the performance of pre-trained language models (PLMs) on the task of argument impact classification. Prompt engineering is vital for adapting PLMs to specific tasks, and the paper's focus on leveraging discourse information through prompts is relevant to the study of prompt engineering. The improvement in performance metrics (e.g., a 2.5% increase in the F1 score) suggests effective prompt engineering practices. However, the study does not focus on 'hard prefix prompts' specifically; it seems to emphasize the discourse-aware nature of prompts, which might make it slightly less relevant to a systematic review particularly centered on 'hard prefix prompts.'"
-few-shot table-to-text generation with prompt-based adapter,gpt-4-1106-preview,9,"The paper presents a novel method for enhancing table-to-text generation in few-shot learning conditions by using a Prompt-based Adapter (PA) to incorporate domain-specific knowledge and bridge the structure gap between tables and text. This is highly relevant to the field of prompt engineering as it involves designing and using prompt templates to augment a language model's capabilities, which is a core concept within prompt engineering. The adaptation of prompts to improve the efficiency of models in specific tasks underlines the important role that prompts play in tailoring pre-trained language models to specialized applications. Therefore, the paper is of high relevance to studies on prompt engineering, particularly in the context of knowledge augmentation and few-shot learning scenarios."
-graphprompt: graph-based prompt templates for biomedical synonym prediction,gpt-4-1106-preview,9,"The abstract describes a novel use of prompt-based learning specific to the task of biomedical synonym prediction. The study's focus on creating prompt templates derived from graph features directly aligns with prompt engineering by designing, tailoring, and applying prompts to specialized tasks. This approach is beneficial for expanding the understanding and applications of prompt engineering within biomedical datasets and is very relevant to studies on prompt engineering methods. The only reason it does not receive a full score is that it may not cover the broader aspects of prompt engineering across different domains but is highly relevant within its specified context."
-prompt middleware: mapping prompts for large language models to ui affordances,gpt-4-1106-preview,9,"The described study is highly relevant to prompt engineering as it focuses on a framework (Prompt Middleware) to systematically generate prompts for large language models based on user interface affordances. The research specifically addresses static prompts, template-based prompts, and free-form prompts, all of which are direct aspects of prompt engineering. The application in a practical UI setting (FeedbackBuffet) and the discussion on development integration further emphasize its significance in the field. The reason for not giving a full score of 10 is because the paper might not cover the 'hard prefix prompts' as explicitly as the term implies, but rather discusses a broader scope of integrating prompts into UIs."
-prompt learning for developing software exploits,gpt-4-1106-preview,7,"The abstract describes the use of a prompt learning approach, PT4Exploits, with pre-trained language models for generating software exploits and appears to employ prompt engineering by adding trainable prompt tokens. This is relevant to prompt engineering as it is an application of prompts in adjusting language model behavior. However, it is more focused on a specific application related to software vulnerability exploitation, rather than concentrating purely on the methodology of hard prefix prompts for a broad range of applications. Therefore, the relevance is notable but not entirely comprehensive regarding general prompt engineering studies."
-clickprompt: ctr models are strong prompt generators for adapting language models to ctr prediction,gpt-4-1106-preview,8,"The paper introduces a novel method for integrating CTR prediction models with language models through the use of prompt engineering, in this case, the generation of 'soft prompts' based on a CTR model. This is highly relevant to the field of prompt engineering as it directly involves the creation and utilization of prompts to enhance the performance of language models in a specific task. The score is not a perfect 10 because the focus is specifically on CTR prediction, which is a narrower application within the broader scope of prompt engineering studies."
-b . alternate design choices prompt initialization : table 8,gpt-4-1106-preview,8,"The given abstract discusses prompt initialization strategies and their impact on the performance of a model called MaPLe, which is directly relevant to the study of prompt engineering. It examines different initialization methods such as using a specific template or random initialization, and their effectiveness in different layers of the model. The depth of detail regarding the effect of learnable prompts and hierarchical learning within the layers indicates a high level of relevance, although the prompt engineering study question may be broader and involve other aspects not covered in the abstract. However, since the abstract provides empirical findings related to prompt design choices and their impact on a model's performance, it is substantially relevant to the field of prompt engineering."
-this prompt is measuring : evaluating bias evaluation in language models,gpt-4-1106-preview,7,"The abstract provided discusses evaluating bias in language models by using prompts and templates, which is relevant to prompt engineering as it involves the design and analysis of prompts to diagnose social biases in NLP systems. The study contributes to the broader field of prompt engineering by highlighting the importance of carefully crafting prompts to achieve specific measurement goals in bias evaluation. The relevance is not maximum because the study is specifically focusing on the bias aspect rather than a comprehensive review of various uses and types of hard prefix prompts, but it is still significantly related to the overall endeavor of prompt engineering."
-prompt tuning with contradictory intentions for sarcasm recognition,gpt-4-1106-preview,9,"The abstract discusses an advanced application of prompt tuning specifically designed for sarcasm recognition in NLP. It directly tackles the challenges of engineering prompts for a specialized task, which is highly relevant to studies on prompt engineering. The work's focus on incorporating domain-specific knowledge (contradictory intentions) into the prompts makes it particularly pertinent to the nuances involved in prompt engineering for complex language tasks. It is rated 9 instead of 10 because the abstract does not mention 'hard prefix prompts', the specific type of prompt the original query seemed to be interested in, but it still stays within the broader field of prompt engineering."
-grammar correction for multiple errors in chinese based on prompt templates,gpt-4-1106-preview,9,"The given abstract describes a novel grammar error correction method that leverages prompt templates, making it highly relevant to prompt engineering studies. A key aspect of prompt engineering is designing effective prompts that interact optimally with language models, as seen with the use of BERT here. The proposed dynamic updating of templates is a specific application of prompt engineering to improve NLP tasks, showcasing how tweaks in prompt strategy can significantly enhance model performance. This research does not study hard prefix prompts but still falls under the broader domain of prompt engineering, hence the rating of 9 rather than a perfect 10."
-ppm: prompt-free prompt-tuning for multi-task learning,gpt-4-1106-preview,8,"The abstract describes a novel approach in prompt-tuning for multi-task learning by using task-specific adapters in place of hand-crafted prompts, which is highly relevant to prompt engineering. It focuses on optimizing the training process and enhancing the model's performance on various downstream tasks without relying on manually designed prompts. While the abstract does not specifically mention 'hard prefix prompts,' it contributes to the broader field of prompt engineering by exploring alternative techniques to improve language models' efficiency in multi-task learning. This is valuable for prompt engineering studies, but not a direct examination of 'hard prefix prompts,' hence the rating is not the maximum."
-teprompt: task enlightenment prompt learning for implicit discourse relation recognition,gpt-4-1106-preview,8,"The presented abstract discusses the development and use of a model called TEPrompt for the task of Implicit Discourse Relation Recognition (IDRR), which explicitly involves the concept of prompt learning. This fits within the realm of prompt engineering as it focuses on the design of prompts for specific tasks (DRR, SSC, ACP) which improve the performance of the main task (IDRR). The systematic review of 'hard prefix prompts' could potentially cover such applications of prompt learning in natural language processing tasks. However, the abstract does not directly discuss 'hard prefix prompts' specifically but rather a variant of prompt learning which makes it somewhat less directly relevant for a study exclusively focused on that area. Therefore, the rating is high but not maximum."
-cover: a heuristic greedy adversarial attack on prompt-based learning in language models,gpt-4-1106-preview,8,"The abstract is highly relevant to prompt engineering as it discusses the vulnerabilities in prompt-based learning, a key component of prompt engineering. It focuses on how adversarial attacks can affect manual templates used within pre-trained language models, which is crucial for understanding the robustness and security of prompts. However, the study's primary concern is adversarial attacks rather than the design or optimization of prompts, hence the rating is not a perfect 10."
-self-adaptive prompt-tuning for event extraction in ancient chinese literature,gpt-4-1106-preview,8,"The described study demonstrates a direct application of prompt engineering by developing a self-adaptive prompt-tuning mechanism to enhance the performance of a generative event extraction framework. The focus on crafting specialized prompts that account for the unique complexities of ancient Chinese literature and war events shows a sophisticated use of prompt engineering to improve the interpretation and generation capabilities of a pre-trained language model. While this isn't a systematic review of hard prefix prompts specifically, it's a practical application of tuned prompts within a complex domain. Hence, the rating reflects high relevance to prompt engineering but not a perfect match since the study is not a comprehensive review."
-sptnet: span-based prompt tuning for video grounding,gpt-4-1106-preview,7,"The study introduces a methodology (SPTNet) that uses prompt tuning, a technique within the field of prompt engineering, to enhance the performance of a PLM in a video grounding task. This is relevant to prompt engineering as it involves the strategic modification of a prompt (via templates and mask tokens) to leverage a pre-trained model's knowledge more effectively. However, the focus on 'hard prefix prompts' is not explicitly mentioned, so while the paper is related to prompt engineering, it might not directly address the comprehensive systematic review on hard prefix prompts specifically."
-prompt cache: modular attention reuse for low-latency inference,gpt-4-1106-preview,5,"The abstract discusses Prompt Cache which is related to the optimization of language model inference using attention state reuse in prompts; however, it does not directly address a 'hard prefix prompt' which might suggest a fixed prompt structure or content in the systematic review. The approach outlined in the abstract is relevant to prompt engineering, as it deals with efficient prompt handling in LLMs, but it doesn't focus on prompt content or structure, thereby making it moderately relevant."
-low-resource multi-granularity academic function recognition based on multiple prompt knowledge,gpt-4-1106-preview,9,"The abstract demonstrates a direct application of prompt engineering by introducing Mix Prompt Tuning (MPT), which uses both manual and automatically learned prompt templates to improve the effectiveness of pre-trained language models in classifying academic functions with limited annotated data. This is highly relevant to the study of prompt engineering as it explores a practical use-case and contributes to the body of knowledge on how prompt strategies can be utilized to enhance model performance in low-resource settings."
-promptcl: improving event representation via prompt template and contrastive learning,gpt-4-1106-preview,7,"The title 'promptcl: improving event representation via prompt template and contrastive learning' suggests that the study involves prompt engineering by focusing on the improvement of event representation using prompt templates. This implies that the study likely explores the design or optimization of prompts, which are critical in influencing the performance of language models. The use of contrastive learning could indicate an innovative approach to refining these prompts, potentially making the study relevant to the field of prompt engineering. However, without the abstract or a TLDR, it's difficult to ascertain the full scope and direct relevance to hard prefix prompts specifically, hence the rating does not reach the maximum score."
-ground-truth labels matter: a deeper look into input-label demonstrations,gpt-4-1106-preview,7,"The study focuses on the impact of accurate ground-truth labels within the context of in-context learning (ICL), which is a significant component of prompt engineering for AI models. Accurate inputs and labels are critical for training models effectively, and the introduction of metrics like Label-Correctness Sensitivity and Ground-truth Label Effect Ratio can shed light on prompt design strategies. However, since the study seems to focus more on the labels rather than the prompts (the 'hard prefix prompts' mentioned in the initial query), it is not fully centered on prompt engineering. Thus, it receives a medium-high relevance rating, indicating that it is quite relevant but not entirely focused on the specified aspect of prompt engineering."
-not all languages are created equal in llms: improving multilingual capability by cross-lingual-thought prompting,gpt-4-1106-preview,9,"The study introduces a method of prompt engineering named cross-lingual-thought prompting (XLT) which directly pertains to improving the efficacy of prompt-based tasks in Large Language Models (LLMs) across multiple languages. Given that the study focuses on a specialized prompting technique to enhance language model capabilities, it is highly relevant to the field of prompt engineering. The reason for not giving a full score is that the abstract does not describe 'hard prefix prompts' specifically, but rather a prompt engineering strategy for multilingual models."
-unihd at tsar-2022 shared task: is compute all we need for lexical simplification?,gpt-4-1106-preview,8,"The title and abstract of the paper are highly relevant to prompt engineering as they detail the use of prompted GPT-3 responses for lexical simplification, which is an application of prompt engineering. The study investigates the efficacy of using prompts to guide a state-of-the-art language model in performing a specific task, thereby contributing to the field of prompt engineering by exploring the potential and limitations of different prompting techniques. The fact that the research describes differing levels of context within the prompts and examines their impact in a competitive setting (TSAR-2022 shared task) is particularly pertinent to the study of how prompts can be optimized for performance. The rating isn't a full 10 because the study focuses on lexical simplification rather than a broad examination of all possible applications of prompt engineering."
-using natural sentence prompts for understanding biases in language models,gpt-4-1106-preview,8,"The study is highly relevant to prompt engineering as it explicitly addresses the design and use of prompts to evaluate biases in language models. It discusses the impact of different types of prompts (template-based vs natural sentence prompts) on bias assessments in language models, which is a crucial aspect of prompt engineering. The paper's focus on real-world natural sentences for generating prompts also aligns with the current direction in prompt engineering of using more contextually rich and realistic data. Although it doesn't specifically mention 'hard prefix prompts,' the general theme of prompt design and its implications on model behavior makes it relevant to the field of prompt engineering studies. The rating is not a full 10 as the abstract specifies a focus on gender-occupation biases, which is slightly more specific than general prompt engineering."
-domain knowledge matters: improving prompts with fix templates for repairing python type errors,gpt-4-1106-preview,8,"The given abstract directly relates to prompt engineering as it discusses 'TypeFix,' which is a novel approach for improving prompts with domain knowledge fix templates specifically for Python type error repair tasks. This study is highly relevant to prompt engineering because it explores how to enhance prompts efficacy through automatic methods. It delves into using domain-specific knowledge to refine and adapt prompts to increase their effectiveness in a programming context, thus it scores an 8 instead of 10 because it is very specific to the domain of type error repair rather than general prompt engineering."
-citeprompt: using prompts to identify citation intent in scientific papers,gpt-4-1106-preview,9,"The study is highly relevant to prompt engineering as it involves the development of a tool, Citeprompt, that utilizes prompt learning for citation intent classification. Prompt learning, as a part of prompt engineering, concerns the design of inputs that effectively leverage pretrained language models to perform specific tasks. The research focuses on the choice of prompt templates and verbalizers, which are essential components of prompt engineering. The improvements reported over baseline models and the exploration into few-shot and zero-shot settings underscore its significant contribution to the field of prompt engineering."
-extracting structured seed-mediated gold nanorod growth procedures from literature with gpt-3,gpt-4-1106-preview,7,"The relevance to prompt engineering study is moderate to high. This abstract describes a practical application of prompt engineering, where the GPT-3 language model is used to interpret and structure unstructured scientific text data into a useful format (JSON documents). While the study is not solely focused on the theory of hard prefix prompts, it does involve the fine-tuning of prompts with the GPT-3 model to achieve specific outcomes. Therefore, the study contributes to the broader field of prompt engineering by showcasing how prompts can be designed and leveraged to extract complex information from literature, which is a subset of the prompt engineering domain."
-a dataset for cross-domain reasoning via template filling,gpt-4-1106-preview,8,"The relevance to prompt engineering is high, as the abstract discusses the development of a dataset and a method (prompt-template-filling approach) for enabling sequence to sequence models to perform cross-domain reasoning. Prompt engineering involves creating prompts that guide models towards desired outputs; the prompt-template-filling approach is likely related to the construction of such prompts to facilitate reasoning across different domains. Even though it may not directly address 'hard prefix prompts', it does pertain to the broader field of prompt engineering and its application in NLP tasks. The additional focus on cross-domain reasoning is also relevant, as it indicates a level of complexity in the prompt design suited for advanced reasoning. However, without more explicit mention of 'hard prefix prompts', it cannot receive a full score."
-prompting for automatic log template extraction,gpt-4-1106-preview,8,"The content is highly relevant to prompt engineering study due to the core focus on leveraging the in-context inference capabilities of large language models for log parsing. The precise framework, LogDiv, that is introduced, is a direct application of prompt engineering where log examples are used as prompts to extract information. This aligns with the concept of 'hard prefix prompts' as it uses a structured approach to guide the language model's output towards the generation of log templates. The rating is not a full 10 because the abstract mostly concerns log parsing rather than the broader scope of prompt engineering, but the techniques and findings are still very much applicable to the field."
-dspy: compiling declarative language model calls into self-improving pipelines,gpt-4-1106-preview,9,"The abstract describes a programming model (DSPy) that deals with the creation and optimization of language model pipelines using declarative modules, which is closely related to prompt engineering. The abstraction of LM pipelines as text transformation graphs directly involves the crafting and application of prompts to achieve specific computational tasks. The optimization of pipelines to maximize performance metrics is also a key aspect of prompt engineering, as it relates to refining prompts for better outcomes. The introduction of a systematic approach with modules that can learn and improve over time suggests a significant relevance to the study and advancement of prompt engineering. Therefore, I have rated its relevance as high but not the maximum because the abstract does not discuss 'hard prefix prompts' specifically, which was the focus of the original prompt."
-role knowledge prompting for document-level event argument extraction,gpt-4-1106-preview,7,The paper presents a new model for Document-level Event Argument Extraction (DEAE) which is relevant to prompt engineering as it discusses enhancing the interaction between templates (prompts) and roles for pretrained language models (PLMs). The use of a role knowledge guidance mechanism to aid PLMs in understanding semantics and generating arguments can be considered a contribution to the field of prompt engineering. The relevance is not at the highest level because the focus is on a specific application of prompt engineering within document-level event argument extraction rather than on prompt engineering more generally or on 'hard prefix prompts' as an overarching concept.
-cocomo: computational consciousness modeling for generative and ethical ai,gpt-4-1106-preview,4,"The mention of 'prompt template formulation' implies some relevance to the area of prompt engineering, as this involves crafting inputs that guide the behavior of AI models. However, the CoCoMo model appears to focus more broadly on ethical and emotional intelligence in AI, rather than specifically on the study of 'hard prefix prompts' in prompt engineering. The relevance is present but not the primary focus of the study."
-cot-bert: enhancing unsupervised sentence representation through chain-of-thought,gpt-4-1106-preview,8,"The abstract details the use of prompt engineering as a part of a two-stage approach for sentence representation learning with CoT-BERT, which suggests a direct relationship to the field of study. While prompt engineering is not the sole focus, it is integral to the proposed method's success, indicating high relevance. However, the abstract does not focus solely on hard prefix prompts, which would be necessary for a rating of 10."
-advanced prompting as a catalyst: empowering large language models in the management of gastrointestinal cancers,gpt-4-1106-preview,9,"The abstract described relates directly to prompt engineering, as it discusses how different prompting strategies can affect the performance of Large Language Models (LLMs) in a specified domain, which is gastrointestinal oncology. The investigation of varying types of prompts, the development of an evaluation system, and the focus on optimizing LLMs' performance in medical scenarios demonstrate a high level of relevance to the field of prompt engineering. The reason for not rating it a perfect 10 is that the study's focus is on one specific application area within healthcare rather than a broad exploration of prompt engineering in multiple contexts."
-incorporating instructional prompts into a unified generative framework for joint multiple intent detection and slot filling,gpt-4-1106-preview,8,"The abstract describes a method for addressing joint multiple Intent Detection (ID) and Slot Filling (SF) using a Unified Generative framework (UGEN) that relies on prompt-based instructions. Since it involves designing templates as instructional prompts in a question-answering format to improve understanding of intents and slots in natural language processing, it is highly relevant to prompt engineering. The focus on instructional prompts aligns with the study of how prompts can enhance performance in language models. However, it doesn't address 'hard prefix prompts' specifically, hence the rating is not a full 10."
-a practical three-phase approach to fully automated programming using system decomposition and coding copilots,gpt-4-1106-preview,7,"The study focuses on enhancing the capabilities of language models in generating code, which indirectly relates to prompt engineering in the context of creating prompts that facilitate better code generation. The paper mentions empirical insights to create prompt templates, indicating that the research involves understanding how to structure prompts effectively to improve the performance of the language models. Thus, it has relevance to prompt engineering study, particularly the aspect of designing prompts for coding-related tasks. However, the paper's primary aim is not centered on the study of prompt engineering itself but rather on a neuro-symbolic approach to automated programming. This is why the relevance rating is not higher."
-towards robust nlg bias evaluation with syntactically-diverse prompts,gpt-4-1106-preview,9,"The presented study is highly relevant to prompt engineering as it directly addresses the impact of syntactic variations in prompts on the output of NLG systems. It critiques the standard practice of using fixed templates for bias analysis and demonstrates the importance of diversifying prompt structures to obtain more reliable and representative outcomes. This research aligns with the motives of prompt engineering, which include understanding and optimizing how different prompts affect the behavior of language models."
-ku x upstage’s submission for the wmt22 quality estimation: critical error detection shared task,gpt-4-1106-preview,8,"The paper discusses the application of prompt-based fine-tuning within the context of quality estimation and critical error detection tasks which is closely related to prompt engineering. The method of reformulating the task to fit a masked language model objective and the efforts to design intuitive templates and label words are directly relevant to the study of engineering effective prompts. Although the focus is on the specific application of QE and CED in machine translation, the techniques and insights derived could be beneficial for prompt engineering study. The rating is not a full 10 because the paper is specialized in QE and CED, which is only a subset of the broader field of prompt engineering."
-vision encoders in visual question answering,gpt-4-1106-preview,8,"The relevance of the study to prompt engineering is significant as it examines the impact of strategically formatting prompts on the performance of Visual Language Models in the task of Visual Question Answering. This exploration is an essential aspect of prompt engineering, as it directly relates to how the models' input structure influences their ability to leverage learned knowledge. The improvement in task performance through prompt formatting highlights the importance of prompt engineering for optimizing model efficacy. However, it is not given a full score because the study is specifically focused on VQA tasks and VLMs, rather than the broader field of prompt engineering across various models and tasks."
-keyword-optimized template insertion for clinical information extraction via prompt-based learning,gpt-4-1106-preview,9,"The abstract describes a study focused on prompt-based learning, specifically within clinical NLP tasks, and addresses the challenge of prompt design optimization for text classification. Although it doesn't mention 'hard prefix prompts' explicitly, the research on keyword-optimized template insertion is highly relevant to the field of prompt engineering. It explores how the position of the template (i.e., prompt) can affect model performance, which is a core aspect of prompt engineering studies. The research is very pertinent for anyone interested in the effects of prompt design on model efficacy, especially in data-sparse scenarios such as clinical note classification. Thus, it receives a high relevance rating."
-kul@smm4h’22: template augmented adaptive pre-training for tweet classification,gpt-4-1106-preview,7,"The paper's relevance to prompt engineering is significant as it discusses the use of template augmentations in pre-training models for tweet classification, which is a form of prompt engineering. The inclusion of 'template augmented task adaptive pre-training' indicates that the study explores how different prompt structures can aid in adapting language models to particular tasks, here being the classification of tweets mentioning Adverse Drug Effects. Although the study is focused on a specific application in the health domain and does not solely focus on 'hard prefix prompts', it demonstrates a practical implementation of prompt engineering through template augmentation. The relevance is not rated higher because the abstract does not directly address a systematic review on prompt engineering or 'hard prefix prompts' as a general concept, but rather reports on a specific application and its outcomes."
-research on chinese short text classification based on prefix-vector attention template and probabilistic answer set,gpt-4-1106-preview,8,"The abstract discusses the use of a prefix-vector as a template in prompt learning for text classification, indicating a clear relevance to prompt engineering. It specifically addresses the optimization of prompts for improving performance in text classification tasks, which is a direct application of prompt engineering. However, it doesn't solely focus on 'hard' prefix prompts, hence the rating isn't a full 10."
-stt: soft template tuning for few-shot learning,gpt-4-1106-preview,9,"The abstract discusses a new prompt-tuning framework called Soft Template Tuning (STT), which directly relates to prompt engineering as it involves the fine-tuning of prompts for few-shot learning applications with large language models. The study's focus on combining manual prompts and auto-prompts, as well as treating downstream tasks as masked language modeling tasks, is highly relevant to the field of prompt engineering. While it doesn't focus specifically on 'hard prefix prompts,' it does contribute significantly to the overall understanding of prompt tuning, which is a core aspect of prompt engineering. Therefore, it gets a high relevance rating."
-cross-domain reasoning via template filling,gpt-4-1106-preview,8,"The paper discusses a prompt-template-filling approach which is highly relevant to the field of prompt engineering as it directly involves designing prompts to facilitate cross-domain reasoning in sequence to sequence models. The relevance is slightly lower than the maximum score because the prompt engineering study specified involves hard prefix prompts, and it is not clear from the abstract if the study specifically addresses hard prefix prompts or if it has a broader scope. Nevertheless, the methodology and case studies presented are likely to be informative for prompt engineering research, particularly in understanding and improving model's abilities in cross-domain applications."
-daprompt: deterministic assumption prompt learning for event causality identification,gpt-4-1106-preview,8,"The paper 'daprompt: deterministic assumption prompt learning for event causality identification' is highly relevant to prompt engineering as it discusses the design and implementation of a novel prompt learning method for a specific NLP task (ECI). The focus on the deterministic assumption in prompt learning directly feeds into the broader discussion of how to engineer prompts for better utilization of pre-trained language models. While the study is not about hard prefix prompts in general, it contributes to the field of prompt engineering by exploring an alternative approach to conventional prompt design, thus the rating of 8."
-let me check the examples: enhancing demonstration learning via explicit imitation,gpt-4-1106-preview,6,"The abstract discusses Imitation-Demo, a method to enhance demonstration learning for prompt-based predictions. While it does not directly mention 'hard prefix prompts,' it addresses prompt-demonstration dependencies and the optimization of prompt-based learning, which is relevant to prompt engineering. However, since the focus is on imitation and contrastive learning mechanisms rather than the systematic study of hard prefix prompts, the relevance is moderate rather than high."
-stprompt: semantic-guided and task-driven prompts for effective few-shot classification,gpt-4-1106-preview,9,"The given title and abstract describe an approach to prompt engineering that is specifically tailored to improve few-shot classification performance in language models. The development of the STPrompt model, which utilizes semantic-guided and task-driven prompts, is highly relevant to the field of prompt engineering. The use of prompts that are constructed from semantic dependency trees and task-specific metadata is indicative of advanced prompt engineering techniques. Therefore, the study is almost directly aligned with prompt engineering, with the potential deduction of a point for not addressing 'hard prefix prompts' as the prompt is open-ended regarding the type of prompts studied."
-supplementary material for mask-free ovis: open-vocabulary instance segmentation without manual mask annotations,gpt-4-1106-preview,8,"The abstract describes a process of using prompt templates to generate pseudo-captions from image-labels for vision-language models.
This is highly relevant to the study of prompt engineering because it involves the creation of templates that structure input for language models in a way that improves their understanding and output generation. While it doesn't directly mention the term 'hard prefix prompts', the use of rigidly structured prompt templates hints at a similar concept. Prompt engineering is crucial in this context to ensure that the model correctly interprets the image categories and generates coherent and accurate captions. The rating isn't a full 10 as it doesn't cover the entire breadth of prompt engineering studies, especially those that pertain to non-image related tasks, but it remains significantly relevant for the subset of prompt engineering it pertains to." -pre-training extractive question-answer prompts for few-shot chinese text classification,gpt-4-1106-preview,8,"The document discusses the use of prompt learning for few-shot text classification, which is a subset of prompt engineering as it involves designing and training prompts to work effectively with pre-trained language models. The relevance to prompt engineering is high because it directly deals with the creation of prompts that fit a specific task, which is extractive question-answering in this case. The study also touches upon improving the efficiency of such prompts using contrastive learning, which is an advanced topic in prompt engineering. However, the specific term 'hard prefix prompts' is not mentioned, which suggests that while the document is highly relevant to prompt engineering, it may not cover the 'hard prefix' aspect explicitly." -enhancing cross-lingual natural language inference by soft prompting with multilingual verbalizer,gpt-4-1106-preview,8,"The study discusses soft prompt learning within the context of cross-lingual natural language inference, which is related to the field of prompt engineering. Although this is not specifically about 'hard prefix prompts,' soft prompting is an alternative prompting approach, and understanding it can contribute to the field of prompt engineering by offering insights into different methods of designing prompts. Furthermore, the study mentions the limitations of hard prompts, which implies a comparison that can be informative for prompt engineering studies. The rating is not a full 10 because the direct focus on 'hard prefix prompts' is lacking, but it is still highly relevant due to its implications for the broader field of prompt engineering." -exploring prompts in few-shot cross-linguistic topic classification scenarios,gpt-4-1106-preview,9,"The abstract describes research directly related to prompt engineering, specifically addressing the challenge of creating efficient prompts for few-shot learning in cross-linguistic scenarios. The study's exploration of discrete, continuous, and hybrid prompts, and their impact on model performance, makes it highly relevant to the field of prompt engineering. The deduction of one point is due to the abstract not mentioning 'hard prefix prompts' specifically, but it is otherwise very pertinent to the prompt engineering domain." -grounding language to entities and dynamics for generalization in reinforcement learning,gpt-4-1106-preview,7,"The described study involves creating templates for textual descriptions and has a component of paraphrasing, which relates to prompt engineering in that it deals with the systematic construction and variation of prompts. 
However, because it is situated within the context of reinforcement learning and generalization rather than directly focused on prompt engineering for language models or search queries, it is not a perfect match for the specific topic of a 'hard prefix prompts' systematic review." -random word retrieval for automatic story generation,gpt-4-1106-preview,7,"The paper's relevance to prompt engineering study is moderately high. It discusses automatic story generation using a method that mimics human writing prompts. The concept of leveraging random words as prompts and then using the internet to provide context aligns with aspects of prompt engineering, which involves creating stimuli that guide the output of generative models. While the paper focuses primarily on story generation rather than the intricacies of engineering prompts, the approach contributes to understanding how prompts can be constructed to initiate a creative process in AI systems. Hence, it offers insights applicable to prompt engineering, even if that is not the main focus of the study." -recruiting patients and collecting data for an observational study using computerised record pop-up prompts: the prog-res study,gpt-4-1106-preview,4,"The study described in the abstract demonstrates the practical application of electronic prompts in the context of patient recruitment and data collection for medical studies, which indirectly relates to prompt engineering as it showcases a real-world use case of prompts facilitating a task, in this case, recruitment, and data collection. However, the study is centered around improving operational aspects of medical research rather than exploring the theoretical or methodological aspects of prompt design, development, or optimization in automated systems or artificial intelligence, which are the central themes of prompt engineering. The relevance is therefore moderate as the conceptual link exists but is tangential to the principal focus of prompt engineering as a field." diff --git a/data/semantic_scholar_data/semantic_scholar_relevant_papers_with_pdf.csv b/data/semantic_scholar_data/semantic_scholar_relevant_papers_with_pdf.csv deleted file mode 100644 index 233e9f2..0000000 --- a/data/semantic_scholar_data/semantic_scholar_relevant_papers_with_pdf.csv +++ /dev/null @@ -1,946 +0,0 @@ -Title,Probability,Reasoning,Open Access PDF URL -"""do anything now"": characterizing and evaluating in-the-wild jailbreak prompts on large language models",7,"The paper provides valuable insights into jailbreak prompts, which are a specific type of adversarial prompt that attempts to circumvent the safeguards of large language models. Understanding the characteristics and evaluation of these prompts is relevant to prompt engineering because it guides the development of prompts that can resist misuse and prompts that align better with human values. Although it focuses on the adversarial aspect rather than constructive prompt engineering, the findings can inform the broader field of prompt engineering, particularly in designing robust and safe systems. Therefore, the study is quite relevant but not entirely centered on prompt engineering in its purest form, hence the rating of 7.",https://arxiv.org/pdf/2308.03825 -latent jailbreak: a benchmark for evaluating text safety and output robustness of large language models,9,"The paper focuses on evaluating the safety and robustness of large language models (LLMs) using a benchmark that entails analysis of prompt design, which is highly relevant to prompt engineering. 
Specifically, it investigates how malicious instructions embedded within prompts affect the LLM's behavior. This is crucial for understanding how different prompt structures (position of instructions, word replacements, and instruction replacements) influence the model's output, aligning closely with the broader field of prompt engineering that aims to optimize the interaction with LLMs. The systematic review mentioned in the query would likely cover such research, as it is integral to understanding how 'hard prefixes' or fixed parts of prompts can affect the LLM's outputs. The only reason it does not get a full 10 is because the study does not solely focus on the engineering aspect of prompts but also on the safety and ethical concerns related to prompts.",https://arxiv.org/pdf/2307.08487 -fuzzllm: a novel and universal fuzzing framework for proactively discovering jailbreak vulnerabilities in large language models,7,"The relevance of 'fuzzllm: a novel and universal fuzzing framework for proactively discovering jailbreak vulnerabilities in large language models' to prompt engineering is notable. While the primary focus is on discovering vulnerabilities through fuzz testing, the utilization of templates to understand the structure of prompts and the identification of features within these prompts are directly related to the study of prompt engineering. The process of ensuring that prompts do not lead to service guideline violations requires a deep understanding of how different prompts are constructed and how they interact with LLMs. Therefore, the research indirectly contributes to the field of prompt engineering by seeking ways to prevent manipulative prompts from eliciting undesired responses. However, the study does not directly address hard prefix prompts or the systematic review of such prompts, which would be more central to a targeted prompt engineering study.",https://arxiv.org/pdf/2309.05274 -"tricking llms into disobedience: understanding, analyzing, and preventing jailbreaks",8,"The study is highly relevant to prompt engineering as it addresses the manipulation of prompts to achieve unintended model behaviors, which is a critical aspect of prompt design and engineering. Understanding how to prevent these 'jailbreaks' is crucial for developing more secure and reliable prompt engineering practices. The study provides insights into the vulnerabilities of current models and offers potential solutions, which directly contribute to the field of prompt engineering. The rating is not a full 10 because the study is more focused on security and mitigation rather than the broader aspects of prompt engineering, such as the optimization of prompts for various tasks or the generation of more sophisticated prompts for improved performance.",http://arxiv.org/pdf/2305.14965 -jailbreaking black box large language models in twenty queries,8,"The abstract discusses an algorithm (PAIR) for generating 'semantic jailbreaks' using adversarial methods on large language models (LLMs) such as GPT-3.5/4, Vicuna, and PaLM-2. This is highly relevant to prompt engineering because understanding and preventing adversarial manipulation of LLMs is crucial for developing more effective and secure prompts. It is directly related to the field as it explores the vulnerabilities in the current engineering of prompts and how they can be exploited. 
The abstract, however, does not specifically address 'hard prefix prompts', which are a subset of prompts within prompt engineering, hence not warranting a full score of 10.",https://arxiv.org/pdf/2310.08419 -visual prompt tuning,7,"The topic of Visual Prompt Tuning (VPT) is relevant to the prompt engineering study since it deals with the adaptation of pre-trained models, which is a core concept in prompt engineering. However, VPT specifically addresses the visual domain and large-scale Transformer models in vision, which differs from the 'hard prefix prompts' that typically relate to textual input. Despite this difference, the underlying principles of efficient tuning and the introduction of new parameters to influence model behavior without extensive retraining are concepts shared with prompt engineering. This cross-domain relevance is valuable but not directly tied to the initial study of 'hard prefix prompts', hence the rating of 7.",http://arxiv.org/pdf/2203.12119 -conditional prompt learning for vision-language models,7,"The abstract describes a study that focuses on prompt learning, specifically in vision-language models, which is highly relevant to the field of prompt engineering. The study introduces Conditional Context Optimization (CoCoOp), which is a method for improving the generalization of learned prompts over unseen classes. While this is directly related to prompt engineering, it is specifically tailored to vision-language models, and not directly focused on 'hard prefix prompts' which the original prompt suggests. Therefore, the relevance rating is not a perfect 10, as 'hard prefix prompts' might imply a different subset of prompt engineering concerned with text prompts in NLP. Nevertheless, the concepts studied are transferable to prompt engineering more broadly, warranting a relatively high rating.",https://arxiv.org/pdf/2203.05557 -prompt-to-prompt image editing with cross attention control,8,"The provided abstract describes a study closely related to 'prompt engineering,' as it involves a framework for editing images using text prompts, which directly entails understanding and manipulating prompts for precise outcomes. The emphasis on cross-attention layers as a mechanism for controlling the relationship between text prompts and the spatial layout of images is particularly relevant to the field of prompt engineering, as it is concerned with the fine-tuned influence of textual input on generative models. While the study is not specifically about 'hard prefix prompts,' it contributes to the broader field of prompt engineering by showing how textual prompts can be used to control and manipulate the output of synthesis models. The 2-point deduction accounts for the specific focus on imagery rather than a systematic review of hard prefix prompts in various contexts.",http://arxiv.org/pdf/2208.01626 -p-tuning: prompt tuning can be comparable to fine-tuning across scales and tasks,9,"The provided abstract is highly relevant to prompt engineering study as it discusses 'prompt tuning', which is a method within the field of prompt engineering in natural language understanding (NLU). It compares prompt tuning with fine-tuning, highlighting its efficiency and effectiveness across different tasks and model scales, and introduces an advanced version named 'P-Tuning v2'. 
This research contributes to the understanding of how continuous prompts can be optimized and sheds light on prompt engineering as a potentially universal method for NLU tasks, making it a significant resource for studying prompt engineering methods.",https://aclanthology.org/2022.acl-short.8.pdf -"pre-train, prompt, and predict: a systematic survey of prompting methods in natural language processing",9,"The article provides a detailed survey of prompt-based learning in natural language processing, which is directly relevant to prompt engineering. It covers the adaptation of language models to new tasks using prompts, which is a core concept in prompt engineering. The systematic review and organization of research along with the introduction of a unified set of mathematical notations for existing work are valuable for understanding the breadth and depth of prompt-based methods, making it highly relevant to the study of prompt engineering. Moreover, the article's release of resources like NLPedia–Pretrain aids further research and accessibility. The rating is not a perfect 10 because it might not exclusively focus on 'hard prefix prompts' as the prompt engineering study inquires but generally covers prompting methods in NLP.",https://dl.acm.org/doi/pdf/10.1145/3560815 -learning to prompt for open-vocabulary object detection with vision-language model,8,"The abstract details a novel method, detection prompt (DetPro), which focuses on learning continuous prompts for open-vocabulary object detection, indicating an application of prompt engineering in vision-language models. The relevance is high because it directly tackles the challenge of designing effective prompts to improve model performance. However, it might not cover the theoretical foundations or a wider range of prompt engineering applications, hence not a full score.",https://arxiv.org/pdf/2203.14940 -an information-theoretic approach to prompt engineering without ground truth labels,9,"The article presents a technique for prompt engineering, which is highly relevant to the study of prompt engineering. It focuses on a method that maximizes mutual information between input and output to select effective prompts without labeled data or model access. This method is innovative in the field of prompt engineering as it bypasses the need for substantial labeled datasets and the necessity to tweak model parameters. However, the title does not specifically mention 'hard prefix prompts,' so it may not be entirely focused on 'hard prefix prompts' as the type of prompts being engineered, which is why it doesn't receive a perfect 10.",https://www.cambridge.org/core/services/aop-cambridge-core/content/view/035D7C8A55B237942FB6DBAD7CAA4E49/S1047198723000025a.pdf/div-class-title-out-of-one-many-using-language-models-to-simulate-human-samples-div.pdf -prompt distribution learning,9,"The abstract and TLDR indicate that the study deals with prompt distribution learning, which is directly related to prompt engineering. It focuses on adapting pre-trained models to downstream tasks by learning prompt distributions, a technique relevant to constructing and using prompts to improve model performance. This is highly pertinent to studies in prompt engineering, which aims to optimize how models interact with prompts for better task performance. 
Although the term 'hard prefix prompts' is not explicitly mentioned, the overall concept of learning and utilizing prompts makes this study considerably relevant.",https://arxiv.org/pdf/2205.03340 -ignore previous prompt: attack techniques for language models,8,"The provided abstract is highly relevant to the subject of 'prompt engineering,' as it directly discusses PromptInject, a methodology for adversarial prompt composition designed to exploit vulnerabilities in transformer-based language models like GPT-3. This pertains to the broader category of prompt engineering by showcasing methods of prompting that could lead to model misalignment, thus revealing long-tail risks. Understanding these attack techniques is crucial for developing more robust prompt engineering practices, although the specific focus on 'hard prefix prompts' is not directly mentioned.",https://arxiv.org/pdf/2211.09527 -language models that seek for knowledge: modular search & generation for dialogue and prompt completion,7,"While the abstract provided doesn't directly address 'hard prefix prompts' or 'prompt engineering' specifically, it does pertain to the broader subject area of how language models can be improved to generate more factual and relevant responses. The research on modular search and generation in the context of dialogue and prompt completion is relevant to prompt engineering as it impacts the effectiveness of the prompts in eliciting accurate and meaningful responses from language models. Therefore, the rating is relatively high due to the indirect relevance of improving language model outputs, which is a fundamental aspect of prompt engineering.",http://arxiv.org/pdf/2203.13224 -test-time prompt tuning for zero-shot generalization in vision-language models,9,"The abstract describes a study directly related to prompt engineering, specifically the dynamic tuning of prompts for vision-language models to enhance zero-shot generalization. Although the provided text doesn't explicitly mention 'hard prefix prompts,' it discusses an advanced concept of prompt optimization at test-time which is highly relevant to the broader field of prompt engineering. The method's ability to adapt prompts using a single test sample fits well within the study of how prompts can be engineered and optimized to improve model performance, particularly in zero-shot settings.",http://arxiv.org/pdf/2209.07511 -diffusiondb: a large-scale prompt gallery dataset for text-to-image generative models,8,"The abstract describes a dataset (DiffusionDB) focused on the synthesis of text-to-image generation using prompts in natural language, which includes the study of syntactic and semantic characteristics of prompts. This relates closely to prompt engineering, as it involves analyzing how prompts influence the outputs of generative models and finding optimal prompts to achieve desired results. The only reason it does not score a perfect 10 is because 'hard prefix prompts' which the initial prompt specified are not mentioned, so it may not cover the specific focus on 'hard prefix prompts'. Nonetheless, it is highly relevant for studies about prompt engineering in the broader context of text-to-image generative models.",https://arxiv.org/pdf/2210.14896 -learning to prompt for continual learning,8,"The abstract discusses an approach to continual learning that focuses on using prompts as learnable parameters within a memory space to guide model predictions and manage knowledge. 
This is highly relevant to prompt engineering because it directly deals with the optimization and efficacy of prompts in a machine learning context. However, it is not a 'comprehensive systematic review on hard prefix prompts,' as the prompt specifies, but rather a presentation of a novel framework for continual learning using prompts, which is why the rating is not a perfect 10.",https://arxiv.org/pdf/2112.08654 -prompt-aligned gradient for prompt tuning,9,"The abstract describes a study focused on improving prompt tuning methods for vision-language models, presenting a new approach called Prompt-aligned Gradient (ProGrad) specifically designed to prevent the loss of general knowledge during the fine-tuning process. This is highly relevant to prompt engineering as it addresses a significant challenge in the field—maintaining the balance between task-specific adaptation and the retention of pre-trained capabilities. The paper shows potential advancements in prompt tuning, which is a core aspect of prompt engineering, hence the high relevance rating.",https://arxiv.org/pdf/2205.14865 -hyperprompt: prompt-based task-conditioning of transformers,9,"The provided text is highly relevant to prompt engineering study as it directly addresses a novel architecture called 'HyperPrompt' for prompt-based task-conditioning in Transformers, which is a key area in the field of prompt engineering. The text discusses the efficiency and effectiveness of HyperPrompt in the context of few-shot learning and multi-task learning, benchmarks that are essential for evaluating prompt-based methods. The relevance is not rated a full 10 only because the specific term 'hard prefix prompts' is not directly mentioned, although the description strongly suggests relevance to that concept.",https://arxiv.org/pdf/2203.00759 -prompt for extraction? paie: prompting argument interaction for event argument extraction,8,"The provided abstract describes a model (PAIE) that leverages prompt tuning as part of its methodology for Event Argument Extraction (EAE). The model's use of prompts to guide span selection and capture argument interactions is highly relevant to the study of prompt engineering, as it applies prompt-based methods for a specific NLP task. The paper also discusses extractive prompt tuning strategies and their effectiveness, which contributes to the understanding of prompt engineering. However, it does not specifically address 'hard prefix prompts' which might be a more specialized aspect within the field of prompt engineering, hence the rating isn't a full 10.",https://aclanthology.org/2022.acl-long.466.pdf -promda: prompt-based data augmentation for low-resource nlu tasks,8,"The paper's focus on 'Prompt-based Data Augmentation' and the method of training 'small-scale Soft Prompts' in PLMs directly relates to the concept of prompt engineering, a technique used to interface with and extract specific behaviors from language models. While the paper might not explicitly cover a 'hard prefix prompt,' it does deal with the broader topic of how prompts can be engineered and utilized to improve NLU tasks, which makes it highly relevant to studies within prompt engineering.",https://aclanthology.org/2022.acl-long.292.pdf -no more fine-tuning? an experimental evaluation of prompt tuning in code intelligence,8,"The abstract discusses prompt tuning as an alternative to fine-tuning in the context of code intelligence tasks. 
Prompt tuning is highly relevant to prompt engineering studies since it involves designing and inserting prompts that aid the pre-trained models in adapting to specific tasks. This specific paper evaluates the efficiency of prompt tuning over fine-tuning, which is a core topic within prompt engineering research. Although it focuses on code intelligence tasks and not 'hard prefix prompts' specifically, the principles and findings can have implications for prompt engineering in general. The relevance could be higher if the study specifically addressed hard prefix prompts or a broader range of prompt engineering techniques.",https://arxiv.org/pdf/2207.11680 -personalized prompt learning for explainable recommendation,8,"The given title and abstract focus on 'prompt learning', particularly in the context of explainable recommendation systems. It is highly relevant to prompt engineering since prompt learning is a crucial aspect of tailoring prompts to improve the performance of AI models, such as pre-trained transformer models mentioned in the text. Moreover, the paper discusses innovative approaches (discrete and continuous prompt learning) and training strategies, which are essential for advancing the field of prompt engineering. The rating is not a full 10 because the study specifically addresses the use of prompt learning for explainable recommendations rather than a broad systematic review on 'hard prefix prompts' in general, implying a more focused domain application rather than a comprehensive study across multiple domains or types of prompts.",https://arxiv.org/pdf/2202.07371 -towards unified conversational recommender systems via knowledge-enhanced prompt learning,7,"The abstract discusses the integration of recommendation and conversation modules in a conversational recommender system using a prompt learning paradigm, which is particularly relevant to prompt engineering. The use of knowledge-enhanced prompts and the unification of different subtasks into the prompt learning framework makes it pertinent to the study of how prompts can be designed to improve performance in AI systems. Although the primary focus is on conversational recommender systems rather than prompt engineering in general, the methodology and implications for the field of prompt engineering are significant enough to warrant a relevance rating of 7.",https://arxiv.org/pdf/2206.09363 -bridge-prompt: towards ordinal action understanding in instructional videos,7,"The paper describes an approach that involves 'reformulating individual action labels as integrated text prompts,' which relates to the concept of incorporating linguistic structures (prompts) to enhance the understanding of actions in videos. This suggests an innovative use of prompt engineering to bridge the semantic gap between actions, which is relevant to the study of prompts in the context of machine learning. However, this application is specific to action recognition in video data and does not address 'hard prefix prompts' directly, which is why the relevance rating is not higher.",https://arxiv.org/pdf/2203.14104 -prompt consistency for zero-shot task generalization,9,"The title and abstract describe a study focused on improving zero-shot task generalization by regularizing prompt consistency, which is highly relevant to prompt engineering. Prompt engineering involves the careful design of prompts to elicit the desired responses from language models, and this paper directly addresses and proposes a method for enhancing performance in this area. 
The relevance is not rated a full 10 because the study may not explicitly be about 'hard prefix prompts' as mentioned in the primary query but it does contribute significantly to the broader field of prompt engineering.",http://arxiv.org/pdf/2205.00049 -promptcap: prompt-guided task-aware image captioning,8,"The article describes 'PromptCap', a model that utilizes natural-language prompts to generate image captions that are tailored to assist large language models in performing visual question answering tasks. While the primary focus is on image captioning to aid knowledge-based VQA, the use of prompts to guide the model's output is directly related to prompt engineering. The research showcases how carefully engineered prompts can significantly enhance the performance of language models in understanding and responding to visual content. Therefore, the study has high relevance to prompt engineering, particularly in the context of integrating textual and visual information. However, it does not directly address hard prefix prompts in a systematic review, which is why the rating is not a perfect 10.",https://arxiv.org/pdf/2211.09699 -spot: better frozen model adaptation through soft prompt transfer,9,"The abstract describes a study directly relevant to prompt engineering, as it focuses on the use of prompts to enhance performance in natural language processing tasks through a method known as Soft Prompt Transfer (SPoT). The relevance is high because it involves leveraging soft prompts for model adaptation which is a specific aspect of prompt engineering. Moreover, it suggests a systematic approach to understanding task transferability, which can contribute significant insights into the field of prompt engineering. The only reason it does not receive a full 10 is that the abstract does not mention 'hard prefix prompts' which was the specific focus of the systematic review mentioned in the prompt.",https://aclanthology.org/2022.acl-long.346.pdf -prompt programming for large language models: beyond the few-shot paradigm,9,"The abstract discusses advanced concepts in prompt programming and evaluates the effectiveness of 0-shot prompts in comparison to few-shot prompts using GPT-3. It underlines the significant impact of prompt design on language model performance and outcomes. The introduction of 'metaprompt' suggests a forward-thinking approach in prompt engineering, indicating a relevance to the study of prompt engineering. The score is not a perfect 10 because the abstract doesn't specifically mention 'hard prefix prompts,' but the overall discussion is highly pertinent to the field of prompt engineering.",https://arxiv.org/pdf/2102.07350 -coda-prompt: continual decomposed attention-based prompting for rehearsal-free continual learning,9,"The paper is highly relevant to the field of prompt engineering as it discusses a novel approach for producing dynamic prompts through an attention-based key-query mechanism, specifically for continual learning in computer vision. This study directly addresses the issue of prompt generation in the context of large-scale pre-trained models and presents a solution for improving accuracy without the need for data rehearsal. 
Although it may not exclusively focus on 'hard prefix prompts', the concept of input-conditioned prompt components is a valuable contribution to prompt engineering studies, making it almost entirely pertinent to the field.",https://arxiv.org/pdf/2211.13218 -prompt learning with optimal transport for vision-language models,9,"The paper is highly relevant to prompt engineering, as it directly addresses the challenge of creating efficient prompts for vision-language models, which is a subset of the broader field of prompt engineering. The utilization of optimal transport to match vision and text modalities is a novel approach in learning multiple prompts, which aligns with the topic of systematic review on hard prefix prompts by exploring alternative strategies to enhance prompt effectiveness. The only reason the rating is not a full 10 is that the abstract does not explicitly mention 'hard prefix prompts', suggesting that the study might not be solely focused on that specific aspect of prompt engineering.",http://arxiv.org/pdf/2210.01253 -idpg: an instance-dependent prompt generation method,9,"The provided abstract directly pertains to prompt engineering within the realm of NLP transfer learning, making it highly relevant. The novel method of Instance-Dependent Prompt Generation (IDPG) is a significant contribution to prompt engineering because it introduces variability and personalization of prompts for different input instances. The effectiveness of this method is demonstrated through experiments on various NLU tasks, situating the paper at the forefront of prompt engineering research. The reason for not awarding a perfect 10 is that the study does not explicitly mention 'hard prefix prompts', but the concept of IDPG seems inherently related to the engineering of task-specific prompts which would include hard prefix prompts among others.",http://arxiv.org/pdf/2204.04497 -continual prompt tuning for dialog state tracking,7,"The abstract provided discusses 'Continual Prompt Tuning' which is a method for task adaptation in dialog systems that includes learning and storing prompt token embeddings to prevent catastrophic forgetting. Although not directly stated as 'hard prefix prompts,' the methodology is closely related to prompt engineering as it involves the manipulation of prompts to improve the performance of a pre-trained model in continual learning scenarios. This concept is relevant to the study of prompt engineering because it explores ways to effectively utilize prompts in a dynamic and evolving context, which is a crucial aspect of advanced prompt engineering strategies. However, the rating is not a full 10 because it is not directly focused on 'hard prefix prompts' specifically, which is narrower in scope compared to the broader concept of prompt engineering.",http://arxiv.org/pdf/2203.06654 -exploring the universal vulnerability of prompt-based learning paradigm,9,"The abstract describes a study that directly investigates the vulnerabilities of the prompt-based learning paradigm, which is highly relevant to prompt engineering. The focus on triggers that exploit these vulnerabilities is critical for understanding the limitations and potential risks associated with prompts in language models. 
While not focused on creating or optimizing prompts, it is fundamentally related to their integrity and security, which is an essential aspect of prompt engineering studies.",http://arxiv.org/pdf/2204.05239 -how many data points is a prompt worth?,9,"The abstract describes a study focusing on comparing the effectiveness of using prompts versus generic model heads in fine-tuning pretrained models for classification tasks. It specifically aims to quantify the benefits of prompts when working with limited data. Since the study investigates the impact of prompting on model performance across different tasks and data sizes, it contributes valuable insights to the field of prompt engineering. The high rating reflects the direct relevance of the findings to understanding how prompts can improve machine learning models, which is a core aspect of prompt engineering research. However, the rating is not a full 10 because it does not cover the breadth of prompt engineering, such as the design and optimization of prompts, which also includes areas beyond fine-tuning for classification tasks.",https://aclanthology.org/2021.naacl-main.208.pdf -knowprompt: knowledge-aware prompt-tuning with synergistic optimization for relation extraction,8,"The paper presents an advancement in prompt-tuning for the specific application of relation extraction. It introduces KnowPrompt, a technique that effectively incorporates domain knowledge into prompt templates, which is highly relevant to studies on prompt engineering. Although the focus is on relation extraction and not hard prefix prompts, the concepts of knowledge-aware prompts and learnable virtual type words are innovative contributions to the field of prompt-tuning as a whole. The lower score is because it does not directly address 'hard prefix prompts' as described in the original broad request, but it is still significantly relevant to the broader subject of prompt engineering.",https://arxiv.org/pdf/2104.07650 -knowledgeable prompt-tuning: incorporating knowledge into prompt verbalizer for text classification,9,"The paper directly relates to the field of prompt engineering by introducing a novel approach to improve prompt tuning performance for text classification tasks. This approach involves integrating external knowledge into the verbalizer component of the tuning process, which is a specific technique within the broader area of prompt engineering. This is highly relevant as it targets one of the fundamental challenges in the field, which is to optimize the interaction between pre-trained language models and task-specific prompts. The rating is not a full 10 because it does not cover 'hard prefix prompts' specifically, but focuses more broadly on knowledgeable prompt-tuning, which may or may not include hard prefixes.",https://aclanthology.org/2022.acl-long.158.pdf -pro-tuning: unified prompt tuning for vision tasks,8,"The abstract discusses the concept of prompt tuning, termed 'Pro-tuning', which is highly relevant to the field of prompt engineering as it applies prompt-based learning principles to computer vision tasks. While the principle is derived from work in natural language processing, the adaptation to vision models suggests a cross-disciplinary application of prompt engineering techniques, which is pertinent to the broader study of how prompts can be engineered for different types of models across fields. 
The relevance is not rated a full 10 as the study is specific to computer vision and may not cover all aspects of 'hard prefix prompts' in the context of a systematic review, which would more generally encompass various modalities and tasks.",http://arxiv.org/pdf/2207.14381 -interactive and visual prompt engineering for ad-hoc task adaptation with large language models,9,"The abstract provided outlines a study that is highly relevant to prompt engineering. It describes the development of PromptIDE, a tool that facilitates the experimentation and optimization of prompts for neural language models. The workflow mentioned is designed to enhance prompt creation and performance evaluation before deployment, which is central to the field of prompt engineering. Although it doesn't explicitly mention 'hard prefix prompts,' the focus on prompt variations and performance signifies a close connection to the concept of prompt design and engineering. Thus, the relevance to prompt engineering is very high, but not a perfect 10 due to the missing specific mention of 'hard prefix prompts'.",https://arxiv.org/pdf/2208.07852 -dynamic prompt learning via policy gradient for semi-structured mathematical reasoning,7,"The abstract describes a study that focuses on enhancing the performance of pre-trained language models like GPT-3 on mathematical reasoning tasks by using a novel approach called PromptPG. This approach uses policy gradient to optimize the selection of in-context examples for prompt construction, which is a core aspect of prompt engineering. While the study is not directly about 'hard prefix prompts', it addresses the broader concept of prompt optimization for improving model performance. Therefore, it is relevant to prompt engineering but not specifically focused on a comprehensive systematic review on hard prefix prompts.",http://arxiv.org/pdf/2209.14610 -conversing with copilot: exploring prompt engineering for solving cs1 problems using natural language,9,"The study is highly relevant to prompt engineering as it investigates the use of natural language interactions to guide GitHub Copilot, an AI code generation tool, in solving programming problems. It focuses on how changes to the wording of a problem can impact the AI's ability to generate correct code, which is at the core of prompt engineering techniques. The fact that the study includes an empirical evaluation of Copilot's performance across a dataset of programming problems and discusses the potential of prompt engineering as a learning tool underscores its relevance to the field. The rating is not a perfect 10 because the study is specific to the domain of programming problem solving and the tool GitHub Copilot, and while it is a significant component of prompt engineering, there may be additional facets of prompt engineering in broader contexts that are not covered by this study.",https://eprints.iisc.ac.in/81157/1/SIGCSE_2023.pdf -"zeroprompt: scaling prompt-based pretraining to 1, 000 tasks improves zero-shot generalization",8,"The abstract discusses a multitask pretraining approach named ZeroPrompt which is highly relevant to prompt engineering as it directly relates to enhancing the performance of zero-shot learning using prompts. It also mentions the introduction of a new prompting method that utilizes a genetic algorithm to discover the best prompts for unseen tasks. This is a significant contribution to the field of prompt engineering. 
Despite not mentioning 'hard prefix prompts,' the focus on task scaling and prompting methods in zero-shot scenarios are pertinent to prompt engineering study. The relevance rating is not a full 10 because the abstract does not explicitly discuss the comprehensive systematic review or focus exclusively on 'hard prefix prompts,' which are specified in the prompt.",https://aclanthology.org/2022.findings-emnlp.312.pdf -fantastically ordered prompts and where to find them: overcoming few-shot prompt order sensitivity,9,"The study directly investigates the effect of prompt order sensitivity and devises a method to overcome it in few-shot settings, which is highly relevant to prompt engineering. It leverages the generative capabilities of language models to improve the performance of GPT-family models without the need for additional data, indicating a significant contribution to the field of prompt engineering. The deduction of one point is due to the fact that it focuses specifically on order sensitivity and not on the entire scope of hard prefix prompts, but it is still highly pertinent.",https://aclanthology.org/2022.acl-long.556.pdf -iteratively prompt pre-trained language models for chain of thought,9,"The abstract describes an innovative approach to improving the capability of Pre-trained Language Models (PLMs) for tasks that require multi-step reasoning, an aspect that is central to prompt engineering. This iterative prompting framework that progressively elicits relevant knowledge and dynamically synthesizes prompts based on contexts directly pertains to the field of prompt engineering, as it looks at refining the prompts that are given to language models in order to achieve better performance on complex tasks. While it does not specifically mention 'hard prefix prompts', which is part of the original query, the idea of creating dynamic and context-aware prompts is highly relevant to the study of prompt design and engineering.",https://aclanthology.org/2022.emnlp-main.174.pdf -visual prompt tuning for test-time domain adaptation,8,"The presented work is highly relevant to prompt engineering study as it introduces a method named 'Data-efficient Prompt Tuning' (DePT), which is a direct application of prompt engineering to adapt models during test-time. It focuses on tuning prompts as a parameter-efficient way to adjust model representation to new data domains. Although the term 'prompt' in the context of this paper refers to visual prompts in a vision Transformer, which differs from textual prompts commonly discussed in NLP prompt engineering, the concept of adjusting a small set of parameters for domain adaptation is aligned with the principles of prompt engineering. The reason for not being a 10 is that the term 'hard prefix prompts' was not mentioned, which suggests that the exact topic of the prompt may not be covered in its entirety.",https://arxiv.org/pdf/2210.04831 -repository-level prompt generation for large language models of code,9,"The paper presents a framework that directly contributes to the field of prompt engineering by generating example-specific prompts for large language models of code. The fact that this system uses the context of the entire repository and does not rely on the internal weights of the models aligns well with the principles of prompt engineering, where context and relevance are crucial for effective prompt design. 
The relevance to engineering study is slightly less than perfect only because it is specific to code generation and not the broader application of prompts in general large language models.",http://arxiv.org/pdf/2206.12839 -visual prompt tuning for generative transfer learning,7,"The provided abstract discusses the topic of prompt tuning which is relevant to prompt engineering, a field that deals with optimizing the input given to AI models to elicit better performance. Although the context of the abstract is specific to the domain of generative image models and visual prompts, which is slightly different from hard prefix prompts in textual domain, the general principles and techniques of prompt tuning can be considered applicable across multiple domains. Hence, the content is substantially relevant to prompt engineering, especially in demonstrating knowledge transfer and domain adaptation which are significant challenges in the field. The lower rating reflects the domain-specific focus on visual transformers rather than a general treatment of all forms of prompt engineering.",https://arxiv.org/pdf/2210.00990 -prompt vision transformer for domain generalization,8,"The abstract describes a study that involves prompt learning with vision transformers for the purpose of domain generalization. Although the study does not specifically mention 'hard prefix prompts', it does focus on a prompt-based method (DoPrompt) for improving the performance of ViTs in unseen domains. This is relevant to prompt engineering because it is a direct application of using prompts to enhance model generalization. The relevance rating is not a full 10 because the study does not directly address 'hard prefix prompts' as specified in the initial prompt, but it is closely related and contributes to the field of prompt engineering.",http://arxiv.org/pdf/2208.08914 -prompt tuning for discriminative pre-trained language models,8,"The paper presents DPT, a novel framework for prompt tuning in the context of discriminative pre-trained language models, which is highly relevant to the field of prompt engineering as it explores how to adapt PLMs to different tasks. While it does not directly address 'hard prefix prompts', the concept of prompt tuning is central to prompt engineering. The study's systematic approach to reformulating NLP tasks to suit discriminative PLMs and its comprehensive experiments align closely with prompt engineering methodologies. Thus, the paper contributes valuable insights to the broader field of prompt engineering, even if it is not specialized in hard prefix prompts specifically. The rating is not a full 10 due to the abstract's lack of direct reference to hard prefix prompts.",https://arxiv.org/pdf/2205.11166 -incremental prompting: episodic memory prompt for lifelong event detection,7,"The presented abstract is relevant to prompt engineering study to a considerable extent because it introduces 'Episodic Memory Prompts (EMP)', which is a technique relevant to prompt engineering. It contributes to the field by addressing the issue of catastrophic forgetting and suggesting a prompt-based method to retain task-specific knowledge in a model that is being continually updated. This is pertinent as it deals with prompt optimization and its role in lifelong learning, both of which fall under the broad umbrella of prompt engineering. However, it is not a 'systematic review on hard prefix prompts' specifically; rather, it is an empirical study about a novel approach to prompting. 
Hence, the rating is not a full 10, as it does not exactly match the premise of a 'comprehensive systematic review on hard prefix prompts.'",http://arxiv.org/pdf/2204.07275 -prompt-matched semantic segmentation,7,"While the abstract discusses 'prompt learning' in the context of visual foundation models and semantic segmentation, which is somewhat related to the concept of 'prompt engineering,' it refers to a different domain (visual tasks rather than text-based tasks). The relevance to prompt engineering studies is indirect, as the principles of learning prompts for tasks could potentially be analogous across domains. However, the term 'prompt' in this context does not directly correspond to 'hard prefix prompts' typically discussed in language models and prompt engineering. The methodology and application are related in a broader sense to the concept of optimizing pre-trained models using prompts, so it receives a medium-high relevance rating.",https://arxiv.org/pdf/2208.10159 -multitask vision-language prompt tuning,9,"The abstract provides a detailed insight into an advanced application of prompt engineering—specifically in the area of multitask vision-language prompt tuning. It is highly relevant to the study of prompt engineering because it discusses a method for improving the performance of vision-language models through task-specific learned prompt vectors and shares empirical evidence of cross-task benefits. Furthermore, the concept of transferable prompts and their effect on model generalization is directly pertinent to the prompt engineering domain. The only reason the rating isn't a full 10 is because the prompt engineering here is specialized for vision-language tasks, which might be slightly narrower in focus than the broader concept of 'hard prefix prompts' mentioned in the initial prompt.",https://arxiv.org/pdf/2211.11720 -memory-assisted prompt editing to improve gpt-3 after deployment,9,"The relevance to prompt engineering is very high, as this study focuses on refining the interaction between users and GPT-3 through prompt modification using memory-assisted techniques. The study addresses improving the accuracy of responses from GPT-3 by using recorded instances of misunderstandings and user feedback to inform better prompt construction. This falls directly within the realm of prompt engineering, which is the practice of designing prompts to elicit better performance from language models.",https://aclanthology.org/2022.emnlp-main.183.pdf -openprompt: an open-source framework for prompt-learning,9,"The given abstract reviews a toolkit called OpenPrompt designed for prompt-learning in natural language processing, which is highly relevant to the study of prompt engineering. Prompt engineering deals with how to best structure and adapt prompts to get effective responses from language models. While it does not specifically mention 'hard prefix prompts', it offers a framework that likely supports experimenting with various prompt strategies, including hard prefixes. Therefore, the relevance to prompt engineering is high, but not maximum as it does not directly address 'hard prefix prompts'.",https://aclanthology.org/2022.acl-demo.10.pdf -adapting language models for zero-shot learning by meta-tuning on dataset and prompt collections,8,"The study describes a process of 'meta-tuning' pre-trained language models on a variety of datasets and unifying label descriptions in a QA format to optimize them for zero-shot learning. 
While not specifically addressing 'hard prefix prompts,' it heavily involves the concept of using prompts to improve the performance of language models in tasks they were not explicitly trained for. This is highly relevant to the field of prompt engineering, as it explores how different methods of providing input to models (in this case, through meta-tuning) can result in better alignment with desired outcomes. The TLDR further confirms the study's relevance to prompt engineering by emphasizing the improved performance on answering prompts. However, given that it does not directly study hard prefix prompts, the rating is not a full 10.",https://aclanthology.org/2021.findings-emnlp.244.pdf -prompt-learning for fine-grained entity typing,9,"The abstract describes a study focused on prompt-learning, which is directly related to the field of prompt engineering. It highlights the use of language prompts to tune pre-trained language models for specific tasks, which is an essential component of research within prompt engineering. The relevance is high because the work specifically investigates prompt-learning methodologies and their applications, including a new self-supervised strategy for zero-shot scenarios, directly contributing to the understanding and advancement of how prompts can improve model performance on a granular level. The only detail preventing a perfect score is the lack of explicit mention of 'hard prefix prompts,' but the described study is likely to have significant implications for prompt engineering in general.",https://aclanthology.org/2022.findings-emnlp.512.pdf -a good prompt is worth millions of parameters: low-resource prompt-based learning for vision-language models,9,"The abstract clearly pertains to the study of prompt engineering, as it discusses the utilization and effects of prompts in few-shot learning tasks for vision-language models. The research focuses on how different types of prompts (noisy versus hand-crafted) influence the learning process and performance of the model. The mention of 'prefix language modeling' also directly relates to the prompt engineering study, specifically regarding hard prefix prompts. The high score reflects the direct relevance to the study of how prompts can improve or affect the learning capabilities of AI models, despite not exclusively being about 'hard prefix prompts', hence not a perfect score.",https://aclanthology.org/2022.acl-long.197.pdf -on transferability of prompt tuning for natural language processing,9,"The abstract is highly relevant to prompt engineering as it discusses prompt tuning (PT), which is an efficient method in natural language processing to utilize pre-trained language models with adjustable soft prompts. The study's focus on the transferability of these soft prompts and the implications for efficiency and performance improvements directly relates to the core concepts of prompt engineering. They explore how different prompts affect various models and how that can be harnessed to enhance the PT process. Although the study is not strictly about 'hard prefix prompts' as originally sought, the relevance to prompt engineering is significant, thus the high rating. 
The explicit mention of 'trained soft prompts' and 'prompt transfer' indicates a direct relationship to engineering the inputs to the language models.",https://aclanthology.org/2022.naacl-main.290.pdf -pada: example-based prompt learning for on-the-fly adaptation to unseen domains,9,"The paper detailed in the prompt directly pertains to prompt engineering, specifically in the application of 'example-based autoregressive Prompt learning for on-the-fly Any-Domain Adaptation'. It focuses on augmenting the ability of the T5 language model to generate prompts that effectively adapt to unseen domains without the need for prior examples or knowledge about the target domain, which is a crucial aspect of prompt engineering. The relevance rating is high because it directly addresses the generation and utilization of prompts to enhance the adaptability and performance of NLP systems in novel contexts, which is central to the study of prompt engineering.",https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl_a_00468/2008061/tacl_a_00468.pdf -why johnny can’t prompt: how non-ai experts try (and fail) to design llm prompts,9,"The study described in the title and abstract addresses a core aspect of prompt engineering by investigating whether non-AI experts are capable of designing effective prompts for large language models (LLMs). It directly focuses on the challenges and learnability of prompt design, which is highly relevant to the field of prompt engineering. The rating is not a perfect 10 because the study appears to focus on the end-user experience and may not delve into technical aspects or methodologies of prompt crafting, such as hard prefix prompts, as much as a more narrowly focused technical paper would.",https://dl.acm.org/doi/pdf/10.1145/3544548.3581388 -the power of prompt tuning for low-resource semantic parsing,8,"The paper is highly relevant to prompt engineering as it specifically investigates 'prompt tuning', which is a technique within the domain of prompt engineering. The focus on how prompt tuning can enhance the performance of language models for the semantic parsing task suggests that this paper contributes to the understanding and application of prompt engineering. However, it may not cover all aspects of prompt engineering, such as the creation or manipulation of hard prompts, therefore the rating is not a full 10.",https://aclanthology.org/2022.acl-short.17.pdf -prompt waywardness: the curious case of discretized interpretation of continuous prompts,9,"The study addresses a central issue in prompt engineering by exploring the relationship between continuous and discrete prompt formats and their effectiveness in solving language tasks. The investigation into the 'waywardness' of prompt behavior is highly relevant to developing more robust and interpretable prompting methods, which aligns closely with the field of prompt engineering. The only reason the rating is not a full 10 is because the study does not specifically mention 'hard prefix prompts' but rather deals with continuous prompts more broadly.",https://aclanthology.org/2022.naacl-main.266.pdf -automated cross-prompt scoring of essay traits,7,"The abstract describes a study on cross-prompt automated essay scoring, which is not directly related to 'hard prefix prompts' or prompt engineering. However, the methodology involves training models to understand and score various traits of essay text, likely making use of several prompt design considerations to generalize across different essay prompts. 
While not explicitly focused on prompt engineering, the research indirectly involves the creation of prompts that can elicit features used for trait-focused scoring. Thus, the relevance to prompt engineering is moderate due to its indirect but significant implications for designing prompts that can be effectively utilized by AES systems in various contexts.",https://ojs.aaai.org/index.php/AAAI/article/download/17620/17427 -gptfuzzer : red teaming large language models with auto-generated jailbreak prompts,8,"The 'gptfuzzer : red teaming large language models with auto-generated jailbreak prompts' study is highly relevant to prompt engineering, but with a specific focus on security and adversarial testing. The research presented automates the generation of jailbreak prompts, which are a subset of prompts aimed at testing the robustness and safety of LLMs. This aspect makes it relevant as it deals with the automated creation and effectiveness of hard prefix prompts, tasks that closely relate to prompt engineering. Nonetheless, it does not cover the broader aspects of prompt engineering, such as optimizing prompts for constructive tasks, rephrasing for better understanding, or improving human-AI interaction, hence the rating is not a full 10.",https://arxiv.org/pdf/2309.10253 -autodan: generating stealthy jailbreak prompts on aligned large language models,8,"The paper directly deals with the issue of creating prompts that can influence the behavior of Large Language Models (LLMs), which is a subset of prompt engineering. Although it focuses on generating adversarial or 'jailbreak' prompts, rather than constructive hard prefix prompts, the techniques and insights from such a study could be highly relevant to prompt engineering, particularly in understanding and preventing unintended responses from LLMs. However, the relevance is not a perfect 10 as the study's primary goal is to address security concerns rather than the broader scope of prompt engineering for beneficial use cases.",https://arxiv.org/pdf/2310.04451 -developing an accuracy-prompt toolkit to reduce covid-19 misinformation online,8,"The study is highly relevant to prompt engineering as it explores various accuracy prompts that could be used to encourage the sharing of accurate information online, particularly in the context of COVID-19. The effectiveness of different prompts and their impact on behavior is central to the field of prompt engineering. However, the specificity of the prompts to the domain of misinformation may not encompass the full breadth of prompt engineering, which can also include prompts for eliciting information, generating text, or other interactions in user interfaces beyond accuracy checking.",https://misinforeview.hks.harvard.edu/wp-content/uploads/2021/05/epstein_toolkit_covid_19_misinformation_20210518.pdf -hard prompts made easy: gradient-based discrete optimization for prompt tuning and discovery,9,"The abstract details a study focused on prompt engineering, specifically regarding the optimization of 'hard' prompts, which are highly relevant to the field of prompt engineering. It introduces a method for automatically generating and optimizing these prompts, which aligns closely with the study of engineering prompts that are interpretable and easily manipulated. Furthermore, it has applications in both text-to-image and text-to-text models, indicating a broad relevance to different aspects of prompt engineering. 
The only reason for not giving a full score of 10 is that the abstract does not explicitly mention a 'systematic review', which suggests that the work may be more focused on original research or methodology rather than reviewing existing literature on hard prefix prompts.",http://arxiv.org/pdf/2302.03668 -more than you've asked for: a comprehensive analysis of novel prompt injection threats to application-integrated large language models,8,"The paper discusses 'prompt injection threats' in Large Language Models (LLMs) which are closely related to prompt engineering as it concerns how prompts are constructed and how they can be manipulated. Prompt engineering involves the strategic creation of prompts to guide the behavior of LLMs, and understanding prompt injection threats is crucial for developing robust and secure prompt engineering methods. Although the paper focuses more on security threats than on prompt engineering in general, the systematic analysis and discussion of these threats are highly relevant for developing better prompt engineering practices.",http://arxiv.org/pdf/2302.12173 -catastrophic jailbreak of open-source llms via exploiting generation,7,"The abstract details research on exploiting large language models (LLMs) through what is termed 'generation exploitation attack', by altering decoding methods. This is relevant to prompt engineering since understanding how different decoding methods and adversarial prompts affect the model's outputs can inform the development of better prompts. Moreover, the work's exploration of alignment methods to counteract the attack implies the significance of structured prompts to maintain LLMs' alignment with human values. While the study is not focused on 'hard prefix prompts' explicitly, it deals with model manipulations related to input prompts, hence the rating of 7 for its partial but significant relevance to prompt engineering study.",https://arxiv.org/pdf/2310.06987 -jailbreak and guard aligned language models with only few in-context demonstrations,9,"The abstract details an investigation into the application of In-Context Learning (ICL) for manipulating language models, which falls under the domain of prompt engineering. The study assesses the ability to guide language models towards either harmful or safe responses by providing specific examples or 'prompts'. Although the main focus is on the security aspect of language models, the techniques mentioned—In-Context Attack (ICA) and In-Context Defense (ICD)—are directly relevant to prompt engineering as they involve crafting prompts that significantly alter a model's outputs. Hence, the relevance to prompt engineering is high, but since the study seems to be more targeted at security (alignment and guarding against jailbreaking) rather than on prompt engineering in general, the rating is not a perfect 10.",https://arxiv.org/pdf/2310.06387 -prompt as triggers for backdoor attack: examining the vulnerability in language models,7,"The paper is relevant to prompt engineering as it discusses utilizing the prompt itself as a potential vector for backdoor attacks in language models, which falls under prompt manipulation and its potential risks. This indicates a direct relationship to the design and usage of prompts within AI models, showing the consequences that can arise from prompt engineering. 
However, it may not address the broader scope of prompt engineering techniques and their applications directly, focusing instead on the security aspect and vulnerability of the models to prompt-based attacks.",http://arxiv.org/pdf/2305.01219 -notable: transferable backdoor attacks against prompt-based nlp models,8,"The abstract describes a study that is highly relevant to prompt engineering as it specifically addresses vulnerabilities in prompt-based learning models. The focus on backdoor attacks that are independent of downstream tasks and prompting strategies indicates a notable concern for the prompt engineering domain, considering the increasing utilization of such models in various NLP tasks. The high relevance score is due to the direct relation to prompt-based models' security, an aspect that is crucial for understanding and improving prompt engineering techniques. However, the score is not a full 10, as the primary focus is on security, and while related, it does not exclusively cover the broader range of prompt engineering topics such as prompt design or optimization.",http://arxiv.org/pdf/2305.17826 -prompts should not be seen as secrets: systematically measuring prompt extraction attack success,8,"The paper is highly relevant to prompt engineering studies as it addresses the security aspect of prompt-based control of large language models. It directly explores how the prompts, which are integral to shaping model outputs, can be uncovered through extraction attacks. This is crucial for understanding the integrity and confidentiality of proprietary prompting methods, although it is a specialized focus on prompt security rather than the broader field of designing or optimizing prompts for general use.",https://arxiv.org/pdf/2307.06865 -sam on medical images: a comprehensive study on three prompt modes,7,"The study described in the title and abstract does revolve around the use of prompts—in this case, to guide a machine learning model (SAM) for the task of image segmentation. The research explores different prompt modes, specifically in relation to the performance of zero-shot generalization on medical images, which is a form of prompt engineering in the context of 'foundation models'. The relevance is not at the maximum because the prompt engineering mentioned here mostly refers to the application of prompt types like bounding boxes, rather than the systematic study of 'hard prefix prompts' that might be involved in other areas like NLP or more complex interactions. Nevertheless, the research still contributes to the field of prompt engineering by investigating how different prompts affect model performance, thus the rating is above average.",http://arxiv.org/pdf/2305.00035 -tdnn: a two-stage deep neural network for prompt-independent automated essay scoring,7,"The abstract pertains to the development of a deep neural network for automated essay scoring that is designed to work under a prompt-independent setting. This is somewhat relevant to prompt engineering as it relates to the broader field of natural language processing and the automated response to prompts (essays). However, the system is not centered around the creation or manipulation of prompts itself (prompt engineering), but rather on evaluating responses to prompts, which is indirectly related to understanding the prompts' influence on the response. 
Therefore, the relevance is notable but not direct.",https://www.aclweb.org/anthology/P18-1100.pdf -llm-grounded diffusion: enhancing prompt understanding of text-to-image diffusion models with large language models,7,"The study focuses on enhancing the understanding of complex prompts in text-to-image diffusion models by incorporating a large language model, which relates to prompt engineering as it involves interpreting and acting upon language input. While the study is not explicitly about 'hard prefix prompts' in the context of comprehensive systematic reviews, the improvement of prompt understanding and the interaction between language models and diffusion models is relevant to the broader field of prompt engineering. Therefore, the relevance rating is relatively high, but not maximum due to the lack of direct focus on 'hard prefix prompts' specifically.",https://arxiv.org/pdf/2305.13655 -prompt distillation for efficient llm-based recommendation,7,"The provided abstract directly relates to prompt engineering in the context of improving the efficiency of large language models (LLMs) for recommendation systems. Prompt distillation, as discussed in the abstract, is a technique aimed at refining the use of prompts in LLMs, which falls within the scope of prompt engineering. Although the term 'hard prefix prompts' is not explicitly mentioned, the concept of distilling discrete prompts to continuous vectors is relevant to the broader study of how prompts are structured and optimized for LLMs. Therefore, the relevance is high but not maximal due to the lack of specificity regarding 'hard prefix prompts'.",https://dl.acm.org/doi/pdf/10.1145/3583780.3615017 -"compress, then prompt: improving accuracy-efficiency trade-off of llm inference with transferable prompt",9,"The study highly relates to prompt engineering since it focuses on improving the performance of compressed Large Language Models (LLMs) by means of prompt engineering (i.e., the use of 'hard prompts'). The research suggests a method for enhancing prompt efficacy via a 'soft prompt learning method,' which is specifically tailored to work with compressed models. Although the primary focus of the paper is on model compression and its impact on efficiency and accuracy, the core of the study involves refining the prompt engineering process to ensure high-quality performance from these compressed models. The fact that the study explores the transferability of learned prompts to different tasks and models also demonstrates depth in research pertaining to prompt design and optimization, which is a fundamental aspect of prompt engineering.",https://arxiv.org/pdf/2305.11186 -prompt sapper: llm-empowered software engineering infrastructure for ai-native services,8,"The paper is highly relevant to prompt engineering study as it directly discusses the role of prompts in AI-native services and how natural language prompts can be used as executable code, which aligns with the subject of hard prefix prompts in the context of natural language processing and command execution. Although the paper does not specifically mention 'hard prefix prompts', the focus on prompt-based interaction systems and infrastructure indicates a clear relationship with the broader topic of prompt engineering, warranting a high relevance rating. 
The deduction in the score accounts for the lack of explicit mention of 'hard prefix prompts', which may be a key term if the research sought to target that specific sub-domain within prompt engineering.",http://arxiv.org/pdf/2306.02230 -prompt sapper: a llm-empowered production tool for building ai chains,8,"The paper introduces 'Prompt Sapper', a tool designed to help build AI services using foundation models like GPT-4. This is highly relevant to prompt engineering because the tool is meant to streamline the process of creating prompt-based AI services. It focuses on incorporating software engineering principles into AI chain engineering, which includes prompt engineering as a subset. The tool aims to make this process more accessible, efficient, and correct, which directly impacts the field of prompt engineering. The rating is not a full 10 because the abstract does not detail the specifics of 'hard prefix prompts' or focus solely on prompt engineering; it discusses AI chain engineering more broadly, of which prompt engineering is a part.",http://arxiv.org/pdf/2306.12028 -artificial intelligence for health message generation: an empirical study using a large language model (llm) and prompt engineering,9,"The given abstract directly pertains to the use of prompt engineering within the context of generating health awareness messages using a large language model. The study focuses on the method of using AI-generated prompts to compare message quality, clarity, and semantic content with human-generated content. The high relevance comes from the practical application of prompt engineering in creating AI-generated messages and the systematic evaluation of their effectiveness against a human-generated benchmark. It is slightly less than a perfect score because the study is specific to health messages and does not cover all aspects of prompt engineering, such as 'hard prefix prompts' which the original prompt suggests may be of particular interest.",https://www.frontiersin.org/articles/10.3389/fcomm.2023.1129082/pdf -"exploring the relationship between llm hallucinations and prompt linguistic nuances: readability, formality, and concreteness",8,"The study is highly relevant to prompt engineering as it investigates how various linguistic aspects of prompts affect the behavior of Large Language Models (LLMs), particularly in the context of hallucination, which is a significant issue related to the performance and reliability of LLMs. Understanding the relationship between prompt nuances and LLM output is central to prompt engineering. The only reason for not giving a full score is that the abstract specifies an exploratory investigation, indicating that the findings might not be comprehensive or definitive, which would be necessary for a perfect relevance rating.",https://arxiv.org/pdf/2309.11064 -promptcrafter: crafting text-to-image prompt through mixed-initiative dialogue with llm,8,"The presented paper focuses on a mixed-initiative system called PromptCrafter that aids in the crafting of text-to-image prompts using a step-by-step process facilitated by a Large Language Model. While it does not explicitly address 'hard prefix prompts', it is substantially related to the field of prompt engineering. It deals with the refinement of prompts and user interaction with language models to produce specific outputs, which are central issues in prompt engineering studies. 
Therefore, it is highly relevant in terms of offering practical solutions and methodologies for improving prompt design, even if it does not directly tackle the concept of hard prefixes.",https://arxiv.org/pdf/2307.08985 -llm-adapters: an adapter family for parameter-efficient fine-tuning of large language models,7,"The paper's focus on parameter-efficient fine-tuning (PEFT) of large language models (LLMs) through the use of adapters is relevant to prompt engineering, as it deals with the modification and adaptation of LLMs for specific tasks, which is intrinsic to prompt engineering. However, the study does not directly address 'hard prefix prompts,' which is the specific topic of interest. Although the techniques described could potentially be applied to improve the efficiency of prompt-based learning methods, the abstract does not explicitly mention the application to prompt engineering. Nevertheless, the relevance lies in the broader context of adapting and improving the performance of LLMs in different tasks, which is tangential to the field of prompt engineering.",https://arxiv.org/pdf/2304.01933 -llm-eval: unified multi-dimensional automatic evaluation for open-domain conversations with large language models,7,"The abstract describes 'LLM-eval,' an evaluation method for open-domain conversations with large language models, focusing on using single prompt-based approaches for comprehensive assessment. While it does not explicitly address 'hard prefix prompts' or prompt engineering studies, the methodology is relevant for understanding how prompt-based systems can be evaluated. Since prompt engineering is a key element in defining how language models interpret and respond to prompts, this study could indirectly contribute to the field by providing a framework for evaluating the effectiveness of different prompt strategies, albeit without directly targeting hard prefix prompts.",http://arxiv.org/pdf/2305.13711 -a first look at llm-powered generative news recommendation,8,"The abstract describes using a language model for personalized news recommendation, which implies that the system employs some form of prompt engineering to generate or summarize news according to a user's interests. The concept of moving from model design to prompt design suggests that prompt engineering is a significant component of the research. However, the study focuses more on the application of LLMs for recommendation systems rather than on the study of hard prefix prompts in isolation or comprehensive systematic reviews on prompt engineering. Therefore, the relevance is high but not entirely focused on prompt engineering study as it relates to the broader application within recommendation systems.",https://arxiv.org/pdf/2305.06566 -llm-assisted generation of hardware assertions,8,"The study is highly relevant to prompt engineering as it involves utilizing natural language prompts to generate code assertions, a clear instance of applying language model prompting to a specialized domain. The use of prompts in this case directly pertains to the concept of 'prompt engineering,' which is about optimizing inputs for language models to achieve desired outputs. However, since the focus is specifically on code generation for security assertions within hardware and not on hard prefix prompts in a broader context, it might not cover all aspects of prompt engineering study. 
This results in a slightly lower rating.",http://arxiv.org/pdf/2306.14027 -certifying llm safety against adversarial prompting,9,"The study is highly relevant to prompt engineering as it directly addresses the challenge of adversarial prompting and the need for developing techniques to ensure safe outputs from large language models (LLMs). Since prompt engineering involves crafting inputs that can influence or guide a model's behavior, the presented 'erase-and-check' framework is a significant contribution to understanding and mitigating the risks posed by adversarial prompts. The study’s focus on certifying the safety of prompts against adversarial attacks is essential for advancing the field of prompt engineering while ensuring responsible use of LLMs. It only slightly misses a perfect score because it does not directly cover 'hard prefix prompts,' but it extensively pertains to the broader domain of prompt safety and adversarial resistance.",https://arxiv.org/pdf/2309.02705 -graph-toolformer: to empower llms with graph reasoning ability via prompt augmented by chatgpt,7,"The abstract discusses a method to enhance large language models (LLMs) by teaching them graph reasoning abilities through the use of prompts augmented by ChatGPT. This is related to prompt engineering since it involves developing ways to optimize prompts to extend the capabilities of LLMs into new domains, like graph reasoning. However, the core focus of the study is on integrating external API tools with LLMs rather than the actual crafting or systematic review of 'hard prefix prompts' specifically. Therefore, while relevant due to the utilization of prompts, it doesn't directly address a comprehensive review of prompt engineering methodologies or specifics of 'hard prefix prompts,' leading to a score that indicates moderate relevance rather than being fully on-topic.",http://arxiv.org/pdf/2304.11116 -velma: verbalization embodiment of llm agents for vision and language navigation in street view,7,"The abstract describes VELMA as an embodied LLM agent that uses verbalization for navigation, which implies a form of prompt engineering is used to translate visual information into text prompts for the LLM to make decisions. Although not specifically about 'hard prefix prompts,' it does involve constructing and using prompts in a multimodal context (vision and language). Therefore, it is relevant to the field of prompt engineering, but slightly indirectly as the main focus seems to be on navigation and embodiment rather than prompt engineering itself.",https://arxiv.org/pdf/2307.06082 -llm-empowered chatbots for psychiatrist and patient simulation: application and evaluation,8,"The abstract describes research that is highly relevant to prompt engineering as it specifically addresses the impact of prompt designs on chatbot behavior and user experience. While it doesn't directly mention 'hard prefix prompts,' the study of prompt designs in the context of chatbot performance is directly related to the field of prompt engineering. 
Therefore, the findings could contribute valuable insights into the subtleties of prompt crafting and optimization, particularly in mental health applications.",http://arxiv.org/pdf/2305.13614 -chain-of-thought prompting for responding to in-depth dialogue questions with llm,8,"The study is highly relevant to prompt engineering as it investigates an approach (chain-of-thought prompting) to enhance the interaction between users and large language models (LLMs) by focusing on personalizing responses based on user status (personality, emotion, psychology). While it does not directly address 'hard prefix prompts,' it contributes to the field of prompt engineering by exploring advanced prompting techniques aimed at improving the efficacy and personalization of LLM responses. The relevance would be higher if the study specifically addressed hard prefix prompts, but it is still significant due to its focus on improving the quality of prompts and user-model interactions.",http://arxiv.org/pdf/2305.11792 -trapping llm hallucinations using tagged context prompts,8,"The study addresses the issue of 'hallucinations' in large language models (LLMs) and proposes a methodology that includes the use of context and embedded tags to mitigate this problem. Since prompt engineering involves crafting inputs to effectively interact with LLMs and obtain desired outputs, the technique described in the paper to minimize hallucinations is quite relevant to prompt engineering. It is likely to contribute to designing better prompts that can control or guide model behavior, ensuring more accurate responses. However, the study's specific focus is on combating hallucinations rather than on prompt engineering in its entirety, which explains why the rating is not a perfect 10.",http://arxiv.org/pdf/2306.06085 -free-bloom: zero-shot text-to-video generator with llm director and ldm animator,8,"The abstract describes using large language models (LLMs) to generate a 'semantic-coherent prompt sequence', which is directly relevant to prompt engineering, particularly in the niche area of text-to-video generation. While the study focuses more on the application of these prompts to generate video rather than the systematic review of hard prefix prompts themselves, the creation and optimization of prompts remains a central component of the research, justifying a high relevance rating.",https://arxiv.org/pdf/2309.14494 -promptly: using prompt problems to teach learners how to effectively utilize ai code generators,9,"The paper directly addresses prompt engineering by introducing the concept of 'Prompt Problems', which are designed to teach students how to effectively craft prompts for large language models that generate code. This is highly relevant to the study of prompt engineering as it focuses on improving the interaction between humans and AI through the construction of effective prompts. Although the paper doesn't specifically mention 'hard prefix prompts', it addresses the broader concept of prompts in the context of educational settings, which is why the rating is not a perfect 10.",https://arxiv.org/pdf/2307.16364 -promptbreeder: self-referential self-improvement via prompt evolution,9,"The abstract describes a system that revolves around the core idea of evolving and improving prompts, which is directly relevant to the study of prompt engineering. 
Since the system, Promptbreeder, is designed to enhance the ability of Large Language Models through prompt adaptation and is being compared to other prompt strategies, it holds significant relevance to the field. The only reason it does not receive a full score is that it may not relate exclusively to 'hard prefix prompts' as specified in the initial inquiry but addresses a broader scope of prompt engineering.",https://arxiv.org/pdf/2309.16797 -on the role of attention in prompt-tuning,9,"The provided abstract discusses the use of prompt-tuning within the context of attention mechanisms in language models, which is directly relevant to studies on prompt engineering. It provides insights into how prompts can be used to direct attention to relevant tokens within a given input, which is a crucial aspect of how prompts function in large language models. The abstract also mentions contextual data models and the expressiveness of prompt-tuning, indicating a deep exploration into prompt mechanics. The only reason it doesn't receive a perfect score is the absence of specific mention of 'hard prefix prompts', but otherwise, it has a high relevance to the field of prompt engineering.",http://arxiv.org/pdf/2306.03435 -a prompt log analysis of text-to-image generation systems,7,"The study is relevant to prompt engineering to a large extent, as it delves into the analysis of prompt logs from text-to-image generation systems, which is a direct application of understanding user interaction with prompts and could inform better prompt design. However, it focuses more on the analysis of user prompts and behavior rather than the construction of hard prefix prompts, which would be more closely aligned with 'prompt engineering' as it pertains to the design, syntax, and semantics of prompts themselves.",https://arxiv.org/pdf/2303.04587 -privacy-preserving prompt tuning for large language model services,8,"The paper is highly relevant to prompt engineering as it addresses prompt tuning, which is a method of customizing LLMs for specific tasks or applications. The concept of privacy-preserving mechanisms within the realm of prompt tuning is pertinent to prompt engineering study because it expands the scope of how prompts can be engineered, taking into account the crucial aspect of user privacy. The fact that this paper also introduces a novel approach to improve LLMs' learning with privatized data indicates a significant contribution to the field of prompt engineering. The reason the relevance rating is not a full 10 is because it focuses more on the privacy aspect than on the general techniques or effectiveness of prompt engineering.",http://arxiv.org/pdf/2305.06212 -deep language networks: joint prompt training of stacked llms using variational inference,9,"The abstract discusses the optimization of natural language prompts in stacked large language models (LLMs), which is directly relevant to the field of prompt engineering. The focus on learning and training prompts within a deep learning architecture (DLN) highlights crucial aspects of prompt design and efficacy. This paper would be quite significant for someone studying prompt engineering, as it provides insight into how prompts can be optimized to improve the performance of language models.",http://arxiv.org/pdf/2306.12509 -are chatbots ready for privacy-sensitive applications? 
an investigation into input regurgitation and prompt-induced sanitization,7,"The study investigates how LLM-powered chatbots handle sensitive information when provided in prompts and how instructions could influence the chatbot's ability to sanitize outputs to comply with privacy regulations. While this does not specifically address 'hard prefix prompts,' it is closely related to prompt engineering because it examines how specific instructions in prompts can affect the information handling of chatbots. The research could inform the development and refinement of prompts that elicit desired privacy-compliant behaviors from the models, which is a critical aspect of prompt engineering in privacy-sensitive applications.",http://arxiv.org/pdf/2305.15008 -extracting accurate materials data from research papers with conversational language models and prompt engineering - example of chatgpt,9,"The discussed paper is highly relevant to the field of prompt engineering study because it proposes a new method, 'ChatExtract', which utilizes engineered prompts in a conversational language model to automate data extraction from research papers. These prompts are specifically designed to identify pertinent data and ensure its accuracy, addressing a key challenge in prompt engineering. Although the paper does not specifically mention 'hard prefix prompts', it is an application of prompt engineering for a specific and practical task, thus meriting a high relevance score. Prompt engineering is central to the performance of the ChatExtract method, as it hinges on the quality of the prompts to retrieve and validate information from the language model.",http://arxiv.org/pdf/2303.05352 -discrete prompt optimization via constrained generation for zero-shot re-ranker,9,"The abstract describes a study focused specifically on the optimization of prompts for a zero-shot re-ranker, which is directly connected to prompt engineering. The proposed discrete prompt optimization method, Co-Prompt, is highly relevant to the field since it addresses the creation and refinement of prompts to improve the performance of pre-trained language models on specific tasks without additional parameter updates. This approach is an important aspect of prompt engineering, hence the high relevance rating. The study appears to contribute valuable insights into prompt effectiveness and optimization, which are key areas of interest in prompt engineering. The reason for not giving a perfect score is that it does not explicitly mention 'hard prefix prompts' as referred to in the original query, but its connection to prompt optimization is clear and significant.",http://arxiv.org/pdf/2305.13729 -sweeping heterogeneity with smart mops: mixture of prompts for llm task adaptation,9,"The abstract presents research on using a 'Mixture of Prompts' with 'smart gating functionality' to enhance the performance of Large Language Models (LLMs) on heterogeneous tasks. This is highly relevant to prompt engineering as it directly addresses the optimization of prompts for task adaptation in LLMs. It investigates a method of improving prompt tuning for diverse tasks, which is a core issue in prompt engineering. The paper aims to reduce training interference and improve efficiency, areas of significant interest in the prompt engineering field. 
The reasoning behind not giving a full 10 rating is that the abstract does not explicitly mention 'hard prefix prompts,' the specific focus of the prompt engineering study indicated in the query.",https://arxiv.org/pdf/2310.02842 -promptcare: prompt copyright protection by watermark injection and verification,7,"While the article 'promptcare: prompt copyright protection by watermark injection and verification' addresses prompts in the context of Large Language Models and is relevant to the field of prompt engineering, it focuses more specifically on the protection of intellectual property associated with prompts rather than on the techniques for engineering prompts to improve model performance (which would be directly related to 'hard prefix prompts'). However, the study does contribute to the broader ecosystem of prompt engineering by ensuring the safe and authorized use of prompts, which can be considered an aspect of the prompt engineering life cycle. Therefore, it receives a mid-high relevance rating.",https://arxiv.org/pdf/2308.02816 -batch calibration: rethinking calibration for in-context learning and prompt engineering,9,"The abstract describes a comprehensive analysis of calibration methods to reduce prompt brittleness and biases in large language models, which is directly related to prompt engineering. The study seems to offer a novel contribution with the Batch Calibration method, aiming to improve the effectiveness of prompts and in-context learning. Although it does not explicitly mention 'hard prefix prompts', the content is highly relevant to the broader field of prompt engineering, hence the high relevance rating.",https://arxiv.org/pdf/2309.17249 -selfzcot: a self-prompt zero-shot cot from semantic-level to code-level for a better utilization of llms,9,"The relevance of the study to prompt engineering is high because it focuses on the utilization of a self-prompt mechanism (SelfzCoT) that enhances zero-shot learning capabilities in large language models (LLMs). It directly pertains to the field of prompt engineering as it deals with improving the performance of LLMs on arithmetic reasoning tasks through the use of specialized prompts, which are an essential component of prompt engineering. The systematic improvement across different datasets indicates that the researchers are effectively engineering prompts to better utilize the models' existing knowledge without additional training, which is a core aspect of prompt engineering studies.",https://arxiv.org/pdf/2305.11461 -prompttts 2: describing and generating voices with text prompt,7,"The abstract indicates that the study is concerned with the use of text prompts in the context of text-to-speech (TTS) and addresses issues surrounding voice variability and the generation of text prompts, which are relevant to prompt engineering. Prompt engineering is often associated with designing and refining inputs to affect the outputs of AI models, which is closely related to what PromptTTS 2 aims to achieve in the TTS domain. 
However, the study's relevance to prompt engineering may not be a perfect fit as it is specialized towards TTS systems and does not broadly tackle hard prefix prompts in various AI contexts, which a 'comprehensive systematic review on hard prefix prompts' would imply.",https://arxiv.org/pdf/2309.02285 -conversation regression testing: a design technique for prototyping generalizable prompt strategies for pre-trained language models,9,"The described study directly pertains to prompt engineering as it focuses on improving pre-trained language model outputs using prompt strategies and assessing the effects of these strategies through Conversation Regression Testing. Although it doesn't specifically mention 'hard prefix prompts,' the broad field of prompt engineering, including the design and systematic review of prompt effects, is central to the study. Thus, the relevance to prompt engineering is high.",http://arxiv.org/pdf/2302.03154 -prompts matter: insights and strategies for prompt engineering in automated software traceability,9,"The title and abstract indicate that the paper focuses on prompt engineering within the context of using Large Language Models for automated software traceability, which is a specific application of prompt engineering. The paper discusses the construction of effective prompts and proposes strategies for utilizing LLMs. This is highly relevant to the study of prompt engineering, particularly in a specialized domain. However, it is not directly related to 'hard prefix prompts' as the prompt specifies, suggesting there is room for more targeted relevance, hence not a perfect score.",https://arxiv.org/pdf/2308.00229 -model tuning or prompt tuning? a study of large language models for clinical concept and relation extraction,7,"The study explores different training strategies for large language models (LLMs), including hard prompts and soft prompts, focusing on clinical concept and relation extraction. It directly investigates prompt engineering by comparing the effectiveness of hard and soft prompts within different LLM training conditions. The relevance to prompt engineering study is high, although the primary focus is on soft prompts in a specific domain (clinical), rather than solely on hard prefix prompts as suggested by the original query. Consequently, the rating reflects substantial but not exclusive relevance.",https://arxiv.org/pdf/2310.06239 -progprompt: generating situated robot task plans using large language models,9,"The paper is highly relevant to prompt engineering as it deals with designing structured prompts for large language models to generate robot task plans, demonstrating an understanding of the importance of prompt design in achieving functional outputs from LLMs. Its focus on programmatic prompts, ablation experiments for prompt structures, and ultimately demonstrating success in a practical domain like robotics shows a clear overlap with the field of prompt engineering. 
The rating is not a full 10 because the study focuses specifically on robotics and situated task plans, which is just one application of prompt engineering, rather than a broad investigation of hard prefix prompts across various domains.",https://arxiv.org/pdf/2209.11302 -universal and transferable adversarial attacks on aligned language models,7,"The paper's focus on developing adversarial attacks against aligned language models is tangentially related to prompt engineering, as it concerns the specific construction of inputs (suffixes) designed to elicit certain responses from language models. While the study does not directly address 'hard prefix prompts', it does deal with the broader theme of how prompts can be engineered (in this case, to be adversarial) to manipulate model output. Therefore, the relevance to prompt engineering is significant, but it is not a direct match to the concept of 'hard prefix prompts' in a systematic review context.",https://arxiv.org/pdf/2307.15043 -principle-driven self-alignment of language models from scratch with minimal human supervision,8,"The abstract describes a novel approach to aligning language models with human intentions using minimal human supervision and includes stages relevant to prompt engineering, such as generating synthetic prompts to augment diversity. Although the study seems more focused on self-alignment principles and minimization of supervision rather than hard prefix prompts specifically, the method includes aspects like in-context learning from generated prompts, which is a key part of prompt engineering. Therefore, the relevance to prompt engineering is high but not entirely focused on the 'hard prefix prompts' aspect mentioned in the original prompt.",http://arxiv.org/pdf/2305.03047 -language models don't always say what they think: unfaithful explanations in chain-of-thought prompting,7,"The study investigates the reliability of chain-of-thought (CoT) prompting in the context of Large Language Models (LLMs). While this is highly relevant to the field of prompt engineering as it relates to the integrity and trustworthiness of prompts (especially CoT prompts), it does not specifically address 'hard prefix prompts'. Since the study has implications for how prompts can be engineered to elicit accurate explanations from models and discusses the manipulation of model inputs, which is a core concern in prompt engineering, it has relevance. However, the study's focus on the fidelity of CoT rather than on systematic reviews of 'hard prefix prompts' means it is only partially aligned with the prompt engineering study. Therefore, it receives a moderate to high relevance score.",http://arxiv.org/pdf/2305.04388 -ask me anything: a simple strategy for prompting language models,9,"The content of the abstract indicates a high relevance to prompt engineering as it discusses the development of prompt strategies to improve the performance of large language models (LLMs) and attempts to reduce the brittleness of prompting by aggregating multiple prompts. The concept of 'ASK ME ANYTHING' (AMA) is directly related to engineering effective prompts and influences how prompts are generated and utilized. The study also evaluates the performance across different models and benchmarks, which is essential for understanding the implications of prompt design strategies. 
While it may not explicitly focus on 'hard prefix prompts' as mentioned in the original request, the general exploration of prompt formats and strategies makes this abstract highly relevant to the field of prompt engineering.",https://arxiv.org/pdf/2210.02441 -progressive-hint prompting improves reasoning in large language models,9,"The provided abstract details a study on a novel prompting method, Progressive-Hint Prompting (PHP), designed to improve the reasoning capabilities of Large Language Models (LLMs) by leveraging previously generated responses. This relates directly to the field of 'prompt engineering' as it explores the structure and strategy behind prompts to enhance the performance of LLMs. The fact that it introduces a new methodology and reports on experimentation and results aligns closely with advancements and research in prompt engineering, justifying the high relevance rating. The only reason it is not a perfect 10 is that the abstract does not explicitly mention the 'hard prefix prompts' specified in the original query; otherwise, it charts the advancement in the field of prompt engineering, which includes improvements over conventional methods like CoT and self-consistency.",https://arxiv.org/pdf/2304.09797 -frugalgpt: how to use large language models while reducing cost and improving performance,8,"The paper titled 'frugalgpt: how to use large language models while reducing cost and improving performance' is quite relevant to prompt engineering. One of the strategies mentioned for reducing inference cost is 'prompt adaptation,' which directly pertains to the field of prompt engineering. This strategy likely involves creating and refining prompts to produce more accurate or useful outputs from LLMs, thereby also reducing repetitive or unnecessary queries that could increase costs. Although the study's primary focus is on cost-reduction and performance improvement rather than the specifics of crafting hard-prefix prompts, the concept of prompt adaptation is a core part of prompt engineering. Therefore, it holds substantial relevance to someone interested in the efficient and effective use of prompts in LLMs.",http://arxiv.org/pdf/2305.05176 -conversational automated program repair,7,"While the abstract primarily outlines a study on conversational automated program repair, which is a different domain from prompt engineering, it does mention the use of a constructed input/prompt and iteratively building the input to a large language model. The relevance to prompt engineering lies in the iterative process, engaging with the LLM in a conversational way, and adjusting the prompts based on feedback to avoid generating previously incorrect patches. This indicates that the study touches upon aspects of prompt engineering by refining the prompts to improve output, which is a key technique in prompt engineering. However, it does not directly focus on 'hard prefix prompts' or a comprehensive study of them. Therefore, the relevance is moderate, warranting a rating of 7 out of 10.",http://arxiv.org/pdf/2301.13246 -annollm: making large language models to be better crowdsourced annotators,9,"The study is highly relevant to prompt engineering as it explores a method to enhance the effectiveness of large language models for the purpose of data annotation, which is a significant aspect of prompt engineering. This paper suggests an innovative way of creating prompts by including explanations along with annotated examples (the 'explain-then-annotate' methodology). 
This strategy could be beneficial in refining the way prompts are designed to solicit more accurate responses from language models, thus contributing valuable insights to the field of prompt engineering.",http://arxiv.org/pdf/2303.16854 -keep the conversation going: fixing 162 out of 337 bugs for $0.42 each using chatgpt,8,"The provided abstract describes ChatRepair, a novel approach that leverages a conversational Large Language Model (LLM), specifically ChatGPT, for Automated Program Repair (APR). It uses a unique prompt engineering strategy by incorporating a feedback loop into the generation of input prompts. The methodology involves enhancing the prompts with relevant test failure information and learning from past patch attempts to refine the generation process. This study is highly relevant to prompt engineering as it applies advanced techniques to craft prompts that effectively utilize LLM capabilities to diagnose and fix software bugs. The relevance to prompt engineering is not absolute, as the main focus seems to be on the application of these prompts for APR rather than on the study or analysis of the prompt engineering itself, but it is still highly pertinent due to the innovative use of prompts for iterative and conversational task completion.",http://arxiv.org/pdf/2304.00385 -marked personas: using natural language prompts to measure stereotypes in language models,7,"The study focuses on using language prompts to measure and understand biases in language models, which is closely related to the field of prompt engineering. While it does not deal directly with 'hard prefix prompts,' it uses a prompt-based method for a specific and important application within the larger scope of prompt engineering—detecting and analyzing stereotypes. Thus, it contributes to the understanding of how prompts can elicit certain types of responses from language models, which is a relevant aspect of prompt engineering studies. The rating is not a perfect 10 because the study is not about prompt engineering techniques or optimizations but rather an application of prompts to understand model biases.",http://arxiv.org/pdf/2305.18189 -supporting qualitative analysis with large language models: combining codebook with gpt-3 for deductive coding,8,"The study mentioned in the abstract directly explores the use of large language models (LLMs) like GPT-3 for coding tasks in qualitative analysis without the need for task-specific model training or fine-tuning. It specifically illustrates an application of LLMs using prompt learning, which falls under the broader category of prompt engineering. While it is not centered on 'hard prefix prompts,' it does delve into the realm of using prompts effectively to interact with language models. Therefore, the relevance to prompt engineering is high, but not at the maximum because it does not focus exclusively on 'hard prefix prompts' as per the initial prompt.",https://arxiv.org/pdf/2304.10548 -assessment of chemistry knowledge in large language models that generate code,8,"The study specifically mentions the impact of prompt engineering strategies on the performance of Large Language Models (LLMs) in executing chemistry-related coding tasks. The fact that adding copyright notices at the tops of files leads to a 30-percentage point increase in accuracy directly relates to the field of prompt engineering. The study examines and validates the effectiveness of prompt engineering in enhancing the capabilities of LLMs within a specific domain (chemistry). 
However, it does not focus exclusively on 'hard prefix prompts' but on prompt engineering in a broader sense, hence the rating does not reach the maximum.",https://pubs.rsc.org/en/content/articlepdf/2023/dd/d2dd00087c -in-context impersonation reveals large language models' strengths and biases,8,"The study is highly relevant to prompt engineering as it explores the use of hard-coded persona prompts to elicit specific behaviors and capabilities from large language models (LLMs). By analyzing the LLMs' performance across various tasks in the context of the prompted persona, the study contributes insights into how the design of prompts can influence the output of LLMs, an essential aspect of prompt engineering. It directly addresses the impact of prompt construction on the quality and characteristics of the model's responses. However, it doesn't explicitly cover 'hard prefix prompts' in the more general sense, as it's focused on role-impersonation, which is a subset of prompt engineering.",http://arxiv.org/pdf/2305.14930 -knn prompting: beyond-context learning with calibration-free nearest neighbor inference,8,"The presented abstract discusses advancements in 'kNN Prompting' which are relevant to the broader realm of prompt engineering in that it explores alternative ways to utilize language model prompts for task completion. kNN Prompting can be seen as an extension or improvement within the field of prompt engineering, particularly since it addresses limitations of typical in-context learning and provides a way to scale with additional training data without a context length restriction. This is highly relevant for studies looking to overcome the current constraints of hard prefix prompts in LLMs. However, the abstract does not address hard prefix prompts specifically, thereby making the relevance less than perfect for a systematic review focused solely on hard prefix prompt engineering.",http://arxiv.org/pdf/2303.13824 -"on second thought, let’s not think step by step! bias and toxicity in zero-shot reasoning",7,"The given abstract discusses the implications of using zero-shot Chain of Thought reasoning in large language models, which is relevant to prompt engineering studies in that it examines the effect of a specific prompting technique in the context of AI behavior. However, the focus on biases and toxicity rather than hard prefix prompts specifically somewhat limits its direct relevance to a systematic review on hard prefix prompts in prompt engineering.",http://arxiv.org/pdf/2212.08061 -evaluation of chatgpt for nlp-based mental health applications,7,"The abstract discusses the use of a specific input prompt for classification with ChatGPT in mental health applications, which aligns with the concept of prompt engineering. Even though the study's application is in mental health, the methodology involves designing and utilizing prompts to elicit accurate responses from a language model, which is a core aspect of prompt engineering. Though the focus is not on 'hard prefix prompts,' the relevance lies in how prompts are integral to the performance of LLMs in NLP tasks, which could translate to insights in prompt engineering studies generally. 
Hence, a rating of 7 suggests that the study is quite relevant but not directly focused on hard prefix prompt engineering.",http://arxiv.org/pdf/2303.15727 -exploiting asymmetry for synthetic training data generation: synthie and the case of information extraction,7,"The paper is partially relevant to the study of prompt engineering as it discusses the generation of synthetic data by prompting a large language model in reverse to create input text for a target output structure. Although it primarily focuses on synthetic data generation and its application to information extraction, the underlying methodology incorporates elements of prompt engineering by exploiting asymmetry in task difficulty to effectively communicate with the model. It doesn't directly address 'hard prefix prompts,' but the concept of utilizing prompts creatively to generate data is within the domain of prompt engineering research. Therefore, the relevance is significant, but not perfect, as the main focus is not directly on prompt engineering techniques or systematic reviews of said techniques.",http://arxiv.org/pdf/2303.04132 -guiding large language models via directional stimulus prompting,9,"The provided abstract describes a research approach that is highly relevant to the field of prompt engineering, particularly in the way it deals with the customization and optimization of prompts to guide the behavior of large language models. The concept of using a tunable policy model to generate instance-specific 'directional stimulus prompts' falls directly under the umbrella of prompt engineering techniques. The high relevance score reflects the paper's focus on creating prompts that steer the output of LLMs, which is a central concern in prompt engineering studies. Although the term 'hard prefix prompts' is not explicitly mentioned, the methodology proposed is very much related to the underlying principles of prompting language models.",https://arxiv.org/pdf/2302.11520 -motiongpt: finetuned llms are general-purpose motion generators,7,"The paper 'motiongpt: finetuned llms are general-purpose motion generators' seems to utilize prompt engineering by treating multimodal signals as special input tokens in large language models (LLMs) to generate human motion. However, it is not focused on 'hard prefix prompts' specifically, but rather on applying prompt engineering principles to multimodal inputs and finetuning LLMs for a specialized task. The concept of formulating signals into a unified prompt instruction is relevant to prompt engineering, but the study is more about motion generation rather than the systematic review of prompt engineering techniques.",http://arxiv.org/pdf/2306.10900 -up5: unbiased foundation model for fairness-aware recommendation,7,"The given abstract is relevant to prompt engineering study to a good extent because it covers the use of 'personalized prefix prompts' as part of the Counterfactually-Fair-Prompting (CFP) techniques. These techniques contribute to the broader field of prompt engineering by exploring how prompts can be designed or modified to address fairness and bias in recommendations. While the focus is not solely on hard prefix prompts, it does pertain to the sub-domain of prompt engineering for ethical and fairness considerations, which is an important aspect of the field. 
However, since the primary focus is on fairness-aware recommendation rather than prompt engineering itself, the rating is not a full 10.",http://arxiv.org/pdf/2305.12090 -fill in the blank: context-aware automated text input generation for mobile gui testing,7,"The paper introduces QTypist, a method which utilizes Large Language Models (LLMs) for the automated generation of semantic text inputs in mobile GUI testing. The relevance to prompt engineering study lies in the fact that the approach involves a 'prompt-based data construction and tuning method' which entails extracting prompts and answers for model tuning. This means the study directly involves designing and utilizing prompts to improve performance of AI models, which is closely related to prompt engineering. However, the study's primary focus is on the application of this technique for improving GUI testing rather than on the theory or principles behind prompt engineering itself. Hence, it's not entirely centered on prompt engineering but is highly related, warranting a 7 out of 10 for relevance.",https://arxiv.org/pdf/2212.04732 -explaining patterns in data with language models via interpretable autoprompting,9,"The abstract describes a study where a method called interpretable autoprompting (iPrompt) is used to generate and evaluate prompts for large language models, which is directly related to prompt engineering. The systematic review of 'hard prefix prompts' would likely cover different techniques and contributions in the area of prompt engineering, and iPrompt appears to be a notable example of innovation in this field. Therefore, the relevance to prompt engineering is high, although the study might not directly focus on hard prefix prompts but more generally on explanatory prompts and their iterative improvement.",http://arxiv.org/pdf/2210.01848 -instructzero: efficient instruction optimization for black-box large language models,9,"The abstract details a study that focuses on the optimization of instructional prompts for large language models (LLMs), particularly in the scenario where direct optimization of the instructions isn't possible, such as with black-box LLMs. The study introduces 'InstructZero', a method which indirectly optimizes instructions through the use of 'soft prompts' via Bayesian optimization, which is highly relevant to the field of prompt engineering. This systematic approach to improving efficiency and effectiveness of LLM instructions directly relates to studies of how prompts can be engineered to yield better performance from LLMs. The only reason the rating isn't a perfect 10 is that the abstract doesn't mention 'hard prefix prompts', the specific topic of interest, and focuses instead on 'soft prompts'.",https://arxiv.org/pdf/2306.03082 -language models enable simple systems for generating structured views of heterogeneous data lakes,8,"The abstract describes a system that leverages large language models (LLMs) for the purpose of generating queryable tables from semi-structured documents. Prompt engineering is an implicit but significant aspect of this work; the LLMs are used to either directly extract values or to generate code based on the natural language prompts given to them. The success of EVAPORATE and EVAPORATE-CODE+ hinges on effective prompt engineering to guide the LLMs. While the study does not seem to be explicitly focused on 'hard prefix prompts,' the underlying principle of using prompts to control LLM output aligns with studies in prompt engineering. 
Hence, the relevance is rated highly but not maximally due to the lack of specificity regarding 'hard prefix prompts.'",http://arxiv.org/pdf/2304.09433 -recurrentgpt: interactive generation of (arbitrarily) long text,8,"The paper presents a novel approach for prompting language models to generate long text sequences by incorporating an LSTM-like recurrence mechanism into GPT, termed RecurrentGPT. Despite not addressing 'hard prefix prompts' directly, the study is relevant to prompt engineering as it explores strategies for enhancing the capabilities of language models through sophisticated prompting techniques by simulating external memory mechanisms. This has implications for how prompts can be engineered to handle more complex tasks like generating long-form content, which can be an aspect of prompt engineering studies. However, the focus on 'hard prefix prompts' is not explicit, thus the rating does not receive a full score.",http://arxiv.org/pdf/2305.13304 -prd: peer rank and discussion improve large language model based evaluations,7,"The abstract discusses methodologies for improving the evaluation of large language model responses, including a peer rank algorithm and peer discussion system which both can be considered forms of prompt engineering, as they involve crafting prompts to facilitate a discussion between LLMs for better assessment. These processes are relevant to prompt engineering studies because they deal with how input prompts affect LLMs' output and evaluation. Although the study's primary focus is not on the hard prefix prompts but rather on the evaluation techniques for model outputs, it indirectly contributes to the field of prompt engineering by exploring methods to refine the interaction and ranking processes between different models which is a subset of prompting strategies.",https://arxiv.org/pdf/2307.02762 -open sesame! universal black box jailbreaking of large language models,8,"The study is highly relevant to prompt engineering as it explores a method (using a genetic algorithm) for exploiting and manipulating large language models (LLMs) through prompts. While it specifically deals with adversarial attacks and alignment issues, understanding these vulnerabilities is crucial for developing robust prompt engineering techniques. It contributes to the field by highlighting the importance of security measures in prompt design to prevent unintended model behavior. However, the paper's primary focus is on the security and manipulation aspect rather than the constructive development or direct study of prompt engineering techniques, hence the rating is not a full 10.",https://arxiv.org/pdf/2309.01446 -what language reveals about perception: distilling psychophysical knowledge from large language models,8,"Although the study does not specifically focus on 'prompt engineering' or 'hard prefix prompts,' it is highly relevant because it involves the use of prompt auto-completion features of a large language model (GPT-3) for psychophysical research. The method of eliciting similarity scores through prompt responses is a form of prompt engineering where the design of the prompts is critical for the success of the study. 
However, it did not directly address hard prefix prompts, which would be specific sequences of words or phrases designed to elicit particular behaviors from language models, leading to a rating slightly lower than the maximum.",http://arxiv.org/pdf/2302.01308 -boosting language models reasoning with chain-of-knowledge prompting,9,"The abstract describes a novel approach in prompt engineering, specifically focusing on enhancing reasoning capabilities in Large Language Models through Chain-of-Knowledge prompting. It directly relates to the field of prompt engineering by proposing a methodology for improving the quality of generated outputs by incorporating structured knowledge evidence. This is highly relevant to prompt engineering studies, especially those concerning the improvement of model reasoning and reliability. The reason for not giving a full score of 10 is that it does not directly mention 'hard prefix prompts,' but the approach is undoubtedly within the scope of advanced prompt engineering techniques.",https://arxiv.org/pdf/2306.06427 -herding ai cats: lessons from designing a chatbot by prompting gpt-3,9,"The given abstract is highly relevant to prompt engineering as it specifically addresses challenges and insights gained from attempting to design a chatbot using prompts with GPT-3/4. It highlights difficulties in achieving a fully positive user experience through prompting alone, discusses the limitations of prompt control in practical applications, and considers the broader implications for design methods using Large Language Models. The focus on UX design and interaction with chatbots powered by LLMs correlates directly with studies on prompt engineering, as it deals with crafting prompts to elicit desired behavior from the model. Although it does not explicitly mention 'hard prefix prompts', the study of prompting effectiveness in this context is still pertinent to the broader field of prompt engineering.",https://dl.acm.org/doi/pdf/10.1145/3563657.3596138 -exploring large language model for graph data understanding in online job recommendations,7,"The paper's relevance to prompt engineering is significant but not direct. The notion of using a 'meta-path prompt constructor' suggests a novel approach to prompt development, focusing on behavior graphs rather than text generation or parsing. While this represents an innovative application of LLMs in the context of recommendation systems, it is not a 'comprehensive systematic review on hard prefix prompts' as outlined in the initial prompt for engineering study. Yet, the paper does delve into prompt optimization relevant to a specific application (job recommendations), which is a pertinent aspect of prompt engineering. Thus, the relevance is high due to the contribution to the field of prompt construction and bias mitigation in LLMs, but not a perfect match since it doesn't directly address hard prefix prompts or provide a systematic review of the topic.",https://arxiv.org/pdf/2307.05722 -automated annotation with generative ai requires validation,7,"While the abstract does not mention 'prompt engineering' or 'hard prefix prompts' directly, it does discuss the quality of prompts as a factor that affects the performance of LLMs in text annotation tasks. The study highlights the importance of validation against human-generated labels, which indirectly ties into the importance of designing effective prompts to get the desired output from an LLM. 
Therefore, the relevance to prompt engineering is substantial but not explicit, hence the rating of 7 out of 10.",http://arxiv.org/pdf/2306.00176 -studenteval: a benchmark of student-written prompts for large language models of code,7,"The paper introduces StudentEval, a benchmark for evaluating the efficacy of prompts written by non-expert users (beginning programmers) when interacting with code-based Large Language Models (LLMs). This is relevant to the study of prompt engineering as it provides insight into how well different models respond to prompts that vary in quality and are created by non-experts. It highlights the importance of prompt variability in assessing model performance, which directly relates to the broader inquiry of prompt engineering. Additionally, it contributes to understanding the challenges faced by new programmers in effectively leveraging LLMs for coding tasks, which could inform the development of improved prompt engineering practices. However, the paper might be more narrowly focused on the code LLMs and the non-expert population, rather than a broad, comprehensive systematic review on hard prefix prompts in general prompt engineering.",http://arxiv.org/pdf/2306.04556 -mindmap: knowledge graph prompting sparks graph of thoughts in large language models,8,"The study described in the abstract appears to be highly relevant to the field of prompt engineering. It focuses on a specific technique of prompting large language models (LLMs) using knowledge graphs (KGs) to address common issues such as knowledge incorporation, hallucinations, and transparency. While the study does not specifically mention 'hard prefix prompts,' which may have been the focus of the requested 'comprehensive systematic review,' it does discuss the broader topic of enhancing the interaction between LLMs and external structured knowledge sources. The concept of 'MindMap' prompting could be considered as a type of advanced prompt engineering that aims to deepen the language model's understanding and reasoning capabilities. Hence, the relevance is rated at 8, acknowledging its importance to the field of prompt engineering but also noting that it does not directly address the specific aspect of 'hard prefix prompts.'",https://arxiv.org/pdf/2308.09729 -progprompt: program generation for situated robot task planning using large language models,8,"This publication appears to be highly relevant to prompt engineering as it discusses a structured approach to creating prompts for large language models (LLMs), specifically in the context of generating plans for situated robot tasks. It also mentions the use of ablation experiments to make concrete recommendations about prompt structure, which is an essential part of studying how different prompts affect the performance of LLMs. Although the study's primary focus is on prompts for programmatic tasks within robotics, the methodologies and findings could likely be generalized or applied to other areas of prompt engineering. The rating is not a perfect 10 since the review does not specify that it is a 'systematic review' or that it focuses on 'hard prefix prompts,' but it is still highly applicable to the field.",https://link.springer.com/content/pdf/10.1007/s10514-023-10135-3.pdf -clusterllm: large language models as a guide for text clustering,7,"The text describes a study on a text clustering framework called ClusterLLM that uses a large language model, ChatGPT, for gaining insights and for tuning clustering granularity based on text prompts. 
While the study is not specifically about 'prompt engineering', the use of 'hard triplet questions' and 'carefully designed pairwise questions' indicates a deliberate and strategic approach to crafting prompts to achieve specific outcomes from the language model. This shows relevance to the study of prompt engineering, as the effectiveness of ClusterLLM relies on the proper construction of these prompts to guide the clustering process. However, the application is specific to text clustering rather than prompt engineering in general, which is why the rating is not closer to 10.",http://arxiv.org/pdf/2305.14871 -how to unleash the power of large language models for few-shot relation extraction?,7,"The abstract indicates a study focused on few-shot relation extraction using large language models like GPT-3.5. It discusses in-context learning and data generation, which are both relevant to prompt engineering, as they deal with how to effectively use prompts to leverage the capabilities of language models for specific tasks. The mention of 'task-related instructions' is directly aligned with prompt engineering, as it involves designing prompts to guide the model's responses. However, the study appears to be more broadly focused on the applications of these methods in relation extraction rather than solely on prompt engineering techniques. Therefore, while there is clear relevance, it is not exclusively centered on prompt engineering, meriting a 7 out of 10.",http://arxiv.org/pdf/2305.01555 -knowledge refinement via interaction between search engines and large language models,7,"The described study 'knowledge refinement via interaction between search engines and large language models' is relevant to the concept of prompt engineering to a considerable extent. The 'InteR' framework focuses on refining the search and query processes by integrating search engines and LLMs, which directly relates to the creation and optimization of prompts to facilitate these interactions. The study touches upon enhancing prompt formulation using search engine-retrieved documents. Even though it does not focus exclusively on hard prefix prompts or a systematic review of such, it presents relevant research on improving input (which includes prompts) to LLMs to achieve better results in information retrieval tasks. Hence, it contributes to the broader field of prompt engineering by proposing practical ways to optimize the interaction between users, LLMs, and search engines.",http://arxiv.org/pdf/2305.07402 -introspective tips: large language model for in-context decision making,7,"The abstract describes a study focusing on improving the decision-making capabilities of large language models (LLMs) by generating 'Introspective Tips' which are likely a form of advanced prompts. This approach is related to prompt engineering in that it involves enhancing the prompt (a hard prefix, in this case) to improve the model's performance without altering the underlying model parameters. This relates to how prompting can be used to guide an LLM's output. However, it's not a perfect match, as it doesn't focus specifically on a 'systematic review on hard prefix prompts' but rather on a practical application of prompts for LLM decision-making enhancement. 
Therefore, it doesn't completely align with prompt engineering studies, but it has substantial relevance due to its focus on the optimization and application of prompts.",http://arxiv.org/pdf/2305.11598 -augmenting greybox fuzzing with generative ai,8,"The abstract describes ChatFuzz, a system that integrates generative AI (such as ChatGPT) with greybox fuzzing to enhance the generation of format-conforming inputs. The use of ChatGPT to transform initial seed inputs into variations through prompting is directly related to prompt engineering, as this process necessitates designing effective prompts to guide the generative model to produce useful outputs for fuzzing tasks. The paper outlines an application of prompt engineering in a cybersecurity context. The reason for not giving a full 10 is because it focuses specifically on the application of generative AI for fuzzing and not on the broader study of prompt engineering across various domains or on the details of how the prompts are constructed and optimized, which would be of direct interest in a systematic review on hard prefix prompts.",http://arxiv.org/pdf/2306.06782 -taming ai bots: controllability of neural states in large language models,8,"The abstract describes a study that is highly relevant to prompt engineering, as it addresses the ability to control AI bot behavior through prompts, which is a core aspect of prompt engineering. This study's focus on the formal definition of 'meaning' and the conditions under which an AI bot can be directed to reach any given 'meaning' is directly related to how prompts are engineered to achieve desired outcomes in language models. The exploration of controllability in the context of large language models (LLMs) also contributes to understanding how different prompts can influence the state of AI, which is a fundamental concern for prompt engineering. The reason for not giving a perfect score is that the abstract does not mention 'hard prefix prompts' specifically, which was the focus indicated in the prompt engineering study query.",http://arxiv.org/pdf/2305.18449 -spellburst: a node-based interface for exploratory creative coding with natural language prompts,7,"The described study 'Spellburst' is relevant to prompt engineering as it incorporates the use of natural language prompts to facilitate creative coding, an application of prompt engineering. It indicates the development of a system that allows users to interact using high-level semantic constructs ('expressive prompt-based interactions') for creative tasks, which is a part of prompt engineering. However, the focus on a node-based interface for artists suggests that prompt engineering is only a portion of the study's objectives, hence the study may not be exclusively dedicated to hard prefix prompts or the fundamental principles of prompt engineering.",https://arxiv.org/pdf/2308.03921 -smoothllm: defending large language models against jailbreaking attacks,7,"The study deals with defence mechanisms against adversarial attacks on large language models, specifically addressing the vulnerability at the level of input prompts. Although it is not directly related to 'hard prefix prompts,' it is highly relevant to the broader field of prompt engineering as it tackles the manipulation of prompts to secure desired or undisturbed outputs from language models. 
The relevance is particularly notable in the context of creating robust prompting strategies that could prevent adversarial attacks and thus maintain the integrity of the interaction with the models. However, the research does not specifically focus on the systematic review of hard prefix prompts, which would be the core topic for direct relevance.",https://arxiv.org/pdf/2310.03684 -fully autonomous programming with large language models,7,"The title and abstract indicate that this study deals with program synthesis using Large Language Models (LLMs) and explores different strategies for improving the code generation process, which includes evaluating various prompt-based instructions for the LLM. Although the study does not directly mention 'hard prefix prompts,' it implies a close examination of how to effectively prompt LLMs (like OpenAI Codex) to generate, repair, and debug programs. Given that the study involves exploring and comparing different prompt-generation techniques for improving the performance of LLMs in a programming context, it is relevant to prompt engineering to a significant extent. Thus, the rating recognizes the relevance of exploring effective instructions for LLMs, but it is not a perfect match since the study does not explicitly focus on 'hard prefix prompts' but rather on a broader set of prompt-generating techniques and program synthesis strategies.",https://arxiv.org/pdf/2304.10423 -large language models and (non-)linguistic recursion,7,"The abstract indicates that the study involves designing prompts to elicit certain behaviors from a large language model (LLM), specifically with respect to recursive structures in language. Since prompt engineering is about how to effectively design prompts to achieve desired outputs from LLMs, this study's focus on prompt design for testing meta-linguistic awareness of recursion is relevant to prompt engineering. Although it does not directly address 'hard prefix prompts', it does touch on a related aspect of prompt design. The relevance is not maximal as it doesn't seem to focus on different categories or types of prompts, such as 'hard prefixes', but rather on a specific feature of language (recursion) and how well it can be elicited and analyzed in LLMs.",http://arxiv.org/pdf/2306.07195 -domain knowledge distillation from large language model: an empirical study in the autonomous driving domain,8,"The paper's abstract discusses the use of prompt engineering with the LLM ChatGPT for the semi-automation of domain knowledge distillation in the engineering process, which is relevant to the subject of 'prompt engineering study'. It explores the practical application of prompts in creating knowledge-based systems, which aligns with the idea of 'hard prefix prompts' in that it examines structured interactions with an LLM. The paper presents empirical findings on the efficacy of prompt engineering in a specific domain, which is valuable for the broader study of prompt engineering techniques. The rating is not a full 10 since the 'hard prefix prompts' might refer to a more specific subset of prompts or methodologies within the field of prompt engineering, which the paper's abstract does not explicitly address.",https://arxiv.org/pdf/2307.11769 -reducing retraining by recycling parameter-efficient prompts,9,"The provided abstract is highly relevant to prompt engineering study as it addresses the issue of retraining prompts when an underlying language model is updated. 
The concept of 'Prompt Recycling' directly pertains to prompt engineering, by aiming to adapt prompts to new versions of a model without the need for extensive retraining. This research could significantly contribute to the efficiency and practicality of using prompts in various applications, hence the high relevance rating.",http://arxiv.org/pdf/2208.05577 -validating large language models with relm,8,"The abstract mentions the validation and evaluation of language model concerns including bias and inappropriate language, which are topics relevant to prompt engineering because they address the model's outputs in response to prompts. Furthermore, ReLM's increased prompt-tuning coverage directly pertains to prompt engineering as it suggests an improved method for evaluating and refining how prompts are designed and how models respond to them. The connection to 'hard prefix prompts' is not explicit, leading to a rating lower than 10, but the general subject matter is pertinent to studies in prompt engineering.",http://arxiv.org/pdf/2211.15458 -preserving in-context learning ability in large language model fine-tuning,9,"The discussed paper addresses a crucial aspect of prompt engineering, which is preventing the loss of a large language model's innate in-context learning abilities during the fine-tuning process. The proposed two-stage fine-tuning framework, ProMoT, is highly relevant as it involves prompt tuning, a method directly connected to prompt engineering. The study's findings on how to maintain a model's performance across various tasks and its ability to work with different formats add valuable insights to the field. The research is relevant to prompt engineering as it provides a potential solution to a common problem faced when fine-tuning models with hard prompts, although it does not directly discuss 'hard prefix prompts'. Nonetheless, the principles could be applicable to the systematic review on hard prefix prompts.",https://arxiv.org/pdf/2211.00635 -improving knowledge extraction from llms for robotic task learning through agent analysis,8,"The abstract outlines a study that, while not focusing exclusively on hard prefix prompts, does address the broader concept of prompt engineering within the context of LLMs and robotic task learning. It directly engages with how prompt engineering can be improved and augmented through a cognitive-agent approach, making it relevant to those interested in the intricacies and optimizations of prompting large language models. This is highly pertinent to the field of prompt engineering, although the text does not specifically mention 'hard prefix prompts.'",https://arxiv.org/pdf/2306.06770 -large language models as superpositions of cultural perspectives,7,"The abstract discusses the concept of 'perspective controllability' within Large Language Models (LLMs), which is relevant to prompt engineering. It highlights how LLMs can exhibit context-dependent values and personality traits, a concept critical to understanding how different prompts can influence the output of such models. Despite not directly addressing 'hard prefix prompts', the study does engage with the underlying mechanics that would be essential for designing effective prompts to guide LLM responses, which is a fundamental aspect of prompt engineering. 
Therefore, while not focused on hard prefix prompts specifically, the research contributes to the broader understanding of prompt design and LLM interaction methods, which could impact the study of prompt engineering.",https://arxiv.org/pdf/2307.07870 -robot task planning based on large language model representing knowledge with directed graph structures,7,"The given title and abstract involve the development of an LLM prompt template, which indicates a study related to prompt engineering as it aims to create a prompt structure with strong expressive power. This is directly relevant to the exploration of how prompts are structured and their relation to large language models (LLMs) in the context of task planning for robots. The systematic review of 'hard prefix prompts' could likely benefit from insights derived from this proposed method and its structured template. However, the study might be more focused on the application side of prompt engineering in robot task planning, rather than a broad and comprehensive review of prompt engineering techniques and theories. Therefore, it is not entirely focused on 'hard prefix prompts' but is relevant to the broader field of prompt engineering.",http://arxiv.org/pdf/2306.05171 -using large language models to generate engaging captions for data visualizations,8,"The abstract discusses the application of large language models to generate captions for data visualizations, with a focus on the process of 'prompt engineering'. Although it does not mention a 'hard prefix prompt' specifically, the study is centered around the broader concept of prompt engineering, which is designing the most effective prompts to elicit desired responses from a language model like GPT-3. This falls under the umbrella of prompt engineering and is therefore highly relevant to the study of how prompts can affect the output of language models. The rating is not a full 10 because the study abstract does not specifically address a 'systematic review' on 'hard prefix prompts' but seems more focused on practical experimentation and application.",http://arxiv.org/pdf/2212.14047 -using a large language model to control speaking style for expressive tts,7,"While the study primarily focuses on the use of a language model for controlling prosody in text-to-speech (TTS) systems, it is relevant to prompt engineering due to the use of prompts to control language model outputs. Specifically, the study involves engineering prompts that guide the language model to produce suggestions on pitch, energy, and duration for expressive TTS, which is an application of prompt engineering. Though the study’s main goal is not about prompt engineering itself, the methodology of designing prompts to achieve desired outcomes in model behavior is an essential aspect of prompt engineering. Therefore, this study would provide useful information for those interested in the intersection of prompt engineering and TTS technology.",https://arxiv.org/pdf/2305.10321 -gpt4tools: teaching large language model to use tools via self-instruction,8,"The paper is relevant to prompt engineering study because it discusses an advanced method of enabling Large Language Models (LLMs) to use tools through the generation of an instruction-following dataset using a form of prompt engineering. It specifically mentions 'sophisticated prompt engineering' as a crucial component for LLMs tool usage capabilities. 
Although the focus is more on self-instruction and tool usage within multimodal contexts, prompt engineering is a significant part of the methodology used in teaching the LLMs. However, it does not focus exclusively on 'hard prefix prompts,' which would be central to a study specifically addressing prompt engineering, hence the rating is not a full 10.",http://arxiv.org/pdf/2305.18752 -simulating h.p. lovecraft horror literature with the chatgpt large language model,9,"The study directly investigates the application and effectiveness of prompt engineering techniques to guide a language model's output to emulate H.P. Lovecraft's horror literature style. Given that the focus is on both the generation of text in a specific literary style and the examination of prompt engineering methods, this is highly relevant to the field of prompt engineering. The rating is not a perfect 10 because the study also delves into the model's architecture and comparative analysis, which, while related, are not exclusively focused on prompt engineering.",http://arxiv.org/pdf/2305.03429 -s3: social-network simulation system with large language model-empowered agents,8,"The paper is highly relevant to prompt engineering, as it explicitly mentions the use of prompt engineering and prompt tuning techniques to shape the behavior of agents within the social network simulation system. It indicates that these techniques are critical for the agents' performance in emulating human-like sensing, reasoning, and behavior, which are key in the context of the study. The rating is not a full 10 because the abstract does not provide detailed insight into the nature of the prompt engineering study or its findings specific to the 'hard prefix prompts', which is the specific focus of the prompt engineering study in question.",https://arxiv.org/pdf/2307.14984 -hierarchical prompting assists large language model on web navigation,8,"The abstract discusses a hierarchical prompting approach specifically designed to improve the performance of large language models on tasks involving complex observations, such as web navigation. While this is not directly related to 'hard prefix prompts', it falls under the broader category of prompt engineering which aims to enhance how models interpret and react to prompts. The hierarchical structure mentioned involves creating more efficient prompts that enable better decision making. Therefore, the study is highly relevant to the field of prompt engineering, albeit with a specific focus on a hierarchical strategy rather than hard prefix prompting techniques.",http://arxiv.org/pdf/2305.14257 -on robustness of prompt-based semantic parsing with large pre-trained language model: an empirical study on codex,7,"The study is relevant to prompt engineering to a significant extent as it investigates the robustness of prompt-based semantic parsing with a large pre-trained language model such as CODEX, which is a practical aspect of prompt engineering. However, the focus is more on adversarial robustness and less on the hard prefix prompts specifically. As the study involves understanding how prompts work with a language model trained on code, it has implications for the design of prompts (engineering) for better robustness, which is a critical aspect of prompt design. 
Nevertheless, the absence of a direct investigation into 'hard prefix prompts', as suggested by the original prompt, limits the full relevance of this study to the prompt engineering field described in the initial question.",http://arxiv.org/pdf/2301.12868 -investigating the translation performance of a large multilingual language model: the case of bloom,7,"The relevance of this study to prompt engineering is fairly high as it touches upon prompt design within the context of evaluating a multilingual language model's performance in machine translation tasks. While the study is not exclusively focused on 'hard prefix prompts,' it does examine how variations in prompts (0-shot vs. few-shot settings) influence the language model's output. Therefore, the investigation of prompt design as a factor in model performance is pertinent to the broader field of prompt engineering, particularly as it relates to enhancing the model's understanding and generating the correct language output. However, the rating is not a full 10 since the primary focus is on the translation performance rather than on prompt engineering methodologies or prompt optimization techniques exclusively.",http://arxiv.org/pdf/2303.01911 -cold-start data selection for few-shot language model fine-tuning: a prompt-based uncertainty propagation approach,8,"The title and abstract are highly relevant to prompt engineering as they discuss 'PATRON', a method utilizing prompt-based approaches for improving the data selection process in few-shot learning scenarios for language models. This method directly relates to engineering prompts to handle uncertainty, which is a subset of the broader field of prompt engineering. However, the study does not seem to concentrate on 'hard prefix prompts', which is the specific type mentioned in the prompt. Hence, it may not cover the full scope of the systematic review on hard prefix prompts if that is the sole focus, but it still remains very relevant to the broader category of prompt engineering studies.",http://arxiv.org/pdf/2209.06995 -can large language models reason about medical questions?,8,"This abstract is highly relevant to prompt engineering as it discusses the effectiveness of different prompting scenarios such as Chain-of-Thought, zero-shot, few-shot, and retrieval augmentation in eliciting accurate responses from large language models for medical questions. The study focuses on the LLM's reasoning capabilities in the context of complex real-world questions, which is a critical component of prompt engineering, especially when evaluating the utility and reliability of prompts in domain-specific knowledge tasks.",http://arxiv.org/pdf/2207.08143 -tabllm: few-shot classification of tabular data with large language models,7,"The study addresses the conversion of tabular data to a natural-language string for classification tasks, which can be considered a specific form of prompt engineering. The relevance lies in the fact that the process involves crafting prompts that enable a language model to interpret and classify non-textual data efficiently. However, the study's primary focus is on tabular data and classification tasks, rather than the broader topic of hard prefix prompts used across various types of data and tasks in prompt engineering.
Therefore, the rating is a 7, indicating that the study is relevant but not fully aligned with a systematic review of hard prefix prompts in prompt engineering.",http://arxiv.org/pdf/2210.10723 -prompting is programming: a query language for large language models,9,"The abstract provided discusses Language Model Programming (LMP) and Language Model Query Language (LMQL), which are novel approaches to prompt engineering. The focus on an efficient inference procedure and the ability to impose constraints on language model outputs is highly relevant to the field of prompt engineering, as it aims to optimize the way we interact with language models. The relevance is not rated a full 10 only because prompt engineering can encompass a broader range of techniques and considerations beyond the specific innovations of LMP and LMQL, such as different prompting strategies, the study of few-shot learning, etc. However, the presented work is undeniably pertinent and likely to contribute significantly to the advancement of prompt engineering methodologies.",https://dl.acm.org/doi/pdf/10.1145/3591300 -large language models are reasoning teachers,8,"The paper is highly relevant to the study of prompt engineering as it discusses an advanced technique, Fine-tune-CoT, which generates reasoning samples from large models to improve the prompt-based capabilities of smaller models. Although the technique focuses on fine-tuning smaller models rather than the creation of prompts per se, the central idea of using larger models as a 'reasoning teacher' is deeply intertwined with generating more effective prompts that leverage the large model's understanding to enhance reasoning in smaller models. This contributes to the field of prompt engineering by optimizing the efficiency and capability of prompts in eliciting desired responses, particularly for complex reasoning tasks.",http://arxiv.org/pdf/2212.10071 -class-aware visual prompt tuning for vision-language pre-trained model,9,"The title and abstract of the paper indicate a high relevance to prompt engineering as the study focuses on tuning prompts for a vision-language pre-trained model, which involves modifying and optimizing the input prompts to elicit desired responses from the model. Although the paper does not explicitly mention 'hard prefix prompts', it falls within the broader category of prompt engineering by exploring 'visual prompts' and 'text prompts'. This makes it significantly relevant to the topic of prompt engineering study as it contributes to the understanding of how to efficiently tune and adapt pre-trained models to specific tasks through prompt modifications.",http://arxiv.org/pdf/2208.08340 -analogy generation by prompting large language models: a case study of instructgpt,9,"The study's focus on prompt design and its effectiveness in generating analogies is highly relevant to prompt engineering. It explores how different prompts affect InstructGPT's output, which is a core aspect of the field. The sensitivity analysis to prompt structure and variations is also pertinent to understanding how to engineer prompts for better performance. 
The rating is not a full 10 because the study is specifically about analogy generation, so it might not cover other aspects of prompt engineering comprehensively.",http://arxiv.org/pdf/2210.04186 -using large language models to simulate multiple humans,8,"The presented abstract is highly relevant to prompt engineering as it discusses the use of prompt templates to generate varied responses from a language model in the context of behavioral experiments. The methodology relies heavily on designing effective prompts to ensure the simulation accuracy of human responses. This is directly related to prompt engineering, as it requires an understanding of how to tailor prompts to elicit specific reactions from the model. The study's validation and exploration of model responses to different scenarios are a core part of prompt engineering research. However, the study does not explicitly focus on 'hard prefix prompts', thus the rating is not a full 10.",https://arxiv.org/pdf/2208.10264 -large language models in the workplace: a case study on prompt engineering for job type classification,9,"The abstract provided discusses a case study that centers on the use of prompt engineering for the specific task of job classification. It details the comparative performance analysis of various models including state-of-the-art GPT-3.5-based language models. Considering that prompt engineering is both a focus of the study and is used as a tool to direct the language models toward the desired classification task, the relevance to prompt engineering is very high. A point is subtracted because the details on 'hard prefix prompts' specifically are not mentioned, which could be an aspect of prompt engineering but is not explicitly covered in the abstract provided.",http://arxiv.org/pdf/2303.07142 -soft-prompt tuning for large language models to evaluate bias,7,"The abstract discusses 'soft-prompt tuning' for evaluating biases in large language models, which is related to prompt engineering as it involves the refinement of prompts to achieve specific outcomes from language models. However, the study focuses specifically on sentiment classification tasks and the evaluation of bias, not on 'hard prefix prompts' as specified in the original query for a comprehensive systematic review. Therefore, the relevance to the precise subject of 'hard prefix prompts' is indirect, hence the rating of 7, indicating moderate relevance to prompt engineering but not closely aligned with the original request for information on hard prefix prompts.",http://arxiv.org/pdf/2306.04735 -promptify: text-to-image generation through interactive prompt exploration with large language models,8,"The paper describes 'Promptify', a system designed to aid in prompt engineering for text-to-image generation by making the process interactive, which is highly relevant to the study of prompt engineering. While it doesn't specifically address 'hard prefix prompts,' the general field of designing and refining prompts to achieve better alignment with user intent is central to prompt engineering. The suggestion engine's utilization of large language models to aid in crafting prompts further aligns this work with the broader domain of prompt engineering.
However, the paper's focus on text-to-image and not purely text outputs means it's not a complete overlap with prompt engineering studies that may deal with a variety of output modalities (e.g., text-to-text, text-to-speech), hence the rating is not a full 10.",https://arxiv.org/pdf/2304.09337 -you only prompt once: on the capabilities of prompt learning on large language models to tackle toxic content,8,"The study directly investigates the use of prompt learning with large language models, which is a clear application of prompt engineering. It focuses on how prompting these models can be used to address toxicity, a significant part of language model applications. The relevance is high because it involves creating prompts for classification, detection, and detoxification tasks. However, the study is specific to toxic content moderation, which is a subset of prompt engineering, hence not a full 10.",https://arxiv.org/pdf/2308.05596 -controlling the extraction of memorized data from large language models via prompt-tuning,8,"The abstract details a study that is highly relevant to prompt engineering, as it directly involves the technique of prompt-tuning to manipulate the behavior of Large Language Models. It is relevant to the study of controlling the output of such models, particularly concerning data extraction and privacy issues, which are key considerations in prompt engineering. The deduction of two points reflects that the abstract specifically focuses on the memorization aspect and the privacy concerns rather than the broader field of prompt engineering or hard prefix prompts in general.",http://arxiv.org/pdf/2305.11759 -sensitivity and robustness of large language models to prompt in japanese,8,"The paper focuses on the sensitivity and robustness of Large Language Models to prompt changes, which is a core aspect of prompt engineering. It is highly relevant as it evaluates how minor alterations in prompts can impact model performance, directly relating to the study of prompt engineering. The slight deduction in rating is because it does not address 'hard prefix prompts,' the specific type of prompt mentioned in the original query, but rather the broader concept of prompt sensitivity and robustness in the context of Japanese language prompts.",http://arxiv.org/pdf/2305.08714 -bounding the capabilities of large language models in open text generation with prompt constraints,9,"The abstract presents a relevant study in the area of prompt engineering as it focuses on analyzing and bounding abilities of generative models with a prompt-centric approach. The researchers' use of structural and stylistic constraints directly pertains to prompt engineering, given that they are well-defined constraints that can affect how prompts guide model generation. The relevance is further supported by the use of a major model like GPT-3 as a case study and the consideration of generalizability to other large models. The deduction of one point is due to the absence of specific details about 'hard prefix prompts' from the given abstract, though the content is strongly related to prompt engineering overall.",http://arxiv.org/pdf/2302.09185 -linguist: language model instruction tuning to generate annotated utterances for intent classification and slot tagging,9,"The abstract describes a method called LINGUIST which involves fine-tuning a large language model using a flexible instruction prompt to improve the generation of annotated data for Intent Classification and Slot Tagging. 
This process is closely related to prompt engineering, as it involves the specific design of prompts to achieve desired outcomes in the model's performance. Although it is not exclusively focused on 'hard prefix prompts,' the practice of instruction tuning and prompt design to guide the model's output makes this study highly relevant to the field of prompt engineering. The fine-tuning on instruction prompts is a subset of prompt engineering that has a broad impact on the data generation process for natural language understanding tasks.",http://arxiv.org/pdf/2209.09900 -conal: anticipating outliers with large language models,8,"The abstract describes a methodology for improving text classification models' handling of out-of-distribution (OOD) examples by generating these examples via prompts to a large language model. The relevance to prompt engineering lies in the fact that it utilizes prompt-based techniques to generate new datasets that represent novel classes, which is a part of the broader field of prompt engineering. While the study does not focus on 'hard prefix prompts' specifically, the process of generating prompts to create OOD examples is an integral part of prompt engineering. Therefore, the relevance is rated as high but not maximal due to the specific approach not being the central topic of prompt engineering studies.",http://arxiv.org/pdf/2211.15718 -variational prompt tuning improves generalization of vision-language models,9,"The presented study is highly relevant to prompt engineering as it explores an innovative approach to prompt tuning for vision-language models, which is a key area in the field. It proposes a method that enhances the generalization capabilities of foundational models by using a probabilistic model to generate prompts. This addresses a common issue with prompt tuning where prompts may be too narrow or specific, thus hindering the ability of the model to generalize. The mention of integration with standard and conditional prompt learning frameworks suggests that this study is specifically tailored towards improving the efficacy of prompt engineering in practical applications. The only reason it doesn't receive a perfect score is because the study focuses on vision-language models, and while it is highly relevant, it may not encompass all aspects of prompt engineering that might be applicable in purely language-based models.",https://arxiv.org/pdf/2210.02390 -prompt-and-rerank: a method for zero-shot and few-shot arbitrary textual style transfer with small language models,8,"The abstract describes a method that directly involves prompt engineering through the use of zero-shot or few-shot prompting as part of a 'Prompt-and-Rerank' process for textual style transfer. Deliberate prompt design choices are discussed as affecting the quality of style transfer, including the use of prompt paraphrasing and delimiter-pair choice. This directly ties to the area of prompt engineering as it is about optimizing the prompts given to language models to achieve a certain task. 
However, the relevance is not a full 10 as the primary focus is on textual style transfer rather than the structure and formulation of the prompts themselves, which would constitute a comprehensive systematic review on hard prefix prompts.",https://arxiv.org/pdf/2205.11503 -visual-language navigation pretraining via prompt-based environmental self-exploration,8,"The abstract presents a study on improving Vision-Language Navigation (VLN) by utilizing a method called Prompt-based Environmental Self-exploration (ProbES). This involves prompt tuning for language embeddings to adapt a pretrained model like CLIP to new environments without human supervision. Although not directly concerned with 'hard prefix prompts', it relates to prompt engineering significantly as it deals with the adaptation and tuning of prompts to enhance learning efficiency in AI models. The focus is more on vision-language applications and self-exploration but it still falls under the broad umbrella of prompt engineering.",http://arxiv.org/pdf/2203.04006 -prcbert: prompt learning for requirement classification using bert-based pretrained language models,9,"The relevance of the given paper to the study of prompt engineering is high. The paper discusses the application of prompt learning, a technique within prompt engineering, to the domain of software requirement classification using BERT-based pre-trained language models. Since it explicitly deals with the use of prompt templates to improve classification performance, it is highly relevant to the prompt engineering field, particularly in the context of applying these techniques to domain-specific tasks. However, the focus appears to be more on the classification performance rather than the prompt engineering methodology itself, which is why the rating is not a full 10.",https://dl.acm.org/doi/pdf/10.1145/3551349.3560417 -fundamental limitations of alignment in large language models,8,"The abstract discusses the concept of 'alignment' in language models and the theoretical approach to understanding the limitations of alignment, which is highly relevant to prompt engineering. The Behavior Expectation Bounds (BEB) framework mentioned in the abstract directly relates to how prompts can influence a model's behavior, which is a core component of prompt engineering. The paper addresses the ability to trigger particular behaviors in large language models through the use of prompts, making it pertinent to the study of hard prefix prompts and how they can be engineered. Although the focus seems to be on the alignment aspect rather than the specific structure and content of prompts (i.e., 'hard prefixes'), the findings about adversarial prompting and the length of the prompt influencing behavior are crucial for the domain of prompt engineering. Therefore, I've rated it an 8 as it is quite pertinent but not exclusively centered on hard prefix prompts.",https://arxiv.org/pdf/2304.11082 -synthetic prompting: generating chain-of-thought demonstrations for large language models,9,"The relevance of the given article to prompt engineering is very high. Synthetic prompting, as described, directly addresses the creation and refinement of prompts for large language models, aiming to improve their reasoning capabilities. The systematic approach to generating chain-of-thought demonstrations ties closely to the study and evolution of prompt engineering techniques. It demonstrates the iterative process of generating questions and enhancing reasoning chains, which is at the heart of prompt engineering.
The only reason it doesn't receive a perfect score is that the content might not be exclusively focused on 'hard prefix prompts' as mentioned in the original request, but rather on the broader concept of prompt generation and optimization.",http://arxiv.org/pdf/2302.00618
-prompting large language models with answer heuristics for knowledge-based visual question answering,8,"The relevance to prompt engineering is high, as the study directly addresses the utilization of prompts in improving the performance of a large language model (GPT-3) for the specific task of knowledge-based visual question answering (VQA). The approach involves training a model to generate 'answer heuristics', which are then used as part of the prompts to refine GPT-3's understanding of the questions, thereby enhancing its ability to produce accurate answers. This method represents a novel application of prompt engineering, highlighting its effectiveness in extracting and utilizing implicit knowledge for complex tasks. However, the focus is particularly on incorporating answer heuristics into prompts for a VQA task rather than on hard prefix prompts in general, so the rating is not a perfect 10.",https://arxiv.org/pdf/2303.01903
-large language models are effective text rankers with pairwise ranking prompting,9,"The paper addresses a technique called Pairwise Ranking Prompting (PRP), which is highly relevant to the field of prompt engineering for large language models (LLMs). It contributes to the understanding of how different prompting methods can affect the capabilities of LLMs in the context of ranking tasks. As prompt engineering is largely about optimizing the interaction between users and LLMs for specific tasks, a study that advances the state of the art in this manner is closely related to prompt engineering studies.",http://arxiv.org/pdf/2306.17563
-exploring the mit mathematics and eecs curriculum using large language models,7,"The abstract describes a study where large language models are evaluated and fine-tuned for solving mathematics and EECS problems, which relates to prompt engineering in terms of optimizing inputs to enhance model performance. GPT-4's 'perfect solve rate' with prompt engineering indicates a direct application of prompt engineering techniques. However, the study focuses more broadly on the model's capabilities in academic problem-solving rather than strictly on prompt engineering methodologies and their systematic review, which would be the core interest of a 'hard prefix prompts' study. Hence, the relevance is strong but not complete.",http://arxiv.org/pdf/2306.08997
-sequential monte carlo steering of large language models using probabilistic programs,8,"The paper presents a method for controlling the outputs of large language models using sequential Monte Carlo steering, which is highly relevant to prompt engineering as it deals with influencing and guiding the performance of these models at inference time. This approach could be viewed as an advanced form of prompt engineering where the prompts are not fixed but are instead dynamic and take into account syntactic and semantic constraints. Although it does not explicitly tackle 'hard prefix prompts', it proposes a method that is applicable to prompt engineering in a broader sense.
Hence, the relevance is high but not absolute, as it is not directly focusing on a 'systematic review' or explicitly on 'hard prefix prompts'.",http://arxiv.org/pdf/2306.03081 -fineval: a chinese financial domain knowledge evaluation benchmark for large language models,7,"While the title 'fineval: a chinese financial domain knowledge evaluation benchmark for large language models' and abstract presented do not directly deal with 'prompt engineering' in the context of designing or studying hard prefix prompts, the mention of employing various prompt types (zero-shot, few-shot, answer-only, and chain-of-thought) within the evaluation benchmark touches on the principles of prompt engineering. Assessing different prompting strategies is essential to understanding how LLMs like GPT-4 respond in domain-specific tasks. The study's focus on measuring the performance of these LLMs using a set of prompts tailored for the financial domain implies a level of relevance to prompt engineering, as it would provide insights into the effectiveness of prompt design in eliciting the desired response from the models. However, the absence of a specific focus on the systematic review of hard prefix prompts limits the rating from being higher.",https://arxiv.org/pdf/2308.09975 -analyzing chain-of-thought prompting in large language models via gradient-based feature attributions,9,"The provided abstract is highly relevant to the field of prompt engineering, as it focuses on the Chain-of-thought (CoT) prompting method, which is an advanced tactic in prompting for large language models. The study investigates the impact CoT has on the models' interpretation and weighting of input tokens, which is a fundamental aspect of prompt engineering. Although the paper does not specifically address 'hard prefix prompts,' the examination of CoT prompting mechanisms contributes valuable insights into the broader topic of prompt design effectiveness in LLMs, making it pertinent to the prompt engineering study. The reduction in relevancy score from a perfect 10 to a 9 is due to the specified focus on CoT rather than hard prefix prompts specifically.",https://arxiv.org/pdf/2307.13339 -"utilizing large language models to simplify radiology reports: a comparative analysis of chatgpt-3.5, chatgpt-4.0, google bard, and microsoft bing",8,"The presented study, while not focusing on 'hard prefix prompts' specifically, addresses the broader field of prompt engineering by evaluating the effectiveness of different prompts in guiding LLMs to simplify radiology reports. Since the performance variation based on the type of prompt used is central to the paper, it contributes relevant insights into how prompts can be engineered for specific applications in medical communication. Thus, the relevance is high, but not a perfect score due to it not focusing exclusively on 'hard prefix prompts'.",https://www.medrxiv.org/content/medrxiv/early/2023/06/07/2023.06.04.23290786.full.pdf -understanding the effectiveness of very large language models on dialog evaluation,8,"The study is highly relevant to prompt engineering as it investigates the structure of prompts and their impact on the performance of various large language models in dialog evaluation tasks. While it does not specifically address 'hard prefix prompts,' it does concern the broader category of prompting and example selection, which are integral components of prompt engineering. 
The systematic review of how the datasets influence prompt construction and the exploration of example quantity and selection type are directly related to understanding and optimizing prompt efficacy.",http://arxiv.org/pdf/2301.12004 -identifying and extracting rare disease phenotypes with large language models,9,"The abstract describes a study focused on the development and evaluation of novel prompts for named entity recognition (NER) in the context of extracting rare disease (RD) phenotypes using large language models such as ChatGPT. This work is highly relevant to the field of prompt engineering as it directly involves designing and testing prompts to improve NER performance in zero-shot and few-shot settings, as well as comparing these results to traditional fine-tuning methods. This investigation contributes to understanding the potential and limitations of prompt engineering in practical applications, although it is specific to a particular domain of rare diseases.",http://arxiv.org/pdf/2306.12656 -knowledge-augmented language model prompting for zero-shot knowledge graph question answering,8,"The relevance of this study to prompt engineering is significant, as it involves the augmentation of input prompts with factual information retrieved from a knowledge graph to improve the performance of Large Language Models (LLMs) in answering questions. This approach directly pertains to prompt engineering by structuring the input to LLMs in a way that aids in zero-shot knowledge graph question answering. Although the focus is not specifically on 'hard prefix prompts,' the method does relate to constructing effective prompts that align with the principles of prompt engineering. The high rating reflects the close relation of knowledge augmentation in prompting to enhance model performance without additional training, which is a core aspect of prompt engineering. The rating is not a perfect 10 because the study specifies a specialized application in knowledge graphs and does not broadly survey prompt engineering techniques or include a systematic review of hard prefix prompts generally.",http://arxiv.org/pdf/2306.04136 -purr: efficiently editing language model hallucinations by denoising language model corruptions,7,"The study discusses improving the editing and attribution of language model outputs through prompt-based editing methods, which is closely related to prompt engineering. However, the focus is specifically on reducing hallucinations and improving efficiency, rather than on hard prefix prompts. While it does pertain to the broader category of prompt engineering, it does not address the systematic review of hard prefix prompts directly, hence the relevance rating is above average but not maximum.",http://arxiv.org/pdf/2305.14908 -training language models to follow instructions with human feedback,7,"The abstract describes a study where language models are fine-tuned with human feedback to improve their alignment with user intent, which is a form of prompt engineering. The process of creating 'InstructGPT' involves using prompts and enhancing the model's response to them; thus, it's relevant to the study of how prompts can be engineered to elicit better responses from language models. However, the study focuses more broadly on model alignment rather than specifically on 'hard prefix prompts', which might be a more technical aspect of prompt engineering. 
Therefore, it does not entirely focus on hard prefix prompts but is still significantly related to the general field of prompt engineering.",http://arxiv.org/pdf/2203.02155 -"translating radiology reports into plain language using chatgpt and gpt-4 with prompt learning: results, limitations, and potential",7,"The relevance to prompt engineering is significant, given that the title suggests the study involves using GPT models to translate radiology reports and this would likely involve devising specific prompts to generate plain language explanations. This indicates the research is about the application of prompt engineering to improve language model outputs in a clinical education context. However, the absence of detailed information in the abstract limits the ability to fully assess the degree to which prompt engineering is the focus of the study, so the rating is not a full 10.",https://vciba.springeropen.com/counter/pdf/10.1186/s42492-023-00136-5 -a systematic survey of prompt engineering on vision-language foundation models,9,"The abstract provided is highly relevant to prompt engineering, as it specifically addresses the application of prompt engineering techniques to vision-language foundation models. These are a subset of tasks within the broader field of prompt engineering. The abstract indicates a systematic review of how prompts are used in this context, discusses different types of models and how they are prompted, and outlines research directions in prompt engineering. While it does not exclusively focus on 'hard prefix prompts', which would be the only aspect potentially limiting a perfect score, the content is indeed directly related to studies on prompt engineering, hence the high relevance rating.",https://arxiv.org/pdf/2307.12980 -pouf: prompt-oriented unsupervised fine-tuning for large pre-trained models,8,"The abstract describes a study focused on prompt-oriented unsupervised fine-tuning for pre-trained models, which is highly relevant to the field of prompt engineering. Although it does not specifically mention 'hard prefix prompts,' the concept of aligning discrete distributions from prompts and target data, as well as the application to various tasks, indicates a strong connection to the techniques and objectives in prompt engineering. The fact that it involves unsupervised learning approaches to enhance the performance of the models on unlabeled data by using prompts makes it valuable to the prompt engineering study despite it not being a systematic review or explicitly focused on 'hard prefix prompts'.",http://arxiv.org/pdf/2305.00350 -model ensemble instead of prompt fusion: a sample-specific knowledge transfer method for few-shot prompt tuning,9,"The abstract discusses the approach of improving few-shot performance of prompt tuning through knowledge transfer and model ensembles, directly targeting the optimization of prompt engineering. Although it does not specifically mention 'hard prefix prompts', it is highly relevant to the broader area of prompt engineering which involves techniques to better adapt large language models to specific tasks with minimal examples. 
The proposed SESoM focuses on sample-specific adaptation, which is a key aspect of prompt engineering, thus justifying the high relevance rating.",http://arxiv.org/pdf/2210.12587
-attentional mixtures of soft prompt tuning for parameter-efficient multi-task knowledge sharing,9,"The abstract describes a study on a new method for parameter-efficient language model tuning called ATTEMPT, which utilizes a novel approach of soft prompt tuning for multi-task knowledge sharing. This is highly relevant to prompt engineering as it directly involves the development and optimization of prompts that influence the behavior of language models. The introduction of a lightweight sub-network for computing instance-wise attention for prompt interpolation is a significant contribution to the field. The fact that this approach contributes to multi-task learning, parameter efficiency, and interpretability in prompt tuning makes it extremely pertinent. The reason the rating is not a perfect 10 is that the abstract does not mention 'hard prefix prompts' specifically, which was the exact interest stated in the initial 'prompt engineering study' query.",https://arxiv.org/pdf/2205.11961
-prompting large pre-trained vision-language models for compositional concept learning,8,"The abstract describes research on the use of prompt-based learning within vision-language models, focusing on compositional learning. While the study emphasizes the use of 'soft-prompting' as opposed to 'hard-prompting', it still falls under the broader category of prompt engineering. The work is highly relevant to the field as it explores how prompts can be engineered to enhance the performance of machine learning models, which is a core part of prompt engineering studies. The rating is not a perfect 10 because the study does not exclusively deal with 'hard prefix prompts' as specified in the initial request but instead focuses on an alternative method within the same field.",https://arxiv.org/pdf/2211.05077
-proqa: structural prompt-based pre-training for unified question answering,9,"The abstract of 'proqa: structural prompt-based pre-training for unified question answering' is highly relevant to the study of prompt engineering. It details the use of structural prompts as a method to train a QA system, thus highlighting an approach to prompt engineering. The paper not only presents a model that is pre-trained with structural prompt-formatted data but also emphasizes the model's performance on benchmarks and its abilities in various learning scenarios. Although it doesn't specifically mention 'hard prefix prompts', the focus on structural prompt-based pre-training indicates a strong connection to prompt engineering studies.",http://arxiv.org/pdf/2205.04040
-novelty controlled paraphrase generation with retrieval augmented conditional prompt tuning,8,"The abstract describes research related to adapting pre-trained language models using a method called Retrieval Augmented Prompt Tuning and a variation for controlling lexical novelty in paraphrases. Although the study does not directly address 'hard prefix prompts', it is closely related to prompt engineering because it involves the use of specialized prompt tokens and is model-agnostic, which contributes to prompt engineering literature. This relevance is bolstered by the fact that altering prompts to control generation outcomes is a key area within prompt engineering.
The study's emphasis on parameter efficiency and controlled generation is not the primary concern of hard prefix prompts, hence the rating is not a full 10 but is still relatively high due to the overlapping interests.",https://ojs.aaai.org/index.php/AAAI/article/download/21297/21046
-discup: discriminator cooperative unlikelihood prompt-tuning for controllable text generation,9,"The paper describes an advanced technique for prompt learning with Causal Language Models, focusing on attribute-controllable text generation, which is a core aspect of prompt engineering. The method of utilizing a discriminator to refine the generation process is directly relevant to the study of hard prefix prompts and their optimization in prompt engineering. The relevance is not a perfect 10 since the abstract does not specifically mention 'hard prefix prompts,' yet the overall topic is highly pertinent to the field.",http://arxiv.org/pdf/2210.09551
-deep continuous prompt for contrastive learning of sentence embeddings,8,"The title and abstract describe a study that is highly relevant to prompt engineering, particularly with regard to optimizing and innovating within the framework of contrastive learning and sentence embeddings. The proposed method involves 'prefix deep continuous prompts,' which aligns with prompt engineering, though it does not explicitly mention 'hard prefix prompts.' Nonetheless, the focus on efficiently prompting a language model without full fine-tuning is a significant contribution to the field of prompt engineering. The emphasis on performance improvement with minimal parameter tuning and the avoidance of handcrafted prompt search provides valuable insights for prompt engineering studies. Thus, the relevance is rated high, but not full, due to the lack of direct reference to 'hard prefix prompts.'",http://arxiv.org/pdf/2203.06875
-improving the sample efficiency of prompt tuning with domain adaptation,9,"The given abstract describes research focused on improving the efficiency of prompt tuning for pretrained language models through domain adaptation methods. Although it does not directly mention the term 'hard prefix prompts', the study investigates 'soft prompts' and is highly relevant to the broader field of prompt engineering. It addresses a key challenge in the area, which is enhancing performance in data-scarce situations—a topic of interest for prompt engineering. The proposed OPTIMA method and its potential to improve the transferability and sample efficiency of prompt tuning are of significant value to prompt engineering studies. The rating is not a full 10 as the study might not be exclusively focused on hard prefix prompts, but it remains extremely relevant to the subject matter.",http://arxiv.org/pdf/2210.02952
-prompt-augmented linear probing: scaling beyond the limit of few-shot in-context learners,8,"The paper addresses an advanced technique in prompt engineering by combining linear probing with in-context learning, which directly pertains to how language models are prompted to enhance their understanding and usage of data. The concept of 'prompt-augmented linear probing' (PALP) is relevant to the field of prompt engineering as it seeks to improve the model's performance by carefully designing prompts that fit within the input constraints of language models and make the input more understandable for the model. This is central to the study of prompt engineering.
However, it does not specifically address 'hard prefix prompts', though the technique may still be applicable to that subset of prompt engineering. The TLDR section does not provide information in this context, hence the rating is not a full 10.",http://arxiv.org/pdf/2212.10873 -reduce communication costs and preserve privacy: prompt tuning method in federated learning,8,"The study is highly relevant to prompt engineering as it discusses 'prompt tuning,' which is a method within the field of natural language processing that directly relates to how prompts are engineered and optimized. While the primary focus of the study appears to be on the application of prompt tuning in federated learning, which entails privacy-preserving and communication-efficient aspects, it still contributes to the broader understanding of prompt engineering by showcasing its efficiency and robustness in different data distribution scenarios. The presence of a 'backdoor threat' evaluation further adds to its relevance as it touches on the security aspect of prompt engineering.",http://arxiv.org/pdf/2208.12268 -doubly right object recognition: a why prompt for visual rationales,7,"The abstract discusses the development of a 'why prompt' for visual recognition models, which is relevant to the study of prompt engineering as it involves creating prompts that guide models to give not only correct classifications but also the underlying rationales. Although the study is focused more on visual rationales and the intersection of language models with visual models, it still pertains to the broader category of prompt engineering. However, it is not directly related to 'hard prefix prompts' specifically, as it doesn't mention them explicitly, leading to a slightly lower relevance rating.",https://arxiv.org/pdf/2212.06202 -xprompt: exploring the extreme of prompt tuning,9,"The paper directly relates to the domain of prompt engineering, as it explores prompt tuning techniques and their impact on performance with Pre-trained Language Models (PLMs). The research addresses a specific issue in prompt engineering—negative impact of trained prompt tokens—and introduces a novel solution (XPrompt) to mitigate this issue. Therefore, it is highly relevant to studies focused on refining the application of prompts in PLMs. The only reason it does not receive a full score of 10 is that the prompt does not specifically mention 'hard prefix prompts,' so it may slightly deviate from that narrow aspect of prompt engineering study if the method described does not strictly apply to hard prompts.",http://arxiv.org/pdf/2210.04457 -automatic prompt augmentation and selection with chain-of-thought from labeled data,9,"The content of the presented paper is highly relevant to prompt engineering study due to its focus on Chain-of-thought prompting (CoT), which is a technique used in prompt engineering. Automate-CoT, the proposed strategy in the paper, directly addresses the process of generating and selecting prompts in an automated fashion, which aligns with the core components of prompt engineering. This technique also impacts how language models can be efficiently used in various reasoning tasks, which are central to the application of prompt engineering. 
The reason the rating is not a perfect 10 is that the abstract does not specifically mention the 'hard prefix prompts' that the user inquiry is about; instead, it refers to CoT in a general sense.",http://arxiv.org/pdf/2302.12822
-multitask prompt tuning enables parameter-efficient transfer learning,8,"The provided abstract describes a method for prompt tuning in the context of adapting large language models to various tasks, which is highly relevant to the field of prompt engineering. Multitask prompt tuning (MPT) is a technique that is specifically designed to create versatile prompts that are applicable across multiple tasks, indicating a direct application to prompt engineering. The abstract focuses on the efficient use of prompts and parameter tuning, which are central themes in prompt engineering studies. However, the abstract does not directly mention 'hard prefix prompts' but rather discusses soft prompts and their adaptation for multitask learning, so it may not be fully comprehensive in the context of a systematic review on hard prefix prompts. This is why the rating is not a full 10.",http://arxiv.org/pdf/2303.02861
-declaration-based prompt tuning for visual question answering,8,"The paper presents a method for fine-tuning visual-language models for VQA tasks (Declaration-based Prompt Tuning, DPT), which involves aligning downstream task objectives with pre-training objectives. While the paper focuses on an application within cross-modal tasks (visual question answering), the method of 'prompt tuning' is central to 'prompt engineering,' which involves designing inputs that efficiently guide models to perform specific tasks. Therefore, the concept of reformulating questions into declarative sentence form for prompt tuning is highly relevant to the study of prompt engineering, albeit in a more specialized context.",http://arxiv.org/pdf/2205.02456
-prompt generation networks for efficient adaptation of frozen vision transformers,9,"The abstract describes a new method in prompt engineering, the Prompt Generation Network (PGN), which is highly relevant to the study of how to efficiently adapt frozen vision transformers for various tasks without fine-tuning. The fact that PGN pertains to learning input-dependent prompts places it within the domain of prompt engineering. The reason it is not a full 10 is that it might not cover 'hard prefix prompts' specifically, as the systematic review requires, but rather discusses a more generalized approach to prompt engineering.",http://arxiv.org/pdf/2210.06466
-spt: semi-parametric prompt tuning for multitask prompted learning,9,"The study titled 'spt: semi-parametric prompt tuning for multitask prompted learning' is highly relevant to prompt engineering since it directly deals with an innovative method for prompt tuning, which is a central theme in prompt-based learning and modeling. The semi-parametric approach, utilizing a memory bank to retrieve memory prompts based on discrete prompts, is a novel contribution to the field of prompt engineering, and the extensive experiments conducted across various tasks and domains underscore its potential impact on the efficiency and generalization of large language models.
The reason why the rating is not a full 10 is that the prompt engineering relevance is specific to semi-parametric methods, and it does not address the entire spectrum of prompt engineering techniques, such as hard prefix prompts.",http://arxiv.org/pdf/2212.10929 -cup: curriculum learning based prompt tuning for implicit event argument extraction,7,"The abstract describes a method for enhancing a machine learning model's ability to perform implicit event argument extraction—'Curriculum learning based Prompt tuning (CUP).' This approach is relevant to prompt engineering because it involves adapting prompt templates over different stages of learning to better utilize pre-trained language models. Although the paper does not exclusively focus on 'hard prefix prompts,' which the prompt engineering study may specifically be interested in, it talks about prompt-based models and their tuning, which is closely related to the domain of prompt engineering. Therefore, the relevance to prompt engineering is significant, although not perfectly aligned with the prompt engineering area targeting hard prefixes.",https://arxiv.org/pdf/2205.00498 -zero-label prompt selection,9,"The abstract describes a method named Zero-Label Prompt Selection (ZPS) that evidently pertains to the field of prompt engineering as it directly involves the selection and use of prompts for natural language models without the need for labeled data. Despite not explicitly mentioning 'hard prefix prompts', it addresses a critical component of prompt engineering, which is prompt performance in zero or few-shot settings. The relevance to prompt engineering is high because it contributes to the understanding of how to effectively utilize prompts to improve model performance under constrained conditions.",http://arxiv.org/pdf/2211.04668 -clip-tuning: towards derivative-free prompt learning with a mixture of rewards,8,"The paper describes an innovative approach to prompt learning that is highly relevant to the field of prompt engineering. Derivative-free prompt learning is a part of prompt engineering, and the technique of using 'thinned' networks to create a mixture of rewards is a novel contribution to optimizing prompts. While the paper focuses specifically on Clip-Tuning and derivative-free methods as opposed to a broader systematic review of hard prefix prompts, it still provides valuable insights and advancements in the area of prompt engineering. Therefore, the rating is high for relevance but not the maximum score since it doesn't cover the entire scope of 'hard prefix prompts'.",http://arxiv.org/pdf/2210.12050 -uom&mmu at tsar-2022 shared task: prompt learning for lexical simplification,8,"The paper describes an approach for using prompts in a language model to achieve lexical simplification. It directly relates to prompt engineering since it involves fine-tuning language models with a specifically designed prompt template. The method described is an example of how prompt engineering can be used to improve the performance of language tasks in different settings (zero-shot, fine-tuned, and multilingual). 
This is closely aligned with the study of prompt engineering, although it is focused on one particular application (lexical simplification) rather than hard prefix prompts in a broader sense.",https://aclanthology.org/2022.tsar-1.23.pdf
-bidirectional language models are also few-shot learners,8,"The abstract discusses the concept of prompt-based learning in the realm of bidirectional language models, which is a central component of prompt engineering. It presents a novel technique (SAP) for prompting bidirectional models, which is highly relevant to the study of how to effectively design and use prompts to elicit desired responses from such models. While it doesn't directly address 'hard prefix prompts,' the subject of designing prompts and demonstrating their utility across different models (bidirectional and unidirectional) is pertinent to the broader field of studies into prompt engineering. The work's implications for the adaptability and performance of language models when prompted make it significantly relevant, though not perfectly aligned since the prompt primarily focuses on 'hard prefix prompts.'",http://arxiv.org/pdf/2209.14500
-language models in the loop: incorporating prompting into weak supervision,9,"The document describes a methodology deeply tied to the application of prompt engineering, where large language models are prompted with multiple queries to generate labeled data for a classifier in a weak supervision context. This is highly relevant to prompt engineering studies as it directly involves developing and refining methods for eliciting structured responses from language models through prompts. The only reason the rating is not a perfect 10 is the study's specific focus on weak supervision, which might not cover all aspects of prompt engineering, such as constructing prompts for different kinds of language tasks beyond weak supervision.",http://arxiv.org/pdf/2205.02318
-prompting as probing: using language models for knowledge base construction,8,"The study described in the abstract details the use of various prompting techniques with GPT-3 to perform Knowledge Base Construction, an advanced application of prompt engineering. The multi-step approach to optimizing prompts, including manual prompt curation and the use of true/false questions, directly relates to the field of prompt engineering. Although it does not specifically mention 'hard prefix prompts,' the overarching use of prompts to elicit specific information from a language model is highly relevant. Therefore, the paper is quite pertinent to the study of prompt engineering, but since 'hard prefix prompts' are not exclusively the focus, the rating is not a perfect 10.",http://arxiv.org/pdf/2208.11057
-what does clip know about a red circle? visual prompt engineering for vlms,9,"The abstract describes a study on prompt engineering within the domain of Vision-Language Models, such as CLIP, specifically focusing on the use of visual cues (a red circle) to direct the model's attention. Although the study is about visual prompt engineering rather than traditional text-based prompts ('hard prefix prompts'), it is still highly relevant to the broader field of prompt engineering as it explores how different types of prompts can influence model behavior and performance on various tasks.
The rating is not a perfect 10 because it does not directly address 'hard prefix prompts' in text but instead a visual method, which may not be precisely what is meant by 'prompt engineering' in the original query context.",https://arxiv.org/pdf/2304.06712 -an automatically discovered chain-of-thought prompt generalizes to novel models and datasets,9,"The abstract discusses a study focused on the effectiveness of chain-of-thought (CoT) reasoning prompts across different language models and datasets, which is highly relevant to prompt engineering. The exploration of how previously devised prompts can be applied and generalized to new model generations provides valuable insights for prompt engineering research. The study investigates the impact of prompts on the performance of language models, which is central to the field of prompt engineering. However, the abstract doesn't specifically mention 'hard prefix prompts,' which might slightly reduce the relevance considering the precise topic in the initial prompt.",https://arxiv.org/pdf/2305.02897 -pbnr: prompt-based news recommender system,8,"The paper describes the 'prompt-based news recommendation' (PBNR) system which closely relates to prompt engineering as it involves designing personalized prompts to interact with a pre-trained language model (T5) for the specific task of news recommendation. This system is an example of applying prompt engineering to adapt language models for a specific application. However, the relevance is not a full 10 because the paper seems more focused on the application of prompt engineering in the context of news recommendation, rather than on the study of the hard prefix prompts or the systematic review of the methodology itself.",http://arxiv.org/pdf/2304.07862 -visual clues: bridging vision and language foundations for image paragraph captioning,7,"The study relates to prompt engineering in that it discusses the creation of structured textual prompts, termed 'visual clues,' from an image using a vision model, and then using these prompts to generate image captions with a language model. Although the research does not focus on 'hard prefix prompts' per se, it is relevant to the broader field of prompt engineering, considering it involves the construction and utilization of prompts to facilitate communication between vision and language models. Therefore, it offers insights into one aspect of the prompt engineering area - namely, how to effectively generate prompts for a specific cross-modal task.",http://arxiv.org/pdf/2206.01843 -few-shot self-rationalization with natural language prompts,8,"The presented study explores natural language prompts extensively in the context of self-rationalization models, which is a form of prompt engineering where the model is prompted to not only provide a decision but also to generate explanations for its decisions. Even though the study does not exclusively focus on 'hard prefix prompts', it is relevant to the broader topic of engineering prompts in such a way that enables models to perform complex tasks with minimal training data. The focus on few-shot learning and the use of prompts to improve plausibility ratings also contribute to the field of prompt engineering. 
However, the rating is not a full 10 as the specific term 'hard prefix prompts' is not directly addressed.",https://aclanthology.org/2022.findings-naacl.31.pdf -controllable generation from pre-trained language models via inverse prompting,9,"The abstract presents a direct application of prompt engineering by proposing a novel technique called inverse prompting to improve controllability in text generation from pre-trained language models. The concept of predicting the prompt from generated text during beam search for better alignment between the two is a clear attempt at enhancing the prompt engineering field. The study seems highly relevant to prompt engineering, especially in creating more efficient and controlled generation of texts. The rating is not a full 10 simply because the abstract does not mention 'hard prefix prompts' specifically, which was outlined in the original inquiry regarding a 'systematic review on hard prefix prompts'. However, inverse prompting is still clearly within the domain of prompt engineering.",https://arxiv.org/pdf/2103.10685 -progressive prompts: continual learning for language models,9,"The provided abstract directly addresses the development of a new method within the field of prompt engineering referred to as 'Progressive Prompts.' This approach is relevant because it is a specific technique aimed at improving the capabilities of language models by facilitating continual learning. Since prompt engineering involves the design and utilization of prompts to effectively interact with language models, a study on Progressive Prompts is highly pertinent to the field. The relevance is not rated as a full 10 only because the prompt specifically asks about 'hard prefix prompts,' while this method pertains to soft prompts learned for each task, and it's not clear whether hard prompts are considered or compared in the approach.",http://arxiv.org/pdf/2301.12314 -boosting natural language generation from instructions with meta-learning,7,"The abstract describes a study focused on improving natural language generation using meta-learning strategies, specifically in a multi-task instructional learning setting. While the study does not directly address 'hard prefix prompts,' it does explore how language models can better extract and utilize information from instructions, which is a critical aspect of prompt engineering. Enhancing the generalization of language models to perform unseen tasks based on instructions is relevant to prompt engineering as it addresses the challenge of designing prompts that can guide models to perform specific NLP tasks effectively. The application of meta-learning to MTIL is an innovative approach within the broader field of prompt engineering, thus earning a relevance rating of 7 out of 10.",http://arxiv.org/pdf/2210.11617 -strategic reasoning with language models,9,"The abstract highlights the use of 'systematically generated prompts' in conjunction with large language models to facilitate strategic reasoning, which is highly relevant to prompt engineering. The study's exploration of how prompts can guide AI to generalize to new tasks with little or no additional training intersects with the core concepts of creating effective prompts that drive AI performance. The slight deduction from a perfect score is due to the specific context of strategic games, which may not cover all aspects of prompt engineering, but the principles discussed are broadly applicable.",http://arxiv.org/pdf/2305.19165 -respectful or toxic? 
using zero-shot learning with language models to detect hate speech,8,"The paper focuses on prompt-based methods for hate speech detection, which falls under the broader category of prompt engineering within the field of natural language processing. Prompting is a core technique used in this study and is relevant to the understanding and development of effective prompt strategies in the context of language model applications. Although the paper's primary concern isn't about 'hard prefix prompts' specifically, it still contributes to the knowledge base regarding how prompts can be engineered to enhance zero-shot learning capabilities in AI models, which is pertinent to the study of prompt engineering.",https://aclanthology.org/2023.woah-1.6.pdf -"a sign language recognition system with pepper, lightweight-transformer, and llm",7,"The abstract indicates that prompt engineering was used as part of the process to enable the Pepper Robot to generate natural Co-Speech Gesture responses. While the focus of the study is on sign language recognition and robot interaction, the mention of tailoring interactions through prompt engineering shows relevance to the prompt engineering field. However, the study does not appear to be a comprehensive systematic review on hard prefix prompts specifically but instead applies prompt engineering within the scope of robot interaction and sign language processing. Therefore, the rating is a 7 out of 10, acknowledging the connection without it being the central theme of the research.",https://arxiv.org/pdf/2309.16898 -question decomposition improves the faithfulness of model-generated reasoning,7,"The study discusses a method of improving the quality and faithfulness of responses from large language models by decomposing questions into subquestions, which is related to prompt engineering. The utilization of specific prompting strategies to elicit more explainable and verifiable outputs from the models is a part of prompt engineering. Although the focus is more on question decomposition and the faithfulness of the reasoning process rather than on 'hard prefix prompts' specifically, the principles and findings can still have implications for prompt engineering practices in general, hence the relatively high relevance score.",https://arxiv.org/pdf/2307.11768 -improving gender fairness of pre-trained language models without catastrophic forgetting,8,"The study described in the abstract is highly relevant to prompt engineering because it develops a method called GEEP (GEnder Equality Prompt) to improve the performance of pre-trained language models. GEEP specifically involves learning gender-related prompts, which makes it a direct application of prompt engineering in addressing the issue of gender bias in AI models. Although the study is not a comprehensive systematic review on hard prefix prompts and is more focused on gender fairness, the concept of 'hard prefix prompts' as a key component of 'prompt engineering' makes this study quite relevant to the broader field of prompt engineering.",https://aclanthology.org/2023.acl-short.108.pdf -(ab)using images and sounds for indirect instruction injection in multi-modal llms,8,"The provided title and abstract are relevant to prompt engineering study as they discuss a method of manipulating the output of multi-modal LLMs (Large Language Models) through indirect prompt and instruction injection via images and sounds, which can be considered a form of prompt engineering. 
Although the focus is on adversarial perturbations and security, understanding this process is crucial for developing effective prompts, especially in the context of preventing misuse. It highlights the importance of prompt design in multi-modal systems and contributes to the broader field of prompt engineering by exploring potential vulnerabilities and manipulative techniques.",https://arxiv.org/pdf/2307.10490 -chat-rec: towards interactive and explainable llms-augmented recommender system,7,"The relevance of the provided study to prompt engineering is moderately high, with a rating of 7 out of 10. The study focuses on a method for augmenting recommender systems with large language models by converting user data into prompts, which falls within the scope of prompt engineering. Prompt design plays a crucial role in enabling the Chat-Rec system to function by guiding the language model to generate relevant and personalized recommendations. While the study does not specifically target 'hard prefix prompts,' it does explore a practical application of prompts within an interactive system and contributes to the body of knowledge on how to effectively leverage LLMs through prompt engineering. However, if the focus were specifically on a 'systematic review on hard prefix prompts,' the rating might be lower as this study presents an application rather than a review on hard prefix prompts.",http://arxiv.org/pdf/2303.14524 -dialogue for prompting: a policy-gradient-based discrete prompt optimization for few-shot learning,9,"The study described focuses on prompt-based optimization for few-shot learning in the context of pre-trained language models, which is directly relevant to prompt engineering. The novel Dialogue-comprised Policy-gradient-based Discrete Prompt Optimization (DP2O) method aims to improve the efficiency, quality, and applicability of prompt-based methods in NLP tasks. The use of a reinforcement learning framework to optimize discrete prompts signifies a technical advancement in the field. The only reason it doesn't score a perfect 10 is that it doesn't address 'hard prefix prompts' specifically but discusses discrete prompt optimization in a broader sense.",https://arxiv.org/pdf/2308.07272 -emotion-conditioned text generation through automatic prompt optimization,9,"The title and abstract discuss an automatic prompt optimization approach specifically for emotion-conditioned text generation, which is clearly within the domain of prompt engineering. The study focuses on refining prompts to improve the performance of instruction-fine-tuned models, which is at the core of prompt engineering studies. The relevance is not rated a perfect 10 as the study is narrowly focused on emotion-conditioned text generation and not prompt engineering in general. Overall, however, the relevance to prompt engineering is very high.",https://arxiv.org/pdf/2308.04857 -query-dependent prompt evaluation and optimization with offline inverse rl,8,"The abstract indicates a study focused on enhancing arithmetic reasoning of LLMs (Large Language Models) specifically through prompt optimization, which is directly related to prompt engineering. The introduction of Prompt-OIRL as a method to evaluate query-prompt pairs and recommend optimal prompts without requiring live interaction with LLMs is notable for prompt engineering efficiency and effectiveness. It suggests a more nuanced approach to evaluating and optimizing prompts based on query dependency, which is an important aspect of prompt engineering. 
However, the study is not centered on 'hard prefix prompts' specifically but rather on a broader prompt optimization problem, which includes but is not limited to hard prefix prompts. Therefore, the rating is not a perfect 10.",https://arxiv.org/pdf/2309.06553 -visual-language prompt tuning with knowledge-guided context optimization,8,"The presented abstract directly addresses an aspect of prompt engineering, focusing on improving the generalization ability of learnable prompts in the context of a visual-language model. The introduction of Knowledge-guided Context Optimization (KgCoOp) pertains to the optimization of prompts, which is a fundamental component of prompt engineering. The relevance rating is not a full 10 because the study specifically targets visual-language models and may not cover other prompt engineering contexts, such as text-based models or hard prefix prompts more broadly.",https://arxiv.org/pdf/2303.13283 -cpl: counterfactual prompt learning for vision and language models,8,"The paper discusses 'Counterfactual Prompt Learning (CPL)' for vision and language models, which is directly related to prompt tuning, a subset of prompt engineering. It introduces an innovative approach to optimize prompt learning and aims to improve generalization of learned representations for few-shot learning tasks. Although it does not specifically mention 'hard prefix prompts', it still contributes to the broader field of prompt engineering by advancing techniques for efficient and non-spurious prompt learning. This is highly relevant for the study of prompt engineering as it explores new methods and their impact on model performance. Therefore, the rating is high but not maximum, as the exact focus on 'hard prefix prompts' is not clear from the abstract.",http://arxiv.org/pdf/2210.10362 -understanding and mitigating overfitting in prompt tuning for vision-language models,8,"The abstract discusses the mitigation of overfitting in prompt tuning for vision-language models, which is highly relevant to prompt engineering studies. The focus on understanding and addressing overfitting issues during prompt tuning is pertinent as prompt engineering encompasses the design, optimization, and evaluation of prompts used to guide machine learning models. The abstract presents a direct application and improvement in the field of prompt engineering by proposing a new method (Subspace Prompt Tuning) to enhance the training process of models, making the study very relevant. However, it does not explicitly cover 'hard prefix prompts' which is specifically mentioned in the query, thus the rating is slightly reduced.",https://arxiv.org/pdf/2211.02219 -bbtv2: pure black-box optimization can be comparable to gradient descent for few-shot learning,8,"The paper is highly relevant to prompt engineering as it presents an advanced technique (BBTv2) for optimizing the prompts used in language models, seeking to improve performance in few-shot learning tasks without relying on gradient-based methods. This research is directly related to how prompts can influence model performance and efficiency, which is a core aspect of prompt engineering. Although it does not specifically address 'hard prefix prompts' as mentioned in the initial study prompt, it deals with the continuous prompt tokens and optimizing them, which falls under the broader umbrella of prompt engineering. 
Therefore, the rating is not a full 10 but remains high due to the close relevance.",http://arxiv.org/pdf/2205.11200 -connecting large language models with evolutionary algorithms yields powerful prompt optimizers,9,"The paper directly relates to prompt engineering by introducing a framework (EvoPrompt) for optimizing prompts using evolutionary algorithms, which is a novel approach within the field of prompt engineering study. The use of both large language models and evolutionary algorithms specifically to improve the efficiency and effectiveness of prompt generation is extremely relevant to those researching how to develop better prompts for LLMs. The only reason it does not receive a full 10 is that, without access to the full text, it's not clear how much the paper focuses on 'hard prefix prompts' specifically, if at all, since it doesn't mention this specific term in the provided abstract or TLDR content.",https://arxiv.org/pdf/2309.08532 -iterative prompt learning for unsupervised backlit image enhancement,8,"The abstract describes a study that focuses on the development of an unsupervised image enhancement method by using prompt learning within the CLIP framework. Although the primary application is not textual prompt engineering but rather improving visual quality in backlit images, the concept of iterative prompt learning is highly relevant to prompt engineering. The lifecycle of prompts, their optimization, and their iterative improvement are at the core of prompt engineering studies. This work can contribute to the understanding of prompt-based models and how they can be fine-tuned for specific tasks, which is valuable knowledge for the field of prompt engineering. Hence, the relevance rating is 8, acknowledging the connection to prompts and learning frameworks but also recognizing that the study doesn't focus on textual prompts or their direct use in text-based models.",https://arxiv.org/pdf/2303.17569 -temporally-extended prompts optimization for sam in interactive medical image segmentation,7,"The study described in the abstract is somewhat relevant to prompt engineering as it involves optimizing the interaction between human experts and a machine learning model through the form of prompts (e.g., points, bounding boxes). However, the primary focus seems to be on the application of this technique to the medical image segmentation field rather than the theory or methodology of prompt engineering itself. The relevance is thus rated a 7, recognizing the contribution to the prompt engineering field in the specific context of medical image segmentation but also noting that it does not address broader prompt engineering topics.",http://arxiv.org/pdf/2306.08958 -styleclip: text-driven manipulation of stylegan imagery,7,"The relevance to prompt engineering is substantial, as the study addresses a text-based interface which involves users providing text prompts that manipulate images generated by StyleGAN. This process inherently relies on prompt engineering to achieve meaningful image manipulations, effectively turning textual descriptions into stylistic changes in images. The use of CLIP models to understand and execute these prompt-induced manipulations highlights an important application of prompt engineering in the field of AI and image processing. 
However, the primary focus of the study is on the interface and leveraging CLIP for image manipulation rather than the detailed study of the prompt engineering itself, which slightly reduces the rating.",https://arxiv.org/pdf/2103.17249 -null-text inversion for editing real images using guided diffusion models,7,"The paper presents an inversion technique and a method for text-based image editing using diffusion models, which involves prompt engineering concepts such as working with textual embeddings and guiding diffusion models using text. While the focus is on image editing rather than constructing or evaluating hard prefix prompts explicitly, the techniques developed could be relevant to prompt engineering by enabling more sophisticated control and manipulation of generated content based on text prompts. However, the study does not directly address hard prefix prompts in systematic review, thus the relevance is significant but not complete.",https://arxiv.org/pdf/2211.09794 -clip-mesh: generating textured meshes from text using pretrained image-text models,8,"The given abstract presents a technique that utilizes a pre-trained CLIP model for the zero-shot generation of textured 3D models from text prompts, which aligns well with the field of 'prompt engineering' as it demonstrates a practical application of generating content from textual descriptions. The relevance is marked as an 8 because while it heavily leverages the engineering of prompts to create 3D models, the focus is on the product of the prompt (a 3D model) rather than on the study of prompt engineering itself. It does not address the systematic review aspect of hard prefix prompts, but it is related to the domain of how text prompts can guide AI to produce desired outputs.",https://arxiv.org/pdf/2203.13333 -what changes can large-scale language models bring? intensive study on hyperclova: billions-scale korean generative pretrained transformers,8,"The abstract indicates extensive exploration of prompt-based learning within the context of a non-English large-scale language model, HyperCLOVA, and discusses the integration of prompt optimization into the prompt engineering pipeline. This is highly relevant to prompt engineering, but not specifically centered on 'hard prefix prompts'. However, it does address prompt engineering more broadly and introduces an interactive prompt engineering interface, suggesting considerable coverage of the topic. Some points were deducted as the abstract does not focus precisely on 'hard prefix prompts', but instead on a wider range of prompt engineering aspects.",https://aclanthology.org/2021.emnlp-main.274.pdf -directed diffusion: direct control of object placement through attention guidance,7,"The study described in the abstract engages with the concept of hard prompt engineering by introducing methods for providing 'direction' to the model's output, specifically in terms of spatial object placement. This work falls under the study of prompt engineering to the extent that it addresses a fine-grained aspect of the control mechanism one might use in a prompt to guide the output of a generative model. However, the focus is somewhat tangential to hard prefix prompts specifically, as the emphasis seems to be on the manipulation of cross-attention maps rather than the construction of text prompt prefixes. 
The rating is not a perfect 10 because the abstract does not directly reference hard prefix prompts or their systematic review; rather, it offers a novel contribution that could be considered in the broader field of prompt engineering within generative AI.",https://arxiv.org/pdf/2302.13153 -clip-actor: text-driven recommendation and stylization for animating human meshes,7,"The relevance of the described paper 'clip-actor: text-driven recommendation and stylization for animating human meshes' to prompt engineering study is moderately high. While the main focus is on animating 3D human meshes using text prompts, the fact that it leverages natural language prompts to drive the animation process indicates an overlap with prompt engineering research. The system's ability to interpret and respond to natural language inputs demonstrates a practical application of prompt engineering in the field of computer graphics and animation. However, the study is not explicitly centered on the systematic review or theoretical examination of hard prefix prompts in the broader context of prompt engineering, which slightly limits its full relevance to the specific subject of a comprehensive systematic review on hard prefix prompts.",http://arxiv.org/pdf/2206.04382 -promptboosting: black-box text classification with ten forward passes,9,"The abstract discusses PromptBoosting, an approach to text classification that effectively uses prompts to train a classifier without needing access to the underlying language model's internal workings, which is highly relevant to prompt engineering. The method involves creating a set of prompts and using an ensemble learning algorithm to improve classification performance. This process aligns closely with prompt engineering by proposing a novel way to interface with and manipulate language models using prompts, thereby making it highly pertinent to studies in prompt engineering. The paper does not specifically focus on 'hard prefix prompts' as stated in the potentially narrower research interest of the initial inquiry but still provides significant insights into the general area of prompt-based methods.",http://arxiv.org/pdf/2212.09257 -reward collapse in aligning large language models,8,"The paper discusses an important aspect of prompt-based training in large language models, specifically how prompt-related information is incorporated into the training process. This is highly relevant to prompt engineering because it deals with the effectiveness of prompts and the responses generated by language models. The concept of 'reward collapse' is directly related to the outcomes of different prompts, and thus to the study of prompt engineering. The paper proposes a solution to make rewards prompt-dependent, which is a significant concern in prompt engineering. While it does not directly address 'hard prefix prompts', the study's implications for the design of prompts and training methods are closely related to prompt engineering.",http://arxiv.org/pdf/2305.17608 -late prompt tuning: a late prompt could be better than many prompts,9,"The provided abstract is highly relevant to prompt engineering study as it discusses prompt tuning—a specific area within prompt engineering. It introduces 'Late Prompt Tuning' as a method to improve efficiency and performance of prompt tuning, which is directly related to the concerns of prompt engineering. 
The only reason why it is not rated a perfect 10 is that the abstract does not explicitly mention 'hard prefix prompts,' but rather focuses on an improved methodology of soft prompt tuning. Nevertheless, understanding the prompt tuning aspect, even if it is soft prompt related, is essential for comprehensive knowledge in the overall field of prompt engineering.",http://arxiv.org/pdf/2210.11292 -making pre-trained language models end-to-end few-shot learners with contrastive prompt tuning,9,"The paper presents a framework related to improving the efficiency of PLMs in low-resource scenarios through a method known as Contrastive Prompt Tuning. It tackles the challenge of creating task-specific prompts and verbalizers without manual engineering, which is highly relevant to the field of prompt engineering. The mention of 'task-invariant continuous prompt encoding' and 'fully trainable prompt parameters' directly relates to engineering prompts to improve few-shot learning capabilities of language models. Therefore, the study is highly pertinent to prompt engineering, especially considering its focus on end-to-end and contrastive learning approaches for enhancing language model performance. The only reason it is not rated a full 10 is that it doesn't explicitly mention 'hard prefix prompts,' which the original study inquiry specified, but it covers the overarching theme of prompt engineering sufficiently.",https://arxiv.org/pdf/2204.00166 -lpt: long-tailed prompt tuning for image classification,7,"The paper introduces an approach for adapting pretrained models to long-tailed classification problems using prompts. This is relevant to prompt engineering since LPT (Long-tailed Prompt Tuning) involves creating and tuning prompts as a method of model adaptation, which falls under the broader category of prompt engineering strategies. The systematic review sought is broader and looks for hard prefix prompts, which might imply a specific subset of prompt engineering. Nonetheless, as LPT involves modifying prompt mechanisms for a specific end, it shares concepts with the overall field of prompt engineering. The rating is not a full 10 because the described method does not directly focus on the general study of prompt engineering or the particular 'hard prefix prompts' but rather a specialized application of prompt tuning in image classification.",http://arxiv.org/pdf/2210.01033 -multi-prompt alignment for multi-source unsupervised domain adaptation,8,"The abstract describes the use of prompts in the context of unsupervised domain adaptation, introducing a new framework called Multi-Prompt Alignment (MPA). This is directly related to prompt engineering as it involves training and aligning prompts to minimize domain gaps. Although the focus here is more on domain adaptation rather than the study of 'hard prefix prompts' in isolation, the application of prompt learning techniques makes it relevant to the field of prompt engineering. The rating is not a full 10 because the abstract does not directly address a comprehensive systematic review on hard prefix prompts per se, but rather introduces a novel application of prompt engineering in UDA.",http://arxiv.org/pdf/2209.15210 -eliciting knowledge from pretrained language models for prototypical prompt verbalizer,9,"The paper describes an approach that directly pertains to prompt engineering by discussing the elicitation of knowledge from pretrained models and the optimization of said models for prompt-tuning. 
The concept of a prototypical prompt verbalizer and the use of contrastive learning are specific methodologies within the broader field of prompt engineering, thus highly relevant. The rating isn't a perfect 10 as the abstract was not provided, and therefore the review may not cover all aspects of 'hard prefix prompts' specifically mentioned in the initial term.",https://arxiv.org/pdf/2201.05411 -fine-grained retrieval prompt tuning,7,"The paper titled 'Fine-grained Retrieval Prompt Tuning' is relevant to prompt engineering as it introduces a method (FRPT) involving prompts to steer a pre-trained model's behavior without fine-tuning the entire model. This is in line with the concept of prompt engineering wherein strategic prompts are used to harness a model's capabilities for specific tasks. Although the paper deals with a specialized domain of fine-grained object retrieval and is more focused on the retrieval aspect rather than prompt engineering in a broad sense, the principles and methods it introduces are applicable to the study of prompt engineering, especially in how prompts can be used to adapt a model's output without extensive retraining. The rating is not a full 10 because the paper appears to be narrowly focused on a specific instance of prompt use, rather than a comprehensive systematic review on hard prefix prompts as potentially indicated by the phrase 'prompt engineering study.'",http://arxiv.org/pdf/2207.14465 -improving chatgpt prompt for code generation,9,"The abstract provided details an empirical study on how prompt design, particularly in the use of ChatGPT for code generation tasks, affects performance. This is highly relevant to prompt engineering, as it outlines a method of prompt optimization (leveraging the chain-of-thought strategy) and discusses the impact of different prompts on the efficacy of an AI model. It does not focus specifically on 'hard prefix prompts,' as might be suggested by the original query on 'prompt engineering study,' but it does deal with the broader area of prompt engineering, warranting a high relevance rating.",http://arxiv.org/pdf/2305.08360 -dynamic prompting: a unified framework for prompt tuning,9,"The paper in the title focuses on the topic of prompt tuning, specifically the effectiveness of dynamic prompts versus fixed soft prompts. It directly addresses optimizing prompt position and how it affects performance in extracting knowledge from various pretrained models. The 'hard prefix prompts' mentioned in the request for a systematic review relates to the broader field of prompt engineering and tuning, and while the paper appears to discuss a more advanced approach (dynamic prompts), it is highly relevant to the study of prompts in general, including hard prefixes. The abstract provided offers insights and tangible outcomes of prompt tuning research, thus the relevance rating is high. However, it is not exclusively focused on 'hard prefix prompts' but considers prompt tuning more broadly, hence the rating is not a perfect 10.",http://arxiv.org/pdf/2303.02909 -stylediffusion: prompt-embedding inversion for text-based editing,7,"The given abstract is moderately relevant to prompt engineering study. It discusses a method for text-based editing of images using pretrained diffusion models, which involves prompt-editing. The relevance is substantial because working with prompts is integral to guiding AI models in generating or editing content. 
The paper proposes improvements for image editing using text prompts, which is related to prompt engineering in the way that it attempts to refine how prompts influence the AI's output. However, the focus seems to be more on image editing and attention regularization rather than hard prefix prompts, which would be the core topic in a prompt engineering study. Hence, the relevance is not complete, but the approach to handle and edit prompts for better results is pertinent to the field.",https://arxiv.org/pdf/2303.15649 -a simple zero-shot prompt weighting technique to improve prompt ensembling in text-image models,8,"The abstract presents a study that is directly related to prompt engineering, focusing on automated scoring and ensembling of prompts to improve the accuracy of zero-shot text-image models. Although the study does not specifically mention 'hard prefix prompts', it does address the broader topic of prompt engineering and optimization, which is highly relevant. The only reason it does not receive a full 10 is the absence of a direct discussion about 'hard prefix prompts', which might be considered more specialized within the domain of prompt engineering.",https://arxiv.org/pdf/2302.06235 -drpt: disentangled and recurrent prompt tuning for compositional zero-shot learning,8,"The provided abstract describes research on prompt tuning, specifically a novel framework called DRPT, in the context of Compositional Zero-shot Learning (CZSL). Its relevance to prompt engineering is high, given that it addresses the optimization of prompts through the use of disentangled and recurrent tuning strategies. While the study might not focus exclusively on 'hard prefix prompts' as mentioned in the initial prompt, the described techniques are directly related to enhancing the efficacy of prompts in interacting with vision-language models (VLMs). Therefore, the content is substantially pertinent to the broader field of prompt engineering.",http://arxiv.org/pdf/2305.01239 -reprompt: automatic prompt editing to refine ai-generative art towards precise expressions,9,"The abstract pertains directly to the field of prompt engineering, specifically concerning the refinement of AI-generated images based on textual prompts. The introduction of RePrompt, an automatic method for editing prompts to achieve precise emotional expressiveness in AI-generated images, represents a focused study within prompt engineering. This is highly relevant since it deals with optimizing text prompts, albeit in the context of generative art rather than 'hard prefix prompts' used for textual outputs or structured data queries. The reason it's not a 10 is the study's specific angle on emotional expressiveness, which may not encompass the entirety of prompt engineering studies, such as technical or informational aspects.",https://arxiv.org/pdf/2302.09466 -prompt engineering for text-based generative art,8,"The paper is significantly relevant to prompt engineering study as it explores prompt modifiers in the context of text-based generative art, which is a direct application of prompt engineering techniques. The identification of a taxonomy of prompt modifiers aids in understanding how prompts can be engineered or modified for specific outcomes in creative AI applications. Although the study is not exclusively on 'hard prefix prompts', it does provide valuable insights into the broader field of prompt engineering, which is inclusive of various types of prompts including hard prefixes. 
The conclusion mentioning further research opportunities suggests its utility in expanding the knowledge base of prompt engineering. The rating is not a full 10 because the study is specific to the domain of text-based generative art and does not focus solely on hard prefix prompts, which may be a subset of the broader topic of prompt modifiers.",http://arxiv.org/pdf/2204.13988 -prompting ai art: an investigation into the creative skill of prompt engineering,9,"The provided abstract directly pertains to the study of prompt engineering, focusing on understanding the skillset necessary for effective text-to-image generation, which is indeed a form of prompt engineering. The research explores participants' abilities to assess, write, and improve prompts, which is highly relevant to the study of prompt engineering as a creative process. The conclusion that prompt engineering requires expertise and practice is a significant insight into the field. The only reason the full score is not given is that the abstract does not specifically address 'hard prefix prompts' which was mentioned in the initial query, indicating it may not cover all possible facets of prompt engineering.",http://arxiv.org/pdf/2303.13534 -grimm in wonderland: prompt engineering with midjourney to illustrate fairytales,8,"The given abstract describes a study that is highly relevant to prompt engineering, as it focuses on refining text inputs to achieve better outcomes in text-to-image generation, specifically for the purpose of illustrating popular fairytales. The investigation into a methodical process for converting pre-existing text into image prompts aligns with the essence of prompt engineering. However, the study's relevance is slightly limited as it emphasizes action research within the context of fairytales' illustration rather than a broad analysis of the hard prefix prompts aspect in the general field of prompt engineering.",https://arxiv.org/pdf/2302.08961 -prompt engineering in medical education,8,"The abstract discusses the importance of prompt engineering within the context of medical education using generative language models (GLMs). It highlights the necessity of properly formulated instructions (or prompts) to maximize the utility of GLMs like ChatGPT, Perplexity AI, and Google Bard. The relevance is high because it directly addresses how prompt crafting affects the performance of GLMs in delivering personalized learning and feedback, which is core to prompt engineering studies. However, it is not a perfect 10 as it does not focus solely on the systematic review of 'hard prefix prompts' but rather on prompt engineering in a broader sense within the specific domain of medical education.",https://www.mdpi.com/2813-141X/2/3/19/pdf?version=1693479951 -"multi-party goal tracking with llms: comparing pre-training, fine-tuning, and prompt engineering",9,"The study involves a direct comparison of different adaptation methods for language models, including prompt engineering, to handle a complex task such as multi-party goal-tracking and intent-slot recognition in conversations. The relevance to prompt engineering is high as the paper specifically evaluates and discusses the efficacy of prompt engineering techniques and compares it to other methodologies such as fine-tuning and pre-training in the context of understanding user goals in multi-party conversations. 
The high performance of prompt engineering in the few-shot setting demonstrates its significance in the study of language model capabilities and applications.",https://arxiv.org/pdf/2308.15231 -improving formality-sensitive machine translation using data-centric approaches and prompt engineering,8,"The paper appears to be highly relevant to prompt engineering as it explicitly mentions the use of 'empirically-grounded prompt engineering' as a part of its methodology to improve machine translation relative to a baseline. Prompt engineering is used here in conjunction with a data-centric approach to specifically address the challenge of formal language variations in translation, indicating a direct application of prompt engineering for enhancing model performance. The rating is not a full 10 since the focus is not solely on prompt engineering, but also includes language-specific data-driven approaches.",https://aclanthology.org/2023.iwslt-1.40.pdf -artificial intelligence prompt engineering as a new digital competence: analysis of generative ai technologies such as chatgpt,9,"The provided abstract for the article, 'artificial intelligence prompt engineering as a new digital competence: analysis of generative ai technologies such as chatgpt,' is highly relevant to the field of prompt engineering. It discusses creating a theoretical framework for AI prompt engineering, analyzing best practices through extensive literature review, and introducing the AI PROMPT framework, which is directly related to the study of prompt engineering. It only falls short of a perfect score because the abstract does not mention 'hard prefix prompts' specifically, which was the core subject of the initial statement. However, the general discussion on AI prompt engineering strategies and their implications in various sectors makes it significantly relevant to the topic at hand.",https://eber.uek.krakow.pl/index.php/eber/article/view/2142/863 -cases of efl secondary students' prompt engineering pathways to complete a writing task with chatgpt,9,"The paper presents an empirical study about how EFL secondary students engineer prompts for a chatbot, specifically ChatGPT, in the context of completing a writing task. It explores the strategies students use and the trial-and-error process they undergo, which is central to understanding the practical applications and educational needs for prompt engineering. The study is highly relevant to the subject of prompt engineering as it shows the significance of this skill in educational settings and provides direct insight into the ways in which non-technical users interact with language models. The reason for not giving a full score of 10 is that it does not cover the theoretical or systematic review aspect of prompt engineering, but focuses specifically on the practical application and user experience.",https://arxiv.org/pdf/2307.05493 -"optimizing mobile-edge ai-generated everything (aigx) services by prompt engineering: fundamental, framework, and case study",9,"The title and abstract indicate that the study is highly relevant to prompt engineering as it directly discusses optimizing services through prompt engineering methods. The study reviews the evolution from AI-Generated Content (AIGC) to AI-Generated Everything (AIGX), and presents a framework that uses prompt engineering to enhance the performance of AI services on edge devices. It also includes a case study on training a prompt optimizer, which is directly related to employing prompt engineering techniques. 
The only reason the rating is not a full 10 is that the study focuses on a specific application (mobile-edge services) rather than prompt engineering in the broadest sense, which could include other domains and use-cases.",https://arxiv.org/pdf/2309.01065 -exploring the intersection of large language models and agent-based modeling via prompt engineering,9,"The title and abstract are highly relevant to prompt engineering as they describe research that directly utilizes large language models through prompt engineering to simulate human behavior. By exploring two specific simulations (a negotiation and a murder mystery game), the study emphasizes the application of prompt engineering in creating believable scenarios, which aligns closely with the prompt engineering discipline. One point is deducted because the abstract does not explicitly mention 'hard prefix prompts,' which was specified in your original request; however, it does focus on the broader context of prompt engineering within large language models.",https://arxiv.org/pdf/2308.07411 -contextual stance classification using prompt engineering,9,"The paper is highly relevant to prompt engineering as it directly addresses the use of natural language prompts in the domain of few-shot learning. Furthermore, it relates to the creation of prompts based on existing conversation threads, which is a specific application of prompt engineering. The focus on how these prompts can potentially replace supervised methods while maintaining accuracy and reducing development costs further emphasizes the practical significance of prompt engineering in machine learning tasks such as contextual stance classification. The rating is not a full 10 because the abstract does not explicitly mention 'hard prefix prompts' which was a specific aspect mentioned in the initial query.",https://sol.sbc.org.br/index.php/stil/article/download/25435/25256 -promptmagician: interactive prompt engineering for text-to-image creation,8,"The described research directly addresses prompt engineering within the context of text-to-image generation. It focuses on helping users effectively generate prompts that produce the desired image outcomes, which is a core aspect of prompt engineering. The relevance rating is not a full 10 because the study does not specifically discuss 'hard prefix prompts' as mentioned in your query; rather, it deals with prompt engineering in a broader sense. However, the system it introduces, PromptMagician, is very relevant as it is a direct application of prompt engineering principles to improve user interaction with generative models.",https://arxiv.org/pdf/2307.09036 -logprompt: prompt engineering towards zero-shot and interpretable log analysis,8,"The abstract describes a novel approach to log analysis using zero-shot learning through the employment of large language models (LLMs) with advanced prompt strategies, which is highly relevant to the field of prompt engineering. The significant performance improvements and the use of no training data underscore the utility of prompt engineering techniques in practical applications. 
However, the paper seems to be focused more on the application of prompt engineering within the specific domain of log analysis rather than a broad study of hard prefix prompts or a general evaluation of various prompt engineering strategies across different domains.",https://arxiv.org/pdf/2308.07610 -a survey on segment anything model (sam): vision foundation model meets prompt engineering,7,"While the title suggests the primary focus of the study is on the Segment Anything Model (SAM), the abstract indicates a secondary aspect that touches upon the versatility of SAM when combined with various models, including some that involve prompt engineering (e.g., ChatGPT). Although prompt engineering is not the central theme of the study, the impact of the work on prompt engineering is tangential and relevant as it involves the integration of SAM with models that may require or benefit from prompt engineering techniques. Therefore, the relevance to prompt engineering is moderate to high.",http://arxiv.org/pdf/2306.06211 -plain template insertion: korean-prompt-based engineering for few-shot learners,8,"The abstract indicates that the study is highly relevant to prompt engineering as it focuses on the application of prompt-based few-shot learning to Korean-language datasets, and it specifically mentions the introduction of a plain template insertion method. The fact that it addresses few-shot learning, data scarcity, and the adaptability of prompts to language-specific contexts means that it offers valuable insights into the field of prompt engineering. However, it does not explicitly address 'hard prefix prompts' as mentioned in the original query, which is why the rating is not a full 10.",https://ieeexplore.ieee.org/ielx7/6287639/6514899/09913979.pdf -polyglot prompt: multilingual multitask prompt training,9,"The paper is highly relevant to prompt engineering as it explores the concept of 'Polyglot Prompting', a framework specifically designed for prompt-based learning across multiple languages and tasks. Prompt engineering is central to the approach of creating a unified semantic space within a multilingual context. Additionally, the paper's comprehensive evaluation and the development of an interpretable multilingual evaluation methodology further contribute to the field of prompt engineering by providing insights and tools that can be used to gauge the effectiveness of different prompting methods in a multilingual setting.",https://aclanthology.org/2022.emnlp-main.674.pdf -"chatgpt prompt patterns for improving code quality, refactoring, requirements elicitation, and software design",9,"The paper outlines a set of patterns for prompt designing, explicitly targeting the automation of software engineering tasks through large language models (LLMs) like ChatGPT. The relevance to prompt engineering is high because it directly discusses prompt design techniques for specific professional tasks and contributes a catalog of patterns that can enhance the effectiveness of LLMs in software engineering contexts. The reason for not giving a full 10 is that the paper does not solely focus on the general concept of 'hard prefix prompts' but rather on broader prompt patterns for software engineering activities.",http://arxiv.org/pdf/2303.07839 -"a study on prompt design, advantages and limitations of chatgpt for deep learning program repair",8,"The study directly relates to prompt engineering by investigating how ChatGPT's performance in deep learning program repair can be enhanced through tailored prompts. 
It explores ChatGPT's debugging capabilities and proposes prompt templates, which are central to prompt engineering. Additionally, the study addresses the effectiveness of dialogue in facilitating program repair, which is a novel aspect of prompt design. The rating is not a perfect 10 because the focus is more on program repair rather than exclusively on prompt engineering. However, prompt design is a significant component of this research, making it highly relevant to the field of prompt engineering.",http://arxiv.org/pdf/2304.08191 -ip-adapter: text compatible image prompt adapter for text-to-image diffusion models,7,"The paper describes IP-Adapter, an adapter for text-to-image diffusion models to incorporate image prompts along with text prompts. Although not focused on 'hard prefix prompts' specifically within text prompt engineering, it tackles the broader area of prompt engineering by enhancing the interface between human input and AI models to improve the generation of images. It is relevant to the field as it addresses the complexity of prompt engineering and offers a solution that enhances multimodal interactions, thus providing insights into how prompt systems could be improved. However, the paper's main focus is on the technical implementation of the adapter and the decoupled cross-attention mechanism for image prompts, so it is not entirely centered on the systematic review or standard text-based prompt engineering.",https://arxiv.org/pdf/2308.06721 -prompt space optimizing few-shot reasoning success with large language models,9,"The title and abstract indicate that the study is highly relevant to prompt engineering with a particular focus on optimizing prompt strategies for large language models in few-shot reasoning contexts. The introduction of 'Prompt Space' and its theoretical foundation based on text embeddings and matrix decomposition aligns closely with the field of prompt engineering. The claimed improvements over state-of-the-art methods further validate the study's pertinence to the topic. The only reason it is not a perfect 10 is that the study does not appear to narrowly focus on 'hard prefix prompts', but rather on prompt engineering as a whole, which may include a broader range of techniques beyond just hard prefix prompts.",http://arxiv.org/pdf/2306.03799 -bim-gpt: a prompt-based virtual assistant framework for bim information retrieval,7,"The abstract presents a study focused on utilizing prompt-based virtual assistant technologies for information retrieval in the construction industry, which is tangentially relevant to prompt engineering. While the primary application is specific to building information models (BIM), the fact that it involves engineering prompt systems (in this case, for integration with GPT models) to interpret natural language makes it partially relevant to the study of prompt engineering. The rating is not higher because the study is not solely focused on the systematic review of hard prefix prompts or prompt engineering specifically but rather on an application of those principles within a specific domain.",http://arxiv.org/pdf/2304.09333 -api entity and relation joint extraction from text via dynamic prompt-tuned language model,7,"The paper discusses the use of a dynamic prompt-tuned language model for the task of API entity and relation extraction, which is a form of prompt engineering applied to software engineering tasks. 
Although the main focus is on API extraction rather than the prompt engineering itself, the use of dynamic prompts is a relevant application of prompt engineering techniques. Hence, the relevance to prompt engineering study is significant, but not entirely central to the work, as prompt engineering seems to be a part of the method rather than the sole focus.",https://dl.acm.org/doi/pdf/10.1145/3607188 -performance of chatgpt on the us fundamentals of engineering exam: comprehensive assessment of proficiency and potential implications for professional environmental engineering practice,7,"The study focuses on the use of ChatGPT in the context of an engineering certification exam, which is highly relevant to the engineering field. It examines the role of AI in educational settings, specifically related to professional environmental engineering practice. However, the study is narrowly tailored to the Environmental sector of the FE exam and does not directly address 'prompt engineering' as a systematic study across various disciplines or in a broad context. Prompt engineering usually refers to how prompts are structured to elicit the best response from an AI model, and while the abstract mentions 'noninvasive prompt modifications', it does not seem to be the central focus of the study. Therefore, the rating is a 7, indicating substantial but not complete relevance to prompt engineering study.",http://arxiv.org/pdf/2304.12198 -symbolic knowledge distillation: from general language models to commonsense models,9,"The abstract provided discusses the use of prompt engineering as a central technique in the process of Symbolic Knowledge Distillation. The careful construction of prompts and the use of a critic model to refine the results from a general language model like GPT-3 directly relate to the field of prompt engineering. It demonstrates the effectiveness of well-engineered prompts in training more specialized commonsense models. Although the abstract does not focus exclusively on 'hard prefix prompts,' the relevance of the work to the broader field of prompt engineering is substantial, meriting a high rating.",https://aclanthology.org/2022.naacl-main.341.pdf -"chat2vis: generating data visualizations via natural language using chatgpt, codex and gpt-3 large language models",9,"The paper discusses a novel system, Chat2VIS, which relies heavily on effective prompt engineering to guide large language models (LLMs) like ChatGPT and GPT-3 to generate data visualizations from natural language text. Although the focus is more on the application side of using LLMs for data visualization, the process inevitably involves the study and construction of prompts that can accurately convey user queries to these models, despite potential misspecification or under-specification. This reliance on specialized prompt design for improving the reliability and accuracy of LLM outputs suggests a significant overlap with the topic of prompt engineering. 
The rating is not a full 10 because the abstract does not indicate if the study explicitly covers theoretical aspects of hard prefix prompts or a systematic review of such.",https://ieeexplore.ieee.org/ielx7/6287639/10005208/10121440.pdf -"chatgpt evaluation on sentence level relations: a focus on temporal, causal, and discourse relations",7,"The abstract provided is relevant to prompt engineering to a significant extent as it describes the evaluation of an AI language model, specifically ChatGPT, using different prompt templates such as zero-shot, zero-shot PE (prompt engineering), and ICL (in-context learning). These templates are inherently connected to the study of prompt engineering as they directly impact the performance and accuracy of the model on various tasks related to inter-sentential relations. Although the abstract does not directly address 'hard prefix prompts', the use of different prompt templates including the PE template aligns with the broader field of prompt engineering. The systematic approach taken in evaluating these templates relates to the systematic review aspect of a 'comprehensive systematic review on hard prefix prompts.' However, given that the focus is on sentence-level relations rather than hard prefix prompts explicitly, it does not fully align with the prompt, hence the rating is not a full 10.",http://arxiv.org/pdf/2304.14827 -cutting down on prompts and parameters: simple few-shot learning with language models,8,"The abstract discusses how fine-tuning language models in a few-shot setting can reduce the need for prompt engineering, indirectly addressing the challenges associated with hard prefix prompts by proposing an alternative solution. Although the study targets the broader concept of prompt engineering, its findings offer valuable insights into the specific area of hard prompting, demonstrating ways to optimize the process. The lower rating reflects that while the study is relevant, it is not exclusively focused on hard prefix prompts.",https://aclanthology.org/2022.findings-acl.222.pdf -fake it till you make it: learning transferable representations from synthetic imagenet clones,7,"The abstract describes a study where the researchers explore using class-agnostic prompt engineering to generate ImageNet clones with Stable Diffusion, suggesting a focus on prompt engineering to enhance synthetic image training for image classification models. While the focus on 'hard prefix prompts' isn't explicitly mentioned, the paper still significantly revolves around the concept of prompt engineering and its effects on machine learning model outcomes. Thus, the study is quite relevant to the broader field of prompt engineering, albeit in the context of image generation, rather than text-based applications.",https://arxiv.org/pdf/2212.08420 -text-guided synthesis of artistic images with retrieval-augmented diffusion models,7,"The abstract describes a method where 'prompt-engineering' is used to achieve a certain visual style in synthesized images, which is relevant to the study of how prompts are engineered to guide AI models. However, the focus on 'retrieval-augmented diffusion models' which use external databases for conditioning, offers an alternative to crafting hard prefix prompts. The relevance is rated a 7 as it deals with prompt engineering indirectly by presenting an alternative method to achieve specific outcomes in generative tasks. 
The study emphasizes the conditioning of models post training rather than the design of the prompts themselves.",http://arxiv.org/pdf/2207.13038 -bigbio: a framework for data-centric biomedical natural language processing,8,"The text discusses the creation of BigBIO, a library that contains numerous biomedical NLP datasets, supporting meta-dataset curation. Its compatibility with current platforms for prompt engineering makes it highly relevant for studies focused on prompting, though the abstract does not specifically address 'hard prefix prompts'. Therefore, its relevance to the broader subject of prompt engineering is high, but it may not directly address the specificity of hard prefix prompts, thus the rating is not a full 10.",http://arxiv.org/pdf/2206.15076 -repair is nearly generation: multilingual program repair with llms,7,"The abstract describes a research study on RING, a multilingual repair engine that uses a large language model for code repair tasks, which relies on prompts to guide the repair process. Although the study focuses on automated program repair, the use of a prompt-based strategy to assist in the repairing process is aligned with prompt engineering concepts. This suggests that the study contributes to the understanding of how prompts can be engineered to interact with AI models, specifically in the context of code repair. However, it doesn't specifically target 'hard prefix prompts' in prompt engineering, nor does it seem to focus on the systematic review of such prompts. Therefore, the relevance rating is not a perfect 10, but still substantial given the use of prompt-based strategies in the context of AI-powered code repair.",https://arxiv.org/pdf/2208.11640 -prompting is all your need: automated android bug replay with large language models,9,"The abstract describes the use of prompt engineering to automatically reproduce bugs from bug reports using a methodology called AdbGPT. This directly involves prompt engineering as a crucial component for leveraging Large Language Models (LLMs) to understand and process bug reports, enabling automated bug replay. The relevance to prompt engineering is high, as it is a key part of the proposed system for understanding and acting on natural language inputs, which demonstrates an advanced application of prompt engineering in software maintenance. The reason the rating is not a perfect 10 is because the focus is on the application of prompt engineering in a specific context (automated android bug replay) rather than a general study or comprehensive review of hard prefix prompts within the broader scope of engineering studies.",https://arxiv.org/pdf/2306.01987 -qaner: prompting question answering models for few-shot named entity recognition,9,"The abstract discusses the development of a new method for prompt-based learning in the context of Named Entity Recognition (NER), which is directly related to the field of prompt engineering. The research is aimed at refining prompt strategies, generating prompts, and tuning QA models with prompts, addressing various challenges in prompt-based methods. This is highly relevant to the study of prompt engineering, especially in its application to NER tasks. 
The reason for not giving a full 10 is because the abstract does not explicitly mention 'hard prefix prompts,' suggesting that the study might not cover that specific aspect of prompt engineering.",http://arxiv.org/pdf/2203.01543 -prompting the hidden talent of web-scale speech models for zero-shot task generalization,9,"The study is highly relevant to prompt engineering as it focuses on adapting a web-scale speech model, Whisper, to perform zero-shot tasks by using specialized prompt engineering techniques. The paper demonstrates significant performance improvements on new tasks by designing task-specific prompts, which directly pertains to the field and thereby scores a high relevance rating. It only falls short of a perfect score because it is not a comprehensive systematic review, but rather an experimental study illustrating practical applications of prompt engineering.",https://arxiv.org/pdf/2305.11095 -the creativity of text-based generative art,8,"The abstract indicates that the paper focuses on 'text-based generative art' and discusses the role of human creativity in the context of prompt engineering, which is directly related to prompt engineering study. It references Rhodes’s conceptual model of creativity, which could provide insight into the design and evaluation of prompts. The critique of product-centered creativity views hints at a theoretical exploration relevant to understanding how prompts are engineered and used in practice. Although the paper does not seem to be exclusively about 'hard prefix prompts' in prompt engineering, it appears to address the broader context and implications of prompt use and creativity in text-based generative systems. Thus, the relevance to prompt engineering study is high, but it is not a perfect match since it does not focus solely on 'hard prefix prompts'.",http://arxiv.org/pdf/2206.02904 -no token left behind: explainability-aided image classification and generation,8,"The paper abstract indicates that the research addresses issues related to the instability in zero-shot learning when using models like CLIP, which is related to how input prompts are constructed and used (prompt engineering). The study proposes an explainability-based approach to ensure that the model considers all relevant semantic parts of the input, likely including how the prompts are designed and their tokens. This is highly relevant to prompt engineering, although the study focuses more broadly on zero-shot learning and explainability, not solely on prompt engineering. Thus, the relevance rating is high, but not maximum.",http://arxiv.org/pdf/2204.04908 -automatically generating cs learning materials with large language models,7,"The content of the provided abstract is relevant to prompt engineering in that it discusses the application of Large Language Models (LLMs) in generating code and educational content based on natural language prompts. Although it does not specifically mention 'hard prefix prompts', it is related to the broader subject of how prompts can be utilized to facilitate computer science learning and to the design of prompts for effective interaction with models like GPT-3 and Codex. The abstract also touches upon the implications of LLM integration in pedagogy, which could include discussions on the crafting of prompts for educational purposes. 
Therefore, while it is not a direct study on prompt engineering, it is certainly relevant to the field, especially in the context of their application in education.",https://arxiv.org/pdf/2212.05113 -language-aware soft prompting for vision & language foundation models,8,"The shared abstract and summary are highly relevant to prompt engineering, specifically in the context of Vision & Language (V&L) models, indicating a study of prompt design and their application to model training. Although the study focuses on 'soft' prompts and not 'hard' prompts as mentioned in the initial query, it significantly engages with prompt engineering concepts by discussing the creation and adjustment of prompts. It researches how prompts can be optimized and regularized to improve model performance and addresses an important aspect of prompt engineering: the resistance to overfitting and the ability to generalize to unseen classes. Therefore, it contributes to the overall understanding and methodology of prompt engineering even if it does not directly address 'hard prefix prompts'.",http://arxiv.org/pdf/2210.01115 -chatgpt4pcg competition: character-like level generation for science birds,8,"The paper's focus on a competition that centers on creating prompts for ChatGPT to generate specific game levels is highly relevant to the field of prompt engineering. Although it doesn't address 'hard prefix prompts' specifically, it contributes to the understanding and application of prompt engineering in procedural content generation. This relevance is somewhat niche as it applies to a gaming context, yet the principles and methods used can offer valuable insights into prompt engineering best practices and strategies.",http://arxiv.org/pdf/2303.15662 -will it blend? mixing training paradigms & prompting for argument quality prediction,9,"The paper is highly relevant to prompt engineering as it specifically describes the use of prompt engineering with GPT-3 for the task of Argument Quality Prediction. The focus on mixing training paradigms and the experimentation to determine the best setup for predicting different aspects of argument quality are central to the study of how different prompts can influence the output of large language models. The relevance is not a full 10 only because the paper also delves into training paradigms along with prompt engineering, which implies it does not solely concentrate on prompt engineering but rather on a combination of techniques.",http://arxiv.org/pdf/2209.08966 -the infinite index: information retrieval on generative text-to-image models,9,"The abstract discusses the concept of 'prompt engineering' directly in the context of generative models like DALL-E and Stable Diffusion, which is highly relevant to the field of prompt engineering study. It addresses a unique challenge within prompt engineering—information retrieval based on prompts given to generative models, which is an advanced aspect of prompt engineering. The introduction of the 'infinite index' concept and the exploration of active learning for image retrieval are pertinent to the engineering of prompts and the optimization of results from generative models. The deduction of one point is due to the lack of explicit mention of 'hard prefix prompts,' which may or may not be part of the 'interactive text-based retrieval' system referenced. 
However, the content is still highly relevant for researchers and practitioners interested in the intricacies of prompt engineering for generative text-to-image models.",https://dl.acm.org/doi/pdf/10.1145/3576840.3578327 -exploring the benefits of visual prompting in differential privacy,7,"The relevance to prompt engineering is significant due to the mention of Visual Prompting (VP), which constitutes a form of prompt engineering applied to visual tasks. This technique aligns with the concept of prompt engineering in the machine learning context, which involves designing inputs that guide the model to perform specific tasks or improve its performance. Even though 'hard prefix prompts' are not explicitly mentioned, the study still falls within the broader scope of prompt engineering by exploring the modification and utilization of input prompts to enhance the performance of machine learning models with differential privacy. The incorporation of VP into DP training methods like PATE and the exploration of its benefits in neural network classifiers make it relevant to the study of prompt engineering. However, the specific exploration of 'hard prefix prompts' is not addressed, which led to a rating of 7 instead of 10.",https://arxiv.org/pdf/2303.12247 -an empirical evaluation of prompting strategies for large language models in zero-shot clinical natural language processing,9,"The described paper is highly relevant to prompt engineering as it conducts an empirical evaluation of prompting strategies for large language models specifically within the clinical NLP context. It assesses several prompt types like simple prefix, chain of thought, and introduces new types such as heuristic prompting and ensemble prompting, which are directly related to the study of prompt engineering. The only reason it doesn't receive a perfect score is that it is focused on the clinical domain and the prompt types are not limited to 'hard prefix prompts' as inquired in the original query.",https://arxiv.org/pdf/2309.08008 -generating disentangled arguments with prompts: a simple event extraction framework that works,9,"The presented study is highly relevant to prompt engineering as it introduces a prompt-based learning strategy to the domain of Event Extraction. The use of prompts to automate the exploitation of label semantics indicates a direct application of prompt engineering. The fact that this work sets new records for Argument and Trigger Extractions suggests that it advances the field significantly. While the paper does not focus on 'hard prefix prompts' specifically, its contribution to prompt-based methods in Event Extraction demonstrates its relevance to studies on prompt engineering.",https://eprints.whiterose.ac.uk/191435/1/jinghui_GDAP_icassp2022.pdf -how to prompt? opportunities and challenges of zero- and few-shot learning for human-ai interaction in creative applications of generative models,9,"The abstract provided outlines a study that delves into the usage, challenges, and potential advancements in the field of prompt engineering, specifically in the context of zero-shot and few-shot learning for creative applications with generative models. The focus on how end-users interact with AI through prompts and the subsequent proposal of design goals for user interfaces that support prompt-based interactions is highly relevant to prompt engineering. The study appears to be concerned with improving the effectiveness and intuitiveness of prompts, which is crucial to the field. 
Therefore, the relevance rating is high, albeit not maximum, as it might not cover the 'hard prefix prompts' as specified in the original prompt, but it still relates significantly to the broader subject of prompting in AI.",http://arxiv.org/pdf/2209.01390 -few-shot learning with multilingual generative language models,8,"The study appears to be highly relevant to prompt engineering as it includes an in-depth analysis of different multilingual prompting approaches and demonstrates the utility of templates and example demonstrations in achieving strong few-shot learning performance across languages. Although the abstract does not explicitly mention 'hard prefix prompts', the principle of engineering effective prompts to enhance model performance in few-shot learning scenarios is fundamentally related to prompt engineering. The rating is not a full 10 because the abstract does not directly address 'hard prefix prompts', but it is high due to the clear relevance of the study's focus on prompting techniques and few-shot learning.",https://aclanthology.org/2022.emnlp-main.616.pdf -tuning language models as training data generators for augmentation-enhanced few-shot learning,8,"The study deals with few-shot learning in pretrained language models (PLMs) leveraging prompts which is highly relevant to prompt engineering. It explores how to effectively utilize a limited amount of data to tune PLMs and then generate additional data to enhance performance on various language tasks. Even though the study does not specifically mention 'hard prefix prompts', it discusses training methodology that involves prompt formulation for modeling, which is a significant aspect of prompt engineering. For this reason, the work is very much related to prompt engineering but does not directly address the systematic review of 'hard prefix prompts', hence the rating of 8 instead of 10.",http://arxiv.org/pdf/2211.03044 -true few-shot learning with prompts—a real-world perspective,8,"This abstract describes an extensive study on Pet (Pattern-exploiting Training), which is a method that leverages prompt-based few-shot learning without relying on a development set for tuning. This research is highly relevant to prompt engineering because it evaluates the effectiveness of prompt-based approaches in few-shot learning scenarios. This can help understand how different prompting strategies can be designed and employed effectively in real-world settings. However, the study seems to focus specifically on Pet rather than a broader range of hard prefix prompts, hence the rating is not a full 10.",https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl_a_00485/2030692/tacl_a_00485.pdf -cins: comprehensive instruction for few-shot learning in task-oriented dialog systems,7,"The study is highly relevant to prompt engineering as it details an approach for leveraging pre-trained language models (PLMs) using task-specific instructions, which is a core aspect of prompt engineering. The 'CINS' system's specific focus on utilising instructions for few-shot learning in task-oriented dialog systems indicates relevance to the field. However, the paper might not center exclusively on hard prefix prompts or a systematic review of such prompts, thus not fully aligning with the potential scope implied by the term 'comprehensive systematic review on hard prefix prompts'. 
The rating reflects the significance of instructional design in prompting while acknowledging the potential mismatch in the specificity of the topic.",https://ojs.aaai.org/index.php/AAAI/article/download/21356/21105 -story centaur: large language model few shot learning as a creative writing tool,7,"The study is relevant to prompt engineering to some extent, as it deals with the application of few shot learning with large language models, which is an aspect of prompt engineering. The design of the Story Centaur interface can imply the use of prompts to guide the language model in generating text based on the writer's input. However, the relevance is not full (i.e., not a 10) because the abstract does not specifically mention 'hard prefix prompts' or a systematic review of prompt engineering techniques. It is more focused on the end-user experience and tool creation for creative writing rather than the detailed study of prompt engineering methods.",https://aclanthology.org/2021.eacl-demos.29.pdf -few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning,7,"The abstract discusses Parameter-efficient fine-tuning (PEFT), which includes prompt tuning, a technique directly relevant to prompt engineering. Prompt tuning is a method of adjusting a pre-trained model to understand and perform new tasks using prompt-based instructions. The PEFT, and specifically the novel (IA)$^3$ method mentioned, likely relate to how prompts can be engineered or optimized for better performance with fewer resources, making it relevant to the study of prompt engineering. However, the focus on the comparative benefits over in-context learning and the overarching goal to improve model efficiency and performance, while related, do not strictly fall within the typical exploration of hard prefix prompts, and therefore do not warrant a maximum relevance rating.",http://arxiv.org/pdf/2205.05638 -exploring effectiveness of gpt-3 in grammatical error correction: a study on performance and controllability in prompt-based methods,9,"The study is highly relevant to prompt engineering as it investigates how prompt-based methods, a key aspect of prompt engineering, impact GPT-3's performance in Grammatical Error Correction tasks. It examines the effects of varying task instructions and examples, which are central to designing effective prompts. The focus on the controllability aspect of GPT-3 with different instructional prompts makes this study pertinent to understanding and enhancing the use of language models in prompt engineering.",http://arxiv.org/pdf/2305.18156 -improved universal sentence embeddings with prompt-based contrastive learning and energy-based learning,8,"The abstract discusses 'PromCSE', a method which focuses on using a 'Soft Prompt', that is, a set of trainable vectors, in a prompt-based contrastive learning setting for sentence embeddings. This is related to prompt engineering, a domain that comprises methods to better integrate and tune prompts for effective use with pre-trained language models. Although the abstract does not explicitly mention 'hard prefix prompts', it addresses the topic of prompt-based learning and even touches on energy-based learning mechanisms. For these reasons, the abstract is highly relevant to the study of prompt engineering, but slightly less so specifically to 'hard prefix prompts'. Hence, the rating is 8 instead of 10.",https://aclanthology.org/2022.findings-emnlp.220.pdf -do we still need human assessors? 
prompt-based gpt-3 user simulation in conversational ai,8,"The study directly addresses a critical aspect of prompt engineering by exploring the generation of synthetic data through prompting a language model, which is a subset of the broader field. It assesses the viability of using prompted synthetic responses as a replacement for human-generated data, an inquiry that overlaps with prompt engineering since it evaluates the quality and utility of the prompts and the resulting data. The relevance to prompt engineering is high, although not perfect, because it does not focus on 'hard prefix prompts' specifically but rather on the general application of prompts for data generation in AI conversational models.",https://dl.acm.org/doi/pdf/10.1145/3543829.3544529 -towards open-vocabulary scene graph generation with prompt-based finetuning,8,"The abstract indicates the use of 'prompt-based techniques' for fine-tuning a pre-trained model in the context of scene graph generation (SGG). Although it does not explicitly mention 'hard prefix prompts,' it does involve the concept of prompt engineering as it leverages prompts to adapt the model to new tasks without updating parameters. This is directly related to studying different prompt engineering strategies, particularly in the open-vocabulary setting. Thus, the relevance to prompt engineering is high but not focused solely on the aspect of hard prefix prompts, hence the rating is not a full 10.",http://arxiv.org/pdf/2208.08165 -zero-shot cross-lingual transfer of prompt-based tuning with a unified multilingual prompt,9,"The abstract describes research on prompt-based tuning for multilingual pretrained language models with a focus on a unified, language-agnostic prompt, which is highly relevant to the field of prompt engineering. It addresses the challenge of creating prompts that work across multiple languages and demonstrates significant performance improvements, which is a core aspect of engineering effective prompts. The only reason it does not receive a full score is because it does not address 'hard prefix prompts' specifically, but it is still very relevant to the broader topic of prompt engineering.",https://aclanthology.org/2022.emnlp-main.790.pdf -prompt-based connective prediction method for fine-grained implicit discourse relation recognition,8,"The study introduces a Prompt-based Connective Prediction (PCP) method that is relevant to prompt engineering since it discusses instructing pre-trained models to utilize prompts for tasks in natural language processing. This is directly involved with prompt design and its implications on model performance. Although the main focus is on discourse analysis, the core concept of using prompts to guide model understanding and predictions is inherent to prompt engineering studies. Therefore, the relevance rating is high but not perfect due to the niche application within discourse relation recognition, rather than a broad study of prompt engineering techniques.",http://arxiv.org/pdf/2210.07032 -prompt-based distribution alignment for domain generalization in text classification,8,"The abstract mentions 'prompt-based learning' or 'prompting' as a key method for improving text classification across different domains. Although the study focuses on domain generalization and distribution alignment, the technique of prompting described is indeed crucial within the understanding of prompt engineering. 
It speaks to the customization of prompts to align data distributions across domains, which could be understood as an advanced topic in prompt engineering. The study, however, does not directly address 'hard prefix prompts' but explores the broader concept of prompting and its application for domain generalization in natural language processing tasks. The rating is therefore not a full 10, as it does not specifically focus on hard prefix prompts but is still highly relevant due to its broader application in task alignment, which is a subset of prompt engineering.",https://aclanthology.org/2022.emnlp-main.690.pdf -zero-shot event detection based on ordered contrastive learning and prompt-based prediction,7,"The relevance to prompt engineering is significant since the abstract mentions the use of prompt-based prediction in a zero-shot natural language processing model. The study's methods directly involve prompt engineering by utilizing prompts to identify trigger words. However, prompt engineering is not the sole focus of the study, as it also involves ordered contrastive learning techniques. Therefore, while prompt engineering is relevant, it may not be the central theme of the research.",https://aclanthology.org/2022.findings-naacl.196.pdf -prompt-based time series forecasting: a new task and dataset,7,"The paper introduces a novel approach to time series forecasting by leveraging prompt-based methods, which is within the realm of prompt engineering. This is relevant as it explores the adaptation of language models to tasks outside their initial scope (i.e., forecasting) using prompts. However, the study does not focus specifically on 'hard prefix prompts' but on transforming numerical time series forecasting problems into a language model-friendly format. Therefore, it is a contribution to the broader context of prompt engineering rather than a targeted study on the more specific 'hard prefix prompts'.",http://arxiv.org/pdf/2210.08964 -prompt-based meta-learning for few-shot text classification,9,"The abstract discusses the application of prompt-tuning within a meta-learning framework for few-shot text classification, which is directly related to prompt engineering. As prompt-based systems are a critical study area within the broader scope of prompt engineering, this work's focus on a Prompt-Based Meta-Learning (PBML) model is highly relevant. It contributes to understanding how prompts can be effectively used in conjunction with meta-learning to enhance performance in low-data regimes. The paper offers insights into the practical application and theoretical underpinning of using prompts in machine learning, which is at the core of prompt engineering studies.",https://aclanthology.org/2022.emnlp-main.87.pdf -ai illustrator: translating raw descriptions into images by prompt-based cross-modal generation,7,"The study explores a Prompt-based Cross-Modal Generation Framework (PCM-Frame), which is relevant to prompt engineering as it involves using prompts to bridge the semantic gap between text descriptions and image generation. While the field of prompt engineering often refers to optimizing input language for language models, the abstract suggests a broader scope where prompts assist in mapping text to image embeddings. This makes it pertinent to the study of how prompts can be engineered to improve cross-modal generation tasks.
However, the paper's focus seems more on the application of prompt engineering in the context of AI illustration and image generation, rather than a comprehensive review of prompt engineering techniques or hard prefix prompts specifically. Hence, the rating is not a full 10.",https://arxiv.org/pdf/2209.03160 -clamp: prompt-based contrastive learning for connecting language and animal pose,7,"The abstract discusses the use of prompt-based methods (in the context of CLAMP) to connect language models with animal pose estimation tasks, which is highly relevant to prompt engineering as it involves crafting prompts to facilitate an application of language understanding. The relevance is not a perfect 10 because the study focuses specifically on contrastive learning for animal pose estimation, rather than a broad systematic review of hard prefix prompts in general. Nevertheless, the adaptation and engineering of prompts for a specific task like this contributes to the understanding of how prompts can be effectively utilized in various domains, which is a pertinent aspect of prompt engineering research.",https://arxiv.org/pdf/2206.11752 -promptattack: prompt-based attack for language models via gradient search,8,"The paper discusses 'Prompt Learning', a method directly related to prompt engineering, and addresses security vulnerabilities within this approach, a relevant aspect not often considered in standard prompt engineering studies. The focus is on constructing malicious prompts to reveal security issues, which is a valuable angle in prompt engineering research. Although the paper does not specifically mention a 'hard prefix prompt', it does delve into prompt-based methods and their implications, thus warranting a high relevance rating. However, the rating is not a full 10 because the paper's core topic is security rather than the effectiveness or optimization of prompt engineering itself.",http://arxiv.org/pdf/2209.01882 -prompt-based metric learning for few-shot ner,8,"The abstract describes a method that uses multiple prompt schemas to enhance label semantics in the context of few-shot named entity recognition, which is relevant to prompt engineering as it involves the design of prompts to influence the model's performance. The proposed method indicates an improvement in metric learning for NER by incorporating prompt-based representations, aligning with the study of how different prompting techniques can affect machine learning tasks. However, it does not explicitly address 'hard prefix prompts,' which may be a more specialized area within the broader field of prompt engineering, hence the rating is not a full 10.",https://arxiv.org/pdf/2211.04337 -on the robustness of dialogue history representation in conversational question answering: a comprehensive study and a new prompt-based method,7,"The title and abstract suggest a study that investigates the robustness of dialogue history representation in Conversational Question Answering (CQA), which does not directly deal with 'prompt engineering' per se. However, the introduction of a 'prompt-based history modeling approach' signifies the study's partial relevance to prompt engineering, as it involves the strategic integration of prompts into the passage text to enhance model performance. The mention of 'textual prompts' indicates that part of the study is concerned with understanding how prompts can affect the outcome of a CQA task. 
Even though the study is not solely dedicated to 'hard prefix prompts' or prompt engineering in general, the development of a new prompt-based method implies that it could offer insightful data and practices relevant to prompt engineering research. The rating is not higher because the primary focus still seems to be on robustness and not explicitly on the engineering of prompts.",https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl_a_00549/2080031/tacl_a_00549.pdf -promptda: label-guided data augmentation for prompt-based few shot learners,9,"The abstract describes a study on the use of a novel framework called PromptDA that focuses on data augmentation in the context of prompt-based few-shot learning for natural language understanding tasks. The study appears highly relevant to prompt engineering as it directly addresses the development of prompts for few-shot learners and investigates ways to improve their performance through specialized data augmentation that leverages label semantic information. This relates closely to the study of 'hard prefix prompts' as it pertains to the design and enhancement of prompt-based methods. The only reason the rating is not a 10 is that it doesn't specify whether 'hard prefix prompts' are specifically addressed, but it's clear that the work is valuable to the field of prompt engineering.",http://arxiv.org/pdf/2205.09229 -adversarial robustness of prompt-based few-shot learning for natural language understanding,7,"The study focuses on prompt-based few-shot learning (FSL) methods within natural language understanding, which is a subset of prompt engineering as it investigates the utilization of prompts for model fine-tuning. Evaluating the adversarial robustness of prompt-based FSL is relevant as it considers the stability and reliability of these prompts under adversarial conditions, a crucial aspect for prompt engineering. However, the study is more focused on the robustness to adversarial attacks rather than on the broader aspects of prompt engineering such as prompt design, optimization, or the systematic review of 'hard prefix' prompts. Therefore, while the study is highly relevant to a specialized area of prompt engineering, it does not cover the full scope of a 'comprehensive systematic review on hard prefix prompts,' so it gets a rating of 7.",http://arxiv.org/pdf/2306.11066 -decorate the newcomers: visual domain prompt for continual test time adaptation,8,"The paper described involves the concept of 'prompt learning' from NLP but applies it to the visual domain, suggesting a novel crossover of prompt engineering techniques to continual test-time adaptation for images. While the research isn't about textual 'hard prefix prompts' in NLP, the principles of designing prompts for domain adaptation and mitigating issues like catastrophic forgetting are closely related to prompt engineering in how they shape model inputs for better performance. Thus, it is relevant but not directly focused on the prompt engineering study in the text domain.",http://arxiv.org/pdf/2212.04145 -"toward human readable prompt tuning: kubrick's the shining is a good movie, and a good prompt too?",9,"The paper discussed is highly relevant to prompt engineering as it addresses the direct issue of how to create effective and fluent prompts through a novel tuning method. It contributes to the understanding of what makes a prompt effective, ensuring topical relevance and adjusting prior probabilities.
The only reason it is not rated a perfect 10 is that the prompt engineering study specifically asked for 'hard prefix prompts,' which this summary does not explicitly state the paper addresses. However, the general principles and methodology presented are very likely applicable to prompt engineering as a whole.",http://arxiv.org/pdf/2212.10539 -parameter-efficient prompt tuning makes generalized and calibrated neural text retrievers,9,"The abstract discusses prompt tuning, a form of prompt engineering, in the context of neural text retrievers. It emphasizes parameter efficiency, which is a crucial factor in the design and use of prompts for AI models. Moreover, the study explores prompt tuning's impact on generalizability across various domains, directly relating to advancements in prompt engineering methodologies. Hence, it is highly relevant to the prompt engineering study, although it focuses on a specific application rather than a broad range of use cases.",http://arxiv.org/pdf/2207.07087 -relation extraction as open-book examination: retrieval-enhanced prompt tuning,8,"The abstract discusses a novel application of prompt tuning in the context of relation extraction, by utilizing a retrieval-enhanced prompt tuning approach. While it does not directly address 'hard prefix prompts' or a 'comprehensive systematic review', it certainly falls within the broader category of prompt engineering studies. The focus on improving performance on hard or rare patterns, and the method of combining parametric and non-parametric techniques, relate closely to the challenges prompt engineering aims to address, especially in the context of improving prompt-based models' generalization capabilities. Thus, the relevance is high, although not perfect, due to the absence of a specific focus on 'hard prefix prompts' or a 'systematic review' aspect.",https://arxiv.org/pdf/2205.02355 -rethinking reinforcement learning for recommendation: a prompt perspective,7,"The relevance to prompt engineering in this study lies in the proposed Prompt-Based Reinforcement Learning (PRL) framework for recommendations, which intersects with the field of prompt engineering by leveraging state-reward inputs as prompts during the decision-making process. The study doesn't center on prompt engineering as it typically applies to language models or processes of tuning textual inputs, but it does conceptualize a similar method within the RL context, framing prompts as essential elements in training RL models for improved recommendation systems. Therefore, its relevance is notable but not directly central to the typical application of prompt engineering, which more commonly refers to optimizing inputs for generative language models.",https://dl.acm.org/doi/pdf/10.1145/3477495.3531714 -generative prompt tuning for relation classification,9,"The abstract presents a study that is highly relevant to the field of prompt engineering. It addresses the limitations of the existing prompt tuning methods when dealing with complex label spaces for relation classification tasks. By introducing a generative prompt tuning approach that reformulates the problem into an infilling task, the study directly applies to developing new techniques within prompt engineering.
The relevance is therefore rated a 9 out of 10 because it contributes significantly to the understanding and development of prompt-based methods, although it focuses specifically on relation classification rather than prompt engineering in general.",http://arxiv.org/pdf/2210.12435 -"prompt, generate, then cache: cascade of foundation models makes strong few-shot learners",7,"The abstract discusses the use of GPT-3 to 'Prompt, Generate, then Cache', indicating an application of language generation for creating prompts, which is relevant to prompt engineering. Additionally, the integration of multi-modal models such as CLIP and DALL-E implies the use of prompts to facilitate communication across language and image domains, which is an advanced form of prompt engineering. However, the primary focus of the paper appears to be on few-shot learning and integrating diverse pre-training knowledge, rather than on systematic review of hard prefix prompts specifically. Therefore, while it is related to prompt engineering, it is not directly focused on a comprehensive review of that domain, hence the rating is not a full 10.",https://arxiv.org/pdf/2303.02151 -instructionner: a multi-task instruction-based generative framework for few-shot ner,7,"The relevance of the provided abstract to prompt engineering is quite significant, as it discusses the usage of prompt-based methods in few-shot learning and the refinement of those prompts for a specific downstream task, which is named entity recognition (NER). While the focus of the study is on the development of a framework for NER, the essence of reformulating tasks as generation problems and enriching source sentences with task-specific instructions is closely related to prompt engineering. This process involves creating prompts that effectively guide the language model to perform a desired task. However, because the abstract does not explicitly mention 'hard prefix prompts' or conduct a systematic review on prompt engineering, the rating is not a full 10.",http://arxiv.org/pdf/2203.03903 -finding skill neurons in pre-trained transformer-based language models,7,"The paper is moderately relevant to prompt engineering study. It doesn't directly focus on generating or optimizing prompts -- which would be the core subject of a prompt engineering study. However, the identification of 'skill neurons' within transformers after prompt tuning relates to understanding how prompts can affect neural language models and how specific neurons contribute to processing tasks after prompt-based training. This has implications for prompt engineering, as insight into which neurons are 'skill neurons' might inform how to structure or alter prompts to target these neurons and improve task performance.",https://arxiv.org/pdf/2211.07349 -good examples make a faster learner: simple demonstration-based learning for low-resource ner,8,"The abstract details a study on demonstration-based learning, which is a part of prompt-based learning methodologies. Although it focuses specifically on named entity recognition (NER), the principles of designing demonstrations and templates are directly related to the broader field of prompt engineering. The study's emphasis on the effect of different demonstration strategies on performance and the exploration of in-context learning provide insights that are applicable to prompt engineering. 
The relevance to prompt engineering is notable due to the systematic study of these strategies, which is a component of the hard prefix prompts mentioned in the initial query. However, the rating is not a full 10, as the abstract suggests a specific application (NER) rather than a focus on prompt engineering in general.",https://aclanthology.org/2022.acl-long.192.pdf -promptbert: improving bert sentence embeddings with prompts,9,"The paper describes a method that directly pertains to prompt engineering, specifically within the context of improving sentence embeddings using a novel contrastive learning method named PromptBERT. The emphasis on overcoming the limitations of BERT by integrating prompts into the sentence embedding process is highly relevant to the study of prompt engineering. The research not only introduces a new prompt-based embedding method but also explores prompt representation and searching methods, which are central themes in prompt engineering. The proposed unsupervised training objective with template denoising is similarly a significant contribution to this field. The only reason the score is not a full 10 is that it doesn't mention 'hard prefix prompts' explicitly, but the overall context is very much applicable to the subject of prompt engineering.",https://aclanthology.org/2022.emnlp-main.603.pdf -how can we know what language models know?,9,"The paper directly addresses a core aspect of prompt engineering by focusing on the automatic generation of high-quality and diverse prompts to elicit more accurate information from language models. Improving prompt quality is a fundamental part of prompt engineering, and the paper's experimental results on the enhancement of LM accuracy are highly relevant to studies of prompt effectiveness. The slight deduction from a perfect score is due to the abstract not specifying 'hard prefix prompts', indicating the review might not focus exclusively on that particular subset of prompt engineering.",https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl_a_00324/1923867/tacl_a_00324.pdf -realfusion 360° reconstruction of any object from a single image,8,"The abstract describes the use of a conditional image generator and the engineering of a prompt to improve the neural network's ability to 'dream up' or synthesize novel views of an object from a single image. This directly relates to the field of prompt engineering, as the research involves designing and refining a prompt to guide an AI model to perform a specific task more effectively. The relevance to prompt engineering study is high because it involves a practical application of prompt design to achieve better results in an AI-driven task. The score is not a full 10 because the abstract focuses on the application of this prompt engineering in the context of 3D reconstruction rather than the study of prompt engineering itself.",https://arxiv.org/pdf/2302.10663 -active prompting with chain-of-thought for large language models,9,"The paper addresses an advanced technique within prompt engineering, specifically for large language models (LLMs), by introducing active prompting with example-based CoT reasoning. This is highly relevant to the field of prompt engineering as it involves creating task-specific example prompts and evaluating their effectiveness for LLMs' performance on complex reasoning tasks.
The mention of uncertainty metrics and the adaptation of the active learning framework to prompt design underscore the paper's direct and substantial contribution to developing and improving prompting strategies. The reason it's not a 10 is that it doesn't cover 'hard prefix prompts' which may suggest a more specific subset of prompt engineering techniques not explicitly mentioned in the abstract.",http://arxiv.org/pdf/2302.12246 -gpt3mix: leveraging large-scale language models for text augmentation,7,"The paper is highly relevant to the study of prompt engineering as it discusses a method to leverage large-scale language models, like GPT-3, using prompts to generate text for data augmentation. This is intrinsically linked to the concept of prompt engineering, which involves designing prompts to elicit desired responses from language models. However, the focus of the paper is more on the application of these prompts for data augmentation rather than a systematic review on hard prefix prompts specifically. The relevance is high because the technique proposed is a practical application of prompt engineering, but it is not a comprehensive review on the topic.",https://aclanthology.org/2021.findings-emnlp.192.pdf -warp: word-level adversarial reprogramming,8,"The abstract presents research that extends earlier work on automatic prompt generation, which is highly relevant to the prompt engineering field. Adversarial reprogramming, as discussed in the paper, is a method for learning task-specific prompts that improve the performance of language models on various tasks. The focus on prompt generation suggests a strong relevance to studies on 'hard prefix prompts' or engineered prompts intended to direct model behavior. However, as the abstract does not explicitly mention 'hard prefix prompts', the rating is not a full 10.",https://aclanthology.org/2021.acl-long.381.pdf -prompting for multimodal hateful meme classification,8,"The study appears to be highly relevant to prompt engineering as it involves the creation of a prompt-based model (PromptHate) that specifically addresses the task of hateful meme classification by leveraging the capabilities of pre-trained language models through the use of prompts. The use of 'simple prompts' alongside 'in-context examples' indicates a direct application of prompt engineering techniques to extract and utilize implicit knowledge from the models. However, the study seems to focus on a specific application of prompts in the context of multimodal tasks (hateful meme classification), which may slightly limit its generalizability to prompt engineering as a whole. Despite this, the study's effort in optimizing prompts for a complex task adds valuable insights to the field of prompt engineering.",http://arxiv.org/pdf/2302.04156 -badprompt: backdoor attacks on continuous prompts,8,"The study is highly relevant to prompt engineering as it focuses on the security aspect of continuous prompt-learning algorithms, which are a core component of prompt-based learning paradigms. Although the study is not directly analyzing 'hard prefix prompts' but is rather investigating 'backdoor attacks' on continuous prompts, understanding such vulnerabilities is crucial for the overall field of prompt engineering, particularly for ensuring the robustness and reliability of models using prompts. 
However, it may be slightly less relevant if the specific focus of the inquiry is on 'hard prefix prompts,' as this paper investigates continuous prompts, which could be conceptually distinct.",https://arxiv.org/pdf/2211.14719 -multilingual relation classification via efficient and effective prompting,8,"The study is highly relevant to prompt engineering as it focuses on the application of prompting methods in multilingual relation classification, a specific area within NLP tasks that can benefit from engineered prompts. The research introduces a method for constructing prompts efficiently and evaluates its effectiveness in various scenarios, including low-resource languages. The relevance to hard prefix prompts is indirect since the focus is more on the application and efficacy of prompt methods rather than on the systematic analysis of different prompt structures, but it still contributes valuable insights to the field of prompt engineering.",https://arxiv.org/pdf/2210.13838 -knowledge prompting in pre-trained language model for natural language understanding,9,"The abstract describes a method for incorporating factual knowledge into Pre-trained Language Models (PLMs) via a 'knowledge prompting' technique, which is highly relevant to prompt engineering. The study not only discusses the integration of knowledge prompts with PLMs but also introduces novel knowledge-aware tasks. This indicates a direct application and exploration of prompting mechanisms within language models, thereby warranting a high relevance rating. A point is withheld because the abstract does not explicitly mention 'hard prefix prompts,' suggesting that while the paper is relevant to prompt engineering, it may not specifically cover the systematic review aspect of hard prefix prompts.",http://arxiv.org/pdf/2210.08536 -multi-stage pre-training for automated chinese essay scoring,7,"The relevance to prompt engineering is significant given that the paper outlines a method that requires fine-tuning an essay scoring model on different types of prompts. This aligns with prompt engineering since the quality and characteristics of these prompts directly influence the training and performance of the AI model. Furthermore, including weakly supervised and cross-prompt fine-tuning stages implies a deep understanding of how prompts interact with the model. However, the focus appears to be more on automated essay scoring than on hard prefix prompts specifically, which is why the score is not higher.",https://www.aclweb.org/anthology/2020.emnlp-main.546.pdf -punifiedner: a prompting-based unified ner system for diverse datasets,8,"The paper presents PUnifiedNER, a model leveraging prompt learning, which is a subfield of prompt engineering. This NER system's ability to train across multiple datasets and efficiently recognize a wide range of entity types by using prompts directly relates to the study of prompt design and utilization within models, a key aspect of prompt engineering. 
The relevance is not maximal since the abstract does not specifically discuss the nature of the 'hard prefix prompts' mentioned in the initial query, but it does focus on prompt-based learning which is closely related to the field of study in question.",http://arxiv.org/pdf/2211.14838 -prompting through prototype: a prototype-based prompt learning on pretrained vision-language models,8,"The abstract describes a relevant method in the field of prompt engineering, specifically focusing on a prototype-based prompting approach for few-shot image recognition tasks using pretrained vision-language models (PVLMs). Although the study presented is not directly examining 'hard prefix prompts', it is relevant to the broader topic of prompt engineering as it explores how prompts can be optimized and tailored for specific instances or tasks. The prototype-based approach is an innovative instance-level technique that directly contributes to the understanding and development of prompt-based methods in machine learning. The high rating reflects the study's potential contributions to the field of prompt engineering, despite not addressing hard prefix prompts explicitly.",http://arxiv.org/pdf/2210.10841 -self-prompting large language models for open-domain qa,8,"The abstract describes a research study focusing on the use of Large Language Models (LLMs) for Open-Domain Question Answering (ODQA) by introducing a Self-Prompting framework that relies on in-context learning through prompts generated by the LLMs themselves. This approach directly involves the concept of 'prompt engineering,' as it requires the design and structuring of prompts to effectively guide LLMs to produce useful pseudo QA pairs for learning. It is highly relevant to prompt engineering because it explores an innovative way of using prompts to leverage the internal knowledge of LLMs, thereby eliminating the dependency on external datasets. Although the study does not focus specifically on 'hard prefix prompts', it does tackle the broader area of how prompts can be used to enhance the performance of LLMs in a specific task, which makes it quite relevant to the field of prompt engineering.",http://arxiv.org/pdf/2212.08635 -dialogue state tracking with a language model using schema-driven prompting,8,"The abstract discusses a novel approach that employs 'schema-driven prompting' for dialogue state tracking, which is relevant to prompt engineering as it involves designing prompts that guide a language model's behavior. The use of prompts for task-aware history encoding aligns with the subject of prompt engineering. Although it does not directly reference 'hard prefix prompts', the concept of schema-driven prompts is closely related to the topic of how prompts affect the performance of language models. The high rating reflects the relevance of schema-driven prompting in the broader field of prompt engineering study, despite it not being an exact match for 'hard prefix prompts'.",https://aclanthology.org/2021.emnlp-main.404.pdf -mapl: parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting,7,"The abstract describes a method (MAPL) for adapting pre-trained unimodal models for few-shot learning in multimodal vision-language settings, which is relevant to prompt engineering as it involves leveraging existing models to perform new, related tasks with minimal training data. 
However, the focus is on a parameter-efficient adaptation technique rather than the systematic study of prompt design or hard prefix prompts specifically, hence the rating of 7 reflecting substantial relevance but not a direct focus on the prompt engineering methodology.",http://arxiv.org/pdf/2210.07179 -transprompt: towards an automatic transferable prompting framework for few-shot text classification,8,"The mentioned study focuses on a prompting framework aimed at few-shot text classification tasks, which is highly relevant to prompt engineering. The transferability aspect of the prompts across similar NLP tasks suggests novel techniques in prompt design and application, contributing to the field of prompt engineering. The use of cross-task transferable knowledge is especially pertinent, although the provided abstract does not specifically mention 'hard prefix prompts,' which was the topic requested. Therefore, while the study is much related to prompt engineering, it may not entirely focus on the subset of 'hard prefix prompts,' leading to a slightly lower rating.",https://aclanthology.org/2021.emnlp-main.221.pdf -context-faithful prompting for large language models,9,"The paper presents methods for improving the performance of Large Language Models (LLMs) on context-sensitive tasks using advanced prompt engineering techniques. Although it does not explicitly mention 'hard prefix prompts,' the focus on 'carefully designed prompting strategies' is highly relevant to the broader field of prompt engineering. Opinion-based prompts and counterfactual demonstrations are specific types of prompts that could fall under the category of systematic review on hard prefix prompts. Therefore, the paper is likely to contribute valuable insights to the study of prompt engineering.",http://arxiv.org/pdf/2303.11315 -prompting technologies: a comparison of time-based and context-aware transition-based prompting.,7,"The study presented in the abstract is relevant to prompt engineering as it investigates the timing and context of delivering prompts, which can be crucial for the effectiveness of interventions in cognitive tasks. Although the study does not directly address 'hard prefix prompts,' which are specifically designed prompts in language models or AI environments, the underlying principles of effective prompting are closely related to prompt engineering. The comparison between time-based and context-aware prompting can inform how to design better prompts by understanding user interaction and response patterns. Therefore, this study holds relevance for the broader field of prompt engineering, especially in user-centric applications where user experience and interaction timing are important, even though it doesn't directly deal with hard prefix prompts.",https://europepmc.org/articles/pmc4803438?pdf=render -"self-contradictory hallucinations of large language models: evaluation, detection and mitigation",8,"The provided abstract is highly relevant to prompt engineering as it discusses a prompting-based framework to address self-contradictions in large language models. Self-contradiction is a critical issue that can affect the effectiveness of prompts, and the study's focus on evaluation, detection, and mitigation is directly related to improving the performance of prompts in generating consistent and reliable output from LMs. 
The high relevance rating is justified because the paper tackles the challenge of crafting prompts that can lead to better-managed discourse by the LM, which is a core aspect of prompt engineering. While the study does not specifically mention 'hard prefix prompts,' it is closely allied with prompt engineering principles and practices.",https://arxiv.org/pdf/2305.15852 -a prompting-based approach for adversarial example generation and robustness enhancement,9,"The paper is highly relevant to prompt engineering as it focuses on the development of prompt-based adversarial attacks and a robustness enhancement technique that uses prompts to improve model resistance to attacks. It indicates the potential of prompting paradigms in identifying and mitigating the vulnerabilities of pre-trained language models, which are at the core of prompt engineering. The only reason it is not rated a full 10 is that it is more focused on the application of prompts for adversarial purposes rather than a comprehensive study of hard prefix prompts in general.",http://arxiv.org/pdf/2203.10714 -dictionary-based phrase-level prompting of large language models for machine translation,8,"The article titled 'dictionary-based phrase-level prompting of large language models for machine translation' is highly relevant to prompt engineering as it describes a novel method for improving machine translation through the use of prompt engineering techniques. Specifically, it explores the use of large language models for MT and addresses the challenge of rare words by incorporating bilingual dictionaries into prompts, which directly falls within prompt engineering. The rating is not a full 10 because the study focuses on machine translation and the use of dictionaries for assisting translation of rare words which is a specific application of prompt engineering rather than a comprehensive review of hard prefix prompts in general.",http://arxiv.org/pdf/2302.07856 -fine-grained controllable text generation using non-residual prompting,8,"The abstract presents an approach to improve fine-grained control of text generation in Causal Language Models (CLMs) using an encoder-decoder architecture and intermediate text prompts. While the study is focused on text generation control rather than prompt engineering directly, it is highly relevant to the field of prompt engineering as it proposes a method for enhancing the control and versatility of prompts within these models. The introduction of intermediate prompts as a mechanism for controlling text generation could be applicable to 'hard prefix prompts' research, hence the high relevance score. However, it does not address 'hard prefix prompts' specifically, which prevents a full score.",https://aclanthology.org/2022.acl-long.471.pdf -understanding and improving visual prompting: a label-mapping perspective,7,"The study deals with visual prompting (VP), which is closely related to the concept of 'prompt engineering' in the sense that both involve techniques for effectively leveraging pre-trained models for new tasks. However, the focus on 'label-mapping' and visual tasks diverges from the typical context of 'hard prefix prompts,' which often relates to text prompts in natural language processing. 
Still, the principles investigated can be relevant to prompt engineering in a broader sense as it explores the relationship between prompting and label mapping to improve task accuracy.",https://arxiv.org/pdf/2211.11635 -automatic multi-label prompting: simple and interpretable few-shot classification,9,"The study presents a new method within the field of prompt engineering, directly aiming to improve the efficiency and efficacy of prompt-based few-shot text classification. As prompt engineering is a critical aspect of utilizing pretrained language models, and the paper offers a systematic approach to select label mappings for prompts, it is highly relevant to the field of prompt engineering. The only reason it does not receive a 10 is because it does not specifically address 'hard prefix prompts,' but rather prompt-based learning in a broader sense.",http://arxiv.org/pdf/2204.06305 -fs-detr: few-shot detection transformer with prompting and without re-training,7,"The paper discusses a new few-shot detection transformer (FS-DETR) that uses visual prompting, which is a form of prompt engineering. Visual prompts are used to provide the model with additional context without re-training. While the study does not specifically focus on 'hard prefix prompts', it does explore the concept of using prompts in a transformer-based model, which is a relevant aspect of prompt engineering. Therefore, the relevance to prompt engineering is significant but not directly focused on 'hard prefix prompts' which may suggest a slightly lower rating.",https://arxiv.org/pdf/2210.04845 -prompting contrastive explanations for commonsense reasoning tasks,9,"The study directly involves the use of language models to generate explanations for commonsense reasoning tasks by contrasting alternatives, which is a form of prompt engineering. This approach modifies how prompts are presented to the language model to elicit more informative and justifiable outputs, closely aligning with the concept of 'hard prefix prompts' where the prompt structure is critical to guide the language model's generation process. The relevance is high because the research focuses on improving the interpretability and effectiveness of prompts given to PLMs.",https://aclanthology.org/2021.findings-acl.366.pdf -generated knowledge prompting for commonsense reasoning,8,"The paper is highly relevant to prompt engineering since it discusses 'generated knowledge prompting,' which is a method of using generated knowledge as a prompt to enhance performance in commonsense reasoning tasks. This falls within the purview of prompt engineering as it involves the strategic manipulation of inputs to a language model to garner better performance. Although it does not specifically mention 'hard prefix prompts,' it does approach the broader topic of how prompts can be used to integrate external knowledge into a language model's reasoning process, which may be beneficial to those studying ways to optimize prompting techniques.",https://aclanthology.org/2022.acl-long.225.pdf -dynamic prefix-tuning for generative template-based event extraction,7,"The abstract discusses a generative template-based event extraction method that utilizes dynamic prefix (GTEE-DynPref), which is highly relevant to prompt engineering as it involves type-specific prefixes that are adaptively integrated with context information. This suggests an innovation in how prompts are engineered to be context-specific rather than static, contributing to the study of prompts in NLP tasks. 
However, the focus on event extraction as a specific application may slightly limit the relevance to the broader field of prompt engineering since it doesn't address prompt engineering in a variety of other AI model contexts.",https://aclanthology.org/2022.acl-long.358.pdf -an empirical study of gpt-3 for few-shot knowledge-based vqa,7,"The paper describes a novel approach to using GPT-3 with prompts, specifically tailored for knowledge-based visual question answering (VQA). Although the primary focus is on VQA and not on 'hard prefix prompts' in general, the method of incorporating prompts using image captions is indeed relevant to the broader field of prompt engineering. The study explores how prompts can effectively guide a language model to utilize its latent knowledge base for a specific task. The systematic investigation into what text formats best describe image content and how to select in-context examples could provide valuable insights for prompt engineering studies.",https://ojs.aaai.org/index.php/AAAI/article/download/20215/19974 -interactive-chain-prompting: ambiguity resolution for crosslingual conditional generation with interaction,7,"The study's focus on 'interactive-chain prompting' as a mechanism to resolve ambiguities in crosslingual conditional generation suggests a significant overlap with prompt engineering techniques, especially within the context of natural language processing and machine learning. Even though the paper does not explicitly study 'hard prefix prompts,' the proposed method represents a form of advanced prompting strategy that can be valuable in the broader field of prompt engineering. The paper could hence provide insights into the design and effectiveness of complex prompting mechanisms, which is relevant for the study of prompt engineering. However, since the paper's primary focus is not on prompt engineering but on improving translation quality through interaction, the rating is not a full 10.",http://arxiv.org/pdf/2301.10309 -fast and constrained absent keyphrase generation by prompt-based learning,7,"The prompt engineering relevance of the study is substantial, considering it details a novel approach for keyphrase generation using prompt-based learning, which falls under the domain of controlled natural language generation—a key aspect of prompt engineering. The proposed method's constrained generation technique, which uses prompts derived from keywords to guide the production of absent keyphrases, is closely related to the concept of 'hard prefix prompts' where prompts direct the generative process. Although the main focus of the study is on efficient and consistent keyphrase generation rather than prompt engineering per se, the techniques employed for creating and utilizing prompts in the learning process have significant implications for the field of prompt engineering. It demonstrates a method to control and speed up the language generation process, which are key challenges in the development of efficient prompt engineering strategies. 
Nonetheless, the relevance is not given a full score as the primary focus seems to be on absent keyphrase generation rather than on prompt engineering itself.",https://ojs.aaai.org/index.php/AAAI/article/download/21402/21151 -cold-start data selection for better few-shot language model fine-tuning: a prompt-based uncertainty propagation approach,9,"The abstract describes a study focusing on a prompt-based data selection method (PATRON) for fine-tuning pre-trained language models, which is highly relevant to prompt engineering. The mention of designing a prompt-based uncertainty propagation approach directly involves the development and refinement of prompts, and thus contributes to the study of prompt engineering. The 'partition-then-rewrite (PTR) strategy' is slightly less relevant to the core concept of 'hard prefix prompts' but still within the domain of prompt engineering. The only reason the rating is not a full 10 is that the detailed application to 'hard prefix prompts' is not specified, as this technique seems broader than just hard prefix prompts, covering aspects such as data selection and sample diversity.",https://aclanthology.org/2023.acl-long.141.pdf -visual prompt based personalized federated learning,7,"The paper presents a novel personalized federated learning framework that uses visual prompts to capture the data distribution information of clients. This approach is relevant to the study of prompt engineering because it involves the use of prompts (visual in this case) as a mechanism to improve the performance of machine learning models. While the term 'hard prefix prompts' typically refers to textual prompts, the use of visual prompts in this context is an extension of the idea to the visual domain. Hence, the relevance is substantial due to the innovation in prompt utilization, although it may not directly address 'hard prefix prompts' as understood in natural language processing.",http://arxiv.org/pdf/2303.08678 -memobert: pre-training model with prompt-based learning for multimodal emotion recognition,8,"The paper's abstract discusses the use of a prompt-based method in the context of multimodal emotion recognition, which is highly relevant to prompt engineering. The relevance is underscored by the fact that prompt-based learning is used to redefine a downstream task, which is a core area of interest in prompt engineering studies. However, the focus on emotion recognition rather than hard prefix prompts specifically means it is not entirely focused on prompt engineering, hence the rating is not a perfect 10.",https://arxiv.org/pdf/2111.00865 -prompt-based text entailment for low-resource named entity recognition,7,"The abstract discusses a methodology for adapting pre-trained language models to named entity recognition tasks by changing the task to text entailment with entity type-specific prompts. This is related to prompt engineering as it involves crafting prompts to interact with language models and manipulate their behavior to improve performance on specific tasks without extensive labeled data. However, the term 'hard prefix prompt' is not explicitly mentioned, indicating that the study might not be focused on hard prefix prompts specifically but rather on prompt-based methods in a broader sense.
The relevance is significant due to the use of prompts in adjusting language model behavior but is not fully aligned with a study specifically on hard prefix prompts.",https://arxiv.org/pdf/2211.03039 -consprompt: easily exploiting contrastive samples for few-shot prompt learning,9,"The title and abstract indicate the study is highly relevant to prompt engineering. It discusses the development of a model (ConsPrompt) that leverages contrastive samples to enhance the fine-tuning process in prompt learning, particularly in few-shot settings. The paper's focus on finding strategies for more effective prompt initialization and improving the robustness of prompt learning aligns well with the topic of prompt engineering. It offers a novel approach, aligns with current challenges in the field, and claims to set a new standard for performance and robustness in few-shot learning tasks.",https://arxiv.org/pdf/2211.04118 -towards informative few-shot prompt with maximum information gain for in-context learning,9,"The study addresses a fundamental aspect of prompt engineering by exploring the effect of data example selection on the stability and performance of LLMs in few-shot scenarios. The introduction of a method to quantify Information Gain from data examples and the proposal to choose examples with maximum IG are directly relevant to enhancing prompt design. Additionally, the identification and mitigation of template bias in assessing IG can improve the quality of prompt engineering. While not exclusively focused on 'hard prefix prompts', this work contributes to the broader field of prompt engineering, thus receiving a high relevance rating.",https://arxiv.org/pdf/2310.08923 -commonsense knowledge-aware prompt tuning for few-shot nota relation classification,9,"The paper presents a study on commonsense knowledge-aware prompt tuning, which is directly related to prompt engineering as it discusses constructing relation-oriented templates and incorporating external knowledge for improving pre-trained language model performance in few-shot tasks. This is highly relevant to the field of prompt engineering, as it deals with optimizing prompts to effectively utilize the knowledge within language models. The only reason it doesn't receive a full 10 is that the focus is specifically on NOTA relation classification, which is a subset of the broader field of prompt engineering.",https://www.mdpi.com/2076-3417/12/4/2185/pdf?version=1645269904 -dual context-guided continuous prompt tuning for few-shot learning,9,"The abstract describes a research work that is highly relevant to prompt engineering, specifically in the niche of continuous prompt tuning methods. The paper introduces a novel method to improve the efficiency of prompts in few-shot learning scenarios, which is a direct contribution to the field of prompt engineering. The proposal of dual context-guided continuous prompts (DCCP) and the discussion of its advantages over existing methods highlight its significance for studies on how prompts influence the performance of NLP models. 
The reason for not giving a full score of 10 is that while the paper is highly relevant, it may not cover the 'hard prefix prompts' aspect mentioned in the original prompt but focuses more broadly on continuous prompt tuning.",https://aclanthology.org/2022.findings-acl.8.pdf
-a dual prompt learning framework for few-shot dialogue state tracking,8,"The paper describes the application of prompt learning in the context of Dialogue State Tracking (DST), which is a highly relevant area within natural language processing for task-oriented dialogue systems. The use of dual prompts and the idea of formulating the DST task as a language modeling task under few-shot settings directly concerns the engineering of prompts for effective model learning with limited data. The relevance to prompt engineering is high because it explores how to use prompts to assist pre-trained language models in understanding and generating dialogue states, which is an innovative approach to embed task-specific knowledge into the language model's processes. The paper's focus on incorporating task-related knowledge into prompts for language models aligns with prompt engineering objectives, such as improving model performance on targeted tasks with minimal examples. However, it does not cover all aspects of prompt engineering, such as the systematic study of different types of prompts (e.g., hard prefixes), hence the rating is not a full 10.",https://dl.acm.org/doi/pdf/10.1145/3543507.3583238
-multi-task pre-training of modular prompt for few-shot learning,9,"The abstract pertains directly to the field of prompt engineering, discussing an approach to improving few-shot learning in language models through pre-trained modular prompts (MP2). This is highly relevant to prompt engineering as it addresses enhancing the adaptability and efficiency of prompt tuning, which is a core aspect of the application of language models to downstream tasks. It presents empirical results showing the method's superiority over traditional prompt tuning and full model tuning in few-shot settings. The relevance is not rated a full 10 because the abstract mentions the specific application to Chinese tasks, which might not cover the full breadth of the general field of prompt engineering, but it is otherwise highly pertinent.",http://arxiv.org/pdf/2210.07565
-idiapers @ causal news corpus 2022: efficient causal relation identification through a prompt-based few-shot approach,8,"The paper's methodology is highly relevant to prompt engineering as it specifically deals with fine-tuning language models using prompts in a few-shot learning configuration. The approach treats a specialized task, Causal Relation Identification, as a masked language modeling problem, which aligns with the concept of utilizing prompts to steer LMs towards desired outputs without extensive training data. This suggests relevance to prompt-engineering techniques, although it is not a direct study on 'hard prefix prompts,' which might be a specific subset of prompt engineering.",http://arxiv.org/pdf/2209.03895
-few-shot natural language inference generation with pdd: prompt and dynamic demonstration,7,"The study introduces a framework to improve performance in few-shot natural language inference generation tasks by incorporating prompts and dynamic demonstrations within a language model. Although it does not directly study 'hard prefix prompts', it is relevant to prompt engineering because it involves the development of prompts and their application to enhance model performance in natural language processing tasks. The improvements on benchmark datasets and the claim of good generalizability suggest that the techniques used could potentially inform prompt engineering strategies, particularly in few-shot learning contexts.",https://arxiv.org/pdf/2205.10593
-discriminative language model as semantic consistency scorer for prompt-based few-shot text classification,9,"The paper introduces a finetuning method for text classification using prompts, which is highly relevant to prompt engineering. ELECTRA, being a language model used to distinguish between genuine and artificially generated text, contributes to the creation and evaluation of prompts, indicating a direct application to prompt engineering. This method is focused on improving the performance of language models in few-shot learning scenarios, which is a subset of prompt engineering. The rating is not a perfect 10 because the paper appears to be more focused on the application of a discriminative language model rather than on the prompt engineering process itself.",http://arxiv.org/pdf/2210.12763
-a study on prompt-based few-shot learning methods for belief state tracking in task-oriented dialog systems,8,"The study is highly relevant to prompt engineering as it explores prompt-based few-shot learning, which directly relates to the development and use of prompts in the training of language models. The formulation of DST as a prompt-based task indicates a significant engagement with prompt design and optimization, which is a core aspect of prompt engineering. The empirical analysis of the performance of these prompt-based methods contributes to the understanding of their effectiveness, which is crucial for prompt engineering research. The study might not be focused exclusively on 'hard prefix prompts' as mentioned in the systematic review title, but it addresses a related and important aspect of the field.",http://arxiv.org/pdf/2204.08167
-adaptive prompt learning-based few-shot sentiment analysis,9,"The paper appears highly relevant to prompt engineering as it proposes an adaptive prompt learning method for few-shot sentiment analysis, directly addressing the construction of prompts. The specific focus on adaptive prompts demonstrates an advanced application of prompt engineering aimed at improving the effectiveness of language models in few-shot learning scenarios. The only reason it is not rated a full 10 is due to the lack of information on 'hard prefix prompts', which may be a specific subset of the broader prompt engineering field.",https://arxiv.org/pdf/2205.07220
-investigating prompt learning for chinese few-shot text classification with pre-trained language models,8,"The abstract describes a study on a prompt-based framework for Chinese text classification, especially in few-shot learning scenarios, which is highly relevant to prompt engineering. However, it specifically addresses the adaptation of prompt-based methods for Chinese language tasks, which may not be directly applicable to the concept of 'hard prefix prompts' as it is not clear if the techniques are universally applicable to other languages or specific to Chinese. Therefore, while the study is related to prompt engineering, the rating is not a full 10 due to potential limitations in generalizability.",https://www.mdpi.com/2076-3417/12/21/11117/pdf?version=1667385041
-psp: pre-trained soft prompts for few-shot abstractive summarization,9,"The abstract provided discusses a novel methodology for few-shot abstractive summarization that relates closely to prompt engineering. It introduces a new concept of soft prompts along with a training paradigm focused on these prompts. Although the study introduces 'soft prompts' rather than 'hard prefix prompts', it is still highly relevant due to its focus on the broader area of prompt tuning and engineering for model performance improvement. This contribution to prompt architecture and training directly informs how prompts can be effectively implemented and optimized in various machine learning scenarios. The difference in the type of prompts (soft vs. hard prefix) results in a rating of 9 instead of a perfect 10.",http://arxiv.org/pdf/2204.04413
-decomposed two-stage prompt learning for few-shot named entity recognition,8,"The study presents a novel approach to prompt learning within the task of Named Entity Recognition (NER) in a few-shot setting, which is directly related to prompt engineering as it contributes to advancements in precision and efficiency of using prompts in machine learning models. The relevance to prompt engineering is high because it involves creating and using prompts specifically designed to improve the performance of NER tasks. The deduction of points is due to the specificity of the application to NER rather than a broader exploration of prompt engineering in general.",https://www.mdpi.com/2078-2489/14/5/262/pdf?version=1682672547
-few-shot table-to-text generation with prompt planning and knowledge memorization,8,"The study presents a framework called PromptMize, intended for table-to-text generation within few-shot learning scenarios, which focuses on the design of prompts to guide pre-trained language models. While it does not specifically address 'hard prefix prompts', it is highly relevant to the field of prompt engineering due to its emphasis on designing prompts (prompt planner) to bridge the gap between different data modalities (tabular data and text). This is a direct application of prompt engineering techniques in the context of natural language generation from structured data, and it advances the domain by integrating domain-specific knowledge into the prompting process. Therefore, this study should be of significant interest for those researching or studying prompt engineering, albeit not directly focused on hard prefix prompts.",https://arxiv.org/pdf/2302.04415
-locoop: few-shot out-of-distribution detection via prompt learning,8,"The abstract describes an advancement in prompt learning specifically applied to few-shot out-of-distribution detection in the context of a vision-language model, which is relevant to the field of prompt engineering. However, the study focuses more on the application of prompt learning for improving OOD detection rather than the structure, phrasing, or systematic review of 'hard prefix' prompts. Despite this, the introduction of a local regularization technique called LoCoOp that enhances performance in prompt-based models indicates a significant contribution to the prompt engineering domain, particularly in algorithmic improvement for better model generalization. Therefore, it is not a perfect match to the study of 'hard prefix prompts,' but it is closely related due to its focus on enhancing prompt learning methods.",http://arxiv.org/pdf/2306.01293
-few-shot joint multimodal aspect-sentiment analysis based on generative multimodal prompt,8,"The study introduces a Generative Multimodal Prompt model within the context of Multimodal Aspect-Based Sentiment Analysis, a subfield of prompt engineering related to few-shot learning. Prompt engineering typically involves crafting inputs that guide machine learning models, especially in few-shot or zero-shot settings. The relevance to prompt engineering is substantiated by the creation and use of prompts to handle multimodal data when labeled instances are sparse. This implies a strong connection to the strategies involved in prompt engineering. However, the study is specifically targeted at multimodal data and aspect-sentiment analysis, and it doesn't cover the entire breadth of prompt engineering, which may also include text-only or other single-modality frameworks. Thus, the relevance is rated as high but not absolute.",http://arxiv.org/pdf/2305.10169
-partseg: few-shot part segmentation via part-aware prompt learning,9,"The paper presents a method for few-shot part segmentation by leveraging a part-aware prompt learning technique, which directly relates to the process of prompt engineering. The relevance is high because prompt engineering involves generating inputs that help models like CLIP better interpret and process information, which is what the paper appears to be achieving with its part-specific prompts. It's not a perfect 10 because the paper is application-specific (focused on few-shot part segmentation), whereas prompt engineering can also encompass broader methodologies and applications beyond this context.",https://arxiv.org/pdf/2308.12757
-evolutionary verbalizer search for prompt-based few shot text classification,9,"The given abstract describes research focused on improving prompt-based tuning, specifically within the realm of few-shot text classification by developing a novel evolutionary verbalizer search (EVS) algorithm. Since prompt-based tuning is a direct application of prompt engineering, and this paper deals with the construction of optimal verbalizers, which are integral to the functioning of prompt-based models, its relevance to prompt engineering is high. However, it doesn't cover every aspect of prompt engineering, such as hard prefix prompts specifically, thus warranting a slightly less than perfect score.",http://arxiv.org/pdf/2306.10514
-a chinese few-shot text classification method utilizing improved prompt learning and unlabeled data,8,"The abstract discusses a method for Chinese few-shot text classification (FSTC) that employs an improved prompt learning technique, indicating a close relevance to prompt engineering. It details an approach for creating and optimizing prompt prefixes specifically designed for Chinese, which falls directly within the study of prompt engineering. The method's use of multiple masks in prompt learning and its application in a semi-supervised context with unlabeled data enhance the relevance. The reason for not giving a full 10 is that the focus is heavily on the application to Chinese text and the improvement of performance in FSTC; the abstract does not broadly address various aspects of prompt engineering beyond its specific use case.",https://www.mdpi.com/2076-3417/13/5/3334/pdf?version=1678093925
-prompt-based zero- and few-shot node classification: a multimodal approach,7,"The study mentioned in the abstract focuses on the use of prompts in a multimodal approach for node classification, which is relevant to the field of prompt engineering in the context of machine learning. The 'prompt- and graph-based module' specifically indicates that prompts are engineered as part of the model to handle zero-shot scenarios, which is an application of prompt engineering. However, the primary focus seems to be on integrating text and graph modalities rather than on the systematic review of hard prefix prompts, which would more directly address the prompt engineering study. Thus, while the study is relevant due to the inclusion of prompts in the machine learning model, it may not fully represent a comprehensive review strictly on prompt engineering with 'hard prefix prompts'.",https://arxiv.org/pdf/2307.11572
-dreamartist: towards controllable one-shot text-to-image generation via contrastive prompt-tuning,7,"The paper discusses 'contrastive prompt-tuning,' which is a technique relevant to prompt engineering. Since prompt engineering involves methods to efficiently communicate with AI models, and in this case, to control text-to-image generation, the paper's subject is pertinent to the field. However, it doesn't focus on the 'hard prefix prompts,' which the initial request emphasizes. Therefore, the relevance is substantial but not entirely on point with the specific systematic review criteria stated.",http://arxiv.org/pdf/2211.11337
-one-shot and partially-supervised cell image segmentation using small visual prompt,7,"The abstract describes a framework for cell image segmentation that uses concepts from prompt learning, which is related to the field of prompt engineering. While the main focus is on the application of these concepts to one-shot and partially-supervised learning for cell image segmentation, the utilization of 'small prompt images' and the attention given to prompt learning techniques in the study suggest relevance to prompt engineering. However, it does not appear to closely study hard prefix prompts as applied in NLP or broader prompt engineering, hence it is not a perfect match for the prompt engineering study.",https://arxiv.org/pdf/2304.07991
-pøda: prompt-driven zero-shot domain adaptation,8,"The paper is highly relevant to prompt engineering because it introduces a novel methodology that utilizes natural language prompts to drive the process of zero-shot domain adaptation. Though it does not focus specifically on 'hard prefix prompts', it does explore the role of prompts in guiding the adaptation of models to new domains, which is an essential aspect of prompt engineering in the broader sense. The use of CLIP and the approach to optimize feature transformations based on target text embeddings are elements that connect closely to the principles of prompt engineering, which includes crafting prompts to guide model behavior.",https://arxiv.org/pdf/2212.03241
-prompt-based extraction of social determinants of health using few-shot learning,7,"The study described in the abstract involves the use of one-shot prompting with GPT-4 to extract social determinants of health from unstructured text. This is relevant to prompt engineering because it focuses on the methodology of leveraging language models through prompts to achieve a specific task. While it does not directly study 'hard prefix prompts', which suggests a more specific kind of prompt engineering, the exploration of one-shot prompts and their comparison to traditional supervised approaches is within the broader domain of prompt engineering. Therefore, its relevance is high but not entirely focused on hard prefix prompts, warranting a rating of 7.",http://arxiv.org/pdf/2306.07170
-augmenters at semeval-2023 task 1: enhancing clip in handling compositionality and ambiguity for zero-shot visual wsd through prompt augmentation and text-to-image diffusion,7,"The paper focuses on enhancing the performance of the CLIP model by addressing issues related to prompt engineering, such as the compositionality and ambiguity in natural language and generating more contextual prompts using large language models. While it is not specifically about 'hard prefix prompts', it does involve an in-depth look at modifying and improving prompts for better results, which is relevant to the broader field of prompt engineering study.",https://arxiv.org/pdf/2307.05564
-enhancing black-box few-shot text classification with prompt-based data augmentation,7,"The provided abstract focuses on the use of large-scale language models (LLMs) like GPT-3 for few-shot text classification and explores a method of interacting with them purely through their inference APIs, without requiring access to the gradients. The relevance to prompt engineering is found in the application of prompt-based data augmentation, which is a technique integral to the practice of prompt engineering. Although the primary focus seems to be on the black-box modeling approach and parameter-efficient adaptation, the utilization of prompts to augment data for better performance in few-shot scenarios suggests that the research contributes to the prompt engineering field. It does not, however, directly address a systematic review on hard prefix prompts, which would be the core topic of a prompt engineering study. Hence, the relevance is significant but not complete, leading to a rating of 7.",http://arxiv.org/pdf/2305.13785
-lm-cppf: paraphrasing-guided data augmentation for contrastive prompt-based few-shot fine-tuning,8,"The paper 'lm-cppf: paraphrasing-guided data augmentation for contrastive prompt-based few-shot fine-tuning' directly relates to prompt engineering as it discusses the use of prompt-based tuning in the context of language model fine-tuning. Since prompt engineering fundamentally involves crafting input prompts to elicit the desired output from a language model, this paper's focus on leveraging paraphrasing-guided augmentation within the prompt-based few-shot fine-tuning framework demonstrates an application of prompt engineering. The relevance is not rated as a perfect 10 because the study seems to emphasize data augmentation and contrastive learning in addition to prompt-based methods rather than focusing solely on prompt engineering.",https://aclanthology.org/2023.acl-short.59.pdf
-syntax-aware hybrid prompt model for few-shot multi-modal sentiment analysis,9,"The paper describes a novel approach to prompt engineering by integrating hand-crafted and learnable prompts within a hybrid model for few-shot multi-modal sentiment analysis. Since prompt engineering involves crafting input prompts to guide models, especially in few-shot learning scenarios, this paper is highly relevant to prompt engineering studies. It also touches upon optimizing prompt encoders using attention mechanisms, which is a sophisticated technique within this field. The only reason it doesn't receive a full 10 is that it is specific to sentiment analysis and may not cover the entire breadth of prompt engineering applications.",https://arxiv.org/pdf/2306.01312
-enhancing few-shot ner with prompt ordering based data augmentation,7,"The relevance is fairly high because the paper discusses a Prompt Ordering based Data Augmentation (PODA) method, which is related to prompt engineering in that it involves manipulating data to improve the performance of language models in a low-resource setting. Prompt engineering typically involves crafting prompts that guide the model's predictions or generative capabilities, and while this method is specifically targeting a data augmentation approach for named entity recognition, it is relevant insofar as it involves ordered prompts and their effect on the training process. However, it does not directly address 'hard prefix prompts' or a broader range of prompt engineering outside the context of few-shot NER, hence the rating is not a full 10.",http://arxiv.org/pdf/2305.11791
-image-object-specific prompt learning for few-shot class-incremental learning,8,"The study presents a novel training framework in the context of few-shot class-incremental learning (FSCIL), incorporating the use of specialized prompts, which are biased towards specific attributes of class objects to guide the learning process. This biasing through prompts is relevant to prompt engineering as it involves the strategic use of prompts to direct the model's attention to specific features, which is an integral concept in prompt engineering. The use of key-prompt pairs is directly associated with designing effective prompts. While the study does not explicitly state 'hard prefix prompts' or a comprehensive review on it, it does demonstrate practical application and manipulation of prompts in a machine learning context, which is relevant to the broader field of prompt engineering.",https://arxiv.org/pdf/2309.02833
-overcoming catastrophic forgetting in zero-shot cross-lingual generation,9,"The abstract discusses the use of prompt tuning, a parameter-efficient adaptation technique, to overcome challenges in zero-shot cross-lingual generation, which is directly relevant to prompt engineering. The study focuses on how prompts can be engineered and factored to enable a generative multilingual model to perform tasks in languages it wasn't explicitly trained on, without catastrophic forgetting. Although it does not specifically mention 'hard prefix prompts,' the concept of prompt tuning is a crucial part of prompt engineering studies, so the relevance to the broader field of prompt engineering is high.",http://arxiv.org/pdf/2205.12647
-nearest neighbor zero-shot inference,7,"The abstract presents kNN-Prompt, a k-nearest neighbor Language Model with expanded verbalizers, which is relevant to the study of prompt engineering because it involves the automatic expansion of prompts for improved zero-shot learning. While the study emphasizes retrieval-augmented language models and zero-shot inference rather than directly focusing on 'hard prefix prompts,' the concept of expanding verbalizers to include synonyms directly pertains to the engineering of prompts to enhance model performance. Thus, the relevance to prompt engineering study is significant, though not entirely focused on 'hard prefix' prompts specifically.",https://arxiv.org/pdf/2205.13792
-kecp: knowledge enhanced contrastive prompting for few-shot extractive question answering,7,"The abstract describes an approach involving a novel method of prompt-tuning, which is highly relevant to prompt engineering studies. The focus on Knowledge Enhanced Contrastive Prompt-tuning (KECP) is especially pertinent to the field as it introduces a non-conventional method of leveraging prompts through external knowledge bases and contrastive learning objectives. Nevertheless, since the study doesn't specifically address 'hard prefix prompts' but rather a broader prompt-tuning strategy for EQA, the rating is not a full 10.",http://arxiv.org/pdf/2205.03071
-cross-lingual retrieval augmented prompt for low-resource languages,7,"The study described in the abstract is relevant to prompt engineering because it discusses the creation and use of a pipeline (PARC) that augments prompts to enhance the performance of Multilingual Pretrained Language Models (MPLMs) in zero-shot learning scenarios for low-resource languages. This directly relates to the field of prompt engineering, as it involves designing and manipulating prompts to improve language model performance. However, it may not be directly related to 'hard prefix prompts,' as it does not specify the nature of the prompts used (whether hard-coded, soft, or another type). The focus is more on cross-lingual retrieval and augmentation rather than the systematic review of the prompt types or their design characteristics, hence the rating is not a full 10.",https://arxiv.org/pdf/2212.09651
-indirect: language-guided zero-shot deep metric learning for images,7,"The abstract introduces Language-Guided Zero-Shot Deep Metric Learning (LanZ-DML), which emphasizes the use of natural language text prompts to control image representation without the need for training data. The model InDiReCT mentioned utilizes CLIP to transfer the variation in text prompt embeddings to the image embedding space. Although the study focuses on the metric learning aspect and the application in image retrieval systems, it is highly relevant to prompt engineering because it involves the use of text prompts to guide a zero-shot learning process. This showcases an intricate way that prompts can interact with deep learning models to influence their behavior. However, it does not directly address hard prefix prompts or a systematic review of such, which limits the rating to a 7.",https://arxiv.org/pdf/2211.12760
-jurassic is (almost) all you need: few-shot meaning-to-text generation for open-domain dialogue,8,"The given title and TLDR indicate research related to few-shot meaning-to-text generation using semantic prompts. This is relevant to prompt engineering as it specifically pertains to the utilization of prompts to guide natural language generation (NLG) systems to produce text in a conversational context. Despite not explicitly mentioning 'hard prefix prompts', the study appears to contribute to the broader field of prompt-based learning and NLG. Hence, the rating is high but not maximum, due to the lack of direct reference to 'hard prefix prompts'.",https://arxiv.org/pdf/2110.08094
-prompt scoring system for dialogue summarization using gpt-3,8,"The abstract provided discusses the development of a scoring system specifically designed for improving few-shot training performance in the context of dialogue summarization with GPT-3, which involves an aspect of prompt engineering. Prompt engineering is integral to optimizing few-shot learning techniques by crafting effective prompts that guide language models like GPT-3 to perform specific tasks. The research focuses on the structure of dialogues and how tuned prompts can enhance the summarization task, which is highly relevant to the study of prompt engineering. Although the paper does not explicitly mention 'hard prefix prompts', it addresses the broader subject of prompt design and effectiveness, thus earning a high relevance rating. The 2-point deduction from a perfect score is due to the lack of specificity regarding 'hard prefix prompts', which may be a more narrow area within prompt engineering.",https://www.techrxiv.org/articles/preprint/Prompt_scoring_system_for_dialogue_summarization_using_GPT-3/16652392/2/files/35289613.pdf
-is a prompt and a few samples all you need? using gpt-4 for data augmentation in low-resource classification tasks,8,"The described study is highly relevant to prompt engineering as it directly involves using prompts to leverage GPT-4 and ChatGPT for the purpose of data augmentation in classification tasks. Prompt engineering is a core component of this because the quality of the generated synthetic data heavily depends on the design and effectiveness of the prompts used. Although the study does not exclusively focus on 'hard prefix prompts,' it covers an application of prompts that is central to understanding and improving the use of language models in low-resource situations. The only reason the rating is not a 10 is that it does not specifically mention 'hard prefix prompts' or explore a comprehensive systematic review of such prompts, rather it looks at practical applications of prompt-related techniques for data augmentation.",http://arxiv.org/pdf/2304.13861
-residual prompt tuning: improving prompt tuning with residual reparameterization,9,"The abstract presents a study that directly addresses improvements in prompt tuning, which is an essential aspect of prompt engineering. The introduction of Residual Prompt Tuning as a method that advances the performance and stability of prompt tuning is highly relevant to engineers and researchers working with language models. The fact that it outperforms standard prompt tuning and shows robustness against various hyper-parameters and initializations makes it a significant contribution to the study of prompt engineering. The reason the rating is not a perfect 10 is that the abstract doesn't directly address 'hard prefix prompts', but it is relevant to the broader field of prompt engineering.",http://arxiv.org/pdf/2305.03937
-ds4dh at mediqa-chat 2023: leveraging svm and gpt-3 prompt engineering for medical dialogue classification and summarization,8,"The study described in the title uses prompt engineering as a part of its methodology to generate summaries for medical dialogues using GPT-3.5. Even though the study focuses on a specific application of prompt engineering within the medical field and combines it with Support Vector Machines (SVM) for classification tasks, the use of one-shot prompts to operate with GPT-3.5 embeds elements of prompt engineering which are relevant to the study of this domain. The relevance is not rated a full 10 due to the specificity of the application (medical dialogues), as opposed to a broader coverage of hard prefix prompts in prompt engineering.",https://access.archive-ouverte.unige.ch/access/metadata/290c4289-0017-45ec-baa9-ff2fdd7948f9/download
-soft prompt tuning for augmenting dense retrieval with large language models,8,"The article presents a novel approach for enhancing dense retrieval through the use of soft prompt tuning with large language models, which is a technique within the scope of prompt engineering. This is closely relevant to the study of prompt engineering since it involves the optimization of prompts to improve the performance of language model tasks. Although the study focuses specifically on 'soft' prompt tuning rather than 'hard' prefix prompts, the methods and insights from soft prompt tuning contribute to the broader understanding of how prompts can influence language model behavior and performance. Therefore, the relevance is high but not absolute, hence the rating of 8.",https://arxiv.org/pdf/2307.08303
-self-prompting large vision models for few-shot medical image segmentation,8,"The abstract discusses the application of a segmentation model (SAM) in medical image analysis and introduces a novel technique for self-prompting in the context of few-shot learning. This is highly relevant to prompt engineering as it deals directly with how to leverage and optimize prompts for a model to improve its performance, especially in a domain like medical imaging where data can be scarce. The self-prompting approach relies on prompt tuning strategies which are an integral part of prompt engineering. The rating is not a full 10 because the abstract does not specifically mention 'hard prefix prompts' or the systematic review aspect of prompt engineering, which would cover a broader scope including various strategies beyond the one mentioned in the paper.",https://arxiv.org/pdf/2308.07624
-multi-mask label mapping for prompt-based learning,8,"The abstract discusses a novel prompt-based learning method called Multi-Mask Label Mapping (MMLM) that is designed to address the issues of misleading lexical cues in few-shot learning. Although the study does not specifically mention 'hard prefix prompts', its focus on improving prompt-based learning through strategic label mapping and instance augmentation is very relevant to the field of prompt engineering. Given that prompt engineering involves crafting prompts to effectively communicate with a model, the methodology proposed in this study could potentially be applied to the study of hard prefix prompts, thereby enhancing the state of prompt engineering. The deducted points are due to the lack of direct reference to 'hard prefix prompts', which was the specific focus of the prompt engineering study mentioned.",https://ojs.aaai.org/index.php/AAAI/article/download/26579/26351
-prompts can play lottery tickets well: achieving lifelong information extraction via lottery prompt tuning,8,"The relevance to prompt engineering is high, given that the abstract discusses a novel prompt tuning method called Lottery Prompt Tuning (LPT) which directly pertains to modifying prompts in the context of a universal information extraction system trained for lifelong learning. Prompt engineering broadly encompasses the tweaking and optimization of prompts to improve the performance of language models, and the LPT method falls within this field. Although it is not explicitly focused on 'hard prefix prompts', the study of prompt tuning methods is a significant aspect of prompt engineering. Therefore, the relevance is rated as an 8, with some points deducted because the description might not target 'hard prefix prompts' specifically but rather a related area within prompt engineering.",https://aclanthology.org/2023.acl-long.16.pdf
-tuning multi-mode token-level prompt alignment across modalities,9,"The presented abstract discusses a novel approach to prompt tuning that emphasizes token-level prompt alignment across different modalities, which is a specific aspect of prompt engineering. Although it does not explicitly address 'hard prefix prompts,' it concentrates on the generalizable and nuanced aspects of prompt tuning in the context of vision-language models, which is highly relevant to the field of prompt engineering. The focus on multi-mode prompts and token-level alignment is crucial for fine-tuning prompt-based models, which is why it receives a high relevance rating.",https://arxiv.org/pdf/2309.13847
-metricprompt: prompting model as a relevance metric for few-shot text classification,8,"The paper described is highly relevant to the field of prompt engineering as it discusses MetricPrompt, a method that directly addresses the optimization of prompt design for text classification tasks. It specifically tackles the challenge of designing verbalizers and leverages the power of prompting models as relevance metrics, which falls within the domain of prompt engineering. The relevance rating is not a perfect 10 because, while the study is related to prompt engineering, the term 'hard prefix prompts' is not explicitly mentioned, and it is unclear how closely the proposed MetricPrompt methodology aligns with 'hard prefix prompts' specifically.",https://arxiv.org/pdf/2306.08892
-speak foreign languages with your own voice: cross-lingual neural codec language modeling,7,"The abstract describes the use of speech in the source language as a 'prompt' to generate speech in another language, preserving the voice, emotion, and acoustic environment of the original speaker. Although the term 'prompt' in this context does not directly refer to 'hard prefix prompts' as used in prompt engineering for text-based language models, it is relevant as it shows an application of prompts in a different but related domain of language processing and AI, i.e., speech synthesis and cross-lingual translation. The technology leverages in-context learning similar to how prompts are used in text-based models to guide the generation of synthetic speech, suggesting a form of prompt engineering in a speech synthesis model. Therefore, the rating is moderately high for relevance to the broader field of prompt engineering but is not a direct match since it pertains to speech rather than text-based prompting.",http://arxiv.org/pdf/2303.03926
-pointclip v2: adapting clip for powerful 3d open-world learning,7,"The abstract discusses leveraging large-scale language models to automatically design a more descriptive 3D-semantic prompt for CLIP’s textual encoder, indicating a study or application of prompt engineering to improve performance in 3D classification tasks. While it does not explicitly focus on 'hard prefix prompts,' it does deal with the broader topic of prompt engineering in the context of a real-world application—enhancing the compatibility of language-image pre-training models with 3D point cloud data. Therefore, the study is relevant to the subject of prompt engineering but perhaps less so to the specific aspect of 'hard prefix prompts.'",https://arxiv.org/pdf/2211.11682
-image segmentation using text and image prompts,8,"The study presents a system for generating image segmentations based on arbitrary prompts, which directly involves prompt engineering as it requires understanding and designing prompts that the model can interpret accurately. The use of text and image prompts to dictate model behavior demonstrates a practical application of prompt engineering. However, the specifics of 'hard prefix prompts' mentioned in the study inquiry are not directly addressed, so it may not fully cover the systematic review aspect of the inquiry but is still highly relevant to the field of prompt engineering.",https://arxiv.org/pdf/2112.10003
-sega: instructing diffusion using semantic dimensions,7,"The studied paper 'sega: instructing diffusion using semantic dimensions' discusses a method for providing semantic control over text-to-image diffusion models through something called SEGA. Although it doesn't directly address 'hard prefix prompts,' it is highly relevant to the field of prompt engineering because it focuses on improving the interaction between user inputs and the model's output. Such research contributes to the broader understanding of how to engineer prompts to achieve desired results, which is a crucial aspect of prompt engineering. The relevance to 'hard prefix prompts' itself is indirect but still significant due to the overlap in goals of increasing control over generative models' responses to textual prompts.",http://arxiv.org/pdf/2301.12247
-learnable ophthalmology sam,8,"The provided abstract and TLDR indicate a study that involves a form of prompt engineering, as it discusses a 'learnable prompt layer' in the context of a deep learning model for ophthalmology image analysis. This is pertinent to prompt engineering study, specifically within the domain of medical image analysis, as it involves the tailoring of prompts (inputs to the model which guide its responses) to improve performance on specialized tasks. The connection to 'hard prefix prompts' is not directly stated, but the concept of learnable prompts closely relates to the broader field of prompting techniques in machine learning, hence the relevance to prompt engineering studies.",http://arxiv.org/pdf/2304.13425
-prompting multilingual large language models to generate code-mixed texts: the case of south east asian languages,8,"The study is highly relevant to prompt engineering as it investigates how different prompt templates and language pairings affect the capability of multilingual large language models to generate code-mixed texts, which is a key aspect of designing effective prompts. While the study does not focus exclusively on 'hard prefix prompts', it does explore the broader topic of how prompts can influence the output of language models. This falls within the range of studies related to prompt engineering. The findings have implications for how one should engineer prompts for multilingual contexts, particularly in the domain of code-mixing.",https://arxiv.org/pdf/2303.13592
-the stable artist: steering semantics in diffusion latent space,7,"The abstract describes an approach that improves the precision of text-conditioned generative diffusion models, which is relevant to prompt engineering because it addresses the challenge of achieving fine-grained control over generated content through text input modifications. While the study's focus is on image generation and editing, the semantic guidance technique is applicable to the broader concept of steering output in response to precise prompt adjustments. The relevance rating is not higher because the study is not specifically about hard prefix prompts or their systematic review but instead about a related yet distinct area of prompt-based control in generative models.",http://arxiv.org/pdf/2212.06013
-internet-augmented language models through few-shot prompting for open-domain question answering,8,"The study focuses on the utilization of few-shot prompting to enhance language models' ability to answer open-domain questions by conditioning the responses on web-searched information. Although it does not specifically mention 'hard prefix prompts,' it is highly relevant to the field of prompt engineering since it explores methodologies for effective prompting to improve information retrieval and question-answering capabilities of language models. This closely aligns with the goal of prompt engineering, which is to design prompts that enable language models to perform specific tasks more accurately. Therefore, the relevance rating is high.",https://arxiv.org/pdf/2203.05115
-few-shot prompting towards controllable response generation,8,"The paper discusses an advanced application of prompt-based learning in the context of chatbot response generation, showing relevance to the field of prompt engineering. The use of prompting combined with reinforcement learning to direct model output without parameter access aligns with the concept of hard prefix prompts because it explores methods of prompt manipulation for controllable outcomes. The emphasis on few-shot learning for task generalization is also pertinent to prompt engineering as it demonstrates efficiency in prompt application. Though the study doesn't solely focus on hard prefix prompts, its methods and objectives are closely related to the core ideas of engineering prompts for language models.",http://arxiv.org/pdf/2206.03931
-zero- and few-shot prompting with llms: a comparative study with fine-tuned models for bangla sentiment analysis,7,"The study's focus on zero- and few-shot prompting with language models is closely related to prompt engineering, as it deals with the efficacy of prompts when minimal examples are provided to the model. While the study is not specifically about hard prefix prompts, it explores in-context learning with language models, which is an essential aspect of prompt engineering. The investigation into the effectiveness of different prompting strategies for a low-resource language like Bangla is relevant because it contributes to the understanding of how various models respond to prompts in different scenarios, which is a critical component of prompt engineering research. However, the title and abstract do not mention 'hard prefix prompts' specifically, which would have made it a perfect match for the topic of comprehensive systematic review on hard prefix prompts. Thus, the rating is above average for relevance but not a perfect score.",https://arxiv.org/pdf/2308.10783
-multilingual social media text generation and evaluation with few-shot prompting,8,"The abstract describes a research study on adapting large language models to generate multilingual social media text with specific objectives, mentioning the use of prompts to achieve these goals. Since prompt engineering is the process of designing and formulating prompts to effectively interact with language models, and this work includes developing generalizable prompt formation techniques, it is highly relevant. The relevance is not rated as a perfect 10 due to the lack of explicit discussion of 'hard prefix prompts' which would be required for a comprehensive systematic review specific to that sub-topic of prompt engineering.",https://aclanthology.org/2022.gem-1.39.pdf
-sparsefit: few-shot prompting with sparse fine-tuning for jointly generating predictions and natural language explanations,8,"The study described focuses on a fine-tuning strategy for Pre-trained Language Models (PLMs) that utilizes 'discrete prompts' to generate predictions and Natural Language Explanations (NLEs), which is highly relevant to prompt engineering. While it does not directly study 'hard prefix prompts,' the use of prompts in the context of few-shot learning to enhance the model's performance and explanations is closely related to prompt engineering and how it can be optimized in practice. The relevance is not maximum because the abstract does not detail the nature of the prompts used (e.g., hard prefix prompts specifically), but the methodology is still pertinent to the field of prompt engineering.",http://arxiv.org/pdf/2305.13235
-prompting electra: few-shot learning with discriminative pre-trained models,7,"The provided abstract details an approach to adapting ELECTRA, a discriminative pre-trained model, to prompt-based few-shot learning. Although the focus is primarily on the model's learning capabilities and performance rather than 'hard prefix prompts' specifically, the relevance lies in the use of prompts to facilitate model understanding and few-shot learning, which is a component of prompt engineering. The study explores how the model interacts with prompts, an essential aspect of prompt engineering, hence the relatively high relevance rating. However, it does not directly address a 'comprehensive systematic review on hard prefix prompts', so it cannot receive a perfect score.",https://arxiv.org/pdf/2205.15223
-knowledge prompting for few-shot action recognition,7,"The study described in the abstract addresses the use of structured external knowledge (knowledge prompting) to enhance the performance of a pre-trained vision-language model for few-shot classification. Although it does not specifically mention 'hard prefix prompts,' it does involve the engineering of prompts (text proposals) to improve machine learning performance. This indicates a moderate level of relevance to the broader topic of prompt engineering study, especially considering the systematic approach taken to generate and utilize these prompts. However, without a specific focus on the concept of 'hard prefix prompts' as described in the original query, the relevance is not complete.",https://arxiv.org/pdf/2211.12030
-promptner: a prompting method for few-shot named entity recognition via k nearest neighbor search,7,"The paper discusses PromptNER, a method that incorporates prompting, which relates to prompt engineering by using prompts to construct label prototypes for few-shot Named Entity Recognition. While the primary focus is on NER and not on prompt engineering as a general concept, the use of prompts as a way to improve machine learning models' performance through fine-tuning with limited data is pertinent to the study of prompt engineering. However, as the paper does not seem to conduct a comprehensive systematic review specifically on hard prefix prompts and does not address prompt engineering in the broader sense, the relevance is not maximal.",http://arxiv.org/pdf/2305.12217
-prompting large language models with chain-of-thought for few-shot knowledge base question generation,9,"The abstract discusses an advanced application of prompt engineering where Chain-of-Thought (CoT) prompting is used to enhance few-shot question generation over Knowledge Bases (KBQG). It is highly relevant to prompt engineering because it directly involves the process of designing prompts to improve the performance of Large Language Models. The research proposes a novel methodology (KQG-CoT) which leverages the CoT prompting technique, and the paper claims significant improvement over state-of-the-art results. The only reason it doesn't score a perfect 10 is that it doesn't explicitly mention 'hard prefix prompts', which is the specific focus of prompt engineering study mentioned in the initial query.",https://arxiv.org/pdf/2310.08395
-investigating prompting techniques for zero- and few-shot visual question answering,8,"The described study is highly relevant to the field of prompt engineering, as it directly investigates how different prompting strategies can influence the performance of a visual question answering (VQA) system in zero- and few-shot scenarios. The systematic examination of various question templates and the use of few-shot exemplars are core aspects of prompt engineering. The exploration of chain-of-thought reasoning and the integration of additional visual cues also fall within the scope of prompting techniques. Although the study specifically targets the VQA domain and does not mention 'hard prefix prompts', the general principles and findings are pertinent to the prompt engineering literature. The rating is not a full 10 because the paper focuses more broadly on VQA performance via prompting rather than the specific 'hard prefix prompts' indicated by the original prompt.",http://arxiv.org/pdf/2306.09996
-lmcap: few-shot multilingual image captioning by retrieval augmented language model prompting,7,"The study involves prompting a language model with retrieved captions, which is a form of prompt engineering. However, the focus is on multilingual image captioning rather than hard prefix prompts specifically. While it does not address hard prefix prompts in its methodology, the concept of using prompts to generate language model outputs is relevant to the broader field of prompt engineering. Therefore, the relevance is moderate to high.",http://arxiv.org/pdf/2305.19821
-hiprompt: few-shot biomedical knowledge fusion via hierarchy-oriented prompting,9,"The study introduces HiPrompt, a framework that leverages hierarchy-oriented prompts to improve few-shot biomedical knowledge fusion tasks by utilizing large language models. This is highly relevant to prompt engineering because it directly involves designing and employing prompts that are specifically structured to leverage and extract hierarchical relationships within large language models. The fact that it deals with prompting techniques to enhance the model's reasoning capabilities makes it pertinent to the field. The only reason it does not receive a perfect score is that the information provided centers more on biomedical knowledge fusion rather than a generalized application in prompt engineering.",https://arxiv.org/pdf/2304.05973
-adversarial knowledge stimulated contrastive prompting for few-shot language learners,9,"The abstract describes a method for improving the efficiency of pre-trained language models for few-shot learning tasks by introducing a novel prompting framework, which is highly relevant to prompt engineering studies. The AKSCP framework leverages Cloze-driven prompts for prompt-based learning and joint prompt tuning, which directly relates to the development and optimization of prompts for language models. Additionally, the use of adversarial contrastive learning to enhance generalization further aligns with advanced prompt engineering techniques. The only reason it does not receive a full 10 is that it does not specifically mention 'hard prefix prompts' which the original prompt inquires about; however, the general relevance to prompt engineering is very high.",https://aclanthology.org/2023.findings-acl.852.pdf
-multi-step prompting for few-shot emotion-grounded conversations,7,"The paper presented is relevant to prompt engineering as it discusses the design of a prompting approach, which is a core concept within prompt engineering. By identifying emotions and using them to inform subsequent prompts, the study contributes to the field by showing how prompts can be adapted based on contextual information (emotional content in this case). However, the paper focuses specifically on a two-step prompting method for conversational AI and emotion recognition rather than on 'hard prefix prompts' in a broad sense. Therefore, while the paper is relevant to prompt engineering, it does not directly address the topic of hard prefix prompts, hence the rating is not a full 10.",https://dl.acm.org/doi/pdf/10.1145/3583780.3615265
-leveraging few-shot data augmentation and waterfall prompting for response generation,8,"The abstract mentions the development of methodologies and strategies for response generation in task-oriented conversational modeling, including the use of a 'waterfall prompting technique'. This indicates an exploration into how prompts are structured and how they can be optimized for better performance in conversation engines using AI like GPT-3 and ChatGPT. Although 'hard prefix prompts' are not explicitly mentioned, the study is still highly relevant to prompt engineering as it focuses on improving and understanding how prompts can be leveraged along with few-shot learning for effective response generation. The lower rating is due to the lack of specific mention of 'hard prefix prompts', suggesting that while the study is relevant, it may not directly tackle the named concept.",https://arxiv.org/pdf/2308.01080
-self-convinced prompting: few-shot question answering with repeated introspection,8,"The provided abstract outlines a study involving 'few-shot question answering with repeated introspection' which is closely related to the field of prompt engineering, particularly in refining prompts to improve the performance of large language models (LLMs). Although the study does not specifically mention 'hard prefix prompts', it does deal with the broader category of prompts and their optimization through an iterative process. This makes the work relevant to prompt engineering but not exclusively focused on the hard prefix aspect. Therefore, the relevance to 'prompt engineering' is high, but it might be less directly related to a 'systematic review on hard prefix prompts'.",https://arxiv.org/pdf/2310.05035
-continued pretraining for better zero- and few-shot promptability,9,"The provided abstract discusses continued pretraining with an emphasis on enhancing the effectiveness of natural language prompts in zero-shot and few-shot learning contexts, which is highly relevant to prompt engineering. The systematic examination of pretraining methods, identification of gaps, and concrete recommendations based on experimental results are directly related to the advancements in the field of prompt engineering. Although it does not directly mention 'hard prefix prompts', the focus on trainable prompts during multi-task learning and prompt tuning is integral to the broader field of prompt engineering. A point is deducted because the relevance to 'hard prefix prompts' specifically is not clear, but otherwise, it is highly pertinent to the study of how prompts can be engineered and optimized for better performance in machine learning models.",http://arxiv.org/pdf/2210.10258
-what makes pre-trained language models better zero/few-shot learners?,9,"The paper directly addresses prompt learning, which is a critical aspect of prompt engineering. It presents both a theoretical framework to understand the efficiency of prompts and a practical approach to select prompts without relying on development sets. The focus on zero/few-shot scenarios is particularly relevant to the current challenges faced in prompt engineering where labeled data is scarce. Although the paper does not address 'hard prefix prompts' specifically, it does contribute to the broader field of prompt engineering which encompasses the study of prompts and their optimization. Therefore, it receives a high relevance score.",http://arxiv.org/pdf/2209.15206
-plan-and-solve prompting: improving zero-shot chain-of-thought reasoning by large language models,9,"The abstract discusses a novel approach to prompt engineering for large language models, focusing on improving chain-of-thought reasoning in a zero-shot context. It addresses key issues such as calculation errors, missing-step errors, and semantic misunderstandings by introducing the Plan-and-Solve (PS) Prompting technique. As prompt engineering is central to optimizing the performance of LLMs in multi-step reasoning tasks, this study is highly relevant to the field. The high rating is due to the direct application of prompt engineering strategies to enhance the capabilities of these models without relying on multiple examples for training, which is an innovative contribution to the prompt engineering literature. However, it does not explicitly mention 'hard prefix prompts', which the original prompt might specifically refer to, hence not a perfect 10.",http://arxiv.org/pdf/2305.04091
-better zero-shot reasoning with self-adaptive prompting,8,"The provided abstract and TLDR relate closely to prompt engineering, as they describe the development and application of a novel prompt design method intended to enhance the zero-shot reasoning capabilities of large language models (LLMs) without relying on handcrafted responses or ground-truth labels. The method, Consistency-based Self-adaptive Prompting (COSP), addresses a core aspect of prompt engineering by strategically selecting and constructing prompts to improve LLM performance. While the abstract doesn't mention 'hard prefix prompts' explicitly and instead focuses on the broader field of prompt design and optimization, the relevance is high due to the overall focus on improving prompt-based LLM interactions.",http://arxiv.org/pdf/2305.14106
-multi-modal prompting for low-shot temporal action localization,8,"The paper is highly relevant to the study of prompt engineering as it involves the design and utilization of prompts to guide a pre-trained text encoder (CLIP) to perform open-vocabulary classification in the context of temporal action localization. The experimentation with both detailed action descriptions and visually-conditioned instance-specific prompt vectors directly ties into the methodologies of prompt engineering, aiming to improve the model performance on low-shot learning tasks. However, the primary focus on temporal action localization slightly reduces its direct relevance to general prompt engineering studies that are not focused on the specific application of action localization.",http://arxiv.org/pdf/2303.11732
-program of thoughts prompting: disentangling computation from reasoning for numerical reasoning tasks,8,"The provided abstract details a study relevant to prompt engineering by introducing a 'Program of Thoughts' (PoT) method which separates computation from reasoning in numerical reasoning tasks. This separation directly impacts how prompts are designed for language models, as it leads to a fundamental change in the expected output (programs vs. solutions). The study's relevance is high because it exemplifies an advanced application of prompt engineering to improve performance on language models for specific tasks. The reason the rating is not a full 10 is that the study focuses specifically on numerical reasoning tasks and might not be directly applicable to other prompt engineering domains.",http://arxiv.org/pdf/2211.12588
The reason the rating is not a full 10 is because the study focuses specifically on numerical reasoning tasks and might not be directly applicable to other prompt engineering domains.",http://arxiv.org/pdf/2211.12588 -generative zero-shot prompt learning for cross-domain slot filling with inverse prompting,8,"The paper described is highly relevant to the field of prompt engineering as it discusses a novel methodology for zero-shot prompt learning in the context of cross-domain slot filling, which is a specific application within the broader domain of prompt engineering. It focuses on using prompts to transfer knowledge between domains without additional labeled data, which is a core aspect of prompt engineering. The proposed inverse prompting strategy is particularly pertinent for creating effective prompts that can distinguish between different types of data. Although the paper does not directly address 'hard prefix prompts', the concepts and strategies discussed are likely to contribute valuable insights to the prompt engineering literature and thus receive a high relevance rating.",https://arxiv.org/pdf/2307.02830 -language-aware soft prompting: text-to-text optimization for few- and zero-shot adaptation of v &l models,8,"The given title discusses 'Language-Aware Soft Prompting (LASP)' which is directly related to prompt engineering, especially in the context of optimizing text-to-text models for few- and zero-shot tasks. This indicates a high level of relevance as prompt engineering is about devising and employing prompts to guide or improve the performance of language and vision-and-language (V&L) models. The proposed method seems to enhance the interaction between hand-crafted textual prompts and model-generated outputs. Although the study doesn't explicitly mention 'hard prefix' prompts, the focus on soft prompting suggests it is in the broader area of prompt engineering, thus earning a high relevance rating.",https://link.springer.com/content/pdf/10.1007/s11263-023-01904-9.pdf -"large language model is not a good few-shot information extractor, but a good reranker for hard samples!",8,"The abstract discusses the effectiveness of LLMs relative to SLMs in few-shot information extraction tasks and introduces a paradigm that involves prompting strategies. The relevance to prompt engineering is significant because it examines the role of prompts in improving performance of LLMs when combined with SLMs. Although the primary focus is on LLMs as rerankers for hard samples rather than on constructing or studying 'hard prefix prompts' specifically, the concept of using adaptive prompting to achieve better results is closely related to the field of prompt engineering. This suggests that the paper could offer valuable insights into prompt strategies that may be beneficial for designing or evaluating hard prefix prompts.",http://arxiv.org/pdf/2303.08559 -towards few-shot identification of morality frames using in-context learning,8,"The study discusses using pre-trained Large Language Models for few-shot in-context learning, which is directly related to prompt engineering as it involves designing prompts for these models to handle specific tasks, in this case, identifying morality frames. However, it doesn't focus specifically on 'hard prefix prompts,' which the original request mentions, but rather on prompting methodologies in a broader sense. 
Therefore, the rating isn't a perfect 10 but still high due to the relevance of few-shot learning and in-context learning methodologies, which are integral to prompt engineering.",http://arxiv.org/pdf/2302.02029 -enhancing few-shot text-to-sql capabilities of large language models: a study on prompt design strategies,9,"The paper's focus on exploring various prompt design strategies and their systematic investigation into demonstration selection methods and optimal instruction formats for prompting LLMs in the Text-to-SQL task is highly relevant to the field of prompt engineering. The study is specifically addressing how to effectively use prompts to improve the performance of LLMs on a specialized task, which is a core aspect of prompt engineering. The relevance rating is not a full 10 because the paper is specialized in the Text-to-SQL context and prompt engineering can be applied to a broader range of tasks beyond this specific application. Nonetheless, the findings and methodology could be valuable for prompt engineering studies in general.",http://arxiv.org/pdf/2305.12586 -few-shot and prompt training for text classification in german doctor's letters,8,"The given abstract describes the use of prompt-based methods, specifically pattern-exploiting training, for text classification in a few-shot learning context, which is highly relevant to the field of prompt engineering. Although the focus is on a specific application within the medical domain for German doctor's letters, the core concept of using prompts to effectively guide a language model and improve performance with limited data is central to the study of prompt engineering. The improvement in accuracy and efficiency mentioned aligns with the goals of prompt engineering to enhance model performance. The rating is not a full 10 as the study seems to be applied and specific rather than a comprehensive and systematic review on hard prefix prompts in general.",https://ebooks.iospress.nl/pdf/doi/10.3233/SHTI230275 -exploring zero and few-shot techniques for intent classification,8,"This study is highly relevant to prompt engineering as it explores zero and few-shot learning techniques, which are integral to the development of efficient prompting methods. The use of zero-shot intent classification with descriptions and parameter-efficient fine-tuning indicates a direct application of prompt engineering principles. The fact that they are testing these methods on large language models, which are often used in conjunction with prompts, further adds to the relevance. While the study does not focus exclusively on 'hard prefix prompts,' its implications on prompt engineering strategies are significant, particularly for intent classification in low-resource settings.",http://arxiv.org/pdf/2305.07157 -knowledge-guided prompt learning for few-shot text classification,9,"The abstract discusses a study that is highly relevant to prompt engineering, specifically within the context of leveraging implicit knowledge in pre-trained language models for few-shot text classification. The introduction of a knowledge-guided prompt learning method directly relates to prompt engineering, as it addresses how prompts can be optimized to improve model performance. The slight deduction from a perfect score is due to the lack of explicit mention of 'hard prefix prompts' which may or may not be a part of their 'knowledge prompting template'. 
Despite this, the study's focus on improving and understanding prompt-based learning is closely aligned with the field of prompt engineering.",https://www.mdpi.com/2079-9292/12/6/1486/pdf?version=1679462243 -a smashed glass cannot be full: generation of commonsense explanations through prompt-based few-shot learning,8,"The study is highly relevant to prompt engineering due to its focus on generating commonsense explanations through the use of prompts on pre-trained language models. Although it does not specifically mention 'hard prefix prompts', the methodology involving prompting and few-shot learning is a core technique within the field of prompt engineering. The ability to generate explanations from semantically related sentences is an important aspect of prompt engineering, which contributes to the relevance of this study to the field. However, full relevance to 'hard prefix prompts' specifically would require a more direct investigation into that subset of prompt engineering techniques.",https://aclanthology.org/2023.nlrse-1.3.pdf -successive prompting for decomposing complex questions,8,"The abstract discusses 'Successive Prompting', a methodology directly related to prompt engineering, involving the iterative process of breaking down complex questions for large language models. This is highly relevant to prompt engineering studies as it provides insights into the structuring of prompts for complex problem-solving. The approach could lead to more effective design of prompts, which is a core element of prompt engineering, thereby improving the performance of LMs in complex question-answering tasks. The rating is not a full 10 because it is more focused on the iterative prompting process rather than a broad application of prompt engineering techniques across different domains.",https://arxiv.org/pdf/2212.04092 -"structured prompting: scaling in-context learning to 1,000 examples",9,"The abstract presents a study directly related to prompt engineering, with a focus on structured prompting to overcome length constraints in in-context learning for large language models. This is highly relevant as it addresses a limitation often encountered in the field of prompt engineering, where the length of input can restrict the number of examples a language model can learn from. The improvement of end-task performance and reduction of variance in results mentioned in the abstract suggest significant empirical findings for prompt engineering applications. Although the study does not specifically mention 'hard prefix prompts,' its relevance lies in advancing the methodologies used in prompt engineering, which could be applicable or foundational to hard prefix prompts as well.",http://arxiv.org/pdf/2212.06713 -zero-shot prompting for implicit intent prediction and recommendation with commonsense reasoning,7,"The paper abstract discusses a framework for multi-domain dialogue systems that can understand implicit user intents and appropriately trigger task-oriented bots using zero-shot prompting. While this is not specifically about 'hard prefix prompts' as might be investigated in a prompt engineering study, the relevance is reasonably high because zero-shot prompting is a closely related concept where the effectiveness of the prompt in eliciting the correct response from a language model without prior examples is crucial. The system's dependence on 'commonsense knowledge' and inference of 'implicit intents' also implies that there is prompt engineering occurring to facilitate these operations.
However, the abstract does not directly mention the study or optimization of prompts, which would be the primary focus of a prompt engineering study, hence the relevance is not rated higher.",http://arxiv.org/pdf/2210.05901 -naturalspeech 2: latent diffusion models are natural and zero-shot speech and singing synthesizers,7,"The abstract describes a text-to-speech system, NaturalSpeech 2, which includes a speech prompting mechanism as a means to facilitate in-context learning. Although the system is not primarily focused on 'hard prefix prompts' for text input, the speech prompting mechanism can be seen as related to prompt engineering, particularly for speech synthesis. The relevance is significant because the paper addresses how prompting can be utilized in TTS systems to improve performance. However, it is not an exact match because the study does not focus solely on the prompt engineering aspect but rather on the overall TTS system that includes prompting as one of its components.",http://arxiv.org/pdf/2304.09116 -udapdr: unsupervised domain adaptation via llm prompting and distillation of rerankers,7,"The abstract discusses the use of large language models (LLMs) to generate synthetic queries which relates to prompt engineering as it may involve crafting prompts to elicit these queries from the LLMs. The focus on domain adaptation and efficient information retrieval can be seen as an application of prompt engineering, particularly in the context of generating useful data for model fine-tuning. However, the abstract doesn't specifically mention 'hard prefix prompts' or detail the prompt engineering process, hence the rating is not a full 10.",https://arxiv.org/pdf/2303.00807 -probing power by prompting: harnessing pre-trained language models for power connotation framing,8,"The abstract describes a study on probing pre-trained language models (PLMs) by using prompts to understand and predict power connotations in language, which is relevant to prompt engineering. The research focuses on how prompts can elicit different connotations about power from language models and the impact of fine-tuning on the models' accuracy in this task. Although the study primarily explores connotation framing rather than hard prefixes specifically, the methodology closely relates to prompt engineering as it involves designing prompts to harness the capabilities of language models. This indicates a high level of relevance, but not the maximum score as it does not directly focus on 'hard prefix prompts'.",https://aclanthology.org/2023.eacl-main.61.pdf -what do language models know about word senses? zero-shot wsd with language models and domain inventories,7,"The paper discusses an innovative use of language models for Word Sense Disambiguation (WSD) by casting the problem as one of textual entailment, which inherently involves crafting prompts that effectively convey the different domain-relevant hypotheses that are matched against the given word senses. This is related to prompt engineering as it shows a specific application where the design of the prompts (i.e., the relation between word senses and domains phrased as hypotheses) is crucial for the successful application of language models to this task. Although not directly addressing 'hard prefixes', which are a specific type of prompt, the study does engage with the broader notion of how to construct prompts to extract desired outputs from language models. 
Therefore, the relevance is quite high, albeit not perfectly aligned with the specific topic of hard prefix prompts.",http://arxiv.org/pdf/2302.03353 -compresso: structured pruning with collaborative prompting learns compact large language models,7,"The abstract discusses 'Compresso,' a new paradigm for structurally pruning Large Language Models, which includes a 'collaborative prompt' to foster collaboration between the LLM and the pruning algorithm. While the main focus is on model compression, the use of collaborative prompts for enhancing the pruning process does touch upon the broader field of prompt engineering. Prompt engineering generally refers to the design and optimization of prompts to elicit desired responses from language models, and the collaborative prompt in this context serves to improve the interaction between model components during compression. However, it is not directly focused on prompt engineering study in the conventional sense, which typically deals with how different prompts affect the output of LLMs in natural language tasks, rather than model pruning. Therefore, the relevance is moderate but not entirely central to traditional prompt engineering studies.",https://arxiv.org/pdf/2310.05015 -you can generate it again: data-to-text generation with verification and correction prompting,8,"The paper discusses an advanced methodology in the field of text generation which involves a multi-step process including generation, verification, and correction stages. This is directly relevant to the practice of prompt engineering, as the proposed VCP method deals with iteratively refining the prompts based on feedback, which is a key aspect of designing effective prompts that can lead to high-quality outputs. The relevance is not a perfect score because the study does not focus exclusively on hard prefix prompts or prompt engineering in general, but rather on a multi-step generation process with verification and correction, which is just one aspect of prompt engineering.",http://arxiv.org/pdf/2306.15933 -transprompt v2: a transferable prompting framework for cross-task text classification,8,"The abstract discusses the development of TransPrompt v2, which is a prompting framework specifically designed for improving performance in few-shot text classification tasks across various NLP applications. By focusing on prompt-based fine-tuning and transferring prompting knowledge across tasks, it is highly relevant to studies on prompt engineering, especially in the context of how prompts can be optimized and utilized to enhance the capabilities of pre-trained language models with limited data. Though the abstract does not mention 'hard prefix prompts' specifically, the overall framework is pertinent to the field of prompt engineering. The significant increase in performance compared to other baselines, as evidenced in the text, further solidifies its relevance to the study of efficient prompting methods.",https://arxiv.org/pdf/2308.15010 -dynamic strategy chain: dynamic zero-shot cot for long mental health support generation,8,"The abstract presents a novel methodology involving prompting Large Language Models with chain-of-thought techniques, specifically tailored for generating long counseling texts for mental health support. 
The development of the zero-shot Dynamic Strategy Chain (DSC) prompting method is a direct application of prompt engineering, as it focuses on improving the performance of the LLM by designing specialized prompts based on dynamic mental health counseling strategies. This is highly relevant to the study of prompt engineering because it demonstrates an advanced use-case of prompt design to produce more effective and personalized responses from language models. The use of GPT-2 and the claim of state-of-the-art performance further indicate an engagement with prompt engineering techniques. However, it does not fully match the requirement for a 'systematic review on hard prefix prompts' as it seems to introduce a new prompting strategy rather than review existing strategies.",https://arxiv.org/pdf/2308.10444 -adapt and decompose: efficient generalization of text-to-sql via domain adapted least-to-most prompting,8,"The paper describes a method for improving generalization in Text-to-SQL tasks by preparing and adapting prompts for specific domains and compositions. This research directly involves creating efficient prompts for large language models, which is an important aspect of prompt engineering. The relevance is high because it devises strategies for prompt construction and adaptation, which is a part of prompt engineering studies. It gets an 8 instead of a perfect 10 because it is focused on a specific application (Text-to-SQL) rather than prompt engineering in general.",https://arxiv.org/pdf/2308.02582 -leveraging large language models for multiple choice question answering,7,"The abstract focuses on improving the effectiveness of large language models (LLMs) such as GPT-3 in multiple choice question answering (MCQA) tasks. It highlights an approach where the LLM is presented with both the question and the answer options and outputs a symbol representing its chosen answer. This method is related to prompt engineering because it involves structuring the input to the LLM in a way that helps it utilize its capabilities more efficiently (known as natural prompting). The concept of multiple choice symbol binding (MCSB) reflects a specialized form of prompt engineering that is highly relevant to developing efficient prompting strategies for MCQA. Although the text does not explicitly use the term 'prompt engineering' or focus broadly on various types of prompts (e.g., hard prefix prompts), it is relevant as it tackles a specific challenge within the field of prompting LLMs to optimize performance on MCQA tasks.",http://arxiv.org/pdf/2210.12353 -data augmentation for intent classification with off-the-shelf large language models,8,"The study described in the title and abstract is highly relevant to prompt engineering as it deals with the generation of training data for intent classification using prompts with large language models like GPT-3. Although it does not address 'hard prefix prompts' specifically, the research is indeed focused on utilizing prompting techniques to improve the data generation process for machine learning tasks, which is a core concept in prompt engineering.
The relevance is not maximum because the study concentrates more on the application of prompt-generated data for classification and its quality rather than on the systematic study of the prompts themselves.",http://arxiv.org/pdf/2204.01959 -unraveling chatgpt: a critical analysis of ai-generated goal-oriented dialogues and annotations,7,"The paper titled 'unraveling chatgpt: a critical analysis of ai-generated goal-oriented dialogues and annotations' addresses the use of large pre-trained language models like ChatGPT for generating high-quality text, which is tangentially related to the study of prompt engineering. Although the specific focus on 'hard prefix prompts' is not mentioned, the exploration of 'prompting techniques' for data generation and annotation in AI models directly influences studies related to crafting prompts to achieve desired outputs. Thus, the relevance is quite high as it may provide insights into prompt efficiency and effectiveness, crucial for prompt engineering. However, the rating is not a full 10 because it does not directly discuss hard prefix prompts, which is the central theme of the prompt engineering study.",http://arxiv.org/pdf/2305.14556 -improving patient pre-screening for clinical trials: assisting physicians with large language models,8,"The paper discusses the use of InstructGPT and prompt-engineering techniques, such as chaining one-shot, selection-inference and chain-of-thought prompts, to improve the process of determining eligibility for clinical trials. Although the study is not directly focused on hard prefix prompts, it is within the domain of prompt engineering and examines how tailored prompts can enhance a language model's performance in a specific, practical application. Thus, the relevance rating is high due to the examination of prompts' design and efficacy in a real-world task, which is a central aspect of prompt engineering studies.",http://arxiv.org/pdf/2304.07396 -sinc: spatial composition of 3d human motions for simultaneous action generation,7,"The abstract discusses the use of large language models, particularly GPT-3, to understand the relationship between actions and body parts through prompt engineering ('what are the body parts involved in the action?'). This implies a relevance to prompt engineering study as it involves designing prompts to extract specific knowledge from the language model that can be used for another application, which in this case is 3D human motion synthesis. However, the main focus of the study is on the spatial composition of 3D human motions rather than prompt engineering itself, and thus the rating is not a perfect 10.",https://arxiv.org/pdf/2304.10417 -the potential and pitfalls of using a large language model such as chatgpt or gpt-4 as a clinical assistant,8,"The provided abstract describes studies assessing the performance of GPT-4 and ChatGPT in the medical field, specifically with tasks such as identifying patients with specific diagnoses and providing diagnostic assistance. The relevance to prompt engineering is high because the study involves the use of 'chain of thought and few-shot prompting' indicating that prompt engineering techniques were indeed utilized and studied in the context of their effectiveness in a real-world application. 
The rating is not a full 10 because the study does not solely focus on prompt engineering but also on the broader application and implications of using language models in clinical settings.",https://arxiv.org/pdf/2307.08152 -hitachi at semeval-2023 task 4: exploring various task formulations reveals the importance of description texts on human values,7,"While the paper primarily focuses on the task of human value detection behind arguments, it is relevant to prompt engineering because it also explores various task formulations, including question answering with chain-of-thought prompting. The exploration of different task approaches, the effectiveness of including description texts, and the evaluation of model performance directly relate to how prompts are engineered and optimized for specific NLP tasks. Additionally, the insights on zero-shot learning and the importance of task formulation could inform prompt design strategies. However, since the primary focus isn't solely on prompt engineering but a broader scope of task formulation, the relevance is not at its maximum.",https://aclanthology.org/2023.semeval-1.240.pdf -category-specific prompts for animal action recognition with pretrained vision-language models,7,"The study described in the abstract appears to be relevant to prompt engineering because it involves the development of a 'category-specific prompting module' which generates adaptive prompts for text and video inputs based on detected animal categories. This is a form of prompt engineering where prompts are crafted to improve the performance of a vision-language model on the task of animal action recognition. Although the focus is not on 'hard prefix prompts' specifically, the creation and utilization of tailored prompts is a pertinent aspect of prompt engineering. The relevance is not rated higher because the abstract does not provide details on how the prompts are engineered or whether hard prefix prompts are a part of the study, which would be critical for a 'comprehensive systematic review on hard prefix prompts.'",https://dl.acm.org/doi/pdf/10.1145/3581783.3612551 -"a survey of graph prompting methods: techniques, applications, and challenges",7,"The survey is highly relevant to prompt engineering as it discusses the 'pre-train, prompt, predict' training paradigm, which is at the core of how prompts are used within modern machine learning frameworks to make models generalize better with less labeled data. The focus on graph prompting methods indicates a novel approach to designing prompts using structured graph knowledge, which is a specific aspect within the broader field of prompt engineering. The relevance is not a full 10 because the survey is specialized in graph-based prompting rather than covering all aspects of prompt engineering, including 'hard prefix' prompts or other prompting techniques not related to graphs.",http://arxiv.org/pdf/2303.07275 -help me think: a simple prompting strategy for non-experts to create customized content with models,9,"The abstract describes a novel approach to prompting language models. It is highly relevant to the study of prompt engineering as it directly addresses the problem of how non-expert users can effectively interact with such models. The HELP ME THINK strategy is a form of prompt engineering designed to aid users in generating customized content, an area of growing interest in the field. It also touches on the challenge of control within language model outputs, a central issue in prompt engineering.
The slightly less than perfect score is due to the paper potentially not addressing a 'systematic review on hard prefix prompts' specifically, which would be necessary for a 10 rating.",http://arxiv.org/pdf/2208.08232 -neuro-symbolic causal language planning with commonsense prompting,9,"The paper presents a method called Neuro-Symbolic Causal Language Planner (CLAP) that directly addresses the challenge of eliciting procedural knowledge from large language models (LLMs) through advanced prompting techniques that involve commonsense knowledge. Given that prompt engineering involves the strategic construction of prompts to extract or generate specific responses from LLMs, this paper's focus on using prompts as causal interventions to improve language planning capabilities in AI systems is highly relevant to the field of prompt engineering. The fact that it also employs a Structural Causal Model (SCM) to construct structured prompts makes it even more pertinent, as it represents a sophisticated approach to prompt design. However, it does not focus exclusively on 'hard prefix prompts', thus the rating is not a full 10.",http://arxiv.org/pdf/2206.02928 -generative speech recognition error correction with large language models and task-activating prompting,9,"The study addresses the use of large language models (LLMs) for speech recognition error correction and investigates various prompting schemes, which directly relates to prompt engineering. The focus on in-context learning, task activation prompting, and the combination of causal instructions with demonstrations are key elements of prompt engineering, showing how different prompts can improve the performance of LLMs in specific tasks without fine-tuning. Although the study does not explicitly mention 'hard prefix prompts', it explores related methods of instruction prompting, making it highly relevant to prompt engineering studies.",https://arxiv.org/pdf/2309.15649 -llm-rec: personalized recommendation via prompting large language models,8,"The given abstract directly relates to prompt engineering, as it investigates various prompting strategies to improve the performance of large language models, particularly for personalized recommendations. The relevance to prompt engineering is high because the study specifically examines how different types of prompts can enhance LLM's capabilities. This is pertinent to prompt engineering as it contributes to understanding how LLMs can be tuned for better performance on specific tasks by using tailored prompts. The term 'hard prefix prompts' is not explicitly mentioned; however, the exploration of prompting strategies such as 'recommendation-driven' and 'engagement-guided' prompting falls within the broader scope of prompt engineering studies.",https://arxiv.org/pdf/2307.15780 -enabling conversational interaction with mobile ui using large language models,8,"The paper is highly relevant to prompt engineering as it explores the design of prompting techniques to adapt large language models (LLMs) for conversational interactions with mobile UIs. This indicates a direct engagement with the process of developing and refining prompts to elicit desired responses from LLMs, which is the essence of prompt engineering. However, it is not exclusively focused on 'hard prefix prompts' as would be expected of a comprehensive systematic review on such prompts. Its focus on mobile UIs also suggests a specific application area rather than a broad study of prompting techniques.
Nevertheless, the work contributes significantly to the field of prompt engineering by demonstrating the practical application of LLMs in a relevant domain without the need for task-dedicated resources.",https://dl.acm.org/doi/pdf/10.1145/3544548.3580895 -recent advances in natural language processing via large pre-trained language models: a survey,7,"The title and abstract indicate that the survey covers pre-trained language models and their applications in various NLP tasks, including 'prompting.' Since prompt engineering is a subset of techniques applied to language models to improve performance on various tasks, this survey's content is relevant to the study of prompt engineering, particularly concerning the 'prompting' methods mentioned in the abstract. However, it does not appear to focus exclusively on 'hard prefix prompts' or prompt engineering, hence the rating is not a full 10.",https://arxiv.org/pdf/2111.01243 -are large language models ready for healthcare? a comparative study on clinical language understanding,8,"The provided abstract discusses the evaluation of large language models (LLMs) for clinical language understanding tasks in healthcare, which is indirectly related to prompt engineering, as it involves creating effective prompts for complex tasks. More specifically, it introduces a new prompting strategy known as self-questioning prompting (SQP), which is a direct application of prompt engineering aimed at improving the performance of LLMs on healthcare-related tasks. Although the main focus is not on 'hard prefix prompts', SQP likely employs principles of prompt engineering to elicit better responses from the models. This justifies the high relevance rating, although it is not a perfect match since it doesn't focus solely on prompt engineering but includes broader topics of LLM application in healthcare.",https://arxiv.org/pdf/2304.05368 -voyager: an open-ended embodied agent with large language models,7,"The abstract describes an AI agent (Voyager) that uses a new iterative prompting mechanism, which is relevant to prompt engineering studies. This mechanism involves environment feedback and self-verification processes, which are significant topics within prompt engineering research. However, the focus is on an embodied agent in a gaming environment, rather than on hard prefix prompts. While there is significant overlap with interests in prompt engineering, the specific focus on 'hard prefix prompts' in a comprehensive systematic review is not directly addressed, thus the relevance is rated as a 7 instead of a higher score.",http://arxiv.org/pdf/2305.16291 -the flan collection: designing data and methods for effective instruction tuning,9,"The given abstract is highly relevant to prompt engineering study as it specifically discusses design decisions, task balancing, enrichment techniques, and mixed prompt settings, which are central concepts in the development and improvement of instruction tuning for language models. 
Despite not using the term 'hard prefix prompts', it directly addresses the broader domain of prompt optimization and the impact on model performance, therefore meriting a high relevance rating.",http://arxiv.org/pdf/2301.13688 -learning to compose soft prompts for compositional zero-shot learning,8,"The abstract discusses the development of Compositional Soft Prompting (CSP), which is directly relevant to prompt engineering, as CSP is a form of prompt-related technique designed to improve the interaction between users (or systems) and AI models, specifically pretrained vision-language models. While the reference to 'soft prompts' and not 'hard prefix prompts' might suggest a slight deviation, the overall study is still highly pertinent to the field of prompt engineering, especially given its focus on parameter efficiency, zero-shot learning, and the manipulation of prompt structures (attributes and objects) to optimize model performance. Hence, the rating of 8 acknowledges its strong relevance with a minor deduction for the difference in prompt type (soft versus hard).",http://arxiv.org/pdf/2204.03574 -factual probing is [mask]: learning vs. learning to recall,8,"The abstract discusses the use of cloze-style prompts to retrieve factual information from a pre-trained language model, which is highly relevant to the field of prompt engineering. The introduction of OptiPrompt, which optimizes prompts in continuous embedding space, is a direct contribution to the development of prompt engineering techniques. The paper's investigation into the distinction between 'learning' and 'learning to recall' is also pertinent to understanding how models respond to prompts. However, the paper does not specifically address 'hard prefix prompts,' hence the rating is not a full 10.",https://aclanthology.org/2021.naacl-main.398.pdf -how does prompt engineering affect chatgpt performance on unsupervised entity resolution?,9,"The study directly investigates the impact of prompt engineering on the performance of ChatGPT in the context of unsupervised entity resolution, which is a relevant topic in natural language processing and artificial intelligence. The systematic experimental approach to understanding how different prompts can influence the results of entity resolution tasks using a language model like ChatGPT is highly pertinent to studies in prompt engineering. The deduction of one point is due to the preliminary nature of the results mentioned in the abstract, which suggests that there could be further work required to fully understand the relationship and generalize the findings.",https://arxiv.org/pdf/2310.06174 -user-friendly image editing with minimal text input: leveraging captioning and injection techniques,8,"The study focuses on making prompt engineering more user-friendly by categorizing prompts by semantic details and proposing methods to simplify the text prompt process for image editing, which is relevant to prompt engineering. 
The relevance is marked down slightly because the abstract suggests a specific application to image editing rather than a comprehensive systematic review on hard prefix prompts, but it still contributes to the broader topic of prompt optimization and efficiency.",http://arxiv.org/pdf/2306.02717 -ascm: an answer space clustered prompting method without answer engineering,8,"This paper is highly relevant to prompt engineering as it proposes an innovative approach to prompt-based learning, addressing limitations in answer mapping by using semantic clustering and synonym initialization. Although not explicitly focused on 'hard prefix prompts,' the concept of improved answer-category mapping in prompt-based learning and the influence on model performance is central to the study of efficient and effective prompt designs. The model's approach of clustering answers to manage diverse linguistic expressions without manual or automatic answer constraints is integral to the broader conversation of how prompts interact with pre-trained language models in tasks like classification and NLI. The semi-supervised stair learning method could also contribute to a better understanding of knowledge distillation in the context of prompt engineering.",https://aclanthology.org/2022.findings-acl.193.pdf -do llms possess a personality? making the mbti test an amazing evaluation for large language models,7,"The paper addresses the feasibility of using the Myers-Briggs Type Indicator (MBTI) to evaluate large language models (LLMs), which involves investigating if the personality types of LLMs can be influenced by prompt engineering. This suggests that the study explores, to some extent, how prompts can be used to shape the output of LLMs, aligning with the broader topic of prompt engineering. However, the focus on MBTI and personality assessment is somewhat tangential to the core aspects of prompt engineering, such as prompt formats or effectiveness, and does not directly address the concept of hard prefix prompts. Therefore, while the study is related to prompt engineering, it is not entirely centered on it, leading to the rating of 7 for relevance.",https://arxiv.org/pdf/2307.16180 -the application of chatgpt in healthcare progress notes: a commentary from a clinical and research perspective,7,"The text discusses the use of ChatGPT, an AI-driven language model, in the context of healthcare progress notes, emphasizing the relevance of 'prompt engineering techniques' for effective integration into clinical practice. While the text does not focus specifically on a 'comprehensive systematic review on hard prefix prompts,' it does reference the application of prompt engineering in a practical setting, demonstrating its significance in real-world applications and hence has relevance to the field of prompt engineering. That said, the focus on healthcare rather than the technical aspects of prompt engineering itself means the relevance is substantial but not complete.",https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ctm2.1324 -copilot for xcode: exploring ai-assisted programming by prompting cloud-based large language models,8,"The paper's relevance to prompt engineering is significant as it describes how an AI-assisted tool, Copilot for Xcode, utilizes prompt engineering through a chat interface to enable features such as code generation, autocompletion, documentation, and error detection. 
The integration of Large Language Models with a development environment and the tool's ability to process 'small' decisions for program composition signifies the application of prompt engineering techniques, making it highly relevant to the study of prompt engineering, especially within the domain of software development and AI-assisted programming tools.",https://arxiv.org/pdf/2307.14349 -towards equitable representation in text-to-image synthesis models with the cross-cultural understanding benchmark (ccub) dataset,7,"The abstract discusses a 'culturally-aware priming approach' and mentions the use of automated prompt engineering with GPT-3, which is relevant to the topic of prompt engineering. However, the main focus seems to be on text-to-image synthesis and fighting bias through data curation, rather than on the details of prompt engineering itself. Therefore, while prompt engineering is a component of the study, it is not the central topic, hence the rating of 7 for relevance.",http://arxiv.org/pdf/2301.12073 -omniscientdb: a large language model-augmented dbms that knows what other dbmss do not know,8,"The paper is highly relevant to prompt engineering study as it discusses automatic prompt engineering within the context of a database management system (DBMS). It specifically addresses the issue of constructing appropriate prompts to a large language model in response to SQL queries for the purpose of data augmentation. The paper's focus on exploring different prompting techniques and their application in a DBMS setting makes it pertinent to the field of prompt engineering. However, it does not cover the topic of hard prefix prompts exclusively or systematically, as the abstract suggests a broader application, hence the rating is not a full 10.",http://publikationen.ub.uni-frankfurt.de/files/74426/06_08.pdf -data-driven approach for formality-sensitive machine translation: language-specific handling and synthetic data generation,8,"The paper presents an empirical prompt engineering strategy as part of its data-driven approach to FSMT. Although it does not focus solely on hard prefix prompts, the mention of prompt engineering indicates that this aspect was a significant component of their research methodology. The study's focus on artificial data generation and tailoring the model performance using prompt engineering suggests that the paper would be relevant to someone interested in prompt engineering, even if the main context is machine translation rather than a 'comprehensive systematic review' of hard prefix prompts specifically.",http://arxiv.org/pdf/2306.14514 -exploring the impact of prompt engineering on chatgpt 3.5 text summarization: a bert score evaluation,9,"The described study directly investigates the impact of prompt engineering on ChatGPT 3.5, with a particular emphasis on text summarization tasks. It measures the performance by using BERT score evaluation, which is highly relevant to understanding how different prompts can affect the output of AI in NLP applications. Thus, the relevance to prompt engineering studies is high. 
The reason for not giving a perfect score is the absence of a 'TL;DR' which could provide a concise summary of the results, an element that could further solidify its relevance by directly showcasing how prompts influence the summarization process.",https://doi.org/10.56726/irjmets45268 -promptor: a conversational and autonomous prompt generation agent for intelligent text entry techniques,9,"The abstract discusses the creation and impact of an agent called Promptor, which directly relates to the field of prompt engineering, as it generates prompts for language models. This is highly relevant because it addresses the challenge of creating effective prompts, a core issue in prompt engineering. Moreover, it involves actual user studies to compare prompts generated by Promptor against those created by human designers. The slight deduction in rating is due to the abstract not focusing exclusively on 'hard prefix prompts,' which was specified in the original prompt, but the overall study still contributes significantly to the domain of prompt engineering.",https://arxiv.org/pdf/2310.08101 -simple llm prompting is state-of-the-art for robust and multilingual dialogue evaluation,8,"The abstract discusses the use of a novel framework that incorporates prompting Large Language Models (LLMs) for improving dialogue evaluation, which is relevant to prompt engineering. Prompt engineering involves designing inputs that help LLMs produce more effective and relevant outputs, and the context here is applying such techniques to evaluate dialogues in multiple languages and ensuring robustness. The relevance might not be a perfect 10 because it is specific to dialogue evaluation rather than prompt engineering in general, but the principles and implications of this research can contribute significantly to the field of prompt engineering as it applies to dialogue systems.",https://arxiv.org/pdf/2308.16797 -towards understanding chain-of-thought prompting: an empirical study of what matters,9,"The study is highly relevant to prompt engineering as it delves into the specifics of how Chain-of-Thought prompting impacts the performance of language models. Understanding the effectiveness of CoT, even with invalid demonstrations, offers significant insights into prompt design and how language models can generate coherent reasoning steps. This may directly influence future prompt engineering strategies.",http://arxiv.org/pdf/2212.10001 -improving language model prompting in support of semi-autonomous task learning,9,"The abstract provided discusses the development of an agent capability for constructing effective prompts that elicit useful responses from language models for the purpose of learning new tasks. This is highly relevant to the field of prompt engineering, as it directly involves optimizing interaction strategies (prompts) to improve the utility of language model outputs in specific contexts. Although the term 'hard prefix prompts' from the initial prompt is not explicitly mentioned, the essence of the study is deeply intertwined with the principles of prompt engineering, hence the high relevance rating.",https://arxiv.org/pdf/2209.07636 -boosting theory-of-mind performance in large language models via prompting,9,"The study is highly relevant to prompt engineering as it investigates the effectiveness of in-context learning prompts in improving the theory-of-mind performance of large language models (LLMs). 
It directly addresses how tailored prompts can enhance the reasoning capabilities of AI systems, which is a core aspect of prompt engineering. Although the focus is specifically on theory-of-mind tasks, the findings have broader implications for the field of prompt engineering, especially concerning the design of prompts that can guide LLMs towards better understanding and interpreting human mental states and intentions.",http://arxiv.org/pdf/2304.11490 -"see, think, confirm: interactive prompting between vision and language models for knowledge-based visual reasoning",7,"The paper introduces a framework, IPVR, which integrates interactive prompting mechanisms within a vision-language reasoning context. While the study primarily focuses on knowledge-based visual reasoning tasks, the use of prompting in the 'think stage' directly relates to prompt engineering as it involves designing prompts to steer a large language model's (LLM) output. This is relevant to the concept of 'hard prefix prompts' which consist of prefixed instructions that guide the model's generation process. Thus, the relevance to prompt engineering is significant, but not exclusive since the paper also emphasizes few-shot learning, transparency, and trustworthiness in reasoning, deviating from prompt engineering as the sole focus.",http://arxiv.org/pdf/2301.05226 -"prompting and evaluating large language models for proactive dialogues: clarification, target-guided, and non-collaboration",8,"The abstract points to a study focused on the evaluation and enhancement of conversational systems using Large Language Models through prompt engineering techniques such as the 'Proactive Chain-of-Thought'. Although the main emphasis does not appear to be on 'hard prefix prompts' specifically, the relevance to prompt engineering is clear as it discusses prompting strategies and schemes to handle complex dialogue scenarios. This aligns with the study of how different prompts can influence the behavior and responses of language models. However, because it does not explicitly mention 'hard prefix prompts', it cannot receive a perfect score for relevance.",http://arxiv.org/pdf/2305.13626 -pive: prompting with iterative verification improving graph-based generative capability of llms,8,"The study involves a specific application of prompt engineering in the context of generating structured data from large language models (LLMs). The introduction of a framework (PiVe) that uses fine-grained prompts through iterative verification to enhance an LLM's output is directly related to the mechanics of designing effective prompts, which is a key aspect of prompt engineering. While the focus is on improving graph-based generation, which is a specialized subfield, the core concept of using prompts iteratively to refine outcomes is highly relevant to prompt engineering studies. The rating is not a perfect 10 as the extract does not mention 'hard prefix prompts' directly, but the methodology is clearly within the realm of prompt engineering.",http://arxiv.org/pdf/2305.12392 -enhancing small medical learners with privacy-preserving contextual prompting,8,"The abstract describes a study focused on enhancing the capabilities of small language models (SLMs) in the medical field through advanced prompting techniques that involve large language models (LLMs) without compromising patient privacy. 
The core of the study revolves around prompt engineering by designing a system that uses LLMs to generate contextual prompts, which then assist SLMs in performing medical tasks more effectively. This falls under the realm of prompt engineering as it pertains to the creation and optimization of prompts to elicit desired responses from language models. Although it is specific to the medical domain and privacy preservation, the principles and methodologies employed are relevant to the broader study of prompt engineering, especially in how it can be tailored to enhance the performance of smaller models within confidential constraints.",http://arxiv.org/pdf/2305.12723 -grammar prompting for domain-specific language generation with large language models,8,"The abstract describes an approach to improve the performance of large language models on domain-specific language generation tasks by using grammar prompting. Although the term 'hard prefix prompts' is not explicitly mentioned, grammar prompting can be considered a form of structured prompt engineering, and the systematic review would likely be interested in various methods of prompt engineering, including grammar prompting. This would make the study significantly relevant to those looking to understand different prompting techniques, especially in the context of generating highly structured languages. The relevance is not rated as a full 10 because the abstract does not directly address a review on 'hard prefix prompts' but rather discusses a related concept in prompt engineering.",http://arxiv.org/pdf/2305.19234 -prompting language-informed distribution for compositional zero-shot learning,7,"The abstract indicates that the paper introduces a model called PLID that uses prompting strategies with pre-trained large language models for compositional zero-shot learning, which aligns with the field of prompt engineering. While the term 'hard prefix prompts' is not directly mentioned, prompting language-informed distributions could potentially involve relevant concepts. The relevance is rated as 7 because prompt engineering constitutes a significant aspect of the research, but it's not clear if it specifically and directly addresses 'hard prefix prompts' as the primary focus.",https://arxiv.org/pdf/2305.14428 -retrieval-augmented gpt-3.5-based text-to-sql framework with sample-aware prompting and dynamic revision chain,9,"The paper is highly relevant to prompt engineering as it delves into the utilization of large language models (LLMs) and the design of efficient prompts to generate SQL queries from natural language questions. It directly addresses the challenges of prompt learning in contexts that require precise syntax, such as SQL, and proposes innovative solutions to improve the process. The concepts of retrieval-augmented prompting, sample-aware prompting, and a dynamic revision chain are advanced techniques within the scope of prompt engineering, showing how refined prompting strategies can lead to better model performance on specialized tasks.",https://arxiv.org/pdf/2307.05074 -towards better chain-of-thought prompting strategies: a survey,9,"The abstract indicates that the study is a systematic survey of the Chain-of-Thought (CoT) prompting technique, a relevant aspect of prompt engineering for large language models (LLM). CoT is directly tied to the strategies used to elicit better performance from LLMs, which is a central concern of prompt engineering. 
The survey’s aim of providing a comprehensive analysis of, and guide to, the factors that influence CoT prompting makes it highly relevant. However, since it does not cover 'hard prefix prompts' explicitly, but rather prompting strategies as a whole, one point is deducted, thus not making it a perfect 10.",https://arxiv.org/pdf/2310.04959 -"reinforcement learning in the era of llms: what is essential? what is needed? an rl perspective on rlhf, prompting, and beyond",7,"The paper in question discusses Reinforcement Learning from Human Feedback (RLHF) and its applications to Large Language Models (LLMs). Prompt engineering is relevant to the use of LLMs, as it encompasses the techniques and strategies used to effectively instruct LLMs to produce desired outputs. While the paper does not focus narrowly on 'hard prefix prompts' specifically, the discussion around RLHF and prompting evaluation is pertinent to prompt engineering as a whole. Understanding RLHF and its implications can contribute to more advanced prompt engineering methods, particularly in evaluating and optimizing prompts for better performance in various tasks assigned to LLMs. Thus, the relevance to prompt engineering study is significant, though not exclusively focused on hard prefix prompts.",https://arxiv.org/pdf/2310.06147 -can instruction fine-tuned language models identify social bias through prompting?,8,"The study is relevant to prompt engineering as it specifically investigates the use of zero-shot prompting, including Chain-of-Thought (CoT) prompts, to evaluate the capability of language models at bias identification tasks. Since prompt engineering encompasses designing and refining the prompts given to language models to elicit specific types of responses or measure certain capabilities, the study’s focus on how these prompts can be used to detect social biases is pertinent to the field. However, the study does not appear to specifically address 'hard prefix prompts', which would be necessary for a 10 rating since the initial query asked for relevance to prompt engineering studies focused on hard prefix prompts.",https://arxiv.org/pdf/2307.10472 -march in chat: interactive prompting for remote embodied referring expression,7,"The provided title and abstract describe a study that engages with large language models (LLMs) for the task of Vision-and-Language Navigation (VLN), particularly focusing on generating navigation plans from high-level instructions — a form of interactive prompting. Although it doesn't directly address the concept of 'hard prefix prompts' in the described system, the use of prompts to communicate with LLMs is relevant to the field of prompt engineering. The March-in-Chat (MiC) model's interactive prompting mechanism that adapts to visual observations could lend insights into how prompt engineering can be applied in dynamic, real-world environments. While the study emphasizes action planning over strict prompting techniques, the interaction between the LLM and the environment via prompts and the adaptability of these prompts is related to the broader topic of engineering prompts for specific tasks.
Hence, the rating reflects that the paper has relevance but is not entirely focused on 'hard prefix prompts' specifically within prompt engineering study.",https://arxiv.org/pdf/2308.10141 -prompting a large language model to generate diverse motivational messages: a comparison with human-written messages,9,"The study directly investigates prompt engineering by comparing the effectiveness of different prompt structures on the output diversity of a large language model (GPT-4). The use of a crowdsourcing pipeline as a model for constructing LLM prompts, and then measuring the impact on message diversity, provides empirical insights into the principles of prompt engineering. It explores the nuances of constructing prompts based on successful human instruction strategies and their potential utility in eliciting quality and diverse outputs from AI systems. This is highly relevant to the field of prompt engineering; although not focused on 'hard prefix prompts' specifically, it evaluates the broader concept of structured prompting.",https://arxiv.org/pdf/2308.13479 -large language models can self-improve,8,"The abstract outlines a method for self-improvement of large language models using 'Chain-of-Thought' prompting and self-consistency without requiring ground truth labels. This is highly relevant to the field of prompt engineering, as it deals with the creation and use of specific prompts ('high-confidence' rationale-augmented answers) to enhance a model's performance. The study's relevance is not a full 10 because the prompt engineering focus here is specifically on 'hard prefix prompts,' and it is not clear from the abstract if the 'high-confidence' prompts used exactly fit under this category. However, the techniques are closely related to prompt engineering and have implications for the development of prompts used in training LLMs.",http://arxiv.org/pdf/2210.11610 -multimodal chain-of-thought reasoning in language models,7,"The abstract pertains to the field of language models and their ability to perform complex reasoning, a topic which is inherently connected to prompt engineering as it explores how prompts can be structured to improve performance. While the study focuses on CoT (chain-of-thought) prompting, which is a specific technique within prompt engineering, it also introduces a multimodal approach by incorporating both text and images. The relevance to prompt engineering is significant, as the multimodal CoT could be a novel prompt engineering strategy, but it does not directly address hard prefix prompts, which would have been the direct subject of a prompt engineering study according to the initial prompt inquiry. Therefore, the rating is not a perfect score.",http://arxiv.org/pdf/2302.00923 -towards expert-level medical question answering with large language models,8,"The abstract provided discusses the use of Large Language Models (LLMs) and their application in medical question answering. It emphasizes the role of prompting strategies, including a 'novel ensemble refinement approach', which are essential components of prompt engineering. This indicates that the study involves research into optimizing prompts for LLMs to improve their performance in a specific domain, which is highly relevant to the broader field of prompt engineering.
The rating is not a full 10 because the abstract focuses on medical question answering and LLM improvements in a specific domain rather than a general examination of hard prefix prompts or prompt engineering as a standalone subject.",http://arxiv.org/pdf/2305.09617 -language models can solve computer tasks,8,"The abstract describes a study related to the use of a prompting scheme called RCI in improving the performance of a pre-trained large language model (LLM) for computer tasks and natural language reasoning. While the study does not specifically mention 'hard prefix prompts', it directly involves the broader field of prompt engineering by showcasing how an LLM can be prompted to enhance its ability to interpret and execute tasks based on natural language commands. The emphasis on the efficacy of specialized prompting schemes (including the comparison with 'chain of thought' prompting) indicates that this research is highly relevant to the study and development of prompt engineering methods. The rating is not a full 10 as it does not explicitly focus on hard prefixes but prompt engineering in general.",http://arxiv.org/pdf/2303.17491 -is chatgpt the ultimate programming assistant - how far is it?,8,"The title and abstract provided describe an empirical study of ChatGPT's capabilities as a programming assistant and, importantly, they highlight the significance of prompt engineering in its effectiveness. Although the study itself is not about 'hard prefix prompts' specifically, the ramifications of the research touch upon the broader theme of how to interact effectively with LLMs (like ChatGPT) to solve programming tasks. The mention of 'demonstrating the importance of prompt engineering' illustrates a direct relevance to the field of study, however, since it's not strictly about 'hard prefix prompts', but more broadly covers ChatGPT's functionality, the rating is slightly reduced.",https://arxiv.org/pdf/2304.11938 -art: automatic multi-step reasoning and tool-use for large language models,8,"The provided abstract describes a framework (ART) that enhances the capabilities of Large Language Models by enabling them to automatically generate intermediate reasoning steps and integrate tool use. This is related to prompt engineering because it explores advanced techniques to optimize how prompts are given to large language models to evoke sophisticated reasoning and external information integration. Although it does not specifically mention 'hard prefix prompts,' the research is highly relevant to the field of prompt engineering as it advances how models are prompted to solve tasks. It falls slightly short of a perfect relevance score because it does not directly address 'hard prefix prompts' but rather focuses on the broader context of generating reasoning steps and tool integration, which can be considered a part of prompt engineering.",http://arxiv.org/pdf/2303.09014 -graph of thoughts: solving elaborate problems with large language models,9,"The provided abstract relates closely to prompt engineering study as it introduces a new framework for advancing prompting capabilities in LLMs, which is directly relevant to the field. The introduction of 'Graph of Thoughts' as a means to improve LLM reasoning and the possibility of it being used to develop new prompting schemes suggest a high relevance to the study and practice of prompt engineering. The abstract alleges an enhancement over existing prompting paradigms, pointing to a significant contribution to the field. 
However, the exact term 'hard prefix prompts' is not mentioned, which prevents a full rating of 10.",https://arxiv.org/pdf/2308.09687 -task and motion planning with large language models for object rearrangement,8,"The abstract describes 'LLM-GROP,' a system that leverages large language models (LLMs) through prompting to understand commonsense knowledge about object arrangements. Prompt engineering is directly used to retrieve information about object configurations, which is relevant to studies of prompt engineering. The paper seems to explore the efficacy of different prompts to enable a robot to understand and execute tasks involving physical objects, thus demonstrating a practical application of prompts in AI/robotic systems. While the main focus appears to be on task and motion planning, the use of prompt engineering is a significant aspect of the study, hence the high relevance rating.",http://arxiv.org/pdf/2303.06247 -interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions,7,"The study presents an approach, IRCoT, that combines retrieval techniques with chain-of-thought reasoning for enhancing multi-step question-answering in large language models. While it doesn't specifically talk about hard prefix prompts, it is indirectly relevant to prompt engineering as it deals with improving the quality and relevance of the responses generated by the AI. Considering that prompt engineering is all about optimizing how we interact with AI models to improve their output, the study's focus on utilizing a CoT to guide retrieval and improve the AI's reasoning steps is a valuable contribution to the field. It could also motivate investigations into how prompts can be optimized to generate more accurate and contextually relevant retrieval queries, which is a crucial aspect of prompt engineering. However, it does not address hard prefix prompts directly, hence the rating is not a full 10.",http://arxiv.org/pdf/2212.10509 -unleashing cognitive synergy in large language models: a task-solving agent through multi-persona self-collaboration,8,"The described study is quite relevant to prompt engineering as it explores the concept of Solo Performance Prompting (SPP) which is a method of engaging a Large Language Model (LLM) in multi-turn self-collaboration with multiple personas. This relates to prompt engineering because it involves designing prompts that can elicit certain behaviors or responses from the model, akin to engaging with different facets or 'personas' of the AI. Crafting these nuanced prompts that can stimulate cognitive synergy is a direct example of prompt engineering. The paper does not specifically address 'hard prefix prompts', but the concept of using predetermined prompts to instigate particular responses or modes of operation in the LLM is within the scope of prompt engineering studies. Thus, the study is highly relevant to the development of sophisticated prompt engineering techniques.",https://arxiv.org/pdf/2307.05300 -safety assessment of chinese large language models,8,"The abstract describes a study focused on the development of a benchmark for the safety assessment of Chinese large language models (LLMs) using a method that involves providing test prompts and evaluating the safety of the model's responses. Since this method relies heavily on 'prompt engineering' (the strategy of crafting prompts to elicit specific responses or behaviors from AI models), there is a high relevance to prompt engineering studies.
Specifically, the benchmark involves prompting as a core part of the assessment process. However, it does not directly focus on improving or innovating prompt engineering techniques, therefore the rating is not a perfect 10.",http://arxiv.org/pdf/2304.10436 -can llms express their uncertainty? an empirical evaluation of confidence elicitation in llms,8,"The presented study is highly relevant to prompt engineering as it explores confidence elicitation in large language models (LLMs) without the need for fine-tuning or access to proprietary information. Prompt engineering is a subset of AI research focused on finding ways to improve the performance of AI models by crafting effective prompts. The methods investigated, which include verbalize-based, consistency-based, and hybrid methods, are directly related to how prompts can be designed to elicit more accurate confidence levels from LLMs. This is a key aspect of prompt engineering because it relates to improving the interaction with and the outputs of LLMs, which is a central goal of prompt engineering. However, it doesn’t focus specifically on 'hard prefix' prompts, which slightly reduces its relevance from a perfect score.",http://arxiv.org/pdf/2306.13063 -when to make exceptions: exploring language models as accounts of human moral judgment,8,"The paper addresses the development and application of a novel prompting strategy (MORALCOT) with the goal of improving the performance of Large Language Models (LLMs) on rule-breaking question-answering tasks that relate to human moral judgments. Since prompt engineering involves crafting inputs that guide AI models to produce the desired outputs, and the MORALCOT strategy is essentially a method of prompt engineering tailored for moral reasoning contexts, this study is quite relevant to prompt engineering. Although it focuses specifically on moral judgments rather than the broader range of prompt engineering applications, the insights gleaned from creating effective prompts in this challenging area are valuable for the field. The rating is not a full 10 as the content of the paper is narrowly focused on moral reasoning, which is just one of many domains where prompt engineering can be applied.",https://arxiv.org/pdf/2210.01478 -expertprompting: instructing large language models to be distinguished experts,9,"The paper is highly relevant to prompt engineering as it discusses a novel strategy 'ExpertPrompting' to improve the performance of large language models by crafting detailed prompts that contextualize the model as an expert. This approach is directly aligned with the study and applications of prompt engineering, aiming to enhance the quality of outputs generated by LLMs. While the paper may not specifically mention 'hard prefix prompts', the concept of customizing prompts to induce expert-level answers fits well into the broader category of prompt engineering techniques, making the paper's content significantly pertinent to the field.",http://arxiv.org/pdf/2305.14688 -automatic evaluation of attribution by large language models,7,"The relevance to prompt engineering study is significant because the abstract describes research on prompting Large Language Models (LLMs) as one of the approaches for automatic evaluation. Although the main focus is on evaluating attribution, the fact that prompting is used as a method indicates that the results and methodologies could be applicable and informative for prompt engineering studies. 
However, the primary emphasis seems to be on attribution evaluation rather than prompt construction or optimization itself, which prevents a full relevance score.",https://arxiv.org/pdf/2305.06311 -logic-lm: empowering large language models with symbolic solvers for faithful logical reasoning,7,"The study presents a framework that improves the logical reasoning capabilities of Large Language Models (LLMs) through the integration with symbolic solvers. While the topic is not directly related to 'hard prefix prompts' or prompt engineering, the methodology described includes a step to translate natural language problems into symbolic formulation, which could be considered as a form of complex prompt engineering. The method's aim to enhance LLMs' problem-solving skills with better input translation is relevant to the wider field of prompt engineering, especially in terms of constructing prompts that require logical reasoning. Therefore, the relevance is somewhat high, but not directly focused on the core concept of 'hard prefix prompts'.",http://arxiv.org/pdf/2305.12295 -red-teaming large language models using chain of utterances for safety-alignment,8,"The study presents relevant information to prompt engineering by discussing the effects of 'Chain of Utterances-based (CoU) prompting,' which directly relates to how prompts are structured and used to interact with large language models. Additionally, the work on safety evaluation benchmark RED-EVAL and proposing RED-INSTRUCT for the safety alignment of LLMs contributes to understanding and improving prompt-based interactions with these models. This has a direct implication on prompt engineering as it informs the construction of prompts that can be used to evaluate and align LLMs for safety. However, the paper primarily focuses on the safety and ethical implications of prompting rather than on prompt engineering for improving the general performance or functionality, which is why the rating is not a full 10.",https://arxiv.org/pdf/2308.09662 -can large language models write good property-based tests?,7,"The abstract describes research into leveraging large language models (LLMs) to synthesize property-based tests, which is a subset of prompt engineering because it specifically looks at how to prompt LLMs to perform a particular software engineering task. The relevance to prompt engineering study is significant as it involves the design of prompts to effectively communicate with LLMs and generate meaningful output. However, it is not directly focused on hard prefix prompts or a comprehensive systematic review of such prompts, which would be the central concern in prompt engineering studies. Therefore, the rating is not a full 10 but still high due to the close connection with the practice of prompt engineering in the context of LLMs.",https://arxiv.org/pdf/2307.04346 -i spy a metaphor: large language models and diffusion models co-create visual metaphors,8,"The described study involves a sophisticated form of prompt engineering where the Large Language Models (LLMs) are specifically instructed to generate textual content that then serves as a prompt for diffusion-based text-to-image models. Although the study focuses on the creation of visual metaphors, the process requires careful engineering of text-based prompts to elicit the desired visual outputs from the AI. 
Therefore, while the research does not directly study 'hard prefix prompts,' it contributes to the broader understanding of how different prompting strategies can guide AI behavior, which is highly relevant to the field of prompt engineering.",https://arxiv.org/pdf/2305.14724 -"despite ""super-human"" performance, current llms are unsuited for decisions about ethics and safety",8,"The abstract discusses the development and evaluation of a new prompting strategy for Large Language Models (LLMs), and specifically mentions how this strategy outperforms humans at ethical reasoning tasks. Since prompt engineering involves crafting inputs that can significantly affect the performance of LLMs, and this abstract describes a prompting strategy that notably changes the model's output, the content is highly relevant to the study of prompt engineering. The reduction of two points is due to the focus also being on ethical reasoning and model limitations rather than purely prompt engineering techniques.",http://arxiv.org/pdf/2212.06295 -human-in-the-loop through chain-of-thought,7,"The abstract presents a study that is related to improving the performance of language models through human intervention, specifically in the context of Chain-of-thought prompting. While not directly addressing 'hard prefix prompts,' it discusses the broader topic of prompt engineering and the optimization of human-in-the-loop systems. This is relevant to the field of prompt engineering as it explores enhancing reasoning by correcting intermediate steps, which could be considered a form of prompt optimization. However, since it does not specifically mention 'hard prefix prompts,' the rating is not a full 10.",http://arxiv.org/pdf/2306.07932 -an evaluation of log parsing with chatgpt,8,"The evaluation study focuses on the performance of ChatGPT in log parsing tasks and how different prompting methods affect this performance. While it does not specifically mention 'hard prefix prompts', it does address the broader concept of 'prompting methods', which is directly relevant to prompt engineering. The focus on few-shot prompting and the exploration of effective prompts for log parsing imply that understanding prompt engineering is a significant component of the research. The study's relevance to prompt engineering is therefore high, but it is not a perfect match since it is not a 'comprehensive systematic review on hard prefix prompts' specifically.",https://arxiv.org/pdf/2306.01590 -evaluating gpt-3 generated explanations for hateful content moderation,7,"The abstract is relevant to prompt engineering study to a considerable extent, as it discusses the utilization of GPT-3's language model for generating explanations which requires careful design of prompts to tailor the model's outputs for hate speech moderation. The study's focus on evaluating the effectiveness and limitations of explanations prompted from a language model directly ties in with the principles of prompt engineering, which seeks to understand how best to interface with language models to achieve desired outcomes. However, it does not specifically discuss 'hard prefix prompts' but rather general prompting strategies, so the relevance is not absolute.",https://arxiv.org/pdf/2305.17680 -large language models are strong zero-shot retriever,8,"The relevance to prompt engineering study is high since the abstract describes the use of a large language model (LLM) to improve the efficiency and effectiveness of information retrieval through a prompt-based approach. 
Specifically, it mentions augmenting a query with potential answers and using prompts to make the LLM generate more precise answers, which aligns with understanding and improving the interaction with language models via prompts. However, it did not focus exclusively on 'hard prefix prompts' which might have been a part of a more targeted study of prompt engineering.",https://arxiv.org/pdf/2304.14233 -careful data curation stabilizes in-context learning,7,"The abstract discusses in-context learning (ICL) and the impact of data selection on the performance of large language models (LLMs), which is pertinent to prompt engineering study as it relates to the optimization of input data to improve model response. While the focus appears to be on data curation rather than prompt formulation (i.e., hard prefix prompts), the principles of selecting high-quality examples and understanding their influence on model performance are relevant. The methods described, such as CONDACC and DATAMODELS, could potentially be applied to or inform approaches in prompt engineering, making the study somewhat relevant although not exclusively focused on prompt design.",https://arxiv.org/pdf/2212.10378 -forward-backward reasoning in large language models for verification,8,"The paper discusses a method related to prompt engineering, specifically 'Chain-of-Thought (CoT) prompting', which is a form of structuring prompts to guide large language models (LLMs) in reasoning tasks. The introduction of 'forward-backward reasoning,' as a means to enhance the verification of candidate answers generated by LLMs, represents a novel approach within the domain of prompt engineering. Although the paper does not directly mention 'hard prefix prompts', the relevance is high due to the focus on developing novel prompting methodologies to improve the performance and reliability of LLMs in complex reasoning tasks, which falls under the broader umbrella of prompt engineering studies.",https://arxiv.org/pdf/2308.07758 -how to catch an ai liar: lie detection in black-box llms by asking unrelated questions,7,"The study presents an approach for detecting lies from LLMs that involves crafting and using follow-up prompts or questions, which is related to the concept of prompting in language models. Lie detection in this context can be considered a fringe or specialized aspect of prompt engineering aimed at improving the reliability and truthfulness of LLM responses. While not directly focused on 'hard prefix prompts', the research highlights the impact of prompt design on the behavior of LLMs, which falls within the broader scope of prompt engineering. Hence, the rating reflects that the paper is relevant but not central to a comprehensive systematic review on prompt engineering, specifically with a focus on 'hard prefix prompts'.",https://arxiv.org/pdf/2309.15840 -self-checker: plug-and-play modules for fact-checking with large language models,8,"The abstract describes the 'Self-Checker' framework, which is relevant to prompt engineering, as it involves constructing prompts for large language models to perform fact-checking tasks in a zero-shot or few-shot setting. While the main focus of the paper is on the application of fact-checking, it directly involves prompt engineering to enable the large language models to understand and execute the task without extensive training or fine-tuning. 
Therefore, the paper is highly relevant to prompt engineering, especially in the context of using prompts to elicit specific functionalities from pre-trained models. However, it does not exclusively focus on 'hard prefix prompts' as indicated in the prompt engineering study, which might slightly limit its relevance in terms of specificity to that particular type of prompting.",http://arxiv.org/pdf/2305.14623 -what do llms know about financial markets? a case study on reddit market sentiment analysis,8,"The study's focus on using large language models for sentiment analysis is highly relevant to prompt engineering, as it explores the effect of different prompting strategies on the performance of the model. The mention of Chain-of-Thought summaries and forcing the LLM through several reasoning paths is particularly pertinent to how prompts can be designed to elicit better responses from language models. Although the primary application is market sentiment analysis, the techniques used for prompting can be generalized and applied to other domains, making this research relevant to the study of prompt engineering. The rating is not a full 10 because the paper's primary goal is not the study of prompt engineering itself, but rather the application of prompting techniques to a specific problem, i.e., financial sentiment analysis.",http://arxiv.org/pdf/2212.11311 -enhancing in-context learning with answer feedback for multi-span question answering,8,"The paper describes a methodology for improving the performance of large language models in specific tasks through in-context learning and a novel prompting approach which involves providing feedback on model outputs. This is highly relevant to prompt engineering as it directly pertains to techniques for constructing prompts that can better guide models like ChatGPT. The focus on multi-span question answering does not explicitly pertain to 'hard prefix prompts' as indicated in the original query, but it does explore the broader field of prompt design and optimization, which is why the relevance is rated an 8 instead of a perfect 10.",http://arxiv.org/pdf/2306.04508 -retrieving texts based on abstract descriptions,8,"The abstract describes research on using Large Language Models (LLMs) to generate training data for a new model focused on semantic retrieval, which pertains to prompt engineering in that the data sourcing process involves prompting a LLM effectively. The relevance lies in addressing the use of LLMs to formulate prompts that yield useful data for specific tasks, which is a key part of prompt engineering. However, the text does not explicitly address 'hard prefix prompts', a more specialized topic within prompt engineering, hence the rating is not a full 10.",http://arxiv.org/pdf/2305.12517 -queer people are people first: deconstructing sexual identity stereotypes in large language models,8,"The study is highly relevant to the field of prompt engineering because it discusses a post-hoc method to alter the prompts (chain-of-thought prompting) in order to influence the output of large language models. It addresses the issue of bias in LLMs, particularly against marginalized groups, an essential consideration within prompt engineering to ensure responsible AI practices. 
Recovering fair and unbiased responses from LLMs is a key application of prompt engineering, even though the study does not focus solely on 'hard prefix prompts' but rather on a broader set of prompt modification strategies.",http://arxiv.org/pdf/2307.00101 -retrieving supporting evidence for llms generated answers,8,"The described paper focuses on an experiment which involves prompting a Large Language Model (LLM) with a combination of a question and a retrieved answer to check for support of the LLM's generated answer. While it's not directly studying 'hard prefix prompts', it tackles a closely related topic in the field of prompt engineering: the verification and reliability of responses from LLMs, which could involve a form of prompt crafting. The relevance is high because understanding how prompts can be engineered to elicit verification behavior from an LLM is within the scope of prompt engineering studies. However, because it does not directly address the systematic review or exploration of 'hard prefix prompts', it does not get a full 10.",http://arxiv.org/pdf/2306.13781 -knowledge sanitization of large language models,7,"The abstract describes an approach for modifying the behavior of large language models using a specific fine-tuning technique to avoid disclosing sensitive information, which is relevant to the field of prompt engineering. This study is indirectly related to prompt engineering as it involves the engineering of prompts to ensure that the model's responses meet certain security and privacy requirements. This demonstrates the use of prompts to control and influence the output of language models. However, it does not specifically address 'hard prefix prompts,' which was the original topic, hence it doesn't receive a full relevance score.",https://arxiv.org/pdf/2309.11852 -reasoning in large language models through symbolic math word problems,8,"The study's focus on improving the alignment between the symbolic reasoning and the numeric answers of LLMs using a self-prompting approach is closely related to prompt engineering. It hints at the optimization of prompts to yield better performance from large language models in the context of solving symbolic math word problems, which is an exercise in prompting strategies. This aligns with the notion of hard prefix prompts that guide the LLMs towards a specific mode of reasoning. However, the study is not exclusively centered on prompt engineering but also explores the model's reasoning capabilities, hence the rating is not a full 10.",https://aclanthology.org/2023.findings-acl.364.pdf -alphazero-like tree-search can guide large language model decoding and training,7,"The abstract discusses an approach to enhance the decoding and reasoning of LLMs by incorporating an AlphaZero-like tree-search framework. This is indirectly relevant to prompt engineering, as the paper seems to focus on improving LLMs' performance on tasks through tree-search algorithms rather than prompting techniques. However, the fact that it references the use of prompts in traditional models, such as CoT, and seeks to provide a method that reduces reliance on human-designed prompts, makes it relevant to the study of prompt engineering. 
It addresses a limitation of current prompt-based techniques and offers an alternative that could influence future prompt design and utilization.",https://arxiv.org/pdf/2309.17179 -exploring human-like translation strategy with large language models,7,"The study focuses on the MAPS framework, which involves Multi-Aspect Prompting and Selection, a system that seemingly pertains to 'prompt engineering' as it includes the design of prompts that enable LLMs to extract and utilize translation-related knowledge. While the study does not directly address 'hard prefix prompts', it is implicitly relevant because it involves the engineering of prompts to improve the translation process of LLMs. Therefore, it has relevance to the subject of prompt engineering, albeit not strictly focused on hard prefix prompts specifically.",http://arxiv.org/pdf/2305.04118 -"mmhqa-icl: multimodal in-context learning for hybrid question answering over text, tables and images",7,"The paper describes a novel method for improving question answering across multiple modalities using in-context learning strategies with large language models, which is relevant to prompt engineering. The technique enhances LLM prompting strategies for the task, which is a core aspect of prompt engineering. However, it does not focus directly on hard prefix prompts but on a broader application of prompts within multimodal question answering. Therefore, the relevance is significant but not entirely focused on the specific topic of hard prefix prompts.",https://arxiv.org/pdf/2309.04790 -gear: augmenting language models with generalizable and efficient tool resolution,7,"The title and abstract provided discuss an algorithm named GEAR that is relevant to the domain of prompt engineering, as it deals with enhancing the efficiency and effectiveness of large language models (LLMs) by using smaller models for tool grounding. Prompt engineering is a process that's closely related to how a language model interacts with external tools and uses prompts to perform tasks. While the study does not directly address 'hard prefix prompts' which may be a specific kind of prompt engineering technique, it does engage with the overall theme of improving the interaction between language models and tool utilization. Thus, its relevance is considerable but not entirely specific to 'hard prefix prompts' as suggested by the initial inquiry.",https://arxiv.org/pdf/2307.08775 -retrieving supporting evidence for generative question answering,8,"The abstract provided discusses experiments on the validation of generated answers by large language models (LLMs) using a combination of questions and answers as prompts for retrieval processes. This work is directly connected to the concept of prompt engineering, as it involves designing and refining prompts (in this case, combining questions with generated answers) to improve the performance of LLMs. The relevance is not a perfect 10 because the study focuses specifically on verification of LLM-generated content against a corpus, and not broadly on 'hard prefix prompts' or a systematic review of prompt engineering techniques. 
However, it addresses a key aspect of prompt construction and interaction with language models, which is essential to the field of prompt engineering.",https://arxiv.org/pdf/2309.11392 -synergistic integration of large language models and cognitive architectures for robust ai: an exploratory analysis,7,"The abstract describes the integration of Large Language Models (LLMs) with Cognitive Architectures (CAs), which is relevant to prompt engineering to the extent that it deals with utilizing prompts for directing LLM behavior. Mention of 'chain-of-thought prompting' indicates a direct relevance to prompt engineering techniques. However, the primary focus seems to be on the broader framework of integrating LLMs and CAs, rather than specifically on the development or study of hard prefix prompts within prompt engineering. Therefore, the relevance is substantial but not complete.",https://arxiv.org/pdf/2308.09830 -large language models can learn rules,9,"The provided abstract is highly relevant to prompt engineering study as it discusses a method for improving the performance of large language models (LLMs) in reasoning tasks through a novel prompting framework, Hypotheses-to-Theories (HtT). This framework directly relates to the development and refinement of prompts to enhance the reasoning capabilities of LLMs, which is at the core of prompt engineering. The systematic approach to generate, verify, and use rules for modeling better represents the kind of systematic review that could be applied in hard prefix prompts research. The only reason it doesn't receive full marks is that it does not specifically mention 'hard prefix prompts', but it addresses the broader field of prompting methods.",https://arxiv.org/pdf/2310.07064 -less is more for long document summary evaluation by llms,7,"The abstract describes a novel approach to evaluating summaries of long documents by LLMs that involves a key step of prompting the models after key sentence extraction, which is closely related to the concept of 'prompt engineering.' While the study is not directly focused on 'hard prefix prompts,' its relevance lies in the method of using prompts to efficiently guide LLMs towards desired tasks, which is an essential component of prompt engineering. Additionally, the results and practical recommendations could indirectly contribute to the understanding of how prompts affect the performance of language models in processing long documents. However, it is not a direct study of 'hard prefix prompts' in the sense of a comprehensive systemic review or an exploration of prompt structures and their effects, hence the rating does not reach the top of the scale.",https://arxiv.org/pdf/2309.07382 -developing a scalable benchmark for assessing large language models in knowledge graph engineering,8,"The described benchmarking framework for assessing Large Language Models in knowledge graph engineering seems to be highly relevant to prompt engineering as it deals with automatic evaluation and storage of LLM responses. This indicates that prompt engineering plays a crucial role in how well these models perform on the specified tasks of syntax and error correction, facts extraction, and dataset generation. 
The relevance is not a full 10 because the abstract does not specifically focus on 'hard prefix prompts', but rather on prompt engineering in a more general context within knowledge graph generation.",https://arxiv.org/pdf/2308.16622 -s3-dst: structured open-domain dialogue segmentation and state tracking in the era of llms,7,"The study presents a structured prompting technique, which is relevant to prompt engineering as it involves mechanisms used to improve the interfacing with language models. The concept of 'Pre-Analytical Recollection' could offer insights into designing prompts that facilitate better state tracking and context understanding in conversations with language models. However, the focus seems to be more on dialogue state tracking and segmentation in the context of LLM-based systems, rather than directly on engineering prompts using hard prefixes. The relevance is therefore not maximal, as it does not directly address hard prefix prompts; however, the structured prompting approach is a component of prompt engineering within the larger scope of utilizing language models for complex tasks.",https://arxiv.org/pdf/2309.08827 -automatic chain of thought prompting in large language models,9,"The abstract presents a direct study on improving the effectiveness of large language models using a specific type of prompt engineering strategy known as Chain-of-Thought (CoT) prompting. This is highly relevant to prompt engineering as it addresses the optimization of the prompting process to enhance the performance of language models. The approach of automatically generating CoT prompts (Auto-CoT) to replace manual effort is a significant contribution to the field of prompt engineering. The only reason this is not rated a 10 is that the study does not specifically address 'hard prefix prompts' but rather CoT prompting in general, which is a subset of prompt engineering.",http://arxiv.org/pdf/2210.03493 -analyzing bert’s knowledge of hypernymy via prompting,9,"The study on BERT's knowledge of hypernymy through prompting directly relates to prompt engineering because it investigates the effectiveness of using prompts to elicit specific linguistic knowledge from a language model. The paper analyzes how well BERT responds to direct prompts about lexical semantic relations, which is a key aspect of prompt engineering. The relevance is rated at 9 instead of a perfect 10 because the focus is specifically on hypernymy recognition, not on the broader range of prompt engineering strategies or types of prompts (like hard prefix prompts mentioned in the original topic), which could have an impact on how language models generate more diverse responses.",https://aclanthology.org/2021.blackboxnlp-1.20.pdf -prompter: utilizing large language model prompting for a data efficient embodied instruction following,8,"The abstract discusses 'Prompter,' an approach that involves replacing a semantic search module with language model prompting, which is highly relevant to prompt engineering. The utilization of language model prompting to control robots based on natural language instructions is a practical application of prompt engineering, demonstrating how well-crafted prompts can improve performance in embodied instruction following tasks. The work implies a novel use of prompts and their significance in improving data efficiency, which are key topics in prompt engineering research. 
The rating is not a full 10 because while the paper is related to the use of prompts, it does not explicitly focus on 'hard prefix prompts' per se, but broadly on the application of language model prompts in a different context.",https://arxiv.org/pdf/2211.03267 -rethinking with retrieval: faithful large language model inference,7,"The paper described involves using 'chain-of-thought (CoT) prompting' which falls under the broader category of prompt engineering in the context of large language models. Although the main focus appears to be on improving the model's ability to integrate external knowledge and thus enhance inference, it is still relevant because it discusses a method that modifies how prompts are used to obtain explanations from a model. However, the paper doesn't exclusively focus on the design or study of 'hard prefix prompts', so it may not completely align with studies exclusive to prompt engineering techniques. Therefore, the rating indicates moderate relevance, with points deducted for not being directly focused on hard prefix prompts, yet still relating to prompt engineering methodology.",http://arxiv.org/pdf/2301.00303 -least-to-most prompting enables complex reasoning in large language models,9,"The described research directly investigates a novel prompting strategy for language models, which is highly relevant to the field of prompt engineering. The 'least-to-most prompting' method addresses a common limitation in generalizing from easy to hard problems. Given that the strategy involves designing prompts to guide the model through incrementally challenging subproblems, this study contributes significantly to the understanding and development of advanced prompt engineering techniques. Therefore, it scores a 9, as it may not solely focus on 'hard prefix' prompts, but covers a broader approach to prompting that includes handling complex problems.",http://arxiv.org/pdf/2205.10625 -thoughtsource: a central hub for large language model reasoning data,7,"While the provided title and abstract do not specifically mention hard prefix prompts, the mention of 'large language model reasoning data' implies that the study could include research into various prompt engineering techniques, which may encompass hard prefix prompts. The 'ThoughtSource' project aims to facilitate a qualitative understanding of chain-of-thoughts (CoTs), which is a technique often used in prompt engineering to improve language models' performance. Furthermore, the focus on 'empirical evaluations' and 'providing training data' could be relevant to optimizing hard prefix prompts for better language model outputs. Thus, the study might contribute valuable insights to prompt engineering, albeit not exclusively to hard prefix prompts.",https://www.nature.com/articles/s41597-023-02433-3.pdf -large language model prompt chaining for long legal document classification,9,"The study is highly relevant to prompt engineering as it focuses on the technique of 'prompt chaining' to improve the classification of lengthy and complex legal documents. The method specifically involves breaking down the task into parts and using successive prompts, which is at the core of advanced prompt engineering strategies. 
The successful performance improvement over the zero-shot method and the comparison with larger models like ChatGPT demonstrate a direct application and advancement in the field of prompt engineering for complex tasks such as legal document classification.",https://arxiv.org/pdf/2308.04138 -generate rather than retrieve: large language models are strong context generators,8,"The abstract describes a novel prompting method within the context of large language models, specifically applied to knowledge-intensive tasks. It details a process where the model generates contextual documents from a given question, which aligns with the concept of 'hard prefix prompts' in that it involves crafting inputs to elicit specific types of outputs from a model. Despite not using the exact term 'hard prefix prompt,' the essence of designing prompts to guide the generation of content is central to prompt engineering. The significance of an 8 rather than a 10 is because the abstract doesn't explicitly discuss hard prefix prompts or broader prompt engineering strategies beyond its specific 'generate-then-read' method.",http://arxiv.org/pdf/2209.10063 -a recipe for arbitrary text style transfer with large language models,8,"The paper focuses on a prompting method named 'augmented zero-shot learning' for text style transfer using large language models (LLMs). While it does not directly address 'hard prefix prompts,' it is significantly relevant to the broader field of prompt engineering. The concept of instructing LLMs to perform specific style transformations through natural language prompts aligns with the principles of prompt engineering, which involves crafting input prompts to guide the behavior of AI models. Although the study's primary application is text style transfer, the prompting techniques developed could have implications for the design and effectiveness of hard prefix prompts.",https://aclanthology.org/2022.acl-short.94.pdf -towards a mathematics formalisation assistant using large language models,8,"The study discusses the efficacy of large language models in formalizing mathematical statements, emphasizing the importance of 'careful input-dependent prompt selection and postprocessing.' This relates closely to prompt engineering as it highlights the critical role of prompt design in achieving higher performance with language models. Though it doesn't focus on 'hard prefix prompts' specifically, the overall concept of optimizing prompts to improve a model's ability to understand and generate specific outcomes is central to prompt engineering studies.",https://arxiv.org/pdf/2211.07524 -tree of thoughts: deliberate problem solving with large language models,9,"The title 'Tree of Thoughts: Deliberate Problem Solving with Large Language Models' directly refers to an advanced method of prompt engineering for language models. It describes a new framework, Tree of Thoughts (ToT), which improves upon the existing 'Chain of Thought' approach. The abstract explains how this method allows language models to explore different reasoning paths and make more informed decisions. The fact that it facilitates exploration over coherent units of text is highly relevant to the study of hard prefix prompts, as it implies a structured and systematic way to lead and evaluate the language model's output. The significant improvement in problem-solving tasks like Game of 24, Creative Writing, and Mini Crosswords demonstrates the practical impact of this approach on prompt engineering. 
Despite not using the term 'hard prefix prompts' specifically, the concept and results are very pertinent to the field.",http://arxiv.org/pdf/2305.10601 -have llms advanced enough? a challenging problem solving benchmark for large language models,7,"While the abstract discusses a comprehensive benchmark for evaluating large language models on complex problem-solving tasks, involving hard problems from the IIT JEE-Advanced exam, it indirectly relates to prompt engineering. The techniques mentioned, such as self-consistency, self-refinement, and chain-of-thought prompting, are part of prompt engineering strategies. These strategies contribute to shaping the input provided to the models in order to improve their output. However, the focus of the study is more on the assessment of the models' abilities and the development of a confidence-thresholding method, rather than on the design or study of prompts (hard prefix prompts) specifically. Thus, the relevance to prompt engineering is significant but not the central theme of the paper.",https://arxiv.org/pdf/2305.15074 -radadapt: radiology report summarization via lightweight domain adaptation of large language models,7,"The study discusses adaptation strategies for large language models, including 'discrete prompting', which is relevant to prompt engineering as it involves designing specific prompts to guide the model's performance on a task. While the main focus is on domain adaptation through pretraining and fine-tuning, the mention of discrete prompting shows that the methodology studied does intersect with prompt engineering, especially in how the prompts can affect RRS model effectiveness. Thus, the relevance is significant but not central to prompt engineering studies, which might have a broader scope beyond domain adaptation and parameter tuning.",https://arxiv.org/pdf/2305.01146 -evaluating factual consistency of summaries with large language models,9,"The abstract addresses the evaluation of factual consistency in summaries using large language models and places a significant focus on the role of prompting methods. The relevance to prompt engineering is high, given that it explores various prompting methods including vanilla, chain-of-thought, and sentence-by-sentence, which are integral to the way LLMs are leveraged to perform tasks. This empirical study contributes to the understanding of how different prompts affect the performance of LLMs, which is a core aspect of prompt engineering. The rating is not a perfect 10 as the study is not exclusively on 'hard prefix prompts' (which was specified in the original prompt engineering study question), but the subject matter is very closely related.",https://arxiv.org/pdf/2305.14069 -large language models are diverse role-players for summarization evaluation,8,"The provided abstract outlines a study focused on leveraging large language models (LLMs) for the evaluation of text summarization, which is relevant to the domain of prompt engineering. Although the study does not solely concentrate on 'hard prefix prompts', it does propose a framework that involves 'roleplayers prompting mechanism' and 'context-based prompting,' which are examples of prompt engineering techniques used to guide LLMs towards a specific task. The 'multi-roleplayer prompting technology' and 'integrating multiple outputs into the final evaluation results' are indicative of advanced prompt engineering methods to evaluate LLMs' performance on text summarization tasks. 
The study's high relevance comes from its methodological innovation in prompt engineering for LLM evaluation, but it falls slightly short of perfect relevance due to the absence of a direct focus on 'hard prefix prompts.'",https://arxiv.org/pdf/2303.15078 -can chatgpt detect intent? evaluating large language models for spoken language understanding,8,"The paper in question focuses on the ability of language models like ChatGPT to understand and classify intent in spoken language, which is closely related to prompt engineering. In-context learning and prompting are integral parts of language model interactions in natural language understanding tasks. Even though the study does not directly address 'hard prefix prompts,' it discusses the broader context of using prompts to elicit specific model behaviors and understandings, such as intent classification, which is a fundamental part of prompt engineering. The rating is not a full 10 because the study does not specifically focus on 'hard prefix prompts,' but it is highly relevant for anyone studying how prompting affects large language models' abilities.",https://arxiv.org/pdf/2305.13512 -complexity-based prompting for multi-step reasoning,9,"The given abstract discusses the concept of complexity-based prompting as a method for improving the multi-step reasoning capabilities of large-scale language models. This is highly relevant to prompt engineering because it explores how the complexity of prompts affects the performance of models like GPT-3 and Codex on reasoning tasks. The study directly relates to the process of crafting prompts that elicit better responses from language models, thus contributing to the field of prompt engineering. The systematic assessment of how prompt complexity influences the quality of model-generated reasoning chains is a specific aspect of prompt engineering, making the study pertinent though it doesn't focus on 'hard prefix prompts' as a specific type of prompt construction method.",http://arxiv.org/pdf/2210.00720 -"""according to ..."" prompting language models improves quoting from pre-training data",9,This study is highly relevant to prompt engineering as it explores a specific technique (according to prompting) aimed at improving the accuracy and reliability of Large Language Models by directing them to reference their pre-training data. The introduction of a novel evaluation metric (QUIP-Score) to measure grounding in underlying text corpora is also a significant contribution to the field. The focus on grounding responses and the empirical evidence showing the impact of different prompts on model output are central to the discipline of prompt engineering.,http://arxiv.org/pdf/2305.13252 -prompting for a conversation: how to control a dialog model?,9,"The paper directly addresses the challenge of prompt engineering by discussing a method to condition prompts on specific queries, which is a key issue in the field of dialog model control. Exploring alternatives to fine-tuning with this form of prompt engineering has direct implications on how to effectively influence the behavior of language models without compromising their diversity and expressiveness. The relevance to prompt engineering is very high because it contributes to the understanding and application of prompting techniques to guide dialog models. The paper's findings on improved BLEU scores and response diversity are valuable metrics when evaluating the performance of prompt-based methods. 
The only aspect keeping this from a perfect score may be the specificity of the application in dialogue systems, which, while still under the umbrella of prompt engineering, could be seen as a subset of larger prompt engineering challenges.",http://arxiv.org/pdf/2209.11068 -scaling instruction-finetuned language models,8,"The study is highly relevant to prompt engineering as it focuses on the effects of finetuning language models with instructions, which is a key method for improving the performance of language models on task-specific prompts. However, the study does not directly address 'hard prefix prompts', which may suggest specific, fixed prompts that are difficult for models to interpret, rather than the general approach of instruction finetuning. While the study has a strong connection to the field of prompt engineering by demonstrating the benefits of instruction-based finetuning on various models and benchmarks, the absence of a direct focus on 'hard prefixes' warrants a slightly lower rating.",http://arxiv.org/pdf/2210.11416 -multi-stage prompting for knowledgeable dialogue generation,8,"The paper presents a relevance to prompt engineering study as it focuses on improving dialogue generation by proposing a multi-stage prompting approach with a pretrained language model. This methodology directly relates to the design and refinement of prompts to enhance the model's performance in generating knowledgeable responses. Although the title suggests a dialogue system rather than an explicit 'hard prefix prompt' structure, the concepts of controlling and structuring prompts to improve output are central to prompt engineering. The high relevance score reflects the significance of multi-stage prompting within the broader scope of prompt engineering techniques.",http://arxiv.org/pdf/2203.08745 -unnatural instructions: tuning language models with (almost) no human labor,9,"The described study is highly relevant to prompt engineering as it involves the creation of a large dataset of instructions for fine-tuning language models, which is a core facet of prompt engineering. The method of using language models to generate additional prompts and then employing these prompts for subsequent model training directly pertains to techniques in prompt engineering. The effectiveness of using generated prompts to achieve comparable or superior performance to human-curated datasets provides valuable insights into prompt engineering methodologies and their potential efficiencies. The point deduction is due to the abstract not addressing 'hard prefix prompts' directly, which may indicate the study doesn't focus specifically on that aspect of prompt engineering.",http://arxiv.org/pdf/2212.09689 -language models are multilingual chain-of-thought reasoners,7,"The content is relevant to prompt engineering because it discusses the use of prompts (chain-of-thought prompting) to evaluate the reasoning abilities of language models in a multilingual context. Although the focus is on the reasoning abilities and multilingual capabilities of the models rather than on the engineering of prompts per se, the effectiveness of different types of prompts, especially those encouraging a chain of thought, is an essential aspect of prompt engineering. 
Hence, the study indirectly contributes valuable insights to the field of prompt engineering by showcasing the impact of prompt types on the performance of language models across various languages.",http://arxiv.org/pdf/2210.03057 -teaching small language models to reason,7,"The abstract is highly relevant to the field of prompt engineering as it discusses the teaching of reasoning capabilities to smaller language models via knowledge distillation from larger models. Even though it does not specifically mention 'hard prefix prompts', it is related to the concept of improving model performance through advanced prompting strategies like chaining thoughts. The study's outcome indicates that refined prompting techniques can transfer complex reasoning skills to smaller models, which is a significant aspect of prompt engineering.",http://arxiv.org/pdf/2212.08410 -instruction induction: from few examples to natural language task descriptions,9,"The provided title and abstract are highly relevant to prompt engineering study as they explicitly discuss the ability of large language models to generate natural language instructions from a few examples. This ability is directly related to the engineering of prompts, as it involves designing prompts that help the model infer the desired task. The systematic exploration and evaluation of this ability are fundamental to understanding and improving prompt engineering strategies. The mention of a novel evaluation metric and differentiation between models based on their alignment with instructions also suggests a nuanced approach to prompt engineering that may yield insights for the systematic review on hard prefix prompts.",https://arxiv.org/pdf/2205.10782 -weakly supervised data augmentation through prompting for dialogue understanding,8,"The study presented in the prompt directly engages with prompt engineering as it discusses the use of 'prompting' with large pre-trained language models for data augmentation in dialogue understanding tasks, which is a subset of prompt engineering. The relevance is high because it examines the iterative improvement of prompts through weakly-supervised techniques, although it may not focus exclusively on 'hard prefix prompts' but rather on the broader context of prompts for few-shot learning and augmentation. Given that it deals with prompts and language models and their application in a practical task, the study is substantially related to the field of prompt engineering.",https://arxiv.org/pdf/2210.14169 -errors are useful prompts: instruction guided task programming with verifier-assisted iterative prompting,7,"The relevance of the provided abstract to prompt engineering is fairly high, as the paper focuses on a method, CLAIRIFY, that uses iterative prompting combined with program verification. These techniques are critical for refining the interaction between humans and AI to generate accurate outputs, which is a central theme in prompt engineering. While the study is not about 'hard prefix prompts' specifically, it contributes to prompt engineering by exploring error utilization and iterative prompting to improve task programming, which could be applied in the broader context of prompt engineering studies. 
Therefore, a rating of 7 seems appropriate, given it may indirectly inform methodologies within prompt engineering but is not wholly centered on the specific concept of hard prefix prompts.",http://arxiv.org/pdf/2303.14100 -language is not all you need: aligning perception with language models,7,"While the provided abstract does not directly discuss 'hard prefix prompts' or 'prompt engineering,' it details the capabilities of Kosmos-1, a Multimodal Large Language Model (MLLM), which is relevant to the field of prompt engineering. The ability of Kosmos-1 to learn in context and follow instructions, including zero-shot and few-shot settings, as well as its evaluation in multimodal chain-of-thought prompting, relates closely to how prompts can be engineered and optimized to interact with language models. Moreover, the cross-modal knowledge transfer mentioned is a component of understanding how prompts can be designed to leverage language in multimodal environments. However, since the focus is primarily on the model's capabilities rather than on the study of prompts themselves, the relevance rating is not a maximal score.",http://arxiv.org/pdf/2302.14045 -improving factuality and reasoning in language models through multiagent debate,8,"The paper described is highly relevant to prompt engineering as it discusses a novel method for improving language model responses through a multiagent debate system. Although it does not specifically mention a 'hard prefix prompt', the techniques involved in creating prompts that facilitate a debate among language models are closely linked to advanced prompt engineering strategies. The 'society of minds' approach likely involves intricate prompting mechanisms to orchestrate the debate process. This has a direct bearing on the study and advancement of prompting methods, making the paper's content pertinent to the field. However, the rating is not a full 10 due to the lack of explicit mention of 'hard prefix prompts', which are the specific focus of the prompt engineering study mentioned.",http://arxiv.org/pdf/2305.14325 -orca: interpreting prompted language models via locating supporting data evidence in the ocean of pretraining data,8,"The abstract discusses a novel method named ORCA for interpreting how prompted language models such as BERT perform tasks by locating supporting data from pretraining, which is highly relevant to studies on 'prompt engineering.' Understanding how models relate to pretraining data when generating responses to prompts is a crucial aspect of prompt engineering. It informs how models process prompts and can lead to designing better prompts that leverage the model's knowledge effectively. However, the focus on 'hard prefix prompts' hasn't been explicitly mentioned, which might slightly reduce its relevance to that specific field of study.",https://arxiv.org/pdf/2205.12600 -prefix-tuning: optimizing continuous prompts for generation,8,"The paper discusses 'prefix-tuning,' which is highly relevant to the field of prompt engineering as it involves optimizing task-specific vectors (prefixes) to improve performance on natural language generation tasks without the need to fine-tune all parameters of a language model. While the term 'hard prefix prompts' isn't explicitly used, the concept of prefix-tuning relies on a similar principle of using prompts (in this case, a trainable prefix) to guide the behavior of a language model. 
This is pertinent to the study of how prompts affect model performance and behavior, thus earning a high relevance rating. However, it's not a perfect match because the prompt specified a 'hard prefix prompts' review, and this paper focuses on a subset of prompt engineering that is not strictly the 'hard prefix.'",https://aclanthology.org/2021.acl-long.353.pdf -segment everything everywhere all at once,8,"The abstract provided describes the creation of an interactive and promptable model (SEEM) for image segmentation tasks that is inspired by the mechanism of large language models (LLMs). Since prompt engineering refers to the design and refinement of prompts to effectively interact with models, such as LLMs, the study of SEEM's novel decoding mechanism that allows for diverse prompting is relevant to the field of prompt engineering. SEEM's ability to handle different types of dynamic prompts and its focus on a joint visual-semantic space are aspects that can provide valuable insights into how prompts can be optimized for better interaction with models across various domains. The work also touches on compositionality and semantic-awareness, both of which are key concepts in prompt engineering. While the focus is on image segmentation, the principles of designing prompts for interactive and semantic tasks align closely with prompt engineering methodologies. Therefore, the relevance rating is high but not maximum because the primary application is in the domain of image segmentation rather than text-based models, which are more commonly associated with prompt engineering.",https://arxiv.org/pdf/2304.06718 -verify-and-edit: a knowledge-enhanced chain-of-thought framework,8,"The abstract describes a method for improving the performance of large language models by addressing the factuality of generated content through a Verify-and-Edit framework in the context of Chain-of-Thought prompting. This is highly relevant to prompt engineering as it presents a new technique for refining prompts to enhance model factuality and trustworthiness. Although it does not directly address 'hard prefix prompts,' it contributes to the broader field of prompt engineering by presenting a strategy to improve output quality, which is a crucial aspect of the study of prompts and their optimizations. Therefore, it scores high on relevance, but not the maximum due to its specific focus on factuality rather than prompt types.",https://arxiv.org/pdf/2305.03268 -graphprompt: unifying pre-training and downstream tasks for graph neural networks,8,"The paper discusses a novel framework called GraphPrompt, which is directly related to prompt engineering in the context of graph neural networks (GNNs). While the study's focus is on the application of prompts to GNNs rather than text-based models traditionally associated with prompt engineering, it still contributes to the overall field of prompt engineering by extending its principles to another domain of artificial intelligence. 
The relevance to prompt engineering is high as it involves the development of a learnable prompt to bridge the gap between pre-training and downstream tasks, which is a core concept in prompt engineering studies.",https://dl.acm.org/doi/pdf/10.1145/3543507.3583386 -symbolic chain-of-thought distillation: small models can also “think” step-by-step,9,"The abstract describes a method called Symbolic Chain-of-Thought Distillation (SCoTD) that directly relates to prompt engineering, as it involves training smaller language models on the rationalizations produced by larger models. This process is a form of prompt engineering since it deals with enhancing the ability of smaller models to sequentially reason through problems, akin to crafting effective prompts that guide model reasoning. The high relevance rating is due to the focus on improving model performance through engineered prompts (chain-of-thought prompting), which is central to prompt engineering studies. However, the rating is not a full 10 because the abstract does not explicitly mention 'hard prefix prompts' or a systematic review, which is specifically noted in the prompt.",http://arxiv.org/pdf/2306.14050 -towards revealing the mystery behind chain of thought: a theoretical perspective,8,"The provided title and abstract discuss the effectiveness of Chain-of-Thought (CoT) prompting in improving the performance of Large Language Models (LLMs), particularly for complex tasks. While the study does not explicitly mention 'hard prefix prompts,' it is closely related to prompt engineering, as CoT is a form of prompting strategy used to enhance the problem-solving capabilities of LLMs. The relevance to prompt engineering is high because the theoretical perspective on the mechanism of CoT can contribute significantly to the understanding and development of advanced prompt engineering techniques. However, the rating is not a full 10 because the explicit focus is not on hard prefix prompts but rather on a broader category of CoT prompting strategies.",http://arxiv.org/pdf/2305.15408 -zeroshotdataaug: generating and augmenting training data with chatgpt,8,"This paper is highly relevant to prompt engineering study as it directly explores the generation of synthetic data using task-specific prompts with ChatGPT. The study delves into the principles of prompt engineering by designing appropriate prompts that lead to superior performance in data augmentation for low resource scenarios. While the paper does not specifically mention 'hard prefix prompts' and the focus is more on data augmentation rather than the core concept of prompt engineering, the underlying premise involves crafting effective prompts to elicit desired outputs from a language model, which is a central aspect of prompt engineering.",http://arxiv.org/pdf/2304.14334 -meet your favorite character: open-domain chatbot mimicking fictional characters with only a few utterances,8,"The paper presents a method, Pseudo Dialog Prompting (PDP), which is highly relevant to prompt engineering study as it directly involves designing prompts to induce specific behaviors from a language model (mimicking fictional characters). This directly contributes to the broader field of prompt engineering by exploring how to effectively use limited data (a few utterances) to shape the output of a language model. 
It might not cover 'hard prefix prompts' in the systematic review sense but provides practical insights into the application of prompt engineering for conversational AI.",http://arxiv.org/pdf/2204.10825 -promptchainer: chaining large language model prompts through visual programming,8,"The study is highly relevant to prompt engineering as it involves creating complex tasks by sequencing multiple prompt-driven interactions with a Large Language Model (LLM). While it doesn't specifically mention 'hard prefix prompts,' it approaches the broader topic of prompt design and chaining, which is a subset of prompt engineering. It also focuses on the user-interface side of prompt engineering through the PromptChainer tool, making it relevant for researchers and practitioners interested in optimizing the human-model interaction process. However, the rating is not a full 10 because the study does not directly focus on 'hard prefix prompts' specifically, which is the exact topic of interest.",https://arxiv.org/pdf/2203.06566 -"grips: gradient-free, edit-based instruction search for prompting large language models",9,"The article describes an innovative approach to prompt engineering specifically designed for large language models, which is directly relevant to the prompt engineering study. The 'Gradient-free Instructional Prompt Search (GrIPS)' is highly relevant as it directly addresses the challenge of improving language model performance through prompt optimization without the need for computationally expensive gradient-based methods. The relevance is slightly below 10 because the systematic review is not solely focused on hard prefix prompts, but on a broader method of prompt improvement. Nevertheless, the study's contributions to the field of prompt engineering are substantial and directly applicable to the systematic review topic.",http://arxiv.org/pdf/2203.07281 -ai chains: transparent and controllable human-ai interaction by chaining large language model prompts,8,"The study addresses a novel approach to interacting with large language models through 'Chaining LLM steps', indicating a clear relevance to the field of prompt engineering. Chaining can be viewed as an advanced form of prompt engineering where prompts are not static but follow a dynamic, modular process. Although the study does not directly discuss 'hard prefix prompts,' it explores the controllability and transparency of LLMs, which are crucial aspects in designing effective prompts. The relevance rating is not a full 10 because the study's focus is on chaining mechanisms rather than the specific concept of 'hard prefix prompts.'",https://dl.acm.org/doi/pdf/10.1145/3491102.3517582 -craft an iron sword: dynamically generating interactive game characters by prompting large language models tuned on code,7,"The abstract indicates a study that involves using example conversational prompts with a language model to enhance NPC interactions in games. While the main focus seems to be on generating natural language and code for game development purposes, the underlying premise is that these prompts are essential in directing the behavior of the language model. This relates to the subject of prompt engineering, as the quality and design of the prompts directly affect the output and capabilities of the conversational agent. However, the study does not appear to focus primarily on the systematic review of 'hard prefix prompts' specifically, hence the rating is not a perfect 10. 
The findings could still contribute valuable insights into prompt engineering as it relates to practical applications in game design and NPC character development.",https://aclanthology.org/2022.wordplay-1.3.pdf -zero-shot rumor detection with propagation structure via prompt learning,8,"The abstract discusses a new approach to rumor detection using a prompt learning framework which is directly relevant to the field of prompt engineering. The study addresses the design of prompts and their integration with data representations and structural features, which are core considerations for prompt engineering. However, the study is more focused on the application of prompt learning for rumor detection rather than the general study of 'hard prefix prompts', so it may not fully cover the systematic review aspect that the hypothetical study on hard prefix prompts suggests.",http://arxiv.org/pdf/2212.01117 -matching exemplar as next sentence prediction (mensp): zero-shot prompt learning for automatic scoring in science education,8,"The abstract describes a study that investigates the use of a zero-shot approach to automatically score student responses in science education using a novel method called Matching Exemplars as Next Sentence Prediction (MeNSP). This approach is highly relevant to the field of prompt engineering, as it involves the use of prompts to align with a scoring procedure without the need for fine-tuning. While the abstract does not explicitly mention 'hard prefix prompts', it does discuss prompt-based techniques for language model adaptation, which falls under the broader umbrella of prompt engineering. Therefore, the rating is an 8, indicating high relevance due to the innovative application of prompt-related methods in an educational context, but not a perfect score as the specific term 'hard prefix prompts' was not discussed.",http://arxiv.org/pdf/2301.08771 -controlling personality style in dialogue with zero-shot prompt-based learning,9,"The abstract describes a study focused on 'prompt-based learning' for controlling both personality and semantic accuracy in natural language generation, which is highly relevant to the field of prompt engineering. The experimentation with different classes of prompts and their effects on the NLG performance directly pertains to how prompts can be engineered to achieve specific outcomes. The high rating acknowledges the direct relevance to prompt engineering studies, especially within the context of controlling specific attributes in generated text, which is a crucial aspect of prompt engineering. The only reason it does not receive a full score might be because it does not explicitly address 'hard prefix prompts' but rather prompt-based learning in general.",http://arxiv.org/pdf/2302.03848 -structured prompt interrogation and recursive extraction of semantics (spires): a method for populating knowledge bases using zero-shot learning,8,"The given abstract describes a method, SPIRES, for populating knowledge bases using Large Language Models (LLMs) through zero-shot learning and prompt interrogation. As prompt engineering involves the design and refinement of prompts to effectively communicate with AI models, this abstract is highly relevant, as it suggests a structured way to use prompts to extract information and populate databases, a task that directly pertains to how prompts are constructed and their effectiveness. 
The rating is not a perfect 10 as the abstract specifically focuses on knowledge extraction and ontologies, which is a subset of prompt engineering.",http://arxiv.org/pdf/2304.02711 -zero-shot generative model adaptation via image-specific prompt learning,7,"The provided abstract discusses Image-specific Prompt Learning (IPL), a methodology related to adapting generative models using text-based prompts, which is highly relevant to the field of prompt engineering. Although the text does not directly address 'hard prefix prompts', it does tackle the use of text prompts in controlling and improving the output of generative models, thus making significant contributions to the broader topic of prompt engineering. The connection to prompt engineering is substantial as IPL is an innovative way of providing domain-specific textual directions to a generative model, which aligns with the disciplines involved in studying how prompts affect the behavior of AI models. However, it does not fully align with a 'comprehensive systematic review on hard prefix prompts' as the abstract seems to focus on a specific application rather than a broad review. Hence, the rating is not a perfect score.",https://arxiv.org/pdf/2304.03119 -relationprompt: leveraging prompts to generate synthetic data for zero-shot relation triplet extraction,9,"The study directly addresses prompt engineering by exploring how prompts can be used to generate synthetic data for a Zero-Shot Relation Triplet Extraction task. It presents a novel method of leveraging language model prompts in conjunction with structured text approaches to create relation samples, which is a significant contribution to prompt engineering literature. The fact that they also designed a novel decoding method to work with their prompting strategy further emphasizes its high relevance to the field of prompt engineering.",http://arxiv.org/pdf/2203.09101 -decoupling knowledge from memorization: retrieval-augmented prompt learning,9,"The presented abstract is highly relevant to prompt engineering study as it directly addresses the concept of prompt learning, which is a cornerstone of prompt engineering. It proposes a novel method, RetroPrompt, which aims to enhance the general learning capabilities of language models by decoupling knowledge from memorization. This pertains to an advanced area within prompt engineering that targets improvements in model generalization and few-shot learning abilities, both of which are critical metrics in evaluating the effectiveness of prompts. Although it does not explicitly mention 'hard prefix prompts,' the subject matter is closely related to the broader field of prompt design and optimization.",https://arxiv.org/pdf/2205.14704 -zero-shot video captioning with evolving pseudo-tokens,7,"The abstract describes a method for zero-shot video captioning that involves a form of prompt engineering by optimizing part of the prompt during the generation process. This relates to the prompt engineering study as it includes the manipulation of prompts to improve language model outputs. Although it does not specifically mention 'hard prefix prompts,' the concept of evolving pseudo-tokens could potentially fall under a broader interpretation of prompt engineering. 
Therefore, the relevance is fairly high but not completely aligned, as the central focus is on video captioning rather than prompt engineering in isolation.",http://arxiv.org/pdf/2207.11100 -improving few-shot performance of language models via nearest neighbor calibration,7,"The study targets the optimization of in-context learning for pre-trained language models (PLMs), which is closely related to prompt engineering, as it deals with the arrangement and selection of prompts to enhance few-shot learning performances. The introduction of a nearest-neighbor calibration framework addresses the effectiveness of prompts. Even though the study does not explicitly mention 'hard prefix prompts', the principles and methodologies used for calibration and enhancement of few-shot learning may be applicable to the systematic review and improvement of hard prefix prompts. Hence, the study is relevant but not fully focused on hard prefix prompts, leading to a rating of 7.",https://arxiv.org/pdf/2212.02216 -few-shot fine-grained entity typing with automatic label interpretation and instance generation,7,"The abstract discusses a novel framework for few-shot Fine-grained Entity Typing (FET) that utilizes prompt-based tuning, which is directly related to the concept of prompt engineering. It addresses the challenge of how to effectively design prompts (verbalizers) automatically, considering the target corpus and label hierarchy, which is a core problem in prompt engineering studies. Moreover, it also introduces a generation aspect to create new instances, hinting at iterative prompt improvement or instance augmentation, which could be relevant for generating more effective prompts. However, the study seems to focus more on entity typing within a few-shot learning framework rather than on hard prefix prompts specifically or prompt engineering more broadly, which may include a variety of other techniques and applications. Therefore, the rating is not a full 10 but still significant due to its partial relevance.",https://arxiv.org/pdf/2206.13746 -natural language inference prompts for zero-shot emotion classification in text across corpora,9,"The paper is highly relevant to prompt engineering as it examines the effects of different prompt formulations on the performance of a natural language inference-based zero-shot-learning classifier. This is directly related to the field of prompt engineering, which involves studying how the design of prompts influences the behavior and output of language models. The study's focus on tailoring prompt selection to fit specific language corpora aligns well with prompt engineering objectives, which seek to optimize interactions with language models for various tasks, including emotion classification mentioned in the abstract.",http://arxiv.org/pdf/2209.06701 -clinical prompt learning with frozen language models,8,"The abstract discusses the application of prompt learning within the specialized domain of clinical texts, comparing its effectiveness to traditional fine-tuning methods. While it doesn't focus exclusively on 'hard prefix prompts', prompt learning is a closely related aspect of prompt engineering. It's highly relevant to a study on prompt engineering, particularly due to the exploration of efficiency and domain-specific challenges, which are key considerations in the field. 
However, the absence of a specific mention of 'hard prefix prompts' precludes a perfect score.",http://arxiv.org/pdf/2205.05535 -few-shot table-to-text generation with prefix-controlled generator,8,"The study presents a prompt-based approach, specifically the Prefix-Controlled Generator, which is highly relevant to the field of prompt engineering. It addresses the challenge of few-shot table-to-text generation by pre-pending task-specific prompts to improve the ability of Pre-trained Language Models to handle structured data like tables. The focus on controlling the output through hard prefixes is directly applicable to prompt engineering. The two-point deduction from a perfect score acknowledges that the paper might be tangentially related to a 'systematic review on hard prefix prompts' since it appears to be a novel methodology rather than a review. However, the proposed method's successful application in a few-shot learning context and control over PLM outputs keeps it highly relevant to the study of engineering prompts for language models.",http://arxiv.org/pdf/2208.10709 -p3 ranker: mitigating the gaps between pre-training and ranking fine-tuning with prompt-based learning and pre-finetuning,8,"The abstract provided discusses the utilization of prompt-based learning in the context of adapting pre-trained language models for search ranking tasks. This approach aligns closely with prompt engineering, which focuses on designing prompts that effectively guide models to perform specific tasks or understand particular contexts. The P3 Ranker's emphasis on converting the ranking task to fit a pre-training schema using prompts directly relates to the study of prompt engineering, justifying a high relevance rating. Although the paper specifically targets the search ranking domain and may not address hard prefix prompts directly, the principles of prompt-based learning discussed are central to prompt engineering studies.",https://dl.acm.org/doi/pdf/10.1145/3477495.3531786 -prompt tuning with soft context sharing for vision-language models,9,"The paper presents research directly relevant to prompt engineering by discussing a novel methodology for prompt tuning in vision-language models. The primary focus on fine-tuning models for few-shot tasks using a shared meta network for prompt generation aligns closely with advanced techniques in prompt engineering. The relevance is only slightly less than maximum because it is specifically about vision-language models and may not cover the broader aspects or methods used in all types of models related to 'prompt engineering.'",http://arxiv.org/pdf/2208.13474 -prompt-tuning can be much better than fine-tuning on cross-lingual understanding with multilingual language models,8,"The abstract discusses the effectiveness of prompt-tuning compared to fine-tuning in multilingual language models for natural language understanding tasks. The relevance to prompt engineering is significant, as prompt-tuning is a method of prompt engineering that modifies the input prompt to improve model performance, without extensive retraining. This is particularly applicable to the engineering study of 'hard prefix prompts' as it provides empirical evidence of how different prompting strategies can impact cross-lingual understanding and transferability of language models.
The reason why it is not a full 10 is that it does not specifically discuss 'hard prefix prompts,' but rather prompt tuning in a general sense, and thus, it is not exclusively focused on the prompt engineering aspect described in the original query.",http://arxiv.org/pdf/2210.12360 -exploiting domain-slot related keywords description for few-shot cross-domain dialogue state tracking,7,"The paper describes an approach to enhancing dialogue state tracking by using domain-slot related descriptions which act as prompts to identify slot information. This is relevant to prompt engineering because the paper discusses a method of designing and utilizing prompts (in the form of domain-slot descriptions) to improve the performance of an NLP model. Furthermore, the results indicate that these engineered prompts (domain-slot descriptions) help the model to outperform other methods. While the focus is on dialogue state tracking rather than on prompt engineering directly, the usage of customized descriptions to improve model performance does partially fall under the broader umbrella of prompt engineering.",https://aclanthology.org/2022.emnlp-main.157.pdf -decorate the examples: a simple method of prompt design for biomedical relation extraction,9,"The title and abstract indicate that the paper directly addresses prompt design, an essential aspect of prompt engineering, specifically for the task of biomedical relation extraction. The use of a systematic method to generate prompts and the evaluation of their effectiveness in the context of fine-tuning and few-shot learning are highly relevant to studying prompt engineering. Furthermore, the concrete results showing improved performance by using prompts suggest practical significance in the field. The only reason for not giving a full score of 10 is that the paper focuses on a specific domain (biomedical), which may slightly limit the breadth of its relevance to prompt engineering in general, even though the methodology may be applicable across different domains.",http://arxiv.org/pdf/2204.10360 -pre-trained language models can be fully zero-shot learners,8,"The abstract is highly relevant to prompt engineering as it discusses a method (NPPrompt) for zero-shot language understanding that relies on pre-trained language models without the need for labeled data, fine-tuning, or human-constructed prompts. This directly pertains to the study of prompting since it tackles the challenge of leveraging the underlying knowledge of PLMs for various NLP tasks using a novel prompting technique. While it doesn't specifically mention 'hard prefix prompts,' it is within the domain of research and advancing the understanding of how to use prompts effectively with PLMs. The rating is not a full 10 because the direct relevance to 'hard prefix prompts' is not explicit, which might be specifically addressed in a comprehensive systematic review on that sub-topic.",http://arxiv.org/pdf/2212.06950 -tess: zero-shot classification via textual similarity comparison with prompting using sentence encoder,8,"The mentioned study on the TeSS (Text Similarity Comparison using Sentence Encoder) framework is highly relevant to prompt engineering because it focuses on a method where label assignment in zero-shot classification is achieved through the comparison of embeddings from text input and label prompts. This process is integral to prompt engineering as it relies on the design and utilization of prompts that can effectively represent the semantic space for classification tasks. 
The use of external corpora to enhance the descriptive power of label prompts (TeSS-R) is particularly pertinent to prompt engineering research. However, the study did not explicitly focus on 'hard prefix prompts,' which would encompass a specific subset of prompting techniques and strategies, hence the rating of 8 rather than a perfect 10.",http://arxiv.org/pdf/2212.10391 -zero-shot program representation learning,7,"The abstract discusses 'Zecoler', which utilizes the concept of inserting trainable prompts into code to elicit knowledge from pre-trained models in the context of code representation learning tasks. This approach is relevant to prompt engineering study because it involves optimizing the input to a pre-trained model through trainable prompts, which is akin to hard prompting strategies. The concept of transforming downstream tasks into the form of pre-training tasks using prompts is central to prompt engineering. However, the focus on code intelligence tasks and domain-specific applications like Solidity reduces the relevance slightly, as a comprehensive systematic review on hard prefix prompts may encompass a broader range of tasks and domains beyond code representation learning.",https://dl.acm.org/doi/pdf/10.1145/3524610.3527888 -queryform: a simple zero-shot form entity query framework,7,"The study presents a zero-shot transfer learning framework called QueryForm, which includes a 'dual prompting mechanism.' Although the paper does not focus specifically on 'hard prefix prompts' as a separate study area, the concept of using prompts to extract information from a model without task-specific training data is a form of prompt engineering. The relevance to prompt engineering lies in the framework's ability to influence a model's behavior with carefully constructed queries (prompts). However, the paper discusses prompting within the context of a specific document understanding task rather than a wider exploration of various prompt engineering techniques. The rating reflects relevance in terms of prompting mechanisms and their application, but it is not a direct study of hard prefix prompts in a comprehensive manner.",http://arxiv.org/pdf/2211.07730 -prompt gating: a parameter efficient tuning method for zero-shot multi-source translation,8,"The paper introduces 'Prompt Gating', a method that appends prompts to model inputs, which is directly related to prompt engineering as it involves manipulating prompts to achieve better performance in a machine learning task. The study's relevance to prompt engineering is high because it deals with the integration of prompts into translation models and discusses their impact. The fact that it is applied to machine translation, however, makes it slightly less relevant than if it would have been a study solely focused on prompt engineering for a broader range of applications.",http://arxiv.org/pdf/2212.09387 -peinet: joint prompt and evidence inference network via language family policy for zero-shot multilingual fact checking,8,"Although the title and abstract do not specifically mention 'hard prefix prompts', they discuss the concept of using joint prompt and evidence inference for zero-shot multilingual fact-checking. This is relevant to prompt engineering as it involves the design of prompts (in this case, for understanding and verifying multilingual claims) and how these prompts interact with an AI model to achieve better performance in a specific task. 
The novel approach of combining prompts with a mechanism for evidence aggregation aligns with prompt-based methodologies. Hence, the paper is quite relevant to the study of prompt engineering, although it is not directly focused on 'hard prefix prompts,' which might be a specific subset of prompt engineering.",https://www.mdpi.com/2076-3417/12/19/9688/pdf?version=1664345340 -prompt-guided scene generation for 3d zero-shot learning,7,"The paper presents an application of prompt engineering in the context of 3D zero-shot learning, where prompts are used to guide scene generation and are integral to the architecture of the learning model. Although prompt engineering is usually discussed in relation to natural language processing, this study adapts the concept for a novel application in 3D data augmentation and model training. It is relevant to the broader field of prompt engineering in that it showcases its adaptability and potential in different areas of AI. However, it might not be considered a pure study of prompt engineering in the textual or linguistic sense, hence the rating is not a full 10.",https://arxiv.org/pdf/2209.14690 -from visual prompt learning to zero-shot transfer: mapping is all you need,8,"The article discusses a novel approach to adapting large-scale pre-trained models to new tasks using a technique called SeMap, which aligns semantic knowledge for visual prompt learning. The relevance to prompt engineering is high because the research deals with the optimization and creation of prompts that facilitate the use of pre-trained models in new tasks without fine-tuning (zero-shot transfer). This is closely related to the concept of hard prefix prompts in prompt engineering, where the goal is to improve the interaction with a model to produce better performance on target tasks. However, since the main focus is on visual prompt learning rather than hard prefix prompts specifically, the rating is not a full 10.",http://arxiv.org/pdf/2303.05266 -layout and task aware instruction prompt for zero-shot document image question answering,7,"The relevance to prompt engineering is moderately high because the paper discusses the use of instruction-tuning language models and emphasizes the understanding of layout via spaces and line breaks, which relates to generating prompts that are layout-aware. The proposed LATIN-Prompt and LATIN-Tuning are direct applications of modifying prompts to include layout information and improve task performance, which is a form of prompt engineering. However, the paper is more focused on the interaction between layout awareness and zero-shot learning, rather than on hard prefix prompts specifically. Therefore, while the study is relevant to prompting techniques and their optimizations in the context of language models, it does not directly address the systematic review of hard prefix prompts.",https://arxiv.org/pdf/2306.00526 -navigating prompt complexity for zero-shot classification: a study of large language models in computational social science,9,"The study directly addresses the role of different prompting strategies in the performance of large language models on classification tasks, which is a core component of prompt engineering. The exploration of how prompt complexity and modifications affect model performance is highly relevant to understanding the mechanisms by which prompts can be engineered for better outcomes in natural language processing tasks. 
Although the study does not specifically mention 'hard prefix prompts,' it does analyze the influence of variations in prompts, which is closely related to the concept of prompt engineering.",https://arxiv.org/pdf/2305.14310 -zero-shot continuous prompt transfer: generalizing task semantics across language models,9,"The presented study is highly relevant to prompt engineering as it directly addresses an advanced application of prompt tuning—namely, the transferability of continuous prompts between different language models. The zero-shot learning aspect and the focus on preserving 'task semantics' when transferring prompts make the research important for the broader understanding of how prompt engineering can be applied across various models. It does not, however, directly address 'hard prefix prompts,' but is still substantially connected to the field of prompt engineering.",https://arxiv.org/pdf/2310.01691 -prompt-based zero-shot text classification with conceptual knowledge,8,"The paper described seems highly relevant to prompt engineering as it directly discusses the use of prompts for text classification in a zero-shot learning context. The incorporation of conceptual knowledge into prompt-based systems is closely aligned with the study of how different prompt formulations can impact AI performance. While the study's focus on zero-shot learning is slightly broader than prompt engineering alone, its relevance is still significant since prompt engineering is a major component of zero-shot learning approaches.",https://aclanthology.org/2023.acl-srw.4.pdf -"synthesize, prompt and transfer: zero-shot conversational question generation with pre-trained language model",7,"The paper presents a multi-stage knowledge transfer framework (SPARTA) that involves a prompt-based approach for conversational question generation in a zero-shot setting. While it is not explicitly focused on 'hard prefix prompts' in prompt engineering study, the utilization of prompts in the training process to facilitate knowledge transfer from single-turn instances to conversational question generation does relate to prompt engineering. Therefore, it holds relevance for those studying the broader field of prompt engineering, though the exact technique may differ from hard prefix prompting.",https://aclanthology.org/2023.acl-long.500.pdf -"entities, dates, and languages: zero-shot on historical texts with t0",8,"This abstract is highly relevant to prompt engineering as it directly discusses using prompts to achieve zero-shot Named Entity Recognition with the T0 model on historical texts in various languages. It indicates an exploration of prompt-based methods and their efficacy in a challenging domain, which is central to prompt engineering studies. However, the paper does not focus solely on 'hard prefix prompts' but also addresses broader topics such as zero-shot learning and Named Entity Recognition, hence the rating of 8 instead of a perfect 10.",http://arxiv.org/pdf/2204.05211 -pesco: prompt-enhanced self contrastive learning for zero-shot text classification,8,"The abstract describes PESCO, a framework that uses prompts as part of its contrastive learning approach for zero-shot text classification, which is relevant to the field of prompt engineering. Although it does not focus exclusively on 'hard prefix prompts,' the use of prompts to enhance label retrieval is a direct application of prompt engineering techniques. 
Therefore, the relevance is high, but not perfect since the abstract does not specify 'hard prefix prompts' as its primary subject.",http://arxiv.org/pdf/2305.14963 -prompt to be consistent is better than self-consistent? few-shot and zero-shot fact verification with pre-trained language models,7,"The paper's focus on a novel method called ProToCo, which stands for 'Prompt To be Consistent', involves prompt engineering as it seeks to improve the accuracy of pre-trained language models (PLMs) for fact verification by generating multiple prompt variants and using consistency as a constraint. This method is directly related to prompt engineering as it involves crafting prompts that can effectively query PLMs. However, the paper does not seem to concentrate specifically on 'hard prefix prompts' but on prompting techniques in general to enforce consistency in predictions. Therefore, while it is relevant, it might not directly address the specifics of hard prefix prompt engineering as indicated by your query but still offers significant insights into the broader field of prompt engineering for PLMs.",http://arxiv.org/pdf/2306.02569 -hierarchical prompt learning for compositional zero-shot recognition,7,"The paper appears to address the concept of prompt engineering by exploring hierarchical prompt learning within the context of Compositional Zero-Shot Learning (CZSL). While it is not a comprehensive systematic review of hard prefix prompts as such, it does contribute to the field of prompt engineering by proposing a novel approach to learning prompts hierarchically, and is thus relevant. The use of prefixed prompts to improve the performance of a vision-language model like CLIP could be considered a form of prompt engineering. However, the rating is not a full 10 because the study is not specifically a systematic review of hard prefix prompts, which was the exact topic requested.",https://www.ijcai.org/proceedings/2023/0163.pdf -zero-shot domain adaptation for neural machine translation with retrieved phrase-level prompts,9,"The paper is highly relevant to prompt engineering as it investigates a prompt-based method for domain adaptation in neural machine translation, which is a novel approach within the field of machine learning and specifically relates to the engineering of prompts. It does not focus on 'hard prefix prompts' specifically, but the usage of bilingual phrase-level prompts for domain adaptation suggests a strong connection to the concept of engineering prompts to improve the performance of a language model. The improvement in BLEU scores and translation accuracy further attests to the effectiveness of the prompt-based method, highlighting its potential relevance in the study of prompt engineering.",http://arxiv.org/pdf/2209.11409 -"electra is a zero-shot learner, too",8,"The provided abstract primarily relates to prompt engineering as it discusses a novel prompt-based learning method using ELECTRA for zero-shot learning tasks. Prompt engineering is explicitly mentioned as part of the new 'pre-train, prompt, and predict' paradigm.
Even though it does not specifically discuss 'hard prefix prompts,' the focus on prompt-based approaches and their effectiveness in improving model performance is highly relevant to studies of prompt design and implementation in NLP models.",http://arxiv.org/pdf/2207.08141 -evaluating prompts across multiple choice tasks in a zero-shot setting,8,"This abstract describes a study focused on the evaluation of natural language prompts across multiple choice tasks in a zero-shot setting, which is highly relevant to the field of prompt engineering. It seeks to understand the impact of prompt qualities on model performance, aligning well with the interests of prompt engineering research. The study’s goal to standardize prompts for tasks they were not initially designed for and the quantitative analysis of prompt attributes is significant for the design of effective prompts. Although the study does not explicitly mention 'hard prefix prompts', it contributes to the broader context of prompt engineering, thus the rating of 8 rather than a perfect 10.",http://arxiv.org/pdf/2203.15754 -zerotop: zero-shot task-oriented semantic parsing using large language models,8,"The paper presents a novel application of large language models (LLMs) for zero-shot semantic parsing, which is indirectly related to prompt engineering. Prompt engineering involves crafting inputs to LLMs in a way that optimizes their performance on a given task, and the study's focus on decomposing the semantic parsing problem into a series of QA problems is a form of prompt engineering. They are effectively engineering prompts to elicit specific types of information from an LLM in a structured format. However, the paper is more about the application of LLMs in a zero-shot learning setting than about the systematic study of prompt engineering techniques. Therefore, the relevance is rated high but not perfect.",http://arxiv.org/pdf/2212.10815 -"how to prompt llms for text-to-sql: a study in zero-shot, single-domain, and cross-domain settings",9,"The abstract describes a study focused on the effectiveness of different prompt constructions in the context of using large language models for the text-to-SQL task. This directly relates to prompt engineering as it explores how varying prompts influence the performance of language models in specific language processing tasks. The study's investigation into the impact of different prompts and its goal to provide insights for future work is highly relevant to the field of prompt engineering, although it is more specialized towards text-to-SQL rather than hard prefix prompts specifically.",http://arxiv.org/pdf/2305.11853 -malm: mixing augmented language modeling for zero-shot machine translation,7,"The abstract discusses the usage of large pre-trained language models and their effectiveness in avoiding off-target language errors for zero-shot machine translation when conditioned with prompts. This suggests that the study delves into prompt engineering to some extent, particularly with regard to its influence on language model behavior in translation tasks. 
However, the core focus seems to be on zero-shot translation and multilingual model performance rather than exclusively on prompt engineering, so the relevance is significant but not complete.",http://arxiv.org/pdf/2210.00320 -zero-shot domain-sensitive speech recognition with prompt-conditioning fine-tuning,8,"The study described is highly relevant to prompt engineering as it involves fine-tuning a pre-trained model using text prompts to achieve domain sensitivity and adaptation in speech recognition tasks. Such conditioning on prompts is a direct application of prompt engineering principles to improve model performance on specific domains, showcased by the significant Word Error Rate reductions. However, it is focused specifically on speech recognition and does not cover a broader spectrum of 'hard prefix prompts', which might include other areas beyond speech recognition, hence the rating is not a full 10.",https://arxiv.org/pdf/2307.10274 -zero-shot clinical entity recognition using chatgpt,8,"The abstract indicates that the study investigates the use of different prompt strategies for enhancing the performance of ChatGPT in a zero-shot clinical entity recognition task. It directly tackles prompt engineering by comparing the effectiveness of prompts in a specialised application (clinical NER), which is highly relevant to the study of how prompts affect AI behavior. However, it doesn't specify that it focuses on 'hard prefix prompts,' which would be essential for a 'comprehensive systematic review on hard prefix prompts,' hence not a perfect score.",http://arxiv.org/pdf/2303.16416 -a preliminary evaluation of chatgpt for zero-shot dialogue understanding,7,"The paper's relevance to prompt engineering is notable due to the exploration of ChatGPT's capabilities in zero-shot dialogue understanding tasks, which inherently involves crafting prompts that can elicit the desired outcomes without task-specific training. The mention of 'multi-turn interactive prompt' within the dialogue state tracking (DST) task highlights an aspect of prompt engineering. Understanding how ChatGPT responds to different kinds of prompts, especially in zero-shot scenarios, is crucial for developing better prompt-engineering strategies. However, the study does not focus primarily on the 'hard prefix prompts' which is specific to the systematic review in question, hence the rating is not a full 10.",http://arxiv.org/pdf/2304.04256 -"clip for all things zero-shot sketch-based image retrieval, fine-grained or not",7,"The abstract discusses the application of prompt learning specifically tailored to the sketch community and its impact on zero-shot sketch-based image retrieval. While it does not explicitly focus on 'hard prefix prompts,' it does mention the implementation of a prompt learning setup, and designing sketch-specific prompts which are relevant to prompt engineering. The substantial performance gains reported indicate the relevance and effectiveness of prompt tuning in this domain. However, the focus seems to be more on the application of prompts in conjunction with the CLIP model rather than a comprehensive study of prompts engineering itself, hence the rating is not a perfect 10.",https://arxiv.org/pdf/2303.13440 -rapgen: an approach for fixing code inefficiencies in zero-shot,8,"The abstract describes a method called Retrieval-Augmented Prompt Generation (RAPGen) that involves the construction and utilization of prompts to fix performance issues in code. 
Although it specifically targets performance bugs and uses a pre-constructed knowledge-base intended for this purpose, the basic principles of constructing and using prompts for a language model are at the core of both this bug-fixing application and prompt engineering in general. Therefore, this paper is highly relevant to the study of prompt engineering because it explores a novel, prompt-based method to interact with a language model to solve a specific problem.",http://arxiv.org/pdf/2306.17077 -clipn for zero-shot ood detection: teaching clip to say no,8,"The abstract reveals that the study involves designing a 'learnable no prompt' and a 'no text encoder' to capture negation semantics within images, which is directly related to prompt engineering as it focuses on developing prompts that enable a language-image model to understand and respond with negation, a nuanced language feature. This development aligns with engineering prompts that can enhance model performance in specific tasks, such as OOD detection in this case. Although the emphasis is on OOD detection rather than on prompt engineering itself, the methodology is highly relevant to the study of prompt engineering techniques.",https://arxiv.org/pdf/2308.12213 -zero-shot information extraction for clinical meta-analysis using large language models,8,"The abstract describes a study that employs large language models for zero-shot prompt-based information extraction in the medical field, which is directly related to the concept of prompt engineering. The investigation of zero-shot performance implicates the design and structuring of prompts to elicit accurate information from language models without any training examples, which is a subset of prompt engineering. While the study focuses on a specialized application in clinical meta-analysis rather than a broad systematic review of hard prefix prompts, it does contribute to the overall knowledge of prompt engineering effectiveness and challenges. Therefore, the relevance is high, but not absolute given the specialized context.",https://aclanthology.org/2023.bionlp-1.37.pdf -towards realistic zero-shot classification via self structural semantic alignment,7,"The relevance of the text to prompt engineering is moderate to high. The paper discusses a Self Structural Semantic Alignment (S^3A) framework that involves generating discriminative prompts using large language models, which is directly related to the field of prompt engineering. The fact that the S^3A framework includes a component where prompts are generated to discern confusing candidates demonstrates the application of prompt engineering in the paper. However, the overarching goal of the paper is zero-shot classification using Vision Language Models, and prompt engineering is only one aspect of the complex methodology being proposed. The rating is not higher because the main focus is not solely on prompt engineering; instead, it's a part of a larger framework designed for a specific application in machine learning.",https://arxiv.org/pdf/2308.12960 -zero-shot relation triple extraction with prompts for low-resource languages,8,"The study directly deals with prompt engineering as it involves creating and using prompts to guide a language model for relation extraction. The work focuses on zero-shot learning for low-resource languages, specifically using prompts to generate structured texts that facilitate the extraction of relation triplets. The structured relation prompt template mentioned also indicates a direct manipulation of prompts to improve model performance.
However, the use of the term 'hard prefix prompts' is not specifically mentioned, so the study may not align perfectly with a systematic review on hard prefix prompts but still is highly relevant to the field of prompt engineering.",https://www.mdpi.com/2076-3417/13/7/4636/pdf?version=1681110517 -instance needs more care: rewriting prompts for instances yields better zero-shot performance,9,"The abstract describes a study that directly involves prompt engineering, focusing on improving large language model (LLM) performance in zero-shot tasks by customizing prompts for individual test instances. The approach aligns closely with prompt engineering as it involves the strategic rewriting of prompts to enhance model understanding and performance, which is central to the study of prompt engineering. The high relevance is due to the proposed method's focus on the construction and optimization of prompts for better task execution by LLMs, although the study seems to be more practical and application-oriented rather than theoretical, as implied by the term 'systematic review' in the original query.",https://arxiv.org/pdf/2310.02107 -zyn: zero-shot reward models with yes-no questions,8,"The abstract describes a method of using yes-no questions as prompts to guide the behavior of a language model without additional labeled data, which is highly relevant to prompt engineering. It addresses the use of prompts to achieve zero-shot learning and align a model's output with user preferences, which are core areas of interest in the study of prompts. However, it is not focused specifically on 'hard prefixes,' but on a broader application of prompts, so the rating is not a full 10.",https://arxiv.org/pdf/2308.06385 -random word data augmentation with clip for zero-shot anomaly detection,8,"The paper presents a method that uses CLIP, a visual-language model, and involves prompt-guided classification which is clearly related to prompt engineering. Although the focus is on zero-shot anomaly detection and data augmentation, the use of prompts to guide the CLIP model's text encoder for generating data brings it within the domain of prompt engineering studies. The prompts are crucial for the generation of text embeddings which are subsequently used to train the anomaly detection model, significantly impacting the performance of the system. The paper does not focus on 'hard prefix prompts' specifically, so it may not align completely with a comprehensive review of that exact topic, but it certainly provides relevant information about prompt usage in the context of AI-powered anomaly detection.",https://arxiv.org/pdf/2308.11119 -model-generated pretraining signals improves zero-shot generalization of text-to-text transformers,7,"The paper is relevant to prompt engineering, particularly in the exploration of training strategies that could impact how effectively models respond to prompts. Although the main focus is on zero-shot generalization of text-to-text Transformers and pretraining strategies (e.g., using model-generated signals), the fact that it includes prompt-finetuning on a mixture of NLP tasks indicates relevance. The creation of METRO-T0, which competes with state-of-the-art models on prompted NLP benchmarks, underscores the potential impact of pretraining on prompt-based tasks. 
However, the paper does not seem to focus specifically on 'hard prefix prompts' but rather on a broader approach to pretraining and finetuning.",http://arxiv.org/pdf/2305.12567 -global constraints with prompting for zero-shot event argument classification,9,"The abstract describes a novel approach that leverages prompting techniques, specifically prefix prompts, in the context of event argument classification which is highly relevant to prompt engineering. The study's focus on how prompts can be used to improve performance in a zero-shot learning scenario indicates a significant contribution to the area of natural language processing related to prompt engineering. Although the work is not solely about hard prefix prompts in general, the application and development of new prompt templates for a specific task align closely with prompt engineering studies. The only reason it does not receive a full 10 is that it does not address a 'comprehensive systematic review' on prompts but rather presents a specific applied use-case of prompt engineering.",http://arxiv.org/pdf/2302.04459 -large language models are frame-level directors for zero-shot text-to-video generation,7,"The provided abstract discusses the use of large language models (LLMs) to generate frame-by-frame descriptions for text-to-video generation, which is relevant to prompt engineering. While the primary focus seems to be on video generation, the role of LLMs in interpreting and directing user prompts aligns with the study of designing and improving prompts to achieve specific outcomes. The framework's ability to translate user prompts into separate and temporally consistent frame prompts demonstrates an application of prompt engineering techniques. Therefore, the approach of dissecting abstract prompts into frame-level instructions can be viewed as a form of prompt engineering. The rating is not a full 10 because the abstract does not explicitly focus on the study of prompt engineering in general but rather its application within a specific context of video generation.",http://arxiv.org/pdf/2305.14330 -sc vall-e: style-controllable zero-shot text to speech synthesizer,7,"The title of the study 'SC VALL-E: Style-Controllable Zero-Shot Text to Speech Synthesizer' indicates a research focus on text to speech (TTS) synthesis with style control, which is tangentially relevant to prompt engineering. Although prompt engineering typically involves refining input prompts to achieve better performance in language models, the abstract describes a system that takes text and prompt audio as input to control speech attributes like emotion and pitch. This relates to a form of prompt engineering where the prompt is not just textual but also auditory. The mention of 'tokens in the style embedding matrix' also suggests a relationship with prompt engineering as it implies the manipulation of specific elements to guide the model's output. However, the primary focus on TTS synthesis and lack of explicit discussion on prompt engineering in language models warrants a rating that isn't at the highest relevance.",https://arxiv.org/pdf/2307.10550 -applenet: visual attention parameterized prompt learning for few-shot remote sensing image generalization using clip,7,"The provided abstract demonstrates relevance to prompt engineering as it discusses the development of a novel approach to prompt learning, which is central to adapting language models to specific tasks. 
The Visual Attention Parameterized Prompts Learning Network (APPLeNet) incorporates visual tokens combined with textual tokens, indicating that it deals with the intersection of language (through prompts) and vision, which is a component of prompt engineering. Additionally, the TLDR section reinforces the focus on prompt learning strategies. However, the application is specifically for remote sensing image generalization, which is a niche area within the broader scope of prompt engineering studies. Hence, the rating is not a full 10, because while it does contribute to the field, it does so in a specific context rather than addressing hard prefix prompts in a broad sense.",https://arxiv.org/pdf/2304.05995 -schema-aware reference as prompt improves data-efficient relational triple and event extraction,9,"The abstract presents research on a novel approach for prompt-based information extraction using pre-trained language models, which directly relates to the study of engineering prompts for better performance in language understanding tasks. As the study introduces a schema-aware mechanism to improve the efficiency of prompts by leveraging global training data and knowledge, it is highly relevant to the concept of 'hard prefix prompts' in the prompt engineering field. The approach is designed to overcome the semantic gap and representation learning limitations, which are critical considerations in prompt engineering. The only reason it does not receive a 10 is because the abstract does not explicitly mention 'hard prefix prompts', but the content is otherwise highly relevant.",http://arxiv.org/pdf/2210.10709 -prompt combines paraphrase: teaching pre-trained models to understand rare biomedical words,8,"The abstract describes an approach to prompt-based fine-tuning tailored towards the biomedical domain, which is relevant to the field of prompt engineering. It focuses on helping models learn and understand rare biomedical terminology, a challenge unique to this specialized area. The approach is directly related to improving the capabilities of pre-trained models with prompt engineering in a specific and practical instance, which can be beneficial for the broader study of prompts in different contexts. However, the abstract does not discuss 'hard prefix prompts' specifically, which may slightly reduce its relevance to the precise topic of a systematic review on such prompts. Therefore, while it is highly relevant to prompt engineering overall, it is not a perfect match for the subject of 'hard prefix prompts', which is why the rating is not a perfect 10.",http://arxiv.org/pdf/2209.06453 -domain prompt learning for efficiently adapting clip to unseen domains,9,"The abstract describes Domain Prompt Learning (DPL) as a novel approach for domain inference through the generation of conditional prompts. This is highly relevant to prompt engineering as it explicitly deals with the creation of prompts to improve the performance of a foundation model in domain generalization. The approach's focus on prompt generation and its impact on model accuracy makes it a significant contribution to the field of prompt engineering.",https://www.jstage.jst.go.jp/article/tjsai/38/6/38_38-6_B-MC2/_pdf -feature normalization and cartography-based demonstrations for prompt-based fine-tuning on emotion-related tasks,8,"The relevance to prompt engineering is high because the paper discusses a novel approach to prompt-based fine-tuning, which is a method within prompt engineering.
It focuses on improving the performance of language models on NLP tasks through feature normalization and the introduction of training dynamics to select informative samples for prompts. The paper's central theme revolves around optimizing the input context for prompt-based models, which is directly relevant to prompt engineering. However, it does not specifically address 'hard prefix prompts,' but rather the broader concept of prompt-based fine-tuning. Hence the reasoning for not giving a full score of 10.",https://ojs.aaai.org/index.php/AAAI/article/download/26514/26286 -few shot learning approaches to essay scoring,8,"The abstract provided discusses few-shot learning methods, specifically the use of a prompt-based few-shot learning method (PET) in the context of automated essay scoring. Although the primary focus is on AES, the implementation of prompt-based learning is highly relevant to the study of prompt engineering, as PET is a methodology that relies on engineering prompts to improve model performance with limited training data. Therefore, the study is substantially relevant to prompt engineering, specifically within the field of NLP and machine learning. The deduction in the rating arises because the prompt engineering for AES may not cover the entire scope of 'hard prefix prompts' but is nevertheless significant in demonstrating the application and impact of prompt engineering techniques.",https://caiac.pubpub.org/pub/gdf5n6gs/download/pdf -byoc: personalized few-shot classification with co-authored class descriptions,8,"The study presents a novel approach to few-shot text classification with the involvement of an LLM and interaction with users to generate class descriptions. This is highly relevant to prompt engineering, as the method relies on creating effective prompts that enable the LLM to categorize texts with minimal training data. Although the research focuses specifically on text classification and user interaction for class description generation, rather than hard prefix prompts exclusively, the process of prompt construction and its role in model performance is central to the field of prompt engineering. Therefore, the study contributes valuable insights to prompt engineering by exploring interactive ways to enhance LLM understanding and classification accuracy.",https://arxiv.org/pdf/2310.06111 -hard sample aware prompt-tuning,9,"The provided abstract describes research directly related to prompt-tuning, specifically addressing challenges in differentiating between informative hard samples and misleading samples during few-shot learning for NLP tasks. The relevance to prompt engineering is high, considering that the study introduces a 'Hard Sample Aware Prompt-Tuning framework (HardPT)' to improve the effectiveness of prompts in machine learning models by using advanced techniques such as reinforcement learning and contrastive learning. These methodologies directly contribute to the field of prompt engineering by enhancing the model's ability to learn from limited data. The only reason for not giving a perfect score is the focus on 'hard sample' differentiation may be considered a specific subset within the broader domain of prompt engineering.",https://aclanthology.org/2023.acl-long.690.pdf -voucher abuse detection with prompt-based fine-tuning on graph neural networks,8,"The study presents a novel application of prompt-based fine-tuning, albeit in the domain of graph neural networks for voucher abuse detection rather than natural language processing. 
The focus on designing a prompting function to better align the pre-training and fine-tuning tasks shows relevance to prompt engineering, as it involves creating effective prompts to improve machine learning models’ performance. The improvement in performance with this method demonstrates the potential effectiveness of prompt engineering strategies in various domains, which is relevant for the broader field of study. However, the specificity to graph neural networks slightly reduces its direct applicability to studies focused exclusively on text-based prompt engineering.",https://dl.acm.org/doi/pdf/10.1145/3583780.3615505 -stabilized in-context learning with pre-trained language models for few shot dialogue state tracking,8,"The study addresses designing prompts for complex tasks like dialogue state tracking (DST) and discusses techniques to stabilize in-context learning performance with pre-trained language models. As prompt engineering involves both the creation of effective prompts and the stability of model performance when using those prompts, this study is highly relevant to the field. However, it specifically focuses on few-shot learning techniques and dialogue tasks, which may not fully cover the broad spectrum of prompt engineering topics such as hard prefix prompts. Thus, it does not merit a perfect score, but it is still significantly pertinent.",http://arxiv.org/pdf/2302.05932 -emotionprompt: leveraging psychology for large language models enhancement via emotional stimulus,8,"The presented abstract is highly relevant to prompt engineering, as it specifically addresses the enhancement of large language models (LLMs) through 'EmotionPrompt', which is essentially an innovative technique in prompt engineering involving emotional stimuli. Although the focus on 'hard prefix prompts' is not directly mentioned, the research could be considered adjacent or complementary due to its emphasis on improving the interaction between humans and LLMs by refining the way prompts are engineered. Hence, the relevance to prompt engineering is significant, warranting a high rating. Nonetheless, the specificity to 'hard prefix prompts' is not clearly stated, which is why the rating is not a full 10.",https://arxiv.org/pdf/2307.11760 -scone: benchmarking negation reasoning in language models with fine-tuning and in-context learning,7,"The abstract describes a study focusing on negation reasoning in language models, particularly in the context of NLI (Natural Language Inference) and sentence completion tasks. Although the study is not directly about 'hard prefix prompts', prompt engineering is inherent in the design of tasks for language models to assess their abilities. The construction of the ScoNe-NLG and the insights from testing different prompt strategies with InstructGPT are relevant to prompt engineering, as they can inform how prompts can be optimized for better model performance, especially in handling negations. Therefore, the study is moderately relevant to prompt engineering, even if the primary focus is not on prompt construction itself.",http://arxiv.org/pdf/2305.19426 -enabling classifiers to make judgements explicitly aligned with human values,9,"The abstract describes a study that is highly relevant to prompt engineering. It discusses how prompt-based few-shot learning is used to generate training data from large-scale language models, which is a key aspect of prompt engineering. 
The focus on value alignment and the construction of classifiers based on explicit human input also reflects on the prompt's ability to direct model behavior in a specific way, showcasing an advanced application of prompt engineering. The only reason it doesn't receive a perfect score is that it does not exclusively deal with 'hard prefix prompts', which the study request specifically asks for, but addresses a broader topic of prompt-based few-shot learning and classifier fine-tuning.",http://arxiv.org/pdf/2210.07652 -bits of grass: does gpt already know how to write like whitman?,7,"The study is relevant to prompt engineering insofar as it examines how generative language models like GPT-3.5 and GPT-4 respond to zero-shot and many-shot prompts without fine-tuning. It evaluates the model's ability to generate poetry in a specific style, which is closely related to the effectiveness of the prompts used. It does not, however, specifically address 'hard prefix prompts,' but rather the broader concept of prompt effectiveness in generating author-specific language patterns. Therefore, the relevance is high but not entirely focused on the specific aspect of 'hard prefix prompts'.",http://arxiv.org/pdf/2305.11064 -do prompts solve nlp tasks using natural language?,9,"The given title and abstract are highly relevant to prompt engineering as they discuss the effectiveness of different types of prompts in NLP tasks, a core issue in the study of prompt engineering. The research specifically evaluates human-designed prompts, schema prompts, and null prompts, which are directly related to the process of engineering and optimizing prompts for language models. However, it might not be a 'comprehensive systematic review' as the prompt specifies, which is why it doesn't receive a full 10 rating.",http://arxiv.org/pdf/2203.00902 -bertnet: harvesting knowledge graphs with arbitrary relations from pretrained language models,7,"The research is highly relevant to prompt engineering as it involves using prompts to interrogate pretrained language models for extracting knowledge graph relationships. While the study does not focus on 'hard prefix prompts' specifically, the concept of designing prompts to elicit specific types of knowledge from language models is central to prompt engineering. Therefore, the use of prompts to define relations and the subsequent extraction process aligns with studying the effectiveness and methodology of prompt engineering, despite not directly addressing the systematic review topic on 'hard prefix prompts'.",https://aclanthology.org/2023.findings-acl.309.pdf -learning disentangled prompts for compositional image synthesis,9,"The abstract describes a study highly relevant to prompt engineering, focusing on a specific application in image synthesis. The research introduces a framework for learning disentangled prompts that separate semantic and domain information, which is a concept closely associated with constructing effective prompts in generative models. The ability to control these aspects and the application to zero-shot domain adaptation show a direct relevance to the field of prompt engineering. 
However, the focus is specific to image synthesis rather than a broad range of applications or a purely theoretical exploration, hence the rating is not a full 10.",http://arxiv.org/pdf/2306.00763 -language models as black-box optimizers for vision-language models,9,"The provided abstract describes research into a novel fine-tuning approach for vision-language models (VLMs) using natural language prompts, which is highly relevant to prompt engineering. The study's focus on refining prompts using large language models and without requiring white-box access aligns with the core principles of prompt engineering. The research advances the understanding of how effective prompts can be generated and optimized, which is a fundamental aspect of prompt engineering. The deduction of one point is due to the specificity of the application to vision-language models and not to the broader spectrum of prompt engineering, but it still remains a significant contribution to the field.",https://arxiv.org/pdf/2309.05950 -weak supervision for question type detection with large language models,8,"The study is highly relevant to prompt engineering as it investigates the use of rules as an alternative to manual prompts for leveraging large pre-trained language models in a specific NLP task, which is question type detection in dialogue. This aligns with prompt engineering by exploring how to effectively communicate with LLMs to produce desired outputs. The systematic review aspect is not directly mentioned, but given that the work compares different models and addresses the design of prompts versus rules, it reflects an understanding of the prompt engineering landscape, which is essential for a systematic review.",https://hal.science/hal-03786135/document -automatic data transformation using large language model: an experimental study on building energy data,8,"The study presents a framework that includes a prompt generator for large language models, which is highly relevant to the field of prompt engineering. The iterative prompt optimization mechanism for flaw detection aligns well with advanced prompt engineering techniques. Although the focus is on building energy data and SQL code transformation, the core concept of utilizing LLMs with a prompt-based interface has broad implications for prompt engineering. The study emphasizes the integration of domain knowledge and adaptive learning, which are crucial components of prompt engineering. The reason for not rating it a full 10 is that the primary application is data transformation rather than a broad analysis of 'hard prefix prompts' in general.",https://arxiv.org/pdf/2309.01957 -leveraging vision-language foundation models for fine-grained downstream tasks,7,"The abstract mentions developing a multitask fine-tuning strategy based on a positive/negative prompt formulation to improve the performance of vision-language foundation models on fine-grained attribute detection and localization tasks. This indicates a utilization of prompt engineering for improving model accuracy on specific tasks. While it is not specifically about 'hard prefix prompts' which could be more related to text-based tasks, the concept of using prompt strategies to finetune models, even in the vision-language domain, is related to the broader field of prompt engineering. 
Hence, the relevance is moderately high but not entirely direct with respect to the specific topic of hard prefix prompts.",https://arxiv.org/pdf/2307.06795 -towards expert systems for improved customer services using chatgpt as an inference engine,8,"The abstract indicates that the paper discusses an iterative procedure that involves prompt engineering as part of the process to develop ChatGPT-powered expert systems for customer services. Since it addresses the design of descriptive knowledge and few-shot prompts, which are key components of prompt engineering for AI models, it is relevant to the study of prompt engineering. The relevance is not at the maximum since the abstract suggests that the paper covers a broader range of topics within the AI application in customer service, and prompt engineering is only one part of the study.",https://rgu-repository.worktribe.com/preview/1987218/EZENKWU%202023%20Towards%20expert%20systems%20%28AAM%29.pdf -ccprompt: counterfactual contrastive prompt-tuning for many-class classification,9,"The provided abstract relates to the development and analysis of a specific type of prompt-tuning approach named 'Counterfactual Contrastive Prompt-Tuning (CCPrompt)' which is highly relevant to the field of prompt engineering. Prompt engineering involves the design and optimization of prompts to improve the performance of neural language models on various tasks. The described CCPrompt method focuses on enhancing many-class classification by identifying contrastive attributes and using them to construct elaborate prompts, which is a direct application of prompt engineering techniques. The high relevance rating is supported by the abstract's discussion on the method's effectiveness for different NLP tasks and the use of prompts as a core element of the model. The rating is not a perfect 10 primarily because it does not cover a 'systematic review' of hard prefix prompts but instead introduces a novel approach within prompt engineering.",https://arxiv.org/pdf/2211.05987 -what does a platypus look like? generating customized prompts for zero-shot image classification,8,"The abstract describes research on generating prompts to improve the performance of open-vocabulary image classification models, which is a significant contribution to the field of prompt engineering, particularly in the realm of zero-shot learning. While the study focuses on image classification and doesn't specifically mention 'hard prefix prompts', it does address the creation and optimization of prompts to improve task performance, which is relevant to the general area of prompt engineering.",https://arxiv.org/pdf/2209.03320 -better zero-shot reasoning with role-play prompting,9,"The study's theme is highly relevant to prompt engineering as it focuses on advanced techniques of prompting, specifically role-play prompting, and its impact on the performance of large language models (LLMs). Prompt engineering is crucial for the effective utilization of LLMs, and this research delves into the significant aspect of how different prompting methods, like role-play, can enhance a model's reasoning abilities in zero-shot scenarios across a variety of benchmarks. 
Although the study is not specifically about 'hard prefix prompts,' the broader category of prompt engineering still applies, thus the high relevance rating.",https://arxiv.org/pdf/2308.07702
-zero-shot slot filling with slot-prefix prompting and attention relationship descriptor,8,"The described paper introduces a novel prompting scheme specifically designed for zero-shot slot filling, which is directly related to prompt engineering. Prompt engineering involves creating effective prompts to guide models' behavior without extensive training, and this paper's approach to including learnable tokens and slot names fits within that scope. The use of attention values to enhance the prompts further ties it to advancements in the methodology of how prompts are constructed and their relationship to the model's attention mechanisms. The rating is not a perfect 10 because the paper is more focused on slot filling and attention features rather than a broad study on prompt engineering, but it still offers significant insights into the field.",https://ojs.aaai.org/index.php/AAAI/article/download/26566/26338
-distilling hypernymy relations from language models: on the effectiveness of zero-shot taxonomy induction,8,"The study is highly relevant to prompt engineering as it discusses the extraction of structured knowledge from language models via prompting techniques, which is a core aspect of prompt engineering. Although it specifically focuses on taxonomy learning, prompt engineering is central to the methodology, making the paper relevant to the field. However, the exact match for 'hard prefix prompts' is not indicated, so the paper might not address that specific aspect of prompting, hence the rating is not a full 10.",https://aclanthology.org/2022.starsem-1.13.pdf
-zero-shot next-item recommendation using large pretrained language models,8,"The abstract describes the process of using prompting strategies for LLMs to conduct next-item recommendations, which is directly related to prompt engineering. The study details a prompting approach designed specifically to improve the performance of LLMs in a zero-shot recommendation task. While the focus is on the application of prompts in recommender systems, rather than on the study of 'hard prefix prompts' more generally, it contributes valuable insights into how prompts can be engineered and utilized to enhance the capabilities of LLMs in a practical scenario. This aligns with the broader field of prompt engineering, hence the high relevance rating.",http://arxiv.org/pdf/2304.03153
-selfcheck: using llms to zero-shot check their own step-by-step reasoning,7,"While the study described in the abstract is not directly related to prompt engineering in terms of developing or enhancing hard prefix prompts, it does address an important aspect of how LLMs (Large Language Models) can be improved in processing and verifying their reasoning, which can indirectly benefit prompt engineering. The ability of an LLM to self-check its reasoning is valuable for prompt engineering as it can lead to more effective prompting strategies that rely on the model's self-assessment of its reasoning process. Specifically, if an LLM can recognize errors in its own reasoning and adjust accordingly, this can inform the development of more advanced prompting techniques.
The study is relevant to the field of prompt engineering, but it's not a direct study on prompt engineering itself, hence the rating of 7.",https://arxiv.org/pdf/2308.00436
-c3: zero-shot text-to-sql with chatgpt,8,"The paper is highly relevant to prompt engineering because it focuses on a method that involves 'Clear Prompting', which is essentially a form of prompt engineering. The method strategically crafts inputs to guide the ChatGPT model to generate correct SQL queries without prior training (a zero-shot capability). Although the main focus is on Text-to-SQL, the principles and methods applied are directly related to prompt engineering as they deal with how to effectively prompt a language model to achieve a specific task.",https://arxiv.org/pdf/2307.07306
-tab-cot: zero-shot tabular chain of thought,8,"The abstract describes Tab-CoT, a novel prompting method that enhances the structure and explicit detailing of the reasoning process for complex tasks in a tabular format. This is highly relevant to prompt engineering, particularly as it relates to refining the interventions used to elicit specific and structured responses from AI systems. However, it is specifically tailored for tabular data and reasoning tasks, so it might not cover all aspects of prompt engineering study, which can include other types of data and tasks. Hence the rating is not a perfect 10.",http://arxiv.org/pdf/2305.17812
-the benefits of label-description training for zero-shot text classification,7,"The abstract describes a method to improve zero-shot text classification accuracies by using data that describes labels, which aligns with prompt engineering efforts that involve describing tasks or labels to better inform the model's predictions. Although it doesn't explicitly address 'hard prefix prompts', the concept of using label descriptions can be relevant to designing more effective prompts. Thus, the relevance to prompt engineering is substantial but not direct, hence the rating of 7.",http://arxiv.org/pdf/2305.02239
-self-icl: zero-shot in-context learning with self-generated demonstrations,7,"The abstract describes a novel approach to in-context learning (ICL) with language models, which is indeed relevant to the study of prompt engineering as it focuses on generating and utilizing prompts to improve the performance of models without the need for additional demonstrations. Self-ICL generates pseudo-inputs and pseudo-labels as part of the prompting process, which aligns with the techniques used in prompt engineering. The relevance is not a perfect 10 because the study doesn't specifically address 'hard prefix prompts' as mentioned in the original query, but it is still highly relevant to the broader field of prompt engineering and the design of prompting strategies to improve language model outcomes in a zero-shot setting.",http://arxiv.org/pdf/2305.15035
-jack-ryder at semeval-2023 task 5: zero-shot clickbait spoiling by rephrasing titles as questions,7,"The paper addresses the use of pre-trained models to manipulate and interact with prompts by rephrasing clickbait titles into questions to optimize the models' response towards the task of clickbait spoiling. Although not directly focusing on 'hard prefix prompts', this study is relevant to the broader field of prompt engineering, as it involves the strategic alteration of prompts to suit the capabilities of pre-trained QA models and to achieve specific outcomes without task-specific training.
The rephrasing technique and optimization strategy for better alignment with pre-trained models' strengths are of interest in prompt engineering research.",https://aclanthology.org/2023.semeval-1.150.pdf -anovl: adapting vision-language models for unified zero-shot anomaly localization,7,"The abstract discusses the adaptation of CLIP models for zero-shot anomaly localization which involves designing specialized prompts for text supervision, a key aspect of prompt engineering. The introduction of a unified domain-aware contrastive state prompting template is directly related to the study of how prompts influence model performance, which is a subset of prompt engineering. The focus on aligning text with specific visual representations indicates relevance as it showcases a practical application of prompt engineering in the field of computer vision and anomaly detection. However, the paper's primary focus is on anomaly localization rather than prompt engineering itself, which is why the rating is not closer to 10.",https://arxiv.org/pdf/2308.15939 -instruction tuning with lexicons for zero-shot style classification,7,"The abstract discusses the use of lexicons for instructing language models in style classification without the need for fine-tuning. This study is relevant to prompt engineering, as it explores how specific language structures (style lexicons) can be used to guide pre-trained language models to perform new tasks without additional training. The concept of using lexical cues fits within the larger framework of prompt engineering, which seeks to optimize prompts to elicit desired outputs from language models. However, the focus on 'style classification' and 'zero-shot performance' is slightly tangential to prompt engineering's central theme of crafting and testing various prompts, hence the rating is not a full 10.",http://arxiv.org/pdf/2305.14592 -the art of socratic questioning: zero-shot multimodal reasoning with recursive thinking and self-questioning,7,"The study introduces Socratic Questioning as a method to improve problem-solving in large-scale language models, which is closely related to prompt engineering as it informs how prompts can be structured to facilitate more complex reasoning in AI. The emphasis on recursive thinking and self-questioning aligns with designing prompts that elicit more detailed and nuanced responses. However, it slightly diverges from the specific topic of 'hard prefix prompts' as it discusses a broader technique rather than focusing solely on the effects of hard prefixes in prompts.",http://arxiv.org/pdf/2305.14999 -zero-shot refinement of buildings' segmentation models using sam,8,"The abstract discusses the adaptation of foundation models using prompting strategies, which is relevant to prompt engineering. Specifically, it mentions the use of prompts to augment a Segment Anything Model (SAM) with recognition abilities. This is a direct application of prompt engineering to improve the performance of AI models. 
The focus is not on a 'hard prefix prompt' as outlined in the initial request, which would fit the definition of prompt engineering more closely, but the use of prompts to refine the SAM model's capabilities suggests a strong relevance to the field.",https://arxiv.org/pdf/2310.01845 -mm-react: prompting chatgpt for multimodal reasoning and action,7,"The title and abstract of the study discuss 'MM-REACT,' a system designed to enhance the capabilities of language models like ChatGPT by integrating them with vision experts for multimodal reasoning and action. The relevance to prompt engineering study is significant given that MM-REACT involves designing textual prompts that can facilitate multimodal information processing. Although the study does not exclusively focus on 'hard prefix prompts,' the concept of textual prompt design lies at the core of prompt engineering, hence the relevance. This system demonstrates an application of prompt engineering principles in the context of multimodal reasoning, which is a subset of the broader field of prompt engineering.",http://arxiv.org/pdf/2303.11381 -the art of prompting: event detection based on type specific prompts,9,"The study is highly relevant to prompt engineering as it explores the effectiveness of type-specific prompts for event detection in various scenarios, including few-shot and zero-shot learning. It directly addresses how the construction and application of prompts can affect model performance, a crucial aspect of prompt engineering.",http://arxiv.org/pdf/2204.07241 -clip also understands text: prompting clip for phrase understanding,8,"The paper explores the use of the text encoder of CLIP for phrase understanding, which relates directly to prompt engineering as it involves designing effective prompts to leverage the model's capabilities. The comparison with other language models like BERT underlines the importance of how prompts are formulated in model performance. This research contributes to the understanding of how different prompting strategies can impact the outcome of language understanding tasks. Although it doesn't focus on 'hard prefix prompts' as specified, the study is highly relevant to the broader field of prompt engineering and how prompts can be optimized for model understanding.",http://arxiv.org/pdf/2210.05836 -on the evaluations of chatgpt and emotion-enhanced prompting for mental health analysis,8,"The study evaluates how different prompting strategies, specifically those with emotional cues, affect the performance of a large language model like ChatGPT in the context of mental health analysis. Since prompt engineering involves the design of inputs that can effectively guide AI models to produce desired outputs, the research's focus on the impact of prompts enhanced with emotional information is highly relevant to the field of prompt engineering. The study's analysis of the efficacy of these prompts directly contributes to understanding and optimizing prompt design, which is a central concern in prompt engineering. However, the score is not a perfect 10 because the study is not exclusively dedicated to prompt engineering — it also delves into the broader scope of mental health analysis performance of language models.",https://arxiv.org/pdf/2304.03347 -reasoning implicit sentiment with chain-of-thought prompting,8,"The study addresses advanced prompt engineering techniques for implicit sentiment analysis (ISA) using chain-of-thought prompting, introducing a Three-hop Reasoning (THOR) framework. 
This is highly relevant to the field as it demonstrates prompt engineering's applicability in complex reasoning tasks and shows how to structure prompts to induce reasoning steps. The relevance is not rated a perfect 10 since the study focuses more on the reasoning aspect than on prompt engineering itself, but it is nonetheless a significant contribution to the area of prompt construction and optimization.",http://arxiv.org/pdf/2305.11255
-pearl: prompting large language models to plan and execute actions over long documents,9,"The study introduces PEARL, a framework specifically designed for prompting large language models (LLMs) that enhances their capability to process and reason over lengthy texts. This is highly relevant to prompt engineering as it directly tackles challenges in designing prompts that assist LLMs in managing complex tasks such as decomposing questions, planning, and executing a sequence of actions to generate accurate responses. The successful application of PEARL over challenging datasets and its comparison with other prompting methods like zero-shot and chain-of-thought demonstrates a significant advancement in the field of prompt engineering, particularly for tasks involving extensive reasoning. It only falls short of a perfect rating because it addresses a specific subset of prompt engineering focused on long documents rather than the entire breadth of prompt engineering.",http://arxiv.org/pdf/2305.14564
-multimodal procedural planning via dual text-image prompting,9,"The provided abstract discusses a dual-modality prompting method involving text and image prompts to guide procedural planning, which is highly relevant to the field of prompt engineering since it directly deals with how prompts can be engineered and optimized for multi-modal tasks. The method described leverages the capabilities of large language models and text-to-image generation, which are both core technologies relevant to prompt engineering. The relevance isn't a perfect 10 due to the specific focus on the generation of text-image pairs for task completion, rather than on the hard prefix prompts mentioned in the initial query, but the study still contributes significantly to the broader topic of how prompts can be structured and used effectively.",http://arxiv.org/pdf/2305.01795
-federated prompting and chain-of-thought reasoning for improving llms answering,7,"The study appears to address question handling and improving response accuracy in Large Language Models through techniques that could be considered part of prompt engineering, namely the Self-Consistency (SC) and Chain-of-Thought (CoT) techniques. Prompt engineering often involves strategies to enhance the model's understanding and output, and these techniques align with such goals. While the study does not directly mention 'hard prefix prompts', it engages with the broader area of prompts and their optimization, therefore the relevance is moderate to high.",http://arxiv.org/pdf/2304.13911
-code prompting: a neural symbolic method for complex reasoning in large language models,8,"The study is highly relevant to prompt engineering as it explores advanced prompting methods (code prompting) in the context of improving the performance of large language models in complex reasoning tasks. This directly pertains to the development and evaluation of new prompting techniques, which is a core aspect of prompt engineering.
The abstract indicates significant experimental work and analysis that can contribute to the field, such as comparing code prompting with the existing chain-of-thought (CoT) prompting. However, the study seems to focus on a specific type of prompting (neural symbolic prompting with code), rather than a comprehensive systematic review. Hence, the rating is not a full 10, but it's still high because of the clear relevance and potential impact on the study of prompting methods.",https://arxiv.org/pdf/2305.18507 -self-explanation prompting improves dialogue understanding in large language models,8,"The study focuses on a novel 'Self-Explanation' prompting strategy specifically designed to improve Large Language Models' (LLMs) understanding in task-oriented dialogues, which falls under the broader category of prompt engineering. Although it does not deal with 'hard prefix prompts' per se, the research is highly relevant to the field of prompt engineering because it explores new methods for improving the performance of LLMs in processing complex dialogue contexts. The relevance rating is not a full 10 because the study is not directly about 'hard prefix prompts,' but it is significant due to its contribution to the overarching goal of optimizing prompts to enhance model comprehension.",https://arxiv.org/pdf/2309.12940 -fixed input parameterization for efficient prompting,10,"The abstract provided discusses the Fixed Input Parameterization (FIP) problem in the context of prompt engineering and how it aims to make the use of fixed prompts more efficient by integrating them into the parameters of a Language Model (LM). This is highly relevant to prompt engineering study as it tackles the optimization of prompt usage, which is a core aspect of prompt engineering in language models. The efficiency improvements and the exploration of methodologies for FIP in specific tasks such as persona-dependent conversation, semantic parsing, and zero-shot learning with task instructions offer direct insights into prompt engineering. Therefore, the content of this abstract is directly related to the field of prompt engineering, addressing both the technical and application aspects of the topic.",https://aclanthology.org/2023.findings-acl.533.pdf -p5: plug-and-play persona prompting for personalized response selection,8,"The presented paper is highly relevant to prompt engineering due to its focus on using prompt sequences for personalized response selection in chatbots, which is a specific application of prompt engineering. The proposed method integrates the use of prompts to manage conversation flow based on persona, and it directly pertains to the engineering of prompts that help personalize chatbot responses. However, the paper is not exclusively about 'hard prefix prompts' (a term often related to the fixed instruction or text added to input data in language models to steer the response), which might have been implied in the phrase 'comprehensive systematic review on hard prefix prompts' in the original prompt. The paper focuses on persona prompting, which is a subset of prompt engineering but does not represent a broad overview or systematic review of hard prefix prompts in general. 
Therefore, while very relevant, the rating is not a full 10.",https://arxiv.org/pdf/2310.06390 -prompting segmentation with sound is generalizable audio-visual source localizer,8,"The abstract describes the use of a novel 'encoder-prompt-decoder' paradigm which directly relates to prompt engineering, as it involves constructing Semantic-aware Audio Prompts (SAPs) to improve model performance. This approach aims to enable pre-trained models to focus on sounding objects and deal with data scarcity and varying distributions, both of which are significant concerns in prompt engineering. Although the study focuses specifically on the audio-visual domain and not directly on general prompt engineering methodologies, its innovative use of prompts to bridge the semantic gap between modalities indicates its relevance to the field of prompt engineering. Therefore, it receives a high relevance rating.",https://arxiv.org/pdf/2309.07929 -prompting strategies for citation classification,8,"The paper directly addresses prompt engineering by investigating the effectiveness of various prompting strategies for a specific NLP task – citation classification. This is highly relevant to the study of prompt engineering as it explores how different prompting methods can influence the performance of language models. Although it doesn't specifically mention 'hard prefix prompts', the mention of 'Fixed-prompt LM tuning' suggests it touches on the subject of static prompts, which could be related. The research's systematic approach to comparing these strategies and the inclusion of newly proposed methods indicate a substantial contribution to the understanding of how prompting affects language model performance, making it fairly relevant to the field of prompt engineering.",https://dl.acm.org/doi/pdf/10.1145/3583780.3615018 -can large language models transform computational social science?,8,"The research discussed in the title clearly has implications for prompt engineering, as it talks about using Large Language Models (LLMs) for Computational Social Science (CSS) tasks. The abstract mentions 'prompting best practices,' indicating that the study likely delves into how to formulate prompts to optimize LLM performance in CSS applications. While the study might not focus exclusively on 'hard prefix prompts' but rather on a broader range of prompting techniques, the findings would still be highly relevant to the field of prompt engineering since they contribute to understanding how to effectively employ prompts in complex analysis tasks, such as CSS. The relevance is not rated as a full 10 because the study’s primary focus seems to be on broad LLM application in CSS rather than focused on prompt engineering alone.",http://arxiv.org/pdf/2305.03514 -solving challenging math word problems using gpt-4 code interpreter with code-based self-verification,8,"The abstract describes a study focusing on the development of a prompting strategy (explicit code-based self-verification) to enhance the performance of the GPT-4 Code Interpreter in solving math problems. Although this study is centered on prompting methods, it is specifically tailored to mathematical reasoning and involves verification of the model's output. It is highly relevant to the field of prompt engineering in that it presents a novel approach to using prompts to improve the accuracy of a language model's responses. 
The reason for not giving a full score of 10 is that the study is particularly focused on math word problems, which is just one aspect of prompt engineering.",https://arxiv.org/pdf/2308.07921 -learning to decompose visual features with latent textual prompts,8,"The abstract provided discusses an innovation in prompt engineering, specifically within the domain of vision-language models. The study introduces Decomposed Feature Prompting (DeFo), which utilizes textual prompts as part of the learning process, aligning with the concept of prompt engineering. The relevance to prompt engineering is high because it directly involves the use of textual inputs to improve the feature extraction in a dual-model architecture. However, it does not address 'hard prefix prompts' specifically, which suggests that the content is more general in the realm of prompt engineering rather than focused on a comprehensive systematic review of hard prefix prompts.",http://arxiv.org/pdf/2210.04287 -xricl: cross-lingual retrieval-augmented in-context learning for cross-lingual text-to-sql semantic parsing,7,"The abstract describes a system (XRICL) that involves constructing prompts to improve cross-lingual Text-to-SQL semantic parsing, which is relevant to the field of prompt engineering as it deals with the creation and optimization of prompts for language models. However, the focus on retrieval-augmented in-context learning and the cross-lingual aspect means it is not entirely centered on 'hard prefix prompts,' which suggests a subset of prompt engineering focusing on rigid or inflexible prompts. The study still contributes valuable insights to the broader domain of prompt engineering, hence the moderately high relevance rating.",http://arxiv.org/pdf/2210.13693 -multidimensional evaluation for text style transfer using chatgpt,7,"The paper's relevance to prompt engineering study is moderate to high because it investigates the use of ChatGPT as an evaluator for text style transfer, which involves prompt engineering to some extent. Getting ChatGPT to perform a zero-shot evaluation entails designing prompts that effectively convey the evaluation task to the model. Therefore, the study indirectly contributes to understanding how different prompts affect the performance of large language models in generating or evaluating stylized text. However, the paper primarily focuses on the application of ChatGPT as an evaluator and correlates its performance with human judgments, rather than explicitly studying the hard prefix prompts or the mechanics of prompt construction, hence the rating is not a full 10.",http://arxiv.org/pdf/2304.13462 -yes but.. can chatgpt identify entities in historical documents?,7,"The abstract indicates that the study explores ChatGPT's ability to recognize entities within historical documents, specifically addressing the specificity of prompting, which is an integral aspect of prompt engineering. Although the core focus seems to be on entity recognition and classification, the mention of 'the specificity of prompting' suggests that the study does delve into how different prompts affect ChatGPT's performance in a task relevant to natural language processing. Therefore, while it is not entirely focused on 'prompt engineering' as a primary subject area, it is relevant due to its examination of prompts' effectiveness, which is a significant component of prompt engineering studies.",https://arxiv.org/pdf/2303.17322 -is chatgpt a good personality recognizer? 
a preliminary study,8,"The study is highly relevant to prompt engineering as it involves evaluating ChatGPT's abilities in a specific natural language processing task using various prompting strategies, including the 'level-oriented' strategy, which is a type of hard prompt engineering tailored to guide the AI's reasoning. Although the primary focus is on personality recognition, the methodology and implications of different prompting strategies, including zero-shot chain-of-thought, directly contribute to the knowledge and optimization of prompt engineering. Hence, the relevance rating is high but not maximum, as the study does not exclusively concentrate on prompt engineering but also includes the application of the derived prompts in various downstream tasks.",https://arxiv.org/pdf/2307.03952 -let's do a thought experiment: using counterfactuals to improve moral reasoning,8,"The provided abstract discusses a new prompting framework, 'Thought Experiments,' which involves the engineering of prompts to teach language models improved moral reasoning using counterfactuals. While the study itself is not directly focused on 'hard prefix prompts,' it is highly relevant to the field of prompt engineering, as it explores the design of specialized prompts to enhance the performance of language models in a specific type of reasoning task. Therefore, the relevance is quite high for those interested in the broader topic of how different prompting approaches can impact model performance. However, it doesn't address 'hard prefix prompts' explicitly, hence the rating is not a perfect 10.",http://arxiv.org/pdf/2306.14308 -improving zero-shot generalization and robustness of multi-modal models,8,"The study is highly relevant to prompt engineering as it explicitly addresses the issue of improving the performance of multi-modal models by refining how text prompts are used. The research investigates how ambiguity in text prompts can lead to a performance gap in zero-shot tasks and proposes a methodology to enhance the accuracy by leveraging semantic label hierarchies in prompts. While the study does not focus on 'hard prefix prompts' per se, it does contribute to the overall understanding of how prompt design influences model predictions, making it relevant to the field of prompt engineering.",https://arxiv.org/pdf/2212.01758 -precise zero-shot dense retrieval without relevance labels,7,"The relevance to prompt engineering is fairly high, as the abstract describes a process where a language model is prompted to generate a hypothetical document in a zero-shot context, which is clearly a form of prompt engineering. However, the focus of the study seems to be more on dense retrieval and encoding relevance rather than on the detailed study of prompt engineering or the effects of different prompting techniques. Thus, while relevant, the study may not be addressing prompt engineering in a direct or comprehensive manner as a primary focus.",http://arxiv.org/pdf/2212.10496 -seqzero: few-shot compositional semantic parsing with sequential prompts and zero-shot models,7,"The paper presents a novel approach in few-shot learning and semantic parsing, which directly relates to improving the performance of language models with limited data. Prompt engineering is an aspect of tuning language models to better interpret and respond to prompts. 
Since SeqZero involves creating sequential prompts that aid in generating outputs for sub-problems in semantic parsing, this study is relevant to prompt engineering as it pertains to the construction and optimization of prompts for improved model performance. However, the study's primary focus is not on the prompt engineering process itself, but rather on how prompts are utilized within a specific application of semantic parsing to achieve state-of-the-art results. Therefore, it is relevant, but not exclusively focused on the prompt engineering aspect.",https://arxiv.org/pdf/2205.07381 -rethinking the role of demonstrations: what makes in-context learning work?,8,"The presented paper is highly relevant to prompt engineering as it delves into the mechanics of in-context learning, which is a core aspect of prompt engineering for large language models. Understanding the role of demonstrations and the impact of various aspects of those demonstrations informs how prompts should be designed. While the paper does not directly address 'hard prefix prompts,' it does explore the components of demonstrations that influence a model's performance, which can be directly applied to the design and optimization of prompts (including hard prefixes) to improve model behavior. Therefore, the findings of this study are important for advancing the science of prompt engineering, though not exclusively focused on 'hard prefix prompts.'",https://aclanthology.org/2022.emnlp-main.759.pdf -a survey for in-context learning,7,"The survey deals with in-context learning (ICL), which is closely related to prompt engineering, as ICL often involves using prompts to deliver the training examples to language models. Although hard prefix prompts, which are more specific in their constructions, are not mentioned explicitly, prompting strategies in general are an integral part of ICL. The survey's focus on the broader aspects of prompting strategies makes it relevant to the field of prompt engineering. However, a more direct discussion on hard prefix prompts would be required to make the paper fully applicable to a comprehensive systematic review on that specific topic.",http://arxiv.org/pdf/2301.00234 -what can transformers learn in-context? a case study of simple function classes,7,"The abstract discusses 'in-context learning' which is a key aspect of prompt engineering as it deals with the ability of models to learn from the information provided in a prompt. The study's focus on how transformers can learn from in-context examples to perform tasks is relevant to understanding and improving prompt-based learning mechanisms, albeit it focuses more specifically on function classes rather than hard prefix prompts. It does not directly address prompt engineering as a systematic review but is certainly related to the broader category of how models respond to prompts. Therefore, it receives a high but not maximum relevance score.",https://arxiv.org/pdf/2208.01066 -what makes good in-context examples for gpt-3?,9,"The abstract describes a study focused on optimizing the selection of in-context examples for GPT-3's prompt generation, which is highly relevant to the field of prompt engineering. The research aims to improve GPT-3's performance by retrieving semantically-similar examples to the test query, which directly involves engineering better prompts for the model. The significant improvements reported in the benchmarks further underscore the relevance of this study to prompt engineering. 
The only reason it does not receive a perfect score is that it is focused on GPT-3, and prompt engineering can also involve other models or broader methodologies.",https://aclanthology.org/2022.deelio-1.10.pdf -developing prompts from large language model for extracting clinical information from pathology and ultrasound reports in breast cancer,9,"The abstract presents a focused application of prompt engineering to improve data extraction from medical records using a large language model, which is highly relevant to prompt engineering studies. The study evaluates the effectiveness of specialized prompts for the task and discusses their development cost and accuracy, providing concrete data about prompt engineering in a real-world context. It doesn't directly address 'hard prefix prompts', but it's substantially related to engineering prompts for specific purposes.",https://www.e-roj.org/upload/pdf/roj-2023-00633.pdf -swectrl-mini: a data-transparent transformer-based large language model for controllable text generation in swedish,8,"The relevance to prompt engineering study is high because the abstract describes the 'SweCTRL-Mini' model, which utilizes special tokens in generation prompts to control the genre of the generated text. This capability is directly related to prompt engineering, where prefixes or special tokens are crafted to steer the output of language models. While the abstract does not specifically focus on a 'systematic review on hard prefix prompts,' it does highlight the use of controlled prompts which is a significant aspect of prompt engineering. Therefore, the rating is slightly lowered because the paper does not explicitly cover a systematic review but is substantially related to the concept of hard prompts in controlling text generation.",http://arxiv.org/pdf/2304.13994 -optimizing continuous prompts for visual relationship detection by affix-tuning,7,"This abstract details a novel method involving affix-tuning transformers for optimizing visual relationship detection. While it does not explicitly use the term 'hard prefix prompts,' it does discuss the concept of 'affix-tuning,' which could be seen as a form of prompt engineering where a 'continuous task-specific vector' is optimized. This is somewhat relevant to prompt engineering as it relates to the training and utilization of model parameters in a task-specific manner. The approach of using 'prompt template' also indicates work in the direction of designing inputs that can influence model behavior, which is central to prompt engineering. However, the main focus appears to be on visual relationship detection rather than on the study or characterization of prompts (textual) in NLP tasks, hence not a perfect fit, but still relevant.",https://ieeexplore.ieee.org/ielx7/6287639/6514899/09815128.pdf -contextual transformer for offline meta reinforcement learning,8,"The presented abstract is relevant to prompt engineering as it discusses the use of prompts to improve sequence modeling-based offline reinforcement learning algorithms. The concept of prompt tuning is central to the study, and the introduction of the Contextual Meta Transformer (CMT) shows an innovative application of prompts in guiding the model towards desired outcomes and improving generalization on unseen tasks. The relevance is high since prompt engineering is explicitly mentioned and is a key part of the methodology. 
However, it focuses specifically on RL contexts and may not cover other aspects or domains of prompt engineering, hence the rating is not a full 10.",http://arxiv.org/pdf/2211.08016 -learning to compress prompts with gist tokens,9,"The abstract describes a method directly related to prompt engineering, focusing on the efficiency of using prompts with language models. The introduction of 'gisting' to compress prompts into 'gist' tokens falls within the field of prompt engineering as it aims to optimize the use of prompts in terms of computational resources. The mentioned benefits, such as compute efficiency, compression ratios, and minimal loss in output quality, are highly relevant to the study of prompt engineering. The relevance is not rated as a perfect 10 because the specific context of 'hard prefix prompts' is not directly addressed, but the overall subject is still substantially pertinent to the field.",https://arxiv.org/pdf/2304.08467 -zero-shot entity and tweet characterization with designed conditional prompts and contexts,8,"The study is highly relevant to prompt engineering as it involves the use of 'hard prefix prompts' which are a form of prompt construction. It explores the capabilities of GPT-2 in zero-shot settings, which is an important aspect of prompt engineering, particularly when it comes to designing prompts that guide the model to perform specific tasks without prior task-specific training. The focus on human psychology-inspired and logical conditional prefixes is directly related to engineering prompts to produce desired outputs. However, the research is not exclusively focused on the systematic review of hard prefix prompts but rather on the application of these prompts for a specific task, which is why it does not receive a full score.",http://arxiv.org/pdf/2204.08405 -instruction-vit: multi-modal prompts for instruction learning in vit,8,"The paper presents an application of prompt engineering in the context of visual transformers, focusing on multi-modal prompts for instruction learning, which is highly relevant to prompt engineering. Although it primarily discusses visual transformer models and their application to image classification tasks, the concept of using text or image prompts to improve model performance is directly connected to the field of prompt engineering. The review on 'hard prefix prompts' might have a different focus compared to multi-modal prompts in visual transformers, but both share the overarching theme of enhancing model capabilities through prompts. Hence, the relevance is high, although not exact, hence not a perfect score of 10.",http://arxiv.org/pdf/2305.00201 -clinical decision transformer: intended treatment recommendation through goal prompting,7,"The relevance of the study titled 'clinical decision transformer: intended treatment recommendation through goal prompting' to prompt engineering is moderately high. The concept of 'goal prompting' directly connects to the practice of designing prompts to achieve specific outputs in a natural language processing context. Although this paper is primarily focused on a medical application, the technique of formulating prompts to guide the decision-making output of an AI model is a key aspect of prompt engineering. The concept could potentially be applied to other areas in AI where prompt design is crucial. 
However, the specificity to clinical recommendations and the absence of a direct focus on hard prefix prompts or a broad range of prompt engineering applications slightly reduce its overall relevance.",http://arxiv.org/pdf/2302.00612 -adversarial transformer language models for contextual commonsense inference,8,"The paper discusses the use of both hard prompts (specific words) and soft prompts (virtual learnable templates) in the context of language model prompting to control the generation of commonsense assertions, which is directly related to prompt engineering. Although the paper's primary focus is on commonsense inference, the technique of 'hinting' as described involves engineering prompts to guide the language model, which is relevant to the study of prompt engineering.",http://arxiv.org/pdf/2302.05406 -prompt-based tuning of transformer models for multi-center medical image segmentation of head and neck cancer,7,"The paper describes the use of prompts in the form of 'learnable parameters' for fine-tuning pre-trained vision transformer models in medical image segmentation tasks, which is relevant to the concept of prompt engineering. This kind of study could potentially contribute to the field of prompt engineering as it explores how altering input prompts (in this case, learnable parameters) can adapt a model to new data. However, the focus here is on medical image segmentation and not on textual data or NLP models which are more common areas for prompt engineering. Thus, the relevance is significant but not entirely direct to studies narrowly focused on hard prefix prompts for NLP applications.",https://www.mdpi.com/2306-5354/10/7/879/pdf?version=1690208147 -prompt guided transformer for multi-task dense prediction,7,"The presented abstract describes a research paper regarding a model called Prompt Guided Transformer (PGT), which explicitly utilizes task-specific prompts within its architecture. The use of prompts is integral to the model's operation, making it highly relevant to studies on prompt engineering. However, it seems to focus more on parameter efficiency and architecture design for multi-task learning rather than the systematic review of 'hard prefix prompts' or broad prompt engineering strategies, hence the rating does not reach the maximum.",https://arxiv.org/pdf/2307.15362 -efficient model personalization in federated learning via client-specific prompt generation,8,"The abstract describes a methodology for personalizing machine learning models in a federated learning context using client-specific prompt generation. Although it does not explicitly mention 'hard prefix prompts', it is highly relevant to prompt engineering as it discusses the generation and adaptation of prompts to improve model performance on distributed client-specific data. This is a crucial aspect of prompt engineering, which typically involves optimizing inputs to pre-trained models to achieve better customization and efficiency. Therefore, the relevance of the paper to prompt engineering is high, although it may not directly focus on the specific subset of 'hard prefix prompts'.",https://arxiv.org/pdf/2308.15367 -kosmos-2.5: a multimodal literate model,8,"The abstract describes a model that uses task-specific prompts to achieve its multimodal literate capabilities, which is highly relevant to the study of prompt engineering. 
The ability to adapt the model for various text-intensive image understanding tasks with different prompts through supervised fine-tuning underscores the relevance of prompt engineering to the model's functionality. Although the main focus of Kosmos-2.5 is on machine reading of text-intensive images, the mention of flexible text representations and task-specific prompts indicates that prompt engineering is a significant component of the research. The rating is not a full 10 because the primary focus seems to be on the model's multimodal capabilities rather than exclusively on prompt engineering.",https://arxiv.org/pdf/2309.11419 -automated reading passage generation with openai's large language model,7,"The study is relevant to prompt engineering as it involves using 'carefully engineered prompts' to guide GPT-3 in generating reading passages that are appropriate for a specific educational level and style. The engineering aspect of the prompts plays a crucial role in the automated item generation process mentioned in the abstract, ensuring that the AI-generated text conforms to certain standards and matches original content in terms of structure and difficulty. While the focus is on AIG and not specifically on the study of 'hard prefix prompts,' the research contributes valuable insights into how tailored prompts can be used to guide the output of a language model to meet predefined criteria. Therefore, it has a significant relevance to the field of prompt engineering, even though it might not directly address the concept of hard prefix prompts in systematic review terms.",http://arxiv.org/pdf/2304.04616 -llama-adapter: efficient fine-tuning of language models with zero-init attention,7,"The abstract describes the development of a method for fine-tuning language models using a set of learnable adaption prompts, which is relevant to prompt engineering, particularly in the context of instruction-following models. The integration of these prompts into the higher transformer layers is a technique related to prompt engineering as it involves modifying the input sequence to achieve a desired behavior from the model. However, the study seems to be more focused on an efficient fine-tuning mechanism rather than on the specifics of designing prompts (hard prefixes), so it is not a perfect match to prompt engineering studies that focus exclusively on hard prefix prompts. Therefore, the rating acknowledges the relevance of the learnable adaption prompts but is not a full 10 due to the broader scope of the study.",http://arxiv.org/pdf/2303.16199 -in-context learning of large language models explained as kernel regression,7,"The study presents an analysis of in-context learning in large language models (LLMs), a concept closely related to prompt engineering since in-context learning involves providing LLMs with carefully crafted prompts (examples) to shape their output without updating the models' parameters. Understanding the mechanism behind LLMs' in-context learning capabilities could contribute valuable insights into the design of effective prompts, potentially improving prompt engineering strategies. 
However, the study does not directly focus on 'hard prefix prompts,' which are specific types of prompts, or on a systematic review of prompt engineering studies, so the relevance is substantial but not complete.",https://arxiv.org/pdf/2305.12766 -prompt tuning of deep neural networks for speaker-adaptive visual speech recognition,8,"The study presents prompt tuning methods for speaker-adaptive Visual Speech Recognition (VSR), which parallels prompt tuning in Natural Language Processing (NLP). Though the context is VSR rather than text-based models, the principles of prompt engineering (e.g., fine-tuning prompts for adaptation without changing the entire pre-trained model) are highly relevant to the prompt engineering study. As such, the techniques and results from this study could inform prompt engineering practices, especially those that deal with adaptation to new data or domains using small amounts of adaptation data. This makes it significantly relevant, though slightly less if the focus of the prompt engineering study is strictly on text-based NLP models.",http://arxiv.org/pdf/2302.08102 -à-la-carte prompt tuning (apt): combining distinct data via composable prompting,9,"The abstract discusses 'À-la-carte Prompt Tuning (APT)' which is directly related to prompt engineering as it deals with the methodology of tuning and composing prompts for transformer-based models. The approach to train individual prompts and compose them based on user-defined criteria is highly relevant to the study of prompt engineering. This could offer insights into the mechanics of prompt tuning and its practical applications in customizing machine learning models to specific data sets or user preferences. The only reason it doesn't score a perfect 10 is that the description does not explicitly mention 'hard prefix prompts', thus it may not cover the entire scope of the prompt engineering study mentioned in the prompt.",https://arxiv.org/pdf/2302.07994 -proof of concept: using chatgpt to teach emergency physicians how to break bad news,7,"The abstract highlights the use of detailed prompts to create realistic clinical scenarios and provide feedback, which directly relates to the concept of prompt engineering. The study illustrates the impact of carefully designed prompts on the AI's performance in a specific application (medical training), which is relevant to the field of prompt engineering. However, the focus is not solely on the theoretical or systematic aspects of prompt engineering but rather its practical implementation in a medical training context, which may not cover the depth or breadth of a 'comprehensive systematic review on hard prefix prompts' as the original query suggests.",https://assets.cureus.com/uploads/original_article/pdf/154391/20230609-458-1qfzq7g.pdf -promptonomyvit: multi-task prompt learning improves video transformers using synthetic scene data,7,"The relevance of this study to prompt engineering is moderate to high because it introduces the concept of 'task prompts' within video transformers, which are specialized parameters used for enhancing performance on different video understanding tasks. 'Promptonomy' is essentially an application of prompt engineering in the context of video transformers, where prompts are designed to model task-specific structure and improve machine learning model aptitude. 
While the study does not explicitly cover 'hard prefix prompts' or their systematic review, it does involve the creation and utilization of prompts in a learning context, thus contributing to the broader field of prompt engineering. However, the main focus is on the usage of synthetic scene data and improving video transformers, so it is not entirely centered on the theory or methodology of prompt engineering itself.",http://arxiv.org/pdf/2212.04821 -language prompt for autonomous driving,8,"The abstract describes a study focused on the intersection of natural language prompts and autonomous driving technology, which involves prompt engineering to some extent. Although the primary application is within the domain of computer vision and autonomous driving, the creation of the object-centric language prompt set and the formulation of a new prompt-based driving task indicates a substantial involvement of prompt engineering. The study's goal to predict object trajectories based on language descriptions necessitates understanding and engineering of prompts to be suitable for machine comprehension within a driving context. This is highly relevant to prompt engineering as it deals with generating and utilizing prompts to guide AI models. However, the rating is not a perfect 10 as the core application differs from general prompt engineering studies and focuses specifically on driving scenarios.",https://arxiv.org/pdf/2309.04379 -clinical prompt learning with frozen language models.,8,"The abstract discusses the application of prompt learning in a clinical context, which is a subset of prompt engineering. It highlights the advantages of prompt learning over traditional fine-tuning, such as fewer trainable parameters, less training time, and lower computational resources, all of which are key considerations in prompt engineering. Although it does not explicitly mention 'hard prefix prompts,' the focus on prompt learning's efficiency and effectiveness is highly relevant to the overarching field of prompt engineering. The reason for not giving a full score of 10 is because the study is specific to clinical applications rather than a broad systematic review of hard prefix prompts in general.",https://arxiv.org/pdf/2205.05535 -fedyolo: augmenting federated learning with pretrained transformers,7,"The abstract discusses modularity in the context of using modules such as prompts for adapting large pretrained transformer models in federated learning setups. While it does not specifically focus on 'hard prefix prompts,' it does touch on the general relevance of prompts (or similar kinds of modules) for model adaptation. This relevance is given a rating of 7 because the study could provide useful insights into the applications of prompt engineering within federated learning, even though it does not directly focus on a comprehensive systematic review of hard prefix prompts.",https://arxiv.org/pdf/2307.04905 -prores: exploring degradation-aware visual prompt for universal image restoration,8,"The abstract discusses the use of degradation-aware visual prompts within a universal image restoration model, which is a form of prompt engineering applied to visual tasks rather than language tasks. It touches on the principle of encoding information (degradation types) into prompts to guide the behavior of a model (Vision Transformer), a concept parallel to hard prefix prompts in NLP. 
While the paper does not deal directly with linguistic prompt engineering, the underlying ideas of customizing prompts to steer model behavior are highly relevant to the study of prompt engineering as a broader concept. Hence, a lower rating would be given if the question strictly asked for relevance to text-based prompts, but since it outlines the foundation of 'prompt engineering' which can extend beyond just language models, a higher rating is appropriate.",http://arxiv.org/pdf/2306.13653 -making humanoid robots teaching assistants by using natural language processing (nlp) cloud-based services,7,"The study involves using NLP and GPT language models, which are relevant to prompt engineering. The research is focused on fine-tuning GPT models with prompts derived from environmental context and robot actions, directly linking to the construction of prompts for language models. The rating is not a full 10 because the main application is on human-robot interaction and the deployment of these models, rather than on the systematic review of 'hard prefix prompts' or the discipline of prompt engineering itself.",https://www.extrica.com/article/22720/pdf -bootstrapping vision-language learning with decoupled language pre-training,8,"The paper describes the use of a model (P-Former) to predict ideal prompts within the context of vision-language learning, by focusing on language component optimization. This relates closely to prompt engineering, as the research aims to determine how best to elicit desired responses from language models, which is a fundamental aspect of prompt engineering. The methodology of prompt prediction is directly relevant to the art of crafting effective prompts. However, the specific application to vision-language learning might be slightly tangential to more general prompt engineering studies that might not focus on multimodal contexts. Despite that, the principles discussed could nonetheless provide valuable insights into prompt engineering for LLMs in general.",https://arxiv.org/pdf/2307.07063 -prompt-based ingredient-oriented all-in-one image restoration,7,"The abstract describes a novel technique for image restoration that uses 'prompt-based learning' as part of its methodology. This indicates some relevance to prompt engineering as it pertains to the use of prompts to guide the decoder in image processing tasks. However, the term 'prompt-based learning' in this context is more related to the domain of image restoration rather than to the development and study of textual or linguistic prompts in AI and machine learning. Even though the technique involves 'prompts' in some form, it may not specifically address the systematic review of 'hard prefix prompts' as one might expect in the study of AI or natural language processing. Therefore, the relevance is moderate since it's within the area of prompts as a concept but not directly focused on the linguistic aspect of prompt engineering.",https://arxiv.org/pdf/2309.03063 -hierarchical prompt tuning for few-shot multi-task learning,9,"The paper is highly relevant to prompt engineering as it discusses a novel approach to prompt tuning, which is a key aspect of prompt engineering. The hierarchical prompt tuning model addresses the need for effective prompts in multi-task learning, especially in few-shot scenarios. The introduction of shared prompts, auto-adaptive prompts, and task-specific prompts directly pertains to the methodology of engineering prompts to enhance performance. 
Although the study is not specifically about 'hard prefix prompts', the relevance to prompt engineering is strong because the paper contributes to the broader understanding of how to construct and implement prompts in complex, multi-layer neural networks such as Transformers.",https://dl.acm.org/doi/pdf/10.1145/3583780.3614913 -pm-detr: domain adaptive prompt memory for object detection with transformers,8,"The document describes the use of prompts (though in a different context from language models) to improve the domain adaptability of object detection models. It focuses on prompt-based strategies to bridge the gap between different data distributions. The concept of 'prompt memory' is relevant to prompt engineering, as it involves using prompts to encode domain-specific knowledge which can then influence the behavior of a model. However, the application of prompts here differs from their use in language models, where the term 'prompt engineering' is often used to describe the process of crafting inputs that elicit desired outputs. In this context, prompts are aiding domain adaptation of object detection systems rather than natural language processing tasks. Nonetheless, the use of prompts as a technique to improve machine learning models is relevant to the broader field of prompt engineering study.",http://arxiv.org/pdf/2307.00313 -visual prompt flexible-modal face anti-spoofing,7,"The abstract discusses the development of a visual prompt-based approach for improving the robustness of face anti-spoofing systems, which is indirectly related to prompt engineering. Although prompt engineering is primarily associated with natural language processing and the use of textual prompts in language models, the abstract suggests an adaptation of prompt learning principles to the domain of computer vision and multimodal learning. The concept of 'visual prompts' and their application in a flexible-modal face anti-spoofing task is relevant to the study of how prompts can be engineered and utilized in AI models, extending beyond textual inputs to visual and multimodal contexts. The relevance is not a direct match to 'hard prefix prompts,' indicating that the context of prompts is being extended to a different domain, thus the rating does not reach the maximum.",https://arxiv.org/pdf/2307.13958 -on the relationship between skill neurons and robustness in prompt tuning,9,"The paper discusses Prompt Tuning, which is highly relevant for prompt engineering as it studies how prompt tuning affects the robustness and transferability of pre-trained language models for specific tasks. Although it does not directly address 'hard prefix prompts', the concept of 'skill neurons' and their role in prompt tuning is crucial for understanding and engineering effective prompts. It hints at an underlying mechanism that could influence the construction and refinement of prompts, potentially making this area of study valuable for those engaged in prompt engineering.",https://arxiv.org/pdf/2309.12263 -medical intervention duration estimation using language-enhanced transformer encoder with medical prompts,7,"The study describes a framework that integrates medical prompts within a transformer encoder to improve the estimation of medical intervention durations. 
While this approach does utilize 'prompts' in the form of medical queries to improve the model's understanding of free-text EHR data, these prompts do not appear to be 'hard prefix prompts' in the context of prompting techniques typically discussed in natural language processing (NLP). The focus of the study is not on exploring the design or effectiveness of various prompts but rather on the application of medical prompts to harmonize different data modalities for medical predictions. Therefore, while prompts are relevant to the system being developed, the study does not seem to primarily address 'prompt engineering' as it would pertain to the generation or optimization of prompts themselves. This results in a moderate rating of relevance.",https://arxiv.org/pdf/2303.17408 -planning with learned entity prompts for abstractive summarization,8,"The study discusses the use of entity chains as prompts to improve the quality of abstractive summarization, which is a form of prompt engineering. The research directly involves engineering prompts (entity chains) to guide a model's generation process, making it highly relevant to the subject of prompt engineering. However, it is not solely focused on 'hard prefix prompts', as it encompasses a broader scope of learned entity prompts for content planning in summarization tasks.",https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl_a_00438/1979348/tacl_a_00438.pdf -a survey of controllable text generation using transformer-based pre-trained language models,7,"The provided abstract discusses the controllability of text generation using Transformer-based pre-trained language models, which is relevant to the field of prompt engineering since it deals with methods to direct language models in generating text that fulfills certain constraints. However, the abstract does not specifically mention 'hard prefix prompts' or delve into the topic of prompt engineering within controllable text generation. Therefore, while the survey has relevance due to its focus on control mechanisms, which could encompass prompt engineering techniques, it is not a perfect match for a study specifically on 'hard prefix prompts.' The rating reflects the general relevance but indicates that the document is not exclusively focused on the narrower subject of prompt engineering, especially centered around hard prefix prompts.",https://arxiv.org/pdf/2201.05337 -promptcal: contrastive affinity learning via auxiliary prompts for generalized novel category discovery,8,"The abstract discusses advancements in semi-supervised learning through the use of auxiliary visual prompts and contrastive learning methods. Though not explicitly centered on 'hard prefix prompts,' the research explores the usage of prompts (in the form of visual cues) to improve semantic clustering and discover novel classes. This is closely related to 'prompt engineering,' as it deals with the optimization of prompts to enhance model performance. 
Therefore, it is quite relevant to the field of prompt engineering, though it may not directly address the systematic review aspect of hard prefix prompts mentioned in the initial study description.",https://arxiv.org/pdf/2212.05590 -matchprompt: prompt-based open relation extraction with semantic consistency guided clustering,8,"The text describes a novel approach to open relation extraction using a prompt-based framework, which directly relates to the field of prompt engineering as it entails designing and utilizing prompts to train models with a small amount of pre-defined relational instances. This suggests innovation in the area of using prompts for machine learning tasks, which is relevant to the prompt engineering study. It is not a 'comprehensive systematic review on hard prefix prompts', but it is a practical application of prompt engineering principles, thus the relevance rating is 8 rather than 10.",https://aclanthology.org/2022.emnlp-main.537.pdf -dynamic visual prompt tuning for parameter efficient transfer learning,8,"The paper describes a method of parameter efficient transfer learning through the creation of dynamic, instance-wise tokens or 'prompts' for each image in visual tasks. While it is not directly related to 'hard prefix prompts', it discusses 'prompt tuning', which falls under the broader category of prompt engineering. The proposed method aims to adapt pre-trained models to new tasks more efficiently, which is relevant to the study of how prompts can be engineered to improve model performance. The high relevance score is given because the paper's core focus on dynamic visual prompts is closely aligned with the principles of prompt design and optimization, which are essential concepts in prompt engineering studies.",https://arxiv.org/pdf/2309.06123 -efficiently aligned cross-lingual transfer learning for conversational tasks using prompt-tuning,9,"The abstract discusses the use of 'prompt-tuning-based method for learning alignment prompts' which is directly related to prompt engineering. Specifically, it addresses the development of prompts that facilitate cross-lingual transfer learning, a key component of prompt engineering in the context of creating efficient language models for conversational tasks. The systematic review might explore various prompt techniques, including this efficient prompt-tuning method, making it highly relevant to the study. The reason it's not a perfect 10 is that the focus is also on the creation of a multilingual dataset and cross-lingual transfer learning, which, while related, are broader topics than prompt engineering alone.",http://arxiv.org/pdf/2304.01295 -clinical concept and relation extraction using prompt-based machine reading comprehension,7,"The described study makes significant use of prompt-based machine reading comprehension (MRC) architecture in the context of natural language processing for clinical data, which is directly related to the use of prompts in AI systems. Prompt engineering is central to designing the MRC architecture that can comprehend and extract relevant information from clinical texts. The fact that different prompting strategies were examined for their effects on MRC model performance bolsters its relevance to prompt engineering. However, the focus on clinical concept and relation extraction may mean that the specific prompt engineering details relevant to other domains or applications of prompt engineering are not explored in the abstract provided. 
Thus, the content is relevant due to its reliance on prompts and their optimization in an MRC system, but it is not exclusively focused on the concept of 'hard prefix prompts' as might be expected in a systematic review specifically dedicated to that subject.",https://arxiv.org/pdf/2303.08262 -is prompt-based finetuning always better than vanilla finetuning? insights from cross-lingual language understanding,8,"The abstract provided discusses the comparison of prompt-based fine-tuning versus vanilla fine-tuning in the context of cross-lingual language understanding tasks. This is highly relevant to the field of prompt engineering, as it studies the effectiveness of prompt-based approaches in model training. It may not be a perfect match to 'hard prefix prompts' specifically, but the exploration of prompt-based fine-tuning methods, such as the proposed ProFiT pipeline, contributes to the broader understanding of prompt efficacy in different scenarios, including multilingual tasks, and hence holds substantial relevance to studies in prompt engineering.",https://arxiv.org/pdf/2307.07880 -pro-cs : an instance-based prompt composition technique for code-switched tasks,8,"The abstract discusses a prompt composition technique for code-switched tasks, which is highly relevant to prompt engineering, as it directly pertains to designing prompts that effectively interact with language models on code-switched data. The fact that it compares its approach to both prompt-tuning and fine-tuning indicates an in-depth analysis of prompts in the context of significant efficiency in parameter use. The relevance is not rated a full 10 because the abstract does not explicitly mention 'hard prefix prompts,' which could be a more specific aspect of prompt engineering, but the overall content is very relevant to the broader field of prompt engineering study.",https://aclanthology.org/2022.emnlp-main.698.pdf -"continuous detection, rapidly react: unseen rumors detection based on continual prompt-tuning",9,"This paper is highly relevant to prompt engineering as it presents a framework for 'Continual Prompt-Tuning' specifically designed to tackle rumor detection. It directly deals with the optimization and storage of task-specific soft-prompts, which are central to the concept of prompt engineering within the context of language models. It also introduces strategies for knowledge transfer and a hypernetwork approach, both of which could influence future work in prompt engineering for continual learning scenarios. The only reason it is not a 10 is that it is specific to the context of rumor detection and the systematic review aspect might not be covered comprehensively.",http://arxiv.org/pdf/2203.11720 -soft prompt guided joint learning for cross-domain sentiment analysis,8,"The abstract discusses a 'soft prompt-based joint learning method' which is highly relevant to the topic of prompt engineering, particularly in the context of transfer learning and aspect term extraction. It explores how learnable vectors, as soft prompts, can be used to bridge domain differences and enhance model performance. 
While not focused exclusively on hard prefix prompts, the concept of soft prompts is intrinsically linked to prompt engineering, thus the study can contribute valuable insights to the broader field of prompt engineering research.",http://arxiv.org/pdf/2303.00815 -adaptive prompt learning with distilled connective knowledge for implicit discourse relation recognition,9,"The abstract describes a novel approach in the area of prompt engineering, focusing on the development of an advanced prompt learning framework called AdaptPrompt, which uses continuous prompts and connective knowledge distillation. This is highly relevant to the field of prompt engineering because it addresses a common challenge in the manual design of prompts and offers a solution that could be broadly applicable to other prompt engineering tasks. Although the study is specifically applied to implicit discourse relation recognition, the methods and findings are likely to have implications for prompt engineering in general, making it a valuable study within this domain. The only reason the rating is not a perfect 10 is that it focuses on a specific usage of prompt engineering within the context of discourse relation recognition, which may not cover all aspects of prompt engineering studies, such as hard prefix prompts explicitly.",https://arxiv.org/pdf/2309.07561 -prompt learning with knowledge memorizing prototypes for generalized few-shot intent detection,7,"The abstract mentions the use of 'prompt learning' as a technique within a two-stage learning framework for the purpose of Few-Shot Intent Detection. Prompt learning is relevant to prompt engineering as it involves designing and utilizing prompts to teach models specific tasks. However, the focus on 'knowledge memorizing prototypes' and issues specifically connected with intent detection makes it less directly relevant to the broader field of prompt engineering study. The use of prompts is a significant aspect of the research, but the particulars seem more narrowly focused on a specific application (intent detection) rather than on hard prefix prompts in general.",https://arxiv.org/pdf/2309.04971 -can unsupervised knowledge transfer from social discussions help argument mining?,7,"The abstract describes a study focused on argument mining that utilizes a novel prompt-based strategy for inter-component relation prediction, which is relevant to the concept of prompt engineering. The use of finetuned language models in conjunction with prompt-based techniques to leverage discourse context indicates a level of innovation and practical application in the realm of prompt engineering, warranting a rating of 7. The relevance is not at the maximum because the study is not exclusively concentrated on hard prefix prompts or comprehensive systematic review, but it does provide insights into the domain of prompt engineering within the context of argument mining.",http://arxiv.org/pdf/2203.12881 -approximated prompt tuning for vision-language pre-trained models,8,"The abstract provided discusses prompt tuning, which is a technique relevant to prompt engineering studies. The focus on approximating the impact of soft prompt tokens and proposing a method for reducing computational complexity directly impacts the efficiency of prompt engineering for vision-language pre-trained (VLP) models. The fact that it explores a novel Approximated Prompt Tuning (APT) approach and demonstrates the performance and efficiency improvements through experiments makes it quite relevant to the field. 
However, it does not specifically mention 'hard prefix prompts,' which was the focus of the initial request. Therefore, the rating is not a perfect 10.",https://arxiv.org/pdf/2306.15706 -p3o: transferring visual representations for reinforcement learning via prompting,7,"The study focuses on the transfer of learned policies in deep reinforcement learning using a process called 'prompting', which aligns with the concept of 'prompt engineering'. While the prompting here is specific to visual representation and policy optimization in DRL, it shows an application of prompts to modify behavior of a model without full retraining. This is relevant to prompt engineering as it demonstrates how prompts can be employed to adapt models to new situations. However, the study does not discuss 'hard prefix prompts' or explore the general space of natural language processing, which are commonly associated with prompt engineering, hence the relevance is not maximum.",https://arxiv.org/pdf/2303.12371 -icpc: instance-conditioned prompting with contrastive learning for semantic segmentation,8,"The paper is high in relevance to prompt engineering for a couple of reasons. Firstly, it deals directly with designing prompts for semantic segmentation, which is part of the broader spectrum of prompt engineering studies. The study focuses on dynamic prompting as opposed to static prompts, which is a notable aspect of prompt design. Secondly, the paper proposes an align-guided contrastive loss to refine the vision and text embeddings' alignment, which is an advanced technique in prompt tuning for multimodal models. The only reason it does not score a perfect 10 is that it is applied to semantic segmentation specifically, rather than prompt engineering in general. Nevertheless, the methods developed could potentially influence or be part of prompt engineering techniques in a broader context.",https://arxiv.org/pdf/2308.07078 -gradient-based automated iterative recovery for parameter-efficient tuning,8,"The paper discusses the use of gradient-based explainability methods like TracIn for improving model performance specifically mentioning 'prompt-tuning' which is a form of prompt engineering. It shows the process of recovering performance in the context of parameter-efficient tuning (PET), a concept closely related to optimizing prompts for language models. While the paper does not focus exclusively on prompt engineering, the application of TracIn in the PET context suggests significant relevance to the study of how prompts can be engineered and debugged effectively.",http://arxiv.org/pdf/2302.06598 -extracting latent steering vectors from pretrained language models,8,"The work discussed in the abstract is highly relevant to prompt engineering since it deals with controlling language models to produce desired outputs, which is a core aspect of prompt engineering. The idea of extracting latent steering vectors aligns with engineering prompts to manipulate model behavior. However, it's not centered on hard prefix prompts specifically but rather on a broader control mechanism within the language model, thus not warranting a full 10 rating.",http://arxiv.org/pdf/2205.05124 -rethinking efficient tuning methods from a unified perspective,7,"The abstract discusses Parameter-efficient transfer learning (PETL) where tuning methods such as prompt, prefix, and adapter are briefly mentioned. 
Although the focus is on the development of a unified framework called U-Tuning, it is relevant to prompt engineering study as it involves task-specific lightweight adjustments and potentially new approaches for parameter-efficient transfer learning which could include improvements in prompt engineering techniques. However, the abstract does not solely concentrate on 'hard prefix prompts' but rather a broader range of PETL methods, hence the 7 out of 10 rating for relevance.",http://arxiv.org/pdf/2303.00690 -retrieval-augmented generative question answering for event argument extraction,7,"The relevance of the study to prompt engineering is significant as it discusses the augmentation of prompts with retrieved QA pairs to improve event argument extraction. Such a retrieval-augmented approach is directly related to prompt engineering because it involves the strategic manipulation of prompts to enhance model performance. While the primary focus of the study appears to be on augmenting prompts for a specific task of argument extraction, the underlying principles and methods could be widely applicable to other areas of prompt engineering. Therefore, the study could contribute valuable insights into the prompt engineering domain, even though it may not address hard prefix prompts specifically.",https://arxiv.org/pdf/2211.07067 -integrated parameter-efficient tuning for general-purpose audio models,7,"The abstract of the study discusses the use of a 'prompt-based learning approach' as part of the proposed Integrated Parameter-Efficient Tuning (IPET) framework, indicating that prompt engineering is relevant to the framework's methodology. The embedding prompt as one of its components suggests that the study investigates a form of prompt engineering within the context of audio model adaptation. Although the study is specific to the audio domain and does not directly address the broader concept of hard prefix prompts in general, the inclusion of a prompt-based learning approach within the IPET framework and its application to pre-trained models is indeed relevant to prompt engineering techniques. Therefore, the study would likely be of interest to those researching prompt engineering in specific applications, albeit with a specific focus on audio tasks rather than a comprehensive systematic review on hard prefix prompts.",http://arxiv.org/pdf/2211.02227 -alexander knox at semeval-2023 task 5: the comparison of prompting and standard fine-tuning techniques for selecting the type of spoiler needed to neutralize a clickbait,8,"The study directly compares prompt engineering with standard fine-tuning techniques, which is highly relevant to prompt engineering research. Its focus on the application of prompt engineering for a specific NLP problem—clickbait neutralization—demonstrates the practical implications of prompt-based approaches and allows for insights into their effectiveness when contrasted with traditional fine-tuning. While the study is not exclusively about prompt engineering and also encompasses fine-tuning methods, its comparative analysis of the two techniques makes it significant for researchers interested in the area of prompt engineering.",https://aclanthology.org/2023.semeval-1.202.pdf -auto-prompting sam for mobile friendly 3d medical image segmentation,8,"The abstract discusses the development of an 'AutoSAM Adapter' that automatically generates prompts for 3D medical image segmentation, which is a specific application of prompt engineering. 
While it does not generalize to all forms of prompt engineering, this study focuses on automatic prompt generation to improve the performance of a segmentation model. Therefore, it is highly relevant to the study of prompt engineering, particularly in the field of medical image analysis using machine learning models. The deduction of two points is due to the specialized application rather than a broad, comprehensive review of techniques across different domains.",https://arxiv.org/pdf/2308.14936 -transferring pre-trained multimodal representations with cross-modal similarity matching,7,"The abstract and the TLDR mention designing context-based prompt augmentation (CPA), which indicates a direct relevance to prompt engineering as it pertains to refining the text prompts for improved performance in multimodal models. Although the main focus is on representation transfer and not on prompt engineering per se, the use of prompts to achieve cross-modal similarity matching shows that prompts are a noteworthy aspect of the proposed method's overall framework and application, thus suggesting moderate relevance to prompt engineering studies.",http://arxiv.org/pdf/2301.02903 -controllable generation of dialogue acts for dialogue systems via few-shot response generation and ranking,9,"The article presents a novel approach for controllable generation of dialogue acts (DAs) in dialogue systems through a few-shot learning and ranking method, which is highly relevant to prompt engineering. The use of few-shot prompts and the creation of methods for ranking generated responses based on their semantic accuracy and adherence to specific DAs are directly related to improving and refining the efficacy of prompts in generation tasks. The research aims to control the output of language models using prompt-based learning, a core aspect of prompt engineering.",https://arxiv.org/pdf/2307.14440 -adapting pre-trained language models to vision-language tasks via dynamic visual prompting,8,"The abstract discusses 'Dynamic Visual Prompting (DVP)', which is a novel approach to adapt pre-trained language models to vision-language tasks. While the focus is on bridging the gap between single- and multi-modal learning, the relevance to prompt engineering study lies in the exploration and implementation of prompts as a transfer learning approach. DVP as a means to reduce redundancy and optimize the placement of prompt tokens in the context of visual features directly pertains to prompt engineering, particularly in the way it demonstrates prompt effectiveness and modification techniques. Although the study is not exclusively about 'hard prefix prompts', it contributes to the broader field of prompt engineering by showing how prompts can be dynamically integrated with pre-trained models for enhanced performance in multi-modal tasks. The rating is given an 8 instead of a 10 because the study's primary focus is not on the comprehensive systematic review of hard prefix prompts, but rather on a particular application of prompts in vision-language tasks.",https://arxiv.org/pdf/2306.00409 -eco: ensembling context optimization for vision-language models,8,"The paper is highly relevant to prompt engineering, as it discusses improving image classification in vision-language models by engineering or learning textual prompts to optimize performance. The ensemble of prompts strategy directly ties to the manipulation and optimization of prompts, which is the essence of prompt engineering. 
Although the prompt engineering in question is utilized for vision-language scenarios rather than the 'hard prefix prompts' mentioned, the principles and goals appear to be closely aligned. Hence, the paper is not entirely focused on 'hard prefix prompts' but is still within the broader domain of prompt engineering.",https://arxiv.org/pdf/2307.14063 -enhancing cross-lingual natural language inference by prompt-learning from cross-lingual templates,8,"The abstract is highly relevant to prompt engineering as it discusses a prompt-learning based framework to enhance cross-lingual natural language inference (XNLI), which is a direct application of prompt engineering techniques. The use of cloze-style questions constructed from cross-lingual templates is an example of hard prefix prompts, which fits within the broader category of prompt engineering studies. The significance of the research is supported by experimental results on benchmark datasets, although it focuses specifically on the XNLI task rather than prompt engineering in general, which prevents it from receiving a full 10.",https://aclanthology.org/2022.acl-long.134.pdf -nlpbench: evaluating large language models on solving nlp problems,8,"The abstract and TLDR describe a study focused on evaluating the performance of large language models on NLP problems using a new benchmarking dataset. Prompting strategies like chain-of-thought (CoT) and tree-of-thought (ToT) are an integral part of this performance evaluation. These strategies are directly related to prompt engineering as they involve devising ways to present problems to LLMs in a manner that leverages their strengths. Although the abstract does not specifically mention 'hard prefix prompts,' the discussion of prompting strategies is closely related to the field of prompt engineering and the study appears to contribute to our understanding of how LLMs can be more effectively prompted. The rating is not a full 10 because the provided abstract doesn't focus exclusively on prompt engineering but rather on a wider scope of NLP problem-solving capabilities.",https://arxiv.org/pdf/2309.15630 -retuyt-inco at bea 2023 shared task: tuning open-source llms for generating teacher responses,8,"This paper is highly relevant to prompt engineering as it discusses the fine-tuning of Open-Source Large Language Models (LLMs) for a specific application, which is the generation of teacher responses in educational dialogues. The exploration of different prompting strategies, such as Few-Shot and Chain-of-Thought, directly pertains to the field of prompt engineering. While the paper does not focus solely on 'hard prefix prompts,' which the original question inquires about, it examines relevant techniques that would influence the design and implementation of effective prompts for LLMs. The deduction of two points accounts for the absence of a direct focus on 'hard prefix prompts,' but overall, the study presents material that would be of significant interest to anyone researching prompting methods.",https://aclanthology.org/2023.bea-1.61.pdf -aligning large language models for clinical tasks,7,"The abstract discusses the alignment of Large Language Models (LLMs) for clinical tasks, focusing on strategies such as 'expand-guess-refine' for question-answering applications. 
Although it does not directly mention 'hard prefix prompts' or conduct a comprehensive systematic review on them, the alignment strategy includes in-prompt strategies like few-shot and chain-of-thought prompting which are related to prompt engineering. Therefore, while it is not wholly focused on prompt engineering, it is still relevant due to the discussion of prompt-based techniques for improving LLM performance in a specific domain.",https://arxiv.org/pdf/2309.02884 -naisteacher: a prompt and rerank approach to generating teacher utterances in educational dialogues,9,"The paper is highly relevant to prompt engineering as it specifically deals with the generation of teacher responses using a prompt-based approach with GPT-3.5-turbo and involves reranking, which is an advanced form of prompt engineering. The only reason it does not receive a full score is that it may not directly address 'hard prefix prompts,' assuming 'hard prefix prompts' refers to a specific sub-category or method within prompt engineering.",https://aclanthology.org/2023.bea-1.63.pdf -visual prompting via image inpainting,8,"The abstract presents a study relevant to prompt engineering in the context of visual models rather than textual ones. It discusses a method analogous to prompting in NLP but applied to image processing tasks using image inpainting. Even though it doesn't involve 'hard prefix prompts' directly and focuses on the visual domain, the concept of adapting pre-trained models to new tasks with example-based prompts is closely related to the principles of prompt engineering. Therefore, the relevance is high, but not absolute, as this study does not directly discuss textual prompt engineering or hard prefix prompts specifically.",http://arxiv.org/pdf/2209.00647 -can adaptive pedagogical agents' prompting strategies improve students' learning and self-regulation?,7,"The study addresses prompting strategies in the context of adaptive pedagogical agents, which can be considered a form of prompt engineering as it relates to optimizing the prompts for better learning and self-regulation outcomes. Although it does not directly address 'hard prefix prompts' in a systematic review manner, the concept of a 'fading prompting strategy' is related to how prompts are engineered for effectiveness over time, which could be relevant in the broader scope of prompt engineering study.",https://hal.archives-ouvertes.fr/hal-01376429/file/Bouchet_et_al._ITS2016.pdf -low-resource ner by data augmentation with prompting,8,"The mentioned paper is highly relevant to prompt engineering study, especially considering its use of prompting strategies to elicit knowledge from a language model (BERT) for named entity recognition (NER) in a low-resource setting. The relevance score is not a perfect 10 because the focus is on data augmentation for NER and not solely on hard prefix prompts, which are a subset of prompt engineering techniques. Furthermore, the emphasis on label-conditioned word replacement and generation of new training data via QA prompting demonstrates a practical application of prompt engineering within a specific NLP task, underscoring its importance and relevance to the field.",https://www.ijcai.org/proceedings/2022/0590.pdf -this joke is [mask]: recognizing humor and offense with prompting,8,"The study described in the title and abstract focuses on the effectiveness of prompting, which is a technique used in NLP and directly relevant to prompt engineering. 
The investigation of humor recognition through prompts falls within the scope of prompt engineering studies, as it explores how prompts can be designed and utilized to achieve a specific task (humor recognition in this case). The fact that the paper compares prompting to fine-tuning and looks at low-resource scenarios also adds to its relevance. However, the specificity to humor and offense slightly limits the rating as prompt engineering can encompass a broader range of tasks beyond these topics.",http://arxiv.org/pdf/2210.13985 -demonstrate-search-predict: composing retrieval and language models for knowledge-intensive nlp,7,"The abstract provided discusses an advanced technique in the domain of natural language processing that could clearly relate to prompt engineering. The Demonstrate-Search-Predict (DSP) framework integrates language models (LM) and retrieval models (RM) in a complex pipeline to improve performance on knowledge-intensive tasks. While this does not directly reference 'hard prefix prompts', it aligns with the broader field of prompt engineering due to its focus on improving the interaction between models for better information retrieval and processing. Prompt engineering is crucial in designing the inputs to such systems to ensure the most relevant and accurate outputs. However, without explicit mention of 'hard prefix prompts', the relevance is not a perfect fit; hence, a rating of 7 is assigned to indicate its substantial relevance but not a direct match to the specific topic of prompt engineering study.",http://arxiv.org/pdf/2212.14024 -error analysis prompting enables human-like translation evaluation in large language models: a case study on chatgpt,9,"The study specifically focuses on the development and refinement of a prompting method, namely Error Analysis Prompting (EAPrompt), which is a direct application of prompt engineering. The use of prompts in this context is to enhance the capability of generative LLMs, such as ChatGPT, to evaluate machine translation quality more effectively. This falls within the domain of prompt engineering, as it involves designing prompts to elicit desired behaviors from a language model. However, it does not directly address 'hard prefix prompts' as mentioned in the initial request, but it is highly relevant to the overall field of prompt engineering.",https://arxiv.org/pdf/2303.13809 -explicit visual prompting for low-level structure segmentations,8,"The relevance to prompt engineering is significant as the study adapts the concept of prompt tuning from natural language processing (NLP) to the visual domain, which is a novel application of prompt engineering principles. Prompt tuning is a core area of study within prompt engineering, and the paper's proposition of a new visual prompting model called 'Explicit Visual Prompting (EVP)' shows direct influence from NLP prompt tuning methods, indicating that the findings could be beneficial to the field. Although EVP is tailored for image-based tasks and not textual prompt engineering, the conceptual crossover and potential implications for the development of similar strategies in NLP make this study relevant. 
The rating is not a perfect 10 because the study does not directly address textual prompt engineering but rather adapts its concepts to a different domain.",https://arxiv.org/pdf/2303.10883 -pushing the limits of chatgpt on nlp tasks,9,"The abstract presents research that directly involves the optimization of prompts and input strategies for improving ChatGPT's performance on a variety of NLP tasks. Techniques such as 'one-input-multiple-prompts' and the development of modules to address specific issues inherent in language model tasks are inextricably linked to prompt engineering. Although the study's title does not explicitly mention 'hard prefix prompts,' the body of work encompasses strategies that likely include or are related to prompt engineering concepts. Therefore, the study is highly relevant to prompt engineering, meriting a rating of 9 out of 10. It loses one point because it does not specifically mention the systematic review on 'hard prefix prompts,' which might be considered a subset or particular aspect of prompt engineering the inquiry could be asking about.",https://arxiv.org/pdf/2306.09719 -all in one: multi-task prompting for graph neural networks,8,"The paper focuses on the adaptation of prompt learning from NLP to graph tasks, seeking to bridge the gap between pre-trained models and diverse graph tasks by proposing a novel multi-task prompting method. This is highly relevant to prompt engineering as it explores the concept of prompts, albeit in the domain of graph models. The integration of NLP prompting techniques into a different domain suggests a broader potential application of prompt engineering principles. The rating is not a full 10 due to the specific focus on graph models rather than a general prompt engineering approach.",https://arxiv.org/pdf/2307.01504 -diffusion-nat: self-prompting discrete diffusion for non-autoregressive text generation,7,"The abstract discusses the integration of discrete diffusion models with non-autoregressive text generation and the improvement of this integration via a novel strategy called 'iterative self-prompting.' While it does not directly mention 'hard prefix prompts,' the concept of self-prompting is related to prompt engineering because it involves the manipulation of prompts to improve the text generation process. This means that the study contributes to the field of prompt engineering, even if it doesn't directly address the specific topic of hard prefix prompts. Therefore, it has relevance to the broader field of prompt engineering but is not a perfect match for a systematic review focused exclusively on hard prefix prompts.",http://arxiv.org/pdf/2305.04044 -parafuzz: an interpretability-driven technique for detecting poisoned samples in nlp,8,"The relevance to prompt engineering is quite high in this study. The abstract mentions the formulation of the trigger-removal task as a prompt engineering problem, indicating a direct engagement with prompt engineering techniques. Furthermore, the application of 'fuzzing' to discover optimal paraphrase prompts for the purpose of maintaining input semantics while eliminating backdoor triggers in NLP models is aligned with innovative practices within prompt engineering. 
Although the primary focus is on the detection of poisoned samples and ensuring interpretability, the use of prompt engineering as a method to achieve these aims supports the rating of 8 out of 10.",https://arxiv.org/pdf/2308.02122 -self-diagnosis and self-debiasing: a proposal for reducing corpus-based bias in nlp,8,"The paper is highly relevant to prompt engineering as it addresses the critical aspect of bias mitigation in NLP models, which is an essential consideration when designing prompts. The concept of 'self-diagnosis' is particularly pertinent, as it implies that models can detect undesirable biases in response to prompts. Similarly, 'self-debiasing', where the model actively avoids generating problematic outputs based on the prompt description, is a direct application of prompt engineering principles. The techniques discussed could be employed in designing prompts that encourage models to produce less biased content. Although the paper does not directly elaborate on 'hard prefix prompts,' it does contribute to the overarching field of prompt engineering by exploring decoding algorithms and model behavior in response to prompts and bias management.",https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl_a_00434/1979270/tacl_a_00434.pdf -automatically correcting large language models: surveying the landscape of diverse self-correction strategies,8,"The paper's focus on 'self-correction strategies' for large language models (LLMs) is highly relevant to prompt engineering study, as prompt engineering often involves designing prompts to elicit the desired behavior or correct the output of an LLM. The detailed review of automated feedback methods can be crucial for advancing the prompt engineering field, especially in the context of minimizing the necessity for human intervention in generating effective prompts. However, the paper may not be specifically centered on 'hard prefix prompts,' which the original prompt suggested, hence not a perfect 10.",https://arxiv.org/pdf/2308.03188 -adversarial attacks on large language model-based system and mitigating strategies: a case study on chatgpt,9,"The abstract details a study that is highly relevant to prompt engineering as it focuses on using prefix prompts as a mitigating strategy against adversarial attacks on language models, directly impacting how prompts are engineered for safety and robustness. Evaluating and enhancing the security of language models like ChatGPT with prefix prompts falls within the scope of prompt engineering research. Although the study may not solely concentrate on the engineering of hard prompts, the development of a 'training-free prefix prompt mechanism' indicates a significant contribution to the field of prompt design and mitigation strategies, which is a crucial aspect of prompt engineering.",https://downloads.hindawi.com/journals/scn/2023/8691095.pdf -evaluating tuning strategies for sequence generation with protein language models,8,"The response evaluates a study that involves adapting NLP models for use in generating artificial protein sequences, with a focus on prompt tuning as an alternative to fine-tuning. Although the study is not directly examining 'hard prefix prompts,' it is investigating the efficiency and effectiveness of tuning strategies, particularly prompt tuning, within the context of a language model adapted for a specialized domain. 
This makes the study highly relevant to prompt engineering as it explores adaptable methodologies for model tuning, which can include prompt engineering strategies. The study's results and the discussion of the quality assessment tools also contribute valuable insights for future developments in prompt engineering, despite not specifically addressing 'hard prefix prompts.'",https://www.biorxiv.org/content/biorxiv/early/2023/03/01/2023.02.28.530492.full.pdf -iie-nlp-nut at semeval-2020 task 4: guiding plm with prompt template reconstruction strategy for comve,9,"The paper is highly relevant to prompt engineering because it discusses a prompt template reconstruction strategy within the context of a natural language processing task (i.e., SemEval Task 4). The use of prompt templates to guide pre-trained language models (PLMs) for specific tasks like commonsense validation and explanation is a direct application of prompt engineering. Even though the study does not seem to be a systematic review on 'hard prefix prompts', the introduction of an input reconstruction strategy with prompt templates is closely related to the engineering and structuring of prompts to improve the performance of language models, which is a key aspect of prompt engineering. Therefore, the paper's content aligns well with the field of study.",https://aclanthology.org/2020.semeval-1.42.pdf -news summarization and evaluation in the era of gpt-3,8,"The paper is highly relevant to prompt engineering as it directly involves prompting a large language model (GPT-3) and studying its performance in a specific NLP task - news summarization. Although it does not focus exclusively on 'hard prefix prompts', the mentioned concept of 'task description' prompting is a critical element of prompt engineering. The examination of how effectively GPT-3 can generate summaries with only a task description highlights the importance of designing prompts to elicit desired responses from AI models. The relevance to prompt engineering study is not rated a perfect 10 because the paper seems to cover broader aspects of model evaluation and summarization tasks rather than focusing solely on the detailed structure and impact of prompts.",http://arxiv.org/pdf/2209.12356 -opt-iml: scaling language model instruction meta learning through the lens of generalization,8,"The study pertains to the broader field of instruction-tuning, which is closely related to prompt engineering, as it involves optimizing language models to understand and execute instructions from prompts more effectively. Although the specific term 'hard prefix prompts' is not mentioned, the principles and findings from such instruction-tuning experiments can be highly relevant and applicable to prompt engineering, including the development and assessment of hard prefix prompts.",http://arxiv.org/pdf/2212.12017 -how good are gpt models at machine translation? a comprehensive evaluation,7,"The relevance of the presented paper to prompt engineering is significant, mainly due to the examination of the 'effect of prompting strategies' on the performance of GPT models in machine translation. Prompt engineering is crucial for optimizing the model's output, and this paper's exploration of how GPT models respond to different prompts could provide valuable insights for the field. 
Although the study's primary focus is on machine translation, the inclusion of prompting strategies as one of the evaluated aspects means that the findings could potentially contribute to a better understanding of prompt engineering. Therefore, the rating acknowledges the indirect but important relation to prompt engineering within the context of machine translation.",http://arxiv.org/pdf/2302.09210 -enabling large language models to generate text with citations,8,"The study is highly relevant to prompt engineering as it directly addresses the construction of prompts to enable large language models to generate text that includes citations. This requires the development of novel prompting strategies that guide the model not just to produce answers, but also to provide evidence through citations. While the study is not solely focused on 'hard prefix prompts,' it falls within the broader field of prompt engineering and is very relevant due to its focus on the performance and verification of information produced by LLMs. Prompt engineering is a critical component in achieving the goals outlined in the study.",http://arxiv.org/pdf/2305.14627 -diagnostic reasoning prompts reveal the potential for large language model interpretability in medicine,9,The paper is highly relevant to prompt engineering as it specifically focuses on the development and use of 'diagnostic reasoning prompts' designed to investigate the ability of LLMs (like GPT-4) to replicate clinical reasoning processes. This research directly contributes to the field of prompt engineering by demonstrating that prompts can be designed in a way that not only elicits specific types of reasoning from LLMs but can also do so with a level of interpretability that aligns with the cognitive processes of professionals in the field of medicine. The study's aim to enhance understanding and trust in LLMs through better-designed prompts is squarely within the goals of prompt engineering.,https://arxiv.org/pdf/2308.06834 -llm-funcmapper: function identification for interpreting complex clauses in building codes via llm,8,"The abstract describes the use of a large language model (LLM) to interpret complex regulatory texts, which is relevant to prompt engineering study as it involves the development of a prompt template with chain of thought thinking. While the study isn't focused on 'hard prefix prompts' specifically, the creation of this tailored template and its adjustment using a classification-based tuning strategy are key examples of prompt engineering. The approach of identifying functions and utilizing LLM for understanding complex clauses is closely related to how prompts are engineered to improve the performance of language models on specific tasks. The rating is not a full 10 because the research is not exclusively centered on prompt engineering, but rather on the application of LLMs in the context of interpreting building codes; nonetheless, the methodology includes relevant elements of prompt engineering.",https://arxiv.org/pdf/2308.08728 -resolving the imbalance issue in hierarchical disciplinary topic inference via llm-based data augmentation,7,"The paper discusses the use of large language models for data augmentation in order to tackle the problem of data imbalance in the context of hierarchical disciplinary topic inference. This is relevant to the field of prompt engineering because designing effective prompts is essential for guiding language models like Llama V1 to generate meaningful and well-aligned augmented text data. 
The study's emphasis on prompt design for keyword-based research proposal generation is a significant aspect of prompt engineering. However, the primary focus appears to be on addressing data imbalances in the machine learning system, rather than the nuances of prompt engineering itself. Therefore, while prompt engineering is undoubtedly a component of the study, it is not the singular focus.",https://arxiv.org/pdf/2310.05318 -workshop on large language models' interpretability and trustworthiness (llmit),8,"The abstract discusses the significance of context (prompts) and the need for research on the effects of inputs on Large Language Models (LLMs) and their outputs. It directly relates to prompt engineering, as it addresses the importance of understanding how small changes in prompts can significantly alter the behavior of LLMs (a key issue in prompt engineering). However, it doesn't explicitly mention 'hard prefix prompts' or a systematic review on prompt engineering, hence it doesn't fully match the comprehensive systematic review aspect of the prompt engineering study specified.",https://dl.acm.org/doi/pdf/10.1145/3583780.3615311 -improving zero-shot visual question answering via large language models with reasoning question prompts,8,"The title and abstract describe a study focused on improving the effectiveness of Large Language Models (LLMs) for zero-shot Visual Question Answering tasks by using 'Reasoning Question Prompts'. This is relevant to prompt engineering as it involves the strategic design of prompts to enhance the performance of LLMs in interpreting and answering questions without any prior specific training on the task. Although the study does not specifically mention 'hard prefix prompts,' it nonetheless pertains to the broader field of crafting prompts to guide the LLMs towards better comprehension and response generation. Therefore, the relevance to prompt engineering is high, but not the maximum as the study doesn't directly address the concept of 'hard prefix prompts'.",https://dl.acm.org/doi/pdf/10.1145/3581783.3612389 -psychologically-informed chain-of-thought prompts for metaphor understanding in large language models,9,"The study presents the application of chain-of-thought prompts to large language models in order to incorporate structured reasoning, similar to probabilistic models, particularly focusing on metaphor understanding. Although it does not specifically address 'hard prefix prompts,' it does fall within the broader category of prompt engineering, which involves designing prompts to elicit specific behaviors or capabilities in language models. The emphasis on structured reasoning through prompts and the reference to improving performance on a specific language task, metaphor paraphrase selection, make it highly relevant to studies in prompt engineering. The only reason it does not receive a full 10 is that it is not exclusively centred on 'hard prefix prompts' as the original term suggests.",http://arxiv.org/pdf/2209.08141 -towards llm-based fact verification on news claims with a hierarchical step-by-step prompting method,9,"The presented paper is highly relevant to prompt engineering study as it explores a novel prompting method, the Hierarchical Step-by-Step (HiSS), specifically for the task of fact verification of news claims using large language models (LLMs). 
This approach falls directly within the scope of prompt engineering, where the design of prompts is used to guide the LLMs to perform complex tasks such as dissecting claims into subclaims and verifying them, which is a more nuanced application of prompt engineering. The relevance is not rated a full 10 only because the abstract does not explicitly discuss the engineering of 'hard prefixes,' but the prompting methodology itself is a significant contribution to the field of prompt engineering.",https://arxiv.org/pdf/2310.00305 -unified human-scene interaction via prompted chain-of-contacts,7,"The relevance of the 'unified human-scene interaction via prompted chain-of-contacts' study to prompt engineering is significant, as it describes a system that uses language commands to control interactions within a virtual environment. This means that it requires engineered prompts to interpret human language and convert it into actionable commands, aligning closely with the concept of prompt engineering. Although the study focuses specifically on Human-Scene Interaction and does not explicitly discuss the process of designing prompts or the systematic review of hard prefix prompts, the usage of a Large Language Model (LLM) Planner to translate these commands indicates that prompt engineering is an integral part of the framework. Therefore, it is relevant to the study of prompt engineering but not entirely focused on it; hence, it receives a rating of 7.",https://arxiv.org/pdf/2309.07918 -majority rule: better patching via self-consistency,8,"The abstract provided discusses an advanced application of prompting techniques in the specific context of software engineering problem-solving. While the focus is on a particular domain, the techniques used, such as few-shot prompts, chain of thought explanations, and the self-consistency method are directly related to prompt engineering. The paper's contribution to prompt engineering is substantial as it explores the effectiveness of particular prompting strategies (like using commit logs as explanations) that lead to state-of-the-art results. However, the research does not appear to be about 'hard prefix prompts' specifically, so it is not a perfect match for a 'comprehensive systematic review on hard prefix prompts.' Therefore, the rating is not a full 10.",https://arxiv.org/pdf/2306.00108 -llm-assisted content analysis: using large language models to support deductive coding,7,"The paper 'llm-assisted content analysis: using large language models to support deductive coding' is moderately relevant to prompt engineering studies. The study investigates the potential of Large Language Models like GPT-3.5 to assist with the labor-intensive process of deductive coding in qualitative research, which is a specific application of natural language processing. Although it does not directly focus on 'hard prefix prompts,' it does explore the broader realm of using prompts (or queries) to facilitate analysis with an LLM, and it examines how LLMs can be used to refine prompts for better deductive coding outcomes, which is a core part of prompt engineering. 
Therefore, the principles and findings regarding prompt optimization and evaluation in this research can be valuable for those studying prompt engineering, even if the primary focus of the study does not directly align with the construction or systematization of hard prefix prompts.",http://arxiv.org/pdf/2306.14924 -toolkengpt: augmenting frozen language models with massive tools via tool embeddings,7,"The abstract provided does pertain to the general field of prompt engineering, given it discusses an approach to augment large language models in a way that could enhance their use of prompts for tool execution. Although it doesn't specifically mention 'hard prefix prompts' or conduct a 'systematic review' on them, the description of ToolkenGPT and the concept of 'toolkens' is relevant to the field of prompting language models for specific tasks. The paper suggests a method for improving the interaction between language models and the tools they can utilize, which could be considered a form of advanced prompt engineering. Therefore, the rating is moderately high for relevance, but not a full score because it does not directly address a systematic review or the specific concept of 'hard prefix prompts.'",http://arxiv.org/pdf/2305.11554 -enhance reasoning ability of visual-language models via large language models,8,"The provided abstract is relevant to prompt engineering study because it describes a method (TReE) for enhancing the reasoning ability of visual-language models by using prompts derived from a large language model. This is particularly applicable to hard prefix prompts, as it involves structuring input to the models in a way that guides them through a multi-stage reasoning process. Although the abstract may not explicitly state 'hard prefix prompts', the thinking and re-thinking stages likely involve constructing prompts that carefully direct the model's reasoning, a key concept in prompt engineering.",http://arxiv.org/pdf/2305.13267 -violation of expectation via metacognitive prompting reduces theory of mind prediction error in large language models,7,"The abstract describes a study on the application of a metacognitive prompting framework in the context of LLMs and their ability to perform Theory of Mind tasks. These tasks are directly related to the prediction capabilities and interpretation strategies of the models, which are essential elements in the broader scope of prompt engineering. Though the concept of 'hard prefix prompts' as specified in the initial request is not addressed directly, the nature of modifying LLM behavior through specific prompting techniques (metacognitive prompting) is highly relevant to enhancing the understanding of how prompts affect model performance and behavior. Therefore, the study is considerably relevant as it focuses on systematic approaches to improve interaction quality between humans and AI via prompts, which could indirectly contribute to the understanding and development of hard prefix prompts in prompt engineering.",https://arxiv.org/pdf/2310.06983 -gpt-4 is too smart to be safe: stealthy chat with llms via cipher,8,"The relevance of this study to prompt engineering is high because it directly investigates the interaction dynamics between humans and LLMs (Large Language Models) by introducing a novel method of communication—CipherChat. This approach challenges existing safety alignment techniques, which are crucial for prompt engineering as they ensure that model responses align with intended outcomes and ethical guidelines. 
The use of ciphers as a tool to test and potentially enhance LLMs' interpretative faculties aligns with prompt engineering strategies that seek to refine how models understand and generate language-based responses. Furthermore, the discovery of a 'secret cipher' within LLMs and the development of a SelfCipher method pertains to advanced prompt engineering, where understanding model behavior in non-natural languages can lead to more sophisticated and safer human-AI interactions. However, because the study primarily focuses on safety alignment and communication in ciphers, which are a subset of prompt engineering tasks, it does not fully encompass the breadth of prompt engineering studies. Hence, the rating falls short of a perfect score.",https://arxiv.org/pdf/2308.06463 -ask an expert: leveraging language models to improve strategic reasoning in goal-oriented dialogue models,8,"The study focuses on incorporating strategic reasoning into dialogue models through the use of specialized prompts, which is related to prompt engineering. Although the 'hard prefix prompt' is not explicitly mentioned, the concept of structured prompts guiding dialogue systems is fundamental to prompt engineering and is reflected in the 'Ask an Expert' framework. This framework relies on pre-specified prompts to direct the conversation, which is a core aspect of prompt engineering. The relevance to prompt engineering is high, but the rating is not a full 10 due to the absence of a direct focus on 'hard prefix prompts' specifically.",http://arxiv.org/pdf/2305.17878 -zero-shot visual relation detection via composite visual cues from large language models,9,"The described study's focus on using language model-generated description-based prompts, referred to as 'Composite Description prompts', to improve zero-shot visual relation detection directly relates to the field of prompt engineering. The systematic review of 'hard prefix prompts' could encompass studies that explore innovative ways of combining language models with vision tasks, including the generation of prompts to guide visual recognition. Furthermore, the introduction of a chain-of-thought method to prompt language models for weight generation aligns with strategic prompt design to elicit specific model behaviors. Thus, the relevance is high, though not a perfect 10 as the primary focus is on visual relation detection rather than prompt engineering exclusively.",https://arxiv.org/pdf/2305.12476 -distinguish before answer: generating contrastive explanation as knowledge for commonsense question answering,8,"The abstract describes CPACE, a model that uses explanation prompts to generate contrastive explanations from symbolic knowledge, which is particularly relevant to the field of prompt engineering. The use of prompts to guide the generation of explanations indicates that this research is focused on enhancing the interpretability and effectiveness of a question answering system through careful design of prompts. While not exclusively focused on 'hard prefix prompts', the study emphasizes the use of prompts in an AI model, which aligns with studies in prompt engineering. 
The relevance rating is not the maximum because the connection to 'hard prefix prompts' is not direct, yet the concept of using prompts to drive AI behavior is central to the research presented.",http://arxiv.org/pdf/2305.08135 -epa: easy prompt augmentation on large language models via multiple sources and multiple targets,8,"The paper describes a method called EPA (Easy Prompt Augmentation) which is directly related to prompt engineering. It improves the performance of large language models by augmenting task prompts with paraphrased demonstrations, reducing the user's effort in creating effective prompts. Since the study is about a technique to enhance prompt efficacy for NLP tasks, it has high relevance to the field of prompt engineering. However, the information provided does not explicitly mention 'hard prefix prompts', which was the specific topic of interest mentioned in the original inquiry, thus the rating is not a full 10.",https://arxiv.org/pdf/2309.04725 -expclip: bridging text and facial expressions via semantic alignment,7,"The abstract describes a research study that focuses on using natural language prompts to control the style of facial expressions in speech-driven animation, which is relevant to prompt engineering in the context of using language prompts for specific tasks. However, the primary application is in the domain of facial animation rather than prompt engineering for text generation or data processing tasks. Nevertheless, the study's use of a CLIP-based model and the development of a Text-Expression Alignment Dataset (TEAD) suggests significant overlap with prompt engineering methodologies, as it involves the alignment of text prompts with emotional expressions. The relevance is not complete as the scope of prompt engineering can be more extensive, but the techniques and mechanisms such as automatic annotation with LLMs and Expression Prompt Augmentation (EPA) are of interest to the field of prompt engineering.",https://arxiv.org/pdf/2308.14448 -pitl: cross-modal retrieval with weakly-supervised vision-language pre-training via prompting,8,"The study is highly relevant to the field of prompt engineering as it describes a method to improve the performance of vision-language pre-training models by using prompts to elicit knowledge from large language models. The method, called Prompts-in-The-Loop (PiTL), uses prompts to generate language counterparts for images, which reduces the need for paired image-text data and is a direct application of prompt engineering techniques. Although the study does not specifically focus on 'hard prefix prompts', it is still related to the broader area of prompt engineering, hence the rating of 8.",https://dl.acm.org/doi/pdf/10.1145/3539618.3592038 -prompting with pseudo-code instructions,8,"The paper directly addresses the concept of 'prompt engineering' by exploring the use of pseudo-code as a form of prompt style for improving the performance of pre-trained language models. It compares pseudo-code prompts with natural language prompts and presents empirical results showing the effectiveness of pseudo-code, which includes structural elements pertinent to the field of prompt engineering. The improvement in performance metrics like F1 scores for classification and ROUGE-L scores for generative tasks indicates a significant relevance to the area of study. 
However, it focuses specifically on pseudo-code prompting rather than a broader range of hard prefix prompts, which is why the rating is not a full 10.",http://arxiv.org/pdf/2305.11790 -towards general visual-linguistic face forgery detection,8,"The abstract describes a study that centers on using 'fine-grained sentence-level prompts' for more effective face forgery detection. Prompt engineering is directly related to the design of these fine-grained prompts, making it highly relevant to the stated topic. The use of prompts within a Visual-Linguistic Face Forgery Detection system to improve semantic information and interpretability aligns with the study of hard prefix prompts which are designed for better interaction between language and models. The rating isn't a full 10 because the study focuses on a specific application of prompts in face forgery detection rather than a broad systematic review of hard prefix prompts across various domains.",https://arxiv.org/pdf/2307.16545 -forgetful large language models: lessons learned from using llms in robot programming,9,"The abstract indicates a study focused on reducing errors in execution of robotic programming tasks by employing language models with prompts. Although it concentrates on the 'forgetfulness' of LLMs and proposes solutions through prompt engineering tactics, it doesn't strictly cover 'hard prefix prompts' as the original study question suggests. However, the relevance is quite high as the paper seems to be a direct application of prompt engineering to improve task performance. Just the focus on prefix prompts specifically is not stated, which slightly reduces the rating.",https://arxiv.org/pdf/2310.06646 -interpretable unified language checking,8,"The abstract mentions the use of a 'simple, few-shot, unified set of prompts' for improving the performance of large language models (LLMs) on a variety of language checking tasks. This indicates that the research involved studies on how prompt engineering can enhance the capabilities of LLMs in detecting misinformation, stereotypes, and hate speech. Although the focus is not solely on 'hard prefix prompts,' the relevance to prompt engineering is clear because the study explores how different kinds of prompts can affect the performance of LLMs on specific language tasks. The rating is not a full 10 because the abstract does not focus exclusively on systematic review of prompt engineering or on 'hard prefix prompts', which are specific types of prompts used to control the behavior of language models.",http://arxiv.org/pdf/2304.03728 -genrec: large language model for generative recommendation,7,"The abstract indicates the use of 'specialized prompts' to improve the ability of a Large Language Model (LLM) to understand recommendation tasks, which implies a form of prompt engineering. Since prompt engineering is essential for fine-tuning LLMs to perform specific tasks such as generative recommendation, and this paper discusses formulating these prompts, it has a substantial relevance to prompt engineering study. 
However, the focus of the abstract seems more on the application of large language models for recommendation systems rather than the detailed study of hard prefix prompts, which prevents a perfect score.",https://arxiv.org/pdf/2307.00457 -"a multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity",8,"The abstract describes an evaluation framework that specifically includes assessments of ChatGPT's capabilities in a 'multi-turn ""prompt engineering"" fashion,' indicating that the study examines and utilizes prompt engineering as a part of the evaluation process. Since prompt engineering is integral to optimizing the performance of ChatGPT in various tasks as mentioned in the abstract, it is highly relevant to the study of prompt engineering. However, it is not entirely focused on 'hard prefix prompts,' which would be explicitly tailored cues designed to guide the language model's responses, therefore the rating is not a full 10.",http://arxiv.org/pdf/2302.04023 -chain-of-thought prompt distillation for multimodal named entity and multimodal relation extraction,8,"This abstract describes a study focused on prompt distillation, which is a technique related to prompt engineering. The core concept of prompt engineering is leveraged here, as it involves crafting prompts to extract reasoning abilities from large language models and effectively transfer this knowledge to smaller models. This research is relevant to the field of prompt engineering, specifically concerning the design of 'chain-of-thought' prompts to facilitate multimodal understanding. Although the study doesn't explicitly focus on 'hard prefix prompts,' it aligns closely with the larger domain of prompt engineering, thus meriting a high relevance rating.",https://arxiv.org/pdf/2306.14122 -fedlogic: interpretable federated multi-domain chain-of-thought prompt selection for large language models,9,"The relevance of the paper 'FedLogic: Interpretable Federated Multi-domain Chain-of-Thought Prompt Selection for Large Language Models' to prompt engineering is high. It directly addresses the challenge of prompt selection in LLMs, aiming to improve both the precision of responses and the interpretability of the prompting process. The focus on Chain-of-Thought reasoning, a method that has shown promise for enhancing the quality of LLM outputs, further emphasizes its relevance to the current landscape of prompt engineering. The introduction of FedLogic to navigate the complexities of multi-domain prompt selection and its emphasis on a theoretical framework and constraint incorporation suggests significant contributions to the field of prompt engineering. The only reason it doesn't score a full 10 is that the abstract does not mention 'hard prefix prompts,' which might be understood as a subset or a particular method within prompt engineering; the paper seems to focus more broadly on CoT prompts.",https://arxiv.org/pdf/2308.15324 -robust preference learning for storytelling via contrastive reinforcement learning,7,"The abstract describes an approach to controlled automated story generation that involves a level of prompt engineering, particularly in the fine-tuning phase using prompt-learning techniques. This suggests relevance to the study of prompt engineering, especially in the context of enhancing the robustness of a generative model's outputs with respect to user preferences. 
However, the focus of the study is on contrastive reinforcement learning rather than exclusively on hard prefix prompts or a detailed dissection of prompt engineering approaches. The relevance is therefore notable but not comprehensive concerning prompt engineering as a broad field.",http://arxiv.org/pdf/2210.07792 -using natural language explanations to rescale human judgments,7,"The abstract describes a study involving the use of large language models (LLMs) to rescale human judgment annotations based on natural language explanations. This is relevant to prompt engineering as it directly pertains to the optimization of LLM outputs through the integration of human feedback. Specifically, feeding Likert ratings and explanations into an LLM to homogenize ratings across annotators is a form of prompt design that guides the model to generate more consistent and possibly more reliable numeric scores. The technique is studied within the context of a specific NLP task (document-grounded question answering), and it addresses challenges inherent in subjective human evaluations which are critical for training and evaluating LLMs. The relevance is not rated higher because the study is more focused on the annotation process and the rescaling of human judgments rather than the construction of hard prefix prompts specifically.",http://arxiv.org/pdf/2305.14770 -who wrote it and why? prompting large-language models for authorship verification,9,"The abstract outlines a study that uses engineered prompts, specifically 'step-by-step stylometric explanation prompts,' as a key component of their proposed method (PromptAV) for authorship verification. This directly falls into the scope of prompt engineering studies as it involves designing prompts that enable a Large-Language Model to perform a specific task more effectively. The work not only engages with prompt design but also tackles the challenges of data efficiency and model interpretability, which are pertinent to the development and assessment of prompts in language models. The one point deduction is due to the possibility that the study may not encompass a 'comprehensive systematic review' on the topic, but rather presents a novel approach within the field.",https://arxiv.org/pdf/2310.08123 -context-aware prompt tuning for vision-language model with dual-alignment,9,"The abstract describes the development and application of a method called Dual-Aligned Prompt Tuning (DuAl-PT) in the context of vision-language models, which is highly relevant to the field of prompt engineering. Prompt engineering is a critical aspect of adapting large models to specific tasks, and the introduction of a novel method that utilizes both pre-trained language models and alignment techniques directly pertains to advancements in prompt engineering. The high relevance is underscored by the explicit focus on improving the efficiency and context-awareness of prompts, which are key goals in prompt engineering. 
The reason for not giving a perfect 10 is that the abstract does not focus on 'hard prefix prompts' specifically but rather on prompt learning methods in general, which encompasses a wider field than the specified study area.",https://arxiv.org/pdf/2309.04158 -automated assessment of comprehension strategies from self-explanations using llms,8,"The study's focus on leveraging open-source Large Language Models for the assessment of comprehension strategies is highly relevant to prompt engineering given that it employs the technique of fine-tuning LLMs and providing examples via prompts to improve performance. This is particularly pertinent to the field of prompt engineering as it directly involves strategies for optimizing the interaction with LLMs to achieve better outcomes in understanding and generating text. Although the study does not specifically mention 'hard prefix prompts', the practice of providing examples via the prompt and the implicit structuring of input to elicit specific types of responses are at the core of prompt engineering studies. Hence, the relevance to prompt engineering is quite significant, but not entirely focused on the 'hard prefix prompts' aspect, leading to a rating of 8.",https://www.mdpi.com/2078-2489/14/10/567/pdf?version=1697284775 -distilled language models are economically efficient for the enterprise. ...mostly.,7,"The abstract discusses the comparison of three strategies to specialize a Large Language Model (LLM) for enterprise use in assisting customer service agents, one of which is prompt engineering. While the main focus appears to be on the economic efficiency of using distilled language models, prompt engineering is directly mentioned as one of the methods assessed. Therefore, it is relevant from the perspective of comparing the effectiveness and costs of different methods of leveraging LLMs, including prompt engineering. However, the complete focus on prompt engineering is not evident, thus not deserving a full score.",http://arxiv.org/pdf/2306.07402 -curriculum prompt learning with self-training for abstractive dialogue summarization,8,"The paper presents a curriculum-based prompt learning method which is highly relevant to the field of prompt engineering. The method's gradual increase in prompt perturbation is particularly pertinent to the study of hard prefix prompts, as it deals with enhancing the model's understanding through strategically structured prompts. However, it doesn't focus exclusively on 'hard prefix prompts' but rather on prompt learning in general within the specific application of dialogue summarization. Thus, while the paper is relevant due to its focus on innovative prompt engineering techniques, the relevance is not perfect as the study does not solely center on hard prefix prompts per se.",https://aclanthology.org/2022.emnlp-main.72.pdf -meta-augmented prompt tuning for better few-shot learning,7,"The study mentioned in the abstract addresses issues related to prompt tuning, particularly in the context of few-shot learning. While prompt tuning is directly relevant to prompt engineering, the study focuses on soft prompts rather than hard prefix prompts. The proposed SUMMER framework seeks to improve the initialization and generalizability of soft prompts, which is suggestive of techniques that could potentially be applicable to a broader set of prompt engineering challenges. 
However, since the study is not specifically about hard prefix prompts, the relevance is significant but not direct, leading to a rating of 7.",http://arxiv.org/pdf/2303.12314 -learning to perform complex tasks through compositional fine-tuning of language models,7,"The abstract describes a method related to prompt engineering — compositional fine-tuning (CFT). While it does not directly address 'hard prefix prompts,' it does engage with the broader theme of structuring the interaction with language models to improve task performance. The work on CFT contributes to our understanding of how tasks can be decomposed and taught to language models, which is relevant to the study of how prompts can be designed and optimized. This is tangentially related to hard prefix prompts, as both are concerned with the efficacy of input structures for language models. However, the focus on CFT instead of hard prefix prompts directly means the relevance is significant but not complete.",http://arxiv.org/pdf/2210.12607 -cona: a novel context-aware instruction paradigm for communication using large language model,8,"The abstract discusses CONA, a context-aware instruction paradigm designed for effective knowledge dissemination with GPT models, which certainly falls under the broader category of prompt engineering, as it explores new methods for communication and interaction with LLMs. Despite not addressing 'hard prefix prompts' specifically, it presents a framework that utilizes the mechanisms of prompt engineering to optimize interactions with LLMs. However, the connection to 'hard prefix prompts' is not explicit, hence the rating is not a full 10.",http://arxiv.org/pdf/2305.18620 -incremental learning of humanoid robot behavior from natural interaction and large language models,7,"The study discusses the integration of Large Language Models (LLMs) into the behavior orchestration of a humanoid robot, focusing on natural-language interaction and incremental learning through feedback loops. While not directly focusing on hard prefix prompts, the concept of 'incremental prompt learning' is introduced, where the system learns and modifies its interactions based on human feedback. This relates to prompt engineering in the broader sense because it involves designing and refining prompts that the LLM uses to generate proper Python statements, which directly affect the robot's actions. However, the study does not appear to specifically address hard prefix prompts or a systematic review thereof, hence the score is not a full 10, reflecting its partial relevance to the specific area of prompt engineering mentioned in the initial query.",https://arxiv.org/pdf/2309.04316 -tree-planner: efficient close-loop task planning with large language models,8,"The paper discusses an approach to task planning with Large Language Models that includes the use of prompts to generate plans, which is closely linked to the concept of prompt engineering. While the focus is on efficiency and error reduction in iterative actions rather than the study of hard prefix prompts specifically, the principles of designing effective prompts are implicitly a part of the paper due to the need for clear and structured input to guide the LLMs' plan generation and decision-making processes. 
Therefore, the study is relevant to the broader context of prompt engineering, although it does not directly address a comprehensive systematic review on hard prefix prompts.",https://arxiv.org/pdf/2310.08582 -batchprompt: accomplish more with less,9,"The abstract describes research focused on improving the efficiency of large language model prompting through batching strategies, specifically 'BatchPrompt.' This is highly relevant to prompt engineering as it directly tackles the challenge of optimizing prompts for better performance in terms of processing time and resource consumption, which is a core aspect of prompt engineering. The introduction of strategies like Batch Permutation and Ensembling (BPE) and Self-reflection-guided Early Stopping (SEAS) to address performance issues associated with batching denotes a significant contribution to the field. The detailed experimental results showing comparative performance with traditional single-data prompting further underscore the relevance of this study to prompt engineering. The deduction of a point from a perfect score is due to the abstract's slightly broader focus on overall efficiency rather than the fine-grained specifics of prompt crafting. However, the study's outcome directly impacts prompt engineering practices for large language models.",https://arxiv.org/pdf/2309.00384 -modular and parameter-efficient multimodal fusion with prompting,7,"The paper discusses the use of prompt vectors to align modalities in multimodal fusion, which is relevant to the field of prompt engineering as it involves the use of prompts to achieve model efficiency and modularity. However, it may not directly address the exact concept of 'hard prefix prompts' as might be suggested by a 'comprehensive systematic review'. Nonetheless, the paper still contributes to the broader area of prompt engineering by exploring efficient alternatives to finetuning in multimodal pre-training, thus the rating is above average but not maximum.",http://arxiv.org/pdf/2203.08055 -attempt: parameter-efficient multi-task tuning via attentional mixtures of soft prompts,9,"The abstract presents a novel approach to multi-task learning in language models that leverages soft prompts—small prefix embedding vectors—for efficient parameter tuning. Given that the study explicitly addresses prompt engineering through soft prompts and their application in multi-task learning and knowledge transfer, it is highly relevant to the field of prompt engineering. The approach's efficiency and effectiveness in comparison to other tuning methods underscore its significance within the realm of prompt engineering studies. The score is not a perfect 10 because the focus is specifically on 'soft' prompts rather than 'hard' prompts as mentioned in your inquiry, suggesting a slightly wider scope than just hard prefix prompts.",https://aclanthology.org/2022.emnlp-main.446.pdf -effective structured prompting by meta-learning and representative verbalizer,8,"The provided abstract details the use of prompts in natural language processing with a focus on prompt tuning and the introduction of a new method called MetaPrompter. It relates directly to prompt engineering as it discusses the initialization of prompts, the use of meta-learning for task-specific prompts, and the creation of a more efficient system for prompt application in pre-trained MLMs. 
The relevance score is not a full 10 because the abstract does not specifically mention 'hard prefix prompts' which is the particular focus of the solicited comprehensive review. However, it discusses the broader field of prompt engineering and provides insights into the recent developments in prompt tuning techniques, which are pertinent to the study of hard prefix prompts.",http://arxiv.org/pdf/2306.00618 -prompting classes: exploring the power of prompt class learning in weakly supervised semantic segmentation,8,"The provided abstract details a study that explores prompt tuning in the context of weakly supervised semantic segmentation (WSSS), which is a specific application of prompt engineering. The focus on how the modification of text prompts can impact the Class Activation Map (CAM) and the introduction of a novel PrOmpt cLass lEarning (POLE) strategy demonstrate a direct relevance to prompt engineering as it pertains to adapting language-vision models to downstream tasks. While the study is specific to WSSS and does not cover the broader topic of 'hard prefix prompts' comprehensively, the principles and findings can contribute valuable insights into the broader field of prompt engineering, hence the high relevance rating.",https://arxiv.org/pdf/2307.00097 -rewoo: decoupling reasoning from observations for efficient augmented language models,7,"The study introduces ReWOO (Reasoning WithOut Observation) which aims to make Augmented Language Models more efficient by decoupling the reasoning process from knowledge retrieval. This approach could be highly relevant to prompt engineering, especially in complex systems that require prompt optimization to reduce computational costs and improve efficiency. Since the methodology addresses issues related to prompt redundancy and token optimization, it would contribute to the design of better-engineered prompts that effectively interact with external tools without unnecessary computational overhead. However, the study does not directly focus on 'hard prefix prompts' or the systematic review of various prompt types, therefore the relevance is notable but not absolute.",http://arxiv.org/pdf/2305.18323 -efficient domain adaptation of language models via adaptive tokenization,7,"The study discussed in the title 'efficient domain adaptation of language models via adaptive tokenization' is relevant to prompt engineering study to a significant extent. While it does not directly address 'hard prefix prompts', it focuses on improving the adaptation of language models to new domains, which is a related aspect of prompt engineering. The process of optimizing tokenizer behavior for domain-specific understanding can enhance prompt responses by tailoring model input to better represent contextual nuances. This indirect relation to prompt construction and optimization reflects an underlying relevance to prompt engineering, as tokenization is a foundational component that influences the quality of prompts and their interpretation by language models. Nevertheless, the study does not directly tackle prompt engineering methodologies or the systematic review of 'hard prefix prompts', thus the relevance is not maximal.",https://aclanthology.org/2021.sustainlp-1.16.pdf -parameter-efficient low-resource dialogue state tracking by prompt tuning,9,"The abstract discusses the use of soft prompt token embeddings, which is a technique within the paradigm of prompt engineering. 
Although it does not discuss 'hard prefix prompts' specifically, it relates closely to the topic as prompt tuning is a key area within prompt engineering studies. The research aims to enhance dialogue state tracking by using prompts to tune language models with fewer parameters, which is a direct application of prompt engineering principles. Therefore, the rating is high because it is very relevant to the broader field of prompt engineering, but not a perfect score as it does not directly pertain to 'hard prefix prompts'.",http://arxiv.org/pdf/2301.10915 -panda: prompt transfer meets knowledge distillation for efficient model adaptation,9,"The provided abstract and TLDR discuss research on prompt-tuning and prompt transfer (PoT) as methods for efficient model adaptation in the context of pretrained language models (PLMs), addressing the challenges with smaller PLMs and the innovation of a new approach named PANDA. Since prompt engineering studies how to design and use prompts to communicate effectively with language models, the mentioned techniques of prompt transfer and the novel PANDA approach are highly relevant to the field. It focuses on the optimization and enhancement of prompts, which is a core aspect of prompt engineering. The only reason the rating is not a 10 is because the study is narrower in scope, focusing on efficiency and specific techniques rather than a broader methodological investigation into prompt design or the theory behind prompt engineering.",http://arxiv.org/pdf/2208.10160 -toward efficient language model pretraining and downstream adaptation via self-evolution: a case study on superglue,7,"The relevance of this study to prompt engineering is moderate to high as it discusses the 'prompt transfer technique' which is a form of prompt engineering. This technique involves transferring knowledge from one task to another, which is central to the idea of adapting language models to various downstream tasks using prompts. The study's focus on leveraging this technique to improve low-resource tasks indicates that it involves modifying or engineering prompts to enhance performance, which is pertinent to the study of prompt engineering. However, the report does not seem to specifically address 'hard prefix prompts,' which was the explicit focus mentioned in your query. Therefore, the study is relevant due to its inclusion of prompt-based techniques, but not as high as it would be if it were centered on hard prefix prompts specifically.",https://arxiv.org/pdf/2212.01853 -degree: a data-efficient generation-based event extraction model,8,"The study appears highly relevant to prompt engineering as it involves the design of manual prompts to guide a data-efficient event extraction model, termed DEGREE. The model's dependency on these prompts for semantic guidance indicates that a significant portion of the research likely involves understanding and improving how prompts are constructed (prompt engineering) to better capture event arguments. Although the primary focus is event extraction, the reliance on manually designed prompts for model training and the discussion of prompt-encoded information suggest a substantial relevance to the field of prompt engineering.",https://aclanthology.org/2022.naacl-main.138.pdf -fedprompt: communication-efficient and privacy-preserving prompt tuning in federated learning,7,"The paper discusses prompt tuning within the context of federated learning, which directly relates to the broader field of prompt engineering. 
While it does not explicitly mention 'hard prefix prompts,' the study of prompt tuning techniques and their efficiency and privacy implications within federated learning frameworks adds to the understanding of how prompts can be optimized. Given that prompt engineering encompasses the exploration and application of prompts in various scenarios, the relevance is high. However, it is not rated a full 10 because the specific focus on communication efficiency and privacy in federated learning does not directly address the systematic review aspect of hard prefix prompts, which seems to be a more targeted area within the field of prompt engineering.",https://arxiv.org/pdf/2208.12268 -prompt tuning for parameter-efficient medical image segmentation,8,"The abstract presents a study on the application of prompt tuning, a concept closely related to prompt engineering, in the context of medical image segmentation. Although the study focuses on a specific application (parameter-efficient adaptations for semantic segmentation in medical imaging), it explores the use of prompts (learnable prompt tokens) to adapt a neural network model to new tasks without full model fine-tuning. Since prompt engineering involves techniques for efficiently integrating prompts in order to steer model behavior, albeit typically in the context of language models, this work's investigation into prompts in the UNet architecture for medical imaging is relevant to the broader study of prompt engineering principles and methods. The rating is not a full 10 because the study is highly specialized and may not directly address 'hard prefix prompts' or the specificities of prompt engineering in natural language processing, which often is the primary focus of prompt engineering literature.",https://arxiv.org/pdf/2211.09233 -rethinking visual prompt learning as masked visual token modeling,7,"The discussed paper is relevant to the study of prompt engineering, despite its focus on the vision domain rather than natural language processing (NLP). The paper introduces a method for visual prompt learning, which parallels the concept of prompt engineering in NLP by adapting pre-trained models to downstream tasks. The proposal of Visual Prompt learning as Masked visual Token Modeling (VPTM) to unify the form of pre-training and downstream tasks is conceptually similar to hard prompt methods in NLP that aim to bridge the gap between the two stages. Although the specific application to visual tasks might not directly correspond to textual 'hard prefix prompts,' the underlying principles of prompting and task reformulation involved in VPTM are relevant to the broader study of prompt engineering. The emphasis on consistency, robustness, and unified deployment also echoes concerns in prompt engineering research.",http://arxiv.org/pdf/2303.04998 -parameter-efficient tuning helps language model alignment,7,"The given abstract presents a method for aligning language models with human preferences by using a technique called 'alignMEnt with parameter-Efficient Tuning (MEET)'. This involves optimizing control tokens using parameter-efficient tuning strategies such as prompt tuning and low-rank adaptation, which is highly relevant to prompt engineering. The reference to 'control tokens' and 'hand-crafted prompts' directly relates to the design and engineering of prompts for tuning model behavior. 
The focus on parameter-efficiency is also pertinent to prompt engineering because it relates to optimizing the input given to models without overhauling the entire model architecture. However, the abstract does not specifically address 'hard prefix prompts' which would be the focus of a comprehensive systematic review on that topic. For this reason, the relevance is not rated a full 10, as it is more broadly about language model alignment with control tokens rather than narrowly focused on hard prefix prompts in prompt engineering.",https://arxiv.org/pdf/2310.00819 -prompt tuning for generative multimodal pretrained models,8,"This abstract is quite relevant to prompt engineering as it discusses 'prompt tuning', which is a specific method within the broader area of prompt engineering. Prompt tuning is a new paradigm where prompts are specifically crafted or optimized to improve the performance of pretrained models on various tasks. The focus on generative multimodal pretrained models suggests that the study addresses complex scenarios where prompt engineering could be crucial for model tuning. Despite the high relevance, the rating is not a complete 10 because the study seems to be more focused on implementing prompt tuning as a lightweight alternative to full model finetuning, rather than a comprehensive systematic review of hard prefix prompts as the original prompt might suggest.",http://arxiv.org/pdf/2208.02532 -opal: multimodal image generation for news illustration,7,"The paper's focus on a system named Opal that navigates the challenges of finding the right visual language for text prompts does relate to prompt engineering, particularly in multimodal AI contexts. Although the paper does not directly address 'hard prefix prompts,' it does deal with the structured creation of text prompts to guide AI in generating images, which is an essential part of prompt engineering. The relevance is high because prompt engineering is critical for effective human-AI co-creation, especially in text-to-image generation tasks. However, the paper centers more on the application of such a system for news illustrations rather than the theoretical or methodological aspects of prompt engineering study.",https://arxiv.org/pdf/2204.09007 -draw your art dream: diverse digital art synthesis with multimodal guided diffusion,7,"The paper presented addresses the usage of multimodal prompts which involve feeding a model with inputs from different modalities such as text and image, which aligns with the concept of 'prompt engineering' that typically involves crafting inputs to guide a model’s output. Although not directly focused on 'hard prefix prompts', the concept of using complex, multimodal inputs for guiding a diffusion model in digital art synthesis demonstrates advanced prompt techniques and is indirectly related to the engineering of prompts to achieve desired outcomes in AI systems. Hence, there is a significant relevance to prompt engineering, but it is not a perfect match as the primary study is not about hard prefix prompts in the context of systematic reviews.",https://dl.acm.org/doi/pdf/10.1145/3503161.3548282 -lvp-m3: language-aware visual prompt for multilingual multimodal machine translation,7,"The paper introduces a model LVP-M3 that utilizes visual prompts for the task of Multilingual Multimodal Machine Translation. 
While the study focuses primarily on translation and the integration of visual features for understanding context across multiple languages, the concept of 'visual prompts' does relate to the idea of 'prompt engineering' as it involves designing inputs to improve the machine's understanding and performance. Although these visual prompts are not 'hard prefix prompts' explicitly, the process of generating and utilizing prompts to enhance model performance overlaps with the broader theme of prompt engineering. Thus, the relevance is significant but not directly focused on the systematic study of hard prefix prompts, hence the rating of 7.",http://arxiv.org/pdf/2210.15461 -few-shot multimodal sentiment analysis based on multimodal probabilistic fusion prompts,7,"The study addresses prompt engineering to some extent by introducing a novel method that includes the design of 'unified multimodal prompts' to decrease discrepancies between different modalities in the few-shot sentiment analysis. This involves engineering prompts that cater to more than just textual data, integrating multimodal data which is a unique and relevant approach to prompt engineering. Additionally, the concept of 'probabilistic fusion method to fuse output predictions from multiple diverse prompts' indicates an advanced level of prompt engineering where different prompts and their predictions are combined. However, the study focuses more specifically on multimodal sentiment analysis and few-shot learning, rather than solely on prompt engineering or 'hard prefix prompts' as stated in the initial topic. Therefore, it is not exclusively aligned with the concept of 'hard prefix prompts' in prompt engineering studies but still significantly contributes to the broader domain of prompt engineering.",https://dl.acm.org/doi/pdf/10.1145/3581783.3612181 -π-tuning: transferring multimodal foundation models with optimal multi-task interpolation,7,"The abstract mentions compatibility with diverse types of parameter-efficient experts, including prompts, which implies that the study covers aspects of prompt engineering. However, the focus seems to be broader, targeting transfer learning methods in general rather than specifically on 'hard prefix prompts'. Thus, while it has relevance due to its inclusion of prompts within the scope of parameter-efficient transfer learning, it's not solely dedicated to prompt engineering, leading to a rating of 7.",http://arxiv.org/pdf/2304.14381 -mass-producing failures of multimodal systems with language models,7,"The abstract describes a novel system, MultiMon, which involves in part the use of language models to identify and generate natural language descriptions of patterns of failures in multimodal systems. This bears relevance to the prompt engineering field since the process includes feeding certain inputs (prompts) to a language model to elicit descriptive outputs regarding the failures. However, the main focus appears to be on the identification of systematic failures in multimodal systems rather than the study of hard prefix prompts themselves. Thus, while related to prompt engineering in the context of multimodal system failure analysis, it is not entirely centered on a comprehensive study of prompts or their structures.",http://arxiv.org/pdf/2306.12105 -multimodal prompt learning in emotion recognition using context and audio information,7,"The study is relevant to prompt engineering due to its focus on improving language models' performance using prompt learning techniques. 
Although it primarily deals with multimodal sources (text and audio) rather than being strictly about hard prefix prompts, it addresses the aspect of how prompts are engineered to enhance a pre-trained model's ability to perform specific tasks, in this case, emotion recognition. The study proposes a method for prompt learning that considers the context and emotional information, which is a valuable insight into prompt engineering for specialized tasks. However, the relevance is not at the maximum because the study diverges from hard prefix prompts specifically to a broader application of prompts in multimodal learning.",https://www.mdpi.com/2227-7390/11/13/2908/pdf?version=1688017556 -multimodal parameter-efficient few-shot class incremental learning,7,"The abstract mentions the use of 'learnable prompts for both the language and vision encoders' in the proposed Continual Parameter-Efficient CLIP (CPE-CLIP) model, which directly relates to prompt engineering. While the main focus is on Few-Shot Class Incremental Learning (FSCIL) and the use of CLIP for transfer learning across sessions, the mention of learnable prompts indicates that prompt engineering is a component in the study's approach to improve performance in learning tasks. However, since prompt engineering is not the central theme but rather a part of the methodology, the relevance rating is a 7.",http://arxiv.org/pdf/2303.04751 -multitask instruction-based prompting for fallacy recognition,8,"The abstract describes a study on how instruction-based prompting in a multitask setup can improve the recognition of fallacies by computational models. This is highly relevant to prompt engineering as it explores the construction and optimization of prompts to enhance model performance. The use of a multitask setup indicates a sophisticated approach to prompt engineering which is likely to be of interest to those studying prompt design. However, the focus on fallacy recognition means the research is specialized and may not cover all areas of interest within the broader field of prompt engineering.",http://arxiv.org/pdf/2301.09992 -when do you need chain-of-thought prompting for chatgpt?,8,"The abstract discusses the performance and challenges of Chain-of-Thought prompting for ChatGPT, which is directly related to the field of prompt engineering. It explores the limitations and potential of CoT instructions in improving LLM output, providing insights into instruction-based finetuning. The analysis of instruction memorization and potential dataset leakage is crucial for understanding how to engineer prompts effectively for different tasks. Despite not focusing specifically on 'hard prefix prompts,' the study provides valuable information for prompt engineering in a broader sense, which is why it does not receive a perfect score.",http://arxiv.org/pdf/2304.03262 -coder reviewer reranking for code generation,8,"The abstract describes an advanced technique in prompt engineering where two models are used in tandem for code generation – a 'Coder' model to generate programs and a 'Reviewer' model to evaluate these programs. This process of generating and reranking outputs based on prompt-engineered models is clearly relevant to the study of prompt engineering. The methodology explores optimizing the interaction between these models to produce better results, which is a critical part of prompt engineering – refining inputs and evaluating outputs to improve performance. 
The reason why the rating is not a full 10 is because the abstract focuses on the application of prompt engineering to code generation, which may be a subset of the broader prompt engineering field. However, the principles and techniques exemplified are directly applicable to prompt engineering studies.",https://arxiv.org/pdf/2211.16490 -dualprompt: complementary prompting for rehearsal-free continual learning,8,"The content of the abstract is highly relevant to prompt engineering study because it discusses a novel framework called DualPrompt, which involves learning a tiny set of parameters (prompts) that instruct a pre-trained model on handling new tasks sequentially without the need for rehearsing previous tasks. This approach to prompt engineering is significant as it addresses the challenge of catastrophic forgetting in continual learning models and does so without the need for storing old examples, hence respecting privacy and memory constraints. The abstract focuses on the application of prompt learning in the context of continual learning models, which is a subset of the broader prompt engineering field. The rating is not a full 10 because the study is specific to the continual learning application and may not cover all possible aspects or methodologies of prompt engineering, especially those outside the scope of continual learning.",http://arxiv.org/pdf/2204.04799 -editeval: an instruction-based benchmark for text improvements,7,"The provided abstract discusses 'EditEval', which is an evaluation suite for text generation models, specifically focusing on their editing capabilities. While it does not directly address 'hard prefix prompts' or 'prompt engineering', its core concept of evaluating and optimizing text generation models is relevant to the field. The study examines InstructGPT and PEER models in the context of editing tasks and acknowledges the challenges in prompt optimization. This can inform prompt engineering studies by providing insights into how models respond to instructions and the issues with current metrics, therefore facilitating the creation of better prompts for model evaluations. However, the direct application to hard prefix prompts is tangential and not the central focus of the study, which affects the overall relevance rating.",http://arxiv.org/pdf/2209.13331 -promptsource: an integrated development environment and repository for natural language prompts,9,"The paper describes 'PromptSource', a system designed specifically for creating, sharing, and using natural language prompts, which is central to the concept of prompt engineering. The discussion of a templating language, a user interface for prompt development, and community-driven guidelines directly concerns the practice of prompt engineering. Although the article does not specifically address 'hard prefix prompts' but rather prompts in general, its relevance to the broader field of prompt engineering is significant and should be highly informative for those studying various aspects of prompt design and usage in natural language processing (NLP). Therefore, it receives a high relevance rating of 9.",https://aclanthology.org/2022.acl-demo.9.pdf -adversarial soft prompt tuning for cross-domain sentiment analysis,7,"The study presents advancements in prompt tuning, specifically Adversarial Soft Prompt Tuning for cross-domain sentiment analysis, which is relevant to the field of prompt engineering, as it involves learning to use prompts effectively with language models. 
Although the study focuses on soft prompts rather than hard prefix prompts, the underlying principles of prompt design and its impact on model performance are highly pertinent to the broader topic of prompt engineering. The approach of using separate prompts for different domains connects to the customization and optimization of prompts for specific tasks. However, the relevance is not rated higher because the prompt mentioned here is 'soft', while the systematic review in question specifically targets 'hard prefix prompts'. Therefore, there is a slight mismatch, but the study still holds value for those exploring the varying applications and methodologies of prompt tuning in language models.",https://aclanthology.org/2022.acl-long.174.pdf -prompt-based rule discovery and boosting for interactive weakly-supervised learning,8,"The paper discusses a method for iteratively discovering novel labeling rules via prompts in the context of weakly-supervised learning. While not directly focused on 'hard prefix prompts', it does revolve around the use of prompts for generating rules and improving models, which is a vital component of prompt engineering. The study is relevant because it deals with the automated generation and refinement of prompts, which is closely related to the analysis and application of prompt effectiveness and efficiency, key considerations in prompt engineering studies. The rating is not a full 10, as the paper's abstract does not specify a focus on 'hard prefix prompts' specifically, but rather on a broader application of rule discovery using prompts.",http://arxiv.org/pdf/2203.09735 -hpt: hierarchy-aware prompt tuning for hierarchical text classification,8,"The given title and abstract provide information about a technique called Hierarchy-aware Prompt Tuning (HPT) for hierarchical text classification. Although this method is focused on a specific task - hierarchical text classification - rather than prompt engineering in general, the concept of 'prompt tuning' is highly relevant to the broader field of prompt engineering. HPT involves constructing dynamic virtual templates and label words as soft prompts, which are essentially a form of prompt engineering tailored to incorporate hierarchical information into the learning process of a PLM. Therefore, the study is quite pertinent to prompt engineering, particularly within the domain of improving model performance for complex classification tasks involving label hierarchies. It doesn't address a 'hard prefix prompt' specifically, which would be an exact match to the search query, but still has significant relevance due to its focus on prompt tuning methodologies.",http://arxiv.org/pdf/2204.13413 -ptau: prompt tuning for attributing unanswerable questions,8,"The presented study 'ptau: prompt tuning for attributing unanswerable questions' is highly relevant to prompt engineering as it directly deals with the development of a system that leverages the concept of prompt tuning. The introduction of a cause-oriented template module for constructing continuous templates in a high-dimensional space and a semantics-aware label module through contrastive learning are indicative of advanced techniques in prompt engineering. 
Although the study's primary focus is question answering systems and their ability to identify unanswerable questions, the methods used for prompt tuning are applicable and insightful for the broader field of prompt engineering.",https://dl.acm.org/doi/pdf/10.1145/3477495.3532048 -continuous prompt tuning based textual entailment model for e-commerce entity typing,8,"The study is highly relevant to prompt engineering as it discusses a novel application of continuous prompt tuning, which is a subset of prompt engineering, in the context of e-commerce entity typing. The approach of reformulating entity typing into a textual entailment problem with the use of prompts indicates a significant contribution towards the field of prompt engineering. The automatic generation of hypotheses using prompt tuning is particularly pertinent, although the study's focus is more narrowly on textual entailment in the e-commerce domain rather than hard prefix prompts in general. Nonetheless, since prompt engineering techniques are pivotal in the study, it merits a relatively high score.",https://arxiv.org/pdf/2211.02483 -taxoprompt: a prompt-based generation method with taxonomic context for self-supervised taxonomy expansion,8,"The paper presents 'TaxoPrompt,' a framework for taxonomy expansion leveraging prompt tuning, which is directly related to prompt engineering. Although the focus is more specifically on incorporating taxonomic context rather than hard prefix prompts in a broad sense, the methodological approach to enhancing prompt templates and its use in a hierarchical classification context mean that the paper offers relevant insights into the application and development of prompt-engineering techniques.",https://www.ijcai.org/proceedings/2022/0615.pdf -bi-directional iterative prompt-tuning for event argument extraction,9,"The given abstract is highly relevant to prompt engineering study as it directly pertains to the development of a new prompt-tuning method for a specific NLP task, which is event argument extraction (EAE). The bi-directional iterative prompt-tuning approach uses cloze-style tasks and entity information, both key elements in the prompt engineering process. Moreover, the focus on improving interaction with pre-trained language models (PLMs) by considering the context of entities and the roles of arguments during prompt construction are advancements directly applicable to the field of prompt engineering. The only reason it did not receive a 10 is that it is specialized towards EAE rather than prompt engineering in general.",https://arxiv.org/pdf/2210.15843 -schema-aware reference as prompt improves data-efficient knowledge graph construction,9,"The abstract discusses a new approach to improve data-efficient knowledge graph construction through the use of 'schema-aware Reference As Prompt (RAP)' which directly concerns the engineering of prompts to bridge the gap between natural language and structured knowledge. This is highly relevant to prompt engineering study as it proposes a method that advances the way prompts can be utilized in a practical application, namely knowledge graph construction. 
The only reason it is not a perfect 10 is that it does not cover the broader scope of prompt engineering but rather focuses on a specific application within the field.",https://arxiv.org/pdf/2210.10709 -prompt tuning for multi-label text classification: how to link exercises to knowledge concepts?,9,"The abstract describes the development and application of a prompt tuning method specifically for multi-label text classification, which is highly relevant to the field of prompt engineering. Prompt tuning is a technique within natural language processing that is used to adapt language models to specific tasks without the need for extensive training data. Since the study explores the use of prompt tuning to connect exercises to knowledge concepts, it contributes directly to advancing the methodologies within the area of prompt engineering. The high relevance score reflects the direct applicability of the findings to the study of prompt engineering, albeit the study doesn't focus on 'hard prefix prompts' specifically but on prompt tuning for a related task.",https://www.mdpi.com/2076-3417/12/20/10363/pdf?version=1666593518 -a prompt based approach for euphemism detection,8,"The abstract describes a study that involves developing prompts and verbalizers for euphemism detection, which is directly connected to prompt engineering. Prompt tuning is a subset of prompt engineering, and the use of templates indicates that the study engages in engineering prompts to elicit specific responses from a language model. However, the study is focused more on the specific application of euphemism detection rather than the broader topic of 'hard prefix prompts', so it may not cover all aspects of prompt engineering study, thus not receiving a perfect score.",https://aclanthology.org/2022.flp-1.2.pdf -scene-aware prompt for multi-modal dialogue understanding and generation,7,"The abstract discusses the use of a 'scene-aware prompt' in the context of multi-modal dialogue understanding and generation, which falls under the broader domain of prompt engineering as it pertains to enhancing AI's interaction with multi-modal data. Although it does not specifically address 'hard prefix prompts'—a more nuanced aspect of prompt design often associated with transformer-based language models—it does relate to the application and structuring of prompts for improved AI performance in a given task. Therefore, the relevance is moderate because it demonstrates an application of prompt engineering in a specific NLP contest, however, it is not directly focused on the study of prompt engineering as a standalone subject.",http://arxiv.org/pdf/2207.01823 -label prompt for multi-label text classification,8,"The abstract describes a model for multi-label text classification that uses a form of prompt learning for pre-trained language models. The relevance to prompt engineering is high because it involves designing templates (prompts) that integrate labels into the input of a pre-trained language model and optimizes it using Masked Language Models (MLM), which is a technique related to prompt engineering. The mention of designing a set of templates directly relates to the construction of prompts, which is a core aspect of prompt engineering. 
The rating isn't a full 10 because the information provided does not indicate if the study includes a 'comprehensive systematic review' or a focus on 'hard prefix prompts' specifically, as mentioned in the study topic.",http://arxiv.org/pdf/2106.10076 -graphprompt: biomedical entity normalization using graph-based prompt templates,8,"The paper introduces 'GraphPrompt', which is a prompt-based learning approach that operates within the domain of prompt engineering. It specifically creates prompt templates according to graph structures, which is directly related to engineering prompts to improve biomedical entity normalization. While the study is not about 'hard prefix prompts' in a general sense, the design and utilization of prompts is core to the paper, hence the high relevance score. The focus on a specific application (biomedical entity normalization) and the lack of a direct mention of 'hard prefix prompt' impacts the relevance rating mildly, preventing a full score.",https://www.biorxiv.org/content/biorxiv/early/2021/12/01/2021.11.29.470486.full.pdf -"promptaid: prompt exploration, perturbation, testing and iteration using visual analytics for large language models",9,"The provided title and abstract describe a visual analytics system, PromptAid, aimed at assisting users in the creation, refinement, and testing of prompts for Large Language Models. The systems focus on interactive prompt exploration, perturbation, and iteration, which are central to the process of prompt engineering. The relevance to prompt engineering is high, as the paper's aim is to directly address challenges involved in crafting and refining prompts. Despite not specifically mentioning 'hard prefix prompts', the broad nature of the study on modifying prompts to improve task performance and its attention to the usability by non-experts make it highly relevant. Nevertheless, the rating is not a full 10 as the information provided does not indicate if hard prefix prompts were specifically considered or the primary focus of the study.",http://arxiv.org/pdf/2304.01964 -few-shot table-to-text generation with prompt-based adapter,9,"The paper presents a novel method for enhancing table-to-text generation in few-shot learning conditions by using a Prompt-based Adapter (PA) to incorporate domain-specific knowledge and bridge the structure gap between tables and text. This is highly relevant to the field of prompt engineering as it involves designing and using prompt templates to augment a language model's capabilities, which is a core concept within prompt engineering. The adaptation of prompts to improve the efficiency of models in specific tasks underlines the important role that prompts play in tailoring pre-trained language models to specialized applications. Therefore, the paper is of high relevance to studies on prompt engineering, particularly in the context of knowledge augmentation and few-shot learning scenarios.",https://arxiv.org/pdf/2302.12468 -graphprompt: graph-based prompt templates for biomedical synonym prediction,9,"The abstract describes a novel use of prompt-based learning specific to the task of biomedical synonym prediction. The study's focus on creating prompt templates derived from graph features directly aligns with prompt engineering by designing, tailoring, and applying prompts to specialized tasks. This approach is beneficial for expanding the understanding and applications of prompt engineering within biomedical datasets and is very relevant to studies on prompt engineering methods. 
The only reason it does not receive a full score is that it may not cover the broader aspects of prompt engineering across different domains but is highly relevant within its specified context.",https://ojs.aaai.org/index.php/AAAI/article/download/26256/26028 -prompt middleware: mapping prompts for large language models to ui affordances,9,"The described study is highly relevant to prompt engineering as it focuses on a framework (Prompt Middleware) to systematically generate prompts for large language models based on user interface affordances. The research specifically addresses static prompts, template-based prompts, and free-form prompts, all of which are direct aspects of prompt engineering. The application in a practical UI setting (FeedbackBuffet) and the discussion on development integration further emphasize its significance in the field. The reason for not giving a full score of 10 is because the paper might not cover the 'hard prefix prompts' as explicitly as the term implies, but rather discusses a broader scope of integrating prompts into UIs.",http://arxiv.org/pdf/2307.01142 -clickprompt: ctr models are strong prompt generators for adapting language models to ctr prediction,8,"The paper introduces a novel method for integrating CTR prediction models with language models through the use of prompt engineering, in this case, the generation of 'soft prompts' based on a CTR model. This is highly relevant to the field of prompt engineering as it directly involves the creation and utilization of prompts to enhance the performance of language models in a specific task. The score is not a perfect 10 because the focus is specifically on CTR prediction, which is a narrower application within the broader scope of prompt engineering studies.",https://arxiv.org/pdf/2310.09234 -this prompt is measuring : evaluating bias evaluation in language models,7,"The abstract provided discusses evaluating bias in language models by using prompts and templates, which is relevant to prompt engineering as it involves the design and analysis of prompts to diagnose social biases in NLP systems. The study contributes to the broader field of prompt engineering by highlighting the importance of carefully crafting prompts to achieve specific measurement goals in bias evaluation. The relevance is not maximum because the study is specifically focusing on the bias aspect rather than a comprehensive review of various uses and types of hard prefix prompts, but it is still significantly related to the overall endeavor of prompt engineering.",http://arxiv.org/pdf/2305.12757 -prompt tuning with contradictory intentions for sarcasm recognition,9,"The abstract discusses an advanced application of prompt tuning specifically designed for sarcasm recognition in NLP. It directly tackles the challenges of engineering prompts for a specialized task, which is highly relevant to studies on prompt engineering. The work's focus on incorporating domain-specific knowledge (contradictory intentions) into the prompts makes it particularly pertinent to the nuances involved in prompt engineering for complex language tasks. 
It is rated 9 instead of 10 because the abstract does not mention 'hard prefix prompts', the specific type of prompt the original query seemed to be interested in, but it still stays within the broader field of prompt engineering.",https://aclanthology.org/2023.eacl-main.25.pdf -grammar correction for multiple errors in chinese based on prompt templates,9,"The given abstract describes a novel grammar error correction method that leverages prompt templates, making it highly relevant to prompt engineering studies. A key aspect of prompt engineering is designing effective prompts that interact optimally with language models, as seen with the use of BERT here. The proposed dynamic updating of templates is a specific application of prompt engineering to improve NLP tasks, showcasing how tweaks in prompt strategy can significantly enhance model performance. This research does not study hard prefix prompts but still falls under the broader domain of prompt engineering, hence the rating of 9 rather than a perfect 10.",https://www.mdpi.com/2076-3417/13/15/8858/pdf?version=1690869487 -teprompt: task enlightenment prompt learning for implicit discourse relation recognition,8,"The presented abstract discusses the development and use of a model called TEPrompt for the task of Implicit Discourse Relation Recognition (IDRR), which explicitly involves the concept of prompt learning. This fits within the realm of prompt engineering as it focuses on the design of prompts for specific tasks (DRR, SSC, ACP) which improve the performance of the main task (IDRR). The systematic review of 'hard prefix prompts' could potentially cover such applications of prompt learning in natural language processing tasks. However, the abstract does not directly discuss 'hard prefix prompts' specifically but rather a variant of prompt learning which makes it somewhat less directly relevant for a study exclusively focused on that area. Therefore, the rating is high but not maximum.",http://arxiv.org/pdf/2305.10866 -cover: a heuristic greedy adversarial attack on prompt-based learning in language models,8,"The abstract is highly relevant to prompt engineering as it discusses the vulnerabilities in prompt-based learning, a key component of prompt engineering. It focuses on how adversarial attacks can affect manual templates used within pre-trained language models, which is crucial for understanding the robustness and security of prompts. However, the study's primary concern is adversarial attacks rather than the design or optimization of prompts, hence the rating is not a perfect 10.",https://arxiv.org/pdf/2306.05659 -low-resource multi-granularity academic function recognition based on multiple prompt knowledge,9,"The abstract demonstrates a direct application of prompt engineering by introducing Mix Prompt Tuning (MPT), which uses both manual and automatically learned prompt templates to improve the effectiveness of pre-trained language models in classifying academic functions with limited annotated data. This is highly relevant to the study of prompt engineering as it explores a practical use-case and contributes to the body of knowledge on how prompt strategies can be utilized to enhance model performance in low-resource settings.",http://arxiv.org/pdf/2305.03287 -ground-truth labels matter: a deeper look into input-label demonstrations,7,"The study focuses on the impact of accurate ground-truth labels within the context of in-context learning (ICL), which is a significant component of prompt engineering for AI models. 
Accurate inputs and labels are critical for training models effectively, and the introduction of metrics like Label-Correctness Sensitivity and Ground-truth Label Effect Ratio can shed light on prompt design strategies. However, since the study seems to focus more on the labels rather than the prompts (the 'hard prefix prompts' mentioned in the initial query), it is not fully centered on prompt engineering. Thus, it receives a medium-high relevance rating, indicating that it is quite relevant but not entirely focused on the specified aspect of prompt engineering.",http://arxiv.org/pdf/2205.12685 -not all languages are created equal in llms: improving multilingual capability by cross-lingual-thought prompting,9,"The study introduces a method of prompt engineering named cross-lingual-thought prompting (XLT) which directly pertains to improving the efficacy of prompt-based tasks in Large Language Models (LLMs) across multiple languages. Given that the study focuses on a specialized prompting technique to enhance language model capabilities, it is highly relevant to the field of prompt engineering. The reason for not giving a full score is that the abstract does not describe 'hard prefix prompts' specifically, but rather a prompt engineering strategy for multilingual models.",http://arxiv.org/pdf/2305.07004 -unihd at tsar-2022 shared task: is compute all we need for lexical simplification?,8,"The title and abstract of the paper are highly relevant to prompt engineering as they detail the use of prompted GPT-3 responses for lexical simplification, which is an application of prompt engineering. The study investigates the efficacy of using prompts to guide a state-of-the-art language model in performing a specific task, thereby contributing to the field of prompt engineering by exploring the potential and limitations of different prompting techniques. The fact that the research describes differing levels of context within the prompts and examines their impact in a competitive setting (TSAR-2022 shared task) is particularly pertinent to the study of how prompts can be optimized for performance. The rating isn't a full 10 because the study focuses on lexical simplification rather than a broad examination of all possible applications of prompt engineering.",http://arxiv.org/pdf/2301.01764 -using natural sentence prompts for understanding biases in language models,8,"The study is highly relevant to prompt engineering as it explicitly addresses the design and use of prompts to evaluate biases in language models. It discusses the impact of different types of prompts (template-based vs natural sentence prompts) on bias assessments in language models, which is a crucial aspect of prompt engineering. The paper's focus on real-world natural sentences for generating prompts also aligns with the current direction in prompt engineering of using more contextually rich and realistic data. Although it doesn't specifically mention 'hard prefix prompts,' the general theme of prompt design and its implications on model behavior makes it relevant to the field of prompt engineering studies. 
The rating is not a full 10 as the abstract specifies a focus on gender-occupation biases, which is slightly more specific than general prompt engineering.",https://arxiv.org/pdf/2205.06303 -domain knowledge matters: improving prompts with fix templates for repairing python type errors,8,"The given abstract directly relates to prompt engineering as it discusses 'TypeFix,' which is a novel approach for improving prompts with domain knowledge fix templates specifically for Python type error repair tasks. This study is highly relevant to prompt engineering because it explores how to enhance prompts efficacy through automatic methods. It delves into using domain-specific knowledge to refine and adapt prompts to increase their effectiveness in a programming context, thus it scores an 8 instead of 10 because it is very specific to the domain of type error repair rather than general prompt engineering.",http://arxiv.org/pdf/2306.01394 -citeprompt: using prompts to identify citation intent in scientific papers,9,"The study is highly relevant to prompt engineering as it involves the development of a tool, Citeprompt, that utilizes prompt learning for citation intent classification. Prompt learning, as a part of prompt engineering, concerns the design of inputs that effectively leverage pretrained language models to perform specific tasks. The research focuses on the choice of prompt templates and verbalizers, which are essential components of prompt engineering. The improvements reported over baseline models and the exploration into few-shot and zero-shot settings underscore its significant contribution to the field of prompt engineering.",https://arxiv.org/pdf/2304.12730 -extracting structured seed-mediated gold nanorod growth procedures from literature with gpt-3,7,"The relevance to prompt engineering study is moderate to high. This abstract describes a practical application of prompt engineering, where the GPT-3 language model is used to interpret and structure unstructured scientific text data into a useful format (JSON documents). While the study is not solely focused on the theory of hard prefix prompts, it does involve the fine-tuning of prompts with the GPT-3 model to achieve specific outcomes. Therefore, the study contributes to the broader field of prompt engineering by showcasing how prompts can be designed and leveraged to extract complex information from literature, which is a subset of the prompt engineering domain.",http://arxiv.org/pdf/2304.13846 -prompting for automatic log template extraction,8,"The content is highly relevant to prompt engineering study due to the core focus on leveraging the in-context inference capabilities of large language models for log parsing. The precise framework, LogDiv, that is introduced, is a direct application of prompt engineering where log examples are used as prompts to extract information. This aligns with the concept of 'hard prefix prompts' as it uses a structured approach to guide the language model's output towards the generation of log templates. 
The rating is not a full 10 because the abstract mostly concerns log parsing rather than the broader scope of prompt engineering, but the techniques and findings are still very much applicable to the field.",https://arxiv.org/pdf/2307.09950 -dspy: compiling declarative language model calls into self-improving pipelines,9,"The abstract describes a programming model (DSPy) that deals with the creation and optimization of language model pipelines using declarative modules, which is closely related to prompt engineering. The abstraction of LM pipelines as text transformation graphs directly involves the crafting and application of prompts to achieve specific computational tasks. The optimization of pipelines to maximize performance metrics is also a key aspect of prompt engineering, as it relates to refining prompts for better outcomes. The introduction of a systematic approach with modules that can learn and improve over time suggests a significant relevance to the study and advancement of prompt engineering. Therefore, I have rated its relevance as high but not the maximum because the abstract does not discuss 'hard prefix prompts' specifically, which was the focus of the original prompt.",https://arxiv.org/pdf/2310.03714 -role knowledge prompting for document-level event argument extraction,7,The paper presents a new model for Document-level Event Argument Extraction (DEAE) which is relevant to prompt engineering as it discusses enhancing the interaction between templates (prompts) and roles for pretrained language models (PLMs). The use of a role knowledge guidance mechanism to aid PLMs in understanding semantics and generating arguments can be considered a contribution to the field of prompt engineering. The relevance is not at the highest level because the focus is on a specific application of prompt engineering within document-level event argument extraction rather than on prompt engineering more generally or on 'hard prefix prompts' as an overarching concept.,https://www.mdpi.com/2076-3417/13/5/3041/pdf?version=1677492694 -cot-bert: enhancing unsupervised sentence representation through chain-of-thought,8,"The abstract details the use of prompt engineering as a part of a two-stage approach for sentence representation learning with CoT-BERT, which suggests a direct relationship to the field of study. While prompt engineering is not the sole focus, it is integral to the proposed method's success, indicating high relevance. However, the abstract does not focus solely on hard prefix prompts, which would be necessary for a rating of 10.",https://arxiv.org/pdf/2309.11143 -advanced prompting as a catalyst: empowering large language models in the management of gastrointestinal cancers,9,"The abstract described relates directly to prompt engineering, as it discusses how different prompting strategies can affect the performance of Large Language Models (LLMs) in a specified domain, which is gastrointestinal oncology. The investigation of varying types of prompts, the development of an evaluation system, and the focus on optimizing LLMs' performance in medical scenarios demonstrate a high level of relevance to the field of prompt engineering. 
The reason for not rating it a perfect 10 is that the study's focus is on one specific application area within healthcare rather than a broad exploration of prompt engineering in multiple contexts.",https://www.the-innovation.org/data/article/export-pdf?id=64db4fd54228a72545780714 -towards robust nlg bias evaluation with syntactically-diverse prompts,9,"The presented study is highly relevant to prompt engineering as it directly addresses the impact of syntactic variations in prompts on the output of NLG systems. It critiques the standard practice of using fixed templates for bias analysis and demonstrates the importance of diversifying prompt structures to obtain more reliable and representative outcomes. This research aligns with the motives of prompt engineering, which include understanding and optimizing how different prompts affect the behavior of language models.",https://arxiv.org/pdf/2212.01700 -daprompt: deterministic assumption prompt learning for event causality identification,8,"The paper 'daprompt: deterministic assumption prompt learning for event causality identification' is highly relevant to prompt engineering as it discusses the design and implementation of a novel prompt learning method for a specific NLP task (ECI). The focus on the deterministic assumption in prompt learning directly feeds into the broader discussion of how to engineer prompts for better utilization of pre-trained language models. While the study is not about hard prefix prompts in general, it contributes to the field of prompt engineering by exploring an alternative approach to conventional prompt design, thus the rating of 8.",https://arxiv.org/pdf/2307.09813 -enhancing cross-lingual natural language inference by soft prompting with multilingual verbalizer,8,"The study discusses soft prompt learning within the context of cross-lingual natural language inference, which is related to the field of prompt engineering. Although this is not specifically about 'hard prefix prompts,' soft prompting is an alternative prompting approach, and understanding it can contribute to the field of prompt engineering by offering insights into different methods of designing prompts. Furthermore, the study mentions the limitations of hard prompts, which implies a comparison that can be informative for prompt engineering studies. The rating is not a full 10 because the direct focus on 'hard prefix prompts' is lacking, but it is still highly relevant due to its implications for the broader field of prompt engineering.",http://arxiv.org/pdf/2305.12761 -exploring prompts in few-shot cross-linguistic topic classification scenarios,9,"The abstract describes research directly related to prompt engineering, specifically addressing the challenge of creating efficient prompts for few-shot learning in cross-linguistic scenarios. The study's exploration of discrete, continuous, and hybrid prompts, and their impact on model performance, makes it highly relevant to the field of prompt engineering. The deduction of one point is due to the abstract not mentioning 'hard prefix prompts' specifically, but it is otherwise very pertinent to the prompt engineering domain.",https://www.mdpi.com/2076-3417/13/17/9944/pdf?version=1693814326 -random word retrieval for automatic story generation,7,"The paper's relevance to prompt engineering study is moderately high. It discusses automatic story generation using a method that mimics human writing prompts. 
The concept of leveraging random words as prompts and then using the internet to provide context aligns with aspects of prompt engineering, which involves creating stimuli that guide the output of generative models. While the paper focuses primarily on story generation rather than the intricacies of engineering prompts, the approach contributes to understanding how prompts can be constructed to initiate a creative process in AI systems. Hence, it offers insights applicable to prompt engineering, even if that is not the main focus of the study.",https://scholarworks.bridgeport.edu/xmlui/bitstream/123456789/545/3/FRD_RColon_Story_Gen_Poster_Mar03_2014.pdf diff --git a/data/semantic_scholar_data/semantic_scholar_relevant_papers_without_pdf.csv b/data/semantic_scholar_data/semantic_scholar_relevant_papers_without_pdf.csv deleted file mode 100644 index 34ef976..0000000 --- a/data/semantic_scholar_data/semantic_scholar_relevant_papers_without_pdf.csv +++ /dev/null @@ -1,329 +0,0 @@ -Title,Probability,Reasoning -latent jailbreak: a test suite for evaluating both text safety and output robustness of large language models,7,"The paper's primary focus is on evaluating the safety and robustness of large language models (LLMs), which is relevant to prompt engineering as it deals with how different prompts (including those that contain harmful or malicious content) can affect the performance of LLMs. The concept of 'latent jailbreak' and the creation of a benchmark that includes 'malicious instruction embedding' directly relates to the study of prompts, particularly 'hard prefixes' which could be considered a form of adversarial input designed to test the limits of the model's behavior. This relevance is crucial because ensuring that models perform consistently well and generate safe content across a variety of prompt types is a key aspect of prompt engineering. However, it does not directly discuss the 'hard prefix prompts' in a systematic review context but rather the safety and robustness in a broader sense, hence the rating does not reach the maximum." -autodan: automatic and interpretable adversarial attacks on large language models,8,"The paper describes an adversarial attack method named 'AutoDAN' that is highly relevant to prompt engineering as it involves the generation of attack prompts, a form of input manipulation which is a key aspect of prompt engineering. This research contributes to a deeper understanding of Large Language Model vulnerabilities and strategies that can be used to manipulate model outputs, which is pertinent to the field of prompt engineering. However, the study is focused specifically on adversarial attacks rather than the broader topic of 'hard prefix prompts,' therefore it does not fully align with systematic review studies on prompt engineering techniques in general, which might include non-adversarial methods and a wider range of applications. Hence, the rating is high but not at the maximum." -"recommendation as language processing (rlp): a unified pretrain, personalized prompt & predict paradigm (p5)",7,"The abstract describes a 'Pretrain, Personalized Prompt, and Predict Paradigm' (P5) which is closely related to the concept of hard prefix prompts in prompt engineering. The study's emphasis on personalized prompts and instruction-based recommendation indicates that it deals with the design and utilization of prompts to elicit desired behaviors from a language model, which is a core element of prompt engineering. 
However, because the abstract specifically focuses on recommendation tasks and does not explicitly mention 'hard prefix prompts' as a category or detail the systematic review elements that might be expected from a 'comprehensive systematic review,' it does not fully align with a study exclusively centered on hard prefix prompts. Despite this, the principles discussed are relevant to the broader field of prompt engineering." -p-tuning v2: prompt tuning can be comparable to fine-tuning universally across scales and tasks,9,"The abstract discusses the concept of prompt tuning in the context of Natural Language Understanding (NLU) and proposes a new method called P-Tuning v2, indicating a significant advancement in the field of prompt engineering. The stated goals of matching the performance of full model fine-tuning with a fraction of tuned parameters make it highly relevant. The only reason it is not rated a perfect 10 is that the abstract does not specifically mention 'hard prefix prompts', but it is likely that the methodology could be applied to or has implications for such prompts, hence the high rating." -domain adaptation via prompt learning,7,"The abstract describes a study on 'domain adaptation via prompt learning (DAPrompt)', which is relevant to the field of prompt engineering, as it specifically focuses on the use of prompts in unsupervised domain adaptation. The relevance is not at the maximum because the study concentrates on a particular application of prompt learning (i.e., unsupervised domain adaptation) rather than a comprehensive overview or systematic review of hard prefix prompts in prompt engineering. Nonetheless, it contributes valuable insights into prompt engineering by illustrating how prompts can dynamically adapt classifiers to different domains, which is a significant aspect of the study area." -promptmaker: prompt-based prototyping with large language models,8,"The content of the article appears to be highly relevant to prompt engineering as it discusses prototyping ML-powered features using natural language prompts, which is a core component of prompt engineering. The emphasis on the experiences of industry professionals indicates insights into practical applications and challenges of prompt-based approaches. The article's focus on broadening access, speeding up prototyping, and improving collaboration directly relates to the evolution of prompt engineering techniques. However, the specific term 'hard prefix prompts' is not mentioned, which might suggest that the study doesn't exclusively focus on that subtype of prompts within prompt engineering. Therefore, the rating is an 8 instead of a perfect 10." -ptr: prompt tuning with rules for text classification,8,"The document presents research on 'prompt tuning with rules' (PTR), which directly relates to the field of prompt engineering study. It involves constructing prompts with sub-prompts and integrating logic rules, which is a form of hard prefix prompt design in the establishment of many-class text classification tasks. The concept of using human prior knowledge and pre-trained language models (PLMs) in prompt construction is relevant to the study of how prompts can guide or improve the performance of machine learning models. However, the rating is not a perfect 10 because the abstract is missing (listed as 'nan'), which suggests that there may be additional context to the relevance that is not provided in the TLDR summary." 
-black-box prompt learning for pre-trained language models,9,"The paper presents a method for adapting pre-trained language models (PLMs) through black-box discrete prompt learning without needing access to the model's parameters or gradients, which is highly relevant to the field of prompt engineering. The study focuses on efficient optimization of discrete prompts and even though it does not specifically mention 'hard prefix prompts', the concept of discrete prompts is within the scope of prompt engineering. The proposed black-box setting for secure interaction between cloud and edge devices is innovative and directly linked to the adaptability of PLMs for various tasks using prompts. The paper's significant improvements across benchmarks and in-depth case studies on prompt characteristics are valuable contributions to the study of prompt engineering." -gppt: graph pre-training and prompt tuning to generalize graph neural networks,9,"The paper's abstract describes a novel transfer learning framework, which includes the concept of prompt tuning to generalize Graph Neural Networks (GNNs) for downstream tasks. It is highly relevant to prompt engineering study as it involves modifying prompts (by creating token pairs) to influence the behavior of the pre-trained GNNs without extensive fine-tuning. This approach aligns with the practice of designing prompts to effectively elicit desired responses from pre-trained models, which is central to prompt engineering. The only reason it doesn't receive a full 10 is because the paper is specifically about the domain of graph data and might not cover other aspects or generalities of prompt engineering." -cpt: colorful prompt tuning for pre-trained vision-language models,8,"The abstract describes an innovative approach called Cross-modal Prompt Tuning (CPT) for pre-trained vision-language models (VL-PTMs), which involves a form of prompt engineering by utilizing color-based co-referential markers in image and text to reformulate visual grounding. This is highly relevant to the study of prompt engineering as it presents a specific instance where prompts are engineered to bridge the gap between pre-training and fine-tuning, enhancing the model's performance on downstream tasks with few-shot or zero-shot learning. Although the study focuses specifically on vision-language models and doesn't address hard prefix prompts in general, the concept of tailoring prompts for better performance is directly applicable to the field of prompt engineering. Thus, the rating reflects its high relevance due to its innovative approach to prompt design, with some points deducted for not directly addressing the broader topic of hard prefix prompts." -differentiable prompt makes pre-trained language models better few-shot learners,8,"The paper presents a method (DART) for enhancing the few-shot learning capabilities of small language models without traditional prompt engineering. Although it claims to bypass 'any prompt engineering,' the method still inherently deals with prompts by differentially optimizing prompt templates. Therefore, it is relevant to the study of prompt engineering since it explores an alternative avenue for prompt manipulation. The rating is not a full 10 because the study appears to focus more on the model's few-shot learning improvement rather than prompting techniques themselves." 
-nsp-bert: a prompt-based few-shot learner through an original pre-training task —— next sentence prediction,9,"The paper described pertains directly to prompt engineering, as it deals with a prompt-based few-shot learner and demonstrates how prompts can be used in conjunction with the BERT model's original pre-training task of Next Sentence Prediction (NSP). The relevance to prompt engineering is clear since it discusses an innovative approach to prompts at the sentence level, contrasting with the common token-level prompts. Furthermore, the paper's focus on how prompt-based learning can be effective in different NLP tasks, and its exploration of factors like the pre-training corpus on the few-shot learning capabilities of the model, are pertinent issues within the study of prompt engineering." -lightner: a lightweight generative framework with prompt-guided attention for low-resource ner,8,"The paper discusses the use of 'prompt-guided attention' within a generative framework for Named Entity Recognition (NER) in low-resource settings. This approach is quite relevant to prompt engineering, as it involves the manipulation of continuous prompts to improve the performance of a pre-trained language model on a specific task, without the need for extensive re-training or large datasets. Although the paper is specifically about NER and not about the broader topic of 'hard prefix prompts', the concept of integrating prompts into the attention mechanism is very much related to the study of how prompts can be effectively used to direct the focus of language models. The rating is not a full 10 because it concentrates on a specific application (NER) and does not cover the entire breadth of prompt engineering, which could also include other tasks and models." -pada: a prompt-based autoregressive approach for adaptation to unseen domains,8,"The abstract describes PADA, a prompt-based approach, which is directly related to prompt engineering as it involves the generation of unique prompts to adapt to unseen domains in NLP tasks. The approach's autoregressive nature and its reliance on Domain Related Features (DRFs) suggest a nuanced and advanced application of prompt engineering. While the study seems to focus more on domain adaptation rather than hard prefix prompts specifically, the technique's success in outperforming other approaches highlights its relevance to the broader field of prompt engineering and its potential contributions to the prompt engineering literature. The paper could provide valuable insights into designing effective prompts for domain adaptation, which is a subset of the overall prompt engineering research area." -the biases of pre-trained language models: an empirical study on prompt-based sentiment analysis and emotion detection,9,"The study is highly relevant to prompt engineering as it focuses on the biases of PLMs when used in prompt-based tasks such as sentiment analysis and emotion detection. These findings are directly applicable to prompt engineering since the biases in label-word mappings, prompt templates, formation of prompts, and others impact how prompts are engineered for effective interaction with PLMs. The high rating is due to the direct investigation and empirical study of issues that would be fundamental to anyone engaged in engineering prompts for PLMs." 
-adaprompt: adaptive prompt-based finetuning for relation extraction,8,"The paper presents an approach that is highly relevant to prompt engineering as it involves the novel use of adaptive prompts in the context of fine-tuning language models for relation extraction, a specific NLP task. The adaptive label words selection mechanism directly relates to how prompts are engineered to handle complex label spaces, and the auxiliary entity discriminator may be considered a form of prompt that encourages the model to concentrate on certain aspects of input data. Thus, the relevance to prompt engineering studies is significant, though not perfect, as the paper might not cover the entire breadth of prompt engineering topics."
-sentiprompt: sentiment knowledge enhanced prompt-tuning for aspect-based sentiment analysis,8,"The study presents a method of enhancing language model performance for aspect-based sentiment analysis through the use of customized prompts that incorporate sentiment knowledge. This directly relates to the engineering of prompts, as it involves designing and applying specialized prompt structures (consistency and polarity judgment templates) to improve task-specific model outputs. While the study is not just about 'hard prefix prompts', it still involves the systematic design of prompts to encode task-specific knowledge, which is a significant component of prompt engineering. Therefore, it gets a high relevance score but is not a perfect match due to the specificity of 'hard prefix prompts' not being the central focus."
-masterkey: automated jailbreak across multiple large language model chatbots,8,"The abstract discusses a study related to 'jailbreak' attacks on Large Language Models (LLMs), which directly involve the manipulation of prompts to achieve unintended outcomes. This is highly relevant to the field of prompt engineering because it pertains to understanding how prompts can be engineered to exploit or circumvent the intended use of LLMs. Although the specific term 'hard prefix prompts' is not mentioned, the concept of automated jailbreak prompt generation suggests a close relationship with prompt engineering techniques. The research's emphasis on reverse-engineering defensive strategies and developing countermeasures is also pertinent to the design and analysis of prompts in LLMs. The rating is not a full 10 as the abstract doesn't directly address 'hard prefix prompts' specifically, but rather the broader issue of jailbreak prompts."
-not what you've signed up for: compromising real-world llm-integrated applications with indirect prompt injection,8,"The abstract presents an in-depth look at how natural language prompts can be used maliciously to exploit LLM-integrated applications, which is closely relevant to the field of prompt engineering. It reveals new attack vectors in the form of Indirect Prompt Injection and stresses the importance of understanding prompts from a security perspective. While it does not focus solely on 'hard prefix prompts', the study of adversarial prompting is critical to the broader domain of prompt engineering where designing robust and secure prompts is key. Hence, the information is highly relevant, though not exclusively centered on hard prefix prompting methodologies."
-an llm can fool itself: a prompt-based adversarial attack,9,"The study directly addresses prompt engineering by proposing PromptAttack, a method that uses a prompt-based approach to generate adversarial attacks against large language models (LLMs). The study's focus on how prompts can be engineered to manipulate LLM outputs is highly relevant to the field of prompt engineering. The only reason it does not receive a full score is that the study is focused specifically on adversarial attacks rather than a broader range of prompt engineering applications."
-two-stage llm fine-tuning with less specialization and more generalization,9,"The abstract describes a method (ProMoT) directly addressing the issues related to prompt engineering by proposing a two-stage fine-tuning framework that reduces format specialization and improves generalization, which is highly relevant to engineering more adaptable and effective prompts for large language models (LLMs). The fact that it seeks to enhance in-context learning through prompt tuning suggests a close connection to the field of prompt engineering, making the study's relevance to prompt engineering very high. The only reason it does not get a 10 is because it doesn't focus exclusively on 'hard prefix prompts' as the original query specifies, but rather on prompt tuning in a broader sense."
-mental-llm: leveraging large language models for mental health prediction via online text data,7,"The study involves the evaluation of large language models (LLMs) with a focus on prompt designs such as zero-shot and few-shot prompting, which are directly related to the field of prompt engineering. Moreover, it discusses instruction fine-tuning, which is a more advanced form of prompt engineering that tailors the model to specific tasks. Although the main application discussed in the study is mental health prediction, which is not directly related to 'hard prefix prompts,' the methodology and findings could have implications for prompt engineering in general, making it moderately relevant to the field."
-benchmarking a foundation llm on its ability to re-label structure names in accordance with the aapm tg-263 report,7,"The study described in the title and abstract is relevant to prompt engineering to a significant extent because it involves using a large language model (GPT-4) with specifically tuned prompts to perform a complex, domain-specific task. However, while the focus of the study is on the application of an LLM to re-label structure names in medical imaging in accordance with a specific standard, it also implicitly involves designing and refining prompts to obtain this accurate outcome. This prompt engineering aspect is an essential part of the study as it directly affects the performance of the LLM, but the study is not explicitly about prompt engineering methodologies or their systematic review. Therefore, the rating is not a perfect 10, but still notably high due to the implicit involvement of prompt fine-tuning and the potential insights it might offer for prompt engineering best practices."
-backdooring instruction-tuned large language models with virtual prompt injection,9,"The paper discusses the concept of Virtual Prompt Injection (VPI), which directly relates to manipulating the behavior of Large Language Models (LLMs) through the use of hidden or embedded prompts. This is a specific, albeit adversarial, example of prompt engineering. It demonstrates how the model's response can be engineered to follow certain instructions without visible modification to the prompt input. Since prompt engineering is about designing prompts to achieve desired outputs from a model, this study is highly relevant as it explores the consequences and defensive strategies related to prompt manipulation. Although the focus is on a security vulnerability, understanding such backdoor methods contributes to a broader comprehension of how prompt mechanisms work in LLMs and the importance of data integrity in instruction tuning."
-cataloging prompt patterns to enhance the discipline of prompt engineering,9,"The paper is highly relevant to the field of prompt engineering as it directly addresses the conceptualization and codification of prompt patterns to enhance interactions with Large Language Models (LLMs) such as ChatGPT. It underscores the significance of establishing more systematic and repeatable approaches within prompt engineering to improve the performance and evaluation of LLMs across various domains. The only reason for not giving a full 10 is because the abstract does not explicitly mention 'hard prefix prompts', which is the specialized topic of the study in question (assuming that 'hard prefix prompts' refer to a specific subset or technique within prompt engineering)."
-survival of the most influential prompts: efficient black-box prompt search via clustering and pruning,9,"The paper directly addresses the process of optimizing prompt-based learning for large language models by introducing an efficient black-box prompt search method. The inclusion of clustering and pruning to focus on influential prompt tokens is highly relevant for the field of prompt engineering, as it seeks to refine the approach by which prompts are selected and used to drive LLM predictions. The presented Clustering and Pruning for Efficient Black-box Prompt Search (ClaPS) technique is pertinent to the challenge of search space design in prompt engineering. The study's focus on enhancing the efficiency of the prompt search process validates its high relevance to the topic, although it may not cover the full breadth of 'hard prefix prompts' and could be missing some other aspects of prompt engineering not detailed in the abstract."
-prompt engineering or fine tuning: an empirical assessment of large language models in automated software engineering tasks,9,"The study directly explores multiple prompt engineering techniques applied to GPT-4 for ASE tasks. The empirical assessment compares the efficacy of prompt engineering against fine-tuned models, providing valuable insights into the current capabilities and limitations of prompt engineering. The high relevance score reflects the detailed analysis of specific prompting strategies, such as task-specific prompting and conversational prompts, which contributes significantly to the body of knowledge on prompt engineering."
-poisonprompt: backdoor attack on prompt-based large language models,9,"The study titled 'poisonprompt: backdoor attack on prompt-based large language models' is highly relevant to prompt engineering as it directly deals with the security vulnerabilities associated with the use of prompts in Large Language Models, which can be either hard (fixed) or soft (more flexible). Although the study's primary focus is on the backdoor attack mechanism (POISONPROMPT), it inherently contributes to the understanding and advancement of prompt engineering by identifying potential threats and exploring the robustness of different prompting methods. This information is crucial for researchers and practitioners working on prompt engineering to create more secure and reliable systems. The rating is not a full 10, as the paper focuses more on the security aspect rather than core prompt engineering techniques or their optimization for better performance on tasks."
-graph-toolformer: to empower llms with graph reasoning ability via prompt dataset augmented by chatgpt,8,"The paper is highly relevant to the field of prompt engineering as it specifically looks into the development of a framework that leverages prompts augmented by ChatGPT to improve the performance of large language models when tasked with graph reasoning. While it does not focus on the 'hard prefix prompts' mentioned in the initial prompt, it explores the prompt-based teaching approach and the construction of prompt datasets for specialized applications, which is a component of prompt engineering. The systematic review aspect isn't directly addressed, but the paper proposes a practical application of prompts in the context of LLMs, indicating significant relevance to the study of prompt engineering."
-evoke: evoking critical thinking abilities in llms via reviewer-author prompt editing,9,"The provided abstract directly pertains to prompt engineering, as it discusses the development of a framework called Evoke that refines prompts for Large Language Models (LLMs) to enhance their performance. The inclusion of an automatic feedback loop, which considers 'hard' samples implying a form of 'hard prefix prompts', suggests it is highly relevant to the study of refining and improving prompts to elicit better performance from AI models. The main reason the rating is not a perfect 10 is that while Evoke's approach includes working with challenging prompts, it may not strictly constitute a 'systematic review' of hard prefix prompts but appears to be an application or development of that concept."
-decoding prompt syntax: analysing its impact on knowledge retrieval in large language models,9,"The provided abstract focuses on the evaluation of prompt syntax and its impact on knowledge retrieval in Large Language Models (LLMs), which is a significant aspect of prompt engineering. The systematic approach to paraphrase prompts and analyze their structures provides valuable insights into how different types of prompts affect the performance of LLMs. This research can inform the design of more effective prompts (including hard prefix prompts), making it highly relevant to the field of study. The reason for not giving a full score of 10 is the absence of a specific mention of 'hard prefix prompts' in the context of the abstract, but it is still generally relevant to prompt engineering."
-lion: adversarial distillation of proprietary large language models,8,"The abstract describes a method of adversarial distillation where a 'teacher' large language model generates 'hard' instructions to enhance the training of a 'student' model. This falls under the umbrella of prompt engineering, as it includes the design of specific prompts to identify and produce instructional data that challenges the student model, thereby improving its performance. The innovative use of 'hard' instructions to drive the adversarial loop is particularly relevant to prompt engineering studies, as it directly relates to the crafting of prompts aimed at maximizing learning potential. However, it does not directly address a comprehensive systematic review on the subject, hence the deduction of two points."
-towards parameter-efficient automation of data wrangling tasks with prefix-tuning,9,"The title 'towards parameter-efficient automation of data wrangling tasks with prefix-tuning' is highly relevant to prompt engineering study because it directly addresses the development of a method ('prefix-tuning') to optimize the way prompts are used with Large Language Models to perform data wrangling tasks, which is an example of a practical application of prompt engineering. Furthermore, the abstract details the benefits of using prefix-tuning over full fine-tuning, which is central to the efficiency and effectiveness of using language models in various tasks. The mention of learning continuous prompts automatically and the assessment of prefix-tuning on specific tasks provide concrete evidence of the method's applicability and performance, underscoring its relevance to the field of prompt engineering."
-ten quick tips for harnessing the power of chatgpt/gpt-4 in computational biology,7,"The article provides practical advice for incorporating ChatGPT into computational biology workflows, which includes a component of 'prompt engineering'. Even though the title suggests a broader usage within computational biology, the mention of 'prompt engineering' in the context of using ChatGPT implies that the article will address how to effectively design prompts to interact with the chatbot for various tasks. This makes it relevant to the study of prompt engineering. However, it is not entirely focused on 'hard prefix prompts' specifically, as indicated by the initial prompt request for a 'comprehensive systematic review on hard prefix prompts'. Therefore, it doesn’t fully match the specificity requested in terms of prompt engineering study, but it is still relevant due to the inclusive nature of the tips and discussion on the best use of prompts."
-prompting is not a substitute for probability measurements in large language models,7,"The study addresses an aspect of prompt engineering by comparing metalinguistic prompting with direct probability measurements in large language models. Although the study does not specifically discuss 'hard prefix prompts,' it does examine prompting techniques and their effectiveness in understanding linguistic knowledge, which is relevant to the field of prompt engineering. However, since the study is more focused on the comparison with direct probability methods and on metalinguistic judgment rather than on prompt engineering techniques, the rating is not a perfect 10."
-selecting better samples from pre-trained llms: a case study on question generation,8,"The paper presents a study on selecting the best outputs from samples generated by Large Language Models (LLMs) using prompt-based approaches, which is highly relevant to the field of prompt engineering. Although the study focuses specifically on the task of question generation, the research on improving the diversity and quality of LLM outputs through prompt manipulation is a direct application of prompt engineering principles. The rating is not a full 10 because the paper is a case study limited to question generation and does not cover the broader spectrum of hard prefix prompts or systematic reviews of prompt engineering."
-spec: a soft prompt-based calibration on performance variability of large language model in clinical notes summarization,7,"The relevance of the provided title and abstract to prompt engineering is quite significant, given that the study centers on the application of prompts, specifically 'soft prompts,' to refine the performance of large language models in the context of summarizing clinical notes. Prompt engineering fundamentally involves the strategic use of prompts to effectively steer language models towards desired outputs. The research introduces a Soft Prompt-Based Calibration (SPeC) pipeline, which pertains to optimizing the use of prompts to achieve more consistent and accurate results. Although the study is situated in a specific application area—healthcare—and focuses on 'soft prompts' rather than 'hard prefixes,' it contributes to the broader understanding of how prompt design can affect language model behavior and performance. Nonetheless, it does not directly address the systematic review of hard prefix prompts, which would be the core of a prompt engineering study, hence the rating is not a perfect 10."
-co-training improves prompt-based learning for large language models,9,"The abstract describes research on enhancing prompt-based learning with co-training, which is directly relevant to the field of prompt engineering. It explores methods to improve and iterate on prompt models, which are integral to the efficiency and effectiveness of large language models like GPT-3. Although the title and abstract do not specifically mention 'hard prefix prompts,' the systematic review of improving prompt-based learning in LLMs is encompassed within the broader scope of prompt engineering. A small deduction is made because the exact term 'hard prefix prompts' was not discussed, but the overall content is highly pertinent."
-prompt text classifications with transformer models! an exemplary introduction to prompt-based learning with large language models,8,"The study is highly relevant to prompt engineering as it investigates prompt-based learning, a key concept within this field, especially as it pertains to the use of transformer models and large language models for classification tasks. Although it does not specifically mention engineering 'hard prefix prompts', it still examines the broader subject of using prompts in machine learning. The emphasis on the practical application of prompt-based learning and comparison with human ratings also adds value to the context of prompt engineering."
-sensitivity and robustness of large language models to prompt template in japanese text classification tasks,8,"The given abstract is highly relevant to prompt engineering as it investigates the effects of prompt template modifications on the performance of Large Language Models (LLMs), specifically in the context of Japanese text classification tasks. It addresses critical aspects of prompt engineering, such as sensitivity and robustness of language models to changes in prompt templates. The study's focus on how simple changes can lead to significant discrepancies in model performance is directly linked to prompt engineering. The rating is not a full 10 because the abstract mentions a specific application (Japanese text classification) rather than providing a broader analysis across various applications and languages, which could impact the generalizability of the findings to all areas of prompt engineering."
-prompt tuning or fine-tuning - investigating relational knowledge in pre-trained language models,8,"The relevance of the study to prompt engineering is high since it directly deals with the optimization of query prompts for relational knowledge extraction from pre-trained language models. The study compares prompt tuning techniques against adaptive fine-tuning, which is an essential contrast in the field of prompt engineering, as it investigates how pre-trained models can be made more efficient in understanding and responding to prompts without extensive additional training. While the paper does not focus solely on 'hard prefix prompts', it addresses the broader topic of optimizing prompts for better model performance which is integral to prompt engineering studies."
-on transferability of prompt tuning for natural language understanding,9,"The provided abstract is highly relevant to prompt engineering study, specifically within the domain of natural language understanding. It discusses prompt tuning, an essential aspect of prompt engineering, where the reusability and transferability of prompts across different tasks and models are investigated. The exploration of knowledge transfer for improving prompt tuning efficiency is directly applicable to strategies in prompt engineering for large pre-trained language models. The reason for not giving a perfect score is the absence of a direct mention of 'hard prompts,' but the study's content is still very pertinent to the broader field of prompt engineering."
-revealing the unwritten: visual investigation of beam search trees to address language model prompting challenges,8,"The study is highly relevant to the field of prompt engineering as it explores prompt refinement and the intricacies of guiding outputs of generative language models. By introducing a method to investigate the beam search tree visually, it aids in understanding how prompts affect generation, which is a key area in prompt engineering. The paper focuses on improving human understanding of the model decision-making process, which is crucial for effective prompt engineering. Although it does not directly address 'hard prefix prompts,' the broader topic of prompt refinement and model output guidance is closely related to prompt engineering. The rating is not a full 10 because it is not specific to 'hard prefix prompts,' but it is still highly relevant to the general area of study."
-healthprompt: a zero-shot learning paradigm for clinical natural language processing,8,"The abstract outlines a research study that is highly relevant to prompt engineering study. It describes the development of a new prompt-based learning framework specifically for clinical NLP tasks, which is an example of applying prompt engineering to a specialized domain (healthcare). The fact that this framework operates in a zero-shot learning context enhances its relevance, as it illustrates the potential of prompt engineering in scenarios where annotated datasets are scarce or non-existent. However, while the study does focus on prompt-based learning, which is a subset of prompt engineering, it does not explicitly mention 'hard prefix prompts' as the prompt type being investigated. Consequently, the rating is not a full 10, as it might not cover the comprehensive systematic review aspect explicitly focused on hard prefixes."
-promptcap: prompt-guided image captioning for vqa with gpt-3,8,"The paper is highly relevant to prompt engineering as it introduces 'PromptCap', a model that utilizes natural-language prompts to guide the image captioning process which in turn enhances the performance of visual question answering (VQA) with a language model like GPT-3. The method directly involves engineering prompts to control the content of image captions, ensuring they contain the necessary details for LMs to answer questions. This is a specific application of prompt engineering in the context of integrating textual prompts with image understanding for improved knowledge-based task performance. The paper's focus on synthesizing prompts for effective LM use aligns closely with the study of prompt engineering."
-response generation with context-aware prompt learning,8,"The paper is highly relevant to prompt engineering as it focuses on a novel approach that treats dialogue generation as a prompt-learning task. The methodology of learning continuous prompt embeddings customized for dialogue contexts aligns closely with prompt engineering, as it involves designing prompts that can effectively interact with pre-trained language models to produce desired responses. Despite the paper not explicitly mentioning the term 'hard prefix prompts', it is implicit in the context of prompt embeddings. The reduction of two points is because it doesn't directly address the systematic review aspect of hard prefix prompts but is still very much within the realm of prompt engineering for dialogue systems."
-meta-tuning language models to answer prompts better,9,"The abstract discusses a method called 'meta-tuning' for improving the ability of large pretrained language models to answer prompts, which is directly related to prompt engineering. The relevance is high because the study aims to specialize and generalize language models to better understand and respond to prompts, which is a core aspect of prompt engineering. The only reason it doesn't score a perfect 10 is because the abstract doesn't directly address 'hard prefix prompts', but the concept can likely be applied to various types of prompts including hard prefixes."
-few-shot instruction prompts for pretrained language models to detect social biases,8,"The study involves the construction of few-shot instruction-based prompts for pretrained language models, which is highly relevant to the field of prompt engineering. It examines how effectively these prompts can guide language models in detecting social biases in text, which is a specific application of prompt engineering. Although it does not directly mention 'hard prefix prompts,' the methodology of using instructional prompts to achieve a task with a language model fits under the broader umbrella of prompt engineering. The relevance is rated an 8 instead of a 10 because the focus is more on detecting social biases rather than on the systematic review of prompting techniques themselves."
-evaluating the instruction-following robustness of large language models to prompt injection,9,"The study directly examines the interaction between large language models and prompts, specifically investigating the challenge of adversarial instruction injection. This is highly relevant to prompt engineering as it deals with understanding and improving the robustness of LLMs in discerning and responding to prompts. The focus on how models discern and follow instructions is a critical aspect of prompt engineering, especially when considering the creation of prompts that intend to guide the model towards producing specific outcomes or behaviors without succumbing to manipulation."
-promptagent: strategic planning with language models enables expert-level prompt optimization,9,"The article is highly relevant to prompt engineering as it discusses 'PromptAgent', an optimization method aimed at automating the generation of expert-level prompts, which is directly aligned with prompt engineering studies. It addresses the strategic planning problem within prompt optimization and demonstrates the system's effectiveness across various domains and tasks. The only reason it does not receive a 10 is that the specific focus on 'hard prefix prompts' is not explicitly stated, but the scope still remains within the general field of prompt engineering."
-multiprompter: cooperative prompt optimization with multi-agent reinforcement learning,9,"The paper presents a new framework, MultiPrompter, that directly addresses the issue of prompt optimization, which is a core aspect of prompt engineering. It introduces a novel concept of using multi-agent reinforcement learning for cooperative prompt optimization. Such a technique is highly relevant for studies in prompt engineering, as it could lead to improvements in the generation of interpretable prompts and better interaction with foundation models. Although the paper is applied to the text-to-image task, the concepts and methodologies presented could be generalizable and thus highly relevant to the broader field of prompt engineering."
-robust prompt optimization for large language models against distribution shifts,9,"The presented paper directly addresses a key issue in prompt engineering, namely the optimization of prompts for large language models, especially in the context of distribution shifts, which is a crucial aspect in the robustness of language models. Although the abstract does not specify the use of 'hard prefix prompts,' the focus on prompt optimization and generalization across different distributions indicates a close relevance to the broader field of prompt engineering. The proposed Generalized Prompt Optimization framework, which utilizes unlabeled data in optimization, is highly pertinent to advancing the study and application of prompt engineering."
-copner: contrastive learning with prompt guiding for few-shot named entity recognition,9,"The study introduces the use of class-specific prompts for few-shot NER, employing these prompts as supervision signals and metric referents, which is highly relevant to prompt engineering. The methodology specifically addresses the optimization of token representations and inferencing strategies, which are central concerns in prompt engineering. The relevance score is not a full 10 because the study focuses on one specific application (NER) and it is not a systematic review on hard prefix prompts in general."
-prompt engineering for zero‐shot and few‐shot defect detection and classification using a visual‐language pretrained model,9,"The abstract indicates that the study focuses on the optimization of prompts, which is intrinsic to prompt engineering. It investigates how different types of prompts affect the performance of a VLP model, particularly for the task of defect detection and classification. The findings on domain-specific definitions, sentence structure, and modality of information are directly relevant to understanding how prompts can be engineered for better performance in zero-shot and few-shot learning tasks, which is a key component of prompt engineering. The only reason the rating is not a full 10 is that it doesn't discuss 'hard prefix prompts' specifically but prompt optimization in a broader sense within the context of VLP models."
-pfedprompt: learning personalized prompt for vision-language models in federated learning,8,"The abstract describes a study on a method called pFedPrompt that focuses on personalizing prompts for pre-trained vision-language models in a federated learning context. It directly engages in prompt engineering by refining how the prompts adapt to user characteristics, attempting to improve performance and relevance of the model outputs. While it doesn’t address 'hard prefix prompts' directly, the study is highly relevant to prompt engineering as it talks about optimizing prompts, which is a core area of interest in prompt engineering studies. The methodological focus on personalization in a federated learning framework is an innovative contribution to the field."
-meta learning for domain agnostic soft prompt,8,"The abstract discusses a new approach to prompt-based learning, which is highly relevant to the field of prompt engineering as it focuses on optimizing soft prompts for domain-agnostic applications. The method aims to improve the generalizability of prompts which is a critical aspect in the study of prompt engineering. The relevance is not a full 10 because it specifically addresses soft prompts and unsupervised domain adaptation rather than hard prefixes or a comprehensive review of prompt engineering techniques."
-speechprompt: an exploration of prompt tuning on generative spoken language model for speech processing tasks,8,"The provided document is highly relevant to prompt engineering as it discusses prompt tuning, which is a key aspect of prompt engineering. Although the focus is on speech processing tasks rather than hard prefix prompts in textual contexts, the principles of prompt tuning and leveraging pre-trained models with minimal additional parameter training are central to the concept of prompting in both speech and text applications. The exploration of this technique's effects on efficiency and performance in speech models contributes useful insights to the broader field of prompt engineering. The rating is not a full 10 as the study specifics are tailored towards speech models, thereby making it somewhat less directly applicable to prompt engineering studies focused exclusively on text-based models."
-kipt: knowledge-injected prompt tuning for event detection,9,"The described study directly relates to prompt engineering by discussing Knowledge-injected Prompt Tuning (KiPT) for event detection, which is a technique to enhance the performance of prompt-based models by injecting external knowledge. It is highly relevant to the field of prompt engineering, as it proposes a specific way to refine prompts (a core component of prompt engineering) to increase precision. This is applicable to the broader study of prompt engineering, particularly in the context of few-shot learning tasks and the integration of external knowledge bases into the prompting process."
-exploring low-dimensional intrinsic task subspace via prompt tuning,8,"The abstract and TLDR provided pertain to the study of prompt tuning within pre-trained language models (PLMs), and they discuss how adjustments to these models for various tasks can be achieved by optimizing a small set of parameters within a low-dimensional subspace. This suggests a strong relevance to prompt engineering, as it directly explores methodologies for tuning prompts to improve task adaptability of language models. The only reason the rating is not a full 10 is that, while highly relevant, the study seems to focus on a specific aspect of prompt engineering rather than a comprehensive review of hard prefix prompts in general."
-exploring universal intrinsic task subspace via prompt tuning,9,"The study is highly relevant to prompt engineering as it investigates the adaptability of pre-trained language models to different NLP tasks by optimizing a small number of parameters. It directly examines prompt tuning, which is a crucial aspect of prompt engineering, and explores the concept of an intrinsic task subspace that could significantly impact how PLMs are fine-tuned for various tasks. Although the focus is on intrinsic prompt tuning (IPT) rather than hard prefix prompts specifically, the findings are broadly applicable to the field of prompt engineering."
-how to design the perfect prompt: a linguistic approach to prompt design in automotive voice assistants – an exploratory study,8,"The provided title and abstract are highly relevant to the broad field of prompt engineering, especially in the context of voice user interfaces (VUIs). The exploratory study focuses on the linguistic aspects of prompt design, which covers syntactical, lexical, and grammatical elements that are fundamental to the construction of effective prompts within the automotive industry's voice assistants. Although the study is specific to a particular application (automotive VUIs) and language (German), the methodology and findings regarding the impact of language parameters on user perception can offer significant insights for prompt engineering in general. The rating falls short of a perfect score because the study's scope is restricted to a single language and use case, which may or may not be directly applicable to hard prefix prompts specifically mentioned in the original query."
-exploring sparse visual prompt for domain adaptive dense prediction,8,"The provided abstract is highly relevant to prompt engineering study because it discusses an advanced application of prompts—Sparse Visual Domain Prompts (SVDP)—in the context of Test-Time Adaptation (TTA) for domain adaptive dense prediction tasks. It examines the role of prompts in addressing domain shift challenges and introduces methods for optimal prompt placement and updating on a per-sample basis. Although the abstract focuses specifically on visual domain prompts, which may be a more specialized area within the broader field of prompt engineering, the concepts of domain-specific knowledge extraction and efficient adaptation to target domains through prompts are essential to the study of prompt engineering. Therefore, the relevance is rated highly but not at the maximum because it is specific to the visual domain and dense prediction tasks rather than general prompt engineering."
-efficient transfer learning for visual tasks via continuous optimization of prompts,8,"The title suggests that the study involves optimizing prompts for transfer learning in visual tasks, indicating a focus on prompt engineering as it applies to machine learning and possibly to neural networks that process visual data. Although details are lacking in the abstract and TLDR, the title implies relevance to prompt engineering, particularly in the context of improving the efficiency of transfer learning through some form of prompt optimization. The rating is not a full 10 due to the lack of information provided in the other fields, which could have either strengthened or weakened the relevance."
-real estate insights unleashing the potential of chatgpt in property valuation reports: the “red book” compliance chain-of-thought (cot) prompt engineering,9,"The article specifically addresses prompt engineering within the context of property valuation and compliance with industry standards, namely the 'Red Book'. It discusses the direct application and importance of crafted prompts for instructing large language models to generate specific, accurate results that comply with professional property valuation standards. Even though it does not focus on 'hard prefix prompts' in a general sense, its contribution to prompt engineering for practical, domain-specific use cases is highly relevant. The deduction of one point is due to the lack of a TLDR and no explicit mention of 'hard prefix prompts', which would have given a precise summary and tied the relevance more directly to the topic."
-enhancing automated program repair through fine-tuning and prompt engineering,8,"This abstract discusses a study where language models such as PLBART and CodeT5 are fine-tuned with datasets that contain code review and code changes to improve automated program repair. The relevance to prompt engineering comes from the part of the study that focused on utilizing zero-shot and few-shot learning-based prompt engineering with advanced code generative models like Codex and GPT-3.5-Turbo to assess their performance. Although the primary focus of the study appears to be automated program repair through fine-tuning of language models with specific datasets, the inclusion of prompt engineering as a method to enhance model performance gives it substantial relevance to the topic of prompt engineering. It does not directly address 'hard prefix prompts' as specified in the original inquiry, but it does deal with the employment of prompts in the context of language models, which is why the relevance is rated slightly lower."
-"supporting self-directed learning and self-assessment using teachergaia, a generative ai chatbot application: learning approaches and prompt engineering",8,"The abstract indicates that the study involves leveraging prompt engineering to guide the interactions of an AI chatbot, named TeacherGAIA, to support self-directed learning and self-assessment. It specifically contrasts the engineered prompts with the default behavior of a chatbot like ChatGPT, suggesting a focus on how prompts can be tailored to achieve specific educational objectives. While the study is not exclusively focused on 'hard prefix prompts', it clearly involves a significant component of prompt engineering. The rating is not a full 10 because the abstract does not explicitly mention a 'systematic review' or a focus on 'hard prefix prompts', which are key aspects of the complete prompt stated in the requirement."
-ncu-iisr: prompt engineering on gpt-4 to stove biological problems in bioasq 11b phase b,9,"The abstract indicates a high relevance to prompt engineering study as it describes a system that focuses on the application of prompt engineering strategies using GPT-4. The system's design for addressing biomedical questions implies substantial engagement with the crafting of prompts to interact with a language model effectively. The paper details experimental steps on prompt engineering, compares methodologies, and notes performance improvements due to optimized prompts. This offers considerable insight into how prompt engineering can be applied to enhance the utility of language models in a specific domain. The point deduction from a perfect score is due to the absence of details about 'hard prefix prompts', which may or may not have been a part of their strategies, as it is not explicitly stated."
-prompt engineering as an important emerging skill for medical professionals: tutorial,8,"The title and abstract provided describe a paper that is significantly relevant to the field of prompt engineering. It specifically discusses the application of prompt engineering in the context of medical professionals, thereby addressing a niche yet important aspect of prompt engineering. The relevance is not a full 10 because the focus is narrowed to the medical field, and the study is a tutorial rather than a comprehensive systematic review on 'hard prefix prompts'. Therefore, while it is highly relevant to prompt engineering, it does not fully address the broader aspect of the engineering study as requested in the initial prompt."
-the prompt engineering librarian,7,"The abstract discusses the role of librarians in the emerging field of prompt engineering, which is directly related to the study of prompt engineering as a discipline. It also covers the concept of optimizing prompts for artificial intelligence models, which is a fundamental aspect of prompt engineering. However, it focuses more on the potential professional development for librarians rather than a systematic review of hard prefix prompts specifically, which is why the rating is not a full 10."
-"the artificially intelligent entrepreneur: chatgpt, prompt engineering, and entrepreneurial rhetoric creation",8,"The title suggests that the study focuses on the use of chatbot technology, specifically ChatGPT, in the context of prompt engineering. It implies an analysis of how entrepreneurial rhetoric can be generated through prompt engineering techniques, which is closely related to the study of how prompts are used to steer the performance of AI models like ChatGPT. Although the 'hard prefix prompts' are not explicitly mentioned, the title indicates a strong relevance to the field of prompt engineering in general."
-retrieval-based prompt selection for code-related few-shot learning,8,"The provided abstract is highly relevant to prompt engineering as it discusses a technique centered around the creation of effective prompts, specifically for code-related few-shot learning tasks. The approach, Cedar, leverages retrieval-based methods to choose appropriate code demonstrations to accompany the task prompt, which is a direct application of prompt engineering principles. The results indicating the technique's effectiveness and its comparison with state-of-the-art models further underscore its relevance to the field. The deduction of two points is due to the lack of direct mention of 'hard prefix prompts', as the abstract focuses more broadly on prompt creation rather than the specific systematic review mentioned in the initial prompt."
-exploring the effects of the design prompt on students’ design cognition,8,"The abstract discusses the influence of design prompts on students' design cognition, which is highly relevant to prompt engineering in the context of educational research. It examines the hypothesis that the task provided (the design prompt) impacts the student's design process and experience. While the concept of 'hard prefix prompts' is not specifically mentioned, the study of how prompts affect design cognition is closely related to exploring how different types of prompts (potentially including hard prefixes) can shape the design process. Therefore, the relevance to prompt engineering study is high, but not maximal due to the absence of a specific focus on 'hard prefix prompts'."
-textgraphs-16 natural language premise selection task: zero-shot premise selection with prompting generative language models,9,"The paper seems to directly address the use of prompt engineering in the context of a natural language premise selection task, which is relevant to the study of prompt engineering effects on AI models' capabilities. It specifically assesses the performance of prompt engineering with GPT-3 in comparison to semantic similarity ranking with SBERT, and although it doesn't outperform SBERT when used alone, the combined approach yields better results. This indicates the paper significantly contributes to the understanding of prompt engineering's influence and utility in complex NLP tasks such as automated theorem proving, making it highly relevant to prompt engineering study."
-generating requirements elicitation interview scripts with large language models,9,"The referenced study focuses on the application of prompt engineering to the generation of requirements elicitation interview scripts using large language models. It specifically discusses the use of prompt engineering techniques to generate various structured outputs, and even touches on refining prompts for better performance. This directly correlates with the study of prompt engineering as it involves optimizing and fine-tuning prompts to achieve specific outcomes with AI models. The reason for not giving a full 10 is that it's not exclusively about 'hard prefix prompts', but more broadly about prompt engineering applied within a specific context. However, it still holds high relevance to the overall field of prompt engineering."
-an experimental investigation of analogy formation using the engineering-to-biology thesaurus,7,"The study focuses on the use of an Engineering-to-Biology thesaurus to facilitate analogy formation, which is a cognitive strategy closely related to the concept of 'hard prefix prompts'. Although it does not explicitly mention 'hard prefix prompts', the experimentation with keywords to generate ideas is akin to the process of using specific prompts to guide thought processes. However, its relevance is not a perfect match as it does not directly deal with the systematic review of hard prefix prompts or their use in studies; instead, it focuses on the application of a thesaurus in bioinspired design, which is just one aspect of prompt engineering."
-an empirical study on few-shot knowledge probing for pretrained language models,8,"The study presents an empirical analysis of prompt-based knowledge probing with a focus on few-shot settings, which is highly relevant to the field of prompt engineering as it explores how models can be effectively used with limited data. Although it does not directly analyze 'hard prefix prompts,' the mention of optimizing prompts and a comparison of various approaches is pertinent to prompt engineering techniques and strategies. The findings related to finetuning bias vectors could contribute to the prompt engineering literature, especially since they claim to outperform existing methods."
-knowledge injected prompt based fine-tuning for multi-label few-shot icd coding,7,"The abstract presents a study that involves using prompt-based fine-tuning for a multi-label classification task, which is a relevant aspect of prompt engineering. However, the focus is more on the injection of domain-specific knowledge into the model and its application to ICD coding rather than a broad analysis of hard prefix prompts across various domains or a generalizable framework. The relevance is therefore significant but not entirely central to prompt engineering and lacks discussion on hard prefix prompts specifically."
-promptcast: a new prompt-based learning paradigm for time series forecasting,8,"The paper's focus on 'prompt-based time series forecasting (PromptCast)' is highly relevant to the study of prompt engineering as it explores transforming numerical inputs and outputs into prompts, thus framing the forecasting task as a language model problem. This suggests innovative applications of prompt engineering techniques outside of traditional language tasks. The relevance is not a perfect 10 because the paper may not deal specifically with 'hard prefix prompts' and there is no explicit mention of a 'systematic review'. However, it still represents a significant piece of research within the broader field of prompt engineering."
-lego-absa: a prompt-based task assemblable unified generative framework for multi-task aspect-based sentiment analysis,8,"The paper is highly relevant to prompt engineering as it discusses a generative framework that uses task prompts, which are akin to hard-coded prompts, to control the generation of outputs for different tasks in ABSA. The methodology directly relates to how prompts are engineered to produce specific responses from a generative model. Its approach to assemblable task prompts is a novel application within the area of prompt engineering, even if the focus is more on sentiment analysis rather than on hard prefix prompts specifically."
-context variance evaluation of pretrained language models for prompt-based biomedical knowledge probing,9,"The abstract discusses advanced methods in prompt engineering, particularly in the context of biomedical knowledge probing. It details creating 'context variance' prompts, which directly relates to the development of prompt engineering techniques and introduces a new evaluation metric (UCM) for this purpose. These aspects are highly relevant to the study of prompt engineering as they contribute to the understanding and improvement of prompting methods for evaluating language models, though it doesn't explicitly mention 'hard prefix prompts,' hence the rating is not a perfect 10."
-parabart: a prompt-based method with parabiotic decoder for few-shot named entity recognition,7,"The abstract describes a novel method, ParaBART, for improving few-shot named entity recognition (NER) by enhancing entity boundary detection with a specialized decoder. While it does not directly address 'hard prefix prompts' in the context of prompt engineering, the research does involve 'prompt-based methods' in the application of NER. Prompt engineering is a broader field that includes the design and use of prompts to improve model performance in language tasks. Therefore, the relevance to prompt engineering study is significant, but not directly focused on addressing hard prefix prompts specifically, warranting a rating of 7."
-pts: a prompt-based teacher-student network for weakly supervised aspect detection,8,"The paper describes a method that utilizes prompts to enhance the performance of weakly supervised aspect detection by using a teacher-student network structure. This is directly relevant to the field of prompt engineering as it involves constructing and utilizing prompts to train language models more effectively, especially with limited labeled data. The use of hand-crafted and auto-generated prompts also indicates a deeper exploration into prompt methodologies, which is significant for prompt engineering studies. The primary reason why the rating is not a 10 is due to the specificity of the application to aspect detection and the paper's focus on a novel network architecture, which may slightly deviate from a 'comprehensive systematic review' of hard prefix prompts, thus not completely aligning with the broader aspect of the prompt engineering study."
-prompt-based zero-shot video moment retrieval,8,"The abstract is highly relevant to prompt engineering as it directly involves the design and usage of prompts ('Proposal Prompt' and 'Verb Prompt') for a zero-shot learning task in video moment retrieval. Although the focus is on video and text, the principles of prompt learning and their application to a zero-shot context align well with studies in prompt engineering, particularly in the innovative use of 'hard prefixes' or structured prompts in neural language models. However, the rating is not a full 10 because it may not directly tackle the methodological aspects of prompt engineering or address a 'hard prefix prompt' in a broader sense but rather applies prompt concepts to a specialized domain."
-nsp-bert: a prompt-based zero-shot learner through an original pre-training task-next sentence prediction,7,"The abstract indicates that the study introduces a novel method for utilizing BERT's Next Sentence Prediction (NSP) in zero-shot scenarios, which contrasts with the token-level methods most prompt-based learning approaches use. Seeing as prompt engineering is fundamentally about designing inputs and templates that effectively harness the capabilities of language models, the methods proposed in the paper for various NLP tasks and prompt construction templates contribute to the field of prompt engineering. Additionally, the abstraction of token-level constraints aligns with the goal of refining prompt engineering to achieve better performance with language models. However, the paper appears to focus more on the pre-training task and zero-shot learning rather than the detailed intricacies of prompt engineering, which is why the relevance is scored as a 7 rather than higher."
-unified multimodal pre-training and prompt-based tuning for vision-language understanding and generation,7,"The abstract discusses the use of prompt-based methods for fine-tuning models on different downstream tasks. This is directly related to prompt engineering as it involves designing and choosing the right prompts for effective model performance. The information provided is relevant, as the study deals with how prompts can be used in model tuning, particularly in few-shot scenarios, although it does not specifically discuss 'hard prefix prompts'. This might slightly reduce the relevance as the prompt seems to inquire about a systematic review on a specific type of prompts known as 'hard prefix prompts', which is not mentioned in the abstract. Nevertheless, the general relevance to prompt engineering is still significant."
-point prompt tuning for temporally language grounding,7,"The abstract discusses 'Point Prompt Tuning (PPT)' as a novel approach that integrates prompt-based strategies within a multi-modal learning framework, specifically applied to the task of temporally language grounding (TLG) in videos. Since the methodology involves formulating a query rewriting strategy as prompts and integrating it with a multi-modal transformer, it directly relates to the concept of prompt engineering. The relevance to prompt engineering is quite high since it involves designing and using prompts to improve task performance. However, it is not a comprehensive systematic review on hard prefix prompts, as the initial prompt suggested, but rather an application of prompt tuning strategies in a specific domain. Therefore, the rating is not a perfect 10, but still significant due to the use of prompt engineering techniques."
-prompt learning for few-shot dialogue state tracking,8,"The paper described is relevant to prompt engineering as it discusses a prompt learning framework for few-shot dialogue state tracking (DST), which is inherently related to the utilization of prompts to improve model performance with limited labeled data. The use of value-based prompts and an inverse prompt mechanism connects directly to the design and implementation of prompts in the context of leveraging pre-trained language models (PLM). While the study is not specifically about 'hard prefix prompts' and does not perform a systematic review, it is still highly relevant to the broader field of prompt engineering due to its focus on improving the efficiency of knowledge probing from PLMs using specially designed prompts, which is an essential aspect of prompt engineering. Therefore, the paper receives a high relevance score."
-enhancing cross-lingual prompting with mask token augmentation,8,"The title 'Enhancing Cross-Lingual Prompting with Mask Token Augmentation' suggests a focus on improving the effectiveness of prompts within the context of multilingual language models. The abstract confirms that the paper investigates prompt-based approaches, particularly in cross-lingual scenarios, and proposes a method to optimize this process. Although the study deals with 'prompting' in the broader sense of language model applications and doesn't specify 'hard prefix prompts', it is still highly relevant to the field of prompt engineering. It presents empirical analysis and a novel framework for prompt enhancement. However, without explicit mention of 'hard prefix prompts', the rating is not a full 10."
-prompt-based re-ranking language model for asr,8,"The abstract discusses the application of a prompt-based method in the context of re-ranking for Automatic Speech Recognition, which is a form of prompt engineering. Although it does not directly address 'hard prefix prompts' in the systematic review sense, it describes a practical application of prompts in a machine learning model, BERT, indicating an overlap with prompt engineering studies. Therefore, the relevance is significant but not complete, as the focus is on a specific use-case rather than a broad analysis of prompt engineering techniques." -lfpt5: a unified framework for lifelong few-shot language learning based on prompt tuning of t5,7,"The paper presents a framework for lifelong few-shot language learning based on prompt tuning of T5, which is relevant to the concept of prompt engineering. Although the main focus is on lifelong learning and few-shot learning capabilities, the utilization of prompt tuning indicates that the work contributes to the understanding of how prompts can be engineered and optimized for specific language tasks. Additionally, the generation of pseudo samples for preventing forgetting involves creating prompts that are conducive to the model's learning process. Therefore, the paper has significant relevance to prompt engineering, despite not focusing exclusively on 'hard prefix prompts.'" -few-shot multi-modal sentiment analysis with prompt-based vision-aware language modeling,7,"The described study focuses on multi-modal sentiment analysis (MSA) using a few-shot learning approach and a prompt-based vision-aware language modeling (PVLM) method. The relevance to prompt engineering lies in the paper's emphasis on 'prompt tuning' as a method to incorporate multimodal information into a pre-trained language model for sentiment analysis tasks. This suggests that the study addresses the use of prompts within a deep learning model, specifically to bridge the gap between pre-training and specific NLP tasks. However, it does not primarily focus on 'hard prefix prompts', as mentioned in the prompt engineering study interest. Instead, it appears to be utilizing prompts as part of a broader framework for multi-modal learning. Therefore, the relevance is significant but not entirely on-topic with respect to studies centered specifically on 'hard prefix prompts'." -unified multi-modal pre-training for few-shot sentiment analysis with prompt-based learning,7,"The abstract presents work related to 'prompt-based fine-tuning (PF)' for 'few-shot multi-modal sentiment analysis (MSA)', which suggests relevance to prompt engineering particularly in the context of model fine-tuning. The concept of using prompts to bridge modalities and improve few-shot learning is applicable to the study of prompt engineering, especially considering the innovative approach of a multi-modal prompt-based system. However, the focus is specifically on sentiment analysis and not on hard prefix prompts or a comprehensive systematic review of them. Therefore, while the study is related to prompt engineering, it is not a direct match for a comprehensive systematic review on hard prefix prompts, which affects the rating." -p4e: few-shot event detection as prompt-guided identification and localization,8,"The provided abstract describes P4E, a framework for event detection that utilizes prompting (cloze-based prompting) as part of its methodology. The usage of prompts in the identification task is directly relevant to the field of prompt engineering. 
The study shows how prompts can be effectively integrated into the pre-training of language models for specific tasks like event detection, which falls within the scope of prompt engineering studies. However, the abstract also covers broader aspects of event detection, such as structured prediction, not exclusively prompts, so the rating is not a full 10." -dfs-ner: description enhanced few-shot ner via prompt learning and meta-learning,7,"The paper's abstract indicates that it involves 'prompt learning' as a part of the proposed DFS-NER model. The focus on using prompts to guide a masked-language model learning objective for semantic information absorption is relevant to prompt engineering, as it implies constructing and employing prompts for improving model performance. However, the paper is more specifically about Named Entity Recognition and how prompt learning can be integrated with meta-learning for this task, rather than a broad study of prompt engineering itself. Thus, it is only moderately relevant to the prompt about 'hard prefix prompts,' as the paper might not be directly focused on studying prompts in a comprehensive, systematic manner but rather on using them as a tool for a specific application in NER." -a prompt-based few-shot machine reading comprehension model for intelligent bridge management,8,"The abstract describes a machine reading comprehension model that utilizes prompt-based techniques, which are relevant to the field of prompt engineering. The model's use of domain-specific heuristic rules to design prompt templates indicates a direct application and study of prompt engineering principles. However, the focus appears to be more on the model's application to bridge management than on a comprehensive systematic review of prompt engineering, which might be expected from a study explicitly titled 'hard prefix prompts.' Therefore, the rating reflects its high relevance but not a perfect match due to the specific application context." -prompt and contrastive learning for few-shot sentiment classification,7,"The abstract provided describes a paper that is relevant to prompt engineering as it addresses a method for few-shot sentiment classification that uses prompts as part of the strategy. The proposed Prompt and Contrastive Learning (PCL) is directly related to the field of prompt engineering because it deals with bridging the gap between pre-training and fine-tuning of language models, a central issue in the utilization of prompts in NLP tasks. However, it does not specifically address 'hard prefix prompts' as mentioned in the prompt engineering study; therefore the rating is not a full 10. It is relevant due to its focus on the application of prompts to improve language model performance but does not directly address the systematic review aspect of 'hard prefix prompts'." -ti-prompt: towards a prompt tuning method for few-shot threat intelligence twitter classification*,8,"The paper is highly relevant to prompt engineering as it details a prompt-based method specifically designed for a few-shot classification task, which is a key area of interest in prompt engineering studies. The approach of leveraging prompt tuning and refining verbalizer techniques directly pertains to the domain of prompt engineering, as it involves crafting and optimizing prompts to interface with language models effectively.
Although the study is focused on a niche application of threat intelligence classification on Twitter, the methodologies and insights could be broadly applicable to other prompt engineering contexts." -prompt-based few-shot learning for table-based fact verification,8,"The abstract discusses the use of the prompt method in the context of few-shot learning for table-based fact verification, which is directly relevant to prompt engineering because it explores how to design and utilize prompts to improve the performance of a pre-trained model on a specific NLP task with limited data samples. Although the main focus is on structured information in tables, the application of prompt-based approaches is a key part of prompt engineering. The rating is not a full 10 because the study seems to be more focused on a particular application of prompt engineering (table-based fact verification) than on a broad systematic review of hard prefix prompts." -"few-shot information extraction is here: pre-train, prompt and entail",8,"The abstract discusses an approach that employs prompting and fine-tuning pre-trained language models (PLMs) for achieving state-of-the-art results in Information Extraction with minimal annotations. Although it does not specifically mention 'hard prefix prompts', it centrally addresses prompt engineering by explaining how natural language prompts are used to harness PLMs and enhance their inference abilities for specific tasks. This work is highly relevant to prompt engineering studies, as it showcases the effectiveness of prompts in the context of improving PLM performance. The reason for not giving a full score is that the exact term 'hard prefix prompts' is not referenced, which may indicate this study focuses on a broader range of prompting methodologies." -towards unified prompt tuning for few-shot learning,9,"The abstract discusses the concept of prompt-based fine-tuning and introduces a novel framework, Unified Prompt Tuning (UPT), designed for improving few-shot learning in BERT-style pre-trained language models by capturing prompt semantics. This is highly relevant to the field of prompt engineering as it directly addresses the enhancement of model performance through better understanding and utilization of prompts. It may not receive a perfect score as the abstract does not specifically mention 'hard prefix prompts', which represent a more nuanced subset of prompt engineering." -cqare: contrastive question-answering for few-shot relation extraction with prompt tuning,9,"The abstract discusses prompt tuning, a relevant aspect of prompt engineering, specifically in the context of relation extraction (RE). The entire concept of 'prompt tuning' is central to the field of prompt engineering as it involves the refinement and manipulation of prompts to improve performance with pre-trained language models (PLMs). While the abstract does not discuss 'hard prefix prompts' directly, it does mention the challenges of prompt engineering for label mapping and the attempt to improve prompt tuning with a Contrastive Question-Answering method (CQARE). Considering the abstract's focus on developing improved methods for prompt tuning, which is a vital part of prompt engineering, the relevance rating is high." -prompt-guided few-shot event detection,8,"The abstract describes the use of cloze prompts to assist in few-shot event detection by eliciting knowledge from pretrained language models.
Although the main focus is on event detection, the study's reliance on prompt engineering is clear as it uses specifically crafted prompts to enhance the capabilities of machine learning models in a limited data scenario. The term 'hard prefix prompts' isn't mentioned directly, but the concept of designing efficient prompts is crucial to their methodology. This makes the study relevant to the field of prompt engineering, justifying the high rating." -few-shot text-to-sql translation using structure and content prompt learning,9,"The abstract presents a novel approach to prompt engineering within the specific domain of Text-to-SQL translation. It discusses the design of a hybrid prompt strategy that is particularly relevant for enhancing the performance of pre-trained language models on few-shot learning tasks. This directly ties into the study of prompt engineering by exploring how prompts can be optimized to guide language models more effectively. Although the application is specialized in Text-to-SQL, the concepts of structure stage and content stage prompting are highly relevant to the field of prompt engineering. The high rating reflects the paper's substantive contribution to the methodology of crafting and utilizing prompts to improve the performance of AI models." -vppt: visual pre-trained prompt tuning framework for few-shot image classification,8,"The abstract describes a method for prompt tuning in the context of few-shot image classification with pre-trained transformers, which is closely related to prompt engineering. Although the subject is applied to computer vision rather than language models (which are more commonly associated with prompts), the principles of tuning prompts to adapt to downstream tasks are highly relevant. The approach discussed involves specific challenges and solutions in initializing and fine-tuning prompt modules in a parameter-efficient way, which is a key area of prompt engineering. The reason why the rating is not a full 10 is that the prompt engineering discussed is specific to visual tasks and may not directly translate to linguistic prompt engineering studies." -unified prompt learning makes pre-trained language models better few-shot learners,8,"The paper described is highly relevant to prompt engineering because it discusses a novel approach to prompt-based learning, which is an essential aspect of prompt engineering. It specifically addresses the challenge of balancing task-specific and instance-dependent information in prompts to enhance few-shot learning in language models.
While it may not focus exclusively on 'hard prefix prompts,' which would be directly related to a systematic review on such prompts, it deals with the broader question of how to design and utilize prompts effectively, crucial for the field of prompt engineering." -boosting prompt-based few-shot learners through out-of-domain knowledge distillation,7,"The abstract describes a method to improve prompt-based learning in the context of few-shot learning and knowledge distillation (KD), which is relevant to prompt engineering as it deals with enhancing the efficiency and performance of prompt-tuned Pre-trained Language Models (PLMs). Although the study focuses on knowledge distillation and model compression rather than the direct creation or manipulation of prompts, the optimization of models for prompt-based few-shot learning is a significant aspect of prompt engineering. Therefore, the relevance is fairly high, but not maximal due to the indirect focus on the engineering of prompts themselves." -prompt-distiller: few-shot knowledge distillation for prompt-based language learners with dual contrastive learning,8,"The article is highly relevant to prompt engineering as it directly addresses an aspect of prompt-based learning, which is a key area in prompt engineering. It offers innovative solutions for the deployment of prompt-tuned Pre-trained Language Models in few-shot learning scenarios through Knowledge Distillation. The focus on the few-shot KD algorithm designed for prompt-tuned PLMs ('Prompt-Distiller') aligns with the broader topic of engineering effective prompts for language models to enhance learning performance. While it may not specifically cover 'hard prefix prompts,' the overall context of prompt-based learning and improving the efficiencies of such systems makes it pertinent to the field of prompt engineering. A full 10 is not awarded as the abstract does not directly mention 'hard prefix prompts,' which was the specific subject of the systematic review requested." -few-shot text-to-sql translation using structure and content prompt learning,9,"The paper describes a hybrid prompt strategy that leverages learnable and fixed vectors to guide Pre-trained Language Models (PLMs) for few-shot Text-to-SQL translation tasks. This is highly relevant to prompt engineering as it relates directly to the development of prompts that assist in task-specific predictions and facilitate model understanding. Although 'hard prefix prompts' are not mentioned explicitly, the approach is fundamentally connected to creating effective prompts for language models, thus making it pertinent to studies in prompt engineering."
-dreamartist: towards controllable one-shot text-to-image generation via positive-negative prompt-tuning,8,"The abstract discusses the use of prompt-tuning strategies, specifically introducing a 'positive-negative prompt-tuning learning strategy' in the context of text-to-image generation, which falls within the realm of prompt engineering. Prompt engineering is about finding effective ways to interface with language models or other AI systems using written prompts; the mention of positive and negative prompt tuning is a concrete example of this, tailored for a specific application. Therefore, this study is relevant to the broader field of prompt engineering as it explores a novel method to enhance the controllability and quality of outputs from AI systems. However, it does not specifically address 'hard prefix prompts,' which would be even more directly related to the prompt engineering study mentioned in the request. Thus, the rating is not a full 10." -cohoz: contrastive multimodal prompt tuning for hierarchical open-set zero-shot recognition,7,"The abstract describes CoHOZ, an approach for open-set recognition and zero-shot learning by leveraging hierarchical label trees and contrastive continuous prompt tuning. While it does not directly mention 'hard prefix prompts', it does engage with 'prompt tuning', which is a relevant aspect of prompt engineering. The relevance is marked as a 7 because the techniques and experiments could potentially contribute to the broader understanding of prompt engineering without being specifically focused on 'hard prefix prompts'. The concept of prompt tuning, particularly in a contrastive and multimodal setting, is pertinent to the study of how prompts are constructed and used, especially in zero-shot learning scenarios." -proze: explainable and prompt-guided zero-shot text classification,7,"The abstract discusses 'ProZe,' a text classification approach that utilizes prompting pretrained language models, which is directly relevant to prompt engineering as it involves the method of using prompts to guide language models. However, the abstract also includes mention of querying ConceptNet for adding explainability, which is somewhat peripheral to the core concept of prompt engineering. Moreover, the study focuses on zero-shot text classification, which is only one aspect of the broader field of prompt engineering. Therefore, while prominently featuring elements of prompt engineering, the paper's focus on the combination of prompts with an external knowledge base and its aim for explainability dilutes the pure relevance to hard prefix prompts, hence the rating of 7." -self-supervised meta-prompt learning with meta-gradient regularization for few-shot generalization,9,"The abstract describes an approach to prompt tuning, particularly focusing on few-shot generalization, which is highly relevant to the field of prompt engineering. The method outlined involves learning soft prompts and touches on the challenges of generalization and overfitting, key issues in prompt engineering. The proposed framework, SUPMER, addresses these problems by creating a universal initialization for prompts, which contributes significantly to the study and advancement of prompt engineering methods. 
The reason the rating is not a perfect 10 is that the abstract does not explicitly discuss 'hard prefix prompts,' which was mentioned in the user's request for a 'comprehensive systematic review on hard prefix prompts.'" -"cocoopter: pre-train, prompt, and fine-tune the vision-language model for few-shot image classification",7,"The document's title suggests the use of a process that includes 'prompt' as part of the procedure for improving few-shot image classification. This indicates that the study involves some level of modification or creation of prompts to enhance model performance, which is relevant to prompt engineering. However, without further details on the nature of these prompts, particularly whether they pertain to language prompts typically used in prompt engineering, or are more broadly related to model conditioning, it's difficult to assess the full relevance. The mention of 'hard prefix prompts' in the initial query was not directly addressed, resulting in a rating that acknowledges relevance but cannot confirm an exact match." -few-shot fake news detection via prompt-based tuning,8,"The abstract presents a study on a Fake News Detection model that utilizes prompt-based tuning, which is directly relevant to prompt engineering. The model's design incorporates contextual prompts to enhance the detection capabilities of pre-trained language models in few-shot scenarios. While the study is not a comprehensive systematic review on hard prefix prompts, it does focus on the application of prompts in a specific important area, hence the relatively high relevance score." -list: lite self-training makes efficient few-shot learners,7,"The abstract discusses a method related to fine-tuning pre-trained language models with the use of prompts, which is relevant to prompt engineering. LiST improves prompt-tuning with techniques like self-training and lightweight fine-tuning, which fall within the realm of prompt optimization strategies. However, the abstract does not specifically mention 'hard prefix prompts' as in the initial prompt, so it may not address the complete systemic review aspect of hard prefix prompts in prompt engineering. Thus, the relevance to prompt engineering study is significant but not fully aligned with the specificity of 'hard prefix prompts'." -prompt-based multi-modal image segmentation,8,"The study presents a system that utilizes prompts in the form of text or images to generate image segmentation, indicating a strong relevance to 'prompt engineering.' Although the primary focus is image segmentation and not prompt engineering itself, the system's capability to interpret and process arbitrary prompts at test time is indicative of a significant application of prompt engineering principles. This demonstrates the integration of prompt-based methods into AI tasks, which is a key aspect of prompt engineering research. The rating is not a full 10 because the study's primary aim is not the investigation of the prompts themselves or their optimization, but rather their application to a particular AI task." -inverse is better! fast and accurate prompt for slot tagging,8,"The abstract describes an innovative method in prompt engineering, specifically for the task of slot tagging in few-shot learning scenarios. While it doesn't discuss 'hard prefix prompts' directly, it presents the concept of 'inverse prompting', which is a technique within the broader domain of prompt engineering. 
The improvement in efficiency and accuracy mentioned in the abstract is highly relevant to studies in prompt engineering, especially when considering the impact on state-of-the-art performance. The score is not a full 10 because it is not explicitly tied to 'hard prefix prompts' but does address closely related concepts within prompt engineering." -cipta: contrastive-based iterative prompt-tuning using text annotation from large language models,8,"The study described in the title and abstract is highly relevant to prompt engineering as it focuses on 'prompt tuning,' which is a method used to enable models to quickly adapt to new tasks or domains using a limited amount of data or examples. The innovation in prompt tuning that the study proposes, CIPTA, particularly targets low-resource scenarios, which is a critical area of research in prompt engineering for improving the efficiency and applicability of large language models. The study's use of contrastive embedding training as part of the prompt-tuning process also contributes to the field. Therefore, it scores high in relevance. It doesn’t get a full score because it is specifically angled towards public opinion analysis rather than covering prompt engineering in broader scenarios." -unleashing the potential of prompt engineering in large language models: a comprehensive review,9,"The abstract provided is highly relevant to the field of prompt engineering as it covers a breadth of topics within the discipline, including foundational principles, advanced methodologies, assistance tools, prospective research directions, and applications in various fields. The rating is not a perfect 10 as there is some information missing, such as empirical data or case studies that would make it an exhaustive review. Nevertheless, the paper appears to be a comprehensive resource that would substantially benefit those interested in the workings and advancements of prompt engineering for Large Language Models." -llm comparative assessment: zero-shot nlg evaluation through pairwise comparisons using large language models,7,"The paper focuses on zero-shot NLG evaluation using large language models (LLMs) and specifically addresses new methods for assessment, which closely relates to the field of prompt engineering as it pertains to the performance assessment of language model outputs. While it does not directly study 'hard prefix prompts' or design prompts for LLMs, the study of assessment methods is relevant for fine-tuning and validating prompts during the engineering process. The inclusion of discussion on prompt positional biases and debiasing methods is particularly relevant, as these considerations can impact the effectiveness of engineered prompts." -unsupervised dual modality prompt learning for facial expression recognition,9,"The abstract describes a study that is highly relevant to prompt engineering, as it proposes an 'Unsupervised Dual Modality Prompt Learning framework' which is directly related to adapting and tuning prompts for better performance in facial expression recognition tasks. This study focuses on optimizing the prompts used in vision-language models, which is a core area of interest in prompt engineering. The only reason it does not receive a perfect score is that it is specialized in facial expression recognition rather than covering prompt engineering in a broader sense across various applications." 
-label-aware automatic verbalizer for few-shot text classification,8,"The study focuses on the verbalizer component within prompt-based learning, a crucial element of prompt engineering, especially in the context of few-shot text classification. The relevance to prompt engineering is strong as it addresses the optimization of prompt output translation into class predictions, which is directly related to how prompts are engineered to interact with language models. Although the study does not explicitly mention 'hard prefix prompts,' it aligns with the broader field of prompt engineering. The rating is not a perfect 10 because it does not directly address a comprehensive systematic review of hard prefix prompts, which the initial query specifies." -the unreliability of explanations in few-shot prompting for textual reasoning,8,"The described study directly investigates the role of explanations within the context of few-shot prompting, which is a pertinent area of research within prompt engineering. Although it does not explicitly mention 'hard prefix prompts', it explores the impact of the style and quality of prompts (including explanations) on the performance of large language models in textual reasoning tasks. This is relevant to prompt engineering as it informs on how different prompt constructs can affect model outputs, especially in tasks requiring understanding and explanations. The relevance is not rated as a perfect 10 since there's no detailed focus on 'hard prefix prompts' specifically, but the research closely aligns with investigating prompt effects in LLMs." -prod: prompting-to-disentangle domain knowledge for cross-domain few-shot image classification,8,"The paper presents a method named prompting-to-disentangle (ProD) that utilizes prompts to improve the performance of image classification in cross-domain few-shot learning scenarios. This approach is directly related to prompt engineering as it involves designing prompts to manipulate the behavior of a model (in this case, a transformer) for better performance. The technique specifically leverages prompts to separate domain-general and domain-specific knowledge, which demonstrates an application of prompt engineering in the context of machine learning and image classification. However, it does not address 'hard prefix prompts' as mentioned in the original study prompt, which suggests a more specific focus within the broader area of prompt engineering. The rating is not a full 10 due to the absence of a direct alignment with 'hard prefix prompts,' but it remains high because the paper still significantly contributes to the overarching field of prompt engineering." -template-free prompting for few-shot named entity recognition via semantic-enhanced contrastive learning.,9,"The paper presents a novel technique for named entity recognition (NER) using prompt-based contrastive learning that does not require prompt templates or label word mappings, which is highly relevant to prompt engineering. It focuses on token-level classification tasks and introduces a new way to apply prompts in few-shot learning scenarios, which is a key area of interest in prompt engineering studies. The only reason it does not receive a full score is that it does not specifically address 'hard prefix prompts,' which was the indicated topic of interest, but it is still very pertinent to the broader field of prompt engineering." 
-few-shot learning with prompting methods,9,"The abstract describes research focused on prompting methods in the context of few-shot and zero-shot learning within the field of natural language processing. It specifically addresses the use of hard prefixes in prompting by mentioning pattern-exploiting methodologies such as PET and iPET. These methodologies are a form of prompt engineering that modify the input to language models in a structured way to improve performance with limited data. Given that the paper reviews studies on prompt-based learning and relates to hard prefix prompts through the use of structured input, it is highly relevant to prompt engineering studies. The rating is not a full 10 because the abstract does not exclusively focus on hard prefix prompts but also discusses prompt-based learning more broadly." -gpt-3 for few-shot dialogue state tracking,9,"The abstract details a study focused on few-shot Dialogue State Tracking (DST) using GPT-3 and the influence of prompt crafting on performance. It explores methodologies around prompt engineering, such as different completion strategies, and the effects of fine-tuning, ensembling, and context example selection. This information is highly relevant to prompt engineering, as it contributes to the understanding of how prompts can be optimized for certain tasks. However, the study doesn't strictly focus on 'hard prefix prompts', which might be a specific subset of prompt engineering, hence the rating is not a perfect 10." -little giants: exploring the potential of small llms as evaluation metrics in summarization in the eval4nlp 2023 shared task,9,"The paper's focus on assessing the effectiveness of prompt-based techniques directly addresses prompt engineering, which is the practice of formulating prompts to elicit specific responses from language models. The use of various prompting techniques and the integration with zero-shot and one-shot learning methods are key components of prompt engineering studies. Although the paper's primary domain is quality estimation for summaries and machine translations, the core of the research involving systematic experiments with prompts is highly relevant to prompt engineering. The only reason the rating is not a perfect 10 is because it might be more narrowly focused on evaluation metrics rather than the broader context of prompt engineering." -"investigating the perception of the future in gpt-3, -3.5 and gpt-4",7,"The given study indirectly relates to prompt engineering by exploring how models like GPT-3, GPT-3.5, and GPT-4 process and generate concepts of the future through different prompting techniques such as fine-tuning, prompt-tuning, and few-shot prompting. These methods fall under the broader category of prompt engineering. Although the study's primary focus is on the models' perception of time, rather than exclusively on prompt engineering efficiency or methodology, understanding the nuances of how different models perform with various prompt designs is relevant to prompt engineering practices. The detailed investigation into the efficacy of these prompting methods can provide insights into how to craft better prompts to achieve specific outcomes, which is a critical aspect of prompt engineering." 
-tree of clarifications: answering ambiguous questions with retrieval-augmented large language models,8,"The study introduces a novel framework, Tree of Clarifications (ToC), which is directly related to prompt engineering as it involves few-shot prompting to disambiguate open-domain questions. The method of recursively constructing a tree of disambiguations and leveraging external knowledge for generating long-form answers shows an application of designing and engineering prompts to improve question-answering systems. While it doesn't specifically mention 'hard prefix prompts', the concept is within the realm of prompt engineering, hence the high relevance rating. However, it doesn't fully match the exact concept of 'hard prefix prompts', as it does not directly address a systematic review of them." -evaluation of prompts to simplify cardiovascular disease information using a large language model,9,"The described study directly relates to prompt engineering by proposing and evaluating a 'rubric prompting' strategy to optimize the simplification of complex medical information using large language models. The focus on evaluating different prompting techniques, particularly the comparison with zero-shot or one-shot prompting methods, indicates a high relevance to the field of prompt engineering. The systematic approach to developing prompts that yield complete, readable, and syntactically simple outputs, especially in a critical domain like healthcare, illustrates the application of prompt engineering principles. Although not specifically using 'hard prefix prompts,' the study is highly pertinent as it discusses the design and impact of prompt structures on the quality of AI-generated text, reflecting on a major aspect of prompt engineering." -deeplyrics: gpt2 for lyrics generation with finetuning and prompting techniques,7,"The study outlined in the abstract describes the use of 'tuning-free prompting' as a method to assist lyric generation with AI, indicating that it does involve prompt engineering. However, the specifics of 'hard prefix prompts', which are the main focus of the implied systematic review, are not explicitly mentioned. It implies work on prompting techniques without giving details on whether these are 'hard prefix prompts' or another form of prompting. Therefore, the relevance is significant but not fully aligned due to the lack of explicit mention of 'hard prefix prompts'." -a general language assistant as a laboratory for alignment,8,"The abstract describes a study that investigates various techniques and evaluations, including prompting, to align large language models with human values. While it does not specifically mention 'hard prefix prompts,' prompting in general is a significant aspect of prompt engineering. The investigation into baseline techniques for alignment has relevance to the field of prompt engineering, as it can inform the development of more sophisticated prompts that are better aligned with human intentions. The study also examines the scalability of different training objectives relevant to alignment, which is pertinent to advancing the effectiveness of prompt engineering in large language models. However, without a focus on 'hard prefix prompts' specifically, the relevance is not absolute, hence the rating is not a perfect score."
-generating training data with language models: towards zero-shot language understanding,8,"The study's focus on using prompts to generate class-conditioned texts with a unidirectional PLM directly pertains to the field of prompt engineering. The prompts guide the text generation process, which is a practical application of hard prefix prompts within the context of zero-shot learning. Although the study isn't exclusively a systematic review on hard prefix prompts, it demonstrates a relevant application of prompts in the engineering process to improve NLU tasks, making it highly relevant to the subject." -"using chatgpt standard prompt engineering techniques in lesson preparation: role, instructions and seed-word prompts",8,"The abstract provided discusses the use of standard prompt engineering techniques (which could potentially include hard prefix prompts as a subset) in the context of lesson preparation for an AI tool, specifically ChatGPT. It emphasizes the effectiveness of structuring prompts with additional defined roles and seed words. Although it does not explicitly mention 'hard prefix prompts,' it is closely related to the broader topic of prompt engineering. The study's findings could contribute valuable insights into the usage of specific prompting methods, which may include but are not explicitly limited to hard prefix prompts. Therefore, it is relevant to the study of prompt engineering, but it would have a higher relevance rating if it directly addressed hard prefix prompts." -generative ai tools in art education: exploring prompt engineering and iterative processes for enhanced creativity,8,"The study directly addresses prompt engineering within the context of generative AI tools in art education, which involves teaching students how to craft and refine prompts for creative purposes. Although the focus is tailored to art and design, the principles of prompt engineering discussed are relevant to the broader field of study. The emphasis on iterative processes and the detail-oriented approach required for effective prompt engineering are particularly pertinent. The study's lower relevance to a 'comprehensive systematic review on hard prefix prompts' specifically is acknowledged; hence the rating is not a full 10." -multimodal propaganda detection via anti-persuasion prompt enhanced contrastive learning,8,"The relevance to prompt engineering is substantial, given that the study introduces a novel model (APCL) that utilizes prompt engineering as a core component for detecting propaganda in memes. The model specifically incorporates category words from propaganda techniques in its prompt engineering strategy, using these prompts to enhance contrastive learning in a multi-label classification task. Though the focus is on propaganda detection rather than prompt engineering itself, the use of 'persuasion' and 'anti-persuasion' prompts directly relates to the study of how prompts can be engineered to improve machine learning tasks. Therefore, the rating is high but not maximum because prompt engineering is a means to an end in this study, rather than the primary focus." -how understanding large language models can inform their use in physics education,8,"The paper is highly relevant to prompt engineering study as it specifically discusses the impact of prompt-engineering techniques on an LLM's (ChatGPT) performance in physics education.
It includes practical illustrations of how prompt engineering can be used to aid understanding and problem-solving in physics, which is a direct application of prompt engineering. The only reason the rating isn't a 10 is that the paper's focus is somewhat narrow—specific to physics education—and does not address the broader spectrum of hard prefix prompts across various domains." -automatic bug fixing via deliberate problem solving with large language models,9,"The abstract discusses leveraging a large language model to improve automated program repair, specifically by using an interactive prompting technique called Tree of Thoughts (ToT). Since this technique is directly related to the use and innovation of prompt engineering to enhance the model's ability to solve complex tasks such as bug fixing, the relevance to prompt engineering study is very high. The only reason it's not a perfect 10 is that the description doesn't solely focus on the prompt engineering aspect but also on the overall capability of large language models in automated program repair." -noisy exemplars make large language models more robust: a domain-agnostic behavioral analysis,8,"The abstract discusses the use of systematic approaches in prompt engineering to assess the robustness of large language models (LLMs) in multi-hop reasoning tasks. While it doesn't specifically mention 'hard prefix prompts', it does cover the broader topic of prompt engineering and the perturbations used to test and potentially improve the models' responses. The research's emphasis on few-shot prompting and robustness is highly relevant to the field of prompt engineering, thus warranting a high rating." -original research,7,"The abstract describes a study that focuses on the use of prompt engineering techniques within the context of art and design education, specifically using OpenAI's DALL-E2. The research explores how students were taught to refine their ideas and prompts iteratively to produce better visual outcomes with generative AI tools. This emphasis on the iterative refinement of prompts is directly relevant to the field of prompt engineering, as it pertains to understanding and improving the way prompts are constructed and their effects on the outputs generated by AI models. However, the study also touches on ethical considerations, the replacement of artists, and the integration of AI tools into the art and design curriculum, which, while important, are somewhat tangential to the technical and methodological aspects of prompt engineering. Therefore, the rating reflects the study's substantial relevance to prompt engineering due to the focus on teaching and practicing prompt refinement, but it is not exclusively centered on the systematic study of hard prefix prompts." -chatgpt-based debate game application utilizing prompt engineering,9,"The abstract describes a study focused on the application of prompt engineering within an educational debate game, utilizing ChatGPT. It illustrates the use of prompt engineering to control and refine the outputs of the language model for a specific domain, which is directly related to prompt engineering study. The research aims to improve ChatGPT's responses by providing specific instructions and case-based prompts, aligning perfectly with the concept of hard prefix prompts in prompt engineering. One point is docked because the abstract does not explicitly mention 'hard prefix prompts' as a focus; however, the content is highly relevant to the overall field of prompt engineering." 
-allies: prompting large language model with beam search,7,"The described study, 'allies: prompting large language model with beam search', presents a method that iteratively refines and expands on initial queries. This iterative process of generating new queries can be seen as a form of prompt engineering, where the goal is to improve the performance of a large language model for complex tasks. Although the study does not directly focus on 'hard prefix prompts' as specified in the prompt engineering study request, the concept of refining and modifying prompts to leverage hidden knowledge aligns with the broader field of prompt engineering. Therefore, the relevance to prompt engineering is significant but not entirely focused on the specific aspect of 'hard prefix prompts'." -approximating online human evaluation of social chatbots with prompting,9,"The study introduces a Dialog system Evaluation framework based on Prompting (DEP), which directly relates to prompt engineering as it involves using large language models conditioned with specific prompts to evaluate conversational agents. This is highly relevant to the study of prompts and their impact on the performance of language models. The relevance is not a full 10 because the study seems to be more focused on evaluating chatbots rather than on the 'hard prefix prompts' and the methodology might be broader than hard prefix prompts alone." -prompting gpt-3.5 for text-to-sql with de-semanticization and skeleton retrieval,8,"The paper is highly relevant to prompt engineering as it discusses a framework for improving the performance of large language models (LLMs) on the Text-to-SQL task, which is inherently based on the concept of providing effective prompts to the model. The de-semanticization process and skeleton retrieval align with hard prefix prompts since they involve manipulating the input to the LLM to enhance its understanding and output. This systematic approach to tailoring demonstrations and prompts to an LLM's requirements is a direct application of prompt engineering strategies. The reason why it's not a full 10 is that it focuses specifically on Text-to-SQL tasks, which is just a subset of all possible applications of prompt engineering." -llms to the moon? reddit market sentiment analysis with large language models,7,"The relevance to prompt engineering is significant since the abstract describes a semi-supervised learning approach that utilizes a large language model (LLM) and involves prompting the LLM to generate Chain-of-Thought summaries to improve the quality of sentiment analysis on social media. This indicates the study focuses on how to engineer prompts to obtain more accurate outputs from the LLM, which is a key aspect of prompt engineering. However, the study does not specifically mention 'hard prefix prompts', which suggests that while it is related to prompt engineering, it does not directly address the comprehensive systematic review of such prompts. Therefore, the rating is not a full 10." -leveraging commonsense knowledge from large language models for task and motion planning,8,"The abstract describes the use of prompting techniques within Large Language Models (LLMs) to extract commonsense knowledge for task and motion planning, which is highly relevant to the field of prompt engineering. 
Specifically, the LLMGROP system leverages prompts to guide the LLM in generating information about plausible physical arrangements, a task that aligns closely with the development of hard prefix prompts for specific applications. Although the study focuses on a practical application for service robots rather than a broad systematic review of prompt engineering, the underlying methodology and use of prompts to gain desired outputs from an LLM provide valuable insights into the prompt engineering process. The rating is not a full 10 as the paper does not explicitly focus on a systematic review of prompt engineering techniques, which appears to be the central requirement of the 'prompt engineering study' in question." -knowing what llms do not know: a simple yet effective self-detection method,8,"The paper proposes a method that relies on prompt engineering to elicit different responses from LLMs to the same question, which directly involves the study of how prompts can be constructed and used to understand and evaluate the model's knowledge boundaries. Although it does not focus on 'hard prefix prompts' explicitly, the concept of diversifying textual expressions as prompts is closely related to the field of prompt engineering. The systematic approach to identify nonfactual responses through the analysis of divergences in LLM outputs is pertinent to the broader study of prompt engineering strategies, hence the high relevance rating." -constitutionmaker: interactively critiquing large language models by converting feedback into principles,8,"The abstract provided discusses an interactive tool called ConstitutionMaker that is directly involved in prompt engineering by allowing users to refine large language model outputs and steer chatbot behavior through feedback. While the study does not cover 'hard prefix prompts' in specific, it engages with the broader field of prompt engineering through user feedback and principles, which are fundamental to prompt engineering methodology. Thus, the relevance is high but not maximal since the specific focus on 'hard prefix prompts' is not mentioned." -theory of mind in large language models: examining performance of 11 state-of-the-art models vs. children aged 7-10 on advanced tests,7,"The study is relevant to prompt engineering to a significant degree, as it includes examining and scoring the performance of LLMs on complex cognitive tasks using various types of prompts, potentially revealing how different prompts can elicit sophisticated language understanding and reasoning. While the primary focus of the study seems to be on the cognitive abilities of LLMs, particularly Theory of Mind, the aspect of using prompts and evaluating different kinds of prompts (open versus closed questions) is a substantial component of prompt engineering. However, the study doesn't seem to be centered exclusively on 'hard prefix prompts' or the mechanics of prompt design, thus it's not fully aligned with a 'systematic review on hard prefix prompts'. Therefore, the rating isn't a perfect 10." -empirical study of zero-shot ner with chatgpt,7,"The abstract describes research focused on improving the performance of language models on the zero-shot named entity recognition task, which involves strategies related to prompt engineering such as 'syntactic prompting' and 'tool augmentation'. This indicates relevance to prompt engineering as it involves designing inputs to elicit better performance from the model. 
However, the focus is more on the specific application of NER and the methodology to enhance LLMs like ChatGPT, rather than on prompt engineering in general or 'hard prefix prompts' specifically. This constitutes a partial but significant relevance to the broader field of prompt engineering studies." -corrpus: codex-leveraged structured representations for neurosymbolic story understanding,7,"The abstract discusses the enhancement of neurosymbolic work in natural language generation and understanding tasks through the use of structured prompts (referred to as 'abstracted prompting procedures'). Although the study primarily focuses on story understanding and generation, the mention of 'abstracted prompting procedures', which can be considered a technique within prompt engineering, signifies a relevance to the broader field of prompt engineering studies. However, the context is specific to story understanding tasks rather than a 'comprehensive systematic review on hard prefix prompts,' hence the rating is not a full 10." -corrpus: detecting story inconsistencies via codex-bootstrapped neurosymbolic reasoning,8,"The provided abstract discusses the use of abstracted prompting procedures alongside neurosymbolic approaches for story understanding tasks. Although it does not specifically mention 'hard prefix prompts,' the subject of prompt engineering is still highly relevant. The abstract explicitly refers to the design of specialized prompts to guide large language models, which aligns with the broader field of prompt engineering studies. The creation and optimization of prompts to improve the performance of language models on specific tasks is a direct example of prompt engineering work. Therefore, the study appears to be very relevant to those interested in how tailored prompting can enhance model performance, even if it doesn't directly address hard prefix prompts." -neuro-symbolic procedural planning with commonsense prompting,8,"The given abstract discusses the use of commonsense-infused prompting to improve procedural planning in large language models, which aligns with prompt engineering concepts. The study presents a neuro-symbolic approach that incorporates commonsense knowledge into prompts to form a causal structure, reflecting an advanced and targeted application of prompts to enhance model performance. Although the focus is more on procedural planning and less on the structure of prompts themselves, the use of prompts generated from knowledge bases and their optimization for better outcomes in language models is fundamentally connected to prompt engineering." -chain of thought prompting elicits reasoning in large language models,9,"The abstract directly discusses the impact of 'chain of thought prompting' on the performance of large language models. Given that 'chain of thought prompting' is a technique used in prompt engineering to elicit detailed reasoning from language models, and the abstract indicates significant performance improvements on complex tasks, it is highly relevant to the study of prompt engineering. It may not score a perfect 10 as it is not exclusively focused on 'hard prefix prompts' which might be a more specialized subset of prompt engineering." -pop quiz! can a large language model help with reverse engineering?,8,"The abstract discusses the use of prompting techniques with Codex, a large language model, to investigate its utility in reverse engineering tasks.
This falls under the broader category of 'prompt engineering' as it involves the strategic formulation of prompts to elicit specific information from a language model regarding code comprehension. The study's focus on the model's response to these prompts and the development of a structured quiz to measure its performance is highly relevant to understanding how different prompt strategies might affect the outcome of interactions with AI. However, it is not precisely about 'hard prefix prompts', which suggests a more specialized aspect of prompt engineering, hence the deduction of 2 points." -lessons learned from gpt-sw3: building the first large-scale generative language model for swedish,7,"While the primary focus of the paper seems to be on the development and evaluation of a Swedish language model (GTP-SW3), it is mentioned that an 'extensive prompting study' was part of the research. Although the details of the prompting study are not provided, it suggests that there was an investigation into how the model responds to different prompts, which is relevant to prompt engineering. The rating isn't higher because the prompt study is not the central focus of the paper and without more information on the 'hard prefix prompts' aspect, the overall relevance to the specific area of prompt engineering study mentioned cannot be fully assessed." -dehallucinating large language models using formal methods guided iterative prompting,8,"The abstract describes a study focused on refining the prompting process to reduce 'hallucinations' in large language models, such as ChatGPT, especially for safety-critical applications. Although it doesn't specifically mention 'hard prefix prompts,' the study's aim to create an architecture for iterative prompting and self-monitoring to ensure the accuracy of the models' responses is relevant to prompt engineering. Prompt engineering involves crafting prompts to obtain better performance from language models, and the research on reducing hallucinations can be seen as an advanced form of prompt engineering. The paper's relevance is not a perfect 10, as it doesn't directly address hard prefix prompts but instead looks at a broader issue within prompt engineering itself." -htlm: hyper-text pre-training and prompting of language models,8,"The abstract describes the development and advantages of the HTLM model which is relevant to prompt engineering insofar as it discusses the model's improved efficiency with hyper-text prompts over plain text prompts. This indicates a focus on how different formats of prompts influence the performance of language models. It also touches on 'structured prompting' which is a key aspect of prompt engineering. The relevance is not a perfect 10 since the study is about hyper-text specific prompting rather than 'hard prefix prompts' in general, but the study is still highly pertinent to the field of prompt engineering." -pointclip v2: prompting clip and gpt for powerful 3d open-world learning,7,"The study discusses utilizing both CLIP and GPT models in unison to enhance 3D open-world learning, with specific emphasis on zero-shot learning capabilities in classification, segmentation, and detection tasks. The relevance to prompt engineering is evident in the methodology where the authors design prompts for both the visual (CLIP) and textual (GPT) components to align 3D data with the pre-trained language knowledge. 
This indicates an element of prompt engineering to facilitate the interface between visual and language models for processing 3D point cloud data. Nevertheless, the study appears to be more focused on the application of these models in the 3D domain rather than specifically on the engineering of prompts. Hence, while prompt engineering is a component of the paper, it is not the core focus, which is why the rating is not higher." -towards facet-driven generation of clarifying questions for conversational search,8,"The study described in the provided title and abstract demonstrates relevance to prompt engineering as it involves generating clarifying questions in response to user queries using a fine-tuned GPT-2 language model. This is closely related to prompt engineering as it requires careful design of prompts, or inputs, to the language model to ensure that the generated questions are coherent, relevant, and useful in the context of conversational search. While the main focus of the paper seems to be on the generation of clarifying questions rather than on hard prefix prompts specifically, the techniques and findings are likely applicable to prompt engineering studies, especially those concerned with improving interaction patterns with AI systems through conversational interfaces. The only reason the rating isn't higher is because 'hard prefix prompts' isn't explicitly mentioned, but the methodology and goals are nevertheless aligned with the principles of prompt engineering." -learning to prompt clip for monocular depth estimation: exploring the limits of human language,9,"The study is highly relevant to prompt engineering as it explores the efficiency of CLIP—a model trained on language and vision inputs—when prompted for a specialized task like Monocular Depth Estimation. The research discusses replacing human-language prompts with continuous learnable tokens, which directly pertains to prompt engineering by investigating alternative ways to communicate with AI models. It demonstrates how prompt design can influence performance and understanding of AI models, which is a central concern of prompt engineering studies. The fact that it also touches upon the limitations of human language in prompts and investigates non-linguistic tokens is a novel contribution to the field." -efficiently enhancing zero-shot performance of instruction following model via retrieval of soft prompt,8,"The described study focuses on the use of soft prompts to improve the zero-shot performance of instruction-following models, specifically mentioning the assistance of these soft prompts to hard prompts. This is relevant to prompt engineering as the research is exploring an innovative approach to optimize how prompts are used, which lies at the core of prompt engineering. The relevance is not maximized (10 out of 10) because the study does not directly focus on 'hard prefix prompts' as specified in the original query but is sufficiently related as it investigates the conjunction of soft and hard prompts in the context of model tuning and performance enhancement. Therefore, it contributes valuable insights to the broader field of prompt engineering studies." -enhancing class understanding via prompt-tuning for zero-shot text classification,8,"The paper is highly relevant to prompt engineering as it proposes a method that explicitly uses prompts to enhance semantic understanding in zero-shot text classification tasks. 
This approach falls within the scope of prompt engineering as it involves the generation of discriminative words (presumably prompts) and a matching model conditioned on prompts. The study focuses on enhancing class understanding which is a key aspect of prompt-based models, although it does not specifically mention 'hard prefix prompts', which was the focus of the original prompt." -prompt-based zero-shot relation classification with semantic knowledge augmentation,9,"The abstract describes a study focused on leveraging prompt-based approaches along with semantic knowledge to address the challenge in relation classification, especially for unseen relations under a zero-shot setting. The methodology described involves creating prompts that incorporate semantic knowledge from an external knowledge graph and using these to train a model. This aligns closely with the field of prompt engineering as it specifically addresses the development and use of prompts to guide model performance in a challenging AI task. The reason for not giving a full 10 is due to the absence of specific mention of 'hard prefix prompts,' which may indicate this study does not focus exclusively on that aspect of prompt engineering." -bayesian sharpness-aware prompt tuning for cross-domain few-shot learning,8,"The paper presents a novel approach to prompt tuning, specifically Bayesian Sharpness-Aware Prompt Tuning (BSAPT), within the context of few-shot learning and domain adaptation. This is highly relevant to prompt engineering as it directly focuses on enhancing the method through which prompts are constructed and tuned, a core aspect of prompt engineering studies. The application to cross-domain few-shot learning demonstrates an advanced utilization of prompt engineering techniques. The rating is not a full 10 because the abstract suggests a specific application of prompt engineering rather than a comprehensive study of hard prefix prompts in general." -language models as zero-shot planners: extracting actionable knowledge for embodied agents,8,"The paper is highly relevant to prompt engineering as it explores the use of language models to interpret and execute high-level tasks by breaking them down into actionable steps. This indicates a level of prompt engineering where the model is not only responding to prompts but is being evaluated on its ability to translate prompts into a sequence of actions in a simulated environment. Although the title does not explicitly mention 'hard prefix prompts', the concept of prompt engineering is central to the study as it requires effective prompts to guide the model in generating plans that can map to executable actions. The study's focus on grounding tasks and improving the executability of plans derived from language models is at the core of advanced prompt engineering techniques." -slot dependency modeling for zero-shot cross-domain dialogue state tracking,8,"The study's focus on utilizing slot prompts combination in dialogue state tracking is highly relevant to prompt engineering due to its emphasis on prompt construction for capturing dependencies and domain knowledge in natural language processing tasks. Although it is not directly focused on 'hard prefix prompts', the principles of designing and utilizing prompts for zero-shot learning are closely related to prompt engineering, hence the high relevance rating." 
-multitask prompted training enables zero-shot task generalization,9,"The provided abstract discusses the development of a system for mapping natural language tasks into a prompted form and explicitly training a model on a diverse set of prompts. This is highly relevant to prompt engineering as it explores the creation and use of different prompts to achieve zero-shot task generalization. The focus on prompted datasets is directly tied to the study of how prompts affect language model behavior, a core aspect of prompt engineering. The relevance is not a full 10 because the abstract does not specifically mention 'hard prefix prompts', which could be a more narrow subtopic within prompt engineering." -generating variable explanations via zero-shot prompt learning,8,"The abstract addresses the use of 'zero-shot prompt learning' as a central method in generating explanations for variables in programming, which is relevant to the field of prompt engineering. Prompt engineering typically involves designing and refining prompts to improve interaction with AI models, and the study’s focus on leveraging prompts in a zero-shot context to enhance program comprehension is closely related. However, it does not specifically address 'hard prefix prompts' which would be more directly related to the exact terminology in the prompt engineering study. Hence, a couple of points are deducted for the specialized focus on variable explanations rather than the actual construction or analysis of prompt formats or their impacts in broader applications." -an exploration of prompt-based zero-shot relation extraction method,8,"The relevance to prompt engineering is high because the work involves prompt-tuning, a technique directly related to prompt engineering. It suggests optimizing a model for zero-shot relation extraction by utilizing prompts which influence the model's predictions. Although it's not specifically about 'hard prefix prompts' as the original prompt indicates, prompt-tuning is a subset of prompt engineering and thus highly relevant to studies of prompts and their impact on model performance. The rating is not a full 10 due to the abstract being unavailable ('nan'), which limits the ability to fully assess the relevance, and the absence of direct mention of 'hard prefix prompts', which the original study prompt seems to specify." -prompt-based zero-shot relation extraction with semantic knowledge augmentation,8,"The paper discusses a prompt-based model, which is highly relevant to the field of prompt engineering, particularly in the context of zero-shot learning. The focus on generating prompts with semantic knowledge integration touches on a core area of how prompts can be engineered to improve task performance in natural language processing. The relevance score is not a full 10 because the study seems to emphasize the zero-shot relation extraction aspect alongside prompt engineering, rather than being exclusively focused on the methodologies for creating and optimizing prompts (i.e., hard prefix prompts). Nevertheless, the paper still offers substantial insight into the application of prompt engineering concepts." -injecting commonsense knowledge into prompt learning for zero-shot text classification,8,"The provided abstract is relevant to prompt engineering to a significant extent. The research discusses enhancing prompt learning for NLP tasks in scenarios with limited data by injecting commonsense knowledge from a Knowledge Graph (KG) into Pre-trained Language Models (PLMs). 
While this does not directly reference 'hard prefix prompts', it does focus on the improvement of prompts (referred to as verbalizer) used in NLP models. Since prompt engineering generally deals with methods for designing and improving prompts to make them more efficient for language models, this research contributes to the wider field of study by proposing a method to enrich prompts with commonsense knowledge for better performance in zero-shot text classification." -knowledge-embedded prompt learning for zero-shot social media text classification,7,"The title and abstract detail a study that focuses on prompt learning which is an aspect of prompt engineering, specifically within the context of zero-shot text classification for social media. While it does not explicitly mention 'hard prefix prompts', it does discuss embedding knowledge within the prompts, which suggests a degree of specificity and deliberation in prompt design that is relevant to the field of prompt engineering. The method seems to enhance the model's performance without large datasets by using prompts effectively, which is a core concern in prompt engineering studies. Therefore, the relevance to prompt engineering is fairly high, but it might be less relevant to a systematic review specifically focused on 'hard prefix prompts'." -spteae: a soft prompt transfer model for zero-shot cross-lingual event argument extraction,8,"The abstract discusses 'SPTEAE', a model which utilizes tunable vectors as prompts, indicating a level of relevancy to prompt engineering. The focus on soft prompts and the mechanism of transferring knowledge from a source language to a target language via prompts are of particular interest to prompt engineering studies, especially in the context of zero-shot cross-lingual tasks. Although the study does not deal with hard prefix prompts directly, the concept of prompt transfer and the use of event type prompts are relevant to the broader field of prompt engineering. The rating is not a full 10 as the specific emphasis of the study is on zero-shot cross-lingual event argument extraction rather than a general exploration of prompt engineering or hard prefix prompts." -prompt-ner: zero-shot named entity recognition in astronomy literature via large language models,8,"The study described in the title and abstract is highly relevant to prompt engineering as it proposes and evaluates a prompt-based strategy (Prompt-NER) for enhancing zero-shot Named Entity Recognition (NER) using Large Language Models (LLMs). Although the application is specific to astronomy literature, the methodology and findings can contribute valuable insights to the broader field of prompt engineering, especially in the development and application of prompts for domain-specific zero-shot learning tasks." -weakly supervised few-shot and zero-shot semantic segmentation with mean instance aware prompt learning,8,"The abstract describes a novel approach in semantic segmentation that leverages language-guided segmentation techniques, which is directly related to prompt engineering as it involves learning from class prompts. However, the focus seems to be more on the application of prompt learning for weakly supervised few-shot and zero-shot semantic segmentation rather than a comprehensive study of hard prefix prompts. The relevance is high as prompt engineering is essential to the proposed MIAPNet system, but it is not a systematic review of hard prefix prompts." 
-anomalyclip: object-agnostic prompt learning for zero-shot anomaly detection,7,"The abstract describes AnomalyCLIP, a novel approach to adapting the CLIP model for zero-shot anomaly detection by learning object-agnostic text prompts. Although the main focus is on improving anomaly detection, the method involves prompt engineering specifically designed to capture generic concepts of normality and abnormality in images, which is relevant to the study of prompt design and effectiveness. The rating is not a full 10 because the primary application is anomaly detection rather than prompt engineering itself, but the method provides valuable insights into prompt engineering within the context of zero-shot learning." -enhancing zero-shot crypto sentiment with fine-tuned language model and prompt engineering,8,"The abstract provided focuses on the enhancement of sentiment analysis for cryptocurrencies using fine-tuned language models and an investigation into the efficacy of different instruction-based fine-tuning methods. The relevance to prompt engineering lies in the part of the study that examines instruction tuning, which is a form of prompt engineering, as it entails optimizing the instructions given to the model to improve its performance on unseen tasks. Also, it discusses the impact of short and simple versus long and complex instructions on the performance of language models. However, it doesn't explicitly mention the term 'hard prefix prompts,' which suggests that the paper might not delve into that specific area of prompt engineering, instead covering a broader range of instruction-based fine-tuning strategies. Therefore, the relevance is high but not complete, as the connection to 'hard prefix prompts' is not clearly established." -empowering sentence encoders with prompting and label retrieval for zero-shot text classification,9,"The study is highly relevant to prompt engineering as it addresses the enhancement of sentence encoders using prompted label candidates. Additionally, the incorporation of retrieval-based methods to refine the label prompts directly relates to the concept of hard prompts in prompt engineering. Although the study does not exclusively focus on 'hard prefix prompts', the general exploration of leveraging prompts in the context of zero-shot text classification closely aligns with the topic of prompt engineering. The retrieval-augmented approach (RaLP) presented in the study exemplifies a practical application of prompt engineering in improving model performance without the need for fine-tuning on specific tasks. The only reason it does not receive a full score is that it doesn't focus solely on 'hard prefix prompts', but instead encompasses a broader range of prompting techniques." -dialogue state tracking with zero-shot and few-shot learning for generalization: a review,7,"The paper's abstract suggests that one of the categories reviewed in the study is 'DST using a prompt,' which directly relates to prompt engineering as it likely involves the use of prompts to improve the performance of dialogue state tracking models. The relevance to prompt engineering is significant since the study appears to include a systematic review of this method among others. However, the abstract does not focus solely on 'hard prefix prompts' as specified in the initial query, indicating that while relevant, it may not cover the full scope of 'hard prefix prompts.' Therefore, the rating is not a full 10." 
-kbpt: knowledge-based prompt tuning for zero-shot relation triplet extraction,7,"Despite the absence of an abstract or TLDR, the title indicates the study is related to 'knowledge-based prompt tuning,' which falls under the broader scope of prompt engineering. The application of prompt tuning for zero-shot relation triplet extraction suggests an advanced use of prompts to improve model performance without extra training data which is relevant to prompt engineering. However, without additional information on the study's methodology or results, a full assessment of relevance cannot be completed, thus the rating cannot be maximized." -zero-shot learning by generating task-specific adapters,7,"The relevance to prompt engineering is fairly high as the abstract describes a novel approach to zero-shot learning that includes utilizing task descriptions as prompts, which could be seen as related to 'hard prefix prompts' in the context of designing inputs that guide the model's predictions. The study focuses on improving the model's ability to generalize to new tasks through a meta-learning framework, which aligns with the concept of improving the effectiveness of prompts in a zero-shot learning setting. However, it does not explicitly address 'hard prefix prompts' in any systematic review manner, which would be necessary for a 10 rating. Nonetheless, the connection to prompt engineering is clear enough to warrant a relatively high rating." -domain-aware continual zero-shot learning,7,"The abstract indicates that the study involves a 'class-wise learnable prompt' which is relevant to prompt engineering as it relates to the generation of text representations for facilitating zero-shot learning. However, the focus of the study seems to be more on addressing challenges of domain awareness and continual learning in the context of zero-shot learning, rather than on hard prefix prompts specifically. Therefore, while it is relevant due to its inclusion of a learnable prompt component for class representation, it does not appear to be a comprehensive systematic review or focus directly on hard prefix prompts in prompt engineering, hence the rating of 7 instead of a full 10." -zero-shot cross-lingual summarization via large language models,7,"The reported study is directly related to prompt engineering as it involves using prompts to guide Large Language Models in the task of zero-shot cross-lingual summarization. The relevance is high because it assesses how well prompts can improve the performance of LLMs in a complex task that combines translation and summarization. Nonetheless, the study's primary focus is on cross-lingual summarization rather than on the depth of prompt engineering mechanisms like hard prefix prompts, which reduces the relevance rating slightly." -align your prompts: test-time prompting with distribution alignment for zero-shot generalization,9,"The provided abstract is highly relevant to prompt engineering study, especially in the context of zero-shot generalization and prompt tuning to align feature distributions between source and test data, which are key components of prompt engineering. The paper discusses a specific method of prompt tuning that takes distribution shift into account, a topic that is directly related to the engineering and optimization of prompts for better performance in unseen domains. 
The only reason it doesn't receive a full 10 is that it doesn't specifically mention 'hard prefix prompts', which was the specific focus mentioned in the initial prompt, but it still seems to represent a significant contribution to the field of prompt engineering broadly." -instruction distillation makes large language models efficient zero-shot rankers,8,"The abstract discusses the instruction distillation method as a means of improving efficiency and performance in zero-shot relevance ranking by LLMs, which is directly related to prompt engineering. This research tackles the issues of complexity and inefficiency in typical prompt-based ranking methods by simplifying instructions. However, it does not focus solely on 'hard prefix prompts,' but rather on instruction distillation for overall efficiency and performance enhancement in a broader context. Thus, the relevance is high but not entirely focused on the specific subtopic of hard prefix prompts." -locally differentially private document generation using zero shot prompting,8,"The abstract discusses the use of 'zero-shot prompting' with pretrained language models to address privacy concerns, which is relevant to prompt engineering. The introduction of DP-Prompt as a mechanism relies on the strategic use of prompts to enhance privacy while maintaining utility. Although the focus is more on privacy preservation than on prompt engineering in itself, the application of zero-shot prompting techniques is at the core of the study, earning a high relevance rating. However, it isn't exclusively focused on 'hard prefix prompts' or a comprehensive systematic review of such prompts, therefore the rating is not a full 10." -supplementary - i2mvformer: large language model generated multi-view document supervision for zero-shot image classification,7,"The abstract discusses the use of a large language model (LLM) for prompting strategy in the context of zero-shot image classification. Although it does not directly reference 'hard prefix prompts' or a 'systematic review', the mention of LLM prompting strategies and the analysis of their robustness is relevant to the broader field of prompt engineering. The abstract suggests an investigation into the effectiveness of different prompts, which is a central concern of prompt engineering studies. Therefore, the relevance rating is moderately high, as the content could provide valuable insights for those studying how prompts can affect the performance of AI models, even though it is not a direct match for a study focused specifically on 'hard prefix prompts'." -a setwise approach for effective and highly efficient zero-shot ranking with large language models,8,"The abstract details a study on zero-shot ranking with Large Language Models (LLMs) through the use of different prompting approaches (Pointwise, Pairwise, Listwise, and a novel Setwise approach). Although the study does not specifically mention 'hard prefix prompts,' it does deeply engage with prompt engineering for zero-shot tasks in LLMs. Since prompt engineering is essential in operationalizing these models for specific tasks, and the study clearly contributes to understanding and innovating in this field, it has high relevance to prompt engineering study. However, it does not directly address 'hard prefix prompts,' hence the rating is not a perfect 10." 
-reducing negative effects of the biases of language models in zero-shot setting,7,"The paper is relevant to prompt engineering as it addresses the issue of biases in language models, particularly GPTs, which is a key concern when engineering prompts for zero-shot settings. By proposing a method to reduce bias through the use of probing samples and a Calibration Adapter, the study is relevant to the prompt engineering field as it contributes to the development of more fair and balanced prompting strategies. However, the primary focus seems to be on model calibration rather than on designing or structuring prompts, hence the rating is not a perfect 10." -beyond yes and no: improving zero-shot llm rankers via scoring fine-grained relevance labels,9,"The paper discusses improving zero-shot text rankers by refining the prompting mechanism used in large language models (LLMs), specifically by introducing fine-grained relevance labels instead of binary ones. This is highly relevant to prompt engineering as it directly involves optimizing the way prompts are structured to achieve better performance in text ranking tasks. The incorporation of more nuanced labels is a method of prompt engineering aimed at enhancing the model's capability to assess relevance. The study's focus on prompting strategies and its impact on the model's output makes it pertinent to the field of prompt engineering study, hence the high score." -exploring grounding potential of vqa-oriented gpt-4v for zero-shot anomaly detection,7,"The abstract details a study focused on the application of a Large Multimodal Model (GPT-4V) for anomaly detection using the Visual Question Answering paradigm, which includes an aspect of 'Prompt Designing' as one component of the proposed framework. This directly relates to prompt engineering as it involves designing prompts to effectively interact with AI models. However, the study's primary focus seems to be on the application of the model to anomaly detection rather than the intricacies or methodologies behind prompt engineering. Therefore, while prompt engineering is a component of the study, it is not the central theme, which is why the relevance is rated as a 7 rather than a full 10." -zero-shot learning for named entity recognition in software specification documents,8,"The abstract discusses the application of zero-shot learning to Named Entity Recognition (NER) in the context of software specification documents. One of the two zero-shot approaches mentioned employs prompt engineering, achieving a high accuracy of 93%. The relevance to prompt engineering is high because the study specifically involves the use of prompt engineering techniques in an NER task, which is a significant part of language model application. However, the relevance is not rated as a full 10 because the abstract also describes a second approach that diverts from prompt engineering and is based on transforming the problem into a question-answering task. Therefore, while prompt engineering is a central theme, it is not the exclusive focus of the study." -amortized prompt: lightweight fine-tuning for clip in domain generalization,7,"The abstract discusses the use of prompt generation as a novel approach for domain inference with an emphasis on improving domain generalization in image classification using the CLIP model. This is relevant to prompt engineering, as it describes developing a method (Amortized Prompt) related to creating and utilizing prompts to enhance model performance without fine-tuning. 
Although the study appears to focus more broadly on domain generalization and does not specifically address 'hard prefix prompts,' the concept of prompt generation within this context is still within the domain of prompt engineering, hence the rating of 7. The absence of a direct mention of 'hard prefix prompts' means it is not entirely focused on that specific aspect of prompt engineering, thus not receiving a full score." -understanding prompt engineering may not require rethinking generalization,8,"The provided abstract directly involves the study of prompt engineering within the context of zero-shot learning and vision-language models. It discusses the impact of manual prompt crafting on generalization performance and how classical PAC-Bayes bounds can explain the success of such methods. Although the specific term 'hard prefix prompts' is not mentioned, the abstract's focus on the structural aspects of prompt design and their implications for model performance is highly relevant to the field of prompt engineering. The TLDR further emphasizes the significance of the discrete nature of prompts and language model priors in maintaining tight generalization bounds, which are central considerations in prompt engineering studies." -prompt sketching for large language models,9,"The provided abstract for 'prompt sketching for large language models' discusses an innovative prompting strategy that involves generating a template with variables that the LLM predicts values for, which directly relates to engineering better prompts for LLMs. The approach aims to address issues with current prompting strategies that result in disconnected and verbose responses by proposing a more structured interaction with the model via templated prompts. The abstract mentions the improvement in performance on various benchmarking tasks, indicating a substantial contribution to the study of prompt engineering. The paper's focus on optimizing the generation process and providing control over the model's output through a novel prompting paradigm makes it highly relevant to the field. It is rated slightly less than 10 because the prompt specifically asks for a review on 'hard prefix prompts', and it is not explicitly clear from this abstract whether prompt sketching falls into that category. However, the general relevance to prompt engineering study is evident." -strength in numbers: estimating confidence of large language models by prompt agreement,9,"The paper discusses a method to improve confidence estimates for language model predictions by using a variety of prompts, which is highly relevant to the field of prompt engineering. The study focuses on the generation of multiple prompts to enhance the reliability of large language model outputs, which directly pertains to the design and usage of prompt strategies to elicit more accurate responses from these models. The relevance is not a full 10 only because it does not specifically mention 'hard prefix prompts' but rather the broader concept of improving confidence estimation through the use of diverse prompts." -the language of prompting: what linguistic properties make a prompt successful?,9,"The described study directly relates to prompt engineering because it investigates how linguistic properties of prompts affect the performance of language model tasks. It focuses on the nuances of prompt design, which is a core aspect of prompt engineering, aiming to understand what makes a prompt effective. 
This is highly relevant as it contributes to the development of guidelines and standards for prompt creation, essential for refining the prompt engineering process. The only reason it does not receive a perfect score is that it does not specify 'hard prefix prompts' but prompts in general, which could include a variety of types beyond the hard prefix category." -modal interaction-enhanced prompt learning by transformer decoder for vision-language models,9,"The title suggests that the study introduces a prompt tuning method specifically designed for improving the performance of transformer decoders in vision-language models. This is highly relevant to prompt engineering as it deals with enhancing model interaction with prompts. Although the term 'hard prefix prompts' from the original query is not explicitly mentioned, the nature of the study seems to be closely related to developing and enhancing prompting strategies. Hence, the relevance rating is high. The abstract being 'nan' does not provide additional information, but the TLDR suggests that the method being proposed has shown improved performance over a baseline model, indicating that this research contributes valuable insights to the field of prompt engineering." -does gpt-3 generate empathetic dialogues? a novel in-context example selection method and automatic evaluation metric for empathetic dialogue generation,8,"The provided abstract directly relates to prompt engineering as it discusses the exploration of GPT-3's ability to generate empathetic dialogues through prompt-based in-context learning, which is a part of the field of prompt engineering. The study's investigation of novel in-context example selection methods and the introduction of a new automatic evaluation metric are also relevant to the development and optimization of prompts, which are essential for fine-tuning the performance of language models in specific tasks. Although it doesn't mention hard prefix prompts specifically, the focus on in-context learning and prompt-based methods makes it highly relevant to the broader field of prompt engineering in the context of empathetic dialogue generation." -cliptexture: text-driven texture synthesis,8,"The abstract discusses a texture synthesis framework that utilizes language-based controls to guide the synthesis process, which is relevant to prompt engineering. The use of text prompts to influence the output of an AI model aligns closely with prompt engineering principles, where the goal is to effectively communicate an intended outcome to the model through language. However, this paper specifically focuses on texture synthesis in images rather than prompt engineering as a broader field of study, hence the rating is not a perfect 10." -effects of target words and their locations in prompts,9,"The researched document is highly relevant to the field of prompt engineering as it directly investigates the effects of target words and their placement within prompts, which are critical components in constructing effective prompts for language models. The study's examination of different prompt structures and their outcomes on model performance, as well as comparisons between models that are instruction tuned (T0) and those that are not (ALBERT), provide valuable insights into prompt design strategies. The focus on varying difficulties and tasks, including NLI, coreference resolution, sentence completion, and multiple choice Q&A, further underscores the study's comprehensive approach to understanding prompt engineering. 
Although the title does not specifically mention 'hard prefix prompts,' the abstract indicates a thorough examination of prompt-related factors which are indeed pertinent to the study of prompt engineering. The only reason it's not a full 10 is that the thesis does not seem to exclusively focus on 'hard prefix prompts,' which could be construed as a specific type of prompt from the title of the systematic review." -generating domain-specific programs for diagram authoring with large language models,8,"The study addresses the concept of engineering prompts specifically for one-shot learning with Large Language Models (LLMs) to generate domain-specific language (DSL) programs, which is relevant to prompt engineering. Developing structured prompts that can effectively guide LLMs, like the study's use of LLMs for Penrose diagram creation from prose, illustrates a practical application of prompt engineering. This process is central to optimizing LLM performance in specific tasks, thus the high relevance. However, the provided title and abstract do not mention 'hard prefix prompts' as a focused subject within the realm of prompt engineering, which would align directly with the systematic review of hard prefix prompts. Instead, it discusses prompt structures for DSL program creation in general, which may not comprehensively cover all aspects of prompt engineering or the specific topic of hard prefix prompts, leading to a rating slightly less than perfect." -rewriting math word problems with large language models,9,"The abstract provided describes a study where Large Language Models, specifically GPT-4, were used to rewrite math word problems, following the same guidelines as human authors. It directly relates to prompt engineering as it involves developing and comparing different prompting strategies like zero-shot, few-shot, and chain-of-thought. Furthermore, it discusses the process of encoding mathematical components using GPT's capacity to write Python code, which is an essential aspect of prompt engineering when dealing with specialized tasks such as math word problems. Although the primary focus is on improving learning outcomes rather than prompt optimization, the process of refining the prompts to achieve high-quality rewrites is squarely within prompt engineering methodology. The reason for not rating it a full 10 is that the primary outcome seems to be focused on educational efficacy rather than the refinement of the prompt engineering itself." -eliciting knowledge from language models for event extraction,8,"The paper is clearly relevant to prompt engineering as it discusses the use of prompt-based learning to elicit knowledge from language models for a complex NLP task like event extraction. Designing such prompts is closely related to the concept of prompt engineering, which involves crafting inputs that help elicit desired responses from the model. Although the paper might not focus solely on 'hard prefix prompts' as per the original systematic review topic, it pertains to the general field of study of how prompts can be engineered to improve the extraction of information from language models. The deduction of two points in rating reflects that while it is highly relevant, it might not cover 'hard prefix prompts' specifically if that were the exclusive focus of the review.
-autographex: zero-shot biomedical definition generation with automatic prompting,8,"The abstract discusses a zero-shot definition generation model that leverages prompting with pre-trained language models, specifically in the context of biomedical terminology. While it does not explicitly mention 'hard prefix prompts', it does relate to prompt engineering as it involves automatically generating prompts to facilitate knowledge elicitation from language models. This is highly relevant to studies exploring various aspects of prompt engineering, although it may not address 'hard prefix prompts' directly. The high relevance is due to the focus on automatic prompting, which is a subset of prompt engineering. The rating is not a full 10 as the abstract does not cover the full breadth of prompt engineering, specifically not mentioning the term 'hard prefix prompts'." -relational representation learning for zero-shot relation extraction with instance prompting and prototype rectification,7,"The paper's focus on Instance Prompting as a method to bridge the gap between pre-training and fine-tuning for relation extraction aligns with techniques used in prompt engineering, particularly in the context of tailoring model outputs to specific tasks without extensive additional training data (zero-shot learning scenarios). Additionally, the mechanism of guiding pre-trained models to generate more task-specific representations is akin to the notion of constructing prompts to elicit desired responses from a model. However, the paper does not explicitly address 'hard prefix prompts' or the systematic review of prompt engineering as a broader field, thereby receiving a moderate score instead of a higher one for full relevance." -prompting scientific names for zero-shot species recognition,7,"The study is relevant to prompt engineering because it explores how different forms of prompts (using scientific names vs. common English names) can affect the performance of Vision-Language Models like CLIP in zero-shot species recognition tasks. Although it doesn't focus specifically on 'hard prefix prompts,' it directly examines the impact of prompt design on model accuracy, which is a significant aspect of prompt engineering. The study’s findings that common names yield better results than scientific names for prompts provide insight into effective strategies for prompt creation, thus contributing to the field of prompt engineering." -zero-shot faithfulness evaluation for text summarization with foundation language model,8,"The paper's relevance to prompt engineering study is high, since it investigates the use of a new metric, FFLM, which involves prefixing text to evaluate faithfulness in text summarization. This approach is directly related to how prompts, including hard-coded prefixes, can be engineered to improve the predictions of a language model. Although the main focus is on faithfulness evaluation rather than the study of prompts in general, the use of prefixes is a significant component of prompt engineering techniques." -can an embodied agent find your “cat-shaped mug”? llm-guided exploration for zero-shot object navigation,7,"The abstract describes 'Language-guided Exploration' (LGX), which is a novel algorithm that uses Large Language Models (LLMs) to assist an embodied agent in zero-shot object goal navigation. The relevance to prompt engineering is significant in that it involves leveraging LLMs and employing various prompting strategies to improve sequential navigational decisions.
The study of different prompting strategies directly pertains to prompt engineering, as it impacts how the language model guides the agent. While the primary focus of the study seems to be on robot navigation and object detection, the aspects where LLMs are being utilized and prompting strategies are analyzed contributes to the field of prompt engineering studies, hence the rating of 7. However, it's not exclusively focused on hard prefix prompts or a comprehensive systematic review of such prompts in prompt engineering, which would have resulted in a higher rating." -is evalita done? on the impact of prompting on the italian nlp evaluation campaign,8,"The provided title and abstract directly relate to prompt-based learning, a key component of prompt engineering. The study assesses the efficacy of these prompts in Italian NLP tasks, which contributes to the understanding of prompt-based learning within a specific linguistic context. Although the study is more focused on the applications and implications for evaluation campaigns, rather than the methodological exploration of 'hard prefix prompts', it remains significantly relevant to the field of prompt engineering, especially in demonstrating the practical implications and current challenges in the field." -arggen: prompting text generation models for document-level event-argument aggregation,8,"The paper is highly relevant to prompt engineering since it discusses the use of prompt-based methods for text generation in Information Extraction tasks, specifically for document-level event-argument aggregation. This demonstrates a practical application of prompt engineering in natural language understanding and reasoning, which aligns with the broader topic of prompt engineering study. However, it may not directly address the systematic review of 'hard prefix prompts,' hence the rating is not a full 10." -set-of-mark prompting unleashes extraordinary visual grounding in gpt-4v,7,"The relevance of this study to prompt engineering lies in its methodology of using a visual prompting method (Set-of-Mark or SoM) to improve the performance of a language model with visual capabilities (GPT-4V). Although the study is focused on enhancing the visual grounding aspects of multimodal models, it does indirectly relate to the broader concept of prompt engineering by demonstrating a specific way to structure input (in this case, visual input) to achieve better performance on tasks that require understanding and interpreting visual information. Thus, the study is somewhat relevant as it expands the scope of prompt engineering into the multimodal domain, demonstrating that the structuring of prompts is important not just in text but also in how models interact with and interpret visual data." -legal syllogism prompting: teaching large language models for legal judgment prediction,8,"The paper focuses on a specific application of prompt engineering in the context of legal judgment prediction using a technique named 'legal syllogism prompting'. Although it is not about 'hard prefix prompts' per se, it explores a similar area by using prompts to direct the response of large language models. This is relevant to prompt engineering as it demonstrates the application of custom prompts to structure logical reasoning in AI, which is in line with the broader study of how prompts can be designed to elicit specific types of responses from language models. 
The systematic review on hard prefix prompts would likely cover various approaches in prompt engineering including such domain-specific applications; hence, the paper could offer valuable insights into this niche but relevant application within the field." -ramp: retrieval and attribute-marking enhanced prompting for attribute-controlled translation,9,"The study presents 'Retrieval and Attribute-Marking enhanced Prompting (RAMP)', a method that modifies and enhances the standard prompting approach in the context of machine translation, specifically for attribute-controlled translation. The inclusion of attribute annotations and the use of a semantic retrieval component are innovative strategies within prompt engineering. This approach is relevant to prompt engineering as it directly involves manipulating and engineering prompts to improve performance on a language task. It is particularly focused on prompting in the context of large language models, which is a core area of interest in prompt engineering studies. Although the paper is focused on translation tasks, the techniques and concepts discussed may be applicable to prompt engineering in broader contexts as well." -pieclass: weakly-supervised text classification with prompting and noise-robust iterative ensemble training,8,"The paper discusses PIEClass, which includes a pseudo label acquisition module utilizing zero-shot prompting of pre-trained language models (PLMs). This is relevant to prompt engineering because it involves using prompts to facilitate text classification in the absence of extensive datasets. It shows an application of prompt engineering in enhancing understanding beyond static keyword matching, which is a core challenge in the field. The iterative ensemble training module, while interesting as an approach to classifier training, is less directly related to prompt engineering. Hence the score is an 8 instead of a perfect 10, as the relevance is strong but not exclusively focused on prompt engineering." -map: low-data regime multimodal learning with adapter-based pre-training and prompting,7,"The study discusses the use of prompting in the context of vision-language multimodal learning, which is pertinent to prompt engineering. The focus on a moderate-size model (MAP) that leverages adapter-based pretraining and prompting for efficient transfer learning in a low-data regime demonstrates the application of prompting strategies. While the specifics of 'hard prefix prompts' are not mentioned, the concept of prompting is central to the paper, thereby making it relevant to the broader field of prompt engineering studies. However, the relevance is not maximal since the primary focus seems to be on the application of prompting within multimodal learning and not on the systematic review of the prompt engineering itself." -cof-cot: enhancing large language models with coarse-to-fine chain-of-thought prompting for multi-domain nlu tasks,8,"The presented work introduces the Coarse-to-Fine Chain-of-Thought (CoF-CoT) approach as a form of prompt engineering which is highly relevant to the field. It focuses on enhancing the reasoning capabilities of Large Language Models in Natural Language Understanding tasks. While the study might not directly address 'hard prefix prompts,' it proposes a novel way of structuring prompts that allow for a breakdown of tasks into multiple reasoning steps. 
This is inherently connected to the concept of prompt engineering, as it involves designing prompts that guide the model through a reasoning process, thus fitting well within the scope of prompt engineering studies. The reason for not rating it a 10 is because it doesn't explicitly state a focus on 'hard prefix prompts,' which the original query specified, but it is nonetheless substantially relevant." -a communication theory perspective on prompting engineering methods for large language models,9,"The provided title and abstract offer a high level of relevance to the field of prompt engineering study as it directly discusses prompting methods for large language models, an essential component of prompt engineering. It suggests a novel perspective by framing the review within communication theory, which is crucial for understanding the interactions between humans and AI in the PE context. Additionally, the abstract references practical use-cases in the form of typical tasks and discusses the future developments in PE methodologies, all of which are core to the study of prompt engineering. The only reason it doesn't receive a full score is due to the lack of specific detail on 'hard prefix prompts', which is mentioned in the prompt. However, the general connection to PE is strong, justifying the high rating." -2nd place winning solution for the cvpr2023 visual anomaly and novelty detection challenge: multimodal prompting for data-centric anomaly detection,7,"The technical report describes a methodology for zero-shot anomaly segmentation using multi-modal prompts, which falls under the broader category of prompt engineering. Multimodal prompting constitutes a form of prompt engineering as it involves designing and utilizing prompts that can effectively guide machine learning models, specifically foundation models, for particular tasks such as anomaly detection. This is relevant to prompt engineering study as it includes the formulation and application of prompts; however, the focus on 'hard prefix prompts' is not explicitly stated. Therefore, the relevance is significant but not complete in the context of a systematic review on hard prefix prompts in prompt engineering." -aspiro: any-shot structured parsing-error-induced reprompting for consistent data-to-text generation,7,"The presented abstract details a novel approach (ASPIRO) for structured data verbalization which utilizes prompt engineering techniques such as re-prompting LLMs based on parsing checks. However, the focus appears to be more on reducing parsing errors and improving data-to-text generation consistency than on the study of hard prefix prompts specifically. Therefore, it is moderately relevant to the broader topic of prompt engineering but does not focus on a 'comprehensive systematic review on hard prefix prompts.'" -chain of thought prompt tuning in vision language models,7,"The document discusses 'chain of thought prompt tuning in vision language models,' which is a specific method within prompt engineering that aims at improving the reasoning process of AI models in image-related tasks. While the topic is closely related to the concept of prompt engineering, it is more narrowly focused on vision-language models and does not directly touch on 'hard prefix prompts' which seems to be the focus of the initial inquiry. 
The relevance is rated as 7 since the technique of chain of thought prompting falls under the wider umbrella of prompt engineering strategies and contributes to the field, even if it is not a direct study on hard prefix prompts." -symbolic math reasoning with language models,7,"The abstract provided discusses the use of large language models (LLMs) such as OpenAI's GPT-3 for solving math word problems and explores their reasoning capabilities. Although the primary focus is on these models' ability to solve mathematical problems symbolically and numerically, it does mention the role of specific prompting techniques and their influence on the model's problem-solving process. Therefore, while the abstract is not directly focused on a review of 'hard prefix prompts,' it does pertain to prompt engineering in the broader context of eliciting reasoning and explanations from a language model. This justifies a moderate-to-high relevance rating, as the paper could potentially contribute valuable insights into the efficacy of prompting strategies in complex problem-solving tasks with language models." -instructexcel: a benchmark for natural language instruction in excel,7,"The provided abstract describes a study involving the creation of a benchmark for assessing Large Language Models' (LLMs) capability to interpret natural language instructions and generate Excel-related code. This directly relates to the field of prompt engineering, as it concerns the design and testing of prompts that efficiently guide a language model to perform domain-specific tasks. However, the study does not explicitly mention 'hard prefix prompts' or a 'systematic review' of such prompts, but rather it is an example of applied prompt engineering in a practical, task-oriented context. Therefore, the relevance is high but not absolute, hence a rating of 7." -enhancing cross-lingual natural language inference by soft prompting with language-independent knowledge,7,"The abstract discusses 'Soft prompt learning framework' and its application in cross-lingual natural language inference, which is relevant to prompt engineering as it deals with a form of prompts—soft prompts. Although it does not specifically address 'hard prefix prompts,' which the original prompt inquires about, the study of soft prompts is related and contributes to the broader field of prompt engineering. It would be more relevant if the specifics of 'hard prefix prompts' were examined, therefore it doesn't receive a full score." -from images to textual prompts: zero-shot visual question answering with frozen large language models,9,"The abstract describes a method (Img2LLM) involving the generation of prompts that effectively allow large language models (LLMs) to perform zero-shot visual question-answering (VQA) tasks. This is highly relevant to prompt engineering because Img2LLM essentially acts as a prompt engineering tool, transforming image content into textual prompts that enable LLMs to understand and respond to visual data without the need for end-to-end training. It directly involves the design and application of effective prompts to improve the utility of LLMs in a cross-modality context. The only reason it does not receive a full 10 rating is because it specifically pertains to visual data and VQA, whereas prompt engineering can also encompass other forms of data and tasks." 
-chinese text paraphrase recognition based on openprompt introducing hybrid prompts,9,"The abstract discusses the use of hybrid prompts, which are directly related to prompt engineering, offering a method to enhance the knowledge extraction from pretrained language models for paraphrase recognition tasks. It demonstrates a practical application of prompt engineering in the form of OpenPrompt and hybrid prompts, providing relevant outcomes like the improvement in F1 score and accuracy when using such prompts. This study helps in understanding prompt-based methods, hence the high relevance rating for prompt engineering. Only a full read-through could confirm if it tackles 'hard prefix prompts' specifically, but the mention of hybrid prompts with [mask] slots strongly suggests relevance to the field of prompt engineering."
-vima: robot manipulation with multimodal prompts,8,"The study described in the abstract illustrates a novel application of prompt-based learning in the domain of robotics rather than just natural language processing. The use of 'multimodal prompts' that include both textual and visual tokens is directly related to the concept of prompt engineering, as it involves crafting prompts that a machine learning model interprets to perform various tasks. Although it does not explicitly address the engineering of 'hard prefix' prompts, the systematic development of multimodal prompts for robot manipulation is a significant contribution to prompt engineering research. The study's relevance is slightly lessened only due to the lack of a specific focus on 'hard prefix' prompts, which the original query stipulates."
-prompt-engineering and transformer-based question generation and evaluation,8,"The study presented involves the application of prompt engineering to improve the performance of a transformer-based question generation model. Since prompt engineering is integral to this research, with the effectiveness of various prompts being directly assessed and compared, it shows high relevance to the field of prompt engineering. However, it does not focus specifically on 'hard prefix prompts', which may be a more nuanced subtopic within prompt engineering. Therefore, the relevance rating is not a full 10."
-prompt tuning gpt-2 language model for parameter-efficient domain adaptation of asr systems,8,"The abstract discusses the use of 'domain-prompts,' which seems to be a technique closely related to prompt engineering, as it involves training domain-specific embeddings to adapt a language model to new domains. This method resembles hard prompt tuning, where prompts are fixed and designed to prime the model for a specific task or domain. The study's relevance is high for prompt engineering research, particularly within the context of ASR systems and parameter-efficient adaptations. However, it doesn't discuss 'hard prefix prompts' specifically; it mentions 'domain-prompts,' which may or may not be exactly the same concept. Hence, the rating is not a full 10, reflecting this small uncertainty."
-chinese asr and ner improvement based on whisper fine-tuning,7,"The abstract indicates that the paper explores how to fine-tune Chinese ASR and NER tasks using Whisper, touching on the aspect of designing different prompts for various generative tasks, which is closely related to prompt engineering. While the main focus seems to be on improving ASR and NER performance, the inclusion of prompt design as a part of the fine-tuning process makes it relevant to the study of prompt engineering. However, the mention of prompts is not the central focus of the paper, which suggests that although prompt engineering is covered, it is not the primary subject matter, hence the rating of 7."
-prompt generation networks for input-based adaptation of frozen vision transformers,9,"The abstract describes a novel approach to adapt frozen vision transformers via visual prompt learning, which is highly relevant to prompt engineering as it deals with generating and optimizing prompts that can be input-dependent. Although the study focuses on the visual domain, the techniques and concepts of prompt generation, learning, and the mentioned 'prompt inversion' trick are applicable and insightful for prompt engineering for different modalities. It achieves adaptation without modifying the model and is part of the broader discussion on how to efficiently use large-scale models, a significant aspect of prompt engineering. The relevance is slightly less than perfect because the specific focus on vision transformers and input-dependent prompts may not cover the entire scope of hard prefix prompts directly, but the principles are closely related."
-optimizing language models for argumentative reasoning,8,"The provided abstract details an investigation into optimizing a language model for argumentative reasoning tasks, which includes an evaluation of different optimization strategies such as prompt programming. Prompt engineering, which refers to the design and usage of prompts to guide language models, is closely related to the study's focus on prompt programming as one of the optimization strategies. Although the term 'hard prefix prompts' is not explicitly mentioned, prompt programming is a technique that often involves the use of hardcoded prompts (which could be considered 'hard prefix prompts') to direct a model's output. Therefore, this study is highly relevant to the broader field of prompt engineering; however, the relevance is slightly lower as the study does not solely concentrate on hard prefix prompts but also considers other optimization strategies."
-prompt enhanced generative mrc framework for pancreatic cancer ner,7,"The paper directly engages with prompt engineering through its introduction of continuous prompts to improve the performance of a generative NER task within the context of medical document analysis. The use of prompts in the self-attention mechanism of the Transformer model is relevant to the study of how prompts can be optimized to facilitate better understanding and generation of responses by the model. While the focus is not exclusively on 'hard prefix prompts' and it is more application-specific (medical NER), it does contribute to the broader understanding of prompt engineering in NER tasks."
-harnessing gpt-3.5-turbo for rhetorical role prediction in legal cases,9,"The provided abstract discusses the implementation of prompting strategies in GPT-3.5-turbo for a specialized task within the legal domain. The focus on one-stage elicitation techniques, the influence of different prompting strategies such as zero-shot learning, task specification, and the exploration of hard prefix prompts (detailed in the mention of the textual context, number of examples, and label definitions) are highly relevant to prompt engineering. Although it doesn't exclusively concentrate on 'hard prefix prompts,' the exploration and systematic review of prompting strategies contributing to performance improvement are central to prompt engineering. The slight deduction in the rating acknowledges that the study is about prompt engineering as a whole rather than solely about 'hard prefix prompts.'"
-efficient domain adaptation of language models in asr systems using prompt-tuning,8,"The abstract presents research on using prompt-tuning, a form of prompt engineering, for domain adaptation in ASR systems. Although the focus is on ASR systems and not specifically on 'hard prefix prompts', prompt-tuning is related to prompt engineering studies. The research seems to involve adapting language models to specific domains using prompts, which is a core aspect of prompt engineering. The methodology could be highly relevant to those interested in tailoring LMs for specific applications without the costs associated with maintaining multiple domain-specific models. However, it falls short of a perfect score because it does not address hard prefix prompts specifically, but rather the broader application of prompt-tuning for domain adaptation."
-all birds with one stone: multi-task learning for inference with one forward pass,8,"The focus on utilizing a prompt-sharing module to enable a model to handle multiple tasks with a single forward pass is highly relevant to prompt engineering, as it directly pertains to the design and efficiency of prompts in multi-task learning. Although the abstract does not specifically mention 'hard prefix prompts,' the concept of prompt design for task efficiency and model performance improvement is central to the topic of prompt engineering. Therefore, the relevance rating is relatively high, with a couple of points deducted for not mentioning the specific aspect of 'hard prefix prompts.'"
-ctrl: a conditional transformer language model for controllable generation,8,"The referenced paper describes a language model (CTRL) designed to incorporate control codes that can direct the generation of text according to specified attributes, which is highly relevant to the field of prompt engineering. Although the paper does not directly discuss 'hard prefix prompts,' it is nonetheless pertinent because control codes essentially function as a form of prompts to guide the model output. The ability to use these codes aligns with the broader goal of prompt engineering, which is to control and guide the behavior of language models. Therefore, the paper is quite relevant to the study of prompting methods in AI, even if it doesn't address 'hard prefix prompts' specifically."
-exploring visual prompts for adapting large-scale models,8,"The abstract indicates the study focuses on 'visual prompting' to adapt large-scale models in vision, which is a form of prompt engineering. While 'hard prefix prompts' are not directly mentioned, the concept of adapting models by using prompts (here, visual) is central to the discussed approach, thus making it relevant to the field of prompt engineering. The study's relevance could be even higher if it specifically related to textual prompts and hard prefixes, but its focus on a related concept in the visual domain still provides valuable insights that could be transferable to other forms of prompt engineering."
-domain prompts: towards memory and compute efficient domain adaptation of asr systems,8,"The abstract is highly relevant to prompt engineering as it discusses domain-prompts, which is a form of prompt engineering for adapting transformer-based language models to specific domains with minimal additional parameters. While it focuses specifically on ASR systems, the concept of domain adaptation through prompts is applicable to wider studies of prompt engineering. The rating is not a full 10 because the paper does not address 'hard prefix prompts' specifically, but rather uses the concept of domain-specific prompts generally."
-text style transfer between classical and modern chinese through prompt-based reinforcement learning,8,"The text discusses the use of an unsupervised prompt-based reinforcement learning (PBRL) framework for style transfer in text, which is highly relevant to prompt engineering as it involves the use of prompts to guide the learning process. While the application is specific to style transfer between classical and modern Chinese, the underlying technique is applicable to prompt engineering broadly. It does not directly study 'hard prefix prompts' as the original study query suggests, but it does contribute to the overall field of prompt engineering."
-adpl: adversarial prompt-based domain adaptation for dialogue summarization with knowledge disentanglement,9,"The paper presents an Adversarial Disentangled Prompt Learning (ADPL) model, which is relevant to the study of prompt engineering as it involves the creation and utilization of prompts (domain-invariant, domain-specific, and task-oriented) to improve domain adaptation in dialogue summarization. The focus on prompt-based methods for zero-shot learning in this context is highly pertinent to understanding how prompts can be engineered to enhance the performance of language models on specific tasks. Despite not focusing exclusively on 'hard prefix prompts', which the original query asks about, its contribution to prompt engineering methods warrants a high relevance score."
-knowledge transfer with visual prompt in multi-modal dialogue understanding and generation,7,"The study described involves the use of prompts in the context of multi-modal data fusion and dialogue generation, which is relevant to prompt engineering in terms of developing methods to maximize the efficacy of prompts. However, the term 'hard prefix prompts' is not mentioned, suggesting that while the study is within the domain of prompting (visual prompts in this case), it may not directly address the particular area of 'hard prefix prompts'. Therefore, the relevance is notable but not complete, hence a rating of 7."
-motif-based prompt learning for universal cross-domain recommendation,7,"The abstract describes a motif-based prompt learning framework aimed at enhancing cross-domain recommendation systems. Although the study focuses primarily on recommendations, the use of 'motif-based prompt learning' relates closely to prompt engineering, especially in the context of adapting machine learning models to respond to different kinds of data inputs or prompts. Prompt engineering is about designing prompts that help models perform better on specific tasks. The paper's mention of 'adaptable prompt parameters' and the integration of these into pre-training and fine-tuning paradigms indicates that it deals with adjusting how models interact with prompts. However, it does not strictly focus on 'hard prefix prompts' as the study prompt requests, thus the relevance rating is not a full 10."
-"continually detection, rapidly react: unseen rumors detection based on continual prompt-tuning",8,"The paper is highly relevant to the field of prompt engineering due to its focus on 'Continual Prompt-Tuning RD (CPT-RD) framework' which relates directly to the engineering and optimization of prompts in the context of rumor detection. The study addresses challenges such as catastrophic forgetting and knowledge transfer in prompt-tuning, which are central to improving the utility of prompts in continual learning scenarios. The deduction of two points is due to the prompt not directly addressing 'hard prefix prompts' specifically, but the broader context of prompt-tuning is still substantially relevant to the study of prompt engineering." -visual-attribute prompt learning for progressive mild cognitive impairment prediction,7,"The title suggests the study involves a machine learning model using prompts to predict progressive mild cognitive impairment (pMCI), indicating that prompt engineering is a fundamental part of the research. Specifically, the mention of a 'prompt learning model' and 'global prompt token' implies an exploration into how prompts interact with the model to improve performance. This is relevant to prompt engineering as it relates to designing and utilizing prompts to guide machine learning models effectively. However, it does not explicitly mention 'hard prefix prompts' and seems to focus on a specific application rather than a broad systematic review, so it may not be entirely comprehensive in the context of prompt engineering studies." -hetgpt: harnessing the power of prompt tuning in pre-trained heterogeneous graph neural networks,7,"While the title and abstract describe a study related to prompt engineering, the context differs from what's typically associated with 'hard prefix prompts' in prompt engineering study, which is usually referenced in the field of Natural Language Processing (NLP). Here, the concept of 'prompting' is being applied to the domain of heterogeneous graph neural networks (HGNNs) and their pre-training routines. Although it does deal with prompts in an abstract sense, and may be relevant to the broader discussion on the utility of prompt-like methods in AI model training, it is not specifically about prompt engineering in the context of language models or text-based neural networks. Therefore, it is tangentially relevant, hence the rating of 7." -virtual node tuning for few-shot node classification,7,"The abstract discusses 'Virtual Node Tuning (VNT),' which involves injecting virtual nodes as 'soft prompts' in the embedding space that can be optimized for few-shot node classification tasks. While this does not directly address 'hard prefix prompts,' it does pertain to the usage of prompts (in this case, soft ones) in the context of machine learning. The technique is a form of prompt engineering but applied within a graph representation learning task rather than natural language processing. This alternative application of prompts in a learning framework is relevant to the broader field of prompt engineering as it provides insight into how prompts can be used to improve performance in tasks with limited labeled data. However, its relevance is somewhat indirect since it does not address hard prefix prompts explicitly or delve into systematic reviews of prompt engineering, thus the rating of 7." 
-pcbert: parent and child bert for chinese few-shot ner,8,"The abstract discusses 'prompt-tuning', a method within prompt engineering that is being applied to Chinese few-shot Named Entity Recognition (NER). While the specific term 'hard prefix prompts' is not mentioned, the concept of prompt-based techniques, which are at the heart of prompt engineering, is central to the study described in the paper. This suggests that the paper's focus on using prompt-based methods for improving model performance in low-resource settings makes it highly relevant to the field of prompt engineering."
-srcb at the ntcir-16 real-mednlp task,8,"The abstract indicates the use of prompt learning as part of the approach for tackling Named Entity Recognition and Adverse Drug Event detection tasks, which are directly related to natural language processing challenges in computational linguistics. The involvement in prompt learning suggests that the paper includes discussion or experimentation with the implementation or optimization of prompts, which is relevant to the study of prompt engineering. However, the abstract does not provide details specifically about 'hard prefix prompts,' which might be one of the variations or specific interests within prompt engineering. Therefore, the relevance is high but not complete with respect to the specified topic of 'hard prefix prompts'."
-generalizing few-shot named entity recognizers to unseen domains with type-related features,8,"The paper presents a framework (PLTR) that involves a form of prompt engineering by generating unique prompts for unseen examples using type-related features. This is highly relevant to prompt engineering as it directly involves the creation and optimization of prompts for improving the model's performance on few-shot named entity recognition tasks. The reason the rating is not a full 10 is that the study focuses specifically on the NER task and the use of type-related features, which may not cover the broader concept of hard prefix prompts in the context of prompt engineering more generally."
-large language models (llms) for natural language processing (nlp) of oil and gas drilling data,7,"The abstract mentions the use of various prompt engineering strategies as part of the methodology to handle text downstream tasks in oil and gas drilling data using large language models. Although the study primarily focuses on the application of LLMs in a specific domain (oil and gas), the inclusion of prompt engineering in the process indicates a significant relevance to the field of prompt engineering. However, a perfect relevance score is not given because the primary focus of the study is not purely on prompt engineering, but rather on the domain-specific application of large language models, which includes prompt engineering as a part of the process."
-from humans to machines: can chatgpt-like llms effectively replace human annotators in nlp tasks?,7,"The abstract discusses the potential use of large language models (LLMs) like ChatGPT for NLP tasks, which is relevant to prompt engineering in the sense that prompt engineering could be vital for directing such models to perform annotation tasks. The ability of LLMs to understand and respond to prompts effectively would be central to their use as annotators. Although the focus here is more on annotation than prompt engineering directly, the quality and nature of prompts would inherently affect the success of such an application. Therefore, the study indirectly addresses issues that are significant to the field of prompt engineering."
-a progressive prompting approach to conducting context-aware learning activities for natural science courses,7,"The relevance to prompt engineering lies in the exploration of a progressive prompt-based approach to enhance learning outcomes, which is conceptually similar to designing prompts to improve interaction with AI or learning systems. However, the study is situated in the context of mobile learning in natural science courses, not specifically within prompt engineering for AI or computational systems. Nevertheless, the methodologies and findings could have implications for the practice of prompt engineering, particularly in creating adaptive and context-aware prompts for various applications."
-make llm a testing expert: bringing human-like interaction to mobile gui testing via functionality-aware decisions,8,"The abstract describes the use of Large Language Models (LLMs) like ChatGPT in automated GUI testing, which involves a novel application of prompt engineering. By formulating the problem as a Q&A task and introducing a functionality-aware prompting mechanism, the study essentially deals with the design and utilization of prompts to enable the LLM to generate useful outputs for testing purposes. This showcases an implementation of prompt engineering to improve the performance of an AI model in a domain-specific task. However, it doesn't directly study the prompt engineering process in a broader context, and therefore doesn't merit a perfect score."
-prompts of large language model for commanding power grid operation,8,"The abstract describes a study that is focused on redefining the interaction between humans and a power grid operation system through the use of specifically engineered prompts for a Large Language Model. Given that prompt engineering is central to the process of adapting the LLM to interpret and execute natural language commands in the context of power grid operations, the study is highly relevant to the field. The rating is an 8 instead of a perfect score because, while it is about prompt engineering, the application is very specific to power grid operations and might not cover all aspects of prompt engineering, which could also include a broader range of topics beyond this specific use case."
-can large language models explain themselves? a study of llm-generated self-explanations,8,"The abstract addresses the concept of 'self-explanations' generated by LLMs like ChatGPT, which directly pertains to one aspect of prompt engineering—eliciting detailed and insightful explanations from the model. Even though the abstract does not explicitly mention 'hard prefix prompts,' it discusses the broader area of how to effectively prompt LLMs for specific types of outputs, in this case, self-explanations. Since the study contributes to the understanding of how LLMs can be guided to provide explanations, it is relevant to the study of prompt engineering. However, the rating is not a full 10 because the abstract does not focus specifically on hard prefix prompts but rather on the general capability of LLMs to explain their reasoning."
-learning profitable nft image diffusions via multiple visual-policy guided reinforcement learning,7,"The study focuses on generating Non-Fungible Token (NFT) images using a combination of language and image generation models, which relates to prompt engineering in that it involves generating detailed prompts to create specific visual attributes in NFTs. The use of a large language model (LLM) to enhance human input into more complex prompts is particularly relevant to prompt engineering. However, the study also diverges into optimization metrics and market value considerations, aspects that are less directly connected to traditional prompt engineering. Hence, the rating acknowledges the relevance of prompt generation and refinement while noting that not all aspects of the paper are centered on prompt engineering."
-automatic calibration and error correction for generative large language models via pareto optimal self-supervision,7,"The abstract describes a methodology for improving the calibration and error correction of generative large language models, which is an important aspect of prompt engineering. Effective prompt engineering can benefit greatly from systems that are able to self-evaluate their confidence and error likelihood, providing insight into how prompts might be refined for better outcomes. While the study does not directly deal with 'hard prefix prompts', the proposed framework for self-supervision and dynamic prompting strategy is relevant to the field of prompt engineering as it touches on the calibration and adaptation of prompts based on model confidence. Therefore, the relevance to prompt engineering is significant, although not exclusively focused on 'hard prefix prompts' but rather on the broader issues of model response calibration and error correction."
-self-detoxifying language models via toxification reversal,9,"The abstract is highly relevant to prompt engineering study because it directly involves the process of manipulating prompts to achieve a desired behavior in a pretrained language model (PLM). The concept of 'self-detoxification' by reversing the toxification direction is an application of prompt engineering where the input prompt's design has a pivotal role. While it doesn't focus on 'hard prefix prompts' explicitly, it aligns with the core principles of prompt engineering—altering the prompts to influence the model's outputs."
-automatic hallucination assessment for aligned large language models via transferable adversarial attacks,8,"The study is highly relevant to prompt engineering as it explores the creation of prompts (in this case, adversarial attacks) that influence language model performance. This involves understanding how prompting affects LLM behavior and assessing the models' reliability, which is a core aspect of prompt engineering. The use of prompt chaining is directly related to the design and engineering of prompts that can manipulate or test the behavior of LLMs. Although the study's focus is on hallucination and the generation of evaluation data, the methods used are a part of prompt engineering practices."
-improving few-shot generalization of safety classifiers via data augmented parameter-efficient fine-tuning,8,"The study is highly relevant to prompt engineering as it explores the use of prompt-tuning (a form of prompt engineering) combined with data augmentation to improve the performance of language models on safety classification tasks. This work directly pertains to the field of prompt engineering, as it aims to enhance model generalization using techniques that modify the input prompt structure to better guide the model in few-shot learning scenarios. The approach mentioned, similarity-based data-augmentation + prompt-tuning (DAPT), is a specific instance of prompt engineering, thus making the study quite relevant. Despite the focus on domain-generalized few-shot learning for safety applications and not solely on 'hard prefix prompts', the paper's exploration of prompt-tuning in practice warrants a high relevance score."
-tempera: test-time prompt editing via reinforcement learning,9,"The paper's abstract indicates that the work is highly relevant to prompt engineering as it presents a novel method (TEMPERA) which focuses on editing prompts using reinforcement learning. This directly aligns with innovations and advancements in prompt design strategies for large language models, which is at the heart of prompt engineering studies. The only reason the rating is not a full 10 is that the relevance might be slightly more specific to reinforcement learning techniques in prompt engineering rather than a broad systematic review on 'hard prefix prompts'. However, the contributions to optimizing prompts and improving sample efficiency are very pertinent to the field."
-can llms keep a secret? testing privacy implications of language models via contextual integrity theory,7,"The study discusses the implications of information handling by large language models (LLMs), which relates to how these models process and output information based on the instructions (prompts) they receive. While it does not directly address 'hard prefix prompts,' it touches on the broader topic of prompt design and its influence on model behavior, particularly regarding privacy. It is relevant to prompt engineering since understanding and improving the privacy reasoning capabilities of LLMs can lead to the development of better prompts that protect user privacy. The rating is not a perfect 10 because the study's focus is on privacy and not explicitly on the structure or format of the prompts themselves, which would be a central aspect of a study dedicated entirely to prompt engineering."
-using global land cover product as prompt for cropland mapping via visual foundation model,7,"The abstract discusses leveraging the 'Pretrain+Prompting' paradigm, which is relevant to prompt engineering as it involves designing prompts to aid in domain adaptation for cropland mapping. The introduction of the auto-prompting (APT) method aligns with prompt engineering by using prompts to modify the behavior of pre-trained models on specific tasks. However, the direct focus on cropland mapping and the use of visual foundation models means it is not exclusively centered on prompt engineering but rather its application in a specific domain. Thus, it is moderately relevant but not a comprehensive systematic review on hard prefix prompts."
-tailoring personality traits in large language models via unsupervisedly-built personalized lexicons,7,"The study described in the abstract addresses the manipulation of language models' outputs by tailoring personality traits, which is related to prompt engineering in the sense that it involves guiding the language model to generate text with certain characteristics. Although the main focus is on personality traits via lexical choices rather than 'hard prefix prompts,' it still falls within the broader scope of controlling language model behavior, which is a key aspect of prompt engineering. Thus, the relevance is significant but not directly aligned with hard prefix prompts, hence the rating is not a full 10."
-denevil: towards deciphering and navigating the ethical values of large language models via instruction learning,9,"The described paper is highly relevant to prompt engineering, as it develops a novel prompt generation algorithm (DeNEVIL) that interacts with large language models to explore and expose their ethical value alignment through instructions. Although not directly labeled as 'hard prefix prompts,' the concept of generating prompts to induce model behavior aligns with studies concerning prompt design and efficacy. The focus on ethical considerations adds a dimension of value-based prompt engineering, which is a specialized and relevant aspect of the broader field of prompt engineering studies."
-vision-language interpreter for robot task planning,7,"The study discussed in the abstract is moderately relevant to prompt engineering, as it deals with the generation of problem descriptions (PDs) from language instructions, which is a component of prompt engineering. In prompt engineering, one must design prompts that effectively communicate tasks to language models, and here, the model is interpreting language to create PDs for robot task planning. Although the study focuses on robot planning and multimodal inputs, the underlying principle of translating natural language into machine-readable formats aligns with the techniques and goals of prompt engineering. The interdisciplinary nature of this research, combining language models with symbolic planners, reflects the complexity encountered in prompt engineering scenarios. However, it does not directly address 'hard prefix prompts,' which suggests it is not fully specialized in the field of prompt engineering but is nonetheless relevant."
-chain-of-thought prompt distillation for multimodal named entity recognition and multimodal relation extraction,8,"The abstract discusses leveraging the 'chain of thought' (CoT) as an intermediate reasoning process for distilling knowledge from large language models to a student model, which is highly relevant to prompt engineering. This process directly involves designing prompts to elicit reasoning steps, indicating how the model should approach a problem, thus involving prompt engineering. However, the focus is primarily on multimodal named entity recognition and relation extraction, so it is not entirely within the realm of hard prefix prompts in a strict sense, hence the rating is not a full 10."
-litsumm: large language models for literature summarisation of non-coding rnas,9,"The abstract discusses the use of large language models (LLMs) with a series of prompts and checks to automatically generate summaries of literature for non-coding RNAs, which is highly relevant to prompt engineering. The study highlights the importance of prompt design in achieving high-quality output from LLMs. It illustrates a practical application of prompt engineering within the context of automating curation processes in the life science field. This aligns closely with the concept of 'hard prefix prompts' in prompt engineering studies, as it emphasizes the effectiveness of structured input (prompts) in guiding the language model toward the desired task. The sole reason for not rating it a perfect 10 is that the abstract does not focus exclusively on the theory or mechanics of prompt engineering itself, but rather on the application of prompt engineering techniques in a specific domain."
-alltogether: investigating the efficacy of spliced prompt for web navigation using large language models,7,"The study addresses the concept of prompt engineering by introducing 'AllTogether,' a prompt template aimed at improving the performance of Large Language Models in web navigation tasks, which is a specialization within prompt engineering. Though the study's focus is not on 'hard prefix prompts' specifically, it is still relevant to the broader domain of prompt engineering because it explores how to optimize prompts to enhance LLMs' understanding of tasks. As such, while it does not cover the full breadth of prompt engineering, especially with regard to systematic reviews of hard prefix prompts, it does contribute to the field by investigating prompt efficacy and template standardization."
-wordart designer: user-driven artistic typography synthesis using large language models,8,"The paper describes a framework for artistic typography synthesis that centrally involves the use of Large Language Models (LLMs) to interpret user inputs and generate actionable prompts, which is directly related to prompt engineering. While the title does not explicitly mention 'hard prefix prompts', the 'LLM Engine' described operates with some form of prompt that guides the generation process. This indicates that the study does indeed involve an aspect of prompt engineering, particularly as it pertains to the synthesis of graphic designs. However, as the prompt in question specifically asks for a 'comprehensive systematic review on hard prefix prompts,' a topic that is not the primary subject of this paper, the relevance is not maximal. Therefore, the rating reflects high relevance to prompt engineering in general but not a perfect match to the exact subject of 'hard prefix prompts.'"
-de-diffusion makes text a strong cross-modal interface,7,"The title and abstract suggest that the study focuses on encoding images as text for use in a cross-modal interface, which has relevance to prompt engineering considering that prompts are a form of text input. The approach allows for the use of natural language as an interface to interact with images and demonstrates the potential to prompt large language models for multi-modal tasks. The relevance to prompt engineering is significant due to the generation of text representations that can serve as prompts and the improvement in interfacing with text-to-image tools. However, the paper is more focused on the cross-modal exchange and image representation than on the design or optimization of prompts themselves, which are typically the main focus in prompt engineering studies."
-llamarec: two-stage recommendation using large language models for ranking,7,"The abstract describes a use of large language models (LLMs) in a two-stage recommendation framework, which includes the use of prompt templates for inputting user interaction history and candidate items into the LLM. Prompt engineering is relevant here because the design of prompt templates can be considered a form of engineering prompts to improve the performance of the LLM in the task of ranking-based recommendation. However, the study does not seem to focus primarily on the 'hard prefix prompts' aspect, but rather on the overall framework of using LLMs for recommendation, which includes prompt engineering as a component. Therefore, the relevance is significant but not exclusive to prompt engineering."
-mind the instructions: a holistic evaluation of consistency and interactions in prompt-based learning,9,"The abstract describes research that directly relates to prompt engineering by analyzing the stability and consistency of language model predictions in response to different prompting setups. This type of investigation is crucial for understanding how different prompt designs affect model performance and is a core aspect of prompt engineering. The high relevance rating is due to the focus on prompt-based learning and the systematic review of factors that influence the behavior of language models in response to prompts."
-collaborative large language model for recommender systems,7,"The abstract mentions the development of CLLM4Rec, which incorporates a 'soft+hard prompting strategy' during the pretraining stage for language modeling on recommendation system-specific corpora. The mention of hard prompts directly ties to prompt engineering, particularly within the context of integrating these prompts to improve the performance of a recommender system driven by a large language model. Given that the paper appears to specifically address and include prompt engineering strategies, it is relevant to studies of prompt engineering, albeit focused more on the application within recommender systems rather than a general discussion or a review of hard prefix prompts in a wide array of domains. The rating is not a full 10 because the primary focus is on recommender systems, with prompt engineering being an element of the solution rather than the main subject of the paper."
-combating the covid-19 infodemic using prompt-based curriculum learning,7,"The abstract suggests that the study involves a prompt-based curriculum learning method, which is connected to the field of prompt engineering, as it implies the use of prompts to extract reliable information from a text. This method seems to be focused on content verification, relevant to the application of prompt engineering in creating models that combat misinformation—a key aspect of information processing and decision-making for AI language models. However, the absence of specific details on 'hard prefix prompts' means the study may not be exclusively focused on the aspect of 'hard prefix prompts' in prompt engineering, thus not warranting a higher relevance score."
-taxonprompt: taxonomy-aware curriculum prompt learning for few-shot event classification,7,"The title suggests that the study involves 'taxonomy-aware curriculum prompt learning,' which indicates a connection to 'prompt engineering', as it discusses designing prompts that are aware of a certain taxonomy. This seems relevant for prompt engineering studies since it likely deals with the creation and optimization of prompts for machine learning tasks. However, without an abstract or TLDR, it's difficult to determine the exact focus of the paper and its direct applicability to hard prefix prompts, hence the relevance is not rated higher."
-fpc: fine-tuning with prompt curriculum for relation extraction,9,"The paper's focus on prompt-based fine-tuning aligns closely with the study of prompt engineering. It explores how prompts can be designed and utilized to improve the performance of relation extraction tasks by capturing the semantics of relation labels. The concept of a 'Prompt Curriculum' contributes to the field by addressing how to incrementally build up a model's capacity through prompts. This is highly relevant to prompt engineering as it deals with strategic prompt design and application in the context of fine-tuning pre-trained language models. The reason it is not a full 10 is that it is specific to relation extraction and may not cover every aspect of prompt engineering in a broader sense."
-"conversational challenges in ai-powered data science: obstacles, needs, and design opportunities",7,"The study addresses some core issues related to prompt engineering, such as formulating prompts for complex tasks and refining prompts iteratively. These topics are highly relevant to the field, as effective communication with LLMs is contingent upon constructing well-defined prompts. However, the study seems to focus more broadly on conversational challenges in AI within data science, rather than exclusively on 'hard prefix prompts' or systematic reviews on prompt engineering. Thus, while the content is relevant, it does not specifically target hard prefix prompts or provide a comprehensive systematic review, which the prompt specifically asks for."
-exploring the design space of ai based code completion engines,8,"The abstract describes a thesis that has a significant focus on prompt engineering as it pertains to AI-based code completion tools like GitHub Copilot. It explicitly mentions the study of prompt engineering in the context of providing the AI model with the right context and assessing the impact of that context on the quality of the code suggestions. While the study seems more broadly focused on the overall design and factors affecting code completion tools, prompt engineering is indeed a crucial aspect of the thesis as it can greatly influence the AI model's performance. Therefore, it is highly relevant to the study of prompt engineering, though it might not focus solely on 'hard prefix prompts' as specified in the original prompt."
-prompt-tuning in asr systems for efficient domain-adaptation,8,"The paper is highly relevant to the field of prompt engineering as it addresses the application of prompt-tuning, specifically within the context of domain adaptation for Automatic Speech Recognition (ASR) systems. The concept of training a small number of domain-specific token embeddings to adapt a transformer-based language model is a practical example of prompt engineering. By achieving significant performance improvements with a minimal increase in parameters, the study contributes to the field by demonstrating the effectiveness of prompt-based techniques for improving model performance in specialized domains. The lower-than-perfect score is due to the focus on ASR systems specifically, which is a subset of prompt engineering applications, rather than the entire breadth of prompt engineering."
-multimodal prompting with missing modalities for visual recognition supplementary materials,8,"While the study is not specifically focused on 'hard prefix prompts', it does address the broader topic of prompt engineering in the context of multimodal learning and attention mechanisms. The research on the impact of prompt length and the layer at which the prompt is inserted is relevant to the understanding of how prompts can be optimized for improved performance in AI models. Therefore, the paper's relevance to prompt engineering is high, warranting a rating of 8. However, the exact match with the 'hard prefix prompts' focus may be lacking, hence not a full 10."
-prompting as multimodal fusing,7,"The abstract describes research on using visual prompts to improve the capability of a language model to perform multi-modal tasks, which is related to the field of prompt engineering. The concept of 'prompting' is central to the study. However, the focus on multimodal tasks and disentangling objectives for the vision encoder introduces specificity that is somewhat tangential to hard prefix prompts in text-based prompt engineering. While the principles of the study could potentially be applied or extended to text-based prompt engineering, the immediate relevance is somewhat indirect, hence the rating of 7."
-ddcot: duty-distinct chain-of-thought prompting for multimodal reasoning in language models,8,"While the abstract describes a study focused on multimodal reasoning and Chain of Thought (CoT) with language models, its relevance to prompt engineering lies in the novel DDCoT prompting approach that it presents. The notion of 'negative-space prompting' and tailoring prompts to encourage 'critical thinking' and proper distribution of tasks ('letting everyone do their jobs') within multimodal CoT reasoning are directly related to the design and engineering of effective prompts that enhance AI performance. Consequently, the abstract is highly relevant to the study of prompt engineering, particularly in the context of improving AI's multimodal reasoning capabilities. However, the rating is not a full 10 because it does not focus exclusively on 'hard prefix prompts' but rather on a broader set of techniques within multimodal CoT prompting, leaving some room for more specific relevance to the systematic review aspect of the provided prompt."
-prompting chatgpt in mner: enhanced multimodal named entity recognition with auxiliary refined knowledge,7,"The study presents a two-stage framework (PGIM) designed to improve Multimodal Named Entity Recognition (MNER) by using ChatGPT as an implicit knowledge base for generating auxiliary knowledge, which relates to prompt engineering as it involves creating and using prompts to guide ChatGPT in generating useful information for a specific task. However, the paper seems to focus more on improving MNER performance and leveraging implicit knowledge bases rather than on the underlying mechanisms of prompt engineering, such as prompt design or optimization techniques specifically. Therefore, the relevance is significant but not exclusively centered on prompt engineering."
-initial images: using image prompts to improve subject representation in multimodal ai generated art,7,"The paper addresses the utilization of image prompts to enhance subject representation in AI-generated art, which falls within the realm of prompt engineering as it involves guiding generative models to achieve desired outputs. Although the study focuses specifically on multimodal interactions (text and image prompts) rather than purely text-based 'hard prefix prompts,' the findings and design guidelines derived from the research could be informative for prompt engineering in a broader context. The aspects of conditioning models and evaluating their performance based on input prompts are directly relevant to the techniques and methodologies of prompt engineering, hence the relatively high relevance rating."
-adaptive action prompting: a complementary aid to support task-oriented interaction in explorative user interfaces,7,"The abstract refers to 'Adaptive action prompting,' which is closely related to prompt engineering in that it involves the system generating suggestions or prompts based on various models. This concept aligns with prompt engineering, as it requires understanding how to design and adapt prompts for optimal user interaction. However, the study seems to focus more on user interface interaction rather than the specific linguistic or conversational design of prompts. Therefore, while relevant, it may not fully delve into the 'hard prefix prompts' aspect of prompt engineering."
-promptmner: prompt-based entity-related visual clue extraction and integration for multimodal named entity recognition,8,"The presented work is highly relevant to prompt engineering as it discusses the utilization of entity-related prompts to improve multimodal named entity recognition. It specifically targets the extraction of visual clues with the help of prompts, which is a novel application of prompt engineering in the field of image processing and analysis. The 'prompt-based' method for extracting visual information addresses the central theme of prompt engineering. However, since the focus is also on modality-aware attention mechanisms and cross-modal fusion, the relevance is not solely on prompt engineering. Therefore, the rating is not a full 10."
-towards multimodal computational humanities. using clip to analyze late-nineteenth century magic lantern slides,7,"Although the study does not solely focus on prompt engineering, it does discuss the impact of different textual prompts on the performance of the CLIP model and identifies the lack of effective prompt engineering techniques as an issue affecting the model's stability. Therefore, the paper is relevant to the field of prompt engineering to a noticeable extent, especially regarding the application and challenges of prompt engineering in multimodal learning within the computational humanities."
-beyond text-to-image: multimodal prompts to explore generative ai,7,"The abstract and TLDR of 'Beyond Text-to-Image: Multimodal Prompts to Explore Generative AI' are relevant to prompt engineering because they discuss the development of workflows that facilitate the translation of abstract design goals into prompts for AI systems. This aligns with the principles of prompt engineering, which is concerned with the creation and optimization of prompts to effectively guide AI behavior. However, the study appears to focus on the broader context of multimodal interactions and integrating creator contributions rather than hard prefix prompts specifically. Hence, while it is relevant due to its focus on improving the AI prompting process, it does not directly address systematic reviews on hard prefix prompts, thus receiving a rating of 7."
-open visual knowledge extraction via relation-oriented multimodality model prompting,7,"The abstract describes a novel approach to visual knowledge extraction that indirectly involves a form of prompt engineering, as it relies on prompting a multimodality model to generate knowledge. Although the primary focus is not on the engineering of text prompts for language models, the concept of 'model prompting' is closely related to prompt engineering, particularly in the context of multimodal models that process both visual and textual data. The mention of employing prompts for knowledge generation aligns with current interests in optimizing prompts to improve model performance. However, the direct relevance to 'hard prefix prompts' may be limited, hence a full relevance rating is not given."
-generating instruction automatically for the reading strategy of self-questioning,7,"The relevance to prompt engineering is significant since the paper focuses on generating instructional content automatically, which aligns with the creation of prompts for educational purposes. Specifically, breaking down the instruction into describing, modeling, scaffolding, and prompting is similar to the process of designing prompts that effectively elicit the strategy. The paper also touches upon the automatic generation of prompts, which is a core task in prompt engineering. However, the primary objective of the paper is centered around self-questioning in reading comprehension rather than the broader scope of hard prefix prompts or prompt engineering in general, which justifies a rating of 7 instead of a perfect score."
-short-term versus long-term effects of cognitive and metacognitive prompts in writing-to-learn,7,"The study is moderately relevant to prompt engineering because it investigates the effects of cognitive and metacognitive prompts on learning and writing. This is related to understanding how prompts can influence cognitive processes and outcomes, which is a key part of prompt engineering. However, as the focus is on educational contexts and long-term effects rather than computational systems or machine learning, it is not directly focused on prompt engineering for language models or other AI systems, hence the rating isn't higher."
-connprompt: connective-cloze prompt learning for implicit discourse relation recognition,8,"The paper presents an approach that leverages the prompt engineering paradigm for Implicit Discourse Relation Recognition (IDRR), specifically developing a novel Connective-cloze Prompt (ConnPrompt), which includes a Prefix-cloze Prompt (PCP), to improve task performance. This is highly relevant to prompt engineering as it demonstrates an innovative application of prompt-based methods to a natural language processing (NLP) task. The rating is not a full 10 because the study focuses on a specific application of prompt engineering within the IDRR context, rather than on prompt engineering in a more general sense, which may limit its broader relevance to the field at large."
-prompt-learning for short text classification,9,"The provided abstract describes a study on prompt-learning, specifically for the task of short text classification, which directly relates to the field of prompt engineering. The approach of using knowledgeable expansion and the incorporation of knowledge graphs into the prompt-learning process are advanced techniques in the area, suggesting that the paper provides detailed insights into the engineering of prompts for language models. The outstanding improvement in accuracy mentioned in the abstract and TLDR indicates a significant contribution to the field. The reason it is not a full 10 is that it doesn't specifically mention 'hard prefix prompts', but it does deal with prompt-learning methods in general, which makes it highly relevant to prompt engineering studies."
-knowledge base construction from pre-trained language models by prompt learning,7,"The abstract describes a study that falls within the domain of prompt engineering as it involves designing prompts to extract factual knowledge from pre-trained language models. The relevance to prompt engineering is clear as the authors design prompt templates and explore strategies for generating responses using these models. However, 'hard prefix prompts' are not explicitly referenced, suggesting this work may not be fully centered on that specific aspect of prompt engineering. Therefore, while the study is related to prompt engineering, its relevance to the specific concept of 'hard prefix prompts' cannot be determined from the abstract alone."
-improving sentence classification in abstracts of randomized controlled trial using prompt learning,8,"The study focuses on the application of Prompt Learning (PL) for sentence classification within the context of Randomized Controlled Trial (RCT) abstracts, which is highly relevant to the field of prompt engineering as it entails creating and utilizing prompt templates to guide models in performing specific tasks effectively. Although 'hard prefix prompts' are not specifically mentioned, the deployment of manual templates in PL is closely related to designing effective prompts for language models. The relevance of the study to prompt engineering is not at the maximal score because it does not directly address 'hard prefix prompts' but rather addresses prompt learning in a broad sense."
-mtpl-g2t: graph-to-text generation task based on mixed template prompt learning,8,"The abstract discusses an approach to text generation that involves prompt learning, which is a method to guide pre-trained models to perform specific tasks without extensive fine-tuning. It also compares the effectiveness of different prompt templates, including mixed prompt templates. This is relevant to the study of 'hard prefix prompts,' a type of prompt engineering. However, the abstract does not specifically mention 'hard prefix prompts' but discusses prompt learning in a broader context. Therefore, it is highly relevant but not entirely focused on 'hard prefix prompts,' which results in a rating of 8."
-masked prompt learning for formal analogies beyond words,9,"The paper's focus on the development of a generative model for analogies using prompt-based fine-tuning within the context of a pre-trained language model (PLM) is highly relevant to the study of prompt engineering. The exploration of masked prompt learning and the systematic approach to handling analogies by reformulating them using prompts deeply contribute to the field of prompt engineering. It addresses how different prompting techniques can enhance language models' ability to generalize beyond simple word-level tasks. The relevance rating is not a full 10 only because the study seems to be specifically tailored to the analogy task, whereas prompt engineering broadly covers a wider range of applications."
-promptrgd: prompt learning with relation-aware gradient denoising for low-resource relation extraction,8,"The abstract discusses a framework for semi-supervised prompt learning for relation extraction. Since prompt engineering is about designing and implementing prompts to effectively interact with a model or a system, the paper's focus on 'prompt template construction' and 'relation-aware gradient denoising' directly relates to the design and optimization of such prompts, especially in low-resource settings. The relevance rating is not a perfect 10 because although it deals with prompt engineering, the paper centers more on a specific aspect of relation extraction rather than a comprehensive study of hard prefix prompts in a broader context."
-prompt learning for multi-modal covid-19 diagnosis,7,"The paper presents a novel approach that utilizes prompt-based methods for COVID-19 diagnosis, which is relevant to the study of prompt engineering. Prompt learning, a key aspect of prompt engineering, is central to the paper's methodology, where a cloze prompt template and label word set are constructed to redefine the diagnosis task. However, 'hard prefix prompts' specifically are not mentioned, and they may or may not be within the scope of the presented methods. The relevance is rated moderately high due to the application of prompt learning concepts, but not the maximum score given the potential difference in prompt types being studied."
-uper: boosting multi-document summarization with an unsupervised prompt-based extractor,9,"The study is highly relevant to prompt engineering, as the core of this research involves creating 'prompting templates' to harness the knowledge within a Pre-trained Language Model (PLM) for determining the semantic relevance of documents in a multi-document summarization task. This innovative approach leverages prompt engineering to improve document salience assessment and abstract generation. The rating is not a perfect 10 only because the application is specific to multi-document summarization and details on 'hard prefix prompts' specifically are not provided, which may not cover all aspects of prompt engineering studied in a comprehensive systematic review on the topic."
-a cueing strategy for prompt tuning in relation extraction,7,"The abstract describes a modified approach for utilizing prompt tuning in the context of relation extraction by incorporating task-specific cues. This relates to the concept of prompt engineering because it involves the design and use of prompts to guide pre-trained language models to understand and perform specific tasks more effectively. However, the relevance is not a perfect 10 because the abstract specifically addresses relation extraction and introduces a cueing strategy, rather than discussing 'hard prefix prompts' or providing a systematic review on prompts in general. 'Prompt engineering' covers a broader range of applications and methodologies, including but not limited to the cueing strategy mentioned."
-discourse-aware prompt for argument impact classification,8,"The abstract indicates that the paper is about developing a learnable continuous prompt that integrates discourse markers to improve the performance of pre-trained language models (PLMs) on the task of argument impact classification. Prompt engineering is vital for adapting PLMs to specific tasks, and the paper's focus on leveraging discourse information through prompts is relevant to the study of prompt engineering. The improvement in performance metrics (e.g., a 2.5% increase in the F1 score) suggests effective prompt engineering practices. However, the study does not focus on 'hard prefix prompts' specifically; it seems to emphasize the discourse-aware nature of prompts, which might make it slightly less relevant to a systematic review particularly centered on 'hard prefix prompts.'"
-prompt learning for developing software exploits,7,"The abstract describes the use of a prompt learning approach, PT4Exploits, with pre-trained language models for generating software exploits and appears to employ prompt engineering by adding trainable prompt tokens. This is relevant to prompt engineering as it is an application of prompts in adjusting language model behavior. However, it is more focused on a specific application related to software vulnerability exploitation, rather than concentrating purely on the methodology of hard prefix prompts for a broad range of applications. Therefore, the relevance is notable but not entirely comprehensive regarding general prompt engineering studies."
-b . alternate design choices prompt initialization : table 8,8,"The given abstract discusses prompt initialization strategies and their impact on the performance of a model called MaPLe, which is directly relevant to the study of prompt engineering. It examines different initialization methods such as using a specific template or random initialization, and their effectiveness in different layers of the model. The depth of detail regarding the effect of learnable prompts and hierarchical learning within the layers indicates a high level of relevance, although the prompt engineering study question may be broader and involve other aspects not covered in the abstract. However, since the abstract provides empirical findings related to prompt design choices and their impact on a model's performance, it is substantially relevant to the field of prompt engineering."
-ppm: prompt-free prompt-tuning for multi-task learning,8,"The abstract describes a novel approach in prompt-tuning for multi-task learning by using task-specific adapters in place of hand-crafted prompts, which is highly relevant to prompt engineering. It focuses on optimizing the training process and enhancing the model's performance on various downstream tasks without relying on manually designed prompts. While the abstract does not specifically mention 'hard prefix prompts,' it contributes to the broader field of prompt engineering by exploring alternative techniques to improve language models' efficiency in multi-task learning. This is valuable for prompt engineering studies, but not a direct examination of 'hard prefix prompts,' hence the rating is not the maximum."
-self-adaptive prompt-tuning for event extraction in ancient chinese literature,8,"The described study demonstrates a direct application of prompt engineering by developing a self-adaptive prompt-tuning mechanism to enhance the performance of a generative event extraction framework. The focus on crafting specialized prompts that account for the unique complexities of ancient Chinese literature and war events shows a sophisticated use of prompt engineering to improve the interpretation and generation capabilities of a pre-trained language model. While this isn't a systematic review of hard prefix prompts specifically, it's a practical application of tuned prompts within a complex domain. Hence, the rating reflects high relevance to prompt engineering but not a perfect match since the study is not a comprehensive review."
-sptnet: span-based prompt tuning for video grounding,7,"The study introduces a methodology (SPTNet) that uses prompt tuning, a technique within the field of prompt engineering, to enhance the performance of a PLM in a video grounding task. This is relevant to prompt engineering as it involves the strategic modification of a prompt (via templates and mask tokens) to leverage a pre-trained model's knowledge more effectively. However, the focus on 'hard prefix prompts' is not explicitly mentioned, so while the paper is related to prompt engineering, it might not directly address the comprehensive systematic review on hard prefix prompts specifically." -promptcl: improving event representation via prompt template and contrastive learning,7,"The title 'promptcl: improving event representation via prompt template and contrastive learning' suggests that the study involves prompt engineering by focusing on the improvement of event representation using prompt templates. This implies that the study likely explores the design or optimization of prompts, which are critical in influencing the performance of language models. The use of contrastive learning could indicate an innovative approach to refining these prompts, potentially making the study relevant to the field of prompt engineering. However, without the abstract or a TLDR, it's difficult to ascertain the full scope and direct relevance to hard prefix prompts specifically, hence the rating does not reach the maximum score." -a dataset for cross-domain reasoning via template filling,8,"The relevance to prompt engineering is high, as the abstract discusses the development of a dataset and a method (prompt-template-filling approach) for enabling sequence to sequence models to perform cross-domain reasoning. Prompt engineering involves creating prompts that guide models towards desired outputs; the prompt-template-filling approach is likely related to the construction of such prompts to facilitate reasoning across different domains. Even though it may not directly address 'hard prefix prompts', it does pertain to the broader field of prompt engineering and its application in NLP tasks. The additional focus on cross-domain reasoning is also relevant, as it indicates a level of complexity in the prompt design suited for advanced reasoning. However, without more explicit mention of 'hard prefix prompts', it cannot receive a full score." -incorporating instructional prompts into a unified generative framework for joint multiple intent detection and slot filling,8,"The abstract describes a method for addressing joint multiple Intent Detection (ID) and Slot Filling (SF) using a Unified Generative framework (UGEN) that relies on prompt-based instructions. Since it involves designing templates as instructional prompts in a question-answering format to improve understanding of intents and slots in natural language processing, it is highly relevant to prompt engineering. The focus on instructional prompts aligns with the study of how prompts can enhance performance in language models. However, it doesn't address 'hard prefix prompts' specifically, hence the rating is not a full 10." -a practical three-phase approach to fully automated programming using system decomposition and coding copilots,7,"The study focuses on enhancing the capabilities of language models in generating code, which indirectly relates to prompt engineering in the context of creating prompts that facilitate better code generation. 
The paper mentions empirical insights to create prompt templates, indicating that the research involves understanding how to structure prompts effectively to improve the performance of the language models. Thus, it has relevance to prompt engineering study, particularly the aspect of designing prompts for coding-related tasks. However, the paper's primary aim is not centered on the study of prompt engineering itself but rather on a neuro-symbolic approach to automated programming. This is why the relevance rating is not higher." -ku x upstage’s submission for the wmt22 quality estimation: critical error detection shared task,8,"The paper discusses the application of prompt-based fine-tuning within the context of quality estimation and critical error detection tasks which is closely related to prompt engineering. The method of reformulating the task to fit a masked language model objective and the efforts to design intuitive templates and label words are directly relevant to the study of engineering effective prompts. Although the focus is on the specific application of QE and CED in machine translation, the techniques and insights derived could be beneficial for prompt engineering study. The rating is not a full 10 because the paper is specialized in QE and CED, which is only a subset of the broader field of prompt engineering." -vision encoders in visual question answering,8,"The relevance of the study to prompt engineering is significant as it examines the impact of strategically formatting prompts on the performance of Visual Language Models in the task of Visual Question Answering. This exploration is an essential aspect of prompt engineering, as it directly relates to how the models' input structure influences their ability to leverage learned knowledge. The improvement in task performance through prompt formatting highlights the importance of prompt engineering for optimizing model efficacy. However, it is not given a full score because the study is specifically focused on VQA tasks and VLMs, rather than the broader field of prompt engineering across various models and tasks." -keyword-optimized template insertion for clinical information extraction via prompt-based learning,9,"The abstract describes a study focused on prompt-based learning, specifically within clinical NLP tasks, and addresses the challenge of prompt design optimization for text classification. Although it doesn't mention 'hard prefix prompts' explicitly, the research on keyword-optimized template insertion is highly relevant to the field of prompt engineering. It explores how the position of the template (i.e., prompt) can affect model performance, which is a core aspect of prompt engineering studies. The research is very pertinent for anyone interested in the effects of prompt design on model efficacy, especially in data-sparse scenarios such as clinical note classification. Thus, it receives a high relevance rating." -kul@smm4h’22: template augmented adaptive pre-training for tweet classification,7,"The paper's relevance to prompt engineering is significant as it discusses the use of template augmentations in pre-training models for tweet classification, which is a form of prompt engineering. The inclusion of 'template augmented task adaptive pre-training' indicates that the study explores how different prompt structures can aid in adapting language models to particular tasks, here being the classification of tweets mentioning Adverse Drug Effects. 
Although the study is focused on a specific application in the health domain and does not solely focus on 'hard prefix prompts', it demonstrates a practical implementation of prompt engineering through template augmentation. The relevance is not rated higher because the abstract does not directly address a systematic review on prompt engineering or 'hard prefix prompts' as a general concept, but rather reports on a specific application and its outcomes." -research on chinese short text classification based on prefix-vector attention template and probabilistic answer set,8,"The abstract discusses the use of a prefix-vector as a template in prompt learning for text classification, indicating a clear relevance to prompt engineering. It specifically addresses the optimization of prompts for improving performance in text classification tasks, which is a direct application of prompt engineering. However, it doesn't solely focus on 'hard' prefix prompts, hence the rating isn't a full 10." -stt: soft template tuning for few-shot learning,9,"The abstract discusses a new prompt-tuning framework called Soft Template Tuning (STT), which directly relates to prompt engineering as it involves the fine-tuning of prompts for few-shot learning applications with large language models. The study's focus on combining manual prompts and auto-prompts, as well as treating downstream tasks as masked language modeling tasks, is highly relevant to the field of prompt engineering. While it doesn't focus specifically on 'hard prefix prompts,' it does contribute significantly to the overall understanding of prompt tuning, which is a core aspect of prompt engineering. Therefore, it gets a high relevance rating." -cross-domain reasoning via template filling,8,"The paper discusses a prompt-template-filling approach which is highly relevant to the field of prompt engineering as it directly involves designing prompts to facilitate cross-domain reasoning in sequence to sequence models. The relevance is slightly lower than the maximum score because the prompt engineering study specified involves hard prefix prompts, and it is not clear from the abstract if the study specifically addresses hard prefix prompts or if it has a broader scope. Nevertheless, the methodology and case studies presented are likely to be informative for prompt engineering research, particularly in understanding and improving model's abilities in cross-domain applications." -stprompt: semantic-guided and task-driven prompts for effective few-shot classification,9,"The given title and abstract describe an approach to prompt engineering that is specifically tailored to improve few-shot classification performance in language models. The development of the STPrompt model, which utilizes semantic-guided and task-driven prompts, is highly relevant to the field of prompt engineering. The use of prompts that are constructed from semantic dependency trees and task-specific metadata is indicative of advanced prompt engineering techniques. Therefore, the study is almost directly aligned with prompt engineering, with the potential deduction of a point for not addressing 'hard prefix prompts' as the prompt is open-ended regarding the type of prompts studied." -supplementary material for mask-free ovis: open-vocabulary instance segmentation without manual mask annotations,8,"The abstract describes a process of using prompt templates to generate pseudo-captions from image-labels for vision-language models. 
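The Mask-free OVIS row above hinges on filling prompt templates with image labels to obtain pseudo-captions; a toy sketch, where the template strings are assumptions rather than the paper's actual templates:

```python
# Toy sketch of template-based pseudo-caption generation from image labels;
# these template strings are illustrative, not the paper's actual templates.
TEMPLATES = [
    "a photo of a {}.",
    "a cropped photo of the {}.",
    "an image containing a {}.",
]

def pseudo_captions(label: str) -> list[str]:
    return [t.format(label) for t in TEMPLATES]

print(pseudo_captions("zebra"))  # three caption variants for one label
```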
This is highly relevant to the study of prompt engineering because it involves the creation of templates that structure input for language models in a way that improves their understanding and output generation. While it doesn't directly mention the term 'hard prefix prompts', the use of rigidly structured prompt templates hints at a similar concept. Prompt engineering is crucial in this context to ensure that the model correctly interprets the image categories and generates coherent and accurate captions. The rating isn't a full 10 as it doesn't cover the entire breadth of prompt engineering studies, especially those that pertain to non-image related tasks, but it remains significantly relevant for the subset of prompt engineering it pertains to." -pre-training extractive question-answer prompts for few-shot chinese text classification,8,"The document discusses the use of prompt learning for few-shot text classification, which is a subset of prompt engineering as it involves designing and training prompts to work effectively with pre-trained language models. The relevance to prompt engineering is high because it directly deals with the creation of prompts that fit a specific task, which is extractive question-answering in this case. The study also touches upon improving the efficiency of such prompts using contrastive learning, which is an advanced topic in prompt engineering. However, the specific term 'hard prefix prompts' is not mentioned, which suggests that while the document is highly relevant to prompt engineering, it may not cover the 'hard prefix' aspect explicitly." -grounding language to entities and dynamics for generalization in reinforcement learning,7,"The described study involves creating templates for textual descriptions and has a component of paraphrasing, which relates to prompt engineering in that it deals with the systematic construction and variation of prompts. However, because it is situated within the context of reinforcement learning and generalization rather than directly focused on prompt engineering for language models or search queries, it is not a perfect match for the specific topic of a 'hard prefix prompts' systematic review." diff --git a/data/semantic_scholar_data_cleaned.csv b/data/semantic_scholar_data_cleaned.csv deleted file mode 100644 index 2a2c568..0000000 --- a/data/semantic_scholar_data_cleaned.csv +++ /dev/null @@ -1,4255 +0,0 @@ -Title,First Author,Abstract,TLDR,Open Access PDF URL -Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study,Yi Liu,"Large Language Models (LLMs), like ChatGPT, have demonstrated vast potential but also introduce challenges related to content constraints and potential misuse. Our study investigates three key research questions: (1) the number of different prompt types that can jailbreak LLMs, (2) the effectiveness of jailbreak prompts in circumventing LLM constraints, and (3) the resilience of ChatGPT against these jailbreak prompts. Initially, we develop a classification model to analyze the distribution of existing prompts, identifying ten distinct patterns and three categories of jailbreak prompts. Subsequently, we assess the jailbreak capability of prompts with ChatGPT versions 3.5 and 4.0, utilizing a dataset of 3,120 jailbreak questions across eight prohibited scenarios. Finally, we evaluate the resistance of ChatGPT against jailbreak prompts, finding that the prompts can consistently evade the restrictions in 40 use-case scenarios. 
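The evaluation protocol this abstract describes, pairing jailbreak prompts with questions from prohibited scenarios and counting evasions, reduces to a small harness along these lines; `query_model` and the refusal heuristic are placeholders, not the paper's setup:

```python
# Hedged sketch of a jailbreak-evaluation loop: pair each prompt with each
# scenario question and measure how often the model fails to refuse.
# `query_model` stands in for any chat-completion API call.
from typing import Callable

REFUSAL_MARKERS = ("I'm sorry", "I cannot", "I can't")  # crude heuristic

def evasion_rate(prompts: list[str], questions: list[str],
                 query_model: Callable[[str], str]) -> float:
    evaded = total = 0
    for p in prompts:
        for q in questions:
            reply = query_model(f"{p}\n\n{q}")
            total += 1
            evaded += not reply.startswith(REFUSAL_MARKERS)
    return evaded / total
```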
The study underscores the importance of prompt structures in jailbreaking LLMs and discusses the challenges of robust jailbreak prompt generation and prevention.","{'model': 'tldr@v2.0.0', 'text': 'The study underscores the importance of prompt structures in jailbreaking LLMs and discusses the challenges of robust jailbreak prompt generation and prevention.'}",http://arxiv.org/pdf/2305.13860 -"""Do Anything Now"": Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models",Xinyue Shen,"The misuse of large language models (LLMs) has garnered significant attention from the general public and LLM vendors. In response, efforts have been made to align LLMs with human values and intent use. However, a particular type of adversarial prompts, known as jailbreak prompt, has emerged and continuously evolved to bypass the safeguards and elicit harmful content from LLMs. In this paper, we conduct the first measurement study on jailbreak prompts in the wild, with 6,387 prompts collected from four platforms over six months. Leveraging natural language processing technologies and graph-based community detection methods, we discover unique characteristics of jailbreak prompts and their major attack strategies, such as prompt injection and privilege escalation. We also observe that jailbreak prompts increasingly shift from public platforms to private ones, posing new challenges for LLM vendors in proactive detection. To assess the potential harm caused by jailbreak prompts, we create a question set comprising 46,800 samples across 13 forbidden scenarios. Our experiments show that current LLMs and safeguards cannot adequately defend jailbreak prompts in all scenarios. Particularly, we identify two highly effective jailbreak prompts which achieve 0.99 attack success rates on ChatGPT (GPT-3.5) and GPT-4, and they have persisted online for over 100 days. Our work sheds light on the severe and evolving threat landscape of jailbreak prompts. We hope our study can facilitate the research community and LLM vendors in promoting safer and regulated LLMs.","{'model': 'tldr@v2.0.0', 'text': 'The first measurement study on jailbreak prompts in the wild is conducted, with 6,387 prompts collected from four platforms over six months, and it is shown that current LLMs and safeguards cannot adequately defend jailbreak Prompts in all scenarios.'}",https://arxiv.org/pdf/2308.03825 -"Automatic Prompt Optimization with ""Gradient Descent"" and Beam Search",Reid Pryzant,"Large Language Models (LLMs) have shown impressive performance as general purpose agents, but their abilities remain highly dependent on prompts which are hand written with onerous trial-and-error effort. We propose a simple and nonparametric solution to this problem, Automatic Prompt Optimization (APO), which is inspired by numerical gradient descent to automatically improve prompts, assuming access to training data and an LLM API. The algorithm uses minibatches of data to form natural language ""gradients"" that criticize the current prompt. The gradients are then ""propagated"" into the prompt by editing the prompt in the opposite semantic direction of the gradient. These gradient descent steps are guided by a beam search and bandit selection procedure which significantly improves algorithmic efficiency.
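The APO loop just described (natural-language "gradients" criticizing the prompt, edits in the opposite semantic direction, pruned by beam search) has a simple control-flow skeleton; `llm` and `score` are placeholder callables, and the meta-prompts below are assumptions, not the paper's:

```python
# Skeleton of one APO step: criticize the current prompt on a minibatch,
# edit against the critique, and keep the best candidates (beam search).
def apo_step(prompt, minibatch, llm, score, beam_width=4):
    critique = llm(
        f"Prompt: {prompt}\nExamples it mishandled: {minibatch}\n"
        "Explain what is wrong with this prompt."  # the textual "gradient"
    )
    edits = [
        llm(f"Prompt: {prompt}\nCritique: {critique}\nRewrite the prompt to fix this.")
        for _ in range(beam_width)
    ]
    # Keep the highest-scoring candidates for the next iteration.
    return sorted([prompt] + edits, key=score, reverse=True)[:beam_width]
```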
Preliminary results across three benchmark NLP tasks and the novel problem of LLM jailbreak detection suggest that Automatic Prompt Optimization can outperform prior prompt editing techniques and improve an initial prompt's performance by up to 31%, by using data to rewrite vague task descriptions into more precise annotation instructions.","{'model': 'tldr@v2.0.0', 'text': ""Preliminary results suggest that Automatic Prompt Optimization can outperform prior prompt editing techniques and improve an initial prompt's performance by up to 31%, by using data to rewrite vague task descriptions into more precise annotation instructions.""}",http://arxiv.org/pdf/2305.03495 -Latent Jailbreak: A Benchmark for Evaluating Text Safety and Output Robustness of Large Language Models,Huachuan Qiu,"Considerable research efforts have been devoted to ensuring that large language models (LLMs) align with human values and generate safe text. However, an excessive focus on sensitivity to certain topics can compromise the model's robustness in following instructions, thereby impacting its overall performance in completing tasks. Previous benchmarks for jailbreaking LLMs have primarily focused on evaluating the safety of the models without considering their robustness. In this paper, we propose a benchmark that assesses both the safety and robustness of LLMs, emphasizing the need for a balanced approach. To comprehensively study text safety and output robustness, we introduce a latent jailbreak prompt dataset, each involving malicious instruction embedding. Specifically, we instruct the model to complete a regular task, such as translation, with the text to be translated containing malicious instructions. To further analyze safety and robustness, we design a hierarchical annotation framework. We present a systematic analysis of the safety and robustness of LLMs regarding the position of explicit normal instructions, word replacements (verbs in explicit normal instructions, target groups in malicious instructions, cue words for explicit normal instructions), and instruction replacements (different explicit normal instructions). Our results demonstrate that current LLMs not only prioritize certain instruction verbs but also exhibit varying jailbreak rates for different instruction verbs in explicit normal instructions. Code and data are available at https://github.com/qiuhuachuan/latent-jailbreak.","{'model': 'tldr@v2.0.0', 'text': 'This paper introduces a latent jailbreak prompt dataset, and presents a systematic analysis of the safety and robustness of LLMs regarding the position of explicit normal instructions, word replacements, and instruction replacements.'}",https://arxiv.org/pdf/2307.08487 -Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal Language Models,Erfan Shayegani,"We introduce new jailbreak attacks on vision language models (VLMs), which use aligned LLMs and are resilient to text-only jailbreak attacks. Specifically, we develop cross-modality attacks on alignment where we pair adversarial images going through the vision encoder with textual prompts to break the alignment of the language model. Our attacks employ a novel compositional strategy that combines an image, adversarially targeted towards toxic embeddings, with generic prompts to accomplish the jailbreak. Thus, the LLM draws the context to answer the generic prompt from the adversarial image. 
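The embedding-space targeting just mentioned can be illustrated, at a safe level of abstraction, as feature matching against an open vision encoder; this sketch uses CLIP via transformers with a random stand-in target, and is an assumption-laden illustration rather than the paper's attack code:

```python
# Hedged sketch of embedding-space targeting: optimize an image so its
# vision-encoder embedding approaches a chosen target embedding. The target
# here is random noise, purely for illustration.
import torch
from transformers import CLIPModel

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the image is optimized

target = torch.randn(1, 512)                      # stand-in target embedding
image = torch.rand(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([image], lr=1e-2)

for _ in range(100):
    emb = model.get_image_features(pixel_values=image)
    loss = -torch.cosine_similarity(emb, target).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    image.data.clamp_(0.0, 1.0)                   # keep pixel values valid
```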
The generation of benign-appearing adversarial images leverages a novel embedding-space-based methodology, operating with no access to the LLM model. Instead, the attacks require access only to the vision encoder and utilize one of our four embedding space targeting strategies. By not requiring access to the LLM, the attacks lower the entry barrier for attackers, particularly when vision encoders such as CLIP are embedded in closed-source LLMs. The attacks achieve a high success rate across different VLMs, highlighting the risk of cross-modality alignment vulnerabilities, and the need for new alignment approaches for multi-modal models.","{'model': 'tldr@v2.0.0', 'text': 'Cross-modality attacks on alignment where adversarial images going through the vision encoder with textual prompts to break the alignment of the language model are developed.'}", -FuzzLLM: A Novel and Universal Fuzzing Framework for Proactively Discovering Jailbreak Vulnerabilities in Large Language Models,Dongyu Yao,"Jailbreak vulnerabilities in Large Language Models (LLMs), which exploit meticulously crafted prompts to elicit content that violates service guidelines, have captured the attention of research communities. While model owners can defend against individual jailbreak prompts through safety training strategies, this relatively passive approach struggles to handle the broader category of similar jailbreaks. To tackle this issue, we introduce FuzzLLM, an automated fuzzing framework designed to proactively test and discover jailbreak vulnerabilities in LLMs. We utilize templates to capture the structural integrity of a prompt and isolate key features of a jailbreak class as constraints. By integrating different base classes into powerful combo attacks and varying the elements of constraints and prohibited questions, FuzzLLM enables efficient testing with reduced manual effort. Extensive experiments demonstrate FuzzLLM's effectiveness and comprehensiveness in vulnerability discovery across various LLMs.","{'model': 'tldr@v2.0.0', 'text': 'FuzzLLM is introduced, an automated fuzzing framework designed to proactively test and discover jailbreak vulnerabilities in LLMs and utilizes templates to capture the structural integrity of a prompt and isolate key features of a jailbreak class as constraints.'}",https://arxiv.org/pdf/2309.05274 -Latent Jailbreak: A Test Suite for Evaluating Both Text Safety and Output Robustness of Large Language Models,Huachuan Qiu,"Warning: This paper contains examples of potentially offensive and harmful text. Considerable research efforts have been devoted to ensuring that large language models (LLMs) align with human values and generate safe text. However, an excessive focus on sensitivity to certain topics can compromise the model’s robustness in following instructions, thereby impacting its overall performance in completing tasks. Previous benchmarks for jailbreaking LLMs have primarily focused on evaluating the safety of the models without considering their robustness. In this paper, we propose a benchmark that assesses both the safety and robustness of LLMs, emphasizing the need for a balanced approach. To comprehensively study text safety and output robustness, we introduce a latent jailbreak prompt dataset, each involving malicious instruction embedding. Specifically, we instruct the model to complete a regular task, such as translation, with the text to be translated containing malicious instructions.
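The benchmark construction just described (an explicit normal instruction wrapping text that itself carries an embedded instruction) can be sketched with a neutral placeholder standing in for the dataset's payloads:

```python
# Sketch of assembling a latent-jailbreak test item; the placeholder below
# stands in for the dataset's actual embedded instructions.
NORMAL_INSTRUCTION = "Translate the following sentence into German:"
EMBEDDED = "[embedded instruction placeholder]"

def build_item(payload: str, position: str = "prefix") -> str:
    text = f"{EMBEDDED} {payload}" if position == "prefix" else f"{payload} {EMBEDDED}"
    return f"{NORMAL_INSTRUCTION}\n{text}"

# A robust model should translate the whole text rather than follow the
# embedded instruction; varying `position` probes the positional effects
# the paper analyzes.
print(build_item("The weather is nice today."))
```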
To further analyze safety and robustness, we design a hierarchical annotation framework. We present a systematic analysis of the safety and robustness of LLMs regarding the position of explicit normal instructions, word replacements (verbs in explicit normal instructions, target groups in malicious instructions, cue words for explicit normal instructions), and instruction replacements (different explicit normal instructions). Our results demonstrate that current LLMs not only prioritize certain instruction verbs but also exhibit varying jailbreak rates for different instruction verbs in explicit normal instructions. Code and data are available at https://github.com/qiuhuachuan/latent-jailbreak.","{'model': 'tldr@v2.0.0', 'text': 'This paper introduces a latent jailbreak prompt dataset, each involving malicious instruction embedding, and presents a systematic analysis of the safety and robustness of LLMs regarding the position of explicit normal instructions, word replacements, and instruction replacements.'}", -Jailbroken: How Does LLM Safety Training Fail?,Alexander Wei,"Large language models trained for safety and harmlessness remain susceptible to adversarial misuse, as evidenced by the prevalence of ""jailbreak"" attacks on early releases of ChatGPT that elicit undesired behavior. Going beyond recognition of the issue, we investigate why such attacks succeed and how they can be created. We hypothesize two failure modes of safety training: competing objectives and mismatched generalization. Competing objectives arise when a model's capabilities and safety goals conflict, while mismatched generalization occurs when safety training fails to generalize to a domain for which capabilities exist. We use these failure modes to guide jailbreak design and then evaluate state-of-the-art models, including OpenAI's GPT-4 and Anthropic's Claude v1.3, against both existing and newly designed attacks. We find that vulnerabilities persist despite the extensive red-teaming and safety-training efforts behind these models. Notably, new attacks utilizing our failure modes succeed on every prompt in a collection of unsafe requests from the models' red-teaming evaluation sets and outperform existing ad hoc jailbreaks. Our analysis emphasizes the need for safety-capability parity -- that safety mechanisms should be as sophisticated as the underlying model -- and argues against the idea that scaling alone can resolve these safety failure modes.","{'model': 'tldr@v2.0.0', 'text': 'The analysis emphasizes the need for safety-capability parity -- that safety mechanisms should be as sophisticated as the underlying model -- and argues against the idea that scaling alone can resolve these safety failure modes.'}",https://arxiv.org/pdf/2307.02483 -"Tricking LLMs into Disobedience: Understanding, Analyzing, and Preventing Jailbreaks",Abhinav Rao,"Recent explorations with commercial Large Language Models (LLMs) have shown that non-expert users can jailbreak LLMs by simply manipulating the prompts; resulting in degenerate output behavior, privacy and security breaches, offensive outputs, and violations of content regulator policies. Limited formal studies have been carried out to formalize and analyze these attacks and their mitigations. We bridge this gap by proposing a formalism and a taxonomy of known (and possible) jailbreaks. We perform a survey of existing jailbreak methods and their effectiveness on open-source and commercial LLMs (such as GPT 3.5, OPT, BLOOM, and FLAN-T5-xxl).
We further propose a limited set of prompt guards and discuss their effectiveness against known attack types.","{'model': 'tldr@v2.0.0', 'text': 'This work performs a survey of existing jailbreak methods and their effectiveness on open-source and commercial LLMs, and proposes a limited set of prompt guards and discusses their effectiveness against known attack types.'}",http://arxiv.org/pdf/2305.14965 -Jailbreaking Black Box Large Language Models in Twenty Queries,Patrick Chao,"There is growing interest in ensuring that large language models (LLMs) align with human values. However, the alignment of such models is vulnerable to adversarial jailbreaks, which coax LLMs into overriding their safety guardrails. The identification of these vulnerabilities is therefore instrumental in understanding inherent weaknesses and preventing future misuse. To this end, we propose Prompt Automatic Iterative Refinement (PAIR), an algorithm that generates semantic jailbreaks with only black-box access to an LLM. PAIR -- which is inspired by social engineering attacks -- uses an attacker LLM to automatically generate jailbreaks for a separate targeted LLM without human intervention. In this way, the attacker LLM iteratively queries the target LLM to update and refine a candidate jailbreak. Empirically, PAIR often requires fewer than twenty queries to produce a jailbreak, which is orders of magnitude more efficient than existing algorithms. PAIR also achieves competitive jailbreaking success rates and transferability on open and closed-source LLMs, including GPT-3.5/4, Vicuna, and PaLM-2.","{'model': 'tldr@v2.0.0', 'text': 'PAIR is an algorithm that generates semantic jailbreaks with only black-box access to an LLM with competitive jailbreaking success rates and transferability on open and closed-source LLMs, including GPT-3.5/4, Vicuna, and PaLM.'}",https://arxiv.org/pdf/2310.08419 -Language Model Unalignment: Parametric Red-Teaming to Expose Hidden Harms and Biases,Rishabh Bhardwaj,"Red-teaming has been a widely adopted way to evaluate the harmfulness of Large Language Models (LLMs). It aims to jailbreak a model's safety behavior to make it act as a helpful agent disregarding the harmfulness of the query. Existing methods are primarily based on input text-based red-teaming such as adversarial prompts, low-resource prompts, or contextualized prompts to condition the model in a way to bypass its safe behavior. Bypassing the guardrails uncovers hidden harmful information and biases in the model that are left untreated or newly introduced by its safety training. However, prompt-based attacks fail to provide such a diagnosis owing to their low attack success rate, and applicability to specific models. In this paper, we present a new perspective on LLM safety research i.e., parametric red-teaming through Unalignment. It simply (instruction) tunes the model parameters to break model guardrails that are not deeply rooted in the model's behavior. Unalignment using as few as 100 examples can significantly bypass commonly referred to as CHATGPT, to the point where it responds with an 88% success rate to harmful queries on two safety benchmark datasets. On open-source models such as VICUNA-7B and LLAMA-2-CHAT 7B AND 13B, it shows an attack success rate of more than 91%. 
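The PAIR procedure in the row above is, at its core, an iterative query-refine-judge loop; the skeleton below keeps only that control flow, with `attacker`, `target`, and `judge` as placeholders and no prompt content:

```python
# Control-flow skeleton of an iterative refinement loop in the style of PAIR;
# all three callables are placeholders.
def pair_loop(attacker, target, judge, seed: str, max_queries: int = 20):
    candidate = seed
    for _ in range(max_queries):
        response = target(candidate)
        score = judge(candidate, response)     # 1.0 means the objective was met
        if score >= 1.0:
            return candidate
        candidate = attacker(candidate, response, score)  # refine and retry
    return None
```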
On bias evaluations, Unalignment exposes inherent biases in safety-aligned models such as CHATGPT and LLAMA-2-CHAT where the model's responses are strongly biased and opinionated 64% of the time.","{'model': 'tldr@v2.0.0', 'text': ""A new perspective on LLM safety research is presented i.e., parametric red-teaming through Unalignment, which tunes the model parameters to break model guardrails that are not deeply rooted in the model's behavior.""}", -AutoDAN: Automatic and Interpretable Adversarial Attacks on Large Language Models,Sicheng Zhu,"Safety alignment of Large Language Models (LLMs) can be compromised with manual jailbreak attacks and (automatic) adversarial attacks. Recent work suggests that patching LLMs against these attacks is possible: manual jailbreak attacks are human-readable but often limited and public, making them easy to block; adversarial attacks generate gibberish prompts that can be detected using perplexity-based filters. In this paper, we show that these solutions may be too optimistic. We propose an interpretable adversarial attack, \texttt{AutoDAN}, that combines the strengths of both types of attacks. It automatically generates attack prompts that bypass perplexity-based filters while maintaining a high attack success rate like manual jailbreak attacks. These prompts are interpretable and diverse, exhibiting strategies commonly used in manual jailbreak attacks, and transfer better than their non-readable counterparts when using limited training data or a single proxy model. We also customize \texttt{AutoDAN}'s objective to leak system prompts, another jailbreak application not addressed in the adversarial attack literature. Our work provides a new way to red-team LLMs and to understand the mechanism of jailbreak attacks.","{'model': 'tldr@v2.0.0', 'text': 'An interpretable adversarial attack is proposed, \\texttt{AutoDAN}, that combines the strengths of both types of attacks and automatically generates attack prompts that bypass perplexity-based filters while maintaining a high attack success rate like manual jailbreak attacks.'}", -Latin America’s prisons pose major COVID-19 risks,," Subject COVID-19 and prisons. Significance Authorities in Leticia, Colombia, reported on May 12 that half of inmates at the local prison had tested positive for COVID-19. The news follows a major outbreak at a prison in Villavicencio last month that prompted an attempted jailbreak, and riots at 13 Colombian facilities on March 21, which resulted in 23 deaths at one prison. Concerns are growing about health and security at jails across Latin America, where tensions are building over overcrowding and unsanitary conditions. Impacts The confirmation of COVID-19 at Haiti’s largest jail last week will exacerbate pressure on the region’s most overcrowded prison system. Prisoner releases will become increasingly complicated as infection spreads within jails. Increased poverty due to the COVID-19 crisis could prompt a rise in crime, and thus prison populations. ",, -Visual Prompt Tuning,Menglin Jia,"The current modus operandi in adapting pre-trained models involves updating all the backbone parameters, i.e., full fine-tuning. This paper introduces Visual Prompt Tuning (VPT) as an efficient and effective alternative to full fine-tuning for large-scale Transformer models in vision.
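The VPT recipe just introduced (a small set of trainable tokens in the input space, frozen backbone) is compact enough to sketch; the dimensions and initialization below are illustrative assumptions:

```python
# Minimal sketch of the VPT idea: prepend trainable prompt tokens to a frozen
# Transformer's input sequence; dimensions are illustrative.
import torch
import torch.nn as nn

class VisualPromptWrapper(nn.Module):
    def __init__(self, backbone: nn.Module, dim: int = 768, n_prompts: int = 10):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False          # keep the backbone frozen
        self.prompts = nn.Parameter(torch.empty(1, n_prompts, dim))
        nn.init.normal_(self.prompts, std=0.02)

    def forward(self, patch_embeddings: torch.Tensor) -> torch.Tensor:
        prompts = self.prompts.expand(patch_embeddings.size(0), -1, -1)
        return self.backbone(torch.cat([prompts, patch_embeddings], dim=1))
```

Only `prompts` receives gradients, which is what keeps the tuned parameter count far below 1% of the backbone at these sizes.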
Taking inspiration from recent advances in efficiently tuning large language models, VPT introduces only a small amount (less than 1% of model parameters) of trainable parameters in the input space while keeping the model backbone frozen. Via extensive experiments on a wide variety of downstream recognition tasks, we show that VPT achieves significant performance gains compared to other parameter efficient tuning protocols. Most importantly, VPT even outperforms full fine-tuning in many cases across model capacities and training data scales, while reducing per-task storage cost.","{'model': 'tldr@v2.0.0', 'text': 'This paper introduces Visual Prompt Tuning (VPT) as an efficient and effective alternative to full fine-tuning for large-scale Transformer models in vision and shows that VPT achieves significant performance gains compared to other parameter efficient tuning protocols.'}",http://arxiv.org/pdf/2203.12119 -Conditional Prompt Learning for Vision-Language Models,Kaiyang Zhou,"With the rise of powerful pre-trained vision-language models like CLIP, it becomes essential to investigate ways to adapt these models to downstream datasets. A recently proposed method named Context Optimization (CoOp) introduces the concept of prompt learning—a recent trend in NLP—to the vision domain for adapting pre-trained vision-language models. Specifically, CoOp turns context words in a prompt into a set of learnable vectors and, with only a few labeled images for learning, can achieve huge improvements over intensively-tuned manual prompts. In our study we identify a critical problem of CoOp: the learned context is not generalizable to wider unseen classes within the same dataset, suggesting that CoOp overfits base classes observed during training. To address the problem, we propose Conditional Context Optimization (CoCoOp), which extends CoOp by further learning a lightweight neural network to generate for each image an input-conditional token (vector). Compared to CoOp's static prompts, our dynamic prompts adapt to each instance and are thus less sensitive to class shift. Extensive experiments show that CoCoOp generalizes much better than CoOp to unseen classes, even showing promising transferability beyond a single dataset; and yields stronger domain generalization performance as well. Code is available at https://github.com/KaiyangZhou/CoOp.","{'model': 'tldr@v2.0.0', 'text': 'Conditional Context Optimization (CoCoOp), which extends CoOp by further learning a lightweight neural network to generate for each image an input-conditional token (vector), and yields stronger domain generalization performance as well.'}",https://arxiv.org/pdf/2203.05557 -Prompt-to-Prompt Image Editing with Cross Attention Control,Amir Hertz,"Recent large-scale text-driven synthesis models have attracted much attention thanks to their remarkable capabilities of generating highly diverse images that follow given text prompts. Such text-based synthesis methods are particularly appealing to humans who are used to verbally describe their intent. Therefore, it is only natural to extend the text-driven image synthesis to text-driven image editing. Editing is challenging for these generative models, since an innate property of an editing technique is to preserve most of the original image, while in the text-based models, even a small modification of the text prompt often leads to a completely different outcome. 
State-of-the-art methods mitigate this by requiring the users to provide a spatial mask to localize the edit, hence, ignoring the original structure and content within the masked region. In this paper, we pursue an intuitive prompt-to-prompt editing framework, where the edits are controlled by text only. To this end, we analyze a text-conditioned model in depth and observe that the cross-attention layers are the key to controlling the relation between the spatial layout of the image to each word in the prompt. With this observation, we present several applications which monitor the image synthesis by editing the textual prompt only. This includes localized editing by replacing a word, global editing by adding a specification, and even delicately controlling the extent to which a word is reflected in the image. We present our results over diverse images and prompts, demonstrating high-quality synthesis and fidelity to the edited prompts.","{'model': 'tldr@v2.0.0', 'text': 'This paper analyzes a text-conditioned model in depth and observes that the cross-attention layers are the key to controlling the relation between the spatial layout of the image to each word in the prompt, and presents several applications which monitor the image synthesis by editing the textual prompt only.'}",http://arxiv.org/pdf/2208.01626 -The Power of Scale for Parameter-Efficient Prompt Tuning,Brian Lester,"In this work, we explore “prompt tuning,” a simple yet effective mechanism for learning “soft prompts” to condition frozen language models to perform specific downstream tasks. Unlike the discrete text prompts used by GPT-3, soft prompts are learned through backpropagation and can be tuned to incorporate signals from any number of labeled examples. Our end-to-end learned approach outperforms GPT-3’s few-shot learning by a large margin. More remarkably, through ablations on model size using T5, we show that prompt tuning becomes more competitive with scale: as models exceed billions of parameters, our method “closes the gap” and matches the strong performance of model tuning (where all model weights are tuned). This finding is especially relevant because large models are costly to share and serve and the ability to reuse one frozen model for multiple downstream tasks can ease this burden. Our method can be seen as a simplification of the recently proposed “prefix tuning” of Li and Liang (2021) and we provide a comparison to this and other similar approaches. Finally, we show that conditioning a frozen model with soft prompts confers benefits in robustness to domain transfer and enables efficient “prompt ensembling.” We release code and model checkpoints to reproduce our experiments.","{'model': 'tldr@v2.0.0', 'text': 'This work explores “prompt tuning,” a simple yet effective mechanism for learning “soft prompts” to condition frozen language models to perform specific downstream tasks and shows that conditioning a frozen model with soft prompts confers benefits in robustness to domain transfer and enables efficient “Prompt ensembling.”'}",https://aclanthology.org/2021.emnlp-main.243.pdf -P-Tuning: Prompt Tuning Can Be Comparable to Fine-tuning Across Scales and Tasks,Xiao Liu,"Prompt tuning, which only tunes continuous prompts with a frozen language model, substantially reduces per-task storage and memory usage at training. However, in the context of NLU, prior work reveals that prompt tuning does not perform well for normal-sized pretrained models. 
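The soft-prompt mechanism described in the Power of Scale row above, learned continuous embeddings prepended to the input while the LM stays frozen, can be sketched as follows; sizes are assumptions:

```python
# Sketch of soft prompt tuning: a small trainable matrix of prompt embeddings
# is prepended to the input embeddings of a frozen language model.
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, n_tokens: int = 20, dim: int = 1024):
        super().__init__()
        self.embed = nn.Parameter(torch.randn(n_tokens, dim) * 0.5)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        prompt = self.embed.unsqueeze(0).expand(input_embeds.size(0), -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)  # prepend soft prompt

# Only `SoftPrompt.embed` is updated by backpropagation; the frozen LM would
# consume the concatenated sequence through its inputs_embeds pathway.
```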
We also find that existing methods of prompt tuning cannot handle hard sequence labeling tasks, indicating a lack of universality. We present a novel empirical finding that properly optimized prompt tuning can be universally effective across a wide range of model scales and NLU tasks. It matches the performance of finetuning while having only 0.1%-3% tuned parameters. Our method P-Tuning v2 is an implementation of Deep Prompt Tuning (CITATION) optimized and adapted for NLU. Given the universality and simplicity of P-Tuning v2, we believe it can serve as an alternative to finetuning and a strong baseline for future research.","{'model': 'tldr@v2.0.0', 'text': 'The method P-Tuning v2 is an implementation of Deep Prompt Tuning (CITATION) optimized and adapted for NLU and can serve as an alternative to finetuning and a strong baseline for future research.'}",https://aclanthology.org/2022.acl-short.8.pdf -"Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing",Pengfei Liu,"This article surveys and organizes research works in a new paradigm in natural language processing, which we dub “prompt-based learning.” Unlike traditional supervised learning, which trains a model to take in an input x and predict an output y as P(y|x), prompt-based learning is based on language models that model the probability of text directly. To use these models to perform prediction tasks, the original input x is modified using a template into a textual string prompt x′ that has some unfilled slots, and then the language model is used to probabilistically fill the unfilled information to obtain a final string x̂, from which the final output y can be derived. This framework is powerful and attractive for a number of reasons: It allows the language model to be pre-trained on massive amounts of raw text, and by defining a new prompting function the model is able to perform few-shot or even zero-shot learning, adapting to new scenarios with few or no labeled data. In this article, we introduce the basics of this promising paradigm, describe a unified set of mathematical notations that can cover a wide variety of existing work, and organize existing work along several dimensions, e.g., the choice of pre-trained language models, prompts, and tuning strategies. To make the field more accessible to interested beginners, we not only make a systematic review of existing works and a highly structured typology of prompt-based concepts but also release other resources, e.g., a website NLPedia–Pretrain including constantly updated survey and paperlist.","{'model': 'tldr@v2.0.0', 'text': 'The basics of this promising paradigm in natural language processing are introduced, a unified set of mathematical notations that can cover a wide variety of existing work are described, and existing work is organized along several dimensions.'}",https://dl.acm.org/doi/pdf/10.1145/3560815 -Learning to Prompt for Open-Vocabulary Object Detection with Vision-Language Model,Yu Du,"Recently, vision-language pre-training shows great potential in open-vocabulary object detection, where detectors trained on base classes are devised for detecting new classes. The class text embedding is firstly generated by feeding prompts to the text encoder of a pre-trained vision-language model. It is then used as the region classifier to supervise the training of a detector. The key element that leads to the success of this model is the proper prompt, which requires careful words tuning and ingenious design. 
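The class-text-embedding step the DetPro abstract builds on (feed prompts to a vision-language model's text encoder and use the embeddings as classifier weights) looks roughly like this with CLIP; the hand-written template is shown only as the baseline that prompt-learning methods replace:

```python
# Sketch of turning class-name prompts into classifier weights with CLIP's
# text encoder; the template string is the standard hand-crafted baseline.
import torch
from transformers import CLIPModel, CLIPTokenizer

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
tok = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

classes = ["cat", "dog", "zebra"]
texts = [f"a photo of a {c}." for c in classes]
inputs = tok(texts, padding=True, return_tensors="pt")
with torch.no_grad():
    w = model.get_text_features(**inputs)
w = w / w.norm(dim=-1, keepdim=True)  # each row acts as one class classifier
```

Methods like DetPro swap the fixed template for learned context vectors while keeping this overall pipeline.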
To avoid laborious prompt engineering, there are some prompt representation learning methods being proposed for the image classification task, which however can only be sub-optimal solutions when applied to the detection task. In this paper, we introduce a novel method, detection prompt (DetPro), to learn continuous prompt representations for open-vocabulary object detection based on the pre-trained vision-language model. Different from the previous classification-oriented methods, DetPro has two highlights: 1) a background interpretation scheme to include the proposals in image background into the prompt training; 2) a context grading scheme to separate proposals in image foreground for tailored prompt training. We assemble DetPro with ViLD, a recent state-of-the-art open-world object detector, and conduct experiments on the LVIS as well as transfer learning on the Pascal VOC, COCO, Objects365 datasets. Experimental results show that our DetPro outperforms the baseline ViLD [7] in all settings, e.g., +3.4 APbox and +3.0 APmask improvements on the novel classes of LVIS. Code and models are available at https://github.com/dyabel/detpro.","{'model': 'tldr@v2.0.0', 'text': 'A novel method, detection prompt (DetPro), to learn continuous prompt representations for open-vocabulary object detection based on the pre-trained vision-language model, which outperforms the baseline ViLD in all settings.'}",https://arxiv.org/pdf/2203.14940 -Large Language Models Are Human-Level Prompt Engineers,Yongchao Zhou,"By conditioning on natural language instructions, large language models (LLMs) have displayed impressive capabilities as general-purpose computers. However, task performance depends significantly on the quality of the prompt used to steer the model, and most effective prompts have been handcrafted by humans. Inspired by classical program synthesis and the human approach to prompt engineering, we propose Automatic Prompt Engineer (APE) for automatic instruction generation and selection. In our method, we treat the instruction as the ""program,"" optimized by searching over a pool of instruction candidates proposed by an LLM in order to maximize a chosen score function. To evaluate the quality of the selected instruction, we evaluate the zero-shot performance of another LLM following the selected instruction. Experiments on 24 NLP tasks show that our automatically generated instructions outperform the prior LLM baseline by a large margin and achieve better or comparable performance to the instructions generated by human annotators on 19/24 tasks. We conduct extensive qualitative and quantitative analyses to explore the performance of APE. We show that APE-engineered prompts can be applied to steer models toward truthfulness and/or informativeness, as well as to improve few-shot learning performance by simply prepending them to standard in-context learning prompts. Please check out our webpage at https://sites.google.com/view/automatic-prompt-engineer.","{'model': 'tldr@v2.0.0', 'text': 'It is shown that APE-engineered prompts can be applied to steer models toward truthfulness and/or informativeness, as well as to improve few-shot learning performance by simply prepending them to standard in-context learning prompts.'}",http://arxiv.org/pdf/2211.01910 -"Recommendation as Language Processing (RLP): A Unified Pretrain, Personalized Prompt & Predict Paradigm (P5)",Shijie Geng,"For a long time, different recommendation tasks require designing task-specific architectures and training objectives.
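The APE row above amounts to propose-then-select: have an LLM generate instruction candidates from demonstrations, score each by downstream performance, and keep the argmax. A skeleton, with `llm` and `evaluate` as placeholders and the meta-prompt paraphrased rather than quoted:

```python
# Skeleton of APE-style instruction generation and selection.
def ape_select(llm, evaluate, demos, n_candidates: int = 16) -> str:
    meta_prompt = (
        "Here are input-output pairs produced by following an instruction:\n"
        f"{demos}\nThe instruction was:"
    )
    candidates = [llm(meta_prompt) for _ in range(n_candidates)]
    return max(candidates, key=evaluate)  # argmax over the chosen score function
```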
As a result, it is hard to transfer the knowledge and representations from one task to another, thus restricting the generalization ability of existing recommendation approaches. To deal with such issues, considering that language can describe almost anything and language grounding is a powerful medium to represent various problems or tasks, we present a flexible and unified text-to-text paradigm called “Pretrain, Personalized Prompt, and Predict Paradigm” (P5) for recommendation, which unifies various recommendation tasks in a shared framework. In P5, all data such as user-item interactions, user descriptions, item metadata, and user reviews are converted to a common format — natural language sequences. The rich information from natural language assists P5 to capture deeper semantics for personalization and recommendation. Specifically, P5 learns different tasks with the same language modeling objective during pretraining. Thus, it serves as the foundation model for various downstream recommendation tasks, allows easy integration with other modalities, and enables instruction-based recommendation. P5 advances recommender systems from shallow model to deep model to big model, and will revolutionize the technical form of recommender systems towards universal recommendation engine. With adaptive personalized prompt for different users, P5 is able to make predictions in a zero-shot or few-shot manner and largely reduces the necessity for extensive fine-tuning. On several benchmarks, we conduct experiments to show the effectiveness of P5. To help advance future research on Recommendation as Language Processing (RLP), Personalized Foundation Models (PFM), and Universal Recommendation Engine (URE), we release the source code, dataset, prompts, and pretrained P5 model at https://github.com/jeykigung/P5.","{'model': 'tldr@v2.0.0', 'text': 'A flexible and unified text-to-text paradigm called “Pretrain, Personalized Prompt, and Predict Paradigm” (P5) for recommendation, which unifies various recommendation tasks in a shared framework and will revolutionize the technical form of recommender systems towards universal recommendation engine.'}", -An Information-theoretic Approach to Prompt Engineering Without Ground Truth Labels,Lisa P. Argyle,"Pre-trained language models derive substantial linguistic and factual knowledge from the massive corpora on which they are trained, and prompt engineering seeks to align these models to specific tasks. Unfortunately, existing prompt engineering methods require significant amounts of labeled data, access to model parameters, or both. We introduce a new method for selecting prompt templates without labeled examples and without direct access to the model. Specifically, over a set of candidate templates, we choose the template that maximizes the mutual information between the input and the corresponding model output. Across 8 datasets representing 7 distinct NLP tasks, we show that when a template has high mutual information, it also has high accuracy on the task. 
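The mutual-information criterion just described can be estimated as H(Y) - H(Y|X) over the model's label-word distributions, with no labels needed; a sketch, where `label_probs` is an assumed callable returning a normalized distribution for one filled template:

```python
# Sketch of mutual-information template scoring: prefer templates whose
# outputs are confident per input (low H(Y|X)) but diverse overall (high H(Y)).
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    return float(-(p * np.log(p + 1e-12)).sum())

def mi_score(template: str, inputs, label_probs) -> float:
    dists = np.stack([label_probs(template.format(x)) for x in inputs])
    h_marginal = entropy(dists.mean(axis=0))                     # H(Y)
    h_conditional = float(np.mean([entropy(d) for d in dists]))  # H(Y|X)
    return h_marginal - h_conditional

# Template selection is then max(templates, key=lambda t: mi_score(t, xs, f)).
```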
On the largest model, selecting prompts with our method gets 90% of the way from the average prompt accuracy to the best prompt accuracy and requires no ground truth labels.","{'model': 'tldr@v2.0.0', 'text': 'A new method for selecting prompt templates without labeled examples and without direct access to the model is introduced, which gets 90% of the way from the average prompt accuracy to the best prompt accuracy and requires no ground truth labels.'}",https://www.cambridge.org/core/services/aop-cambridge-core/content/view/035D7C8A55B237942FB6DBAD7CAA4E49/S1047198723000025a.pdf/div-class-title-out-of-one-many-using-language-models-to-simulate-human-samples-div.pdf -Learning to Prompt for Vision-Language Models,Kaiyang Zhou,,"{'model': 'tldr@v2.0.0', 'text': 'Context Optimization (CoOp) is proposed, a simple approach specifically for adapting CLIP-like vision-language models for downstream image recognition that achieves superb domain generalization performance compared with the zero-shot model using hand-crafted prompts.'}",https://arxiv.org/pdf/2109.01134 -Prompt Distribution Learning,Yuning Lu,"We present prompt distribution learning for effectively adapting a pre-trained vision-language model to address downstream recognition tasks. Our method not only learns low-bias prompts from a few samples but also captures the distribution of diverse prompts to handle the varying visual representations. In this way, we provide high-quality task-related content for facilitating recognition. This prompt distribution learning is realized by an efficient approach that learns the output embeddings of prompts instead of the input embeddings. Thus, we can employ a Gaussian distribution to model them effectively and derive a surrogate loss for efficient training. Extensive experiments on 12 datasets demonstrate that our method consistently and significantly outperforms existing methods. For example, with 1 sample per category, it relatively improves the average result by 9.1% compared to human-crafted prompts.","{'model': 'tldr@v2.0.0', 'text': 'This work presents prompt distribution learning for effectively adapting a pre-trained vision-language model to address downstream recognition tasks and employs a Gaussian distribution to model them effectively and derive a surrogate loss for efficient training.'}",https://arxiv.org/pdf/2205.03340 -MaPLe: Multi-modal Prompt Learning,Muhammad Uzair Khattak,"Pre-trained vision-language (V-L) models such as CLIP have shown excellent generalization ability to downstream tasks. However, they are sensitive to the choice of input text prompts and require careful selection of prompt templates to perform well. Inspired by the Natural Language Processing (NLP) literature, recent CLIP adaptation approaches learn prompts as the textual inputs to fine-tune CLIP for downstream tasks. We note that using prompting to adapt representations in a single branch of CLIP (language or vision) is sub-optimal since it does not allow the flexibility to dynamically adjust both representation spaces on a downstream task. In this work, we propose Multi-modal Prompt Learning (MaPLe) for both vision and language branches to improve alignment between the vision and language representations. Our design promotes strong coupling between the vision-language prompts to ensure mutual synergy and discourages learning independent uni-modal solutions. 
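The coupling MaPLe describes, vision prompts conditioned on the language prompts rather than learned independently, can be sketched as a learned projection; dimensions are illustrative:

```python
# Sketch of cross-branch prompt coupling: visual prompts are generated from
# the language prompts via a learned projection, tying the two branches.
import torch
import torch.nn as nn

class CoupledPrompts(nn.Module):
    def __init__(self, n_prompts: int = 4, text_dim: int = 512, vision_dim: int = 768):
        super().__init__()
        self.text_prompts = nn.Parameter(torch.randn(n_prompts, text_dim) * 0.02)
        self.project = nn.Linear(text_dim, vision_dim)  # the coupling function

    def forward(self):
        return self.text_prompts, self.project(self.text_prompts)
```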
Further, we learn separate prompts across different early stages to progressively model the stage-wise feature relationships to allow rich context learning. We evaluate the effectiveness of our approach on three representative tasks of generalization to novel classes, new target datasets and unseen domain shifts. Compared with the state-of-the-art method Co-CoOp, MaPLe exhibits favorable performance and achieves an absolute gain of 3.45% on novel classes and 2.72% on overall harmonic-mean, averaged over 11 diverse image recognition datasets. Our code and pre-trained models are available at https://github.com/muzairkhattak/multimodal-prompt-learning.","{'model': 'tldr@v2.0.0', 'text': 'The design promotes strong coupling between the vision-language prompts to ensure mutual synergy and discourages learning independent uni-modal solutions, and the effectiveness of the approach is evaluated on three representative tasks of generalization to novel classes, new target datasets and unseen domain shifts.'}",https://arxiv.org/pdf/2210.03117 -Ignore Previous Prompt: Attack Techniques For Language Models,Fábio Perez,"Transformer-based large language models (LLMs) provide a powerful foundation for natural language tasks in large-scale customer-facing applications. However, studies that explore their vulnerabilities emerging from malicious user interaction are scarce. By proposing PromptInject, a prosaic alignment framework for mask-based iterative adversarial prompt composition, we examine how GPT-3, the most widely deployed language model in production, can be easily misaligned by simple handcrafted inputs. In particular, we investigate two types of attacks -- goal hijacking and prompt leaking -- and demonstrate that even low-aptitude, but sufficiently ill-intentioned agents, can easily exploit GPT-3's stochastic nature, creating long-tail risks. The code for PromptInject is available at https://github.com/agencyenterprise/PromptInject.","{'model': 'tldr@v2.0.0', 'text': ""This work investigates two types of attacks -- goal hijacking and prompt leaking -- and demonstrates that even low-aptitude, but sufficiently ill-intentioned agents, can easily exploit GPT-3's stochastic nature, creating long-tail risks.""}",https://arxiv.org/pdf/2211.09527 -Language Models that Seek for Knowledge: Modular Search & Generation for Dialogue and Prompt Completion,Kurt Shuster,"Language models (LMs) have recently been shown to generate more factual responses by employing modularity (Zhou et al., 2021) in combination with retrieval (Adolphs et al., 2021). We extend the recent approach of Adolphs et al. (2021) to include internet search as a module. Our SeeKeR (Search engine->Knowledge->Response) method thus applies a single LM to three modular tasks in succession: search, generating knowledge, and generating a final response. We show that, when using SeeKeR as a dialogue model, it outperforms the state-of-the-art model BlenderBot 2 (Chen et al., 2021) on open-domain knowledge-grounded conversations for the same number of parameters, in terms of consistency, knowledge and per-turn engagingness. SeeKeR applied to topical prompt completions as a standard language model outperforms GPT2 (Radford et al., 2019) and GPT3 (Brown et al., 2020) in terms of factuality and topicality, despite GPT3 being a vastly larger model. 
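SeeKeR's modular flow, one LM applied in succession to search, knowledge extraction, and response, is a three-call pipeline at heart; `lm` and `search` are placeholders and the instruction strings are assumptions:

```python
# Skeleton of a search -> knowledge -> response pipeline with a single LM.
def seeker_respond(lm, search, dialogue: str) -> str:
    query = lm(f"{dialogue}\nGenerate a search query:")
    documents = search(query)
    knowledge = lm(f"{dialogue}\nDocuments: {documents}\nExtract the relevant knowledge:")
    return lm(f"{dialogue}\nKnowledge: {knowledge}\nRespond to the user:")
```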
Our code and models are made publicly available.","{'model': 'tldr@v2.0.0', 'text': 'It is shown that, when using SeeKeR as a dialogue model, it outperforms the state-of-the-art model BlenderBot 2 on open-domain knowledge-grounded conversations for the same number of parameters, in terms of consistency, knowledge and per-turn engagingness.'}",http://arxiv.org/pdf/2203.13224 -Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models,Manli Shu,"Pre-trained vision-language models (e.g., CLIP) have shown promising zero-shot generalization in many downstream tasks with properly designed text prompts. Instead of relying on hand-engineered prompts, recent works learn prompts using the training data from downstream tasks. While effective, training on domain-specific data reduces a model's generalization capability to unseen new domains. In this work, we propose test-time prompt tuning (TPT), a method that can learn adaptive prompts on the fly with a single test sample. For image classification, TPT optimizes the prompt by minimizing the entropy with confidence selection so that the model has consistent predictions across different augmented views of each test sample. In evaluating generalization to natural distribution shifts, TPT improves the zero-shot top-1 accuracy of CLIP by 3.6% on average, surpassing previous prompt tuning approaches that require additional task-specific training data. In evaluating cross-dataset generalization with unseen categories, TPT performs on par with the state-of-the-art approaches that use additional training data. Project page: https://azshue.github.io/TPT.","{'model': 'tldr@v2.0.0', 'text': 'Test-time prompt tuning (TPT) is proposed, a method that can learn adaptive prompts on the fly with a single test sample and performs on par with the state-of-the-art approaches that use additional training data.'}",http://arxiv.org/pdf/2209.07511 -P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks,Xiao Liu,"Prompt tuning, which only tunes continuous prompts with a frozen language model, substantially reduces per-task storage and memory usage at training. However, in the context of NLU, prior work reveals that prompt tuning does not perform well for normal-sized pretrained models. We also find that existing methods of prompt tuning cannot handle hard sequence labeling tasks, indicating a lack of universality. We present a novel empirical finding that properly optimized prompt tuning can be universally effective across a wide range of model scales and NLU tasks. It matches the performance of finetuning while having only 0.1%-3% tuned parameters. Our method P-Tuning v2 is an implementation of Deep Prompt Tuning \cite{li2021prefix,qin2021learning} optimized and adapted for NLU. Given the universality and simplicity of P-Tuning v2, we believe it can serve as an alternative to finetuning and a strong baseline for future research.Our code and data are released at https://github.com/THUDM/P-tuning-v2.","{'model': 'tldr@v2.0.0', 'text': 'The method P-Tuning v2 is an implementation of Deep Prompt Tuning optimized and adapted for NLU and can serve as an alternative to finetuning and a strong baseline for future research.'}", -DiffusionDB: A Large-scale Prompt Gallery Dataset for Text-to-Image Generative Models,Zijie J. Wang,"With recent advancements in diffusion models, users can generate high-quality images by writing text prompts in natural language. 
However, generating images with desired details requires proper prompts, and it is often unclear how a model reacts to different prompts or what the best prompts are. To help researchers tackle these critical challenges, we introduce DiffusionDB, the first large-scale text-to-image prompt dataset totaling 6.5TB, containing 14 million images generated by Stable Diffusion, 1.8 million unique prompts, and hyperparameters specified by real users. We analyze the syntactic and semantic characteristics of prompts. We pinpoint specific hyperparameter values and prompt styles that can lead to model errors and present evidence of potentially harmful model usage, such as the generation of misinformation. The unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities in understanding the interplay between prompts and generative models, detecting deepfakes, and designing human-AI interaction tools to help users more easily use these models. DiffusionDB is publicly available at: https://poloclub.github.io/diffusiondb.","{'model': 'tldr@v2.0.0', 'text': 'This work introduces DiffusionDB, the first large-scale text-to-image prompt dataset totaling 6.5TB, containing 14 million images generated by Stable Diffusion, 1.8 million unique prompts, and hyperparameters specified by real users, and analyzes the syntactic and semantic characteristics of prompts.'}",https://arxiv.org/pdf/2210.14896 -Learning to Prompt for Continual Learning,Zifeng Wang,"The mainstream paradigm behind continual learning has been to adapt the model parameters to non-stationary data distributions, where catastrophic forgetting is the central challenge. Typical methods rely on a rehearsal buffer or known task identity at test time to retrieve learned knowl-edge and address forgetting, while this work presents a new paradigm for continual learning that aims to train a more succinct memory system without accessing task identity at test time. Our method learns to dynamically prompt (L2P) a pre-trained model to learn tasks sequen-tially under different task transitions. In our proposed framework, prompts are small learnable parameters, which are maintained in a memory space. The objective is to optimize prompts to instruct the model prediction and ex-plicitly manage task-invariant and task-specific knowledge while maintaining model plasticity. We conduct comprehen-sive experiments under popular image classification bench-marks with different challenging continual learning set-tings, where L2P consistently outperforms prior state-of-the-art methods. Surprisingly, L2P achieves competitive results against rehearsal-based methods even without a re-hearsal buffer and is directly applicable to challenging task-agnostic continual learning. Source code is available at https://github.com/google-research/12p.","{'model': 'tldr@v2.0.0', 'text': 'This work presents a new paradigm for continual learning that aims to train a more succinct memory system without accessing task identity at test time, and achieves competitive results against rehearsal-based methods even without a re-hearsal buffer.'}",https://arxiv.org/pdf/2112.08654 -Unified Vision and Language Prompt Learning,Yuhang Zang,"Prompt tuning, a parameter- and data-efficient transfer learning paradigm that tunes only a small number of parameters in a model's input space, has become a trend in the vision community since the emergence of large vision-language models like CLIP. 
We present a systematic study on two representative prompt tuning methods, namely text prompt tuning and visual prompt tuning. A major finding is that none of the unimodal prompt tuning methods performs consistently well: text prompt tuning fails on data with high intra-class visual variances while visual prompt tuning cannot handle low inter-class variances. To combine the best from both worlds, we propose a simple approach called Unified Prompt Tuning (UPT), which essentially learns a tiny neural network to jointly optimize prompts across different modalities. Extensive experiments on over 11 vision datasets show that UPT achieves a better trade-off than the unimodal counterparts on few-shot learning benchmarks, as well as on domain generalization benchmarks. Code and models will be released to facilitate future research.","{'model': 'tldr@v2.0.0', 'text': 'A systematic study on two representative prompt tuning methods, namely text prompt tuning and visual prompt tuning, and proposes a simple approach called Unified Prompt Tuning (UPT), which essentially learns a tiny neural network to jointly optimize prompts across different modalities.'}",http://arxiv.org/pdf/2210.07225 -Prompt-aligned Gradient for Prompt Tuning,Beier Zhu,"Thanks to the large pre-trained vision-language models (VLMs) like CLIP, we can craft a zero-shot classifier by""prompt"", e.g., the confidence score of an image being""[CLASS]""can be obtained by using the VLM provided similarity measure between the image and the prompt sentence""a photo of a [CLASS]"". Therefore, prompt shows a great potential for fast adaptation of VLMs to downstream tasks if we fine-tune the prompt-based similarity measure. However, we find a common failure that improper fine-tuning may not only undermine the prompt's inherent prediction for the task-related classes, but also for other classes in the VLM vocabulary. Existing methods still address this problem by using traditional anti-overfitting techniques such as early stopping and data augmentation, which lack a principled solution specific to prompt. We present Prompt-aligned Gradient, dubbed ProGrad, to prevent prompt tuning from forgetting the the general knowledge learned from VLMs. In particular, ProGrad only updates the prompt whose gradient is aligned (or non-conflicting) to the""general direction"", which is represented as the gradient of the KL loss of the pre-defined prompt prediction. Extensive experiments demonstrate the stronger few-shot generalization ability of ProGrad over state-of-the-art prompt tuning methods. Codes are available at https://github.com/BeierZhu/Prompt-align.","{'model': 'tldr@v2.0.0', 'text': 'Prompt-aligned Gradient is presented, dubbed ProGrad, to prevent prompt tuning from forgetting the the general knowledge learned from VLMs and demonstrates the stronger few-shot generalization ability of ProGrad over state-of-the-art prompt tuning methods.'}",https://arxiv.org/pdf/2205.14865 -Unsupervised Prompt Learning for Vision-Language Models,Hao Huang,"Contrastive vision-language models like CLIP have shown great progress in transfer learning. In the inference stage, the proper text description, also known as prompt, needs to be carefully designed to correctly classify the given images. In order to avoid laborious prompt engineering, recent works such as CoOp, CLIP-Adapter and Tip-Adapter propose to adapt vision-language models for downstream image recognition tasks on a small set of labeled data. 
Though promising improvements are achieved, requiring labeled data from the target datasets may restrict the scalability. In this paper, we explore a different scenario, in which the labels of the target datasets are unprovided, and we present an unsupervised prompt learning (UPL) approach to avoid prompt engineering while simultaneously improving transfer performance of CLIP-like vision-language models. As far as we know, UPL is the first work to introduce unsupervised learning into prompt learning. Experimentally, our UPL outperforms original CLIP with prompt engineering on ImageNet as well as other 10 datasets. An enhanced version of UPL is even competitive with the 8-shot CoOp and the 8-shot TIP-Adapter on most datasets. Code and models are available at https://github.com/tonyhuang2022/UPL.","{'model': 'tldr@v2.0.0', 'text': 'This paper presents an unsupervised prompt learning (UPL) approach to avoid prompt engineering while simultaneously improving transfer performance of CLIP-like vision-language models.'}",http://arxiv.org/pdf/2204.03649 -Domain Adaptation via Prompt Learning,Chunjiang Ge,"Unsupervised domain adaptation (UDA) aims to adapt models learned from a well-annotated source domain to a target domain, where only unlabeled samples are given. Current UDA approaches learn domain-invariant features by aligning source and target feature spaces through statistical discrepancy minimization or adversarial training. However, these constraints could lead to the distortion of semantic feature structures and loss of class discriminability. In this article, we introduce a novel prompt learning paradigm for UDA, named domain adaptation via prompt learning (DAPrompt). In contrast to prior works, our approach learns the underlying label distribution for target domain rather than aligning domains. The main idea is to embed domain information into prompts, a form of representation generated from natural language, which is then used to perform classification. This domain information is shared only by images from the same domain, thereby dynamically adapting the classifier according to each domain. By adopting this paradigm, we show that our model not only outperforms previous methods on several cross-domain benchmarks but also is very efficient to train and easy to implement.","{'model': 'tldr@v2.0.0', 'text': 'This article introduces a novel prompt learning paradigm for UDA, named domain adaptation via prompt learning (DAPrompt), which outperforms previous methods on several cross-domain benchmarks but also is very efficient to train and easy to implement.'}", -HyperPrompt: Prompt-based Task-Conditioning of Transformers,Yun He,"Prompt-Tuning is a new paradigm for finetuning pre-trained language models in a parameter-efficient way. Here, we explore the use of HyperNetworks to generate hyper-prompts: we propose HyperPrompt, a novel architecture for prompt-based task-conditioning of self-attention in Transformers. The hyper-prompts are end-to-end learnable via generation by a HyperNetwork. HyperPrompt allows the network to learn task-specific feature maps where the hyper-prompts serve as task global memories for the queries to attend to, at the same time enabling flexible information sharing among tasks. We show that HyperPrompt is competitive against strong multi-task learning baselines with as few as $0.14\%$ of additional task-conditioning parameters, achieving great parameter and computational efficiency. 
Through extensive empirical experiments, we demonstrate that HyperPrompt can achieve superior performances over strong T5 multi-task learning baselines and parameter-efficient adapter variants including Prompt-Tuning and HyperFormer++ on Natural Language Understanding benchmarks of GLUE and SuperGLUE across many model sizes.","{'model': 'tldr@v2.0.0', 'text': 'This work proposes HyperPrompt, a novel architecture for prompt-based task-conditioning of self-attention in Transformers, which can achieve superior performances over strong T5 multi-task learning baselines and parameter-efficient adapter variants including Prompt-Tuning and HyperFormer++ on Natural Language Understanding benchmarks of GLUE and SuperGLUE across many model sizes.'}",https://arxiv.org/pdf/2203.00759 -Prompt for Extraction? PAIE: Prompting Argument Interaction for Event Argument Extraction,Yubo Ma,"In this paper, we propose an effective yet efficient model PAIE for both sentence-level and document-level Event Argument Extraction (EAE), which also generalizes well when there is a lack of training data. On the one hand, PAIE utilizes prompt tuning for extractive objectives to take the best advantages of Pre-trained Language Models (PLMs). It introduces two span selectors based on the prompt to select start/end tokens among input texts for each role. On the other hand, it captures argument interactions via multi-role prompts and conducts joint optimization with optimal span assignments via a bipartite matching loss. Also, with a flexible prompt design, PAIE can extract multiple arguments with the same role instead of conventional heuristic threshold tuning. We have conducted extensive experiments on three benchmarks, including both sentence- and document-level EAE. The results present promising improvements from PAIE (3.5% and 2.3% F1 gains in average on three benchmarks, for PAIE-base and PAIE-large respectively). Further analysis demonstrates the efficiency, generalization to few-shot settings, and effectiveness of different extractive prompt tuning strategies. Our code is available at https://github.com/mayubo2333/PAIE.","{'model': 'tldr@v2.0.0', 'text': 'An effective yet efficient model PAIE for both sentence-level and document-level Event Argument Extraction (EAE), which also generalizes well when there is a lack of training data is proposed.'}",https://aclanthology.org/2022.acl-long.466.pdf -Neural Prompt Search,Yuanhan Zhang,"The size of vision models has grown exponentially over the last few years, especially after the emergence of Vision Transformer. This has motivated the development of parameter-efficient tuning methods, such as learning adapter layers or visual prompt tokens, which allow a tiny portion of model parameters to be trained whereas the vast majority obtained from pre-training are frozen. However, designing a proper tuning method is non-trivial: one might need to try out a lengthy list of design choices, not to mention that each downstream dataset often requires custom designs. In this paper, we view the existing parameter-efficient tuning methods as""prompt modules""and propose Neural prOmpt seArcH (NOAH), a novel approach that learns, for large vision models, the optimal design of prompt modules through a neural architecture search algorithm, specifically for each downstream dataset. By conducting extensive experiments on over 20 vision datasets, we demonstrate that NOAH (i) is superior to individual prompt modules, (ii) has a good few-shot learning ability, and (iii) is domain-generalizable. 
The code and models are available at https://github.com/Davidzhangyuanhan/NOAH.","{'model': 'tldr@v2.0.0', 'text': 'This paper proposes Neural prOmpt seArcH (NOAH), a novel approach that learns, for large vision models, the optimal design of prompt modules through a neural architecture search algorithm, specifically for each downstream dataset.'}",https://arxiv.org/pdf/2206.04673 -Prototypical Verbalizer for Prompt-based Few-shot Tuning,Ganqu Cui,"Prompt-based tuning for pre-trained language models (PLMs) has shown its effectiveness in few-shot learning. Typically, prompt-based tuning wraps the input text into a cloze question. To make predictions, the model maps the output words to labels via a verbalizer, which is either manually designed or automatically built. However, manual verbalizers heavily depend on domain-specific prior knowledge and human efforts, while finding appropriate label words automatically still remains challenging.In this work, we propose the prototypical verbalizer (ProtoVerb) which is built directly from training data. Specifically, ProtoVerb learns prototype vectors as verbalizers by contrastive learning. In this way, the prototypes summarize training instances and are able to enclose rich class-level semantics. We conduct experiments on both topic classification and entity typing tasks, and the results demonstrate that ProtoVerb significantly outperforms current automatic verbalizers, especially when training data is extremely scarce. More surprisingly, ProtoVerb consistently boosts prompt-based tuning even on untuned PLMs, indicating an elegant non-tuning way to utilize PLMs. Our codes are avaliable at https://github.com/thunlp/OpenPrompt.","{'model': 'tldr@v2.0.0', 'text': 'This work proposes the prototypical verbalizer (ProtoVerb) which is built directly from training data and demonstrates that ProtoVerb significantly outperforms current automatic verbalizers, especially when training data is extremely scarce.'}",http://arxiv.org/pdf/2203.09770 -PromDA: Prompt-based Data Augmentation for Low-Resource NLU Tasks,Yufei Wang,"This paper focuses on the Data Augmentation for low-resource Natural Language Understanding (NLU) tasks. We propose Prompt-based Data Augmentation model (PromDA) which only trains small-scale Soft Prompt (i.e., a set of trainable vectors) in the frozen Pre-trained Language Models (PLMs). This avoids human effort in collecting unlabeled in-domain data and maintains the quality of generated synthetic data. In addition, PromDA generates synthetic data via two different views and filters out the low-quality data using NLU models. Experiments on four benchmarks show that synthetic data produced by PromDA successfully boost up the performance of NLU models which consistently outperform several competitive baseline models, including a state-of-the-art semi-supervised model using unlabeled in-domain data. The synthetic data from PromDA are also complementary with unlabeled in-domain data. The NLU models can be further improved when they are combined for training.","{'model': 'tldr@v2.0.0', 'text': 'PromDA, a Prompt-based Data Augmentation model (PromDA) which only trains small-scale Soft Prompt in the frozen Pre-trained Language Models (PLMs) avoids human effort in collecting unlabeled in-domain data and maintains the quality of generated synthetic data.'}",https://aclanthology.org/2022.acl-long.292.pdf -No more fine-tuning? 
an experimental evaluation of prompt tuning in code intelligence,Chaozheng Wang,"Pre-trained models have been shown effective in many code intelligence tasks. These models are pre-trained on large-scale unlabeled corpus and then fine-tuned in downstream tasks. However, as the inputs to pre-training and downstream tasks are in different forms, it is hard to fully explore the knowledge of pre-trained models. Besides, the performance of fine-tuning strongly relies on the amount of downstream data, while in practice, the scenarios with scarce data are common. Recent studies in the natural language processing (NLP) field show that prompt tuning, a new paradigm for tuning, alleviates the above issues and achieves promising results in various NLP tasks. In prompt tuning, the prompts inserted during tuning provide task-specific knowledge, which is especially beneficial for tasks with relatively scarce data. In this paper, we empirically evaluate the usage and effect of prompt tuning in code intelligence tasks. We conduct prompt tuning on popular pre-trained models CodeBERT and CodeT5 and experiment with three code intelligence tasks including defect prediction, code summarization, and code translation. Our experimental results show that prompt tuning consistently outperforms fine-tuning in all three tasks. In addition, prompt tuning shows great potential in low-resource scenarios, e.g., improving the BLEU scores of fine-tuning by more than 26% on average for code summarization. Our results suggest that instead of fine-tuning, we could adapt prompt tuning for code intelligence tasks to achieve better performance, especially when lacking task-specific data.","{'model': 'tldr@v2.0.0', 'text': None}",https://arxiv.org/pdf/2207.11680 -Personalized Prompt Learning for Explainable Recommendation,Lei Li,"Providing user-understandable explanations to justify recommendations could help users better understand the recommended items, increase the system’s ease of use, and gain users’ trust. A typical approach to realize it is natural language generation. However, previous works mostly adopt recurrent neural networks to meet the ends, leaving the potentially more effective pre-trained Transformer models under-explored. In fact, user and item IDs, as important identifiers in recommender systems, are inherently in different semantic space as words that pre-trained models were already trained on. Thus, how to effectively fuse IDs into such models becomes a critical issue. Inspired by recent advancement in prompt learning, we come up with two solutions: find alternative words to represent IDs (called discrete prompt learning) and directly input ID vectors to a pre-trained model (termed continuous prompt learning). In the latter case, ID vectors are randomly initialized but the model is trained in advance on large corpora, so they are actually in different learning stages. To bridge the gap, we further propose two training strategies: sequential tuning and recommendation as regularization. 
Extensive experiments show that our continuous prompt learning approach equipped with the training strategies consistently outperforms strong baselines on three datasets of explainable recommendation.","{'model': 'tldr@v2.0.0', 'text': 'Inspired by recent advancement in prompt learning, a continuous prompt learning approach equipped with the training strategies consistently outperforms strong baselines on three datasets of explainable recommendation and proposes two training strategies: sequential tuning and recommendation as regularization.'}",https://arxiv.org/pdf/2202.07371 -A Taxonomy of Prompt Modifiers for Text-To-Image Generation,J. Oppenlaender,"Text-to-image generation has seen an explosion of interest since 2021. Today, beautiful and intriguing digital images and artworks can be synthesized from textual inputs (""prompts"") with deep generative models. Online communities around text-to-image generation and AI generated art have quickly emerged. This paper identifies six types of prompt modifiers used by practitioners in the online community based on a 3-month ethnographic study. The novel taxonomy of prompt modifiers provides researchers a conceptual starting point for investigating the practice of text-to-image generation, but may also help practitioners of AI generated art improve their images. We further outline how prompt modifiers are applied in the practice of""prompt engineering.""We discuss research opportunities of this novel creative practice in the field of Human-Computer Interaction (HCI). The paper concludes with a discussion of broader implications of prompt engineering from the perspective of Human-AI Interaction (HAI) in future applications beyond the use case of text-to-image generation and AI generated art.",, -PPT: Pre-trained Prompt Tuning for Few-shot Learning,Yuxian Gu,"Prompts for pre-trained language models (PLMs) have shown remarkable performance by bridging the gap between pre-training tasks and various downstream tasks. Among these methods, prompt tuning, which freezes PLMs and only tunes soft prompts, provides an efficient and effective solution for adapting large-scale PLMs to downstream tasks. However, prompt tuning is yet to be fully explored. In our pilot experiments, we find that prompt tuning performs comparably with conventional full-model tuning when downstream data are sufficient, whereas it is much worse under few-shot learning settings, which may hinder the application of prompt tuning. We attribute this low performance to the manner of initializing soft prompts. Therefore, in this work, we propose to pre-train prompts by adding soft prompts into the pre-training stage to obtain a better initialization. We name this Pre-trained Prompt Tuning framework “PPT”. To ensure the generalization of PPT, we formulate similar classification tasks into a unified task form and pre-train soft prompts for this unified task. Extensive experiments show that tuning pre-trained prompts for downstream tasks can reach or even outperform full-model fine-tuning under both full-data and few-shot settings. 
Our approach is effective and efficient for using large-scale PLMs in practice.","{'model': 'tldr@v2.0.0', 'text': 'This work proposes to pre-train prompts by adding soft prompts into the pre-training stage to obtain a better initialization, and names this Pre-trained Prompt Tuning framework “PPT” to ensure the generalization of PPT.'}",https://aclanthology.org/2022.acl-long.576.pdf -Towards Unified Conversational Recommender Systems via Knowledge-Enhanced Prompt Learning,Xiaolei Wang,"Conversational recommender systems (CRS) aim to proactively elicit user preference and recommend high-quality items through natural language conversations. Typically, a CRS consists of a recommendation module to predict preferred items for users and a conversation module to generate appropriate responses. To develop an effective CRS, it is essential to seamlessly integrate the two modules. Existing works either design semantic alignment strategies, or share knowledge resources and representations between the two modules. However, these approaches still rely on different architectures or techniques to develop the two modules, making it difficult for effective module integration. To address this problem, we propose a unified CRS model named UniCRS based on knowledge-enhanced prompt learning. Our approach unifies the recommendation and conversation subtasks into the prompt learning paradigm, and utilizes knowledge-enhanced prompts based on a fixed pre-trained language model (PLM) to fulfill both subtasks in a unified approach. In the prompt design, we include fused knowledge representations, task-specific soft tokens, and the dialogue context, which can provide sufficient contextual information to adapt the PLM for the CRS task. Besides, for the recommendation subtask, we also incorporate the generated response template as an important part of the prompt, to enhance the information interaction between the two subtasks. Extensive experiments on two public CRS datasets have demonstrated the effectiveness of our approach. Our code is publicly available at the link: https://github.com/RUCAIBox/UniCRS.","{'model': 'tldr@v2.0.0', 'text': 'This approach unifies the recommendation and conversation subtasks into the prompt learning paradigm, and utilizes knowledge-enhanced prompts based on a fixed pre-trained language model (PLM) to fulfill both subtasks in a unified approach.'}",https://arxiv.org/pdf/2206.09363 -Bridge-Prompt: Towards Ordinal Action Understanding in Instructional Videos,Muheng Li,"Action recognition models have shown a promising capability to classify human actions in short video clips. In a real scenario, multiple correlated human actions commonly occur in particular orders, forming semantically meaningful human activities. Conventional action recognition approaches focus on analyzing single actions. However, they fail to fully reason about the contextual relations between adjacent actions, which provide potential temporal logic for understanding long videos. In this paper, we propose a prompt-based framework, Bridge-Prompt (Br-Prompt), to model the semantics across adjacent actions, so that it simultaneously exploits both out-of-context and contextual information from a series of ordinal actions in instructional videos. More specifically, we reformulate the individual action labels as integrated text prompts for super-vision, which bridge the gap between individual action semantics. 
The generated text prompts are paired with corresponding video clips, and together co-train the text encoder and the video encoder via a contrastive approach. The learned vision encoder has a stronger capability for ordinal-action-related downstream tasks, e.g. action segmentation and human activity recognition. We evaluate the performances of our approach on several video datasets: Georgia Tech Egocentric Activities (GTEA), 50Salads, and the Breakfast dataset. Br-Prompt achieves state-of-the-art on multiple benchmarks. Code is available at: https://github.com/ttlmh/Bridge-Prompt.","{'model': 'tldr@v2.0.0', 'text': 'This paper reformulates the individual action labels as integrated text prompts for super-vision, which bridge the gap between individual action semantics, and proposes a prompt-based framework, Bridge-Prompt, to model the semantics across adjacent actions, so that it simultaneously exploits both out-of-context and contextual information from a series of ordinal actions in instructional videos.'}",https://arxiv.org/pdf/2203.14104 -Prompt Consistency for Zero-Shot Task Generalization,Chunting Zhou,"One of the most impressive results of recent NLP history is the ability of pre-trained language models to solve new tasks in a zero-shot setting. To achieve this, NLP tasks are framed as natural language prompts, generating a response indicating the predicted output. Nonetheless, the performance in such settings often lags far behind its supervised counterpart, suggesting a large space for potential improvement. In this paper, we explore methods to utilize unlabeled data to improve zero-shot performance. Specifically, we take advantage of the fact that multiple prompts can be used to specify a single task, and propose to regularize prompt consistency, encouraging consistent predictions over this diverse set of prompts. Our method makes it possible to fine-tune the model either with extra unlabeled training data, or directly on test input at inference time in an unsupervised manner. In experiments, our approach outperforms the state-of-the-art zero-shot learner, T0 (Sanh et al., 2022), on 9 out of 11 datasets across 4 NLP tasks by up to 10.6 absolute points in terms of accuracy. The gains are often attained with a small number of unlabeled examples.","{'model': 'tldr@v2.0.0', 'text': 'This work takes advantage of the fact that multiple prompts can be used to specify a single task, and proposes to regularize prompt consistency, encouraging consistent predictions over this diverse set of prompts, to improve zero-shot performance.'}",http://arxiv.org/pdf/2205.00049 -PromptCap: Prompt-Guided Task-Aware Image Captioning,Yushi Hu,"Knowledge-based visual question answering (VQA) involves questions that require world knowledge beyond the image to yield the correct answer. Large language models (LMs) like GPT-3 are particularly helpful for this task because of their strong knowledge retrieval and reasoning capabilities. To enable LM to understand images, prior work uses a captioning model to convert images into text. However, when summarizing an image in a single caption sentence, which visual entities to describe are often underspecified. Generic image captions often miss visual details essential for the LM to answer visual questions correctly. To address this challenge, we propose PromptCap (Prompt-guided image Captioning), a captioning model designed to serve as a better connector between images and black-box LMs. 
Different from generic captions, PromptCap takes a natural-language prompt to control the visual entities to describe in the generated caption. The prompt contains a question that the caption should aid in answering. To avoid extra annotation, PromptCap is trained by examples synthesized with GPT-3 and existing datasets. We demonstrate PromptCap's effectiveness on an existing pipeline in which GPT-3 is prompted with image captions to carry out VQA. PromptCap outperforms generic captions by a large margin and achieves state-of-the-art accuracy on knowledge-based VQA tasks (60.4% on OK-VQA and 59.6% on A-OKVQA). Zero-shot results on WebQA show that PromptCap generalizes well to unseen domains.","{'model': 'tldr@v2.0.0', 'text': 'PromptCap (Prompt-guided image Captioning), a captioning model designed to serve as a better connector between images and black-box LMs, achieves state-of-the-art accuracy on knowledge-based VQA tasks and generalizes well to unseen domains.'}",https://arxiv.org/pdf/2211.09699 -SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer,Tu Vu,"There has been growing interest in parameter-efficient methods to apply pre-trained language models to downstream tasks. Building on the Prompt Tuning approach of Lester et al. (2021), which learns task-specific soft prompts to condition a frozen pre-trained model to perform different tasks, we propose a novel prompt-based transfer learning approach called SPoT: Soft Prompt Transfer. SPoT first learns a prompt on one or more source tasks and then uses it to initialize the prompt for a target task. We show that SPoT significantly boosts the performance of Prompt Tuning across many tasks. More remarkably, across all model sizes, SPoT matches or outperforms standard Model Tuning (which fine-tunes all model parameters) on the SuperGLUE benchmark, while using up to 27,000× fewer task-specific parameters. To understand where SPoT is most effective, we conduct a large-scale study on task transferability with 26 NLP tasks in 160 combinations, and demonstrate that many tasks can benefit each other via prompt transfer. Finally, we propose an efficient retrieval approach that interprets task prompts as task embeddings to identify similar tasks and predict the most transferable source tasks for a novel target task.","{'model': 'tldr@v2.0.0', 'text': 'It is shown that SPoT significantly boosts the performance of Prompt Tuning across many tasks, and an efficient retrieval approach is proposed that interprets task prompts as task embeddings to identify similar tasks and predict the most transferable source tasks for a novel target task.'}",https://aclanthology.org/2022.acl-long.346.pdf -Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm,Laria Reynolds,"Prevailing methods for mapping large generative language models to supervised tasks may fail to sufficiently probe models’ novel capabilities. Using GPT-3 as a case study, we show that 0-shot prompts can significantly outperform few-shot prompts. We suggest that the function of few-shot examples in these cases is better described as locating an already learned task rather than meta-learning. This analysis motivates rethinking the role of prompts in controlling and evaluating powerful language models. We discuss methods of prompt programming, emphasizing the usefulness of considering prompts through the lens of natural language. 
We explore techniques for exploiting the capacity of narratives and cultural anchors to encode nuanced intentions and techniques for encouraging deconstruction of a problem into components before producing a verdict. Informed by this more encompassing theory of prompt programming, we also introduce the idea of a metaprompt that seeds the model to generate its own natural language prompts for a range of tasks. Finally, we discuss how these more general methods of interacting with language models can be incorporated into existing and future benchmarks and practical applications.","{'model': 'tldr@v2.0.0', 'text': 'It is suggested that the function of few-shot examples in these cases is better described as locating an already learned task rather than meta-learning, which motivates rethinking the role of prompts in controlling and evaluating powerful language models.'}",https://arxiv.org/pdf/2102.07350 -CODA-Prompt: COntinual Decomposed Attention-Based Prompting for Rehearsal-Free Continual Learning,James Smith,"Computer vision models suffer from a phenomenon known as catastrophic forgetting when learning novel concepts from continuously shifting training data. Typical solutions for this continual learning problem require extensive rehearsal of previously seen data, which increases memory costs and may violate data privacy. Recently, the emergence of large-scale pre-trained vision transformer models has enabled prompting approaches as an alternative to data-rehearsal. These approaches rely on a key-query mechanism to generate prompts and have been found to be highly resistant to catastrophic forgetting in the well-established rehearsal-free continual learning setting. However, the key mechanism of these methods is not trained end-to-end with the task sequence. Our experiments show that this leads to a reduction in their plasticity, hence sacrificing new task accuracy, and inability to benefit from expanded parameter capacity. We instead propose to learn a set of prompt components which are assembled with input-conditioned weights to produce input-conditioned prompts, resulting in a novel attention-based end-to-end key-query scheme. Our experiments show that we outperform the current SOTA method DualPrompt on established benchmarks by as much as 4.5% in average final accuracy. We also outperform the state of art by as much as 4.4% accuracy on a continual learning benchmark which contains both class-incremental and domain-incremental task shifts, corresponding to many practical settings. Our code is available at https://github.com/GT-RIPL/CODA-Prompt","{'model': 'tldr@v2.0.0', 'text': 'This work proposes to learn a set of prompt components which are assembled with input- Conditioned weights to produce input-conditioned prompts, resulting in a novel attention-based end-to-end key-query scheme.'}",https://arxiv.org/pdf/2211.13218 -Prompt Learning with Optimal Transport for Vision-Language Models,Guangyi Chen,"With the increasing attention to large vision-language models such as CLIP, there has been a significant amount of effort dedicated to building efficient prompts. Unlike conventional methods of only learning one single prompt, we propose to learn multiple comprehensive prompts to describe diverse characteristics of categories such as intrinsic attributes or extrinsic contexts. However, directly matching each prompt to the same visual feature is problematic, as it pushes the prompts to converge to one point. 
To solve this problem, we propose to apply optimal transport to match the vision and text modalities. Specifically, we first model images and the categories with visual and textual feature sets. Then, we apply a two-stage optimization strategy to learn the prompts. In the inner loop, we optimize the optimal transport distance to align visual features and prompts by the Sinkhorn algorithm, while in the outer loop, we learn the prompts by this distance from the supervised data. Extensive experiments are conducted on the few-shot recognition task and the improvement demonstrates the superiority of our method. The code is available at https://github.com/CHENGY12/PLOT.","{'model': 'tldr@v2.0.0', 'text': 'This work proposes to apply optimal transport to match the vision and text modalities and first model images and the categories with visual and textual feature sets and applies a two-stage optimization strategy to learn the prompts.'}",http://arxiv.org/pdf/2210.01253 -IDPG: An Instance-Dependent Prompt Generation Method,Zhuofeng Wu,"Prompt tuning is a new, efficient NLP transfer learning paradigm that adds a task-specific prompt in each input instance during the model training stage. It freezes the pre-trained language model and only optimizes a few task-specific prompts. In this paper, we propose a conditional prompt generation method to generate prompts for each input instance, referred to as the Instance-Dependent Prompt Generation (IDPG). Unlike traditional prompt tuning methods that use a fixed prompt, IDPG introduces a lightweight and trainable component to generate prompts based on each input sentence. Extensive experiments on ten natural language understanding (NLU) tasks show that the proposed strategy consistently outperforms various prompt tuning baselines and is on par with other efficient transfer learning methods such as Compacter while tuning far fewer model parameters.","{'model': 'tldr@v2.0.0', 'text': 'Extensive experiments on ten natural language understanding (NLU) tasks show that the proposed strategy consistently outperforms various prompt tuning baselines and is on par with other efficient transfer learning methods such as Compacter while tuning far fewer model parameters.'}",http://arxiv.org/pdf/2204.04497 -Continual Prompt Tuning for Dialog State Tracking,Qi Zhu,"A desirable dialog system should be able to continually learn new skills without forgetting old ones, and thereby adapt to new domains or tasks in its life cycle. However, continually training a model often leads to a well-known catastrophic forgetting issue. In this paper, we present Continual Prompt Tuning, a parameter-efficient framework that not only avoids forgetting but also enables knowledge transfer between tasks. To avoid forgetting, we only learn and store a few prompt tokens’ embeddings for each task while freezing the backbone pre-trained model. To achieve bi-directional knowledge transfer among tasks, we propose several techniques (continual prompt initialization, query fusion, and memory replay) to transfer knowledge from preceding tasks and a memory-guided technique to transfer knowledge from subsequent tasks. 
Extensive experiments demonstrate the effectiveness and efficiency of our proposed method on continual learning for dialog state tracking, compared with state-of-the-art baselines.","{'model': 'tldr@v2.0.0', 'text': 'This paper presents Continual Prompt Tuning, a parameter-efficient framework that not only avoids forgetting but also enables knowledge transfer between tasks, and proposes several techniques to transfer knowledge from preceding tasks and a memory-guided technique to transferknowledge from subsequent tasks.'}",http://arxiv.org/pdf/2203.06654 -Exploring the Universal Vulnerability of Prompt-based Learning Paradigm,Lei Xu,"Prompt-based learning paradigm bridges the gap between pre-training and fine-tuning, and works effectively under the few-shot setting. However, we find that this learning paradigm inherits the vulnerability from the pre-training stage, where model predictions can be misled by inserting certain triggers into the text. In this paper, we explore this universal vulnerability by either injecting backdoor triggers or searching for adversarial triggers on pre-trained language models using only plain text. In both scenarios, we demonstrate that our triggers can totally control or severely decrease the performance of prompt-based models fine-tuned on arbitrary downstream tasks, reflecting the universal vulnerability of the prompt-based learning paradigm. Further experiments show that adversarial triggers have good transferability among language models. We also find conventional fine-tuning models are not vulnerable to adversarial triggers constructed from pre-trained language models. We conclude by proposing a potential solution to mitigate our attack methods. Code and data are publicly available at https://github.com/leix28/prompt-universal-vulnerability","{'model': 'tldr@v2.0.0', 'text': 'This paper demonstrates that backdoor triggers or searching for adversarial triggers on pre-trained language models using only plain text can totally control or severely decrease the performance of prompt-based models fine-tuned on arbitrary downstream tasks, reflecting the universal vulnerability of the prompt- based learning paradigm.'}",http://arxiv.org/pdf/2204.05239 -How many data points is a prompt worth?,Teven Le Scao,"When fine-tuning pretrained models for classification, researchers either use a generic model head or a task-specific prompt for prediction. Proponents of prompting have argued that prompts provide a method for injecting task-specific guidance, which is beneficial in low-data regimes. We aim to quantify this benefit through rigorous testing of prompts in a fair setting: comparing prompted and head-based fine-tuning in equal conditions across many tasks and data sizes. By controlling for many sources of advantage, we find that prompting does indeed provide a benefit, and that this benefit can be quantified per task. Results show that prompting is often worth 100s of data points on average across classification tasks.","{'model': 'tldr@v2.0.0', 'text': 'It is found that prompting does indeed provide a benefit, and that this benefit can be quantified per task, and results show that prompting is often worth 100s of data points on average across classification tasks.'}",https://aclanthology.org/2021.naacl-main.208.pdf -KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation Extraction,Xiang Chen,"Recently, prompt-tuning has achieved promising results for specific few-shot classification tasks. 
The core idea of prompt-tuning is to insert text pieces (i.e., templates) into the input and transform a classification task into a masked language modeling problem. However, for relation extraction, determining an appropriate prompt template requires domain expertise, and it is cumbersome and time-consuming to obtain a suitable label word. Furthermore, there exists abundant semantic and prior knowledge among the relation labels that cannot be ignored. To this end, we focus on incorporating knowledge among relation labels into prompt-tuning for relation extraction and propose a Knowledge-aware Prompt-tuning approach with synergistic optimization (KnowPrompt). Specifically, we inject latent knowledge contained in relation labels into prompt construction with learnable virtual type words and answer words. Then, we synergistically optimize their representation with structured constraints. Extensive experimental results on five datasets with standard and low-resource settings demonstrate the effectiveness of our approach. Our code and datasets are available in GitHub1 for reproducibility.","{'model': 'tldr@v2.0.0', 'text': 'A Knowledge-aware Prompt-tuning approach with synergistic optimization (KnowPrompt) that injects latent knowledge contained in relation labels into prompt construction with learnable virtual type words and answer words.'}",https://arxiv.org/pdf/2104.07650 -Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification,Shengding Hu,"Tuning pre-trained language models (PLMs) with task-specific prompts has been a promising approach for text classification. Particularly, previous studies suggest that prompt-tuning has remarkable superiority in the low-data scenario over the generic fine-tuning methods with extra classifiers. The core idea of prompt-tuning is to insert text pieces, i.e., template, to the input and transform a classification problem into a masked language modeling problem, where a crucial step is to construct a projection, i.e., verbalizer, between a label space and a label word space. A verbalizer is usually handcrafted or searched by gradient descent, which may lack coverage and bring considerable bias and high variances to the results. In this work, we focus on incorporating external knowledge into the verbalizer, forming a knowledgeable prompttuning (KPT), to improve and stabilize prompttuning. Specifically, we expand the label word space of the verbalizer using external knowledge bases (KBs) and refine the expanded label word space with the PLM itself before predicting with the expanded label word space. Extensive experiments on zero and few-shot text classification tasks demonstrate the effectiveness of knowledgeable prompt-tuning.","{'model': 'tldr@v2.0.0', 'text': 'This work focuses on incorporating external knowledge into the verbalizer, forming a knowledgeable prompt Tuning (KPT), to improve and stabilize prompttuning.'}",https://aclanthology.org/2022.acl-long.158.pdf -Do Prompt-Based Models Really Understand the Meaning of Their Prompts?,Albert Webson,"Recently, a boom of papers has shown extraordinary progress in zero-shot and few-shot learning with various prompt-based models. It is commonly argued that prompts help models to learn faster in the same way that humans learn faster when provided with task instructions expressed in natural language. In this study, we experiment with over 30 prompts manually written for natural language inference (NLI). 
We find that models can learn just as fast with many prompts that are intentionally irrelevant or even pathologically misleading as they do with instructively “good” prompts. Further, such patterns hold even for models as large as 175 billion parameters (Brown et al., 2020) as well as the recently proposed instruction-tuned models which are trained on hundreds of prompts (Sanh et al., 2021). That is, instruction-tuned models often produce good predictions with irrelevant and misleading prompts even at zero shots. In sum, notwithstanding prompt-based models’ impressive improvement, we find evidence of serious limitations that question the degree to which such improvement is derived from models understanding task instructions in ways analogous to humans’ use of task instructions.","{'model': 'tldr@v2.0.0', 'text': 'It is found that models can learn just as fast with many prompts that are intentionally irrelevant or even pathologically misleading as they do with instructively “good” prompts, and instruction-tuned models often produce good predictions with irrelevant and misleading prompts even at zero shots.'}",https://aclanthology.org/2022.naacl-main.167.pdf -Pro-tuning: Unified Prompt Tuning for Vision Tasks,Xing Nie,"In computer vision, fine-tuning is the de-facto approach to leverage pre-trained vision models to perform downstream tasks. However, deploying it in practice is quite challenging, due to adopting parameter inefficient global update and heavily relying on high-quality downstream data. Recently, prompt-based learning, which adds a task-relevant prompt to adapt the downstream tasks to pre-trained models, has drastically boosted the performance of many natural language downstream tasks. In this work, we extend this notable transfer ability benefited from prompt into vision models as an alternative to fine-tuning. To this end, we propose parameter-efficient Prompt tuning (Pro-tuning) to adapt frozen vision models to various downstream vision tasks. The key to Pro-tuning is prompt-based tuning, i.e., learning task-specific vision prompts for downstream input images with the pre-trained model frozen. By only training a few additional parameters, it can work on diverse CNN-based and Transformer-based architectures. Extensive experiments evidence that Pro-tuning outperforms fine-tuning in a broad range of vision tasks and scenarios, including image classification (generic objects, class imbalance, image corruption, adversarial robustness, and out-of-distribution generalization), and dense prediction tasks such as object detection and semantic segmentation.","{'model': 'tldr@v2.0.0', 'text': 'This work proposes parameter-efficient Prompt tuning (Pro-tuning) to adapt frozen vision models to various downstream vision tasks, which outperforms fine- Tuning in a broad range of vision tasks and scenarios, including image classification, and dense prediction tasks such as object detection and semantic segmentation.'}",http://arxiv.org/pdf/2207.14381 -Interactive and Visual Prompt Engineering for Ad-hoc Task Adaptation with Large Language Models,Hendrik Strobelt,"State-of-the-art neural language models can now be used to solve ad-hoc language tasks through zero-shot prompting without the need for supervised training. This approach has gained popularity in recent years, and researchers have demonstrated prompts that achieve strong accuracy on specific NLP tasks. However, finding a prompt for new tasks requires experimentation. 
Different prompt templates with different wording choices lead to significant accuracy differences. PromptIDE allows users to experiment with prompt variations, visualize prompt performance, and iteratively optimize prompts. We developed a workflow that allows users to first focus on model feedback using small data before moving on to a large data regime that allows empirical grounding of promising prompts using quantitative measures of the task. The tool then allows easy deployment of the newly created ad-hoc models. We demonstrate the utility of PromptIDE (demo: http://prompt.vizhub.ai) and our workflow using several real-world use cases.","{'model': 'tldr@v2.0.0', 'text': 'A workflow that allows users to first focus on model feedback using small data before moving on to a large data regime that allows empirical grounding of promising prompts using quantitative measures of the task, and then allows easy deployment of the newly created ad-hoc models.'}",https://arxiv.org/pdf/2208.07852 -PromptMaker: Prompt-based Prototyping with Large Language Models,Ellen Jiang,"Prototyping is notoriously difficult to do with machine learning (ML), but recent advances in large language models may lower the barriers to people prototyping with ML, through the use of natural language prompts. This case study reports on the real-world experiences of industry professionals (e.g. designers, program managers, front-end developers) prototyping new ML-powered feature ideas via prompt-based prototyping. Through interviews with eleven practitioners during a three-week sprint and a workshop, we find that prompt-based prototyping reduced barriers of access by substantially broadening who can prototype with ML, sped up the prototyping process, and grounded communication between collaborators. Yet, it also introduced new challenges, such as the need to reverse-engineer prompt designs, source example data, and debug and evaluate prompt effectiveness. Taken together, this case study provides important implications that lay the groundwork toward a new future of prototyping with ML.","{'model': 'tldr@v2.0.0', 'text': None}", -Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning,Pan Lu,"Mathematical reasoning, a core ability of human intelligence, presents unique challenges for machines in abstract thinking and logical reasoning. Recent large pre-trained language models such as GPT-3 have achieved remarkable progress on mathematical reasoning tasks written in text form, such as math word problems (MWP). However, it is unknown if the models can handle more complex problems that involve math reasoning over heterogeneous information, such as tabular data. To fill the gap, we present Tabular Math Word Problems (TabMWP), a new dataset containing 38,431 open-domain grade-level problems that require mathematical reasoning on both textual and tabular data. Each question in TabMWP is aligned with a tabular context, which is presented as an image, semi-structured text, and a structured table. There are two types of questions: free-text and multi-choice, and each problem is annotated with gold solutions to reveal the multi-step reasoning process. We evaluate different pre-trained models on TabMWP, including the GPT-3 model in a few-shot setting. As earlier studies suggest, since few-shot GPT-3 relies on the selection of in-context examples, its performance is unstable and can degrade to near chance. The unstable issue is more severe when handling complex problems like TabMWP. 
To mitigate this, we further propose a novel approach, PromptPG, which utilizes policy gradient to learn to select in-context examples from a small amount of training data and then constructs the corresponding prompt for the test example. Experimental results show that our method outperforms the best baseline by 5.31% on the accuracy metric and reduces the prediction variance significantly compared to random selection, which verifies its effectiveness in selecting in-context examples.","{'model': 'tldr@v2.0.0', 'text': 'A novel approach is proposed, PromptPG, which utilizes policy gradient to learn to select in-context examples from a small amount of training data and then constructs the corresponding prompt for the test example, which verifies its effectiveness in selecting in- context examples.'}",http://arxiv.org/pdf/2209.14610 -Conversing with Copilot: Exploring Prompt Engineering for Solving CS1 Problems Using Natural Language,Paul Denny,"GitHub Copilot is an artificial intelligence tool for automatically generating source code from natural language problem descriptions. Since June 2022, Copilot has officially been available for free to all students as a plug-in to development environments like Visual Studio Code. Prior work exploring OpenAI Codex, the underlying model that powers Copilot, has shown it performs well on typical CS1 problems thus raising concerns about its potential impact on how introductory programming courses are taught. However, little is known about the types of problems for which Copilot does not perform well, or about the natural language interactions that a student might have with Copilot when resolving errors. We explore these questions by evaluating the performance of Copilot on a publicly available dataset of 166 programming problems. We find that it successfully solves around half of these problems on its very first attempt, and that it solves 60% of the remaining problems using only natural language changes to the problem description. We argue that this type of prompt engineering, which we believe will become a standard interaction between human and Copilot when it initially fails, is a potentially useful learning activity that promotes computational thinking skills, and is likely to change the nature of code writing skill development.","{'model': 'tldr@v2.0.0', 'text': 'Evaluating the performance of Copilot on a publicly available dataset of 166 programming problems finds that it successfully solves around half of these problems on its very first attempt, and that it solves 60% of the remaining problems using only natural language changes to the problem description.'}",https://eprints.iisc.ac.in/81157/1/SIGCSE_2023.pdf -Design Guidelines for Prompt Engineering Text-to-Image Generative Models,Vivian Liu,"Text-to-image generative models are a new and powerful way to generate visual artwork. However, the open-ended nature of text as interaction is double-edged; while users can input anything and have access to an infinite range of generations, they also must engage in brute-force trial and error with the text prompt when the result quality is poor. We conduct a study exploring what prompt keywords and model hyperparameters can help produce coherent outputs. In particular, we study prompts structured to include subject and style keywords and investigate success and failure modes of these prompts. Our evaluation of 5493 generations over the course of five experiments spans 51 abstract and concrete subjects as well as 51 abstract and figurative styles. 
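The PromptPG entry above selects in-context examples with a policy gradient. Below is a toy REINFORCE sketch of that idea, not the paper's implementation: the candidate names and the reward table (standing in for real model feedback on TabMWP-style problems) are made up.

```python
import math
import random

random.seed(0)

# Candidate in-context examples the policy chooses among.
CANDIDATES = ["ex_table_lookup", "ex_unit_conversion", "ex_multi_step"]
# Hypothetical chance that a stub LLM answers correctly given each example.
REWARD_PROB = {"ex_table_lookup": 0.3, "ex_unit_conversion": 0.5, "ex_multi_step": 0.8}

logits = {c: 0.0 for c in CANDIDATES}
LR = 0.5

def policy():
    z = {c: math.exp(v) for c, v in logits.items()}
    total = sum(z.values())
    return {c: v / total for c, v in z.items()}

for step in range(2000):
    probs = policy()
    choice = random.choices(CANDIDATES, weights=[probs[c] for c in CANDIDATES])[0]
    reward = 1.0 if random.random() < REWARD_PROB[choice] else 0.0
    # REINFORCE: grad of log pi(choice) w.r.t. logit c is 1[c == choice] - pi(c)
    for c in CANDIDATES:
        logits[c] += LR * reward * ((1.0 if c == choice else 0.0) - probs[c])

print(max(logits, key=logits.get))  # tends to the highest-reward example
```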
From this evaluation, we present design guidelines that can help people produce better outcomes from text-to-image generative models.","{'model': 'tldr@v2.0.0', 'text': 'A study exploring what prompt keywords and model hyperparameters can help produce coherent outputs from text-to-image generative models, structured to include subject and style keywords and investigates success and failure modes of these prompts.'}",https://arxiv.org/pdf/2109.06977 -"ZeroPrompt: Scaling Prompt-Based Pretraining to 1,000 Tasks Improves Zero-Shot Generalization",Hanwei Xu,"We propose a multitask pretraining approach ZeroPrompt for zero-shot generalization, focusing on task scaling and zero-shot prompting. While previous models are trained on only a few dozen tasks, we scale to 1,000 tasks for the first time using real-world data. This leads to a crucial discovery that task scaling can be an efficient alternative to model scaling; i.e., the model size has little impact on performance with an extremely large number of tasks. Our results show that task scaling can substantially improve training efficiency by 30 times in FLOPs. Moreover, we present a prompting method that incorporates a genetic algorithm to automatically search for the best prompt for unseen tasks, along with a few other improvements. Empirically, ZeroPrompt substantially improves both the efficiency and the performance of zero-shot learning across a variety of academic and production datasets.","{'model': 'tldr@v2.0.0', 'text': 'The results show that task scaling can substantially improve training efficiency by 30 times in FLOPs, and a prompting method that incorporates a genetic algorithm to automatically search for the best prompt for unseen tasks, along with a few other improvements.'}",https://aclanthology.org/2022.findings-emnlp.312.pdf -Prompt-free and Efficient Few-shot Learning with Language Models,Rabeeh Karimi Mahabadi,"Current methods for few-shot fine-tuning of pretrained masked language models (PLMs) require carefully engineered prompts and verbalizers for each new task to convert examples into a cloze-format that the PLM can score. In this work, we propose Perfect, a simple and efficient method for few-shot fine-tuning of PLMs without relying on any such handcrafting, which is highly effective given as few as 32 data points. Perfect makes two key design choices: First, we show that manually engineered task prompts can be replaced with task-specific adapters that enable sample-efficient fine-tuning and reduce memory and storage costs by roughly factors of 5 and 100, respectively. Second, instead of using handcrafted verbalizers, we learn new multi-token label embeddings during fine-tuning, which are not tied to the model vocabulary and which allow us to avoid complex auto-regressive decoding. These embeddings are not only learnable from limited data but also enable nearly 100x faster training and inference. Experiments on a wide range of few shot NLP tasks demonstrate that Perfect, while being simple and efficient, also outperforms existing state-of-the-art few-shot learning methods.
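The genetic-algorithm prompt search mentioned in the ZeroPrompt entry above can be sketched as selection, crossover, and mutation over token sequences. The vocabulary and the stub fitness function (standing in for dev-set accuracy on an unseen task) are made up.

```python
import random

random.seed(3)

VOCAB = ["answer", "the", "question", "carefully", "step", "by", "now", "please"]
TARGET = {"answer", "the", "question", "step", "by"}  # tokens the stub scorer rewards

def fitness(prompt):
    # Stub for how well a model does on a dev set given this prompt.
    return len(set(prompt) & TARGET) / len(TARGET)

def mutate(prompt):
    p = list(prompt)
    p[random.randrange(len(p))] = random.choice(VOCAB)
    return p

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.choice(VOCAB) for _ in range(5)] for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]       # keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

population.sort(key=fitness, reverse=True)
print(" ".join(population[0]), "fitness:", fitness(population[0]))
```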
Our code is publicly available at https://github.com/rabeehk/perfect.","{'model': 'tldr@v2.0.0', 'text': 'Experiments demonstrate that Perfect, a simple and efficient method for few-shot fine-tuning of PLMs without relying on any such handcrafting, also outperforms existing state-of-the-art few- shot learning methods.'}",http://arxiv.org/pdf/2204.01172 -Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity,Yao Lu,"When primed with only a handful of training samples, very large, pretrained language models such as GPT-3 have shown competitive results when compared to fully-supervised, fine-tuned, large, pretrained language models. We demonstrate that the order in which the samples are provided can make the difference between near state-of-the-art and random guess performance: essentially some permutations are “fantastic” and some not. We analyse this phenomenon in detail, establishing that: it is present across model sizes (even for the largest current models), it is not related to a specific subset of samples, and that a given good permutation for one model is not transferable to another. While one could use a development set to determine which permutations are performant, this would deviate from the true few-shot setting as it requires additional annotated data. Instead, we use the generative nature of language models to construct an artificial development set and based on entropy statistics of the candidate permutations on this set, we identify performant prompts. Our method yields a 13% relative improvement for GPT-family models across eleven different established text classification tasks.","{'model': 'tldr@v2.0.0', 'text': 'This work uses the generative nature of language models to construct an artificial development set and based on entropy statistics of the candidate permutations on this set, it identifies performant prompts and yields a 13% relative improvement for GPT-family models across eleven different established text classification tasks.'}",https://aclanthology.org/2022.acl-long.556.pdf -PTR: Prompt Tuning with Rules for Text Classification,Xu Han,,"{'model': 'tldr@v2.0.0', 'text': 'This work proposes prompt tuning with rules (PTR) for many-class text classification and applies logic rules to construct prompts with several sub-prompts, indicating that PTR is a promising approach to take advantage of both human prior knowledge and PLMs for those complicated classification tasks.'}", -Ontology-enhanced Prompt-tuning for Few-shot Learning,Hongbin Ye,"Few-shot Learning (FSL) is aimed to make predictions based on a limited number of samples. Structured data such as knowledge graphs and ontology libraries has been leveraged to benefit the few-shot setting in various tasks. However, the priors adopted by the existing methods suffer from challenging knowledge missing, knowledge noise, and knowledge heterogeneity, which hinder the performance for few-shot learning. In this study, we explore knowledge injection for FSL with pre-trained language models and propose ontology-enhanced prompt-tuning (OntoPrompt). Specifically, we develop the ontology transformation based on the external knowledge graph to address the knowledge missing issue, which fulfills and converts structure knowledge to text. We further introduce span-sensitive knowledge injection via a visible matrix to select informative knowledge to handle the knowledge noise issue. 
To bridge the gap between knowledge and text, we propose a collective training algorithm to optimize representations jointly. We evaluate our proposed OntoPrompt in three tasks, including relation extraction, event extraction, and knowledge graph completion, with eight datasets. Experimental results demonstrate that our approach can obtain better few-shot performance than baselines.","{'model': 'tldr@v2.0.0', 'text': 'This study develops the ontology transformation based on the external knowledge graph to address the knowledge missing issue and proposes ontology-enhanced prompt-tuning (OntoPrompt), which fulfills and converts structure knowledge to text.'}",https://arxiv.org/pdf/2201.11332 -Iteratively Prompt Pre-trained Language Models for Chain of Thought,Boshi Wang,"While Pre-trained Language Models (PLMs) internalize a great amount of world knowledge, they have been shown incapable of recalling these knowledge to solve tasks requiring complex & multi-step reasoning. Similar to how humans develop a “chain of thought” for these tasks, how can we equip PLMs with such abilities? In this work, we explore an iterative prompting framework, a new prompting paradigm which progressively elicits relevant knowledge from PLMs for multi-step inference. We identify key limitations of existing prompting methods, namely they are either restricted to queries with a single identifiable relation/predicate, or being agnostic to input contexts, which makes it difficult to capture variabilities across different inference steps. We propose an iterative context-aware prompter, which addresses these limitations by learning to dynamically synthesize prompts conditioned on the current step’s contexts. Experiments on three datasets involving multi-step reasoning show the effectiveness of the iterative scheme and the context-aware prompter design.","{'model': 'tldr@v2.0.0', 'text': 'An iterative prompting framework is explored, a new prompting paradigm which progressively elicits relevant knowledge from PLMs for multi-step inference by learning to dynamically synthesize prompts conditioned on the current step’s contexts.'}",https://aclanthology.org/2022.emnlp-main.174.pdf -Black-box Prompt Learning for Pre-trained Language Models,Shizhe Diao,"The increasing scale of general-purpose Pre-trained Language Models (PLMs) necessitates the study of more efficient adaptation across different downstream tasks. In this paper, we establish a Black-box Discrete Prompt Learning (BDPL) to resonate with pragmatic interactions between the cloud infrastructure and edge devices. Particularly, instead of fine-tuning the model in the cloud, we adapt PLMs by prompt learning, which efficiently optimizes only a few parameters of the discrete prompts. Moreover, we consider the scenario that we do not have access to the parameters and gradients of the pre-trained models, except for its outputs given inputs. This black-box setting secures the cloud infrastructure from potential attack and misuse to cause a single-point failure, which is preferable to the white-box counterpart by current infrastructures. Under this black-box constraint, we apply a variance-reduced policy gradient algorithm to estimate the gradients of parameters in the categorical distribution of each discrete prompt. In light of our method, the user devices can efficiently tune their tasks by querying the PLMs bounded by a range of API calls. 
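The black-box setup in the BDPL entry above can be sketched as a variance-reduced policy gradient over one categorical distribution per discrete prompt slot. Everything below is an illustrative stand-in: `black_box_reward` plays the role of dev-set accuracy obtained purely from API outputs, and the vocabulary and sizes are arbitrary.

```python
import math
import random

random.seed(1)

VOCAB = ["great", "terrible", "movie", "review:", "sentiment", "classify"]
SLOTS = 3   # length of the discrete prompt
LR = 0.3

# One categorical distribution (parameterised by logits) per prompt slot.
logits = [[0.0] * len(VOCAB) for _ in range(SLOTS)]

def softmax(row):
    z = [math.exp(v) for v in row]
    s = sum(z)
    return [v / s for v in z]

def black_box_reward(tokens):
    """Stub for the black-box signal: in the BDPL setting this would come
    from model outputs alone (no access to weights or gradients)."""
    useful = {"review:", "sentiment", "classify"}
    return sum(t in useful for t in tokens) / len(tokens)

baseline = 0.0  # moving-average baseline for variance reduction
for step in range(3000):
    probs = [softmax(row) for row in logits]
    idx = [random.choices(range(len(VOCAB)), weights=p)[0] for p in probs]
    reward = black_box_reward([VOCAB[i] for i in idx])
    advantage = reward - baseline
    baseline = 0.9 * baseline + 0.1 * reward
    for s in range(SLOTS):
        for v in range(len(VOCAB)):
            ind = 1.0 if v == idx[s] else 0.0
            logits[s][v] += LR * advantage * (ind - probs[s][v])

best_prompt = [VOCAB[max(range(len(VOCAB)), key=lambda v: logits[s][v])]
               for s in range(SLOTS)]
print(best_prompt)
```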
Our experiments on RoBERTa and GPT-3 demonstrate that the proposed algorithm achieves significant improvement on eight benchmarks in a cloud-device collaboration manner. Finally, we conduct in-depth case studies to comprehensively analyze our method in terms of various data sizes, prompt lengths, training budgets, optimization objectives, prompt transferability, and explanations of the learned prompts. Our code will be available at https://github.com/shizhediao/Black-Box-Prompt-Learning.","{'model': 'tldr@v2.0.0', 'text': 'A Black-box Discrete Prompt Learning (BDPL) is established to resonate with pragmatic interactions between the cloud infrastructure and edge devices and achieves significant improvement on eight benchmarks in a cloud-device collaboration manner.'}", -Visual Prompt Tuning for Test-time Domain Adaptation,Yunhe Gao,"Models should be able to adapt to unseen data during test-time to avoid performance drops caused by inevitable distribution shifts in real-world deployment scenarios. In this work, we tackle the practical yet challenging test-time adaptation (TTA) problem, where a model adapts to the target domain without accessing the source data. We propose a simple recipe called \textit{Data-efficient Prompt Tuning} (DePT) with two key ingredients. First, DePT plugs visual prompts into the vision Transformer and only tunes these source-initialized prompts during adaptation. We find such parameter-efficient finetuning can efficiently adapt the model representation to the target domain without overfitting to the noise in the learning objective. Second, DePT bootstraps the source representation to the target domain by memory bank-based online pseudo-labeling. A hierarchical self-supervised regularization specially designed for prompts is jointly optimized to alleviate error accumulation during self-training. With much fewer tunable parameters, DePT demonstrates not only state-of-the-art performance on major adaptation benchmarks VisDA-C, ImageNet-C, and DomainNet-126, but also superior data efficiency, i.e., adaptation with only 1\% or 10\% data without much performance degradation compared to 100\% data. In addition, DePT is also versatile to be extended to online or multi-source TTA settings.","{'model': 'tldr@v2.0.0', 'text': 'This work tackles the practical yet challenging test-time adaptation (TTA) problem, where a model adapts to the target domain without accessing the source data, with a simple recipe called Data-efficient Prompt Tuning (DePT).'}",https://arxiv.org/pdf/2210.04831 -Repository-Level Prompt Generation for Large Language Models of Code,Disha Shrivastava,"With the success of large language models (LLMs) of code and their use as code assistants (e.g. Codex used in GitHub Copilot), techniques for introducing domain-specific knowledge in the prompt design process become important. In this work, we propose a framework called Repo-Level Prompt Generator that learns to generate example-specific prompts using prompt proposals. The prompt proposals take context from the entire repository, thereby incorporating both the structure of the repository and the context from other relevant files (e.g. imports, parent class files). Our technique doesn't require any access to the weights of the LLM, making it applicable in cases where we only have black-box access to the LLM. We conduct experiments on the task of single-line code-autocompletion using code repositories taken from Google Code archives. 
We demonstrate that an oracle constructed from our prompt proposals gives a remarkably high relative improvement of 36% over Codex, showing the quality of these proposals. Further, we show that when we train a model to predict a prompt proposal, we can achieve significant performance gains over Codex and other baselines. We release our code, data, and trained checkpoints at: \url{https://github.com/shrivastavadisha/repo_level_prompt_generation}.","{'model': 'tldr@v2.0.0', 'text': 'This work proposes a framework called Repo-Level Prompt Generator that learns to generate example-specific prompts using prompt proposals that take context from the entire repository, thereby incorporating both the structure of the repository and the context from other relevant files.'}",http://arxiv.org/pdf/2206.12839 -GPPT: Graph Pre-training and Prompt Tuning to Generalize Graph Neural Networks,Mingchen Sun,"Despite the promising representation learning of graph neural networks (GNNs), the supervised training of GNNs notoriously requires large amounts of labeled data from each application. An effective solution is to apply the transfer learning in graph: using easily accessible information to pre-train GNNs, and fine-tuning them to optimize the downstream task with only a few labels. Recently, many efforts have been paid to design the self-supervised pretext tasks, and encode the universal graph knowledge among the various applications. However, they rarely notice the inherent training objective gap between the pretext and downstream tasks. This significant gap often requires costly fine-tuning for adapting the pre-trained model to downstream problem, which prevents the efficient elicitation of pre-trained knowledge and then results in poor results. Even worse, the naive pre-training strategy usually deteriorates the downstream task, and damages the reliability of transfer learning in graph data. To bridge the task gap, we propose a novel transfer learning paradigm to generalize GNNs, namely graph pre-training and prompt tuning (GPPT). Specifically, we first adopt the masked edge prediction, the most simplest and popular pretext task, to pre-train GNNs. Based on the pre-trained model, we propose the graph prompting function to modify the standalone node into a token pair, and reformulate the downstream node classification looking the same as edge prediction. The token pair is consisted of candidate label class and node entity. Therefore, the pre-trained GNNs could be applied without tedious fine-tuning to evaluate the linking probability of token pair, and produce the node classification decision. The extensive experiments on eight benchmark datasets demonstrate the superiority of GPPT, delivering an average improvement of 4.29% in few-shot graph analysis and accelerating the model convergence up to 4.32X. The code is available in: https://github.com/MingChen-Sun/GPPT.","{'model': 'tldr@v2.0.0', 'text': 'This work first adopts the masked edge prediction, the most simplest and popular pretext task, to pre-train GNNs, and proposes the graph prompting function to modify the standalone node into a token pair, and reformulate the downstream node classification looking the same as edge prediction.'}", -AdaPrompt: Adaptive Model Training for Prompt-based NLP,Yulong Chen,"Prompt-based learning, with its capability to tackle zero-shot and few-shot NLP tasks, has gained much attention in community. 
The main idea is to bridge the gap between NLP downstream tasks and language modeling (LM), by mapping these tasks into natural language prompts, which are then filled by pre-trained language models (PLMs). However, for prompt learning, there are still two salient gaps between NLP tasks and pretraining. First, prompt information is not necessarily sufficiently present during LM pretraining. Second, task-specific data are not necessarily well represented during pretraining. We address these two issues by proposing AdaPrompt, adaptively retrieving external data for continual pretraining of PLMs by making use of both task and prompt characteristics. In addition, we make use of knowledge in Natural Language Inference models for deriving adaptive verbalizers. Experimental results on five NLP benchmarks show that AdaPrompt can improve over standard PLMs in few-shot settings. In addition, in zero-shot settings, our method outperforms standard prompt-based methods by up to 26.35\% relative error reduction.","{'model': 'tldr@v2.0.0', 'text': 'Experimental results on five NLP benchmarks show that AdaPrompt can improve over standard PLMs in few-shot settings and make use of knowledge in Natural Language Inference models for deriving adaptive verbalizers.'}",https://aclanthology.org/2022.findings-emnlp.448.pdf -Visual Prompt Tuning for Generative Transfer Learning,Kihyuk Sohn,"Learning generative image models from various domains efficiently needs transferring knowledge from an image synthesis model trained on a large dataset. We present a recipe for learning vision transformers by generative knowledge transfer. We base our framework on generative vision transformers representing an image as a sequence of visual tokens with the autoregressive or non-autoregressive transformers. To adapt to a new domain, we employ prompt tuning, which prepends learnable tokens called prompts to the image token sequence and introduces a new prompt design for our task. We study on a variety of visual domains with varying amounts of training images. We show the effectiveness of knowledge transfer and a significantly better image generation quality.11https://github.com/google-research/generative_transfer","{'model': 'tldr@v2.0.0', 'text': 'This work presents a recipe for learning vision transformers by generative knowledge transfer, and shows the effectiveness of knowledge transfer and a significantly better image generation quality.'}",https://arxiv.org/pdf/2210.00990 -Legal Prompt Engineering for Multilingual Legal Judgement Prediction,Dietrich Trautmann,"Legal Prompt Engineering (LPE) or Legal Prompting is a process to guide and assist a large language model (LLM) with performing a natural legal language processing (NLLP) skill. Our goal is to use LPE with LLMs over long legal documents for the Legal Judgement Prediction (LJP) task. We investigate the performance of zero-shot LPE for given facts in case-texts from the European Court of Human Rights (in English) and the Federal Supreme Court of Switzerland (in German, French and Italian). Our results show that zero-shot LPE is better compared to the baselines, but it still falls short compared to current state of the art supervised approaches. 
Nevertheless, the results are important, since there was 1) no explicit domain-specific data used - so we show that the transfer to the legal domain is possible for general-purpose LLMs, and 2) the LLMs were directly applied without any further training or fine-tuning - which in turn saves immensely in terms of additional computational costs.",,https://arxiv.org/pdf/2212.02199 -Prompt Vision Transformer for Domain Generalization,Zangwei Zheng,"Though vision transformers (ViTs) have exhibited impressive ability for representation learning, we empirically find that they cannot generalize well to unseen domains with previous domain generalization algorithms. In this paper, we propose a novel approach DoPrompt based on prompt learning to embed the knowledge of source domains in domain prompts for target domain prediction. Specifically, domain prompts are prepended before ViT input tokens from the corresponding source domain. Each domain prompt learns domain-specific knowledge efficiently since it is optimized only for one domain. Meanwhile, we train a prompt adapter to produce a suitable prompt for each input image based on the learned source domain prompts. At test time, the adapted prompt generated by the prompt adapter can exploit the similarity between the feature of the out-of-domain image and source domains to properly integrate the source domain knowledge. Extensive experiments are conducted on four benchmark datasets. Our approach achieves 1.4% improvements in the averaged accuracy, which is 3.5 times the improvement of the state-of-the-art algorithm with a ViT backbone.","{'model': 'tldr@v2.0.0', 'text': 'A novel approach DoPrompt based on prompt learning to embed the knowledge of source domains in domain prompts for target domain prediction, which achieves 1.4% improvements in the averaged accuracy and 3.5 times the improvement of the state-of-the-art algorithm with a ViT backbone.'}",http://arxiv.org/pdf/2208.08914 -Prompt Tuning for Discriminative Pre-trained Language Models,Yuan Yao,"Recent works have shown promising results of prompt tuning in stimulating pre-trained language models (PLMs) for natural language processing (NLP) tasks. However, to the best of our knowledge, existing works focus on prompt-tuning generative PLMs that are pre-trained to generate target tokens, such as BERT. It is still unknown whether and how discriminative PLMs, e.g., ELECTRA, can be effectively prompt-tuned. In this work, we present DPT, the first prompt tuning framework for discriminative PLMs, which reformulates NLP tasks into a discriminative language modeling problem. Comprehensive experiments on text classification and question answering show that, compared with vanilla fine-tuning, DPT achieves significantly higher performance, and also prevents the unstable problem in tuning large PLMs in both full-set and low-resource settings.","{'model': 'tldr@v2.0.0', 'text': 'DPT is presented, the first prompt tuning framework for discriminative PLMs, which reformulates NLP tasks into a discriminative language modeling problem, and achieves significantly higher performance, and also prevents the unstable problem in tuning large PLMs in both full-set and low-resource settings.'}",https://arxiv.org/pdf/2205.11166 -Incremental Prompting: Episodic Memory Prompt for Lifelong Event Detection,Minqian Liu,"Lifelong event detection aims to incrementally update a model with new event types and data while retaining the capability on previously learned old types.
One critical challenge is that the model would catastrophically forget old types when continually trained on new data. In this paper, we introduce Episodic Memory Prompts (EMP) to explicitly retain the learned task-specific knowledge. Our method adopts continuous prompt for each task and they are optimized to instruct the model prediction and learn event-specific representation. The EMPs learned in previous tasks are carried along with the model in subsequent tasks, and can serve as a memory module that keeps the old knowledge and transferring to new tasks. Experiment results demonstrate the effectiveness of our method. Furthermore, we also conduct a comprehensive analysis of the new and old event types in lifelong learning.","{'model': 'tldr@v2.0.0', 'text': 'This paper introduces Episodic Memory Prompts (EMP) to explicitly retain the learned task-specific knowledge and conduct a comprehensive analysis of the new and old event types in lifelong learning.'}",http://arxiv.org/pdf/2204.07275 -Prompt-Matched Semantic Segmentation,Lingbo Liu,"The objective of this work is to explore how to effectively and efficiently adapt pre-trained visual foundation models to various downstream tasks of semantic segmentation. Previous methods usually fine-tuned the entire networks for each specific dataset, which will be burdensome to store massive parameters of these networks. A few recent works attempted to insert some extra trainable parameters into the frozen networks to learn visual prompts for parameter-efficient tuning. However, these works showed poor generality as they were designed specifically for Transformers. Moreover, using limited information in these schemes, they exhibited a poor capacity to learn beneficial prompts. To alleviate these issues, we propose a novel Stage-wise Prompt-Matched Framework for generic and effective visual prompt tuning. Specifically, to ensure generality, we divide the pre-trained backbone with frozen parameters into multiple stages and perform prompt learning between different stages, which makes the proposed scheme applicable to various architectures of CNN and Transformer. For effective tuning, a lightweight Semantic-aware Prompt Matcher (SPM) is designed to progressively learn reasonable prompts with a recurrent mechanism, guided by the rich information of interim semantic maps. Working as deep matched filter of representation learning, the proposed SPM can well transform the output of the previous stage into a desirable input for the next stage, thus achieving the better matching/stimulating for the pre-trained knowledge. Extensive experiments on four benchmarks demonstrate that the proposed scheme can achieve a promising trade-off between parameter efficiency and performance effectiveness. Our code and models will be released.","{'model': 'tldr@v2.0.0', 'text': 'A novel Stage-wise Prompt-Matched Framework for generic and effective visual prompt tuning that divides the pre-trained backbone with frozen parameters into multiple stages and performs prompt learning between different stages, which makes the proposed scheme applicable to various architectures of CNN and Transformer.'}",https://arxiv.org/pdf/2208.10159 -Multitask Vision-Language Prompt Tuning,Sheng Shen,"Prompt Tuning, conditioning on task-specific learned prompt vectors, has emerged as a data-efficient and parameter-efficient method for adapting large pretrained vision-language models to multiple downstream tasks. 
However, existing approaches usually consider learning prompt vectors for each task independently from scratch, thereby failing to exploit the rich shareable knowledge across different vision-language tasks. In this paper, we propose multitask vision-language prompt tuning (MVLPT), which incorporates cross-task knowledge into prompt tuning for vision-language models. Specifically, (i) we demonstrate the effectiveness of learning a single transferable prompt from multiple source tasks to initialize the prompt for each target task; (ii) we show many target tasks can benefit each other from sharing prompt vectors and thus can be jointly learned via multitask prompt tuning. We benchmark the proposed MVLPT using three representative prompt tuning methods, namely text prompt tuning, visual prompt tuning, and the unified vision-language prompt tuning. Results in 20 vision tasks demonstrate that the proposed approach outperforms all single-task baseline prompt tuning methods, setting the new state-of-the-art on the few-shot ELEVATER benchmarks and cross-task generalization benchmarks. To understand where the cross-task knowledge is most effective, we also conduct a large-scale study on task transferability with 20 vision tasks in 400 combinations for each prompt tuning method. It shows that the most performant MVLPT for each prompt tuning method prefers different task combinations and many tasks can benefit each other, depending on their visual similarity and label similarity. Code is available at https://github.com/sIncerass/MVLPT.","{'model': 'tldr@v2.0.0', 'text': 'This paper demonstrates the effectiveness of learning a single transferable prompt from multiple source tasks to initialize the prompt for each target task and shows many target tasks can benefit each other from sharing prompt vectors and thus can be jointly learned via multitask prompt tuning.'}",https://arxiv.org/pdf/2211.11720 -Memory-assisted prompt editing to improve GPT-3 after deployment,Aman Madaan,"Large LMs such as GPT-3 are powerful, but can commit mistakes that are obvious to humans. For example, GPT-3 would mistakenly interpret ""What word is similar to good?"" to mean a homonym, while the user intended a synonym. Our goal is to effectively correct such errors via user interactions with the system but without retraining, which will be prohibitively costly. We pair GPT-3 with a growing memory of recorded cases where the model misunderstood the user’s intents, along with user feedback for clarification. Such a memory allows our system to produce enhanced prompts for any new query based on the user feedback for error correction on similar cases in the past. On four tasks (two lexical tasks, two advanced ethical reasoning tasks), we show how a (simulated) user can interactively teach a deployed GPT-3, substantially increasing its accuracy over the queries with different kinds of misunderstandings by the GPT-3. Our approach is a step towards the low-cost utility enhancement for very large pre-trained LMs.
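The memory-of-feedback loop in the entry above can be sketched as retrieving the most similar past clarification and prepending it to the prompt. The memory contents and the word-overlap similarity below are illustrative stand-ins, not the paper's retriever.

```python
# Grow a memory of (misunderstood query, user clarification) pairs and
# reuse clarifications for similar new queries.

memory = [
    ("what word is similar to good", "By 'similar', the user means a synonym, not a homonym."),
    ("what rhymes with cat", "The user wants rhyming words, not semantically related ones."),
]

def overlap(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def enhanced_prompt(query: str, threshold: float = 0.4) -> str:
    score, note = max(((overlap(query, q), n) for q, n in memory), default=(0.0, ""))
    prefix = f"Clarification from past feedback: {note}\n" if score >= threshold else ""
    return prefix + query

print(enhanced_prompt("what word is similar to happy"))
```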
1","{'model': 'tldr@v2.0.0', 'text': 'This work pair GPT -3 with a growing memory of recorded cases where the model misunderstood the user’s intents, along with user feedback for clarification, which allows the system to produce enhanced prompts for any new query based on the user feedback on similar cases in the past.'}",https://aclanthology.org/2022.emnlp-main.183.pdf -OpenPrompt: An Open-source Framework for Prompt-learning,Ning Ding,"Prompt-learning has become a new paradigm in modern natural language processing, which directly adapts pre-trained language models (PLMs) to cloze-style prediction, autoregressive modeling, or sequence to sequence generation, resulting in promising performances on various tasks. However, no standard implementation framework of prompt-learning is proposed yet, and most existing prompt- learning codebases, often unregulated, only provide limited implementations for specific scenarios. Since there are many details such as templating strategy, initializing strategy, verbalizing strategy, etc., that need to be considered in prompt-learning, practitioners face impediments to quickly adapting the de-sired prompt learning methods to their applications. In this paper, we present Open- Prompt, a unified easy-to-use toolkit to conduct prompt-learning over PLMs. OpenPrompt is a research-friendly framework that is equipped with efficiency, modularity, and extendibility, and its combinability allows the freedom to combine different PLMs, task for- mats, and prompting modules in a unified paradigm. Users could expediently deploy prompt-learning frameworks and evaluate the generalization of them on different NLP tasks without constraints.","{'model': 'tldr@v2.0.0', 'text': 'Open- Prompt is a unified easy-to-use toolkit to conduct prompt-learning over PLMs equipped with efficiency, modularity, and extendibility, and its combinability allows the freedom to combine different PLMs, task for- mats, and prompting modules in a unified paradigm.'}",https://aclanthology.org/2022.acl-demo.10.pdf -CPT: Colorful Prompt Tuning for Pre-trained Vision-Language Models,Yuan Yao,"Pre-Trained Vision-Language Models (VL-PTMs) have shown promising capabilities in grounding natural language in image data, facilitating a broad variety of cross-modal tasks. However, we note that there exists a significant gap between the objective forms of model pre-training and fine-tuning, resulting in a need for large amounts of labeled data to stimulate the visual grounding capability of VL-PTMs for downstream tasks. To address the challenge, we present Cross-modal Prompt Tuning (CPT, alternatively, Colorful Prompt Tuning), a novel paradigm for tuning VL-PTMs, which reformulates visual grounding into a fill-in-the-blank problem with color-based co-referential markers in image and text, maximally mitigating the gap. In this way, CPT enables strong few-shot and even zero-shot visual grounding capabilities of VL-PTMs. Comprehensive experimental results show that the prompt-tuned VL-PTMs outperform their fine-tuned counterparts by a large margin (e.g., 17.3% absolute accuracy improvement, and 73.8% relative standard deviation reduction on average with one shot in RefCOCO evaluation). 
We make the data and code for this paper publicly available at https://github.com/thunlp/CPT.","{'model': 'tldr@v2.0.0', 'text': 'This work presents Cross-modal Prompt Tuning (CPT), a novel paradigm for tuning VL-PTMs, which reformulates visual grounding into a fill-in-the-blank problem with color-based co-referential markers in image and text, maximally mitigating the gap between model pre-training and fine-tuning.'}", -Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners,Ningyu Zhang,"Large-scale pre-trained language models have contributed significantly to natural language processing by demonstrating remarkable abilities as few-shot learners. However, their effectiveness depends mainly on scaling the model parameters and prompt design, hindering their implementation in most real-world applications. This study proposes a novel pluggable, extensible, and efficient approach named DifferentiAble pRompT (DART), which can convert small language models into better few-shot learners without any prompt engineering. The main principle behind this approach involves reformulating potential natural language processing tasks into the task of a pre-trained language model and differentially optimizing the prompt template as well as the target label with backpropagation. Furthermore, the proposed approach can be: (i) Plugged to any pre-trained language models; (ii) Extended to widespread classification tasks. A comprehensive evaluation of standard NLP tasks demonstrates that the proposed approach achieves a better few-shot performance. Code is available in https://github.com/zjunlp/DART.","{'model': 'tldr@v2.0.0', 'text': 'This study proposes a novel pluggable, extensible, and efficient approach named DifferentiAble pRompT (DART), which can convert small language models into better few-shot learners without any prompt engineering.'}", -Adapting Language Models for Zero-shot Learning by Meta-tuning on Dataset and Prompt Collections,Ruiqi Zhong,"Large pre-trained language models (LMs) such as GPT-3 have acquired a surprising ability to perform zero-shot learning. For example, to classify sentiment without any training examples, we can ""prompt"" the LM with the review and the label description ""Does the user like this movie?"", and ask whether the next word is ""yes"" or ""no"". However, the next word prediction training objective is still misaligned with the target zero-shot learning objective. To address this weakness, we propose meta-tuning, which directly optimizes the zero-shot learning objective by fine-tuning pre-trained language models on a collection of datasets. We focus on classification tasks, and construct the meta-dataset by aggregating 43 existing datasets and annotating 441 label descriptions in a question-answering (QA) format. When evaluated on unseen tasks, meta-tuned models outperform a same-sized QA model and the previous SOTA zero-shot learning system based on natural language inference. Additionally, increasing parameter count from 220M to 770M improves AUC-ROC scores by 6.3%, and we forecast that even larger models would perform better.
Therefore, measuring zero-shot learning performance on language models out-of-the-box might underestimate their true potential, and community-wide efforts on aggregating datasets and unifying their formats can help build models that answer prompts better.","{'model': 'tldr@v2.0.0', 'text': 'Meta-tuned models outperform a same-sized QA model and the previous SOTA zero-shot learning system based on natural language inference on unseen tasks, and community-wide efforts on aggregating datasets and unifying their formats can help build models that answer prompts better.'}",https://aclanthology.org/2021.findings-emnlp.244.pdf -Align and Prompt: Video-and-Language Pre-training with Entity Prompts,Dongxu Li,"Video-and-language pre-training has shown promising improvements on various downstream tasks. Most previous methods capture cross-modal interactions with a standard transformer-based multimodal encoder, not fully addressing the misalignment between unimodal video and text features. Besides, learning finegrained visual-language alignment usually requires off-the-shelf object detectors to provide object information, which is bottlenecked by the detector's limited vocabulary and expensive computation cost. In this paper, we propose Align and Prompt: a new video-and-language pre-training framework (AlPro), which operates on sparsely-sampled video frames and achieves more effective cross-modal alignment without explicit object detectors. First, we introduce a video-text contrastive (VTC) loss to align unimodal video-text features at the instance level, which eases the modeling of cross-modal interactions. Then, we propose a novel visually-grounded pre-training task, prompting entity modeling (PEM), which learns finegrained alignment between visual region and text entity via an entity prompter module in a self-supervised way. Finally, we pretrain the video-and-language transformer models on large webly-source video-text pairs using the proposed VTC and PEM losses as well as two standard losses of masked language modeling (MLM) and video-text matching (VTM). The resulting pre-trained model achieves state-of-the-art performance on both text-video retrieval and videoQA, outperforming prior work by a substantial margin. Implementation and pre-trained models are available at https://github.com/salesforce/ALPRO.","{'model': 'tldr@v2.0.0', 'text': 'This paper proposes Align and Prompt: a new video-and-language pre-training framework (AlPro), which operates on sparsely-sampled video frames and achieves more effective cross-modal alignment without explicit object detectors.'}",https://arxiv.org/pdf/2112.09583 -Investigating Prompt Engineering in Diffusion Models,Sam Witteveen,"With the spread of the use of Text2Img diffusion models such as DALL-E 2, Imagen, Mid Journey and Stable Diffusion, one challenge that artists face is selecting the right prompts to achieve the desired artistic output. We present techniques for measuring the effect that specific words and phrases in prompts have, and (in the Appendix) present guidance on the selection of prompts to produce desired effects.",,https://arxiv.org/pdf/2211.15462 -Prompt-Learning for Fine-Grained Entity Typing,Ning Ding,"As an effective approach to tune pre-trained language models (PLMs) for specific tasks, prompt-learning has recently attracted much attention from researchers.
By using \textit{cloze}-style language prompts to stimulate the versatile knowledge of PLMs, prompt-learning can achieve promising results on a series of NLP tasks, such as natural language inference, sentiment classification, and knowledge probing. In this work, we investigate the application of prompt-learning on fine-grained entity typing in fully supervised, few-shot and zero-shot scenarios. We first develop a simple and effective prompt-learning pipeline by constructing entity-oriented verbalizers and templates and conducting masked language modeling. Further, to tackle the zero-shot regime, we propose a self-supervised strategy that carries out distribution-level optimization in prompt-learning to automatically summarize the information of entity types. Extensive experiments on three fine-grained entity typing benchmarks (with up to 86 classes) under fully supervised, few-shot and zero-shot settings show that prompt-learning methods significantly outperform fine-tuning baselines, especially when the training data is insufficient.","{'model': 'tldr@v2.0.0', 'text': 'This work develops a simple and effective prompt-learning pipeline, and proposes a self-supervised strategy that carries out distribution-level optimization in prompt- learning to automatically summarize the information of entity types in the zero-shot regime.'}",https://aclanthology.org/2022.findings-emnlp.512.pdf -Template-free Prompt Tuning for Few-shot NER,Ruotian Ma,"Prompt-based methods have been successfully applied in sentence-level few-shot learning tasks, mostly owing to the sophisticated design of templates and label words. However, when applied to token-level labeling tasks such as NER, it would be time-consuming to enumerate the template queries over all potential entity spans. In this work, we propose a more elegant method to reformulate NER tasks as LM problems without any templates. Specifically, we discard the template construction process while maintaining the word prediction paradigm of pre-training models to predict a class-related pivot word (or label word) at the entity position. Meanwhile, we also explore principled ways to automatically search for appropriate label words that the pre-trained models can easily adapt to. While avoiding the complicated template-based process, the proposed LM objective also reduces the gap between different objectives used in pre-training and fine-tuning, thus it can better benefit the few-shot performance. Experimental results demonstrate the effectiveness of the proposed method over bert-tagger and template-based method under few-shot settings. 
Moreover, the decoding speed of the proposed method is up to 1930.12 times faster than the template-based method.","{'model': 'tldr@v2.0.0', 'text': 'This work proposes a more elegant method to reformulate NER tasks as LM problems without any templates by discarding the template construction process while maintaining the word prediction paradigm of pre-training models to predict a class-related pivot word (or label word) at the entity position.'}",https://aclanthology.org/2022.naacl-main.420.pdf -A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models,Woojeong Jin,"Large pre-trained vision-language (VL) models can learn a new task with a handful of examples and generalize to a new task without fine-tuning. However, these VL models are hard to deploy for real-world applications due to their impractically huge sizes and slow inference speed. To solve this limitation, we study prompt-based low-resource learning of VL tasks with our proposed method, FewVLM, relatively smaller than recent few-shot learners. For FewVLM, we pre-train a sequence-to-sequence transformer model with prefix language modeling (PrefixLM) and masked language modeling (MaskedLM). Furthermore, we analyze the effect of diverse prompts for few-shot tasks. Experimental results on VQA show that FewVLM with prompt-based learning outperforms Frozen which is 31x larger than FewVLM by 18.2% point and achieves comparable results to a 246x larger model, PICa. In our analysis, we observe that (1) prompts significantly affect zero-shot performance but marginally affect few-shot performance, (2) models with noisy prompts learn as quickly as hand-crafted prompts given larger training data, and (3) MaskedLM helps VQA tasks while PrefixLM boosts captioning performance. Our code is publicly available at https://github.com/woojeongjin/FewVLM","{'model': 'tldr@v2.0.0', 'text': 'This work studies prompt-based low-resource learning of VL tasks with a sequence-to-sequence transformer model with prefix language modeling and masked language modeling, and observes that models with noisy prompts learn as quickly as hand-crafted prompts given larger training data.'}",https://aclanthology.org/2022.acl-long.197.pdf -Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning,Colin Wei,"Pretrained language models have achieved state-of-the-art performance when adapted to a downstream NLP task. However, theoretical analysis of these models is scarce and challenging since the pretraining and downstream tasks can be very different. We propose an analysis framework that links the pretraining and downstream tasks with an underlying latent variable generative model of text -- the downstream classifier must recover a function of the posterior distribution over the latent variables. We analyze head tuning (learning a classifier on top of the frozen pretrained model) and prompt tuning in this setting. The generative model in our analysis is either a Hidden Markov Model (HMM) or an HMM augmented with a latent memory component, motivated by long-term dependencies in natural language. We show that 1) under certain non-degeneracy conditions on the HMM, simple classification heads can solve the downstream task, 2) prompt tuning obtains downstream guarantees with weaker non-degeneracy conditions, and 3) our recovery guarantees for the memory-augmented HMM are stronger than for the vanilla HMM because task-relevant information is easier to recover from the long-term memory.
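To make the latent-variable analysis above concrete: for a toy two-state HMM, the posterior over the hidden state given the observed tokens (the quantity the entry says a downstream head must recover) follows from the forward algorithm. All parameters below are made up.

```python
# Forward-algorithm filtering posterior p(state_t | obs_1..t) for a
# toy 2-state, 2-symbol HMM.

T = [[0.8, 0.2],   # transition probabilities: state i -> state j
     [0.3, 0.7]]
E = [[0.9, 0.1],   # emission probabilities: state i -> symbol k
     [0.2, 0.8]]
PI = [0.5, 0.5]    # initial state distribution

def forward_posterior(obs):
    alpha = [PI[s] * E[s][obs[0]] for s in range(2)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * T[i][j] for i in range(2)) * E[j][o]
                 for j in range(2)]
    z = sum(alpha)
    return [a / z for a in alpha]

print(forward_posterior([0, 0, 1]))  # posterior over the latent state
```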
Experiments on synthetically generated data from HMMs back our theoretical findings.","{'model': 'tldr@v2.0.0', 'text': 'An analysis framework is proposed that links the pretraining and downstream tasks with an underlying latent variable generative model of text -- the downstream classifier must recover a function of the posterior distribution over the latent variables.'}", -On Transferability of Prompt Tuning for Natural Language Processing,Yusheng Su,"Prompt tuning (PT) is a promising parameter-efficient method to utilize extremely large pre-trained language models (PLMs), which can achieve comparable performance to full-parameter fine-tuning by only tuning a few soft prompts. However, PT requires much more training time than fine-tuning. Intuitively, knowledge transfer can help to improve the efficiency. To explore whether we can improve PT via prompt transfer, we empirically investigate the transferability of soft prompts across different downstream tasks and PLMs in this work. We find that (1) in zero-shot setting, trained soft prompts can effectively transfer to similar tasks on the same PLM and also to other PLMs with a cross-model projector trained on similar tasks; (2) when used as initialization, trained soft prompts of similar tasks and projected prompts of other PLMs can significantly accelerate training and also improve the performance of PT. Moreover, to explore what decides prompt transferability, we investigate various transferability indicators and find that the overlapping rate of activated neurons strongly reflects the transferability, which suggests how the prompts stimulate PLMs is essential. Our findings show that prompt transfer is promising for improving PT, and further research shall focus more on prompts’ stimulation to PLMs. The source code can be obtained from https://github.com/thunlp/Prompt-Transferability.","{'model': 'tldr@v2.0.0', 'text': 'It is found that the overlapping rate of activated neurons strongly reflects the transferability, which suggests how the prompts stimulate PLMs is essential, and that prompt transfer is promising for improving PT.'}",https://aclanthology.org/2022.naacl-main.290.pdf -PADA: Example-based Prompt Learning for on-the-fly Adaptation to Unseen Domains,Eyal Ben-David,"Natural Language Processing algorithms have made incredible progress, but they still struggle when applied to out-of-distribution examples. We address a challenging and underexplored version of this domain adaptation problem, where an algorithm is trained on several source domains, and then applied to examples from unseen domains that are unknown at training time. Particularly, no examples, labeled or unlabeled, or any other knowledge about the target domain are available to the algorithm at training time. We present PADA: An example-based autoregressive Prompt learning algorithm for on-the-fly Any-Domain Adaptation, based on the T5 language model. Given a test example, PADA first generates a unique prompt for it and then, conditioned on this prompt, labels the example with respect to the NLP prediction task. PADA is trained to generate a prompt that is a token sequence of unrestricted length, consisting of Domain Related Features (DRFs) that characterize each of the source domains. Intuitively, the generated prompt is a unique signature that maps the test example to a semantic space spanned by the source domains. 
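The transferability indicator highlighted in the On Transferability entry above, the overlapping rate of activated neurons, can be illustrated with a toy frozen layer. The random weights and prompt vectors below are stand-ins; nothing here touches a real PLM.

```python
import random

random.seed(2)

DIM, HIDDEN = 8, 32
# Frozen "PLM layer": random weights standing in for pre-trained parameters.
W = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(HIDDEN)]

def activated(prompt_vec):
    """Indices of neurons whose pre-activation is positive (ReLU 'on')."""
    return {j for j in range(HIDDEN)
            if sum(W[j][i] * prompt_vec[i] for i in range(DIM)) > 0}

def overlap_rate(p, q):
    a, b = activated(p), activated(q)
    return len(a & b) / max(len(a | b), 1)

task_a = [random.gauss(0, 1) for _ in range(DIM)]
task_b = [x + random.gauss(0, 0.1) for x in task_a]   # a similar task's prompt
task_c = [random.gauss(0, 1) for _ in range(DIM)]     # an unrelated task's prompt

print("similar tasks:  ", round(overlap_rate(task_a, task_b), 2))
print("unrelated tasks:", round(overlap_rate(task_a, task_c), 2))
```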
In experiments with 3 tasks (text classification and sequence tagging), for a total of 14 multi-source adaptation scenarios, PADA substantially outperforms strong baselines.","{'model': 'tldr@v2.0.0', 'text': 'This work presents PADA: An example-based autoregressive Prompt learning algorithm for on-the-fly Any-Domain Adaptation, based on the T5 language model, which substantially outperforms strong baselines.'}",https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl_a_00468/2008061/tacl_a_00468.pdf -Toxicity Detection with Generative Prompt-based Inference,Yau-Shian Wang,"Due to the subtleness, implicity, and different possible interpretations perceived by different people, detecting undesirable content from text is a nuanced difficulty. It is a long-known risk that language models (LMs), once trained on corpus containing undesirable content, have the power to manifest biases and toxicity. However, recent studies imply that, as a remedy, LMs are also capable of identifying toxic content without additional fine-tuning. Prompt-methods have been shown to effectively harvest this surprising self-diagnosing capability. However, existing prompt-based methods usually specify an instruction to a language model in a discriminative way. In this work, we explore the generative variant of zero-shot prompt-based toxicity detection with comprehensive trials on prompt engineering. We evaluate on three datasets with toxicity labels annotated on social media posts. Our analysis highlights the strengths of our generative classification approach both quantitatively and qualitatively. Interesting aspects of self-diagnosis and its ethical implications are discussed.","{'model': 'tldr@v2.0.0', 'text': 'This work explores the generative variant of zero-shot prompt-based toxicity detection with comprehensive trials on prompt engineering on three datasets with toxicity labels annotated on social media posts and highlights the strengths of this approach both quantitatively and qualitatively.'}",https://arxiv.org/pdf/2205.12390 -Few-Shot Bot: Prompt-Based Learning for Dialogue Systems,Andrea Madotto,"Learning to converse using only a few examples is a great challenge in conversational AI. The current best conversational models, which are either good chit-chatters (e.g., BlenderBot) or goal-oriented systems (e.g., MinTL), are language models (LMs) fine-tuned on large conversational datasets. Training these models is expensive, both in terms of computational resources and time, and it is hard to keep them up to date with new conversational skills. A simple yet unexplored solution is prompt-based few-shot learning (Brown et al., 2020) which does not require gradient-based fine-tuning but instead uses a few examples in the LM context as the only source of learning. In this paper, we explore prompt-based few-shot learning in dialogue tasks. We benchmark LMs of different sizes in nine response generation tasks, which include four knowledge-grounded tasks, a task-oriented generations task, three open-chat tasks, and controlled stylistic generation, and five conversational parsing tasks, which include dialogue state tracking, graph path generation, persona information extraction, document retrieval, and internet query generation. The current largest released LM (GPT-J-6B) using prompt-based few-shot learning, and thus requiring no training, achieves competitive performance to fully trained state-of-the-art models.
Moreover, we propose a novel prompt-based few-shot classifier, that also does not require any fine-tuning, to select the most appropriate prompt given a dialogue history. Finally, by combining the power of prompt-based few-shot learning and a Skill Selector, we create an end-to-end chatbot named the Few-Shot Bot (FSB), which automatically selects the most appropriate conversational skill, queries different knowledge bases or the internet, and uses the retrieved knowledge to generate a human-like response, all using only few dialogue examples per skill.","{'model': 'tldr@v2.0.0', 'text': 'An end-to-end chatbot named the Few-Shot Bot is created, which automatically selects the most appropriate conversational skill, queries different knowledge bases or the internet, and uses the retrieved knowledge to generate a human-like response, all using only few dialogue examples per skill.'}", -NSP-BERT: A Prompt-based Few-Shot Learner through an Original Pre-training Task —— Next Sentence Prediction,Yi Sun,"Using prompts to utilize language models to perform various downstream tasks, also known as prompt-based learning or prompt-learning, has lately gained significant success in comparison to the pre-train and fine-tune paradigm. Nonetheless, virtually most prompt-based methods are token-level such as PET based on mask language model (MLM). In this paper, we attempt to accomplish several NLP tasks in the zero-shot and few-shot scenarios using a BERT original pre-training task abandoned by RoBERTa and other models——Next Sentence Prediction (NSP). Unlike token-level techniques, our sentence-level prompt-based method NSP-BERT does not need to fix the length of the prompt or the position to be predicted, allowing it to handle tasks such as entity linking with ease. NSP-BERT can be applied to a variety of tasks based on its properties. We present an NSP-tuning approach with binary cross-entropy loss for single-sentence classification tasks that is competitive compared to PET and EFL. By continuing to train BERT on RoBERTa’s corpus, the model’s performance improved significantly, which indicates that the pre-training corpus is another important determinant of few-shot besides model size and prompt method.","{'model': 'tldr@v2.0.0', 'text': 'This paper presents an NSP-tuning approach with binary cross-entropy loss for single-sentence classification tasks that is competitive compared to PET and EFL and indicates that the pre-training corpus is another important determinant of few-shot besides model size and prompt method.'}", -LightNER: A Lightweight Generative Framework with Prompt-guided Attention for Low-resource NER,Xiang Chen,"NER in low-resource domains suffers from insufficient training data. Existing transfer learning approaches for lowresource NER usually have the challenge that the target domain has different sets of entity categories compared with a resource-rich source domain, which can be concluded as class transfer and domain transfer problems. In this paper, we propose a lightweight generative framework with prompt-guided attention for low-resource NER (LightNER). Concretely, instead of tackling the problem by training label-specific discriminative classifiers, we convert sequence labeling to generate the entity pointer index sequence and entity categories without any label-specific classifiers, which can address the class transfer issue. 
We further propose prompt-guided attention by incorporating continuous prompts into the self-attention layer to re-modulate the attention and adapt pretrained weights. Note that we only tune those continuous prompts with the whole parameter of the pre-trained language model fixed, thus, making our approach lightweight and flexible for low-resource scenarios and can better transfer knowledge across domains. Experimental results show that by tuning only 0.16% of the parameters, LightNER can obtain comparable performance in the standard setting and outperform baselines in low-resource settings.","{'model': 'tldr@v2.0.0', 'text': 'A lightweight generative framework with prompt-guided attention for low-resource NER (LightNER), which converts sequence labeling to generate the entity pointer index sequence and entity categories without any label-specific classifiers, which can address the class transfer issue.'}",
-PADA: A Prompt-based Autoregressive Approach for Adaptation to Unseen Domains,Eyal Ben-David,"Natural Language Processing algorithms have made incredible progress recently, but they still struggle when applied to out-of-distribution examples. In this paper, we address a very challenging and previously underexplored version of this domain adaptation problem. In our setup an algorithm is trained on several source domains, and then applied to examples from an unseen domain that is unknown at training time. Particularly, no examples, labeled or unlabeled, or any other knowledge about the target domain are available to the algorithm at training time. We present PADA: A Prompt-based Autoregressive Domain Adaptation algorithm, based on the T5 model. Given a test example, PADA first generates a unique prompt and then, conditioned on this prompt, labels the example with respect to the NLP task. The prompt is a sequence of unrestricted length, consisting of pre-defined Domain Related Features (DRFs) that characterize each of the source domains. Intuitively, the prompt is a unique signature that maps the test example to the semantic space spanned by the source domains. In experiments with two tasks: Rumour Detection and Multi-Genre Natural Language Inference (MNLI), for a total of 10 multi-source adaptation scenarios, PADA strongly outperforms state-of-the-art approaches and additional strong baselines. 1","{'model': 'tldr@v2.0.0', 'text': 'This paper presents PADA: A Prompt-based Autoregressive Domain Adaptation algorithm, based on the T5 model, which strongly outperforms state-of-the-art approaches and additional strong baselines in multi-source adaptation scenarios.'}",
-A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT,Jules White,"Prompt engineering is an increasingly important skill set needed to converse effectively with large language models (LLMs), such as ChatGPT. Prompts are instructions given to an LLM to enforce rules, automate processes, and ensure specific qualities (and quantities) of generated output. Prompts are also a form of programming that can customize the outputs and interactions with an LLM. This paper describes a catalog of prompt engineering techniques presented in pattern form that have been applied to solve common problems when conversing with LLMs. Prompt patterns are a knowledge transfer method analogous to software patterns since they provide reusable solutions to common problems faced in a particular context, i.e., output generation and interaction when working with LLMs.
This paper provides the following contributions to research on prompt engineering that apply LLMs to automate software development tasks. First, it provides a framework for documenting patterns for structuring prompts to solve a range of problems so that they can be adapted to different domains. Second, it presents a catalog of patterns that have been applied successfully to improve the outputs of LLM conversations. Third, it explains how prompts can be built from multiple patterns and illustrates prompt patterns that benefit from combination with other prompt patterns.","{'model': 'tldr@v2.0.0', 'text': 'A catalog of prompt engineering techniques presented in pattern form that have been applied to solve common problems when conversing with LLMs to improve the outputs of LLM conversations is described.'}",http://arxiv.org/pdf/2302.11382 -Make-An-Audio: Text-To-Audio Generation with Prompt-Enhanced Diffusion Models,Rongjie Huang,"Large-scale multimodal generative modeling has created milestones in text-to-image and text-to-video generation. Its application to audio still lags behind for two main reasons: the lack of large-scale datasets with high-quality text-audio pairs, and the complexity of modeling long continuous audio data. In this work, we propose Make-An-Audio with a prompt-enhanced diffusion model that addresses these gaps by 1) introducing pseudo prompt enhancement with a distill-then-reprogram approach, it alleviates data scarcity with orders of magnitude concept compositions by using language-free audios; 2) leveraging spectrogram autoencoder to predict the self-supervised audio representation instead of waveforms. Together with robust contrastive language-audio pretraining (CLAP) representations, Make-An-Audio achieves state-of-the-art results in both objective and subjective benchmark evaluation. Moreover, we present its controllability and generalization for X-to-Audio with""No Modality Left Behind"", for the first time unlocking the ability to generate high-definition, high-fidelity audios given a user-defined modality input. Audio samples are available at https://Text-to-Audio.github.io","{'model': 'tldr@v2.0.0', 'text': 'This work proposes Make-An- audio with a prompt-enhanced diffusion model that alleviates data scarcity with orders of magnitude concept compositions by using language-free audios, and presents its controllability and generalization for X-to-Audio with ""No Modality Left Behind"", for the first time unlocking the ability to generate high-definition, high-fidelity audios given a user-defined modality input.'}",http://arxiv.org/pdf/2301.12661 -Why Johnny Can’t Prompt: How Non-AI Experts Try (and Fail) to Design LLM Prompts,J. Zamfirescu-Pereira,"Pre-trained large language models (“LLMs”) like GPT-3 can engage in fluent, multi-turn instruction-taking out-of-the-box, making them attractive materials for designing natural language interactions. Using natural language to steer LLM outputs (“prompting”) has emerged as an important design technique potentially accessible to non-AI-experts. Crafting effective prompts can be challenging, however, and prompt-based interactions are brittle. Here, we explore whether non-AI-experts can successfully engage in “end-user prompt engineering” using a design probe—a prototype LLM-based chatbot design tool supporting development and systematic evaluation of prompting strategies. 
Ultimately, our probe participants explored prompt designs opportunistically, not systematically, and struggled in ways echoing end-user programming systems and interactive machine learning systems. Expectations stemming from human-to-human instructional experiences, and a tendency to overgeneralize, were barriers to effective prompt design. These findings have implications for non-AI-expert-facing LLM-based tool design and for improving LLM-and-prompt literacy among programmers and the public, and present opportunities for further research.","{'model': 'tldr@v2.0.0', 'text': 'This work explores whether non-AI-experts can successfully engage in “end-user prompt engineering” using a design probe—a prototype LLM-based chatbot design tool supporting development and systematic evaluation of prompting strategies.'}",https://dl.acm.org/doi/pdf/10.1145/3544548.3581388 -The Power of Prompt Tuning for Low-Resource Semantic Parsing,Nathan Schucher,"Prompt tuning has recently emerged as an effective method for adapting pre-trained language models to a number of language understanding and generation tasks. In this paper, we investigate prompt tuning for semantic parsing—the task of mapping natural language utterances onto formal meaning representations. On the low-resource splits of Overnight and TOPv2, we find that a prompt tuned T5-xl significantly outperforms its fine-tuned counterpart, as well as strong GPT-3 and BART baselines. We also conduct ablation studies across different model scales and target representations, finding that, with increasing model scale, prompt tuned T5 models improve at generating target representations that are far from the pre-training distribution.","{'model': 'tldr@v2.0.0', 'text': 'This paper investigates prompt tuning for semantic parsing—the task of mapping natural language utterances onto formal meaning representations, and finds that a prompt tuned T5-xl significantly outperforms its fine-tuned counterpart, as well as strong GPT-3 and BART baselines.'}",https://aclanthology.org/2022.acl-short.17.pdf -The Biases of Pre-Trained Language Models: An Empirical Study on Prompt-Based Sentiment Analysis and Emotion Detection,Rui Mao,"Thanks to the breakthrough of large-scale pre-trained language model (PLM) technology, prompt-based classification tasks, e.g., sentiment analysis and emotion detection, have raised increasing attention. Such tasks are formalized as masked language prediction tasks which are in line with the pre-training objects of most language models. Thus, one can use a PLM to infer the masked words in a downstream task, then obtaining label predictions with manually defined label-word mapping templates. Prompt-based affective computing takes the advantages of both neural network modeling and explainable symbolic representations. However, there still remain many unclear issues related to the mechanisms of PLMs and prompt-based classification. We conduct a systematic empirical study on prompt-based sentiment analysis and emotion detection to study the biases of PLMs towards affective computing. 
We find that PLMs are biased in sentiment analysis and emotion detection tasks with respect to the number of label classes, emotional label-word selections, prompt templates and positions, and the word forms of emotion lexicons.","{'model': 'tldr@v2.0.0', 'text': 'It is found that PLMs are biased in sentiment analysis and emotion detection tasks with respect to the number of label classes, emotional label-word selections, prompt templates and positions, and the word forms of emotion lexicons.'}", -AdaPrompt: Adaptive Prompt-based Finetuning for Relation Extraction,Xiang Chen,"In this paper, we reformulate the relation extraction task as mask language modeling and propose a novel adaptive prompt-based finetuning approach. We propose an adaptive label words selection mechanism that scatters the relation label into variable number of label tokens to handle the complex multiple label space. We further introduce an auxiliary entity discriminator object to encourage the model to focus on context representation learning. Extensive experiments on benchmark datasets demonstrate that our approach can achieve better performance on both the few-shot and supervised setting1.","{'model': 'tldr@v2.0.0', 'text': 'An adaptive label words selection mechanism that scatters the relation label into variable number of label tokens to handle the complex multiple label space and introduces an auxiliary entity discriminator object to encourage the model to focus on context representation learning.'}", -Prompt Waywardness: The Curious Case of Discretized Interpretation of Continuous Prompts,Daniel Khashabi,"Fine-tuning continuous prompts for target tasks has recently emerged as a compact alternative to full model fine-tuning. Motivated by these promising results, we investigate the feasibility of extracting a discrete (textual) interpretation of continuous prompts that is faithful to the problem they solve. In practice, we observe a “wayward” behavior between the task solved by continuous prompts and their nearest neighbor discrete projections: We can find continuous prompts that solve a task while being projected to an arbitrary text (e.g., definition of a different or even a contradictory task), while being within a very small (2%) margin of the best continuous prompt of the same size for the task. We provide intuitions behind this odd and surprising behavior, as well as extensive empirical analyses quantifying the effect of various parameters. For instance, for larger model sizes we observe higher waywardness, i.e, we can find prompts that more closely map to any arbitrary text with a smaller drop in accuracy. 
These findings have important implications relating to the difficulty of faithfully interpreting continuous prompts and their generalization across models and tasks, providing guidance for future progress in prompting language models.","{'model': 'tldr@v2.0.0', 'text': 'This work investigates the feasibility of extracting a discrete (textual) interpretation of continuous prompts that is faithful to the problem they solve, and observes a “wayward” behavior between the task solved by continuous prompts and their nearest neighbor discrete projections.'}",https://aclanthology.org/2022.naacl-main.266.pdf -SentiPrompt: Sentiment Knowledge Enhanced Prompt-Tuning for Aspect-Based Sentiment Analysis,Chengxi Li,"Aspect-based sentiment analysis (ABSA) is an emerging fine-grained sentiment analysis task that aims to extract aspects, classify corresponding sentiment polarities and find opinions as the causes of sentiment. The latest research tends to solve the ABSA task in a unified way with end-to-end frameworks. Yet, these frameworks get fine-tuned from downstream tasks without any task-adaptive modification. Specifically, they do not use task-related knowledge well or explicitly model relations between aspect and opinion terms, hindering them from better performance. In this paper, we propose SentiPrompt to use sentiment knowledge enhanced prompts to tune the language model in the unified framework. We inject sentiment knowledge regarding aspects, opinions, and polarities into prompt and explicitly model term relations via constructing consistency and polarity judgment templates from the ground truth triplets. Experimental results demonstrate that our approach can outperform strong baselines on Triplet Extraction, Pair Extraction, and Aspect Term Extraction with Sentiment Classification by a notable margin.","{'model': 'tldr@v2.0.0', 'text': 'SentiPrompt is proposed to use sentiment knowledge enhanced prompts to tune the language model in the unified framework and inject sentiment knowledge regarding aspects, opinions, and polarities into prompt and explicitly model term relations via constructing consistency and polarity judgment templates from the ground truth triplets.'}", -Exploring Prompt-based Few-shot Learning for Grounded Dialog Generation,Chujie Zheng,"Dialog models can be greatly strengthened through grounding on various external information, but grounded dialog corpora are usually not naturally accessible. In this work, we focus on the few-shot learning for grounded dialog generation (GDG). We first propose a simple prompting method for GDG tasks, where different constructs of model input, such as the grounding source and the conversation context, are distinguished through continuous or discrete prompts. On three typical GDG tasks, we empirically demonstrate and analyze in-depth the effectiveness of our method. We then conduct extensive experiments to thoroughly investigate how our prompting method works with different pre-trained models. We show that prompted language models perform superiorly to conversational models, and further analyze various factors that influence the effects of prompting. 
Overall, our work introduces a prompt-based perspective to the few-shot learning for GDG tasks, and provides valuable findings and insights for future research.","{'model': 'tldr@v2.0.0', 'text': 'This work proposes a simple prompting method for GDG tasks, where different constructs of model input, such as the grounding source and the conversation context, are distinguished through continuous or discrete prompts.'}", -Automated Cross-prompt Scoring of Essay Traits,Robert Ridley,"The majority of current research in Automated Essay Scoring (AES) focuses on prompt-specific scoring of either the overall quality of an essay or the quality with regards to certain traits. In real-world applications obtaining labelled data for a target essay prompt is often expensive or unfeasible, requiring the AES system to be able to perform well when predicting scores for essays from unseen prompts. As a result, some recent research has been dedicated to cross-prompt AES. However, this line of research has thus far only been concerned with holistic, overall scoring, with no exploration into the scoring of different traits. As users of AES systems often require feedback with regards to different aspects of their writing, trait scoring is a necessary component of an effective AES system. Therefore, to address this need, we introduce a new task named Automated Cross-prompt Scoring of Essay Traits, which requires the model to be trained solely on non-target-prompt essays and to predict the holistic, overall score as well as scores for a number of specific traits for target-prompt essays. This task challenges the model's ability to generalize in order to score essays from a novel domain as well as its ability to represent the quality of essays from multiple different aspects. In addition, we introduce a new, innovative approach which builds on top of a state-of-the-art method for cross-prompt AES. Our method utilizes a trait-attention mechanism and a multi-task architecture that leverages the relationships between each trait to simultaneously predict the overall score and the score of each individual trait. We conduct extensive experiments on the widely used ASAP and ASAP++ datasets and demonstrate that our approach is able to outperform leading prompt-specific trait scoring and cross-prompt AES methods.","{'model': 'tldr@v2.0.0', 'text': 'A new task named Automated Cross- Prompt Scoring of Essay Traits, which requires the model to be trained solely on non-target-prompt essays and to predict the holistic, overall score as well as scores for a number of specific traits for target-promPT essays, is introduced.'}",https://ojs.aaai.org/index.php/AAAI/article/download/17620/17427 -"Inclusive, prompt and non-prompt ${\rm J}/\psi$ production at midrapidity in p$-$Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV",Alice Collaboration,,,https://link.springer.com/content/pdf/10.1007/JHEP06(2022)011.pdf -MasterKey: Automated Jailbreak Across Multiple Large Language Model Chatbots,Gelei Deng,"Large Language Models (LLMs) have revolutionized Artificial Intelligence (AI) services due to their exceptional proficiency in understanding and generating human-like text. LLM chatbots, in particular, have seen widespread adoption, transforming human-machine interactions. However, these LLM chatbots are susceptible to""jailbreak""attacks, where malicious users manipulate prompts to elicit inappropriate or sensitive responses, contravening service policies. 
Despite existing attempts to mitigate such threats, our research reveals a substantial gap in our understanding of these vulnerabilities, largely due to the undisclosed defensive measures implemented by LLM service providers. In this paper, we present Jailbreaker, a comprehensive framework that offers an in-depth understanding of jailbreak attacks and countermeasures. Our work makes a dual contribution. First, we propose an innovative methodology inspired by time-based SQL injection techniques to reverse-engineer the defensive strategies of prominent LLM chatbots, such as ChatGPT, Bard, and Bing Chat. This time-sensitive approach uncovers intricate details about these services' defenses, facilitating a proof-of-concept attack that successfully bypasses their mechanisms. Second, we introduce an automatic generation method for jailbreak prompts. Leveraging a fine-tuned LLM, we validate the potential of automated jailbreak generation across various commercial LLM chatbots. Our method achieves a promising average success rate of 21.58%, significantly outperforming the effectiveness of existing techniques. We have responsibly disclosed our findings to the concerned service providers, underscoring the urgent need for more robust defenses. Jailbreaker thus marks a significant step towards understanding and mitigating jailbreak threats in the realm of LLM chatbots.","{'model': 'tldr@v2.0.0', 'text': 'Jailbreaker is presented, a comprehensive framework that offers an in-depth understanding of jailbreak attacks and countermeasures, and an automatic generation method for jailbreak prompts is introduced, leveraging a fine-tuned LLM to validate the potential of automated jailbreak generation across various commercial LLM chatbots.'}",
-Visual Adversarial Examples Jailbreak Aligned Large Language Models,Xiangyu Qi,"Recently, there has been a surge of interest in integrating vision into Large Language Models (LLMs), exemplified by Visual Language Models (VLMs) such as Flamingo and GPT-4. This paper sheds light on the security and safety implications of this trend. First, we underscore that the continuous and high-dimensional nature of the visual input makes it a weak link against adversarial attacks, representing an expanded attack surface of vision-integrated LLMs. Second, we highlight that the versatility of LLMs also presents visual attackers with a wider array of achievable adversarial objectives, extending the implications of security failures beyond mere misclassification. As an illustration, we present a case study in which we exploit visual adversarial examples to circumvent the safety guardrail of aligned LLMs with integrated vision. Intriguingly, we discover that a single visual adversarial example can universally jailbreak an aligned LLM, compelling it to heed a wide range of harmful instructions (that it otherwise would not) and generate harmful content that transcends the narrow scope of a `few-shot' derogatory corpus initially employed to optimize the adversarial example. Our study underscores the escalating adversarial risks associated with the pursuit of multimodality. Our findings also connect the long-studied adversarial vulnerabilities of neural networks to the nascent field of AI alignment.
The presented attack suggests a fundamental adversarial challenge for AI alignment, especially in light of the emerging trend toward multimodality in frontier foundation models.","{'model': 'tldr@v2.0.0', 'text': 'A case study in which a single visual adversarial example can universally jailbreak an aligned LLM, compelling it to heed a wide range of harmful instructions that it otherwise would not and suggest a fundamental adversarial challenge for AI alignment, especially in light of the emerging trend toward multimodality in frontier foundation models.'}", -GPTFUZZER : Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts,Jiahao Yu,"Large language models (LLMs) have recently experienced tremendous popularity and are widely used from casual conversations to AI-driven programming. However, despite their considerable success, LLMs are not entirely reliable and can give detailed guidance on how to conduct harmful or illegal activities. While safety measures can reduce the risk of such outputs, adversarial jailbreak attacks can still exploit LLMs to produce harmful content. These jailbreak templates are typically manually crafted, making large-scale testing challenging. In this paper, we introduce GPTFuzz, a novel black-box jailbreak fuzzing framework inspired by the AFL fuzzing framework. Instead of manual engineering, GPTFuzz automates the generation of jailbreak templates for red-teaming LLMs. At its core, GPTFuzz starts with human-written templates as initial seeds, then mutates them to produce new templates. We detail three key components of GPTFuzz: a seed selection strategy for balancing efficiency and variability, mutate operators for creating semantically equivalent or similar sentences, and a judgment model to assess the success of a jailbreak attack. We evaluate GPTFuzz against various commercial and open-source LLMs, including ChatGPT, LLaMa-2, and Vicuna, under diverse attack scenarios. Our results indicate that GPTFuzz consistently produces jailbreak templates with a high success rate, surpassing human-crafted templates. Remarkably, GPTFuzz achieves over 90% attack success rates against ChatGPT and Llama-2 models, even with suboptimal initial seed templates. We anticipate that GPTFuzz will be instrumental for researchers and practitioners in examining LLM robustness and will encourage further exploration into enhancing LLM safety.","{'model': 'tldr@v2.0.0', 'text': 'GPTFuzz is introduced, a novel black-box jailbreak fuzzing framework inspired by the AFL fuzzed framework that automates the generation of jailbreak templates for red-teaming LLMs and consistently produces jailbreaks with a high success rate, surpassing human-crafted templates.'}",https://arxiv.org/pdf/2309.10253 -AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models,Xiaogeng Liu,"The aligned Large Language Models (LLMs) are powerful language understanding and decision-making tools that are created through extensive alignment with human feedback. However, these large models remain susceptible to jailbreak attacks, where adversaries manipulate prompts to elicit malicious outputs that should not be given by aligned LLMs. Investigating jailbreak prompts can lead us to delve into the limitations of LLMs and further guide us to secure them. 
Unfortunately, existing jailbreak techniques suffer from either (1) scalability issues, where attacks heavily rely on manual crafting of prompts, or (2) stealthiness problems, as attacks depend on token-based algorithms to generate prompts that are often semantically meaningless, making them susceptible to detection through basic perplexity testing. In light of these challenges, we intend to answer this question: Can we develop an approach that can automatically generate stealthy jailbreak prompts? In this paper, we introduce AutoDAN, a novel jailbreak attack against aligned LLMs. AutoDAN can automatically generate stealthy jailbreak prompts by the carefully designed hierarchical genetic algorithm. Extensive evaluations demonstrate that AutoDAN not only automates the process while preserving semantic meaningfulness, but also demonstrates superior attack strength in cross-model transferability, and cross-sample universality compared with the baseline. Moreover, we also compare AutoDAN with perplexity-based defense methods and show that AutoDAN can bypass them effectively.","{'model': 'tldr@v2.0.0', 'text': 'Extensive evaluations demonstrate that AutoDAN not only automates the process while preserving semantic meaningfulness, but also demonstrates superior attack strength in cross-model transferability, and cross-sample universality compared with the baseline.'}",https://arxiv.org/pdf/2310.04451 -Multilingual Jailbreak Challenges in Large Language Models,Yue Deng,"While large language models (LLMs) exhibit remarkable capabilities across a wide range of tasks, they pose potential safety concerns, such as the ``jailbreak'' problem, wherein malicious instructions can manipulate LLMs to exhibit undesirable behavior. Although several preventive measures have been developed to mitigate the potential risks associated with LLMs, they have primarily focused on English data. In this study, we reveal the presence of multilingual jailbreak challenges within LLMs and consider two potential risk scenarios: unintentional and intentional. The unintentional scenario involves users querying LLMs using non-English prompts and inadvertently bypassing the safety mechanisms, while the intentional scenario concerns malicious users combining malicious instructions with multilingual prompts to deliberately attack LLMs. The experimental results reveal that in the unintentional scenario, the rate of unsafe content increases as the availability of languages decreases. Specifically, low-resource languages exhibit three times the likelihood of encountering harmful content compared to high-resource languages, with both ChatGPT and GPT-4. In the intentional scenario, multilingual prompts can exacerbate the negative impact of malicious instructions, with astonishingly high rates of unsafe output: 80.92\% for ChatGPT and 40.71\% for GPT-4. To handle such a challenge in the multilingual context, we propose a novel \textsc{Self-Defense} framework that automatically generates multilingual training data for safety fine-tuning. Experimental results show that ChatGPT fine-tuned with such data can achieve a substantial reduction in unsafe content generation. Data is available at https://github.com/DAMO-NLP-SG/multilingual-safety-for-LLMs. 
Warning: This paper contains examples with potentially harmful content.","{'model': 'tldr@v2.0.0', 'text': 'A novel \\textsc{Self-Defense} framework that automatically generates multilingual training data for safety fine-tuning is proposed that shows that ChatGPT fine-tuned with such data can achieve a substantial reduction in unsafe content generation.'}",https://arxiv.org/pdf/2310.06474
-Developing an accuracy-prompt toolkit to reduce COVID-19 misinformation online,Ziv Epstein,"Recent research suggests that shifting users’ attention to accuracy increases the quality of news they subsequently share online. Here we help develop this initial observation into a suite of deployable interventions for practitioners. We ask (i) how prior results generalize to other approaches for prompting users to consider accuracy, and (ii) for whom these prompts are more versus less effective. In a large survey experiment examining participants’ intentions to share true and false headlines about COVID-19, we identify a variety of different accuracy prompts that successfully increase sharing","{'model': 'tldr@v2.0.0', 'text': 'In a large survey experiment examining participants’ intentions to share true and false headlines about COVID-19, a variety of different accuracy prompts are identified that successfully increase sharing.'}",https://misinforeview.hks.harvard.edu/wp-content/uploads/2021/05/epstein_toolkit_covid_19_misinformation_20210518.pdf
-Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection,Kai Greshake,"Large Language Models (LLMs) are increasingly being integrated into various applications. The functionalities of recent LLMs can be flexibly modulated via natural language prompts. This renders them susceptible to targeted adversarial prompting, e.g., Prompt Injection (PI) attacks enable attackers to override original instructions and employed controls. So far, it was assumed that the user is directly prompting the LLM. But, what if it is not the user prompting? We argue that LLM-Integrated Applications blur the line between data and instructions. We reveal new attack vectors, using Indirect Prompt Injection, that enable adversaries to remotely (without a direct interface) exploit LLM-integrated applications by strategically injecting prompts into data likely to be retrieved. We derive a comprehensive taxonomy from a computer security perspective to systematically investigate impacts and vulnerabilities, including data theft, worming, information ecosystem contamination, and other novel security risks. We demonstrate our attacks' practical viability against both real-world systems, such as Bing's GPT-4 powered Chat and code-completion engines, and synthetic applications built on GPT-4. We show how processing retrieved prompts can act as arbitrary code execution, manipulate the application's functionality, and control how and if other APIs are called. Despite the increasing integration and reliance on LLMs, effective mitigations of these emerging threats are currently lacking.
By raising awareness of these vulnerabilities and providing key insights into their implications, we aim to promote the safe and responsible deployment of these powerful models and the development of robust defenses that protect users and systems from potential attacks.","{'model': 'tldr@v2.0.0', 'text': ""It is argued that LLM-Integrated Applications blur the line between data and instructions, and it is shown how processing retrieved prompts can act as arbitrary code execution, manipulate the application's functionality, and control how and if other APIs are called.""}",
-Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt Tuning and Discovery,Yuxin Wen,"The strength of modern generative models lies in their ability to be controlled through text-based prompts. Typical""hard""prompts are made from interpretable words and tokens, and must be hand-crafted by humans. There are also""soft""prompts, which consist of continuous feature vectors. These can be discovered using powerful optimization methods, but they cannot be easily interpreted, re-used across models, or plugged into a text-based interface. We describe an approach to robustly optimize hard text prompts through efficient gradient-based optimization. Our approach automatically generates hard text-based prompts for both text-to-image and text-to-text applications. In the text-to-image setting, the method creates hard prompts for diffusion models, allowing API users to easily generate, discover, and mix and match image concepts without prior knowledge on how to prompt the model. In the text-to-text setting, we show that hard prompts can be automatically discovered that are effective in tuning LMs for classification.","{'model': 'tldr@v2.0.0', 'text': 'This work describes an approach to robustly optimize hard text prompts through efficient gradient-based optimization and shows that hard prompts can be automatically discovered that are effective in tuning LMs for classification.'}",http://arxiv.org/pdf/2302.03668
-More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models,Kai Greshake,"We are currently witnessing dramatic advances in the capabilities of Large Language Models (LLMs). They are already being adopted in practice and integrated into many systems, including integrated development environments (IDEs) and search engines. The functionalities of current LLMs can be modulated via natural language prompts, while their exact internal functionality remains implicit and unassessable. This property, which makes them adaptable to even unseen tasks, might also make them susceptible to targeted adversarial prompting. Recently, several ways to misalign LLMs using Prompt Injection (PI) attacks have been introduced. In such attacks, an adversary can prompt the LLM to produce malicious content or override the original instructions and the employed filtering schemes. Recent work showed that these attacks are hard to mitigate, as state-of-the-art LLMs are instruction-following. So far, these attacks assumed that the adversary is directly prompting the LLM. In this work, we show that augmenting LLMs with retrieval and API calling capabilities (so-called Application-Integrated LLMs) induces a whole new set of attack vectors. These LLMs might process poisoned content retrieved from the Web that contains malicious prompts pre-injected and selected by adversaries. We demonstrate that an attacker can indirectly perform such PI attacks.
Based on this key insight, we systematically analyze the resulting threat landscape of Application-Integrated LLMs and discuss a variety of new attack vectors. To demonstrate the practical viability of our attacks, we implemented specific demonstrations","{'model': 'tldr@v2.0.0', 'text': 'This work systematically analyze the resulting threat landscape of Application-Integrated LLMs and discusses a variety of new attack vectors, including poisoned content retrieved from the Web that contains malicious prompts pre-injected and selected by adversaries.'}",http://arxiv.org/pdf/2302.12173
-Catastrophic Jailbreak of Open-source LLMs via Exploiting Generation,Yangsibo Huang,"The rapid progress in open-source large language models (LLMs) is significantly advancing AI development. Extensive efforts have been made before model release to align their behavior with human values, with the primary goal of ensuring their helpfulness and harmlessness. However, even carefully aligned models can be manipulated maliciously, leading to unintended behaviors, known as""jailbreaks"". These jailbreaks are typically triggered by specific text inputs, often referred to as adversarial prompts. In this work, we propose the generation exploitation attack, an extremely simple approach that disrupts model alignment by only manipulating variations of decoding methods. By exploiting different generation strategies, including varying decoding hyper-parameters and sampling methods, we increase the misalignment rate from 0% to more than 95% across 11 language models including LLaMA2, Vicuna, Falcon, and MPT families, outperforming state-of-the-art attacks with $30\times$ lower computational cost. Finally, we propose an effective alignment method that explores diverse generation strategies, which can reasonably reduce the misalignment rate under our attack. Altogether, our study underscores a major failure in current safety evaluation and alignment procedures for open-source LLMs, strongly advocating for more comprehensive red teaming and better alignment before releasing such models. Our code is available at https://github.com/Princeton-SysML/Jailbreak_LLM.","{'model': 'tldr@v2.0.0', 'text': 'This work proposes the generation exploitation attack, an extremely simple approach that disrupts model alignment by only manipulating variations of decoding methods, and proposes an effective alignment method that explores diverse generation strategies, which can reasonably reduce the misalignment rate under the attack.'}",https://arxiv.org/pdf/2310.06987
-Visual Adversarial Examples Jailbreak Large Language Models,Xiangyu Qi,for,"{'model': 'tldr@v2.0.0', 'text': None}",https://arxiv.org/pdf/2306.13213
-Jailbreak and Guard Aligned Language Models with Only Few In-Context Demonstrations,Zeming Wei,"Large Language Models (LLMs) have shown remarkable success in various tasks, but concerns about their safety and the potential for generating malicious content have emerged. In this paper, we explore the power of In-Context Learning (ICL) in manipulating the alignment ability of LLMs. We find that by providing just few in-context demonstrations without fine-tuning, LLMs can be manipulated to increase or decrease the probability of jailbreaking, i.e. answering malicious prompts. Based on these observations, we propose In-Context Attack (ICA) and In-Context Defense (ICD) methods for jailbreaking and guarding aligned language model purposes.
ICA crafts malicious contexts to guide models in generating harmful outputs, while ICD enhances model robustness by demonstrations of rejecting to answer harmful prompts. Our experiments show the effectiveness of ICA and ICD in increasing or reducing the success rate of adversarial jailbreaking attacks. Overall, we shed light on the potential of ICL to influence LLM behavior and provide a new perspective for enhancing the safety and alignment of LLMs.","{'model': 'tldr@v2.0.0', 'text': 'Light is shed on the potential of In-Context Learning (ICL) to influence LLM behavior and a new perspective for enhancing the safety and alignment of LLMs is provided.'}",https://arxiv.org/pdf/2310.06387 -Prompt Injection attack against LLM-integrated Applications,Yi Liu,"Large Language Models (LLMs), renowned for their superior proficiency in language comprehension and generation, stimulate a vibrant ecosystem of applications around them. However, their extensive assimilation into various services introduces significant security risks. This study deconstructs the complexities and implications of prompt injection attacks on actual LLM-integrated applications. Initially, we conduct an exploratory analysis on ten commercial applications, highlighting the constraints of current attack strategies in practice. Prompted by these limitations, we subsequently formulate HouYi, a novel black-box prompt injection attack technique, which draws inspiration from traditional web injection attacks. HouYi is compartmentalized into three crucial elements: a seamlessly-incorporated pre-constructed prompt, an injection prompt inducing context partition, and a malicious payload designed to fulfill the attack objectives. Leveraging HouYi, we unveil previously unknown and severe attack outcomes, such as unrestricted arbitrary LLM usage and uncomplicated application prompt theft. We deploy HouYi on 36 actual LLM-integrated applications and discern 31 applications susceptible to prompt injection. 10 vendors have validated our discoveries, including Notion, which has the potential to impact millions of users. Our investigation illuminates both the possible risks of prompt injection attacks and the possible tactics for mitigation.","{'model': 'tldr@v2.0.0', 'text': 'This study deconstructs the complexities and implications of prompt injection attacks on actual LLM-integrated applications and forms HouYi, a novel black-box prompt injection attack technique, which draws inspiration from traditional web injection attacks.'}",http://arxiv.org/pdf/2306.05499 -Prompt Engineering for Healthcare: Methodologies and Applications,Jiaqi Wang,"This review will introduce the latest advances in prompt engineering in the field of natural language processing (NLP) for the medical domain. First, we will provide a brief overview of the development of prompt engineering and emphasize its significant contributions to healthcare NLP applications such as question-answering systems, text summarization, and machine translation. With the continuous improvement of general large language models, the importance of prompt engineering in the healthcare domain is becoming increasingly prominent. The aim of this article is to provide useful resources and bridges for healthcare NLP researchers to better explore the application of prompt engineering in this field. 
We hope that this review can provide new ideas and inspire ample possibilities for research and application in medical NLP.","{'model': 'tldr@v2.0.0', 'text': 'This review will introduce the latest advances in prompt engineering in the field of natural language processing (NLP) for the medical domain and highlight its significant contributions to healthcare NLP applications such as question-answering systems, text summarization, and machine translation.'}",http://arxiv.org/pdf/2304.14670 -Equation of State Constraints from the Threshold Binary Mass for Prompt Collapse of Neutron Star Mergers.,A. Bauswein,"Using hydrodynamical simulations for a large set of high-density matter equations of state (EOSs), we systematically determine the threshold mass M_{thres} for prompt black-hole formation in equal-mass and asymmetric neutron star (NS) mergers. We devise the so far most direct, general, and accurate method to determine the unknown maximum mass of nonrotating NSs from merger observations revealing M_{thres}. Considering hybrid EOSs with hadron-quark phase transition, we identify a new, observable signature of quark matter in NS mergers. Furthermore, our findings have direct applications in gravitational wave searches, kilonova interpretations, and multimessenger constraints on NS properties.","{'model': 'tldr@v2.0.0', 'text': 'Considering hybrid EOSs with hadron-quark phase transition, a new, observable signature of quark matter in NS mergers is identified, which has direct applications in gravitational wave searches, kilonova interpretations, and multimessenger constraints on NS properties.'}",https://authors.library.caltech.edu/104636/3/PhysRevLett.125.141103.pdf -Prompt as Triggers for Backdoor Attack: Examining the Vulnerability in Language Models,Shuai Zhao,"The prompt-based learning paradigm, which bridges the gap between pre-training and fine-tuning, achieves state-of-the-art performance on several NLP tasks, particularly in few-shot settings. Despite being widely applied, prompt-based learning is vulnerable to backdoor attacks. Textual backdoor attacks are designed to introduce targeted vulnerabilities into models by poisoning a subset of training samples through trigger injection and label modification. However, they suffer from flaws such as abnormal natural language expressions resulting from the trigger and incorrect labeling of poisoned samples. In this study, we propose ProAttack, a novel and efficient method for performing clean-label backdoor attacks based on the prompt, which uses the prompt itself as a trigger. Our method does not require external triggers and ensures correct labeling of poisoned samples, improving the stealthy nature of the backdoor attack. With extensive experiments on rich-resource and few-shot text classification tasks, we empirically validate ProAttack's competitive performance in textual backdoor attacks. 
Notably, in the rich-resource setting, ProAttack achieves state-of-the-art attack success rates in the clean-label backdoor attack benchmark without external triggers.","{'model': 'tldr@v2.0.0', 'text': 'This study proposes ProAttack, a novel and efficient method for performing clean-label backdoor attacks based on the prompt, which uses the prompt itself as a trigger, which does not require external triggers and ensures correct labeling of poisoned samples, improving the stealthy nature of the backdoor attack.'}",http://arxiv.org/pdf/2305.01219 -NOTABLE: Transferable Backdoor Attacks Against Prompt-based NLP Models,Kai Mei,"Prompt-based learning is vulnerable to backdoor attacks. Existing backdoor attacks against prompt-based models consider injecting backdoors into the entire embedding layers or word embedding vectors. Such attacks can be easily affected by retraining on downstream tasks and with different prompting strategies, limiting the transferability of backdoor attacks. In this work, we propose transferable backdoor attacks against prompt-based models, called NOTABLE, which is independent of downstream tasks and prompting strategies. Specifically, NOTABLE injects backdoors into the encoders of PLMs by utilizing an adaptive verbalizer to bind triggers to specific words (i.e., anchors). It activates the backdoor by pasting input with triggers to reach adversary-desired anchors, achieving independence from downstream tasks and prompting strategies. We conduct experiments on six NLP tasks, three popular models, and three prompting strategies. Empirical results show that NOTABLE achieves superior attack performance (i.e., attack success rate over 90% on all the datasets), and outperforms two state-of-the-art baselines. Evaluations on three defenses show the robustness of NOTABLE. Our code can be found at https://github.com/RU-System-Software-and-Security/Notable.","{'model': 'tldr@v2.0.0', 'text': 'This work proposes transferable backdoor attacks against prompt-based models, called NOTABLE, which is independent of downstream tasks and prompting strategies, and achieves superior attack performance and outperforms two state-of-the-art baselines.'}",http://arxiv.org/pdf/2305.17826 -Prompts Should not be Seen as Secrets: Systematically Measuring Prompt Extraction Attack Success,Yiming Zhang,"The generations of large language models are commonly controlled through prompting techniques, where a user's query to the model is prefixed with a prompt that aims to guide the model's behaviour on the query. The prompts used by companies to guide their models are often treated as secrets, to be hidden from the user making the query. They have even been treated as commodities to be bought and sold. However, there has been anecdotal evidence showing that the prompts can be extracted by a user even when they are kept secret. In this paper, we present a framework for systematically measuring the success of prompt extraction attacks. In experiments with multiple sources of prompts and multiple underlying language models, we find that simple text-based attacks can in fact reveal prompts with high probability.","{'model': 'tldr@v2.0.0', 'text': 'In experiments with multiple sources of prompts and multiple underlying language models, it is found that simple text-based attacks can in fact reveal prompts with high probability.'}",https://arxiv.org/pdf/2307.06865 -SAM on Medical Images: A Comprehensive Study on Three Prompt Modes,D. 
Cheng,"The Segment Anything Model (SAM) made an eye-catching debut recently and inspired many researchers to explore its potential and limitation in terms of zero-shot generalization capability. As the first promptable foundation model for segmentation tasks, it was trained on a large dataset with an unprecedented number of images and annotations. This large-scale dataset and its promptable nature endow the model with strong zero-shot generalization. Although the SAM has shown competitive performance on several datasets, we still want to investigate its zero-shot generalization on medical images. As we know, the acquisition of medical image annotation usually requires a lot of effort from professional practitioners. Therefore, if there exists a foundation model that can give high-quality mask prediction simply based on a few point prompts, this model will undoubtedly become the game changer for medical image analysis. To evaluate whether SAM has the potential to become the foundation model for medical image segmentation tasks, we collected more than 12 public medical image datasets that cover various organs and modalities. We also explore what kind of prompt can lead to the best zero-shot performance with different modalities. Furthermore, we find that a pattern shows that the perturbation of the box size will significantly change the prediction accuracy. Finally, Extensive experiments show that the predicted mask quality varied a lot among different datasets. And providing proper prompts, such as bounding boxes, to the SAM will significantly increase its performance.","{'model': 'tldr@v2.0.0', 'text': 'To evaluate whether SAM has the potential to become the foundation model for medical image segmentation tasks, more than 12 public medical image datasets are collected and extensive experiments show that the predicted mask quality varied a lot among different datasets.'}",http://arxiv.org/pdf/2305.00035 -From Prompt Injections to SQL Injection Attacks: How Protected is Your LLM-Integrated Web Application?,Rodrigo Pedro,"Large Language Models (LLMs) have found widespread applications in various domains, including web applications, where they facilitate human interaction via chatbots with natural language interfaces. Internally, aided by an LLM-integration middleware such as Langchain, user prompts are translated into SQL queries used by the LLM to provide meaningful responses to users. However, unsanitized user prompts can lead to SQL injection attacks, potentially compromising the security of the database. Despite the growing interest in prompt injection vulnerabilities targeting LLMs, the specific risks of generating SQL injection attacks through prompt injections have not been extensively studied. In this paper, we present a comprehensive examination of prompt-to-SQL (P$_2$SQL) injections targeting web applications based on the Langchain framework. Using Langchain as our case study, we characterize P$_2$SQL injections, exploring their variants and impact on application security through multiple concrete examples. Furthermore, we evaluate 7 state-of-the-art LLMs, demonstrating the pervasiveness of P$_2$SQL attacks across language models. Our findings indicate that LLM-integrated applications based on Langchain are highly susceptible to P$_2$SQL injection attacks, warranting the adoption of robust defenses. To counter these attacks, we propose four effective defense techniques that can be integrated as extensions to the Langchain framework. 
We validate the defenses through an experimental evaluation with a real-world use case application.","{'model': 'tldr@v2.0.0', 'text': 'It is indicated that LLM-integrated applications based on Langchain are highly susceptible to P$_2$SQL injection attacks, warranting the adoption of robust defenses, and four effective defense techniques that can be integrated as extensions to the Langchain framework are proposed.'}",https://arxiv.org/pdf/2308.01990 -INTEGRAL Detection of the First Prompt Gamma-Ray Signal Coincident with the Gravitational-wave Event GW170817,V. Savchenko,"We report the INTernational Gamma-ray Astrophysics Laboratory (INTEGRAL) detection of the short gamma-ray burst GRB 170817A (discovered by Fermi-GBM) with a signal-to-noise ratio of 4.6, and, for the first time, its association with the gravitational waves (GWs) from binary neutron star (BNS) merging event GW170817 detected by the LIGO and Virgo observatories. The significance of association between the gamma-ray burst observed by INTEGRAL and GW170817 is 3.2σ, while the association between the Fermi-GBM and INTEGRAL detections is 4.2σ. GRB 170817A was detected by the SPI-ACS instrument about 2 s after the end of the GW event. We measure a fluence of (1.4 ± 0.4 ± 0.6) × 10−7 erg cm−2 (75–2000 keV), where, respectively, the statistical error is given at the 1σ confidence level, and the systematic error corresponds to the uncertainty in the spectral model and instrument response. We also report on the pointed follow-up observations carried out by INTEGRAL, starting 19.5 hr after the event, and lasting for 5.4 days. We provide a stringent upper limit on any electromagnetic signal in a very broad energy range, from 3 keV to 8 MeV, constraining the soft gamma-ray afterglow flux to <7.1 × 10−11 erg cm−2 s−1 (80–300 keV). Exploiting the unique capabilities of INTEGRAL, we constrained the gamma-ray line emission from radioactive decays that are expected to be the principal source of the energy behind a kilonova event following a BNS coalescence. Finally, we put a stringent upper limit on any delayed bursting activity, for example, from a newly formed magnetar.",,https://iopscience.iop.org/article/10.3847/2041-8213/aa8f94/pdf -A General-relativistic Determination of the Threshold Mass to Prompt Collapse in Binary Neutron Star Mergers,Sven Köppel,"We study the lifetimes of the remnant produced by the merger of two neutron stars and revisit the determination of the threshold mass to prompt collapse, Mth. Using a fully general-relativistic numerical approach and a novel method for a rigorous determination of Mth, we show that a nonlinear universal relation exists between the threshold mass and the maximum compactness. For the temperature-dependent equations of state considered here, our results improve a similar linear relation found recently with methods that are less accurate but yield quantitatively similar results. Furthermore, exploiting the information from GW170817, we use the universal relation to set lower limits on the stellar radii for any mass.",,https://iopscience.iop.org/article/10.3847/2041-8213/ab0210/pdf -Prompt rewetting of drained peatlands reduces climate warming despite methane emissions,A. 
Günther,,"{'model': 'tldr@v2.0.0', 'text': 'A radiative forcing model is used to compare forcing dynamics of global scenarios for future peatland management using areal data from the Global Peatland Database and shows that CH4 radiative forcing does not undermine the climate change mitigation potential of peatlands rewetting.'}",https://www.nature.com/articles/s41467-020-15499-z.pdf
-Prompt optical emission as a signature of synchrotron radiation in gamma-ray bursts,G. Oganesyan,"Information on the spectral shape of prompt emission in gamma-ray bursts (GRB) is mostly available only at energies ≳10 keV, where the main instruments for GRB detection are sensitive. The origin of this emission is still very uncertain because of the apparent inconsistency with synchrotron radiation, which is the most obvious candidate, and the resulting need for considering less straightforward scenarios. The inclusion of data down to soft X-rays (∼0.5 keV), which are available only in a small fraction of GRBs, has firmly established the common presence of a spectral break in the low-energy part of prompt spectra, and even more importantly, the consistency of the overall spectral shape with synchrotron radiation in the moderately fast-cooling regime, the low-energy break being identified with the cooling frequency. In this work we further extend the range of investigation down to the optical band. In particular, we test the synchrotron interpretation by directly fitting a theoretically derived synchrotron spectrum and making use of optical to gamma-ray data. Secondly, we test an alternative model that considers the presence of a black-body component at ∼keV energies, in addition to a non-thermal component that is responsible for the emission at the spectral peak (100 keV–1 MeV). We find that synchrotron radiation provides a good description of the broadband data, while models composed of a thermal and a non-thermal component require the introduction of a low-energy break in the non-thermal component in order to be consistent with optical observations. Motivated by the good quality of the synchrotron fits, we explore the physical parameter space of the emitting region. In a basic prompt emission scenario we find quite contrived solutions for the magnetic field strength (5 G < B′ < 40 G) and for the location of the region where the radiation is produced (Rγ > 10¹⁶ cm). We discuss which assumptions of the basic model would need to be relaxed in order to achieve a more natural parameter space.",,https://www.aanda.org/articles/aa/pdf/2019/08/aa35766-19.pdf
-Prompt,Cshell Xui,"Array Set the size and location of sub-arrays to be readout for r the next GO. These coordinate are relative to the physical device. You data is normally viewed rotated 90° clockwise. Prompt Array_icon on the Parameters screen, OBS page. Range x y wid hgt The (x,y) location of the upper left corner and its width and hgt is specified. Please note that these values must be multiples of 8. Initial Full size (0 0 256 256). Syntax ARRAY x y wid hgt",,
-Evidence of two spectral breaks in the prompt emission of gamma-ray bursts,M. Ravasio,"The long-lasting tension between the observed spectra of gamma-ray bursts (GRBs) and the predicted synchrotron emission spectrum might be solved if electrons do not completely cool. Evidence of incomplete cooling was recently found in Swift GRBs with prompt observations down to 0.1 keV, and in one bright Fermi burst, GRB 160625B.
Here we systematically search for evidence of incomplete cooling in the spectra of the ten brightest short and long GRBs observed by Fermi. We find that in eight out of ten long GRBs there is compelling evidence of a low-energy break (below the peak energy) and good agreement with the photon indices of the synchrotron spectrum (respectively −2/3 and −3/2 below the break and between the break and the peak energy). Interestingly, none of the ten short GRBs analysed shows a break, but the low-energy spectral slope is consistent with −2/3. In a standard scenario, these results imply a very low magnetic field in the emission region (B′∼10 G in the comoving frame), at odds with expectations.",,https://www.aanda.org/articles/aa/pdf/2019/05/aa34987-18.pdf -Digital Forensic Analysis on iDevice: Jailbreak iOS 12.1.1 as a Case Study,Amin Ali,"Jailbreaking has an issue with data alteration, as it modifies file(s) on the device to allow the user to extract more data than would be possible without jailbreaking. This issue raises controversy over the use of jailbreaking in digital forensic investigation, as data integrity is a prominent requirement in a court proceeding. This study aims to analyze the process of jailbreaking, what is actually done by the jailbreak code in a device, and what data is actually modified by the jailbreak code. Using the latest version of the iOS system, this study uses the voucher_swap exploit as a representation of the semi-tethered jailbreaking method to investigate the effects of jailbreak on data integrity on an iDevice. The investigation is conducted based on the extent to which data can be extracted from the jailbroken device, hash value comparison of the data, and source code analysis to scrutinize the effect of jailbreak on the system and user data inside the device. Results of this study suggest that jailbreak is acceptable for preparing an iDevice in digital forensic investigations to acquire more data, as it maintains the integrity of user data. These results may help forensic communities in their decision about the acceptability of jailbreaking in iDevice forensic investigations.","{'model': 'tldr@v2.0.0', 'text': 'Results of this study suggest that jailbreak is acceptable for preparing an iDevice in digital forensic investigations to acquire more data, as it maintains the integrity of user data.'}", -Detailed polarization measurements of the prompt emission of five gamma-ray bursts,Shuang-Nan Zhang,,,https://arxiv.org/pdf/1901.04207 -3D prompt gamma imaging for proton beam range verification,E. Draeger,"We tested the ability of a single Compton camera (CC) to produce 3-dimensional (3D) images of prompt gammas (PGs) emitted during the irradiation of a tissue-equivalent plastic phantom with proton pencil beams for clinical doses delivered at clinical dose rates. PG measurements were made with a small prototype CC placed at three different locations along the proton beam path. We evaluated the ability of the CC to produce images at each location for two clinical scenarios: (1) the delivery of a single 2 Gy pencil beam from a hypo-fractionated treatment (~9  ×  108 protons), and (2) a single pencil beam from a standard treatment (~1  ×  108 protons). Additionally, the data measured at each location were combined to simulate measurements with a larger scale, clinical CC and its ability to image shifts in the Bragg peak (BP) range for both clinical scenarios. With our prototype CC, the location of the distal end of the BP could be seen with the CC placed up to 4 cm proximal or distal to the BP distal falloff.
Using the data from the simulated full scale clinical CC, 3D images of the PG emission were produced with the delivery of as few as 1  ×  108 protons, and shifts in the proton beam range as small as 2 mm could be detected for delivery of a 2 Gy spot. From these results we conclude that 3D PG imaging for proton range verification under clinical beam delivery conditions is possible with a single CC.","{'model': 'tldr@v2.0.0', 'text': '3D PG imaging for proton range verification under clinical beam delivery conditions is possible with a single Compton camera, and shifts in the proton beam range as small as 2\u2009mm could be detected for delivery of a 2 Gy spot.'}",https://europepmc.org/articles/pmc5808927?pdf=render -GRB 190114C: from prompt to afterglow?,M. Ravasio,"GRB 190114C is the first gamma-ray burst detected at very high energies (VHE, i.e., > 300 GeV) by the MAGIC Cherenkov telescope. The analysis of the emission detected by the Fermi satellite at lower energies, in the 10 keV–100 GeV energy range, up to ∼50 s (i.e., before the MAGIC detection) can hold valuable information. We analyze the spectral evolution of the emission of GRB 190114C as detected by the Fermi Gamma-Ray Burst Monitor (GBM) in the 10 keV–40 MeV energy range up to ∼60 s. The first 4 s of the burst feature a typical prompt emission spectrum, which can be fit by a smoothly broken power-law function with typical parameters. Starting on ∼4 s post-trigger, we find an additional nonthermal component that can be fit by a power law. This component rises and decays quickly. The 10 keV–40 MeV flux of the power-law component peaks at ∼6 s; it reaches a value of 1.7 × 10−5 erg cm−2 s−1. The time of the peak coincides with the emission peak detected by the Large Area Telescope (LAT) on board Fermi. The power-law spectral slope that we find in the GBM data is remarkably similar to that of the LAT spectrum, and the GBM+LAT spectral energy distribution seems to be consistent with a single component. This suggests that the LAT emission and the power-law component that we find in the GBM data belong to the same emission component, which we interpret as due to the afterglow of the burst. The onset time allows us to estimate that the initial jet bulk Lorentz factor Γ0 is about 500, depending on the assumed circum-burst density.",,https://www.aanda.org/articles/aa/pdf/2019/06/aa35214-19.pdf -A full-scale clinical prototype for proton range verification using prompt gamma-ray spectroscopy,F. Hueso-González,"We present a full-scale clinical prototype system for in vivo range verification of proton pencil-beams using the prompt gamma-ray spectroscopy method. The detection system consists of eight LaBr3 scintillators and a tungsten collimator, mounted on a rotating frame. Custom electronics and calibration algorithms have been developed for the measurement of energy- and time-resolved gamma-ray spectra during proton irradiation at a clinical dose rate. Using experimentally determined nuclear reaction cross sections and a GPU-accelerated Monte Carlo simulation, a detailed model of the expected gamma-ray emissions is created for each individual pencil-beam. The absolute range of the proton pencil-beams is determined by minimizing the discrepancy between the measurement and this model, leaving the absolute range of the beam and the elemental concentrations of the irradiated matter as free parameters. The system was characterized in a clinical-like situation by irradiating different phantoms with a scanning pencil-beam. 
A dose of 0.9 Gy was delivered to a cm3 target with a beam current of 2 nA incident on the phantom. Different range shifters and materials were used to test the robustness of the verification method and to calculate the accuracy of the detected range. The absolute proton range was determined for each spot of the distal energy layer with a mean statistical precision of 1.1 mm at a 95% confidence level and a mean systematic deviation of 0.5 mm, when aggregating pencil-beam spots within a cylindrical region of 10 mm radius and 10 mm depth. Small range errors that we introduced were successfully detected and even large differences in the elemental composition do not affect the range verification accuracy. These results show that our system is suitable for range verification during patient treatments in our upcoming clinical study.","{'model': 'tldr@v2.0.0', 'text': ""Results show that the full-scale clinical prototype system for in vivo range verification of proton pencil-beams using the prompt gamma-ray spectroscopy method is suitable for range verification during patient treatments in the authors' upcoming clinical study.""}",https://europepmc.org/articles/pmc6340397?pdf=render -False Sense of Security: A Study on the Effectivity of Jailbreak Detection in Banking Apps,Ansgar Kellner,"People increasingly rely on mobile devices for banking transactions or two-factor authentication (2FA) and thus trust in the security provided by the underlying operating system. Simultaneously, jailbreaks gain tremendous popularity among regular users for customizing their devices. In this paper, we show that both do not go well together: Jailbreaks remove vital security mechanisms, which are necessary to ensure a trusted environment that allows to protect sensitive data, such as login credentials and transaction numbers (TANs). We find that all but one banking app, available in the iOS App Store, can be fully compromised by trivial means without reverse-engineering, manipulating the app, or other sophisticated attacks. Even worse, 44% of the banking apps do not even try to detect jailbreaks, revealing the prevalent, errant trust in the operating system's security. This study assesses the current state of security of banking apps and pleads for more advanced defensive measures for protecting user data.","{'model': 'tldr@v2.0.0', 'text': 'It is found that all but one banking app, available in the iOS App Store, can be fully compromised by trivial means without reverse-engineering, manipulating the app, or other sophisticated attacks.'}",https://ieeexplore.ieee.org/ielx7/8790377/8806708/08806743.pdf -Intravitreal Ranibizumab for diabetic macular edema with prompt versus deferred laser treatment: 5-year randomized trial results.,M. Elman,,"{'model': 'tldr@v2.0.0', 'text': 'Five-year results suggest focal/grid laser treatment at the initiation of intravitreal ranibizumab is no better than deferring laser treatment for ≥24 weeks in eyes with DME involving the central macula with vision impairment.'}",https://europepmc.org/articles/pmc4520307?pdf=render -TDNN: A Two-stage Deep Neural Network for Prompt-independent Automated Essay Scoring,Cancan Jin,"Existing automated essay scoring (AES) models rely on rated essays for the target prompt as training data. Despite their successes in prompt-dependent AES, how to effectively predict essay ratings under a prompt-independent setting remains a challenge, where the rated essays for the target prompt are not available. 
To close this gap, a two-stage deep neural network (TDNN) is proposed. In particular, in the first stage, using the rated essays for non-target prompts as the training data, a shallow model is learned to select essays of extreme quality for the target prompt, serving as pseudo training data; in the second stage, an end-to-end hybrid deep model is proposed to learn a prompt-dependent rating model consuming the pseudo training data from the first step. Evaluation of the proposed TDNN on the standard ASAP dataset demonstrates a promising improvement for the prompt-independent AES task.","{'model': 'tldr@v2.0.0', 'text': 'A two-stage deep neural network (TDNN) is proposed to learn a prompt-dependent rating model consuming the pseudo training data from the first step, demonstrating a promising improvement for the prompt-independent AES task.'}",https://www.aclweb.org/anthology/P18-1100.pdf -Prompt-gamma monitoring in hadrontherapy: A review,J. Krimmer,,,https://hal.archives-ouvertes.fr/hal-01585334/file/Krimmer2017_nima.pdf -Remote-Controlled Switch Allocation Enabling Prompt Restoration of Distribution Systems,Shunbo Lei,"Remote-controlled switches (RCSs) play an important role in prompt service restoration of distribution systems (DSs). However, the cost of RCSs and the vast footprint of DSs limit widespread utilization of RCSs. In this paper, we present a new approach to RCS allocation for improving the performance of restoration and optimizing reliability benefits with reasonable RCS cost. Specifically, the optimal number and locations of to-be-upgraded switches can be determined with different objectives: maximizing the reduction of customer interruption cost; maximizing the reduction of the system average interruption duration index; or maximizing the amount of load that can be restored using the upgraded RCSs. We show that these models can actually be formulated as mixed-integer convex programming problems. We further introduce a novel method to equivalently transform and efficiently solve each of them. The global optimum can thus be computed within a reasonable amount of time. The IEEE 33-node and 123-node test systems are used to demonstrate the proposed models and algorithms.","{'model': 'tldr@v2.0.0', 'text': 'A new approach to RCS allocation for improving the performance of restoration and optimizing reliability benefits with reasonable RCS cost is presented and a novel method to equivalently transform and efficiently solve each of them is introduced.'}", -Off-axis Prompt X-Ray Transients from the Cocoon of Short Gamma-Ray Bursts,D. Lazzati,"We present the results of numerical simulations of the prompt emission of short-duration gamma-ray bursts. We consider emission from the relativistic jet, the mildly relativistic cocoon, and the non-relativistic shocked ambient material. We find that the cocoon material is confined between off-axis angles and gives origin to X-ray transients with a duration of a few to ∼10 s, delayed by a few seconds from the time of the merger. We also discuss the distance at which such transients can be detected, finding that it depends sensitively on the assumptions that are made about the radiation spectrum. Purely thermal cocoon transients are detectable only out to a few Mpc, while Comptonized transients can instead be detected by the Fermi Gamma-ray Burst Monitor (GBM) out to several tens of Mpc.",,https://iopscience.iop.org/article/10.3847/2041-8213/aa8f3d/pdf -Measurements of prompt charm production cross-sections in pp collisions at √s = 5 TeV,R.
Aaij,"Production cross-sections of prompt charm mesons are measured using data from pp collisions at the LHC at a centre-of-mass energy of 5 TeV. The data sample corresponds to an integrated luminosity of 8.60 ± 0.33 pb−1 collected by the LHCb experiment. The production cross-sections of D0, D+, D+s, and D∗+ mesons are measured in bins of charm meson transverse momentum, pT, and rapidity, y. They cover the rapidity range 2.0 < y < 4.5.",, -Measurement of intrinsic rise times for various L(Y)SO and LuAG scintillators with a general study of prompt photons to achieve 10 ps in TOF-PET,S. Gundacker,"In time-of-flight positron emission tomography (TOF-PET), the coincidence time resolution (CTR) depends on the scintillation decay time (τd) and the number of photons detected (n′), i.e. CTR ∝ √(τd/n′). However, it is still an open question to what extent the scintillation rise time (τr) and other fast or prompt photons, e.g. Cherenkov photons, at the beginning of the scintillation process influence the CTR. This paper presents measurements of the scintillation emission rate for different LSO type crystals, i.e. LSO:Ce, LYSO:Ce, LSO:Ce codoped Ca and LGSO:Ce. For the various LSO-type samples measured we find an average value of 70 ps for the scintillation rise time, although some crystals like LSO:Ce codoped Ca seem to have a much faster rise time in the order of 20 ps. Additional measurements for LuAG:Ce and LuAG:Pr show a rise time of 535 ps and 251 ps, respectively. For these crystals, prompt photons (Cherenkov) can be observed at the beginning of the scintillation event. Furthermore, a significantly lower rise time value is observed when codoping with calcium. To quantitatively investigate the influence of the rise time on the time resolution we measured the CTR with the same L(Y)SO samples and compared the values to Monte Carlo simulations. Using the measured relative light yields, rise- and decay times of the scintillators we are able to quantitatively understand the measured CTRs in our simulations. Although the rise time is important to fully explain the CTR variation for the different samples tested, we determined its influence on the CTR to be in the order of a few percent only. This result is surprising because, if only photon statistics of the scintillation process is considered, the CTR would be proportional to the square root of the rise time. The unexpectedly small rise time influence on the CTR can be explained by the convolution of the scintillation rate with the single photon time resolution (SPTR) of the photodetector and the photon travel spread (PTS) in the crystal. The timing benefits of prompt photons at the beginning of the scintillation process (Cherenkov etc.) are further studied, which leads to the conclusion that the scintillation rise time, SPTR and PTS have to be lowered simultaneously to fully profit from these fast photons in order to improve the CTR significantly.","{'model': 'tldr@v2.0.0', 'text': 'The timing benefits of prompt photons at the beginning of the scintillation process (Cherenkov etc.) are further studied, which leads to the conclusion that the scintillation rise time, SPTR and PTS have to be lowered simultaneously to fully profit from these fast photons in order to improve the CTR significantly.'}", -Promoting End-of-Life Discussions in Advanced Cancer: Effects of Patient Coaching and Question Prompt Lists.,Rachel A Rodenbach,"Purpose To build on results of a cluster randomized controlled trial (RCT) of a combined patient-oncologist intervention to improve communication in advanced cancer, we conducted a post hoc analysis of the patient intervention component, a previsit patient coaching session that used a question prompt list (QPL).
We hypothesized that intervention-group participants would bring up more QPL-related topics, particularly prognosis-related topics, during the subsequent oncologist visit. Patients and Methods This cluster RCT enrolled 170 patients who had advanced nonhematologic cancer (and their caregivers), recruited from the practices of 24 participating oncologists in western New York. Intervention-group oncologists (n = 12) received individualized communication training; up to 10 of their patients (n = 84) received a previsit individualized communication coaching session that incorporated a QPL. Control-group oncologists (n = 12) and patients (n = 86) received no interventions. Topics of interest identified by patients during the coaching session were summarized from coaching notes; one office visit after the coaching session was audio recorded, transcribed, and analyzed by using linear regression modeling for group differences. Results Compared with controls, more than twice as many intervention-group participants brought up QPL-related topics during their office visits (70.2% v 32.6%; P < .001). Patients in the intervention group were nearly three times more likely to ask about prognosis (16.7% v 5.8%; P = .03). Of 262 topics of interest identified during coaching, 158 (60.3%) were QPL related; 20 (12.7%) addressed prognosis. Overall, patients in the intervention group brought up 82.4% of topics of interest during the office visit. Conclusion A combined coaching and QPL intervention was effective in helping patients with advanced cancer and their caregivers identify and bring up topics of concern, including prognosis, during their subsequent oncologist visits. Considering that most patients are misinformed about prognosis, more intensive steps are needed to better promote such discussions.","{'model': 'tldr@v2.0.0', 'text': 'A combined coaching and QPL intervention was effective in helping patients with advanced cancer and their caregivers identify and bring up topics of concern, including prognosis, during their subsequent oncologist visits.'}",https://europepmc.org/articles/pmc5455683?pdf=render -Correlated prompt fission data in transport simulations,P. Talou,,,https://arxiv.org/pdf/1710.00107 -Detection of Low-energy Breaks in Gamma-Ray Burst Prompt Emission Spectra,G. Oganesyan,"The radiative process responsible for gamma-ray burst (GRB) prompt emission has not been identified yet. If dominated by fast-cooling synchrotron radiation, the part of the spectrum immediately below the peak energy should display a power-law behavior with slope −3/2, which breaks to a higher value of −2/3 (i.e., to a harder spectral shape) at lower energies. Prompt emission spectral data (usually available only down to ∼10 keV) are consistent with one single power-law behavior below the peak, with typical slope ∼−1, higher than (and thus inconsistent with) the expected value of −3/2. To better characterize the spectral shape at low energy, we analyzed 14 GRBs for which the Swift X-ray Telescope started observations during the prompt phase. When available, Fermi-GBM observations have been included in the analysis. For 67% of the spectra, models that usually give a satisfactory description of the prompt emission (e.g., the Band model) fail to reproduce the 0.5–1000 keV spectra: low-energy data outline the presence of a spectral break around a few keV. We then introduce an empirical fitting function that includes a low-energy power law with index α1, a break energy Ebreak, a second power law with index α2, and a peak energy Epeak.
The values of α1 and α2 are very close to the expectations from synchrotron radiation (−2/3 and −3/2, respectively). In this context, Ebreak corresponds to the cooling break frequency. The relatively small ratio Epeak/Ebreak suggests a regime of moderately fast cooling, which might solve the long-lasting problem of the apparent inconsistency between measured and predicted low-energy spectral indices.",,https://iopscience.iop.org/article/10.3847/1538-4357/aa831e/pdf -Prompt neutrinos from atmospheric charm in the general-mass variable-flavor-number scheme,M. Benzke,,,https://link.springer.com/content/pdf/10.1007/JHEP12(2017)021.pdf -Marginally fast cooling synchrotron models for prompt GRBs,P. Beniamini,"Previous studies have considered synchrotron as the emission mechanism for prompt Gamma-Ray Bursts (GRBs). These works have shown that the electrons must cool on a timescale comparable to the dynamic time at the source in order to satisfy spectral constraints while maintaining high radiative efficiency. We focus on conditions where synchrotron cooling is balanced by a continuous source of heating, and in which these constraints are naturally satisfied. Assuming that a majority of the electrons in the emitting region are contributing to the observed peak, we find that the energy per electron has to be $E\gtrsim 20$ GeV and that the Lorentz factor of the emitting material has to be very large $10^3\lesssim \Gamma_{\rm em} \lesssim 10^4$, well in excess of the bulk Lorentz factor of the jet inferred from GRB afterglows. A number of independent constraints then indicate that the emitters must be moving relativistically, with $\Gamma'\approx 10$, relative to the bulk frame of the jet and that the jet must be highly magnetized upstream of the emission region, $\sigma_{\rm up}\gtrsim 30$. The emission radius is also strongly constrained in this model to $R\gtrsim 10^{16}$cm. These values are consistent with magnetic jet models where the dissipation is driven by magnetic reconnection that takes place far away from the base of the jet.",,https://academic.oup.com/mnras/article-pdf/476/2/1785/24280125/sty340.pdf -Measurement of the prompt J/ψ pair production cross-section in pp collisions at √s = 8 TeV with the ATLAS detector,Atlas Collaboration,,, -First clinical application of a prompt gamma based in vivo proton range verification system.,C. Richter,,"{'model': 'tldr@v2.0.0', 'text': 'For the first time, range verification based on prompt gamma imaging was applied for a clinical proton treatment and the potential to improve the precision of particle therapy with this technique has increased considerably.'}", -Measurement of Prompt D^{0} Meson Azimuthal Anisotropy in Pb-Pb Collisions at sqrt[s_{NN}]=5.02  TeV.,A. Sirunyan,"The prompt D^{0} meson azimuthal anisotropy coefficients, v_{2} and v_{3}, are measured at midrapidity (|y|<1.0) in Pb-Pb collisions at a center-of-mass energy sqrt[s_{NN}]=5.02  TeV per nucleon pair with data collected by the CMS experiment. The measurement is performed in the transverse momentum (p_{T}) range of 1 to 40  GeV/c, for central and midcentral collisions. The v_{2} coefficient is found to be positive throughout the p_{T} range studied. The first measurement of the prompt D^{0} meson v_{3} coefficient is performed, and values up to 0.07 are observed for p_{T} around 4  GeV/c. Compared to measurements of charged particles, a similar p_{T} dependence, but smaller magnitude for p_{T}<6  GeV/c, is found for prompt D^{0} meson v_{2} and v_{3} coefficients.
The results are consistent with the presence of collective motion of charm quarks at low p_{T} and a path length dependence of charm quark energy loss at high p_{T}, thereby providing new constraints on the theoretical description of the interactions between charm quarks and the quark-gluon plasma.","{'model': 'tldr@v2.0.0', 'text': 'The results are consistent with the presence of collective motion of charm quarks at low p_{T} and a path length dependence of charm quark energy loss at high p_{T}, thereby providing new constraints on the theoretical description of the interactions between charm quarks and the quark-gluon plasma.'}",http://link.aps.org/pdf/10.1103/PhysRevLett.120.202301 -Characterization of gamma-ray burst prompt emission spectra down to soft X-rays,G. Oganesyan,"Detection of prompt emission by Swift-XRT provides a unique tool to study how the prompt spectrum of gamma-ray bursts (GRBs) extends down to the soft X-ray band. This energy band is particularly important for prompt emission studies, since it is towards low energies that the observed spectral shape is in disagreement with the synchrotron predictions. Unfortunately, the number of cases where XRT started observing the GRB location during the prompt phase is very limited. In this work, we collect a sample of 34 GRBs and perform joint XRT+BAT spectral analysis of prompt radiation, extending a previous study focused on the 14 brightest cases. Fermi-GBM observations are included in the analysis when available (11 cases), allowing the characterization of prompt spectra from soft X-rays to MeV energies. In 62% of the spectra, the XRT data reveal a hardening of the spectrum, well described by introducing an additional, low-energy power-law segment (with index α1) into the empirical fitting function. The break energy below which the spectrum hardens has values between 3 keV and 22 keV. A second power law (α2) describes the spectrum between the break energy and the peak energy. The mean values of the photon indices are 〈α1〉 = −0.51 (σ = 0.24) and 〈α2〉 = −1.56 (σ = 0.26). These are consistent, within one σ, with the synchrotron values in the fast cooling regime. As a test, if we exclude XRT data from the fits we find typical results: the spectrum below the peak energy is described by a power law with 〈α〉 = −1.15. This shows the relevance of soft X-ray data in revealing prompt emission spectra consistent with synchrotron spectra. Finally, we do not find any correlation between the presence of the X-ray break energy and the flux, fluence, or duration of the prompt emission.",,https://www.aanda.org/articles/aa/pdf/2018/08/aa32172-17.pdf -Gamma-ray Burst Prompt Correlations: Selection and Instrumental Effects,M. Dainotti,"The prompt emission mechanism of gamma-ray bursts (GRB) even after several decades remains a mystery. However, it is believed that correlations between observable GRB properties, given their huge luminosity/radiated energy and redshift distribution extending up to at least z ≈ 9, are promising possible cosmological tools. They also may help to discriminate among the most plausible theoretical models. Nowadays, the objective is to make GRBs standard candles, similar to supernovae (SNe) Ia, through well-established and robust correlations. However, differently from SNe Ia, GRBs span several orders of magnitude in their energetics, hence they cannot yet be considered standard candles.
Additionally, being observed at very large distances, their physical properties are affected by selection biases, the so-called Malmquist bias or Eddington effect. We describe the state of the art on how GRB prompt correlations are corrected for these selection biases to employ them as redshift estimators and cosmological tools. We stress that only after an appropriate evaluation of, and correction for, these effects can GRB correlations be used to discriminate among the theoretical models of prompt emission, to estimate the cosmological parameters, and to serve as distance indicators via redshift estimation.",,https://iopscience.iop.org/article/10.1088/1538-3873/aaa8d7/pdf -The characteristics and effectiveness of Question Prompt List interventions in oncology: a systematic review of the literature,K. Brandes,"Question Prompt Lists (QPLs) have been used extensively in the oncology setting to improve communication, psychological and/or cognitive outcomes. In this systematic review, the objectives were to (a) examine the methodological quality of QPL interventions, (b) review the effectiveness of QPL interventions on communication, psychological and/or cognitive outcomes of cancer patients, (c) gain more insight into the characteristics of QPL interventions (e.g., the number and content of questions, and the mode of delivery) and (d) explore whether the effectiveness of QPL interventions might be explained by their characteristics.","{'model': 'tldr@v2.0.0', 'text': 'This systematic review reviewed the effectiveness of QPL interventions on communication, psychological and/or cognitive outcomes of cancer patients, and gained more insight into the characteristics of QPL interventions to explore whether the effectiveness might be explained by their characteristics.'}",https://pure.uva.nl/ws/files/2138355/167564_433599.pdf -Prompt atmospheric neutrino fluxes: perturbative QCD models and nuclear effects,A. Bhattacharya,,,https://link.springer.com/content/pdf/10.1007%2FJHEP11%282016%29167.pdf -"Weakly Bound Free Radicals in Combustion: ""Prompt"" Dissociation of Formyl Radicals and Its Effect on Laminar Flame Speeds.",N. Labbe,"Weakly bound free radicals have low dissociation thresholds such that at high temperatures, time scales for dissociation and collisional relaxation become comparable, leading to significant dissociation during the vibrational-rotational relaxation process. Here we characterize this ""prompt"" dissociation of formyl (HCO), an important combustion radical, using direct dynamics calculations for OH + CH2O and H + CH2O (key HCO-forming reactions). For all other HCO-forming reactions, presumption of a thermal incipient HCO distribution was used to derive prompt dissociation fractions. Inclusion of these theoretically derived HCO prompt dissociation fractions into combustion kinetics models provides an additional source of H-atoms that feeds chain-branching reactions. Simulations using these updated combustion models therefore show enhanced flame propagation in 1,3,5-trioxane and acetylene.
The present results suggest that HCO prompt dissociation should be included when simulating flames of hydrocarbons and oxygenated molecules and that prompt dissociations of other weakly bound radicals may also impact combustion simulations.","{'model': 'tldr@v2.0.0', 'text': 'It is suggested that HCO prompt dissociation should be included when simulating flames of hydrocarbons and oxygenated molecules and that prompt dissociations of other weakly bound radicals may also impact combustion simulations.'}", -Observation and measurements of the production of prompt and non-prompt J/psi mesons in association with a Z boson in pp collisions at √s = 8 TeV with the ATLAS detector,S. Leontsinis,"The associated production of a vector boson with heavy quarkonia is a key observable for understanding the quarkonium production mechanisms. In this poster, the observation of the production of the Z boson in association with a prompt or a non-prompt J/ψ meson with the ATLAS detector at the LHC is presented, and its production rate is measured relative to the inclusive Z production. Relative contributions to the signal from single and double parton scattering are estimated. Single parton scattering cross-sections are compared to cutting-edge theoretical calculations in the colour singlet and colour octet formalisms. Finally, a lower limit on the double parton scattering effective cross-section is extracted.",, -Prompt gravity signal induced by the 2011 Tohoku-Oki earthquake,J. Montagner,,"{'model': 'tldr@v2.0.0', 'text': 'While prompt gravity signal detection with state-of-the-art gravimeters and seismometers is challenged by background seismic noise, its robust detection with gravity gradiometers under development could open new directions in earthquake seismology, and overcome fundamental limitations of current earthquake early-warning systems imposed by the propagation speed of seismic waves.'}",https://www.nature.com/articles/ncomms13349.pdf -Imaging of prompt gamma rays emitted during delivery of clinical proton beams with a Compton camera: feasibility studies for range verification,J. Polf,"The purpose of this paper is to evaluate the ability of a prototype Compton camera (CC) to measure prompt gamma rays (PG) emitted during delivery of clinical proton pencil beams for prompt gamma imaging (PGI) as a means of providing in vivo verification of the delivered proton radiotherapy beams. A water phantom was irradiated with clinical 114 MeV and 150 MeV proton pencil beams. Up to 500 cGy of dose was delivered per irradiation using clinical beam currents. The prototype CC was placed 15 cm from the beam central axis and PGs from 0.2 MeV up to 6.5 MeV were measured during irradiation. From the measured data, two-dimensional (2D) images of the PG emission were reconstructed. One-dimensional (1D) profiles were extracted from the PG images and compared to measured depth dose curves of the delivered proton pencil beams. The CC was able to measure PG emission during delivery of both 114 MeV and 150 MeV proton beams at clinical beam currents. 2D images of the PG emission were reconstructed for single 150 MeV proton pencil beams as well as for a 5 × 5 cm mono-energetic layer of 114 MeV pencil beams. Shifts in the Bragg peak (BP) range were detectable on the 2D images. 1D profiles extracted from the PG images show that the distal falloff of the PG emission profile lined up well with the distal BP falloff. Shifts as small as 3 mm in the beam range could be detected from the 1D PG profiles with an accuracy of 1.5 mm or better.
However, with the current CC prototype, a dose of 400 cGy was required to acquire adequate PG signal for 2D PG image reconstruction. It was possible to measure PG interactions with our prototype CC during delivery of proton pencil beams at clinical dose rates. Images of the PG emission could be reconstructed and shifts in the BP range were detectable. Therefore PGI with a CC for in vivo range verification during proton treatment delivery is feasible. However, improvements in the prototype CC detection efficiency and reconstruction algorithms are necessary to make it a clinically viable PGI system.","{'model': 'tldr@v2.0.0', 'text': 'PGI with a CC for in vivo range verification during proton treatment delivery is feasible, however, improvements in the prototype CC detection efficiency and reconstruction algorithms are necessary to make it a clinically viable PGI system.'}", -A revised analysis of gamma-ray bursts' prompt efficiencies,P. Beniamini,"The prompt Gamma-Ray Bursts' (GRBs) efficiency is an important clue to the emission mechanism producing the $\gamma$-rays. Previous estimates of the kinetic energy of the blast waves, based on the X-ray afterglow luminosity $L_X$, suggested that this efficiency is large, with values above 90\% in some cases. This poses a problem to emission mechanisms and in particular to the internal shocks model. These estimates are based, however, on the assumption that the X-ray emitting electrons are fast cooling and that their Inverse Compton (IC) losses are negligible. The observed correlations between $L_X$ (and hence the blast wave energy) and $E_{\gamma\rm ,iso}$, the isotropic equivalent energy in the prompt emission, have been considered as observational evidence supporting this analysis. It is reasonable that the prompt gamma-ray energy and the blast wave kinetic energy are correlated, and the observed correlation corroborates, therefore, the notion that $L_X$ is indeed a valid proxy for the latter. Recent findings suggest that the magnetic field in the afterglow shocks is significantly weaker than was earlier thought and its equipartition fraction, $\epsilon_B$, could be as low as $10^{-4}$ or even lower. Motivated by these findings we reconsider the problem, taking now IC cooling into account. We find that the observed $L_X-E_{\gamma\rm ,iso}$ correlation is recovered also when IC losses are significant. For small $\epsilon_B$ values the blast wave must be more energetic and we find that the corresponding prompt efficiency is significantly smaller than previously thought. For example, for $\epsilon_B\sim10^{-4}$ we infer a typical prompt efficiency of $\sim15\%$.",,https://academic.oup.com/mnras/article-pdf/461/1/51/8043156/stw1331.pdf -Proton range verification through prompt gamma-ray spectroscopy,J. Verburg,"We present an experimental study of a novel method to verify the range of proton therapy beams. Differential cross sections were measured for 15 prompt gamma-ray lines from proton-nuclear interactions with 12C and 16O at proton energies up to 150 MeV. These cross sections were used to model discrete prompt gamma-ray emissions along proton pencil-beams. By fitting detected prompt gamma-ray counts to these models, we simultaneously determined the beam range and the oxygen and carbon concentration of the irradiated matter. The performance of the method was assessed in two phantoms with different elemental concentrations, using a small scale prototype detector.
Based on five pencil-beams with different ranges delivering 5 × 108 protons and without prior knowledge of the elemental composition at the measurement point, the absolute range was determined with a standard deviation of 1.0–1.4 mm. Relative range shifts at the same dose level were detected with a standard deviation of 0.3–0.5 mm. The determined oxygen and carbon concentrations also agreed well with the actual values. These results show that quantitative prompt gamma-ray measurements enable knowledge of nuclear reaction cross sections to be used for precise proton range verification in the presence of tissue with an unknown composition.","{'model': 'tldr@v2.0.0', 'text': 'Results show that quantitative prompt gamma-ray measurements enable knowledge of nuclear reaction cross sections to be used for precise proton range verification in the presence of tissue with an unknown composition.'}", -SEARCH FOR PROMPT NEUTRINO EMISSION FROM GAMMA-RAY BURSTS WITH ICECUBE,I. C. M. Aartsen,"We present constraints derived from a search of four years of IceCube data for a prompt neutrino flux from gamma-ray bursts (GRBs). A single low-significance neutrino, compatible with the atmospheric neutrino background, was found in coincidence with one of the 506 observed bursts. Although GRBs have been proposed as candidate sources for ultra-high-energy cosmic rays, our limits on the neutrino flux disfavor much of the parameter space for the latest models. We also find that no more than ∼1% of the recently observed astrophysical neutrino flux consists of prompt emission from GRBs that are potentially observable by existing satellites.",,https://iopscience.iop.org/article/10.1088/2041-8205/805/1/L5/pdf -Calculation of conventional and prompt lepton fluxes at very high energy,A. Fedynitch,"An efficient method for calculating inclusive conventional and prompt atmospheric lepton fluxes is presented. The coupled cascade equations are solved numerically by formulating them as a matrix equation. The presented approach is very flexible and allows the use of different hadronic interaction models, realistic parametrizations of the primary cosmic-ray flux and the Earth's atmosphere, and a detailed treatment of particle interactions and decays. The power of the developed method is illustrated by calculating lepton flux predictions for a number of different scenarios.",,https://www.epj-conferences.org/articles/epjconf/pdf/2015/18/epjconf-isvhecri2014_08001.pdf -Range assessment in particle therapy based on prompt γ-ray timing measurements,C. Golnik,"Proton and ion beams open up new vistas for the curative treatment of tumors, but adequate technologies for monitoring the compliance of dose delivery with treatment plans in real time are still missing. Range assessment, meaning the monitoring of therapy-particle ranges in tissue during dose delivery (treatment), is a continuous challenge considered a key for tapping the full potential of particle therapies. In this context the paper introduces an unconventional concept of range assessment by prompt-gamma timing (PGT), which is based on an elementary physical effect not considered so far: therapy particles penetrating tissue move very fast, but still need a finite transit time—about 1–2 ns in the case of protons with a 5–20 cm range—from entering the patient’s body until stopping in the target volume. The transit time increases with the particle range. This causes measurable effects in PGT spectra, usable for range verification.
The concept was verified by proton irradiation experiments at the AGOR cyclotron, KVI-CART, University of Groningen. Based on the presented kinematical relations, we describe model calculations that very precisely reproduce the experimental results. As the clinical treatment conditions entail measurement constraints (e.g. limited treatment time), we propose a setup, based on clinical irradiation conditions, capable of determining proton range deviations within a few seconds of irradiation, thus allowing for a fast safety survey. Range variations of 2 mm are expected to be clearly detectable.","{'model': 'tldr@v2.0.0', 'text': 'This paper proposes a setup, based on clinical irradiation conditions, capable of determining proton range deviations within a few seconds of irradiation, thus allowing for a fast safety survey, and describes model calculations that very precisely reproduce the experimental results.'}",https://iopscience.iop.org/article/10.1088/0031-9155/59/18/5399/pdf -Prompt fission neutron spectra of actinides,R. Capote,,, -Measurement of prompt D-meson production in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV,B. Abelev,"The $p_{\rm T}$-differential production cross sections of the prompt charmed mesons $D^0$, $D^+$, $D^{*+}$ and $D_{\rm s}^{+}$ and their charge conjugate in the rapidity interval $-0.96