index.json
[{"authors":["admin"],"categories":null,"content":"I am a post doctoral reseacher at the Caliber Lab in the Department of Psychology, New Mexico State University. I recieved my Ph.D. in Applied Cognitive Psychology at Claremont Graduate University under the supervision of Andrew R.A. Conway, PhD. My primary research interests include the impact of working memory on selective attention, individual differences in cognitive ability, and statistical methods (e.g., structural equation modeling, item response theory, and psychometric network analysis) for psychometric and cognitive modeling of human complex cognition. I am also interested in programming and data visualization with R and Python.\n","date":1633651200,"expirydate":-62135596800,"kind":"term","lang":"en","lastmod":1636065049,"objectID":"2525497d367e79493fd32b198b28f040","permalink":"https://hanhao23.github.io/author/han-hao/","publishdate":"0001-01-01T00:00:00Z","relpermalink":"/author/han-hao/","section":"authors","summary":"I am a post doctoral reseacher at the Caliber Lab in the Department of Psychology, New Mexico State University. I recieved my Ph.D. in Applied Cognitive Psychology at Claremont Graduate University under the supervision of Andrew R.","tags":null,"title":"Han Hao","type":"authors"},{"authors":null,"categories":null,"content":"Flexibility This feature can be used for publishing content such as:\n Online courses Project or software documentation Tutorials The courses folder may be renamed. For example, we can rename it to docs for software/project documentation or tutorials for creating an online course.\nDelete tutorials To remove these pages, delete the courses folder and see below to delete the associated menu link.\nUpdate site menu After renaming or deleting the courses folder, you may wish to update any [[main]] menu links to it by editing your menu configuration at config/_default/menus.toml.\nFor example, if you delete this folder, you can remove the following from your menu configuration:\n[[main]] name = \u0026quot;Courses\u0026quot; url = \u0026quot;courses/\u0026quot; weight = 50 Or, if you are creating a software documentation site, you can rename the courses folder to docs and update the associated Courses menu configuration to:\n[[main]] name = \u0026quot;Docs\u0026quot; url = \u0026quot;docs/\u0026quot; weight = 50 Update the docs menu If you use the docs layout, note that the name of the menu in the front matter should be in the form [menu.X] where X is the folder name. Hence, if you rename the courses/example/ folder, you should also rename the menu definitions in the front matter of files within courses/example/ from [menu.example] to [menu.\u0026lt;NewFolderName\u0026gt;].\n","date":1536451200,"expirydate":-62135596800,"kind":"section","lang":"en","lastmod":1536451200,"objectID":"59c3ce8e202293146a8a934d37a4070b","permalink":"https://hanhao23.github.io/courses/example/","publishdate":"2018-09-09T00:00:00Z","relpermalink":"/courses/example/","section":"courses","summary":"Learn how to use Academic's docs layout for publishing online courses, software documentation, and tutorials.","tags":null,"title":"Overview","type":"docs"},{"authors":null,"categories":null,"content":"In this tutorial, I\u0026rsquo;ll share my top 10 tips for getting started with Academic:\nTip 1 Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis posuere tellus ac convallis placerat. Proin tincidunt magna sed ex sollicitudin condimentum. Sed ac faucibus dolor, scelerisque sollicitudin nisi. 
Cras purus urna, suscipit quis sapien eu, pulvinar tempor diam. Quisque risus orci, mollis id ante sit amet, gravida egestas nisl. Sed ac tempus magna. Proin in dui enim. Donec condimentum, sem id dapibus fringilla, tellus enim condimentum arcu, nec volutpat est felis vel metus. Vestibulum sit amet erat at nulla eleifend gravida.\nNullam vel molestie justo. Curabitur vitae efficitur leo. In hac habitasse platea dictumst. Sed pulvinar mauris dui, eget varius purus congue ac. Nulla euismod, lorem vel elementum dapibus, nunc justo porta mi, sed tempus est est vel tellus. Nam et enim eleifend, laoreet sem sit amet, elementum sem. Morbi ut leo congue, maximus velit ut, finibus arcu. In et libero cursus, rutrum risus non, molestie leo. Nullam congue quam et volutpat malesuada. Sed risus tortor, pulvinar et dictum nec, sodales non mi. Phasellus lacinia commodo laoreet. Nam mollis, erat in feugiat consectetur, purus eros egestas tellus, in auctor urna odio at nibh. Mauris imperdiet nisi ac magna convallis, at rhoncus ligula cursus.\nCras aliquam rhoncus ipsum, in hendrerit nunc mattis vitae. Duis vitae efficitur metus, ac tempus leo. Cras nec fringilla lacus. Quisque sit amet risus at ipsum pharetra commodo. Sed aliquam mauris at consequat eleifend. Praesent porta, augue sed viverra bibendum, neque ante euismod ante, in vehicula justo lorem ac eros. Suspendisse augue libero, venenatis eget tincidunt ut, malesuada at lorem. Donec vitae bibendum arcu. Aenean maximus nulla non pretium iaculis. Quisque imperdiet, nulla in pulvinar aliquet, velit quam ultrices quam, sit amet fringilla leo sem vel nunc. Mauris in lacinia lacus.\nSuspendisse a tincidunt lacus. Curabitur at urna sagittis, dictum ante sit amet, euismod magna. Sed rutrum massa id tortor commodo, vitae elementum turpis tempus. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean purus turpis, venenatis a ullamcorper nec, tincidunt et massa. Integer posuere quam rutrum arcu vehicula imperdiet. Mauris ullamcorper quam vitae purus congue, quis euismod magna eleifend. Vestibulum semper vel augue eget tincidunt. Fusce eget justo sodales, dapibus odio eu, ultrices lorem. Duis condimentum lorem id eros commodo, in facilisis mauris scelerisque. Morbi sed auctor leo. Nullam volutpat a lacus quis pharetra. Nulla congue rutrum magna a ornare.\nAliquam in turpis accumsan, malesuada nibh ut, hendrerit justo. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Quisque sed erat nec justo posuere suscipit. Donec ut efficitur arcu, in malesuada neque. Nunc dignissim nisl massa, id vulputate nunc pretium nec. Quisque eget urna in risus suscipit ultricies. Pellentesque odio odio, tincidunt in eleifend sed, posuere a diam. Nam gravida nisl convallis semper elementum. Morbi vitae felis faucibus, vulputate orci placerat, aliquet nisi. Aliquam erat volutpat. Maecenas sagittis pulvinar purus, sed porta quam laoreet at.\nTip 2 Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis posuere tellus ac convallis placerat. Proin tincidunt magna sed ex sollicitudin condimentum. Sed ac faucibus dolor, scelerisque sollicitudin nisi. Cras purus urna, suscipit quis sapien eu, pulvinar tempor diam. Quisque risus orci, mollis id ante sit amet, gravida egestas nisl. Sed ac tempus magna. Proin in dui enim. Donec condimentum, sem id dapibus fringilla, tellus enim condimentum arcu, nec volutpat est felis vel metus. Vestibulum sit amet erat at nulla eleifend gravida.\nNullam vel molestie justo. Curabitur vitae efficitur leo. 
In hac habitasse platea dictumst. Sed pulvinar mauris dui, eget varius purus congue ac. Nulla euismod, lorem vel elementum dapibus, nunc justo porta mi, sed tempus est est vel tellus. Nam et enim eleifend, laoreet sem sit amet, elementum sem. Morbi ut leo congue, maximus velit ut, finibus arcu. In et libero cursus, rutrum risus non, molestie leo. Nullam congue quam et volutpat malesuada. Sed risus tortor, pulvinar et dictum nec, sodales non mi. Phasellus lacinia commodo laoreet. Nam mollis, erat in feugiat consectetur, purus eros egestas tellus, in auctor urna odio at nibh. Mauris imperdiet nisi ac magna convallis, at rhoncus ligula cursus.\nCras aliquam rhoncus ipsum, in hendrerit nunc mattis vitae. Duis vitae efficitur metus, ac tempus leo. Cras nec fringilla lacus. Quisque sit amet risus at ipsum pharetra commodo. Sed aliquam mauris at consequat eleifend. Praesent porta, augue sed viverra bibendum, neque ante euismod ante, in vehicula justo lorem ac eros. Suspendisse augue libero, venenatis eget tincidunt ut, malesuada at lorem. Donec vitae bibendum arcu. Aenean maximus nulla non pretium iaculis. Quisque imperdiet, nulla in pulvinar aliquet, velit quam ultrices quam, sit amet fringilla leo sem vel nunc. Mauris in lacinia lacus.\nSuspendisse a tincidunt lacus. Curabitur at urna sagittis, dictum ante sit amet, euismod magna. Sed rutrum massa id tortor commodo, vitae elementum turpis tempus. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean purus turpis, venenatis a ullamcorper nec, tincidunt et massa. Integer posuere quam rutrum arcu vehicula imperdiet. Mauris ullamcorper quam vitae purus congue, quis euismod magna eleifend. Vestibulum semper vel augue eget tincidunt. Fusce eget justo sodales, dapibus odio eu, ultrices lorem. Duis condimentum lorem id eros commodo, in facilisis mauris scelerisque. Morbi sed auctor leo. Nullam volutpat a lacus quis pharetra. Nulla congue rutrum magna a ornare.\nAliquam in turpis accumsan, malesuada nibh ut, hendrerit justo. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Quisque sed erat nec justo posuere suscipit. Donec ut efficitur arcu, in malesuada neque. Nunc dignissim nisl massa, id vulputate nunc pretium nec. Quisque eget urna in risus suscipit ultricies. Pellentesque odio odio, tincidunt in eleifend sed, posuere a diam. Nam gravida nisl convallis semper elementum. Morbi vitae felis faucibus, vulputate orci placerat, aliquet nisi. Aliquam erat volutpat. Maecenas sagittis pulvinar purus, sed porta quam laoreet at.\n","date":1557010800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1557010800,"objectID":"74533bae41439377bd30f645c4677a27","permalink":"https://hanhao23.github.io/courses/example/example1/","publishdate":"2019-05-05T00:00:00+01:00","relpermalink":"/courses/example/example1/","section":"courses","summary":"In this tutorial, I\u0026rsquo;ll share my top 10 tips for getting started with Academic:\nTip 1 Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis posuere tellus ac convallis placerat. Proin tincidunt magna sed ex sollicitudin condimentum.","tags":null,"title":"Example Page 1","type":"docs"},{"authors":null,"categories":null,"content":"Here are some more tips for getting started with Academic:\nTip 3 Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis posuere tellus ac convallis placerat. Proin tincidunt magna sed ex sollicitudin condimentum. Sed ac faucibus dolor, scelerisque sollicitudin nisi. 
Cras purus urna, suscipit quis sapien eu, pulvinar tempor diam. Quisque risus orci, mollis id ante sit amet, gravida egestas nisl. Sed ac tempus magna. Proin in dui enim. Donec condimentum, sem id dapibus fringilla, tellus enim condimentum arcu, nec volutpat est felis vel metus. Vestibulum sit amet erat at nulla eleifend gravida.\nNullam vel molestie justo. Curabitur vitae efficitur leo. In hac habitasse platea dictumst. Sed pulvinar mauris dui, eget varius purus congue ac. Nulla euismod, lorem vel elementum dapibus, nunc justo porta mi, sed tempus est est vel tellus. Nam et enim eleifend, laoreet sem sit amet, elementum sem. Morbi ut leo congue, maximus velit ut, finibus arcu. In et libero cursus, rutrum risus non, molestie leo. Nullam congue quam et volutpat malesuada. Sed risus tortor, pulvinar et dictum nec, sodales non mi. Phasellus lacinia commodo laoreet. Nam mollis, erat in feugiat consectetur, purus eros egestas tellus, in auctor urna odio at nibh. Mauris imperdiet nisi ac magna convallis, at rhoncus ligula cursus.\nCras aliquam rhoncus ipsum, in hendrerit nunc mattis vitae. Duis vitae efficitur metus, ac tempus leo. Cras nec fringilla lacus. Quisque sit amet risus at ipsum pharetra commodo. Sed aliquam mauris at consequat eleifend. Praesent porta, augue sed viverra bibendum, neque ante euismod ante, in vehicula justo lorem ac eros. Suspendisse augue libero, venenatis eget tincidunt ut, malesuada at lorem. Donec vitae bibendum arcu. Aenean maximus nulla non pretium iaculis. Quisque imperdiet, nulla in pulvinar aliquet, velit quam ultrices quam, sit amet fringilla leo sem vel nunc. Mauris in lacinia lacus.\nSuspendisse a tincidunt lacus. Curabitur at urna sagittis, dictum ante sit amet, euismod magna. Sed rutrum massa id tortor commodo, vitae elementum turpis tempus. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean purus turpis, venenatis a ullamcorper nec, tincidunt et massa. Integer posuere quam rutrum arcu vehicula imperdiet. Mauris ullamcorper quam vitae purus congue, quis euismod magna eleifend. Vestibulum semper vel augue eget tincidunt. Fusce eget justo sodales, dapibus odio eu, ultrices lorem. Duis condimentum lorem id eros commodo, in facilisis mauris scelerisque. Morbi sed auctor leo. Nullam volutpat a lacus quis pharetra. Nulla congue rutrum magna a ornare.\nAliquam in turpis accumsan, malesuada nibh ut, hendrerit justo. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Quisque sed erat nec justo posuere suscipit. Donec ut efficitur arcu, in malesuada neque. Nunc dignissim nisl massa, id vulputate nunc pretium nec. Quisque eget urna in risus suscipit ultricies. Pellentesque odio odio, tincidunt in eleifend sed, posuere a diam. Nam gravida nisl convallis semper elementum. Morbi vitae felis faucibus, vulputate orci placerat, aliquet nisi. Aliquam erat volutpat. Maecenas sagittis pulvinar purus, sed porta quam laoreet at.\nTip 4 Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis posuere tellus ac convallis placerat. Proin tincidunt magna sed ex sollicitudin condimentum. Sed ac faucibus dolor, scelerisque sollicitudin nisi. Cras purus urna, suscipit quis sapien eu, pulvinar tempor diam. Quisque risus orci, mollis id ante sit amet, gravida egestas nisl. Sed ac tempus magna. Proin in dui enim. Donec condimentum, sem id dapibus fringilla, tellus enim condimentum arcu, nec volutpat est felis vel metus. Vestibulum sit amet erat at nulla eleifend gravida.\nNullam vel molestie justo. Curabitur vitae efficitur leo. 
In hac habitasse platea dictumst. Sed pulvinar mauris dui, eget varius purus congue ac. Nulla euismod, lorem vel elementum dapibus, nunc justo porta mi, sed tempus est est vel tellus. Nam et enim eleifend, laoreet sem sit amet, elementum sem. Morbi ut leo congue, maximus velit ut, finibus arcu. In et libero cursus, rutrum risus non, molestie leo. Nullam congue quam et volutpat malesuada. Sed risus tortor, pulvinar et dictum nec, sodales non mi. Phasellus lacinia commodo laoreet. Nam mollis, erat in feugiat consectetur, purus eros egestas tellus, in auctor urna odio at nibh. Mauris imperdiet nisi ac magna convallis, at rhoncus ligula cursus.\nCras aliquam rhoncus ipsum, in hendrerit nunc mattis vitae. Duis vitae efficitur metus, ac tempus leo. Cras nec fringilla lacus. Quisque sit amet risus at ipsum pharetra commodo. Sed aliquam mauris at consequat eleifend. Praesent porta, augue sed viverra bibendum, neque ante euismod ante, in vehicula justo lorem ac eros. Suspendisse augue libero, venenatis eget tincidunt ut, malesuada at lorem. Donec vitae bibendum arcu. Aenean maximus nulla non pretium iaculis. Quisque imperdiet, nulla in pulvinar aliquet, velit quam ultrices quam, sit amet fringilla leo sem vel nunc. Mauris in lacinia lacus.\nSuspendisse a tincidunt lacus. Curabitur at urna sagittis, dictum ante sit amet, euismod magna. Sed rutrum massa id tortor commodo, vitae elementum turpis tempus. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean purus turpis, venenatis a ullamcorper nec, tincidunt et massa. Integer posuere quam rutrum arcu vehicula imperdiet. Mauris ullamcorper quam vitae purus congue, quis euismod magna eleifend. Vestibulum semper vel augue eget tincidunt. Fusce eget justo sodales, dapibus odio eu, ultrices lorem. Duis condimentum lorem id eros commodo, in facilisis mauris scelerisque. Morbi sed auctor leo. Nullam volutpat a lacus quis pharetra. Nulla congue rutrum magna a ornare.\nAliquam in turpis accumsan, malesuada nibh ut, hendrerit justo. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Quisque sed erat nec justo posuere suscipit. Donec ut efficitur arcu, in malesuada neque. Nunc dignissim nisl massa, id vulputate nunc pretium nec. Quisque eget urna in risus suscipit ultricies. Pellentesque odio odio, tincidunt in eleifend sed, posuere a diam. Nam gravida nisl convallis semper elementum. Morbi vitae felis faucibus, vulputate orci placerat, aliquet nisi. Aliquam erat volutpat. Maecenas sagittis pulvinar purus, sed porta quam laoreet at.\n","date":1557010800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1557010800,"objectID":"1c2b5a11257c768c90d5050637d77d6a","permalink":"https://hanhao23.github.io/courses/example/example2/","publishdate":"2019-05-05T00:00:00+01:00","relpermalink":"/courses/example/example2/","section":"courses","summary":"Here are some more tips for getting started with Academic:\nTip 3 Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis posuere tellus ac convallis placerat. Proin tincidunt magna sed ex sollicitudin condimentum.","tags":null,"title":"Example Page 2","type":"docs"},{"authors":[],"categories":["R","Demo"],"content":"\r\rOverview\rThe goal of this document is to introduce applications of R for item response theory (IRT) modeling. Specifically, this document is focused on introducing basic IRT analyses for beginners using the “mirt” package (Chalmers, 2012). 
It is not intended to be a full introduction to data analysis in R, nor to basic mathematics of item response theory. Instead, this tutorial will introduce the key concepts of IRT and important features of corresponding R packages/functions that facilitate IRT modeling for beginners. For a quick reference on the basics of IRT, please see the last section of recommended readings.\nIn this tutorial, we will focus on unidimensional IRT models by presenting brief R examples using “mirt”. Specifically, we will talk about:\n1. Key concepts in IRT;\n2. Dichotomous, 1PL Model (Rasch Model);\n3. Dichotomous, 2PL Model;\n4. Polytomous, Generalized Partial Credit Model.\nInstall and Load Packages\rThe first step is to make sure you have the R packages needed in this tutorial. We can obtain the “mirt” package from CRAN (using “install.packages(‘mirt’)”), or install the development version of the package from Github using the following codes:\ninstall.packages(\u0026#39;devtools\u0026#39;)\rlibrary(\u0026#39;devtools\u0026#39;)\rinstall_github(\u0026#39;philchalmers/mirt\u0026#39;)\rWe need the following packages in this tutorial:\nlibrary(tidyverse) # For data wrangling and basic visualizations\rlibrary(psych) # For descriptive stats and assumption checks\rlibrary(mirt) # IRT modeling\r\rPrepare the Data\rThe next step is to read in and prepare corresponding data files for the tutorial. The two data files we are using in this tutorial are available at here: ReadingSpan and RotationSpan.\nThese two datasets consist of item-level responses from 261 subjects on 2 complex span tasks: reading span and rotation span. In a complex span task, each item has a varying number of elements to process and memorize (item size). The responses in the two datasets are integer numbers that reflect the numbers of correctly recalled elements for each item. For the reading span task, there are 15 items presented across 3 blocks, with item sizes varied from 3 to 7. For the rotation span task, there are 12 items presented across 3 blocks, with item sizes varied from 2 to 5.\n# Conway et al. 
(2019) Data\rwmir \u0026lt;- read.csv(\u0026quot;WMI_Read_Han_wide.csv\u0026quot;)[,-1]\rwmirot \u0026lt;- read.csv(\u0026quot;WMI_Rot_Han_wide.csv\u0026quot;)[,-1]\rcolnames(wmir) \u0026lt;- c(\u0026quot;Subject\u0026quot;, \u0026quot;V1.3\u0026quot;, \u0026quot;V1.4\u0026quot;,\u0026quot;V1.5\u0026quot;, \u0026quot;V1.6\u0026quot;, \u0026quot;V1.7\u0026quot;,\r\u0026quot;V2.3\u0026quot;, \u0026quot;V2.4\u0026quot;,\u0026quot;V2.5\u0026quot;, \u0026quot;V2.6\u0026quot;, \u0026quot;V2.7\u0026quot;,\r\u0026quot;V3.3\u0026quot;, \u0026quot;V3.4\u0026quot;,\u0026quot;V3.5\u0026quot;, \u0026quot;V3.6\u0026quot;, \u0026quot;V3.7\u0026quot;)\rcolnames(wmirot) \u0026lt;- c(\u0026quot;Subject\u0026quot;, \u0026quot;S1.2\u0026quot;,\u0026quot;S1.3\u0026quot;, \u0026quot;S1.4\u0026quot;,\u0026quot;S1.5\u0026quot;, \u0026quot;S2.2\u0026quot;,\u0026quot;S2.3\u0026quot;, \u0026quot;S2.4\u0026quot;,\u0026quot;S2.5\u0026quot;, \u0026quot;S3.2\u0026quot;,\u0026quot;S3.3\u0026quot;, \u0026quot;S3.4\u0026quot;,\u0026quot;S3.5\u0026quot;)\r# Wmi is the full dataset (N = 261)\rwmi \u0026lt;- merge(wmir, wmirot, by = \u0026quot;Subject\u0026quot;)\rhead(wmir)\r## Subject V1.3 V1.4 V1.5 V1.6 V1.7 V2.3 V2.4 V2.5 V2.6 V2.7 V3.3 V3.4 V3.5 V3.6\r## 1 1 3 4 5 2 3 3 4 2 2 1 1 2 4 0\r## 2 2 3 2 5 6 6 3 3 3 4 7 3 3 5 6\r## 3 3 3 4 3 6 6 3 4 5 3 4 3 4 5 5\r## 4 4 2 2 2 2 3 3 2 2 0 0 3 4 5 2\r## 5 5 3 3 3 4 7 3 4 5 6 5 3 1 5 6\r## 6 6 3 4 4 6 2 3 4 5 6 2 3 4 4 6\r## V3.7\r## 1 3\r## 2 7\r## 3 2\r## 4 1\r## 5 7\r## 6 7\rhead(wmirot)\r## Subject S1.2 S1.3 S1.4 S1.5 S2.2 S2.3 S2.4 S2.5 S3.2 S3.3 S3.4 S3.5\r## 1 1 2 1 0 0 0 3 1 0 1 1 0 0\r## 2 2 2 2 1 1 2 3 2 2 2 2 1 1\r## 3 3 2 1 4 1 2 3 4 1 1 2 0 3\r## 4 4 2 2 3 1 2 0 1 1 2 2 3 4\r## 5 5 2 3 4 5 2 3 4 4 2 3 4 2\r## 6 7 2 3 0 2 2 3 4 3 2 1 2 1\rLabels of the variables in the datasets indicate the corresponding block and item size of a specific item. For example, in the reading span dataset (wmir), the 5th column (“V1.6”) presents subjects’ responses on the item with 6 elements in the 1st block. Subject 1 recalled 2 of the 6 elements correctly.\nFor a detail summary of the two complex span tasks, see Conway et al. (2005) and Hao \u0026amp; Conway (2021).\n\r\rKey Concepts in Item Response Theory\rIn this section we will briefly go over some key concepts and terms we will be using in this IRT tutorial.\nScale: In this tutorial, a scale refers to any quantitative system that is designed to reflect an individual’s standing or level of ability on a latent construct or latent trait. A scale consists of multiple manifest items. These items can be questions in a survey, problems in a test, or trials in an experiment.\n- Dichotomous IRT models are applied to the items with two possible response categories (yes/no, correct/incorrect, etc.)\n- Polytomous IRT models are applicable if the items have more than two possible response categories (Likert-type response scale, questions with partial credits, etc.)\nDimensionality: The number of distinguishable attributes that a scale reflect.\n- For unidimensional IRT models, it is assumed that the scale only reflect one dimension, such that all items in the scale are assumed to reflect a unitary latent trait.\n- For multidimensional IRT models, multiple dimensions can be reflected and estimated, such that the responses to the items in the scales are assumed to reflect properties of multiple latent traits.\nTheta (\\(\\Theta\\)): the latent construct or trait that is measured by the scale. 
It represents individual differences on the latent construct being measured.\nInformation: an index to characterize the precision of measurement of the item or the test on the underlying latent construct, with high information denoting more precision. In IRT, this index is represented as a function of the latent trait level, such that the information function reflects the range of trait levels over which this item or this test is most useful for distinguishing among individuals.\nItem Characteristic Curve (ICC): AKA item trace curve. ICC represents an item response function that models the relationship between a person’s probability of endorsing an item category (p) and the level on the construct measured by the scale (\\(\\Theta\\)). The slope of the item characteristic curve reflects how well the item differentiates respondents: a steep curve (i.e., high slope) distinguishes sharply between trait levels around its threshold, whereas a flat, wide curve (i.e., low slope) cannot adequately differentiate based on ability level.\nItem Difficulty Parameter (b): the trait level on the latent scale where a person has a 50% chance of responding positively to the item. This definition of item difficulty applies to dichotomous models. For polytomous models, multiple threshold parameters (ds) are estimated for an item so that the latent trait differences between and beyond the response categories are accounted for.\n- Conceptually, the role of item difficulty parameters in an IRT model is equivalent to the intercepts of manifests in a latent factor model.\nItem Discrimination Parameter (a): how accurately a given item can differentiate individuals based on ability level; it describes the strength of an item’s discrimination between people with trait levels below and above the threshold b. This parameter is also interpreted as describing how an item is related to the latent trait measured by the scale. In other words, the a parameter for an item reflects the magnitude of item reliability (how much the item contributes to total score variance).\n- Conceptually, the role of item discrimination parameters in an IRT model is equivalent to the factor loadings of manifests in a latent factor model.\nThe “mirt” package includes an interactive graphical interface (shiny app) to allow the parameters to be modified in an IRT exemplar item in real time. To facilitate understanding of these key concepts, you can run the line of code below in your R console to activate an interactive shiny app with exemplar item trace plots for different types of IRT models.\nitemplot(shiny = TRUE)\r\rUnidimensional Dichotomous IRT Models\rIn this section we will start with the basic unidimensional dichotomous model, in which all items are assumed to measure one latent trait, and the responses to items are all binary (0 = incorrect/no, 1 = correct/yes). We will use the rotation span dataset (wmirot) in this section. As aforementioned, the raw data present numbers of correctly recalled elements for each item, which are not binary responses. Thus, we need to re-score these items using an all-or-nothing unit scoring approach (Conway et al., 2005, p. 775), such that only a response with all elements in the item correctly recalled is scored as “correct” (1), while all other responses are scored as “incorrect” (0). The “mirt” package has a built-in function, “key2binary”, to assign binary scores to items in a dataset based on a given answer key.
Thus, we can transfer all the initial rotation span responses to a binary response scale.\ndat1 \u0026lt;- key2binary(wmirot[,-1],\rkey = c(2,3,4,5,2,3,4,5,2,3,4,5))\rhead(dat1)\r## S1.2 S1.3 S1.4 S1.5 S2.2 S2.3 S2.4 S2.5 S3.2 S3.3 S3.4 S3.5\r## [1,] 1 0 0 0 0 1 0 0 0 0 0 0\r## [2,] 1 0 0 0 1 1 0 0 1 0 0 0\r## [3,] 1 0 1 0 1 1 1 0 0 0 0 0\r## [4,] 1 0 0 0 1 0 0 0 1 0 0 0\r## [5,] 1 1 1 1 1 1 1 0 1 1 1 0\r## [6,] 1 1 0 0 1 1 1 0 1 0 0 0\rAssumption Checks\rUnidimensional IRT models assume that all items are measuring a single continuous latent variable. There are different ways to test the unidimensionality assumption. For example, we can estimate McDonald’s hierarchical Omega (\\(\\omega_h\\)), which conceptually reflects percentage of variance in the scale scores accounted for by a general factor. An arbitrary cutoff of \\(\\omega_h\\) \u0026gt; .70 is usually used as the rule of thumb. Unfortunately, the current data violated this assumption using this rule of thumb (\\(\\omega_h\\) = .56), but for demonstration purpose, we went along with further analyses.\nFor further details on unidimensionality, see Berge \u0026amp; Socan (2004) and Zinbarg, Yovel, Revelle, \u0026amp; McDonald (2006).\nAnother assumption of IRT is local independence, such that item responses are independent of one another. This assumption can be checked during the modeling fitting process by investigating the residuals and compute local dependence indices using the “residuals” function.\nsummary(omega(dat1, plot = F))\r## Omega ## omega(m = dat1, plot = F)\r## Alpha: 0.7 ## G.6: 0.69 ## Omega Hierarchical: 0.56 ## Omega H asymptotic: 0.77 ## Omega Total 0.73 ## ## With eigenvalues of:\r## g F1* F2* F3* ## 1.72 0.23 0.60 0.43 ## The degrees of freedom for the model is 33 and the fit was 0.07 ## The number of observations was 262 with Chi Square = 17.75 with prob \u0026lt; 0.99 ## ## The root mean square of the residuals is 0.03 ## The df corrected root mean square of the residuals is 0.04 ## ## RMSEA and the 0.9 confidence intervals are 0 0 0\r## BIC = -166.01Explained Common Variance of the general factor = 0.58 ## ## Total, General and Subset omega for each subset\r## g F1* F2* F3*\r## Omega total for total scores and subscales 0.73 0.55 0.48 0.55\r## Omega general for total scores and subscales 0.56 0.45 0.13 0.37\r## Omega group for total scores and subscales 0.11 0.11 0.35 0.18\r\r1PL (Rasch) Model\rWe can start with a 1PL (Rasch) model, in which the discrimination parameters for all items are fixed to 1, while difficulty paramters are freely estimated in the model.\n# Model specification. Here we indicate that all columns in the dataset (1 to 12) measure the same latent factor (\u0026quot;rotation\u0026quot;)\runiDich.model1 \u0026lt;- mirt.model(\u0026quot;rotation = 1 - 12\u0026quot;)\r# Model estimation. Here we indicate that we are estimating a Rasch model, and standard errors for parameters are estimated.\runiDich.result1 \u0026lt;- mirt::mirt(dat1, uniDich.model1, itemtype = \u0026quot;Rasch\u0026quot;, SE = TRUE)\rModel and Item Fits\rwe can now investigate model fit statistics using the “M2” function, which provides the M2 index, the M2-based root mean square error of approximation (RMSEA), the standardized root mean square residual (SRMSR), and comparative fit index (CFI \u0026amp; TLI) to assess adequacy of model fit. A set of arbitrary cutoff values for the fit indices are provided here: RMSEA \u0026lt; .06; SRMSR \u0026lt; .08; CFI \u0026gt; .95; TLI \u0026gt; .95. 
Models with fit indices that saturate these cutoff values are commonly considered to have good fit. In this example, the non-significant M2 and all fit indices indicated great fit.\nM2(uniDich.result1)\r## M2 df p RMSEA RMSEA_5 RMSEA_95 SRMSR TLI CFI\r## stats 57.35425 65 0.7388255 0 0 0.02786493 0.05863013 1.016119 1\rIn IRT analyses, we can also assess how well each item fits the model. This is especially useful for item inspection and scale revision. The “itemfit” function provides S-X2 index as well as RMSEA values to assess the degree of item fit for each item. Non-significant S-X2 values and RMSEA \u0026lt; .06 are usually considered evidence of adequate fit for an item. In the current example, all items seem to fit the model well based on the indices.\nitemfit(uniDich.result1)\r## item S_X2 df.S_X2 RMSEA.S_X2 p.S_X2\r## 1 S1.2 7.250 5 0.042 0.203\r## 2 S1.3 11.494 6 0.059 0.074\r## 3 S1.4 7.566 6 0.032 0.272\r## 4 S1.5 6.330 5 0.032 0.275\r## 5 S2.2 4.483 6 0.000 0.612\r## 6 S2.3 5.462 6 0.000 0.486\r## 7 S2.4 5.275 6 0.000 0.509\r## 8 S2.5 3.816 5 0.000 0.576\r## 9 S3.2 1.528 6 0.000 0.958\r## 10 S3.3 7.055 6 0.026 0.316\r## 11 S3.4 5.081 6 0.000 0.533\r## 12 S3.5 8.589 5 0.052 0.127\rAlong with the model and item fits we can also check the local independence assumption using the “residuals” function. The following scripts provide the LD matrix as well as dfs and p-values for all LD indices. Large and significant LD indices are indicators of potential issues of local dependence and may require further attention.\nresiduals(uniDich.result1, df.p = T)\r## Degrees of freedom (lower triangle) and p-values:\r## ## S1.2 S1.3 S1.4 S1.5 S2.2 S2.3 S2.4 S2.5 S3.2 S3.3 S3.4 S3.5\r## S1.2 NA 0.812 0.572 0.317 0.736 0.537 0.572 0.038 0.228 0.142 0.224 0.302\r## S1.3 1 NA 0.441 0.326 0.221 0.176 0.827 0.232 0.553 0.508 0.160 0.778\r## S1.4 1 1.000 NA 0.833 0.077 0.643 0.414 0.248 0.694 0.237 0.412 0.482\r## S1.5 1 1.000 1.000 NA 0.486 0.734 0.575 0.144 0.600 0.677 0.879 0.553\r## S2.2 1 1.000 1.000 1.000 NA 0.741 0.758 0.214 0.233 0.084 0.770 0.214\r## S2.3 1 1.000 1.000 1.000 1.000 NA 0.265 0.505 0.261 0.126 0.194 0.222\r## S2.4 1 1.000 1.000 1.000 1.000 1.000 NA 0.429 0.694 0.534 0.063 0.248\r## S2.5 1 1.000 1.000 1.000 1.000 1.000 1.000 NA 0.523 0.187 0.866 0.086\r## S3.2 1 1.000 1.000 1.000 1.000 1.000 1.000 1.000 NA 0.151 0.774 0.872\r## S3.3 1 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 NA 0.352 0.717\r## S3.4 1 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 NA 0.156\r## S3.5 1 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 NA\r## ## LD matrix (lower triangle) and standardized values:\r## ## S1.2 S1.3 S1.4 S1.5 S2.2 S2.3 S2.4 S2.5 S3.2 S3.3\r## S1.2 NA -0.015 -0.035 0.062 -0.021 -0.038 -0.035 -0.128 0.074 -0.091\r## S1.3 0.057 NA 0.048 0.061 0.076 0.084 0.013 0.074 0.037 0.041\r## S1.4 0.319 0.595 NA -0.013 0.109 -0.029 -0.051 -0.071 -0.024 0.073\r## S1.5 1.002 0.966 0.045 NA -0.043 -0.021 -0.035 -0.090 0.032 0.026\r## S2.2 0.114 1.498 3.120 0.486 NA 0.020 0.019 0.077 0.074 0.107\r## S2.3 0.382 1.827 0.215 0.115 0.109 NA -0.069 -0.041 0.069 0.094\r## S2.4 0.319 0.048 0.669 0.314 0.095 1.242 NA -0.049 -0.024 -0.038\r## S2.5 4.286 1.428 1.335 2.136 1.547 0.444 0.626 NA 0.039 -0.082\r## S3.2 1.453 0.351 0.155 0.274 1.425 1.265 0.155 0.408 NA 0.089\r## S3.3 2.158 0.439 1.401 0.174 2.979 2.336 0.387 1.744 2.059 NA\r## S3.4 1.479 1.976 0.673 0.023 0.085 1.684 3.445 0.029 0.082 0.866\r## S3.5 1.067 0.080 0.495 0.352 1.547 1.491 1.335 2.953 0.026 0.132\r## S3.4 S3.5\r## 
S1.2 -0.075 -0.064\r## S1.3 0.087 -0.017\r## S1.4 -0.051 0.043\r## S1.5 0.009 -0.037\r## S2.2 0.018 0.077\r## S2.3 0.080 -0.075\r## S2.4 -0.115 -0.071\r## S2.5 0.010 -0.106\r## S3.2 -0.018 -0.010\r## S3.3 0.057 -0.022\r## S3.4 NA -0.088\r## S3.5 2.012 NA\rOther than the model fits and item fits, the “mirt” package also provides methods for calculating person fit statistics such as Zh statistics using the “personfit” function. In general, person fit statistics indicate how much a person’s responses on this test deviates from the the model prediction. See the “mirt” documentation and Drasgow, Levine, and Williams (1985) for further details.\nhead(personfit(uniDich.result1))\r## outfit z.outfit infit z.infit Zh\r## 1 0.5429718 0.02890873 0.9234628 -0.06855285 0.3245494\r## 2 0.3222275 -0.81981691 0.4792737 -1.50551325 1.2108896\r## 3 1.3572362 0.68361018 1.6380112 1.45638097 -1.2704873\r## 4 0.2944648 -0.60327679 0.4464911 -1.66830682 1.2602444\r## 5 0.3693042 -0.15612318 0.6267169 -0.83751758 0.8026636\r## 6 0.5580941 -0.53989282 0.8091663 -0.35965401 0.5450324\r\rIRT Paramters\rWe can obtain the item parameters from the model. As aforementioned, for a Rasch model, all discrimination parameters are fixed to 1, while difficulty parameters are freely estimated. In the output, the second column (“a”) contains the discrimination parameters and the third column (b) contains the difficulty parameters.\nIn this example, we presented the IRT parameters using the conventional approach, such that a larger b parameter indicates higher difficulty of an item. For example, the second item, S1.3 (item size 3 in the 1st block), has a b = -0.94. This indicates that, according to the model estimation, a person with ability level that is 0.94 standard deviation below the average has 50% of chance to answer this item (S1.3) correctly.\n# IRT parameters from the estimated model. For this example, we are obtaining simplified output without SEs/CIs (simplify = TRUE) for conventional IRT parameters (IRTpar = TRUE).\rcoef(uniDich.result1,simplify = TRUE, IRTpar = TRUE)$items\r## a b g u\r## S1.2 1 -3.0033640 0 1\r## S1.3 1 -0.9388653 0 1\r## S1.4 1 0.5864289 0 1\r## S1.5 1 2.4373369 0 1\r## S2.2 1 -2.6395716 0 1\r## S2.3 1 -1.4247025 0 1\r## S2.4 1 0.5864289 0 1\r## S2.5 1 2.3201998 0 1\r## S3.2 1 -2.3398445 0 1\r## S3.3 1 -0.9845155 0 1\r## S3.4 1 0.3420016 0 1\r## S3.5 1 2.3201998 0 1\r\rVisualizing the Item and Scale Characteristics\rWe can visualize corresponding item and scale characteristics from the model by a variety of plot methods in “mirt”. The plots presents how items and the entire scale relate to the latent trait across the scale.\nWe can start with the item trace plots for the 12 items. The item trace plots visualize the probability of responding “1” to an item as a function of \\(\\theta\\). According to the item trace plot of this example, the 3 items with item size 2 (S1.2,S2.2,S3.2) are relatively easy items, in which subjects with average ability (\\(\\theta\\) = 0) are estimated to have about 80% to 90% of chance to answer correctly. On the other hand, the 3 items with size 5 are relatively hard items, in which subjects with average ability are estimated to have about only 10% to 20% of chance to answer correctly.\n# In the function we cam specify the range of theta we\u0026#39;d like to visualize on the x axis of the plots. 
In this example we set it to -4 to 4 (4 SDs below and above average).\rplot(uniDich.result1, type = \u0026quot;trace\u0026quot;, theta_lim = c(-4,4))\rOther than the item trace plots, we can also look at the item information plots. Item information plots visualize how much “information” about the latent trait ability an item can provide. Conceptually, higher information implies less error of measure, and the peak of an item information curve is at the point of its b parameter. Thus, for easy items (such as the 3 items in the most left column below), little information are provided on subjects with high ability levels (because they will almost always answer correctly).\n# We can specify the exact set of items we want to plot in the ploting function of mirt. Here we can also only visualize the 1st, 5th, and 9th item from the dataset by addin an argument \u0026quot;which.items = c(1,5,9)\u0026quot; in the function. This will make the function to only plot the 3 items with set size 2 in the task. Please feel free to give a try.\rplot(uniDich.result1, type = \u0026quot;infotrace\u0026quot;)\rThe “itemplot” function can provides more details regarding an item. This is an example that visualize the item trace plot of item 1 with confidence envelope.\nitemplot(uniDich.result1, item = 1, type = \u0026quot;trace\u0026quot;, CE = TRUE)\rLastly, we can plot the information curve for the entire test. This is based on the sum of all item information curves and indicate how much information a test can provide at different latent trait levels based on the model. As aforementioned, high information indicate less error (low SE) of measurement. An ideal (but impossible) test would have high test information at all levels of latent trait levels.\nplot(uniDich.result1, type = \u0026quot;infoSE\u0026quot;)\r\r\r2PL Model\rWe can also estimate a 2PL model on the same binary data of rotation span task. In a 2PL model, not only the item difficulty parameters (bs) but also the item discrimination parameters (as) are estimated. Thus, a 2PL model assumes that different items vary in the ability to discriminate between persons with different latent trait levels.\nuniDich.model2 \u0026lt;- mirt.model(\u0026quot;rotation = 1 - 12\u0026quot;)\runiDich.result2 \u0026lt;- mirt::mirt(dat1, uniDich.model2, itemtype = \u0026quot;2PL\u0026quot;, SE = TRUE)\rModel and Item Fits\rSimilarly, we can obtain corresponding statistics of the model such as model fit and item fit statistics. In this example, the overall model fit for the 2PL model is good.\nM2(uniDich.result2)\r## M2 df p RMSEA RMSEA_5 RMSEA_95 SRMSR TLI CFI\r## stats 40.90802 54 0.9054692 0 0 0.01662836 0.04004632 1.033223 1\rItem fit statistics for this 2PL model indicate that the 2nd item (S1.3) may need further attention (significant S-X2 and large RMSEA).\nitemfit(uniDich.result2)\r## item S_X2 df.S_X2 RMSEA.S_X2 p.S_X2\r## 1 S1.2 1.607 6 0.000 0.952\r## 2 S1.3 10.427 4 0.078 0.034\r## 3 S1.4 6.922 5 0.038 0.227\r## 4 S1.5 5.373 5 0.017 0.372\r## 5 S2.2 4.646 5 0.000 0.461\r## 6 S2.3 2.783 5 0.000 0.733\r## 7 S2.4 3.550 6 0.000 0.737\r## 8 S2.5 3.390 5 0.000 0.640\r## 9 S3.2 1.839 5 0.000 0.871\r## 10 S3.3 7.201 6 0.028 0.303\r## 11 S3.4 6.146 6 0.010 0.407\r## 12 S3.5 7.840 5 0.047 0.165\r\rIRT Parameters\rWe can obtain the item parameters from the model. For a 2PL model, both the item discrimination parameters and the item difficulty parameters are freely estimated. 
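As a quick refresher before reading the output, the 2PL response function is P(correct) = 1 / (1 + exp(-a * (theta - b))), with the Rasch model as the special case a = 1. The following minimal base-R sketch uses illustrative parameter values (not the fitted mirt estimates) to show how a and b shape the curve and how item information relates to the precision of measurement.
# Minimal 2PL sketch with illustrative values (not taken from the fitted models above)
icc_2pl <- function(theta, a = 1, b = 0) plogis(a * (theta - b)) # P(correct) = 1 / (1 + exp(-a * (theta - b)))
icc_2pl(theta = -0.94, a = 1, b = -0.94) # a Rasch item with b = -0.94: probability is .50 at theta = -0.94
icc_2pl(theta = 0, a = 1, b = -0.94) # an average person has a higher chance on this relatively easy item
info_2pl <- function(theta, a = 1, b = 0) { p <- icc_2pl(theta, a, b); a^2 * p * (1 - p) } # 2PL item information
curve(icc_2pl(x, a = 2, b = 0), from = -4, to = 4, xlab = "theta", ylab = "P(correct)") # steeper curve: larger a
curve(icc_2pl(x, a = 0.5, b = 0), add = TRUE, lty = 2) # flatter curve: smaller a, weaker discrimination
With this function in mind, larger a values in the output below correspond to steeper item trace curves and taller, narrower information curves.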
Similar to the output of the Rasch model, the second column (“a”) contains the discrimination parameters and the third column (b) contains the difficulty parameters. We can see that, unlike the Rasch model, now every item has a unique discrimination parameter.\nFor a dichotomous 2PL model, the item discrimination parameters reflect how well an item can discriminate between persons with low and high ability/trait levels. Furthermore, the a parameter also reflects the degree to which an item is related to the latent trait measured by the scale. Thus, a low discrimination parameter usually indicates potential issues with an item compared to the rest of the scale.\ncoef(uniDich.result2,simplify = TRUE, IRTpar = TRUE)$items\r## a b g u\r## S1.2 0.8196465 -3.2455844 0 1\r## S1.3 1.7771645 -0.6171063 0 1\r## S1.4 1.3365861 0.4431670 0 1\r## S1.5 1.2963837 1.9023919 0 1\r## S2.2 1.7058414 -1.7589910 0 1\r## S2.3 1.3696765 -1.0753956 0 1\r## S2.4 0.8589407 0.5905375 0 1\r## S2.5 0.9387174 2.2626112 0 1\r## S3.2 1.4477736 -1.7033180 0 1\r## S3.3 1.5741235 -0.6888211 0 1\r## S3.4 1.2459076 0.2655880 0 1\r## S3.5 0.9445384 2.2521152 0 1\r\rVisualizing the Item and Scale Plots\rCompared to the Rasch model, the estimated a parameters in a 2PL model are also reflected in the item trace plots, such that the differences in as are reflected by the changes in the steepness of the item trace curves. Higher as would be reflected as steeper item trace curves.\nplot(uniDich.result2, type = \u0026quot;trace\u0026quot;, theta_lim = c(-4,4))\rThe freely estimated discrimination parameters are also reflected in the item information plots. As we can see, compared to the Rasch model, the peaks of the information curves vary across items in the 2PL model.\nplot(uniDich.result2, type = \u0026quot;infotrace\u0026quot;)\r\r\rModel Specifications\rThe model specification function of “mirt” provides arguments for further constraints in an IRT model and can be used for testing specific assumptions regarding the item characteristics.\nFor example, in the rotation span task, items with the exact same set size are designed in the exact same way. Thus, we consider the items with the same set size equivalent in their ability to discriminate persons with different ability levels. To estimate this model, we can specify the constraints in the model specification function. As shown below, in the function we specify an equal a parameter for items 1, 5, \u0026amp; 9, which are labeled “S1.2”, “S2.2”, and “S3.2” in the dataset (these are the 3 items with set size 2); another equal a for items 2, 6, \u0026amp; 10; another for items 3, 7, and 11; and another for items 4, 8, and 12.\nuniDich.model3 \u0026lt;- mirt.model(\u0026quot;rotation = 1 - 12\rCONSTRAIN = (1,5,9,a1), (2,6,10,a1),(3,7,11,a1),(4,8,12,a1)\u0026quot;)\runiDich.result3 \u0026lt;- mirt::mirt(dat1, uniDich.model3, itemtype = \u0026quot;2PL\u0026quot;, SE = TRUE)\rThe constraint specification is reflected in the IRT parameters from this model. As we can observe, in the IRT parameters output, items with the same set sizes are estimated to have the exact same a parameters. For all items with size 2, a = 1.31, and for all items with size 3, a = 1.55, etc.
On the other hand, the b parameters are still freely estimated regardless of the item size.\ncoef(uniDich.result3,simplify = TRUE, IRTpar = TRUE)$items\r## a b g u\r## S1.2 1.314847 -2.3120591 0 1\r## S1.3 1.551951 -0.6626354 0 1\r## S1.4 1.127870 0.4902232 0 1\r## S1.5 1.036344 2.2120724 0 1\r## S2.2 1.314847 -2.0337955 0 1\r## S2.3 1.551951 -1.0029799 0 1\r## S2.4 1.127870 0.4902232 0 1\r## S2.5 1.036344 2.1036240 0 1\r## S3.2 1.314847 -1.8044209 0 1\r## S3.3 1.551951 -0.6946542 0 1\r## S3.4 1.127870 0.2815835 0 1\r## S3.5 1.036344 2.1036240 0 1\rWe can also do model comparisons for nested models with different constraints. For example, we can test the difference between this constrained model and the previous 2PL model without any constraints on discrimination. This is similar to a model comparison based on chi-squared statistics for nested SEM models. As we can see, results indicate that the two models are not significantly different from each other, \\(\\Delta\\chi^2\\)(8) = 8.68, p = .37.\nanova(uniDich.result2,uniDich.result3)\r## ## Model 1: mirt::mirt(data = dat1, model = uniDich.model3, itemtype = \u0026quot;2PL\u0026quot;, ## SE = TRUE)\r## Model 2: mirt::mirt(data = dat1, model = uniDich.model2, itemtype = \u0026quot;2PL\u0026quot;, ## SE = TRUE)\r## AIC AICc SABIC HQ BIC logLik X2 df p\r## 1 2941.329 2943.549 2947.695 2964.276 2998.422 -1454.664 NaN NaN NaN\r## 2 2948.644 2953.708 2958.194 2983.065 3034.285 -1450.322 8.684 8 0.37\r\r\rUnidimensional Polytomous IRT Model\rIn the previous section we have conducted dichotomous IRT analyses on the rotation span task dataset with binary responses. However, the initial rotation span dataset consist of numbers of correctly recalled elements for each item. In other words, each item actually has more than two possible response categories that are at least ordinal. For example, for an item with set size 2 (2 elements in the item), there are 3 possible response outcomes: 0, 1, and 2. Thus, we could fit and assess a polytomous IRT model to this type of measures, such as partial-scored tests and Likert-type surveys.\ndat2 \u0026lt;- as.matrix(wmirot[,-1])\rhead(dat2)\r## S1.2 S1.3 S1.4 S1.5 S2.2 S2.3 S2.4 S2.5 S3.2 S3.3 S3.4 S3.5\r## [1,] 2 1 0 0 0 3 1 0 1 1 0 0\r## [2,] 2 2 1 1 2 3 2 2 2 2 1 1\r## [3,] 2 1 4 1 2 3 4 1 1 2 0 3\r## [4,] 2 2 3 1 2 0 1 1 2 2 3 4\r## [5,] 2 3 4 5 2 3 4 4 2 3 4 2\r## [6,] 2 3 0 2 2 3 4 3 2 1 2 1\rGeneralized Partial Credit Model\rIn this section, we will apply the generalized partial credit model (GPCM; Muraki, 1992) to the rotation span data. As a polytomous model, GPCM estimates one item threshold parameter for each response category in an item instead of one difficulty parameter for an item. Further more, GPCM assumes an unique item discrimination parameter for each item instead of assuming a unitary reliability across items (like the Rasch model).\nunipoly.model1 \u0026lt;- mirt.model(\u0026quot;rotation = 1 - 12\u0026quot;)\runipoly.result1 \u0026lt;- mirt::mirt(dat2, uniDich.model2, itemtype = \u0026quot;gpcm\u0026quot;, SE = TRUE)\rModel and Item Fits\rSimilarly, we can obtain corresponding statistics of the model such as model fit and item fit statistics. 
In this example, the overall model fit and all item fits for the GPCM model are good.\nM2(unipoly.result1)\r## M2 df p RMSEA RMSEA_5 RMSEA_95 SRMSR TLI CFI\r## stats 23.66605 24 0.4808204 0 0 0.04927329 0.04771738 1.005052 1\ritemfit(unipoly.result1)\r## item S_X2 df.S_X2 RMSEA.S_X2 p.S_X2\r## 1 S1.2 15.329 11 0.039 0.168\r## 2 S1.3 46.224 32 0.041 0.050\r## 3 S1.4 43.506 48 0.000 0.657\r## 4 S1.5 44.058 58 0.000 0.912\r## 5 S2.2 20.725 13 0.048 0.079\r## 6 S2.3 30.299 26 0.025 0.255\r## 7 S2.4 48.715 50 0.000 0.525\r## 8 S2.5 71.664 61 0.026 0.165\r## 9 S3.2 13.519 17 0.000 0.701\r## 10 S3.3 20.588 32 0.000 0.940\r## 11 S3.4 52.075 51 0.009 0.432\r## 12 S3.5 63.890 63 0.007 0.445\r\rIRT Parameters\rFor a GPCM model, the item discrimination parameters and the item threshold parameters are freely estimated. In a GPCM model, the item threshold parameter is defined as the trait level in which one has an equal probability of choosing the kth response category over the k-1th category in an item. When choosing between the kth and the k-1th category, subjects with trait levels higher than that threshold are more likely to approach the kth, while subjects with trait levels lower than that threshold are more likely to approach the k-1th.\ncoef(unipoly.result1,simplify = TRUE, IRTpars = TRUE)$items\r## a b1 b2 b3 b4 b5\r## S1.2 0.6354780 -2.4238753 -4.50115962 NA NA NA\r## S1.3 0.8367055 -1.5716066 -0.89123631 -1.8001978 NA NA\r## S1.4 0.5989767 -1.8187952 -0.50303114 0.5399410 -1.379714 NA\r## S1.5 0.4660780 -0.7725926 -0.02659485 0.5368132 1.745108 0.07050096\r## S2.2 1.0176875 -1.8131900 -2.77011850 NA NA NA\r## S2.3 0.8606748 -2.3689773 -0.88851301 -2.2413670 NA NA\r## S2.4 0.4985639 -1.6434258 -0.70470153 -0.3802990 -1.060266 NA\r## S2.5 0.4158617 -0.5284300 -1.00120923 1.5317726 1.243526 -0.09643630\r## S3.2 0.8508984 -1.3638275 -3.01408652 NA NA NA\r## S3.3 0.8038394 -1.7838070 -1.03577722 -1.8430426 NA NA\r## S3.4 0.5793661 -0.6192944 -0.94635296 0.0811653 -1.540936 NA\r## S3.5 0.4046533 -0.5014941 -0.68390425 0.3561937 1.278277 0.42457494\rIn this example, the function utilizes the conventional IRT parameterization. In the conventional parameterization, for an item of size p, GPCM estimates p item threshold parameters for each of the categories (from “b_1” for partial scores of 0 and 1 to “b_p” for partial scores p-1 and p), and 1 item discrimination parameter for the item. The second column (“a_1”) contains the discrimination parameters and the later columns (“b_1” to “b_5”) contain all the threshold parameters.\n\rVisualizing the Item and Scale Plots\rSimilar to the dichotomous 2PL model, the estimated a parameters in a GPCM model are reflected in item trace plots, such that the differences in as are reflected by the changes in the steepness of the item trace curves. Higher as would be reflected as steeper item trace curves. On the other hand, the estimated b parameters in a GPCM model are reflected as (x-axis values for) the adjacent points between trace curves for different categories. 
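To make the category probabilities behind these trace curves concrete, here is a minimal base-R sketch of the GPCM using made-up parameter values (a hypothetical 3-category item, not one of the fitted rotation span items): the unnormalized weight for category k is the exponential of the cumulative sum of a*(theta - b_j) over the first k thresholds, and the weights are normalized to sum to one.
# Minimal GPCM sketch with hypothetical values (not the fitted model)
gpcm_prob <- function(theta, a, b) {
  # b is the vector of threshold parameters b_1 ... b_m; the item is scored 0 ... m
  numer <- c(1, exp(cumsum(a * (theta - b)))) # unnormalized weight per category (category 0 gets exp(0) = 1)
  numer / sum(numer) # normalize so the category probabilities sum to 1
}
round(gpcm_prob(theta = -1.5, a = 1, b = c(-1.5, 0.5)), 2) # categories 0 and 1 are equally likely at theta = b1
round(gpcm_prob(theta = 0.5, a = 1, b = c(-1.5, 0.5)), 2) # categories 1 and 2 are equally likely at theta = b2
This is exactly the sense in which each threshold is the adjacent point between two category trace curves.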
For example, for Item S3.2, the current GPCM model estimated two threshold parameters: b1 = -1.36 (the adjacent point between curve P1 and P2) and b2 = -3.01 (the adjacent point between curve P2 and P3).\nplot(unipoly.result1, type = \u0026quot;trace\u0026quot;, theta_lim = c(-4,4))\rSimilar to the dichotomous 2PL model, the freely estimated discrimination parameters are also reflected in the item information plots.\nplot(unipoly.result1, type = \u0026quot;infotrace\u0026quot;)\r\rIndividual Scoring\rBased on an estimated model, we can also estimate the individual latent trait scores. Conceptually, the estimated latent trait scores are similar to factor scores estimated in CFAs.\nest.theta \u0026lt;- as.data.frame(fscores(unipoly.result1))\rhead(est.theta)\r## rotation\r## 1 -1.9429664\r## 2 -0.7051405\r## 3 -0.5498356\r## 4 -0.6928339\r## 5 1.4039574\r## 6 -0.3787596\rest.theta %\u0026gt;% ggplot(aes(x=rotation)) +\rgeom_histogram(aes(y=..density..),\rbinwidth=.1,\rcolour=\u0026quot;black\u0026quot;, fill=\u0026quot;white\u0026quot;) +\rgeom_density(alpha=.2, fill=\u0026quot;aquamarine2\u0026quot;)\r\r\r\r","date":1646877527,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1646877527,"objectID":"48e35695646804d62dae1c2d7709eb87","permalink":"https://hanhao23.github.io/project/irttutorial/irt-tutorial-in-r-with-mirt-package/","publishdate":"0001-01-01T00:00:00Z","relpermalink":"/project/irttutorial/irt-tutorial-in-r-with-mirt-package/","section":"project","summary":"Overview\rThe goal of this document is to introduce applications of R for item response theory (IRT) modeling. Specifically, this document is focused on introducing basic IRT analyses for beginners using the “mirt” package (Chalmers, 2012).","tags":["Item Response Theory","R Markdown","Working Memory","R Stuff","Psychometrics","Academic","Stats"],"title":"Intro to Item Response Modeling in R","type":"project"},{"authors":["Han Hao","Andrew R. A. Conway"],"categories":null,"content":"","date":1633651200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1633651200,"objectID":"ec930c321953f4b7762fca1e51cde397","permalink":"https://hanhao23.github.io/publication/the-impact-of-auditory-distraction-on-reading-comprehension/","publishdate":"2021-10-08T00:00:00Z","relpermalink":"/publication/the-impact-of-auditory-distraction-on-reading-comprehension/","section":"publication","summary":"Perceptual disfluency of a cognitive task cannot shield attention against auditory distractions, but may moderates the relationship between individual differences of congitive ability and task performance.","tags":["Selective Attention","Working Memory","Disfluency Effect","Multilevel Modeling"],"title":"The Impact of Auditory Distraction on Reading Comprehension","type":"publication"},{"authors":["Han Hao","Kevin Rosales","Jean-Paul Snijder","Kristof Kovacs","Michael J. Kane","Andrew R. A. Conway"],"categories":null,"content":" Click on the Slides button above to view the built-in slides feature. Slides can be added in a few ways:\r- **Create** slides using Academic's [*Slides*](https://sourcethemes.com/academic/docs/managing-content/#create-slides) feature and link using `slides` parameter in the front matter of the talk file\r- **Upload** an existing slide deck to `static/` and link using `url_slides` parameter in the front matter of the talk file\r- **Embed** your slides (e.g. 
Google Slides) or presentation video on this page using [shortcodes](https://sourcethemes.com/academic/docs/writing-markdown-latex/).\rFurther talk details can easily be added to this page using *Markdown* and $\\rm \\LaTeX$ math code.\r--\r","date":1630713600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1630713600,"objectID":"de5724310f8522ecf36755be71d6bd77","permalink":"https://hanhao23.github.io/talk/2021isir/","publishdate":"2021-09-04T00:00:00Z","relpermalink":"/talk/2021isir/","section":"talk","summary":"Poster in 2021 Annual ISIR Conference","tags":[],"title":"Rethinking the Relationship of Working Memory and Intelligence - A Perspective based on Process Overlap Theory","type":"talk"},{"authors":["Andrew R. A. Conway","Kristof Kovacs","Han Hao","Kevin P. Rosales","Jean-Paul Snijder"],"categories":null,"content":"","date":1625184000,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1636065049,"objectID":"d05ec6e2dbd2512b2749a9b7f96671e3","permalink":"https://hanhao23.github.io/publication/individual-differences-in-attention-and-intelligence/","publishdate":"2021-07-02T00:00:00Z","relpermalink":"/publication/individual-differences-in-attention-and-intelligence/","section":"publication","summary":"Process overlap theory (POT) is a new theoretical framework designed to account for the general factor of intelligence (g). According to POT, g does not reflect a general cognitive ability. Instead, g is the result of multiple domain-general executive attention processes and multiple domain-specific processes that are sampled in an overlapping manner across a battery of intelligence tests. POT explains several benchmark findings on human intelligence. However, the precise nature of the executive attention processes underlying g remains unclear. In the current paper, we discuss challenges associated with building a theory of individual differences in attention and intelligence. We argue that the conflation of psychological theories and statistical models, as well as problematic inferences based on latent variables, impedes research progress and prevents theory building. Two studies designed to illustrate the unique features of POT relative to previous approaches are presented. In Study 1, a simulation is presented to illustrate precisely how POT accounts for the relationship between executive attention processes and g. In Study 2, three datasets from previous studies are reanalyzed (N = 243, N = 234, N = 945) and reveal a discrepancy between the POT simulated model and the unity/diversity model of executive function. We suggest that this discrepancy is largely due to methodological problems in previous studies but also reflects different goals of research on individual differences in attention. The unity/diversity model is designed to facilitate research on executive function and dysfunction associated with cognitive and neural development and disease. POT is uniquely suited to guide and facilitate research on individual differences in cognitive ability and the investigation of executive attention processes underlying g.","tags":["Executive Functions","Intelligence","Process Overlap Theory","Working Memory"],"title":"Individual Differences in Attention and Intelligence","type":"publication"},{"authors":["Han Hao","Ester Navarro","Kevin Rosales","Andrew R. A. Conway"],"categories":null,"content":" Click on the Slides button above to view the built-in slides feature. 
Slides can be added in a few ways:\r- **Create** slides using Academic's [*Slides*](https://sourcethemes.com/academic/docs/managing-content/#create-slides) feature and link using `slides` parameter in the front matter of the talk file\r- **Upload** an existing slide deck to `static/` and link using `url_slides` parameter in the front matter of the talk file\r- **Embed** your slides (e.g. Google Slides) or presentation video on this page using [shortcodes](https://sourcethemes.com/academic/docs/writing-markdown-latex/).\rFurther talk details can easily be added to this page using *Markdown* and $\\rm \\LaTeX$ math code.\r--\r","date":1605916800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1605916800,"objectID":"50a1d1e7091601fda1eca241e92741ac","permalink":"https://hanhao23.github.io/talk/2020psychonomics/","publishdate":"2020-11-21T00:00:00Z","relpermalink":"/talk/2020psychonomics/","section":"talk","summary":"Poster in 2020 Psychonomics Annual Meeting","tags":[],"title":"An Examination of Domain-Specificity Differences in Complex Span Tasks through Item Response Theory","type":"talk"},{"authors":["Andrew R. A. Conway","Kristof Kovacs","Han Hao","Sara A Goring","Christopher Schmank"],"categories":null,"content":"","date":1601510400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1601510400,"objectID":"d3b528a7a0d7966aca5fdf0687550374","permalink":"https://hanhao23.github.io/publication/the-struggle-is-real/","publishdate":"2020-10-01T00:00:00Z","relpermalink":"/publication/the-struggle-is-real/","section":"publication","summary":"Strong theories are sorely lacking in the applied social sciences, especially in psychology. Eiko Fried (2020) identifies fundamental problems that are common in social science research and explains how these problems manifest themselves in the literature, impede scientific progress, and contribute to the lack of theory building. We hope to share some insights from our program of research on working memory and intelligence. In our work, we address several of the problems discussed by Fried, and we propose a solution, Process Overlap Theory (POT), a new approach to intelligence that integrates evidence from cognitive psychology, psychometrics, and neuroscience. The commentary reviews several points of our agreement with Fried, drawing on examples from our research. Then we discuss a few of our concerns and problems with graduate training. We argue that most graduate programs lack the kind of formal training that is necessary to promote theory building. The lead author (Conway) teaches graduate-level statistics and three of the coauthors (Hao, Goring, Schmank) are current graduate students, so we offer some advice on how to address this problem.","tags":["Research Methods","Process Overlap Theory","Psychometrics"],"title":"The Struggle Is Real","type":"publication"},{"authors":["Dana Linnell Wanzer","Han Hao","Tom McKlin"],"categories":null,"content":" Click the Slides button above to demo Academic\u0026rsquo;s Markdown slides feature. Supplementary notes can be added here, including [code and math](https://sourcethemes.com/academic/docs/writing-markdown-latex/). 
-- ","date":1593648000,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1593648000,"objectID":"0f8724f0afccce3cee2529102a14e1f4","permalink":"https://hanhao23.github.io/publication/retrospective/","publishdate":"2020-07-02T00:00:00Z","relpermalink":"/publication/retrospective/","section":"publication","summary":"This study examines response shift in an evaluation of a computing education program in which both traditional and retrospective pretests were used across two waves of data collection, by using measurement invariance techniques.","tags":["Research Methods","Structural Equation Modeling","Measurement Invariance","Psychometrics"],"title":"Response or recall bias? Choosing between the traditional and retrospective pretest using measurement invariance techniques","type":"publication"},{"authors":["Ester Navarro","Kevin Rosales","Han Hao","Andrew R. A. Conway"],"categories":null,"content":" Click the Slides button above to demo Academic\u0026rsquo;s Markdown slides feature. Supplementary notes can be added here, including [code and math](https://sourcethemes.com/academic/docs/writing-markdown-latex/).\r-- ","date":1593475200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1593475200,"objectID":"5fa56a9cda8c504b9dd1c4324148e5bc","permalink":"https://hanhao23.github.io/publication/uniirt/","publishdate":"2020-06-30T00:00:00Z","relpermalink":"/publication/uniirt/","section":"publication","summary":"This study fitted and compared item response models of item-level responses of different complex span tasks. Results, especially item parameters, may reveal the variety of domain specificity in different complex span tasks.","tags":["Working Memory","complex Span Tasks","Item Response Theory","Psychometrics"],"title":"An Examination of Domain-Specificity Differences in Complex Span Tasks through Item Response Theory","type":"publication"},{"authors":["Andrew R. A. Conway","Kristof Kovacs","Han Hao","Jean-paul Snijder"],"categories":null,"content":" Click the Cite button above to demo the feature to enable visitors to import publication metadata into their reference management software. Click the Slides button above to demo Academic\u0026rsquo;s Markdown slides feature. Supplementary notes can be added here, including [code and math](https://sourcethemes.com/academic/docs/writing-markdown-latex/).\r--\r","date":1577059200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1577059200,"objectID":"99232c6380c6af95d83d0ef5446d8be4","permalink":"https://hanhao23.github.io/publication/potsimulation/","publishdate":"2019-12-23T00:00:00Z","relpermalink":"/publication/potsimulation/","section":"publication","summary":"Confirmatory latent factor models on simulated data indicate that a higher-order g can emergy even without a common cause.","tags":["Process Overlap Theory","Intelligence","Positive Manifold","Psychometrics","Item Response Theory","Structural Equation Modeling"],"title":"General Intelligence Explained (Away)","type":"publication"},{"authors":["Andrew R. A. Conway","Han Hao"],"categories":null,"content":" Click the Cite button above to demo the feature to enable visitors to import publication metadata into their reference management software. Click the Slides button above to demo Academic\u0026rsquo;s Markdown slides feature. 
Supplementary notes can be added here, including [code and math](https://sourcethemes.com/academic/docs/writing-markdown-latex/).\r-- ","date":1575417600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1575417600,"objectID":"e804824a4eecf97e872a1471d63bc0a3","permalink":"https://hanhao23.github.io/publication/satcommentary/","publishdate":"2020-04-13T00:00:00Z","relpermalink":"/publication/satcommentary/","section":"publication","summary":"Hannon (2019) reports a novel and intriguing pattern of results that could be interpreted as evidence that the SAT is biased against Hispanic students. Specifically, Hannon’s analyses suggest that non-cognitive factors, such as test anxiety, contribute to SAT performance and the impact of test anxiety on the SAT is stronger among Hispanic students than European-American students. Importantly, this pattern of results was observed after controlling for individual differences in cognitive abilities. We argue that there are multiple issues with Hannon’s investigation and interpretation. For instance, Hannon did not include an adequate number or variety of measures of cognitive ability. In addition, the measure of test anxiety was a retrospective self-report survey on evaluated anxiety rather than a direct measure of situational test anxiety associated with the SAT. Based on these and other observations, we conclude that Hannon’s current results do not provide sufficient evidence to suggest that non-cognitive factors play a significant role in the SAT or that they impact European-American and Hispanic students differently.","tags":["Intelligence","SAT","Research Methods"],"title":"The Role of Non-Cognitive Factors in the SAT Remains Unclear - A Commentary on Hannon (2019)","type":"publication"},{"authors":["Han Hao","Andrew R. A. Conway"],"categories":null,"content":" Click on the Slides button above to view the built-in slides feature. Slides can be added in a few ways:\r- **Create** slides using Academic's [*Slides*](https://sourcethemes.com/academic/docs/managing-content/#create-slides) feature and link using `slides` parameter in the front matter of the talk file\r- **Upload** an existing slide deck to `static/` and link using `url_slides` parameter in the front matter of the talk file\r- **Embed** your slides (e.g. Google Slides) or presentation video on this page using [shortcodes](https://sourcethemes.com/academic/docs/writing-markdown-latex/).\rFurther talk details can easily be added to this page using *Markdown* and $\\rm \\LaTeX$ math code.\r--\r","date":1574035200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1574035200,"objectID":"abc80e90355c05ca7a842f1b57ea2fec","permalink":"https://hanhao23.github.io/talk/2019psychonomics/","publishdate":"2019-11-18T00:00:00Z","relpermalink":"/talk/2019psychonomics/","section":"talk","summary":"Poster in 2019 Psychonomics Annual Meeting","tags":[],"title":"The Impact of Auditory Distraction on Reading Comprehension","type":"talk"},{"authors":null,"categories":null,"content":"Academic is designed to give technical content creators a seamless experience. You can focus on the content and Academic handles the rest.\nHighlight your code snippets, take notes on math classes, and draw diagrams from textual representation.\nOn this page, you\u0026rsquo;ll find some examples of the types of technical content that can be rendered with Academic.\nExamples Code Academic supports a Markdown extension for highlighting code syntax. 
You can enable this feature by toggling the highlight option in your config/_default/params.toml file.\n```python import pandas as pd data = pd.read_csv(\u0026quot;data.csv\u0026quot;) data.head() ``` renders as\nimport pandas as pd data = pd.read_csv(\u0026quot;data.csv\u0026quot;) data.head() Math Academic supports a Markdown extension for $\\LaTeX$ math. You can enable this feature by toggling the math option in your config/_default/params.toml file.\nTo render inline or block math, wrap your LaTeX math with $...$ or $$...$$, respectively.\nExample math block:\n$$\\gamma_{n} = \\frac{ \\left | \\left (\\mathbf x_{n} - \\mathbf x_{n-1} \\right )^T \\left [\\nabla F (\\mathbf x_{n}) - \\nabla F (\\mathbf x_{n-1}) \\right ] \\right |} {\\left \\|\\nabla F(\\mathbf{x}_{n}) - \\nabla F(\\mathbf{x}_{n-1}) \\right \\|^2}$$ renders as\n$$\\gamma_{n} = \\frac{ \\left | \\left (\\mathbf x_{n} - \\mathbf x_{n-1} \\right )^T \\left [\\nabla F (\\mathbf x_{n}) - \\nabla F (\\mathbf x_{n-1}) \\right ] \\right |}{\\left \\|\\nabla F(\\mathbf{x}_{n}) - \\nabla F(\\mathbf{x}_{n-1}) \\right \\|^2}$$\nExample inline math $\\nabla F(\\mathbf{x}_{n})$ renders as $\\nabla F(\\mathbf{x}_{n})$.\nExample multi-line math using the \\\\\\\\ math linebreak:\n$$f(k;p_0^*) = \\begin{cases} p_0^* \u0026amp; \\text{if }k=1, \\\\\\\\ 1-p_0^* \u0026amp; \\text {if }k=0.\\end{cases}$$ renders as\n$$f(k;p_0^*) = \\begin{cases} p_0^* \u0026amp; \\text{if }k=1, \\\\\n1-p_0^* \u0026amp; \\text {if }k=0.\\end{cases}$$\nDiagrams Academic supports a Markdown extension for diagrams. You can enable this feature by toggling the diagram option in your config/_default/params.toml file or by adding diagram: true to your page front matter.\nAn example flowchart:\n```mermaid graph TD A[Hard] --\u0026gt;|Text| B(Round) B --\u0026gt; C{Decision} C --\u0026gt;|One| D[Result 1] C --\u0026gt;|Two| E[Result 2] ``` renders as\ngraph TD A[Hard] --\u0026gt;|Text| B(Round) B --\u0026gt; C{Decision} C --\u0026gt;|One| D[Result 1] C --\u0026gt;|Two| E[Result 2] An example sequence diagram:\n```mermaid sequenceDiagram Alice-\u0026gt;\u0026gt;John: Hello John, how are you? loop Healthcheck John-\u0026gt;\u0026gt;John: Fight against hypochondria end Note right of John: Rational thoughts! John--\u0026gt;\u0026gt;Alice: Great! John-\u0026gt;\u0026gt;Bob: How about you? Bob--\u0026gt;\u0026gt;John: Jolly good! ``` renders as\nsequenceDiagram Alice-\u0026gt;\u0026gt;John: Hello John, how are you? loop Healthcheck John-\u0026gt;\u0026gt;John: Fight against hypochondria end Note right of John: Rational thoughts! John--\u0026gt;\u0026gt;Alice: Great! John-\u0026gt;\u0026gt;Bob: How about you? Bob--\u0026gt;\u0026gt;John: Jolly good! An example Gantt diagram:\n```mermaid gantt section Section Completed :done, des1, 2014-01-06,2014-01-08 Active :active, des2, 2014-01-07, 3d Parallel 1 : des3, after des1, 1d Parallel 2 : des4, after des1, 1d Parallel 3 : des5, after des3, 1d Parallel 4 : des6, after des4, 1d ``` renders as\ngantt section Section Completed :done, des1, 2014-01-06,2014-01-08 Active :active, des2, 2014-01-07, 3d Parallel 1 : des3, after des1, 1d Parallel 2 : des4, after des1, 1d Parallel 3 : des5, after des3, 1d Parallel 4 : des6, after des4, 1d An example class diagram:\n```mermaid classDiagram Class01 \u0026lt;|-- AveryLongClass : Cool \u0026lt;\u0026lt;interface\u0026gt;\u0026gt; Class01 Class09 --\u0026gt; C2 : Where am i? 
Class09 --* C3 Class09 --|\u0026gt; Class07 Class07 : equals() Class07 : Object[] elementData Class01 : size() Class01 : int chimp Class01 : int gorilla class Class10 { \u0026lt;\u0026lt;service\u0026gt;\u0026gt; int id size() } ``` renders as\nclassDiagram Class01 \u0026lt;|-- AveryLongClass : Cool \u0026lt;\u0026lt;interface\u0026gt;\u0026gt; Class01 Class09 --\u0026gt; C2 : Where am i? Class09 --* C3 Class09 --|\u0026gt; Class07 Class07 : equals() Class07 : Object[] elementData Class01 : size() Class01 : int chimp Class01 : int gorilla class Class10 { \u0026lt;\u0026lt;service\u0026gt;\u0026gt; int id size() } An example state diagram:\n```mermaid stateDiagram [*] --\u0026gt; Still Still --\u0026gt; [*] Still --\u0026gt; Moving Moving --\u0026gt; Still Moving --\u0026gt; Crash Crash --\u0026gt; [*] ``` renders as\nstateDiagram [*] --\u0026gt; Still Still --\u0026gt; [*] Still --\u0026gt; Moving Moving --\u0026gt; Still Moving --\u0026gt; Crash Crash --\u0026gt; [*] Todo lists You can even write your todo lists in Academic too:\n- [x] Write math example - [x] Write diagram example - [ ] Do something else renders as\n Write math example Write diagram example Do something else Tables Represent your data in tables:\n| First Header | Second Header | | ------------- | ------------- | | Content Cell | Content Cell | | Content Cell | Content Cell | renders as\n First Header Second Header Content Cell Content Cell Content Cell Content Cell Asides Academic supports a shortcode for asides, also referred to as notices, hints, or alerts. By wrapping a paragraph in {{% alert note %}} ... {{% /alert %}}, it will render as an aside.\n{{% alert note %}} A Markdown aside is useful for displaying notices, hints, or definitions to your readers. {{% /alert %}} renders as\n A Markdown aside is useful for displaying notices, hints, or definitions to your readers. Spoilers Add a spoiler to a page to reveal text, such as an answer to a question, after a button is clicked.\n{{\u0026lt; spoiler text=\u0026quot;Click to view the spoiler\u0026quot; \u0026gt;}} You found me! {{\u0026lt; /spoiler \u0026gt;}} renders as\n Click to view the spoiler You found me! Icons Academic enables you to use a wide range of icons from Font Awesome and Academicons in addition to emojis.\nHere are some examples using the icon shortcode to render icons:\n{{\u0026lt; icon name=\u0026quot;terminal\u0026quot; pack=\u0026quot;fas\u0026quot; \u0026gt;}} Terminal {{\u0026lt; icon name=\u0026quot;python\u0026quot; pack=\u0026quot;fab\u0026quot; \u0026gt;}} Python {{\u0026lt; icon name=\u0026quot;r-project\u0026quot; pack=\u0026quot;fab\u0026quot; \u0026gt;}} R renders as\n Terminal\n Python\n R\nDid you find this page helpful? Consider sharing it 🙌 ","date":1562889600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1562889600,"objectID":"07e02bccc368a192a0c76c44918396c3","permalink":"https://hanhao23.github.io/post/writing-technical-content/","publishdate":"2019-07-12T00:00:00Z","relpermalink":"/post/writing-technical-content/","section":"post","summary":"Academic is designed to give technical content creators a seamless experience. 
You can focus on the content and Academic handles the rest.\nHighlight your code snippets, take notes on math classes, and draw diagrams from textual representation.","tags":null,"title":"Writing technical content in Academic","type":"post"},{"authors":["Han Hao"],"categories":[],"content":"from IPython.core.display import Image Image('https://www.python.org/static/community_logos/python-logo-master-v3-TM-flattened.png') print(\u0026quot;Welcome to Academic!\u0026quot;) Welcome to Academic! Install Python and JupyterLab Install Anaconda which includes Python 3 and JupyterLab.\nAlternatively, install JupyterLab with pip3 install jupyterlab.\nCreate or upload a Jupyter notebook Run the following commands in your Terminal, substituting \u0026lt;MY-WEBSITE-FOLDER\u0026gt; and \u0026lt;SHORT-POST-TITLE\u0026gt; with the file path to your Academic website folder and a short title for your blog post (use hyphens instead of spaces), respectively:\nmkdir -p \u0026lt;MY-WEBSITE-FOLDER\u0026gt;/content/post/\u0026lt;SHORT-POST-TITLE\u0026gt;/ cd \u0026lt;MY-WEBSITE-FOLDER\u0026gt;/content/post/\u0026lt;SHORT-POST-TITLE\u0026gt;/ jupyter lab index.ipynb The jupyter command above will launch the JupyterLab editor, allowing us to add Academic metadata and write the content.\nEdit your post metadata The first cell of your Jupyter notebook will contain your post metadata (front matter).\nIn Jupyter, choose Markdown as the type of the first cell and wrap your Academic metadata in three dashes, indicating that it is YAML front matter:\n--- title: My post's title date: 2019-09-01 # Put any other Academic metadata here... --- Edit the metadata of your post, using the documentation as a guide to the available options.\nTo set a featured image, place an image named featured into your post\u0026rsquo;s folder.\nFor other tips, such as using math, see the guide on writing content with Academic.\nConvert notebook to Markdown jupyter nbconvert index.ipynb --to markdown --NbConvertApp.output_files_dir=. Example This post was created with Jupyter. 
The original files can be found at https://github.com/gcushen/hugo-academic/tree/master/exampleSite/content/post/jupyter\n","date":1549324800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1567641600,"objectID":"6e929dc84ed3ef80467b02e64cd2ed64","permalink":"https://hanhao23.github.io/post/jupyter/","publishdate":"2019-02-05T00:00:00Z","relpermalink":"/post/jupyter/","section":"post","summary":"Learn how to blog in Academic using Jupyter notebooks","tags":[],"title":"Display Jupyter Notebooks with Academic","type":"post"},{"authors":[],"categories":[],"content":"Create slides in Markdown with Academic Academic | Documentation\n Features Efficiently write slides in Markdown 3-in-1: Create, Present, and Publish your slides Supports speaker notes Mobile friendly slides Controls Next: Right Arrow or Space Previous: Left Arrow Start: Home Finish: End Overview: Esc Speaker notes: S Fullscreen: F Zoom: Alt + Click PDF Export: E Code Highlighting Inline code: variable\nCode block:\nporridge = \u0026quot;blueberry\u0026quot; if porridge == \u0026quot;blueberry\u0026quot;: print(\u0026quot;Eating...\u0026quot;) Math In-line math: $x + y = z$\nBlock math:\n$$ f\\left( x \\right) = \\frac{{2\\left( {x + 4} \\right)\\left( {x - 4} \\right)}}{{\\left( {x + 4} \\right)\\left( {x + 1} \\right)}} $$\n Fragments Make content appear incrementally\n{{% fragment %}} One {{% /fragment %}} {{% fragment %}} **Two** {{% /fragment %}} {{% fragment %}} Three {{% /fragment %}} Press Space to play!\nOne Two Three \n A fragment can accept two optional parameters:\n class: use a custom style (requires definition in custom CSS) weight: sets the order in which a fragment appears Speaker Notes Add speaker notes to your presentation\n{{% speaker_note %}} - Only the speaker can read these notes - Press `S` key to view {{% /speaker_note %}} Press the S key to view the speaker notes!\n Only the speaker can read these notes Press S key to view Themes black: Black background, white text, blue links (default) white: White background, black text, blue links league: Gray background, white text, blue links beige: Beige background, dark text, brown links sky: Blue background, thin dark text, blue links night: Black background, thick white text, orange links serif: Cappuccino background, gray text, brown links simple: White background, black text, blue links solarized: Cream-colored background, dark green text, blue links Custom Slide Customize the slide style and background\n{{\u0026lt; slide background-image=\u0026quot;/img/boards.jpg\u0026quot; \u0026gt;}} {{\u0026lt; slide background-color=\u0026quot;#0000FF\u0026quot; \u0026gt;}} {{\u0026lt; slide class=\u0026quot;my-style\u0026quot; \u0026gt;}} Custom CSS Example Let\u0026rsquo;s make headers navy colored.\nCreate assets/css/reveal_custom.css with:\n.reveal section h1, .reveal section h2, .reveal section h3 { color: navy; } Questions? 
Ask\n Documentation\n","date":1549324800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1549324800,"objectID":"0e6de1a61aa83269ff13324f3167c1a9","permalink":"https://hanhao23.github.io/slides/example/","publishdate":"2019-02-05T00:00:00Z","relpermalink":"/slides/example/","section":"slides","summary":"An introduction to using Academic's Slides feature.","tags":[],"title":"Slides","type":"slides"},{"authors":null,"categories":null,"content":"","date":1461715200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1461715200,"objectID":"5c2c2493d80d05c46232ef98efd06900","permalink":"https://hanhao23.github.io/project/formfactor/","publishdate":"2016-04-27T00:00:00Z","relpermalink":"/project/formfactor/","section":"project","summary":"An example of formative factor model with lavaan package.","tags":["Stats","R Stuff"],"title":"Formative Model in Lavaan","type":"project"},{"authors":["Han Hao","吳恩達"],"categories":["Demo","教程"],"content":"Create a free website with Academic using Markdown, Jupyter, or RStudio. Choose a beautiful color theme and build anything with the Page Builder - over 40 widgets, themes, and language packs included!\n Check out the latest demo of what you\u0026rsquo;ll get in less than 10 minutes, or view the showcase of personal, project, and business sites.\n 👉 Get Started 📚 View the documentation 💬 Ask a question on the forum 👥 Chat with the community 🐦 Twitter: @source_themes @GeorgeCushen #MadeWithAcademic 💡 Request a feature or report a bug ⬆️ Updating? View the Update Guide and Release Notes ❤️ Support development of Academic: ☕️ Donate a coffee 💵 Become a backer on Patreon 🖼️ Decorate your laptop or journal with an Academic sticker 👕 Wear the T-shirt 👩💻 Contribute Academic is mobile first with a responsive design to ensure that your site looks stunning on every device. Key features:\n Page builder - Create anything with widgets and elements Edit any type of content - Blog posts, publications, talks, slides, projects, and more! Create content in Markdown, Jupyter, or RStudio Plugin System - Fully customizable color and font themes Display Code and Math - Code highlighting and LaTeX math supported Integrations - Google Analytics, Disqus commenting, Maps, Contact Forms, and more! Beautiful Site - Simple and refreshing one page design Industry-Leading SEO - Help get your website found on search engines and social media Media Galleries - Display your images and videos with captions in a customizable gallery Mobile Friendly - Look amazing on every screen with a mobile friendly version of your site Multi-language - 15+ language packs including English, 中文, and Português Multi-user - Each author gets their own profile page Privacy Pack - Assists with GDPR Stand Out - Bring your site to life with animation, parallax backgrounds, and scroll effects One-Click Deployment - No servers. No databases. Only files. Themes Academic comes with automatic day (light) and night (dark) mode built-in. Alternatively, visitors can choose their preferred mode - click the sun/moon icon in the top right of the Demo to see it in action! Day/night mode can also be disabled by the site admin in params.toml.\n Choose a stunning theme and font for your site. 
Themes are fully customizable.\nEcosystem Academic Admin: An admin tool to import publications from BibTeX or import assets for an offline site Academic Scripts: Scripts to help migrate content to new versions of Academic Install You can choose from one of the following four methods to install:\n one-click install using your web browser (recommended) install on your computer using Git with the Command Prompt/Terminal app install on your computer by downloading the ZIP files install on your computer with RStudio Then personalize and deploy your new site.\nUpdating View the Update Guide.\nFeel free to star the project on Github to help keep track of updates.\nLicense Copyright 2016-present George Cushen.\nReleased under the MIT license.\n","date":1461110400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1555459200,"objectID":"279b9966ca9cf3121ce924dca452bb1c","permalink":"https://hanhao23.github.io/post/getting-started/","publishdate":"2016-04-20T00:00:00Z","relpermalink":"/post/getting-started/","section":"post","summary":"Create a beautifully simple website in under 10 minutes.","tags":["Academic","开源"],"title":"Academic: the website builder for Hugo","type":"post"},{"authors":null,"categories":["R"],"content":"\rR Markdown\rThis is an R Markdown document. Markdown is a simple formatting syntax for authoring HTML, PDF, and MS Word documents. For more details on using R Markdown see http://rmarkdown.rstudio.com.\nYou can embed an R code chunk like this:\nsummary(cars)\r## speed dist ## Min. : 4.0 Min. : 2.00 ## 1st Qu.:12.0 1st Qu.: 26.00 ## Median :15.0 Median : 36.00 ## Mean :15.4 Mean : 42.98 ## 3rd Qu.:19.0 3rd Qu.: 56.00 ## Max. :25.0 Max. :120.00\rfit \u0026lt;- lm(dist ~ speed, data = cars)\rfit\r## ## Call:\r## lm(formula = dist ~ speed, data = cars)\r## ## Coefficients:\r## (Intercept) speed ## -17.579 3.932\r\rIncluding Plots\rYou can also embed plots. See Figure 1 for example:\npar(mar = c(0, 1, 0, 1))\rpie(\rc(280, 60, 20),\rc(\u0026#39;Sky\u0026#39;, \u0026#39;Sunny side of pyramid\u0026#39;, \u0026#39;Shady side of pyramid\u0026#39;),\rcol = c(\u0026#39;#0292D8\u0026#39;, \u0026#39;#F7EA39\u0026#39;, \u0026#39;#C4B632\u0026#39;),\rinit.angle = -50, border = NA\r)\r\rFigure 1: A fancy pie chart.\r\r\r","date":1437703994,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1437703994,"objectID":"10065deaa3098b0da91b78b48d0efc71","permalink":"https://hanhao23.github.io/post/2015-07-23-r-rmarkdown/","publishdate":"2015-07-23T21:13:14-05:00","relpermalink":"/post/2015-07-23-r-rmarkdown/","section":"post","summary":"R Markdown\rThis is an R Markdown document. Markdown is a simple formatting syntax for authoring HTML, PDF, and MS Word documents. 
For more details on using R Markdown see http://rmarkdown.","tags":["R Markdown","plot","regression"],"title":"Hello R Markdown","type":"post"},{"authors":null,"categories":null,"content":"\r\rPackage and Data Preparation\rlibrary(psych) # For descriptives, ANOVA, and more\rlibrary(ez) # For ANOVA\rlibrary(tidyverse) # This is a collection of packages for data wrangling and visualizing\rlibrary(Rmisc)\rlibrary(reshape2) # For reorganizing data\r#library(lsr)\rlibrary(lme4) # For generalized linear mixed effect model\rlibrary(lmerTest) # For p values in generalized linear mixed effect model\r#library(emmeans)\r#library(dplyr)\r#library(forcats)\rlibrary(DescTools)\r#library(SuppDists)\rlibrary(effsize)\rlibrary(ggpubr)\r#library(MVN)\rlibrary(r2glmm)\rdat \u0026lt;- read.csv(\u0026quot;Demo Data Das.csv\u0026quot;) # Read in the csv data\r\rData forging\r# A quick look at the first several rows of the data. This is the wide format in which each row contains all information from one individual.\rhead(dat)\r## X D1S D2S D3S D1D D2D D3D Speed WMC\r## 1 1 6 6 10 3 4 2 52.57522 0.86260286\r## 2 2 10 6 8 4 5 1 87.14825 0.41562595\r## 3 3 8 10 9 5 2 3 65.23954 -1.74653932\r## 4 4 10 9 9 3 2 3 106.95063 -0.03135097\r## 5 5 7 9 9 3 3 4 89.83073 1.50946304\r## 6 6 7 8 2 5 6 4 161.63888 0.75637916\r# This function melts the wide-format data into long format, in which each row contains information from one trial.\rdat2 \u0026lt;- melt(dat, measure.vars = c(\u0026quot;D1S\u0026quot;,\u0026quot;D2S\u0026quot;,\u0026quot;D3S\u0026quot;, \u0026quot;D1D\u0026quot;,\u0026quot;D2D\u0026quot;,\u0026quot;D3D\u0026quot;), variable.name = \u0026quot;Condition\u0026quot;, value.name = \u0026quot;Score\u0026quot;)\r# Splitting the string variable \u0026quot;Condition\u0026quot; into two separate (repeated measure) variables\rdat3 \u0026lt;- separate(dat2, Condition, sep = 2, into = c(\u0026quot;Factor1\u0026quot;,\u0026quot;Factor2\u0026quot;), remove = TRUE)\r# I recoded the ID variable to a factor (for the ANOVA analyses, otherwise R will treat it as a DV)\rdat3$X \u0026lt;- as.factor(dat3$X)\r# Factor 1 has 3 levels and I took out the 3rd level (otherwise I will have to have 2 dummy-coded variables for Factor1 in regression).\rdat3Final \u0026lt;-subset(dat3, Factor1 != \u0026quot;D3\u0026quot;)\r# Now the current data are formatted as a \u0026#39;perfect\u0026#39; long format\rhead(dat3)\r## X Speed WMC Factor1 Factor2 Score\r## 1 1 52.57522 0.86260286 D1 S 6\r## 2 2 87.14825 0.41562595 D1 S 10\r## 3 3 65.23954 -1.74653932 D1 S 8\r## 4 4 106.95063 -0.03135097 D1 S 10\r## 5 5 89.83073 1.50946304 D1 S 7\r## 6 6 161.63888 0.75637916 D1 S 7\r\rANOVA with “ezANOVA” Package\rThis package gives us 3 options for calculating Sums of Squares, and the following note is copied directly from its documentation:\n\rNumeric value (either 1, 2 or 3) specifying the Sums of Squares “type” to employ when data are unbalanced (eg. when group sizes differ). type = 2 is the default because this will yield identical ANOVA results as type = 1 when data are balanced but type = 2 will additionally yield various assumption tests where appropriate. When data are unbalanced, users are warned that they should give special consideration to the value of type. 
type=3 will emulate the approach taken by popular commercial statistics packages like SAS and SPSS, but users are warned that this approach is not without criticism.\n\r# Type 1\rezANOVA(dat3Final, dv = .(Score), wid = .(X), within = .(Factor1, Factor2), type = 1, return_aov = TRUE, detailed = TRUE)\r## $ANOVA\r## Effect DFn DFd SSn SSd F p p\u0026lt;.05\r## 1 Factor1 1 89 0.40000 94.1000 0.3783209 5.400727e-01 ## 2 Factor2 1 89 846.40000 530.1000 142.1045086 3.864254e-20 *\r## 3 Factor1:Factor2 1 89 20.54444 133.9556 13.6497180 3.798565e-04 *\r## ges\r## 1 0.000527318\r## 2 0.527498096\r## 3 0.026383003\r## ## $aov\r## ## Call:\r## aov(formula = formula(aov_formula), data = data)\r## ## Grand Mean: 6.038889\r## ## Stratum 1: X\r## ## Terms:\r## Residuals\r## Sum of Squares 217.9556\r## Deg. of Freedom 89\r## ## Residual standard error: 1.564909\r## ## Stratum 2: X:Factor1\r## ## Terms:\r## Factor1 Residuals\r## Sum of Squares 0.4 94.1\r## Deg. of Freedom 1 89\r## ## Residual standard error: 1.028253\r## 1 out of 2 effects not estimable\r## Estimated effects are balanced\r## ## Stratum 3: X:Factor2\r## ## Terms:\r## Factor2 Residuals\r## Sum of Squares 846.4 530.1\r## Deg. of Freedom 1 89\r## ## Residual standard error: 2.440529\r## 1 out of 2 effects not estimable\r## Estimated effects are balanced\r## ## Stratum 4: X:Factor1:Factor2\r## ## Terms:\r## Factor1:Factor2 Residuals\r## Sum of Squares 20.54444 133.95556\r## Deg. of Freedom 1 89\r## ## Residual standard error: 1.226833\r## Estimated effects are balanced\r# Type 3\rezANOVA(dat3Final, dv = .(Score), wid = .(X), within = .(Factor1, Factor2), type = 3, return_aov = TRUE, detailed = TRUE)\r## $ANOVA\r## Effect DFn DFd SSn SSd F p p\u0026lt;.05\r## 1 (Intercept) 1 89 13128.54444 217.9556 5360.9115518 2.558857e-81 *\r## 2 Factor1 1 89 0.40000 94.1000 0.3783209 5.400727e-01 ## 3 Factor2 1 89 846.40000 530.1000 142.1045086 3.864254e-20 *\r## 4 Factor1:Factor2 1 89 20.54444 133.9556 13.6497180 3.798565e-04 *\r## ges\r## 1 0.9307951118\r## 2 0.0004096216\r## 3 0.4644141782\r## 4 0.0206133848\r## ## $aov\r## ## Call:\r## aov(formula = formula(aov_formula), data = data)\r## ## Grand Mean: 6.038889\r## ## Stratum 1: X\r## ## Terms:\r## Residuals\r## Sum of Squares 217.9556\r## Deg. of Freedom 89\r## ## Residual standard error: 1.564909\r## ## Stratum 2: X:Factor1\r## ## Terms:\r## Factor1 Residuals\r## Sum of Squares 0.4 94.1\r## Deg. of Freedom 1 89\r## ## Residual standard error: 1.028253\r## 1 out of 2 effects not estimable\r## Estimated effects are balanced\r## ## Stratum 3: X:Factor2\r## ## Terms:\r## Factor2 Residuals\r## Sum of Squares 846.4 530.1\r## Deg. of Freedom 1 89\r## ## Residual standard error: 2.440529\r## 1 out of 2 effects not estimable\r## Estimated effects are balanced\r## ## Stratum 4: X:Factor1:Factor2\r## ## Terms:\r## Factor1:Factor2 Residuals\r## Sum of Squares 20.54444 133.95556\r## Deg. of Freedom 1 89\r## ## Residual standard error: 1.226833\r## Estimated effects are balanced\r\rANOVA with “aov” Function\rAnovaModel \u0026lt;- aov(Score ~ Factor1*Factor2 + Error(X/(Factor1*Factor2)), data = dat3)\rsummary(AnovaModel)\r## ## Error: X\r## Df Sum Sq Mean Sq F value Pr(\u0026gt;F)\r## Residuals 89 255.7 2.873 ## ## Error: X:Factor1\r## Df Sum Sq Mean Sq F value Pr(\u0026gt;F) ## Factor1 2 47.69 23.846 17.19 1.5e-07 ***\r## Residuals 178 246.97 1.387 ## ---\r## Signif. 
codes: 0 \u0026#39;***\u0026#39; 0.001 \u0026#39;**\u0026#39; 0.01 \u0026#39;*\u0026#39; 0.05 \u0026#39;.\u0026#39; 0.1 \u0026#39; \u0026#39; 1\r## ## Error: X:Factor2\r## Df Sum Sq Mean Sq F value Pr(\u0026gt;F) ## Factor2 1 1822.3 1822 226.6 \u0026lt;2e-16 ***\r## Residuals 89 715.7 8 ## ---\r## Signif. codes: 0 \u0026#39;***\u0026#39; 0.001 \u0026#39;**\u0026#39; 0.01 \u0026#39;*\u0026#39; 0.05 \u0026#39;.\u0026#39; 0.1 \u0026#39; \u0026#39; 1\r## ## Error: X:Factor1:Factor2\r## Df Sum Sq Mean Sq F value Pr(\u0026gt;F) ## Factor1:Factor2 2 120.2 60.08 36.64 4.7e-14 ***\r## Residuals 178 291.8 1.64 ## ---\r## Signif. codes: 0 \u0026#39;***\u0026#39; 0.001 \u0026#39;**\u0026#39; 0.01 \u0026#39;*\u0026#39; 0.05 \u0026#39;.\u0026#39; 0.1 \u0026#39; \u0026#39; 1\r# Effect sizes\rEtaSq(AnovaModel, type = 1)\r## eta.sq eta.sq.part eta.sq.gen\r## Factor1 0.01362519 0.1618527 0.03061484\r## Factor2 0.52062030 0.7180224 0.54684319\r## Factor1:Factor2 0.03432802 0.2916487 0.07370411\r\rANOVA Plot\rThis is just a quick visualization of the condition differences.\nDescribeSummary \u0026lt;- summarySE(dat3Final, measurevar = \u0026quot;Score\u0026quot;, groupvars = c(\u0026quot;Factor1\u0026quot;,\u0026quot;Factor2\u0026quot;))\rpd = position_dodge(0.9)\rggplot(DescribeSummary, aes(x=Factor1, y=Score, fill=Factor2)) + geom_errorbar(aes(ymin=Score-se, ymax=Score+se), width=.2, size=1, position=pd) +\rgeom_bar(position = \u0026quot;dodge\u0026quot;, stat = \u0026quot;identity\u0026quot;, alpha = 0.7) +\rcoord_cartesian(ylim=c(2,9))+\rtheme_classic() +\rscale_fill_grey(start = .1, end = .8) +\rtheme(\raxis.title.y = element_text(vjust= 1.8),\raxis.title.x = element_text(vjust= -0.5),\raxis.title = element_text(face = \u0026quot;bold\u0026quot;))\r\rGeneralized Linear Mixed Effect Model\rI have attached the scripts that I used in my own project, but I have also only just started using GLMMs, so there is still a lot about the analysis that I am not 100% clear on.\nHere I dummy-coded the two factors and specified the random intercept without modeling any individual-level effects of the factors (nothing except (1|X) in the “random term” of the formula).\nThese are data that I made up, and here the fitted model “lmmodel1” appears to be singular: there might be too little variance in at least one effect, or the model might be misspecified.\n# Dummy coding\rdat3Final$F1Dummy \u0026lt;- dummy(dat3Final$Factor1,\u0026quot;D2\u0026quot;)\rdat3Final$F2Dummy \u0026lt;- dummy(dat3Final$Factor2,\u0026quot;S\u0026quot;)\r# Model specification and estimation\rlmmodel1 \u0026lt;- lmer(Score ~ WMC*F1Dummy*F2Dummy + (1|X), data = dat3Final, REML = FALSE)\rsummary(lmmodel1)\r## Linear mixed model fit by maximum likelihood . t-tests use Satterthwaite\u0026#39;s\r## method [lmerModLmerTest]\r## Formula: Score ~ WMC * F1Dummy * F2Dummy + (1 | X)\r## Data: dat3Final\r## ## AIC BIC logLik deviance df.resid ## 1395.6 1434.5 -687.8 1375.6 350 ## ## Scaled residuals: ## Min 1Q Median 3Q Max ## -3.3869 -0.7239 0.1256 0.7391 1.8144 ## ## Random effects:\r## Groups Name Variance Std.Dev. ## X (Intercept) 9.840e-19 9.919e-10\r## Residual 2.673e+00 1.635e+00\r## Number of obs: 360, groups: X, 90\r## ## Fixed effects:\r## Estimate Std. 
Error df t value Pr(\u0026gt;|t|) ## (Intercept) 4.77778 0.17234 360.00000 27.722 \u0026lt; 2e-16 ***\r## WMC -0.06142 0.21428 360.00000 -0.287 0.77458 ## F1Dummy -0.54444 0.24373 360.00000 -2.234 0.02611 * ## F2Dummy 2.58889 0.24373 360.00000 10.622 \u0026lt; 2e-16 ***\r## WMC:F1Dummy 0.13748 0.30304 360.00000 0.454 0.65035 ## WMC:F2Dummy 0.35115 0.30304 360.00000 1.159 0.24733 ## F1Dummy:F2Dummy 0.95556 0.34469 360.00000 2.772 0.00586 ** ## WMC:F1Dummy:F2Dummy -0.04944 0.42857 360.00000 -0.115 0.90823 ## ---\r## Signif. codes: 0 \u0026#39;***\u0026#39; 0.001 \u0026#39;**\u0026#39; 0.01 \u0026#39;*\u0026#39; 0.05 \u0026#39;.\u0026#39; 0.1 \u0026#39; \u0026#39; 1\r## ## Correlation of Fixed Effects:\r## (Intr) WMC F1Dmmy F2Dmmy WMC:F1Dm WMC:F2 F1D:F2\r## WMC 0.000 ## F1Dummy -0.707 0.000 ## F2Dummy -0.707 0.000 0.500 ## WMC:F1Dummy 0.000 -0.707 0.000 0.000 ## WMC:F2Dummy 0.000 -0.707 0.000 0.000 0.500 ## F1Dmmy:F2Dm 0.500 0.000 -0.707 -0.707 0.000 0.000 ## WMC:F1D:F2D 0.000 0.500 0.000 0.000 -0.707 -0.707 0.000\r## optimizer (nloptwrap) convergence code: 0 (OK)\r## boundary (singular) fit: see ?isSingular\r# Looking at the group effects\ranova(lmmodel1, type = 3)\r## Type III Analysis of Variance Table with Satterthwaite\u0026#39;s method\r## Sum Sq Mean Sq NumDF DenDF F value Pr(\u0026gt;F) ## WMC 0.220 0.220 1 360 0.0821 0.774576 ## F1Dummy 13.339 13.339 1 360 4.9898 0.026111 * ## F2Dummy 301.606 301.606 1 360 112.8248 \u0026lt; 2.2e-16 ***\r## WMC:F1Dummy 0.550 0.550 1 360 0.2058 0.650347 ## WMC:F2Dummy 3.589 3.589 1 360 1.3427 0.247334 ## F1Dummy:F2Dummy 20.544 20.544 1 360 7.6853 0.005857 ** ## WMC:F1Dummy:F2Dummy 0.036 0.036 1 360 0.0133 0.908226 ## ---\r## Signif. codes: 0 \u0026#39;***\u0026#39; 0.001 \u0026#39;**\u0026#39; 0.01 \u0026#39;*\u0026#39; 0.05 \u0026#39;.\u0026#39; 0.1 \u0026#39; \u0026#39; 1\r# Effect sizes. I googled and found this R2beta statistic that people report, but I am still trying to understand what it really means.\rr2beta(model = lmmodel1, partial = T)\r## Effect Rsq upper.CL lower.CL\r## 1 Model 0.487 0.555 0.426\r## 4 F2Dummy 0.245 0.321 0.175\r## 7 F1Dummy:F2Dummy 0.022 0.061 0.002\r## 3 F1Dummy 0.014 0.049 0.000\r## 6 WMC:F2Dummy 0.004 0.028 0.000\r## 5 WMC:F1Dummy 0.001 0.017 0.000\r## 2 WMC 0.000 0.016 0.000\r## 8 WMC:F1Dummy:F2Dummy 0.000 0.015 0.000\r\rGLMM Visualization\rI use this code to visualize my GLMM data. It helps to break the conditions down and visualize the correlations between the continuous predictor and the outcome (performance) within each unique condition.\nsplit_plot \u0026lt;- ggplot(aes(WMC, Score), data = dat3Final) + geom_point() + stat_smooth(method = \u0026quot;lm\u0026quot;, col = \u0026quot;red\u0026quot;, size = 2, alpha = 0.3) +\rfacet_wrap(~ Factor1*Factor2) + theme_classic2() + xlab(\u0026quot;WMC\u0026quot;) + ylab(\u0026quot;Test score\u0026quot;)\rsplit_plot\r\r","date":-1694044800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":-1694044800,"objectID":"26f66858ace86fc26b5a927d90ecedf8","permalink":"https://hanhao23.github.io/project/glmm/","publishdate":"1916-04-27T00:00:00Z","relpermalink":"/project/glmm/","section":"project","summary":"An example of a generalized mixed effects model (GLMM) with the lmer package.","tags":["R Stuff","Stats"],"title":"Data Analysis with Repeated Measures in a Generalized Mixed Effects Model","type":"project"}]