<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>Han Hao Cog Psych</title>
<link>https://hanhao23.github.io/</link>
<atom:link href="https://hanhao23.github.io/index.xml" rel="self" type="application/rss+xml" />
<description>Han Hao Cog Psych</description>
<generator>Source Themes Academic (https://sourcethemes.com/academic/)</generator><language>en-us</language><copyright>hanhao23 © 2023</copyright><lastBuildDate>Wed, 09 Mar 2022 17:58:47 -0800</lastBuildDate>
<image>
<url>https://hanhao23.github.io/img/Titleboard.jpg</url>
<title>Han Hao Cog Psych</title>
<link>https://hanhao23.github.io/</link>
</image>
<item>
<title>Example Page 1</title>
<link>https://hanhao23.github.io/courses/example/example1/</link>
<pubDate>Sun, 05 May 2019 00:00:00 +0100</pubDate>
<guid>https://hanhao23.github.io/courses/example/example1/</guid>
<description><p>In this tutorial, I&rsquo;ll share my top 10 tips for getting started with Academic:</p>
<h2 id="tip-1">Tip 1</h2>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis posuere tellus ac convallis placerat. Proin tincidunt magna sed ex sollicitudin condimentum. Sed ac faucibus dolor, scelerisque sollicitudin nisi. Cras purus urna, suscipit quis sapien eu, pulvinar tempor diam. Quisque risus orci, mollis id ante sit amet, gravida egestas nisl. Sed ac tempus magna. Proin in dui enim. Donec condimentum, sem id dapibus fringilla, tellus enim condimentum arcu, nec volutpat est felis vel metus. Vestibulum sit amet erat at nulla eleifend gravida.</p>
<p>Nullam vel molestie justo. Curabitur vitae efficitur leo. In hac habitasse platea dictumst. Sed pulvinar mauris dui, eget varius purus congue ac. Nulla euismod, lorem vel elementum dapibus, nunc justo porta mi, sed tempus est est vel tellus. Nam et enim eleifend, laoreet sem sit amet, elementum sem. Morbi ut leo congue, maximus velit ut, finibus arcu. In et libero cursus, rutrum risus non, molestie leo. Nullam congue quam et volutpat malesuada. Sed risus tortor, pulvinar et dictum nec, sodales non mi. Phasellus lacinia commodo laoreet. Nam mollis, erat in feugiat consectetur, purus eros egestas tellus, in auctor urna odio at nibh. Mauris imperdiet nisi ac magna convallis, at rhoncus ligula cursus.</p>
<p>Cras aliquam rhoncus ipsum, in hendrerit nunc mattis vitae. Duis vitae efficitur metus, ac tempus leo. Cras nec fringilla lacus. Quisque sit amet risus at ipsum pharetra commodo. Sed aliquam mauris at consequat eleifend. Praesent porta, augue sed viverra bibendum, neque ante euismod ante, in vehicula justo lorem ac eros. Suspendisse augue libero, venenatis eget tincidunt ut, malesuada at lorem. Donec vitae bibendum arcu. Aenean maximus nulla non pretium iaculis. Quisque imperdiet, nulla in pulvinar aliquet, velit quam ultrices quam, sit amet fringilla leo sem vel nunc. Mauris in lacinia lacus.</p>
<p>Suspendisse a tincidunt lacus. Curabitur at urna sagittis, dictum ante sit amet, euismod magna. Sed rutrum massa id tortor commodo, vitae elementum turpis tempus. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean purus turpis, venenatis a ullamcorper nec, tincidunt et massa. Integer posuere quam rutrum arcu vehicula imperdiet. Mauris ullamcorper quam vitae purus congue, quis euismod magna eleifend. Vestibulum semper vel augue eget tincidunt. Fusce eget justo sodales, dapibus odio eu, ultrices lorem. Duis condimentum lorem id eros commodo, in facilisis mauris scelerisque. Morbi sed auctor leo. Nullam volutpat a lacus quis pharetra. Nulla congue rutrum magna a ornare.</p>
<p>Aliquam in turpis accumsan, malesuada nibh ut, hendrerit justo. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Quisque sed erat nec justo posuere suscipit. Donec ut efficitur arcu, in malesuada neque. Nunc dignissim nisl massa, id vulputate nunc pretium nec. Quisque eget urna in risus suscipit ultricies. Pellentesque odio odio, tincidunt in eleifend sed, posuere a diam. Nam gravida nisl convallis semper elementum. Morbi vitae felis faucibus, vulputate orci placerat, aliquet nisi. Aliquam erat volutpat. Maecenas sagittis pulvinar purus, sed porta quam laoreet at.</p>
<h2 id="tip-2">Tip 2</h2>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis posuere tellus ac convallis placerat. Proin tincidunt magna sed ex sollicitudin condimentum. Sed ac faucibus dolor, scelerisque sollicitudin nisi. Cras purus urna, suscipit quis sapien eu, pulvinar tempor diam. Quisque risus orci, mollis id ante sit amet, gravida egestas nisl. Sed ac tempus magna. Proin in dui enim. Donec condimentum, sem id dapibus fringilla, tellus enim condimentum arcu, nec volutpat est felis vel metus. Vestibulum sit amet erat at nulla eleifend gravida.</p>
<p>Nullam vel molestie justo. Curabitur vitae efficitur leo. In hac habitasse platea dictumst. Sed pulvinar mauris dui, eget varius purus congue ac. Nulla euismod, lorem vel elementum dapibus, nunc justo porta mi, sed tempus est est vel tellus. Nam et enim eleifend, laoreet sem sit amet, elementum sem. Morbi ut leo congue, maximus velit ut, finibus arcu. In et libero cursus, rutrum risus non, molestie leo. Nullam congue quam et volutpat malesuada. Sed risus tortor, pulvinar et dictum nec, sodales non mi. Phasellus lacinia commodo laoreet. Nam mollis, erat in feugiat consectetur, purus eros egestas tellus, in auctor urna odio at nibh. Mauris imperdiet nisi ac magna convallis, at rhoncus ligula cursus.</p>
<p>Cras aliquam rhoncus ipsum, in hendrerit nunc mattis vitae. Duis vitae efficitur metus, ac tempus leo. Cras nec fringilla lacus. Quisque sit amet risus at ipsum pharetra commodo. Sed aliquam mauris at consequat eleifend. Praesent porta, augue sed viverra bibendum, neque ante euismod ante, in vehicula justo lorem ac eros. Suspendisse augue libero, venenatis eget tincidunt ut, malesuada at lorem. Donec vitae bibendum arcu. Aenean maximus nulla non pretium iaculis. Quisque imperdiet, nulla in pulvinar aliquet, velit quam ultrices quam, sit amet fringilla leo sem vel nunc. Mauris in lacinia lacus.</p>
<p>Suspendisse a tincidunt lacus. Curabitur at urna sagittis, dictum ante sit amet, euismod magna. Sed rutrum massa id tortor commodo, vitae elementum turpis tempus. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean purus turpis, venenatis a ullamcorper nec, tincidunt et massa. Integer posuere quam rutrum arcu vehicula imperdiet. Mauris ullamcorper quam vitae purus congue, quis euismod magna eleifend. Vestibulum semper vel augue eget tincidunt. Fusce eget justo sodales, dapibus odio eu, ultrices lorem. Duis condimentum lorem id eros commodo, in facilisis mauris scelerisque. Morbi sed auctor leo. Nullam volutpat a lacus quis pharetra. Nulla congue rutrum magna a ornare.</p>
<p>Aliquam in turpis accumsan, malesuada nibh ut, hendrerit justo. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Quisque sed erat nec justo posuere suscipit. Donec ut efficitur arcu, in malesuada neque. Nunc dignissim nisl massa, id vulputate nunc pretium nec. Quisque eget urna in risus suscipit ultricies. Pellentesque odio odio, tincidunt in eleifend sed, posuere a diam. Nam gravida nisl convallis semper elementum. Morbi vitae felis faucibus, vulputate orci placerat, aliquet nisi. Aliquam erat volutpat. Maecenas sagittis pulvinar purus, sed porta quam laoreet at.</p>
</description>
</item>
<item>
<title>Example Page 2</title>
<link>https://hanhao23.github.io/courses/example/example2/</link>
<pubDate>Sun, 05 May 2019 00:00:00 +0100</pubDate>
<guid>https://hanhao23.github.io/courses/example/example2/</guid>
<description><p>Here are some more tips for getting started with Academic:</p>
<h2 id="tip-3">Tip 3</h2>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis posuere tellus ac convallis placerat. Proin tincidunt magna sed ex sollicitudin condimentum. Sed ac faucibus dolor, scelerisque sollicitudin nisi. Cras purus urna, suscipit quis sapien eu, pulvinar tempor diam. Quisque risus orci, mollis id ante sit amet, gravida egestas nisl. Sed ac tempus magna. Proin in dui enim. Donec condimentum, sem id dapibus fringilla, tellus enim condimentum arcu, nec volutpat est felis vel metus. Vestibulum sit amet erat at nulla eleifend gravida.</p>
<p>Nullam vel molestie justo. Curabitur vitae efficitur leo. In hac habitasse platea dictumst. Sed pulvinar mauris dui, eget varius purus congue ac. Nulla euismod, lorem vel elementum dapibus, nunc justo porta mi, sed tempus est est vel tellus. Nam et enim eleifend, laoreet sem sit amet, elementum sem. Morbi ut leo congue, maximus velit ut, finibus arcu. In et libero cursus, rutrum risus non, molestie leo. Nullam congue quam et volutpat malesuada. Sed risus tortor, pulvinar et dictum nec, sodales non mi. Phasellus lacinia commodo laoreet. Nam mollis, erat in feugiat consectetur, purus eros egestas tellus, in auctor urna odio at nibh. Mauris imperdiet nisi ac magna convallis, at rhoncus ligula cursus.</p>
<p>Cras aliquam rhoncus ipsum, in hendrerit nunc mattis vitae. Duis vitae efficitur metus, ac tempus leo. Cras nec fringilla lacus. Quisque sit amet risus at ipsum pharetra commodo. Sed aliquam mauris at consequat eleifend. Praesent porta, augue sed viverra bibendum, neque ante euismod ante, in vehicula justo lorem ac eros. Suspendisse augue libero, venenatis eget tincidunt ut, malesuada at lorem. Donec vitae bibendum arcu. Aenean maximus nulla non pretium iaculis. Quisque imperdiet, nulla in pulvinar aliquet, velit quam ultrices quam, sit amet fringilla leo sem vel nunc. Mauris in lacinia lacus.</p>
<p>Suspendisse a tincidunt lacus. Curabitur at urna sagittis, dictum ante sit amet, euismod magna. Sed rutrum massa id tortor commodo, vitae elementum turpis tempus. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean purus turpis, venenatis a ullamcorper nec, tincidunt et massa. Integer posuere quam rutrum arcu vehicula imperdiet. Mauris ullamcorper quam vitae purus congue, quis euismod magna eleifend. Vestibulum semper vel augue eget tincidunt. Fusce eget justo sodales, dapibus odio eu, ultrices lorem. Duis condimentum lorem id eros commodo, in facilisis mauris scelerisque. Morbi sed auctor leo. Nullam volutpat a lacus quis pharetra. Nulla congue rutrum magna a ornare.</p>
<p>Aliquam in turpis accumsan, malesuada nibh ut, hendrerit justo. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Quisque sed erat nec justo posuere suscipit. Donec ut efficitur arcu, in malesuada neque. Nunc dignissim nisl massa, id vulputate nunc pretium nec. Quisque eget urna in risus suscipit ultricies. Pellentesque odio odio, tincidunt in eleifend sed, posuere a diam. Nam gravida nisl convallis semper elementum. Morbi vitae felis faucibus, vulputate orci placerat, aliquet nisi. Aliquam erat volutpat. Maecenas sagittis pulvinar purus, sed porta quam laoreet at.</p>
<h2 id="tip-4">Tip 4</h2>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis posuere tellus ac convallis placerat. Proin tincidunt magna sed ex sollicitudin condimentum. Sed ac faucibus dolor, scelerisque sollicitudin nisi. Cras purus urna, suscipit quis sapien eu, pulvinar tempor diam. Quisque risus orci, mollis id ante sit amet, gravida egestas nisl. Sed ac tempus magna. Proin in dui enim. Donec condimentum, sem id dapibus fringilla, tellus enim condimentum arcu, nec volutpat est felis vel metus. Vestibulum sit amet erat at nulla eleifend gravida.</p>
<p>Nullam vel molestie justo. Curabitur vitae efficitur leo. In hac habitasse platea dictumst. Sed pulvinar mauris dui, eget varius purus congue ac. Nulla euismod, lorem vel elementum dapibus, nunc justo porta mi, sed tempus est est vel tellus. Nam et enim eleifend, laoreet sem sit amet, elementum sem. Morbi ut leo congue, maximus velit ut, finibus arcu. In et libero cursus, rutrum risus non, molestie leo. Nullam congue quam et volutpat malesuada. Sed risus tortor, pulvinar et dictum nec, sodales non mi. Phasellus lacinia commodo laoreet. Nam mollis, erat in feugiat consectetur, purus eros egestas tellus, in auctor urna odio at nibh. Mauris imperdiet nisi ac magna convallis, at rhoncus ligula cursus.</p>
<p>Cras aliquam rhoncus ipsum, in hendrerit nunc mattis vitae. Duis vitae efficitur metus, ac tempus leo. Cras nec fringilla lacus. Quisque sit amet risus at ipsum pharetra commodo. Sed aliquam mauris at consequat eleifend. Praesent porta, augue sed viverra bibendum, neque ante euismod ante, in vehicula justo lorem ac eros. Suspendisse augue libero, venenatis eget tincidunt ut, malesuada at lorem. Donec vitae bibendum arcu. Aenean maximus nulla non pretium iaculis. Quisque imperdiet, nulla in pulvinar aliquet, velit quam ultrices quam, sit amet fringilla leo sem vel nunc. Mauris in lacinia lacus.</p>
<p>Suspendisse a tincidunt lacus. Curabitur at urna sagittis, dictum ante sit amet, euismod magna. Sed rutrum massa id tortor commodo, vitae elementum turpis tempus. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean purus turpis, venenatis a ullamcorper nec, tincidunt et massa. Integer posuere quam rutrum arcu vehicula imperdiet. Mauris ullamcorper quam vitae purus congue, quis euismod magna eleifend. Vestibulum semper vel augue eget tincidunt. Fusce eget justo sodales, dapibus odio eu, ultrices lorem. Duis condimentum lorem id eros commodo, in facilisis mauris scelerisque. Morbi sed auctor leo. Nullam volutpat a lacus quis pharetra. Nulla congue rutrum magna a ornare.</p>
<p>Aliquam in turpis accumsan, malesuada nibh ut, hendrerit justo. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Quisque sed erat nec justo posuere suscipit. Donec ut efficitur arcu, in malesuada neque. Nunc dignissim nisl massa, id vulputate nunc pretium nec. Quisque eget urna in risus suscipit ultricies. Pellentesque odio odio, tincidunt in eleifend sed, posuere a diam. Nam gravida nisl convallis semper elementum. Morbi vitae felis faucibus, vulputate orci placerat, aliquet nisi. Aliquam erat volutpat. Maecenas sagittis pulvinar purus, sed porta quam laoreet at.</p>
</description>
</item>
<item>
<title>Intro to Item Response Modeling in R</title>
<link>https://hanhao23.github.io/project/irttutorial/irt-tutorial-in-r-with-mirt-package/</link>
<pubDate>Wed, 09 Mar 2022 17:58:47 -0800</pubDate>
<guid>https://hanhao23.github.io/project/irttutorial/irt-tutorial-in-r-with-mirt-package/</guid>
<description>
<script src="https://hanhao23.github.io/project/irttutorial/irt-tutorial-in-r-with-mirt-package/index_files/header-attrs/header-attrs.js"></script>
<div id="overview" class="section level1">
<h1>Overview</h1>
<p>The goal of this document is to introduce applications of R for item response theory (IRT) modeling. Specifically, this document is focused on introducing basic IRT analyses for beginners using the <a href="https://cran.r-project.org/web/packages/mirt/mirt.pdf">“mirt” package</a> (Chalmers, 2012). It is not intended to be a full introduction to data analysis in R, nor to basic mathematics of item response theory. Instead, this tutorial will introduce the key concepts of IRT and important features of corresponding R packages/functions that facilitate IRT modeling for beginners. For a quick reference on the basics of IRT, please see the last section of recommended readings.</p>
<p>In this tutorial, we will focus on unidimensional IRT models by presenting brief R examples using “mirt”. Specifically, we will talk about:<br />
1. Key concepts in IRT;<br />
2. Dichotomous, 1PL Model (Rasch Model);<br />
3. Dichotomous, 2PL Model;<br />
4. Polytomous, Generalized Partial Credit Model.</p>
<div id="install-and-load-packages" class="section level2">
<h2>Install and Load Packages</h2>
<p>The first step is to make sure you have the R packages needed for this tutorial. We can obtain the “mirt” package from CRAN (using “install.packages(‘mirt’)”), or install the development version of the package from GitHub using the following code:</p>
<pre class="r"><code>install.packages(&#39;devtools&#39;)
library(&#39;devtools&#39;)
install_github(&#39;philchalmers/mirt&#39;)</code></pre>
<p>We need the following packages in this tutorial:</p>
<pre class="r"><code>library(tidyverse) # For data wrangling and basic visualizations
library(psych) # For descriptive stats and assumption checks
library(mirt) # IRT modeling</code></pre>
</div>
<div id="prepare-the-data" class="section level2">
<h2>Prepare the Data</h2>
<p>The next step is to read in and prepare the corresponding data files for the tutorial. The two data files we are using in this tutorial are available here: <a href="https://hanhao23.github.io/files/WMI_Read_Han_Wide.csv">ReadingSpan</a> and <a href="https://hanhao23.github.io/files/WMI_Rot_Han_Wide.csv">RotationSpan</a>.</p>
<p>These two datasets consist of item-level responses from 261 subjects on 2 complex span tasks: reading span and rotation span. In a complex span task, each item has a varying number of elements to process and memorize (item size). The responses in the two datasets are integers reflecting the number of correctly recalled elements for each item. For the reading span task, there are 15 items presented across 3 blocks, with item sizes varying from 3 to 7. For the rotation span task, there are 12 items presented across 3 blocks, with item sizes varying from 2 to 5.</p>
<pre class="r"><code># Conway et al. (2019) Data
wmir &lt;- read.csv(&quot;WMI_Read_Han_wide.csv&quot;)[,-1]
wmirot &lt;- read.csv(&quot;WMI_Rot_Han_wide.csv&quot;)[,-1]
colnames(wmir) &lt;- c(&quot;Subject&quot;,
&quot;V1.3&quot;, &quot;V1.4&quot;,&quot;V1.5&quot;, &quot;V1.6&quot;, &quot;V1.7&quot;,
&quot;V2.3&quot;, &quot;V2.4&quot;,&quot;V2.5&quot;, &quot;V2.6&quot;, &quot;V2.7&quot;,
&quot;V3.3&quot;, &quot;V3.4&quot;,&quot;V3.5&quot;, &quot;V3.6&quot;, &quot;V3.7&quot;)
colnames(wmirot) &lt;- c(&quot;Subject&quot;,
&quot;S1.2&quot;,&quot;S1.3&quot;, &quot;S1.4&quot;,&quot;S1.5&quot;,
&quot;S2.2&quot;,&quot;S2.3&quot;, &quot;S2.4&quot;,&quot;S2.5&quot;,
&quot;S3.2&quot;,&quot;S3.3&quot;, &quot;S3.4&quot;,&quot;S3.5&quot;)
# Wmi is the full dataset (N = 261)
wmi &lt;- merge(wmir, wmirot, by = &quot;Subject&quot;)
head(wmir)</code></pre>
<pre><code>## Subject V1.3 V1.4 V1.5 V1.6 V1.7 V2.3 V2.4 V2.5 V2.6 V2.7 V3.3 V3.4 V3.5 V3.6
## 1 1 3 4 5 2 3 3 4 2 2 1 1 2 4 0
## 2 2 3 2 5 6 6 3 3 3 4 7 3 3 5 6
## 3 3 3 4 3 6 6 3 4 5 3 4 3 4 5 5
## 4 4 2 2 2 2 3 3 2 2 0 0 3 4 5 2
## 5 5 3 3 3 4 7 3 4 5 6 5 3 1 5 6
## 6 6 3 4 4 6 2 3 4 5 6 2 3 4 4 6
## V3.7
## 1 3
## 2 7
## 3 2
## 4 1
## 5 7
## 6 7</code></pre>
<pre class="r"><code>head(wmirot)</code></pre>
<pre><code>## Subject S1.2 S1.3 S1.4 S1.5 S2.2 S2.3 S2.4 S2.5 S3.2 S3.3 S3.4 S3.5
## 1 1 2 1 0 0 0 3 1 0 1 1 0 0
## 2 2 2 2 1 1 2 3 2 2 2 2 1 1
## 3 3 2 1 4 1 2 3 4 1 1 2 0 3
## 4 4 2 2 3 1 2 0 1 1 2 2 3 4
## 5 5 2 3 4 5 2 3 4 4 2 3 4 2
## 6 7 2 3 0 2 2 3 4 3 2 1 2 1</code></pre>
<p>Labels of the variables in the datasets indicate the corresponding block and item size of a specific item. For example, in the reading span dataset (wmir), the 5th column (“V1.6”) presents subjects’ responses on the item with 6 elements in the 1st block. Subject 1 recalled 2 of the 6 elements correctly.</p>
<p>For a detailed summary of the two complex span tasks, see <a href="https://doi.org/10.3758/BF03196772">Conway et al. (2005)</a> and <a href="https://doi.org/10.3758/s13421-021-01242-6">Hao &amp; Conway (2021)</a>.</p>
</div>
</div>
<div id="key-concepts-in-item-response-theory" class="section level1">
<h1>Key Concepts in Item Response Theory</h1>
<p>In this section we will briefly go over some key concepts and terms we will be using in this IRT tutorial.</p>
<p><strong>Scale</strong>: In this tutorial, a scale refers to any quantitative system that is designed to reflect an individual’s standing or level of ability on a latent construct or latent trait. A scale consists of multiple manifest items. These items can be questions in a survey, problems in a test, or trials in an experiment.<br />
- <strong>Dichotomous IRT models</strong> are applied to the items with two possible response categories (yes/no, correct/incorrect, etc.)<br />
- <strong>Polytomous IRT models</strong> are applicable if the items have more than two possible response categories (Likert-type response scale, questions with partial credits, etc.)</p>
<p><strong>Dimensionality</strong>: The number of distinguishable attributes that a scale reflects.<br />
- For <strong>unidimensional IRT models</strong>, it is assumed that the scale reflects only one dimension, such that all items in the scale are assumed to reflect a unitary latent trait.<br />
- For <strong>multidimensional IRT models</strong>, multiple dimensions can be reflected and estimated, such that the responses to the items in the scales are assumed to reflect properties of multiple latent traits.</p>
<p><strong>Theta</strong> (<span class="math inline">\(\Theta\)</span>): the latent construct or trait that is measured by the scale. It represents individual differences on the latent construct being measured.</p>
<p><strong>Information</strong>: an index characterizing the precision with which an item or test measures the underlying latent construct, with higher information denoting more precision. In IRT, information is expressed as a function of trait level, so the information function shows the range of trait levels over which an item or test is most useful for distinguishing among individuals.</p>
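<p>To make this concrete, the information contributed by a single dichotomous item can be computed directly from its response function. The sketch below (base R, not part of the original analyses; the parameter values are hypothetical) plots the information function of a 2PL item, which peaks at the item’s difficulty:</p>
<pre class="r"><code># 2PL item response function: probability of a correct response at trait level theta
# a = discrimination, b = difficulty (hypothetical values for illustration)
p2pl &lt;- function(theta, a = 1.5, b = 0) 1 / (1 + exp(-a * (theta - b)))
# Item information: I(theta) = a^2 * P(theta) * (1 - P(theta))
info2pl &lt;- function(theta, a = 1.5, b = 0) a^2 * p2pl(theta, a, b) * (1 - p2pl(theta, a, b))
theta &lt;- seq(-4, 4, by = 0.1)
plot(theta, info2pl(theta), type = &quot;l&quot;, xlab = &quot;Theta&quot;, ylab = &quot;Information&quot;)</code></pre>
<p>Information is maximal at <em>theta</em> = <em>b</em>, where the probability of a correct response is .5, so an item measures most precisely for people whose trait level is near its difficulty.</p>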
<p><strong>Item Characteristic Curve</strong> (ICC): AKA item trace curve. The ICC is an item response function that models the relationship between a person’s probability of endorsing an item category (<em>p</em>) and their level on the construct measured by the scale (<span class="math inline">\(\Theta\)</span>). The slope of the curve indicates how well the item separates people: a steep curve (high slope) discriminates sharply around its threshold, whereas a flat, wide curve (low slope) cannot adequately differentiate people based on ability level.</p>
<p><strong>Item Difficulty Parameter</strong> (<em>b</em>): the trait level on the latent scale at which a person has a 50% chance of responding positively to the item. This definition of item difficulty applies to dichotomous models. For polytomous models, multiple threshold parameters (<em>d</em>s) are estimated for each item, so that the latent trait differences between response categories are accounted for.<br />
- Conceptually, the role of item difficulty parameters in an IRT model is equivalent to the intercepts of manifests in a latent factor model.</p>
<p><strong>Item Discrimination Parameter</strong> (<em>a</em>): how accurately a given item differentiates individuals based on ability level; it describes the strength of an item’s discrimination between people with trait levels below and above the threshold <em>b</em>. This parameter can also be interpreted as describing how strongly an item is related to the latent trait measured by the scale. In other words, the <em>a</em> parameter for an item reflects the magnitude of item reliability (how much the item contributes to total score variance).<br />
- Conceptually, the role of item discrimination parameters in an IRT model is equivalent to the factor loadings of manifests in a latent factor model.</p>
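<p>The two parameters can be visualized by drawing item characteristic curves directly. The following sketch (base R with hypothetical parameter values, not part of the original analyses) overlays two 2PL trace lines to show that <em>b</em> shifts the curve along the trait continuum while <em>a</em> controls its steepness:</p>
<pre class="r"><code>icc &lt;- function(theta, a, b) 1 / (1 + exp(-a * (theta - b)))
theta &lt;- seq(-4, 4, by = 0.1)
# An easy, highly discriminating item (b = -1, a = 2)
plot(theta, icc(theta, a = 2, b = -1), type = &quot;l&quot;, ylim = c(0, 1),
     xlab = &quot;Theta&quot;, ylab = &quot;P(correct)&quot;)
# A harder, less discriminating item (b = 1, a = 0.8)
lines(theta, icc(theta, a = 0.8, b = 1), lty = 2)
abline(h = 0.5, col = &quot;grey&quot;) # each curve crosses .5 at theta = b</code></pre>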
<p>The “mirt” package includes an interactive graphical interface (a shiny app) that lets you modify the parameters of an exemplar IRT item in real time. To build intuition for these key concepts, you can run the line of code below in your R console to launch the app with exemplar item trace plots for different types of IRT models.</p>
<pre class="r"><code>itemplot(shiny = TRUE)</code></pre>
</div>
<div id="unidimensional-dichotomous-irt-models" class="section level1">
<h1>Unidimensional Dichotomous IRT Models</h1>
<p>In this section we will start with the basic unidimensional dichotomous model, in which all items are assumed to measure one latent trait, and the responses to items are all binary (0 = incorrect/no, 1 = correct/yes). We will use the rotation span dataset (wmirot) in this section. As noted above, the raw data present the number of correctly recalled elements for each item, which are not binary responses. Thus, we need to re-score these items using an all-or-nothing unit scoring approach (Conway et al., 2005; p.775), such that only a response with all elements in the item correctly recalled is scored as “correct” (1), while all other responses are scored as “incorrect” (0). The “mirt” package has a built-in function, “key2binary”, to assign binary scores to items in a dataset based on a given answer key. With it, we can convert all the initial rotation span responses to a binary response scale.</p>
<pre class="r"><code>dat1 &lt;- key2binary(wmirot[,-1],
key = c(2,3,4,5,2,3,4,5,2,3,4,5))
head(dat1)</code></pre>
<pre><code>## S1.2 S1.3 S1.4 S1.5 S2.2 S2.3 S2.4 S2.5 S3.2 S3.3 S3.4 S3.5
## [1,] 1 0 0 0 0 1 0 0 0 0 0 0
## [2,] 1 0 0 0 1 1 0 0 1 0 0 0
## [3,] 1 0 1 0 1 1 1 0 0 0 0 0
## [4,] 1 0 0 0 1 0 0 0 1 0 0 0
## [5,] 1 1 1 1 1 1 1 0 1 1 1 0
## [6,] 1 1 0 0 1 1 1 0 1 0 0 0</code></pre>
<div id="assumption-checks" class="section level2">
<h2>Assumption Checks</h2>
<p>Unidimensional IRT models assume that all items are measuring a single continuous latent variable. There are different ways to test the unidimensionality assumption. For example, we can estimate McDonald’s hierarchical Omega (<span class="math inline">\(\omega_h\)</span>), which conceptually reflects the percentage of variance in scale scores accounted for by a general factor. An arbitrary cutoff of <span class="math inline">\(\omega_h\)</span> &gt; .70 is usually used as a rule of thumb. Unfortunately, the current data fall short of this cutoff (<span class="math inline">\(\omega_h\)</span> = .56), but for demonstration purposes we proceed with the analyses.</p>
<p>For further details on unidimensionality, see <a href="https://doi.org/10.1007/BF02289858">Berge &amp; Socan (2004)</a> and <a href="https://doi.org/10.1177/0146621605278814">Zinbarg, Yovel, Revelle, &amp; McDonald (2006)</a>.</p>
<p>Another assumption of IRT is local independence: item responses are independent of one another after conditioning on the latent trait. This assumption can be checked during the model fitting process by investigating the residuals and computing local dependence indices with the “residuals” function.</p>
<pre class="r"><code>summary(omega(dat1, plot = F))</code></pre>
<pre><code>## Omega
## omega(m = dat1, plot = F)
## Alpha: 0.7
## G.6: 0.69
## Omega Hierarchical: 0.56
## Omega H asymptotic: 0.77
## Omega Total 0.73
##
## With eigenvalues of:
## g F1* F2* F3*
## 1.72 0.23 0.60 0.43
## The degrees of freedom for the model is 33 and the fit was 0.07
## The number of observations was 262 with Chi Square = 17.75 with prob &lt; 0.99
##
## The root mean square of the residuals is 0.03
## The df corrected root mean square of the residuals is 0.04
##
## RMSEA and the 0.9 confidence intervals are 0 0 0
## BIC = -166.01Explained Common Variance of the general factor = 0.58
##
## Total, General and Subset omega for each subset
## g F1* F2* F3*
## Omega total for total scores and subscales 0.73 0.55 0.48 0.55
## Omega general for total scores and subscales 0.56 0.45 0.13 0.37
## Omega group for total scores and subscales 0.11 0.11 0.35 0.18</code></pre>
</div>
<div id="pl-rasch-model" class="section level2">
<h2>1PL (Rasch) Model</h2>
<p>We can start with a 1PL (Rasch) model, in which the discrimination parameters for all items are fixed to 1, while difficulty parameters are freely estimated in the model.</p>
<pre class="r"><code># Model specification. Here we indicate that all columns in the dataset (1 to 12) measure the same latent factor (&quot;rotation&quot;)
uniDich.model1 &lt;- mirt.model(&quot;rotation = 1 - 12&quot;)
# Model estimation. Here we indicate that we are estimating a Rasch model, and standard errors for parameters are estimated.
uniDich.result1 &lt;- mirt::mirt(dat1, uniDich.model1, itemtype = &quot;Rasch&quot;, SE = TRUE)</code></pre>
<div id="model-and-item-fits" class="section level3">
<h3>Model and Item Fits</h3>
<p>We can now investigate model fit using the “M2” function, which provides the M2 statistic, the M2-based root mean square error of approximation (RMSEA), the standardized root mean square residual (SRMSR), and comparative fit indices (CFI &amp; TLI). A commonly used set of cutoff values is: RMSEA &lt; .06, SRMSR &lt; .08, CFI &gt; .95, and TLI &gt; .95; models meeting these cutoffs are generally considered to fit well. In this example, the non-significant M2 and all fit indices indicate good fit.</p>
<pre class="r"><code>M2(uniDich.result1)</code></pre>
<pre><code>## M2 df p RMSEA RMSEA_5 RMSEA_95 SRMSR TLI CFI
## stats 57.35425 65 0.7388255 0 0 0.02786493 0.05863013 1.016119 1</code></pre>
<p>In IRT analyses, we can also assess how well each item fits the model, which is especially useful for item inspection and scale revision. The “itemfit” function provides the S-X2 index as well as RMSEA values for each item. A non-significant S-X2 and RMSEA &lt; .06 are usually considered evidence of adequate item fit. In the current example, all items fit the model well by these criteria.</p>
<pre class="r"><code>itemfit(uniDich.result1)</code></pre>
<pre><code>## item S_X2 df.S_X2 RMSEA.S_X2 p.S_X2
## 1 S1.2 7.250 5 0.042 0.203
## 2 S1.3 11.494 6 0.059 0.074
## 3 S1.4 7.566 6 0.032 0.272
## 4 S1.5 6.330 5 0.032 0.275
## 5 S2.2 4.483 6 0.000 0.612
## 6 S2.3 5.462 6 0.000 0.486
## 7 S2.4 5.275 6 0.000 0.509
## 8 S2.5 3.816 5 0.000 0.576
## 9 S3.2 1.528 6 0.000 0.958
## 10 S3.3 7.055 6 0.026 0.316
## 11 S3.4 5.081 6 0.000 0.533
## 12 S3.5 8.589 5 0.052 0.127</code></pre>
<p>Along with the model and item fits, we can also check the local independence assumption using the “residuals” function. The following script provides the LD matrix as well as dfs and p-values for all LD indices. Large, significant LD indices flag potential local dependence and may require further attention.</p>
<pre class="r"><code>residuals(uniDich.result1, df.p = T)</code></pre>
<pre><code>## Degrees of freedom (lower triangle) and p-values:
##
## S1.2 S1.3 S1.4 S1.5 S2.2 S2.3 S2.4 S2.5 S3.2 S3.3 S3.4 S3.5
## S1.2 NA 0.812 0.572 0.317 0.736 0.537 0.572 0.038 0.228 0.142 0.224 0.302
## S1.3 1 NA 0.441 0.326 0.221 0.176 0.827 0.232 0.553 0.508 0.160 0.778
## S1.4 1 1.000 NA 0.833 0.077 0.643 0.414 0.248 0.694 0.237 0.412 0.482
## S1.5 1 1.000 1.000 NA 0.486 0.734 0.575 0.144 0.600 0.677 0.879 0.553
## S2.2 1 1.000 1.000 1.000 NA 0.741 0.758 0.214 0.233 0.084 0.770 0.214
## S2.3 1 1.000 1.000 1.000 1.000 NA 0.265 0.505 0.261 0.126 0.194 0.222
## S2.4 1 1.000 1.000 1.000 1.000 1.000 NA 0.429 0.694 0.534 0.063 0.248
## S2.5 1 1.000 1.000 1.000 1.000 1.000 1.000 NA 0.523 0.187 0.866 0.086
## S3.2 1 1.000 1.000 1.000 1.000 1.000 1.000 1.000 NA 0.151 0.774 0.872
## S3.3 1 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 NA 0.352 0.717
## S3.4 1 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 NA 0.156
## S3.5 1 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 NA
##
## LD matrix (lower triangle) and standardized values:
##
## S1.2 S1.3 S1.4 S1.5 S2.2 S2.3 S2.4 S2.5 S3.2 S3.3
## S1.2 NA -0.015 -0.035 0.062 -0.021 -0.038 -0.035 -0.128 0.074 -0.091
## S1.3 0.057 NA 0.048 0.061 0.076 0.084 0.013 0.074 0.037 0.041
## S1.4 0.319 0.595 NA -0.013 0.109 -0.029 -0.051 -0.071 -0.024 0.073
## S1.5 1.002 0.966 0.045 NA -0.043 -0.021 -0.035 -0.090 0.032 0.026
## S2.2 0.114 1.498 3.120 0.486 NA 0.020 0.019 0.077 0.074 0.107
## S2.3 0.382 1.827 0.215 0.115 0.109 NA -0.069 -0.041 0.069 0.094
## S2.4 0.319 0.048 0.669 0.314 0.095 1.242 NA -0.049 -0.024 -0.038
## S2.5 4.286 1.428 1.335 2.136 1.547 0.444 0.626 NA 0.039 -0.082
## S3.2 1.453 0.351 0.155 0.274 1.425 1.265 0.155 0.408 NA 0.089
## S3.3 2.158 0.439 1.401 0.174 2.979 2.336 0.387 1.744 2.059 NA
## S3.4 1.479 1.976 0.673 0.023 0.085 1.684 3.445 0.029 0.082 0.866
## S3.5 1.067 0.080 0.495 0.352 1.547 1.491 1.335 2.953 0.026 0.132
## S3.4 S3.5
## S1.2 -0.075 -0.064
## S1.3 0.087 -0.017
## S1.4 -0.051 0.043
## S1.5 0.009 -0.037
## S2.2 0.018 0.077
## S2.3 0.080 -0.075
## S2.4 -0.115 -0.071
## S2.5 0.010 -0.106
## S3.2 -0.018 -0.010
## S3.3 0.057 -0.022
## S3.4 NA -0.088
## S3.5 2.012 NA</code></pre>
<p>Beyond model and item fits, the “mirt” package also provides person fit statistics, such as the Zh statistic available through the “personfit” function. In general, person fit statistics indicate how much a person’s responses deviate from the model prediction. See the “mirt” documentation and <a href="https://doi.org/10.1111/j.2044-8317.1985.tb00817.x">Drasgow, Levine, and Williams (1985)</a> for further details.</p>
<pre class="r"><code>head(personfit(uniDich.result1))</code></pre>
<pre><code>## outfit z.outfit infit z.infit Zh
## 1 0.5429718 0.02890873 0.9234628 -0.06855285 0.3245494
## 2 0.3222275 -0.81981691 0.4792737 -1.50551325 1.2108896
## 3 1.3572362 0.68361018 1.6380112 1.45638097 -1.2704873
## 4 0.2944648 -0.60327679 0.4464911 -1.66830682 1.2602444
## 5 0.3693042 -0.15612318 0.6267169 -0.83751758 0.8026636
## 6 0.5580941 -0.53989282 0.8091663 -0.35965401 0.5450324</code></pre>
</div>
<div id="irt-paramters" class="section level3">
<h3>IRT Parameters</h3>
<p>We can obtain the item parameters from the model. As mentioned above, for a Rasch model all discrimination parameters are fixed to 1, while difficulty parameters are freely estimated. In the output, the second column (“a”) contains the discrimination parameters and the third column (“b”) contains the difficulty parameters.</p>
<p>In this example, we present the IRT parameters using the conventional parameterization, in which a larger <em>b</em> indicates a more difficult item. For example, the second item, S1.3 (set size 3 in the 1st block), has b = -0.94. According to the model, a person whose ability is 0.94 standard deviations below average has a 50% chance of answering this item (S1.3) correctly.</p>
<pre class="r"><code># IRT parameters from the estimated model. Here we request simplified output without SEs/CIs (simplify = TRUE) in the conventional IRT parameterization (IRTpars = TRUE).
coef(uniDich.result1, simplify = TRUE, IRTpars = TRUE)$items</code></pre>
<pre><code>## a b g u
## S1.2 1 -3.0033640 0 1
## S1.3 1 -0.9388653 0 1
## S1.4 1 0.5864289 0 1
## S1.5 1 2.4373369 0 1
## S2.2 1 -2.6395716 0 1
## S2.3 1 -1.4247025 0 1
## S2.4 1 0.5864289 0 1
## S2.5 1 2.3201998 0 1
## S3.2 1 -2.3398445 0 1
## S3.3 1 -0.9845155 0 1
## S3.4 1 0.3420016 0 1
## S3.5 1 2.3201998 0 1</code></pre>
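<p>The interpretation of the <em>b</em> parameter can be verified directly from the logistic form of the Rasch model, <span class="math inline">\(P(\theta) = 1/(1 + e^{-(\theta - b)})\)</span>. The following base-R sketch (no mirt required) plugs in the rounded estimate for S1.3 from the output above:</p>

```r
# Rasch item response function with a = 1
b <- -0.94               # rounded difficulty estimate for S1.3
theta <- c(-2, b, 0, 2)  # a few ability levels
p <- plogis(theta - b)   # plogis() is the standard logistic CDF
round(p, 2)              # at theta = b the probability is exactly 0.50
```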
</div>
<div id="visualizing-the-item-and-scale-characteristics" class="section level3">
<h3>Visualizing the Item and Scale Characteristics</h3>
<p>We can visualize the item and scale characteristics from the model with a variety of plot methods in “mirt”. These plots show how items, and the scale as a whole, relate to the latent trait.<br />
We can start with the item trace plots for the 12 items, which visualize the probability of responding “1” to an item as a function of <span class="math inline">\(\theta\)</span>. According to the trace plots in this example, the 3 items with set size 2 (S1.2, S2.2, S3.2) are relatively easy: subjects with average ability (<span class="math inline">\(\theta\)</span> = 0) are estimated to have an 80% to 90% chance of answering correctly. In contrast, the 3 items with set size 5 are relatively hard: subjects with average ability are estimated to have only a 10% to 20% chance of answering correctly.</p>
<pre class="r"><code># In the function we can specify the range of theta we&#39;d like to visualize on the x axis of the plots. Here we set it to -4 to 4 (4 SDs below and above average).
plot(uniDich.result1, type = &quot;trace&quot;, theta_lim = c(-4,4))</code></pre>
<p><img src="https://hanhao23.github.io/project/irttutorial/irt-tutorial-in-r-with-mirt-package/index_files/figure-html/unidich%20Rasch%20ploting1-1.png" width="672" />
Beyond the item trace plots, we can also look at the item information plots, which visualize how much “information” about the latent trait an item provides. Conceptually, higher information implies less measurement error, and the peak of an item information curve falls at the item’s b parameter. Thus, for easy items (such as the 3 items in the leftmost column below), little information is provided about subjects with high ability levels (because they will almost always answer correctly).</p>
<pre class="r"><code># We can specify the exact set of items to plot. For example, adding the argument &quot;which.items = c(1,5,9)&quot; would plot only the 1st, 5th, and 9th items, i.e., the 3 items with set size 2 in the task. Feel free to give it a try.
plot(uniDich.result1, type = &quot;infotrace&quot;)</code></pre>
<p><img src="https://hanhao23.github.io/project/irttutorial/irt-tutorial-in-r-with-mirt-package/index_files/figure-html/unidich%20Rasch%20ploting2-1.png" width="672" />
The “itemplot” function provides more details for a single item. Here is an example visualizing the item trace plot of item 1 with a confidence envelope.</p>
<pre class="r"><code>itemplot(uniDich.result1, item = 1, type = &quot;trace&quot;, CE = TRUE)</code></pre>
<p><img src="https://hanhao23.github.io/project/irttutorial/irt-tutorial-in-r-with-mirt-package/index_files/figure-html/unidich%20Rasch%20ploting3-1.png" width="672" />
Lastly, we can plot the information curve for the entire test. It is the sum of all item information curves and indicates how much information the test provides at different latent trait levels. As noted above, higher information indicates less measurement error (lower SE). An ideal (but unattainable) test would have high information at all latent trait levels.</p>
<pre class="r"><code>plot(uniDich.result1, type = &quot;infoSE&quot;)</code></pre>
<p><img src="https://hanhao23.github.io/project/irttutorial/irt-tutorial-in-r-with-mirt-package/index_files/figure-html/unidich%20Rasch%20ploting4-1.png" width="672" /></p>
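<p>The relationship between the test information and SE curves above can be sketched numerically. For a Rasch item (a = 1), item information at <span class="math inline">\(\theta\)</span> is <span class="math inline">\(P(\theta)(1 - P(\theta))\)</span>, test information is the sum over items, and SE is the reciprocal square root of information. A base-R sketch using the rounded difficulty estimates reported earlier:</p>

```r
# Rounded Rasch difficulty estimates for the 12 items (from the coef() output)
b <- c(-3.00, -0.94, 0.59, 2.44, -2.64, -1.42, 0.59, 2.32,
       -2.34, -0.98, 0.34, 2.32)

test_info <- function(theta) {
  p <- plogis(theta - b)  # probability correct for each item
  sum(p * (1 - p))        # Rasch item information, summed over items
}

info <- sapply(c(-2, 0, 2), test_info)
se <- 1 / sqrt(info)      # conditional standard error of measurement
round(rbind(info = info, se = se), 2)
```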
</div>
</div>
<div id="pl-model" class="section level2">
<h2>2PL Model</h2>
<p>We can also estimate a 2PL model on the same binary data from the rotation span task. In a 2PL model, not only the item difficulty parameters (<em>b</em>s) but also the item discrimination parameters (<em>a</em>s) are estimated. Thus, a 2PL model allows items to vary in their ability to discriminate between persons with different latent trait levels.</p>
<pre class="r"><code>uniDich.model2 &lt;- mirt.model(&quot;rotation = 1 - 12&quot;)
uniDich.result2 &lt;- mirt::mirt(dat1, uniDich.model2, itemtype = &quot;2PL&quot;, SE = TRUE)</code></pre>
<div id="model-and-item-fits-1" class="section level3">
<h3>Model and Item Fits</h3>
<p>Similarly, we can obtain model fit and item fit statistics. In this example, the overall fit of the 2PL model is good.</p>
<pre class="r"><code>M2(uniDich.result2)</code></pre>
<pre><code>## M2 df p RMSEA RMSEA_5 RMSEA_95 SRMSR TLI CFI
## stats 40.90802 54 0.9054692 0 0 0.01662836 0.04004632 1.033223 1</code></pre>
<p>Item fit statistics for this 2PL model indicate that the 2nd item (S1.3) may need further attention (significant S-X2 and large RMSEA).</p>
<pre class="r"><code>itemfit(uniDich.result2)</code></pre>
<pre><code>## item S_X2 df.S_X2 RMSEA.S_X2 p.S_X2
## 1 S1.2 1.607 6 0.000 0.952
## 2 S1.3 10.427 4 0.078 0.034
## 3 S1.4 6.922 5 0.038 0.227
## 4 S1.5 5.373 5 0.017 0.372
## 5 S2.2 4.646 5 0.000 0.461
## 6 S2.3 2.783 5 0.000 0.733
## 7 S2.4 3.550 6 0.000 0.737
## 8 S2.5 3.390 5 0.000 0.640
## 9 S3.2 1.839 5 0.000 0.871
## 10 S3.3 7.201 6 0.028 0.303
## 11 S3.4 6.146 6 0.010 0.407
## 12 S3.5 7.840 5 0.047 0.165</code></pre>
</div>
<div id="irt-parameters" class="section level3">
<h3>IRT Parameters</h3>
<p>We can obtain the item parameters from the model. For a 2PL model, both the item discrimination parameters and the item difficulty parameters are freely estimated. As in the Rasch output, the second column (“a”) contains the discrimination parameters and the third column (“b”) contains the difficulty parameters. Unlike the Rasch model, every item now has a unique discrimination parameter.</p>
<p>For a dichotomous 2PL model, the item discrimination parameter reflects how well an item can discriminate between persons with low and high ability/trait levels. The <em>a</em> parameter also reflects the degree to which an item is related to the latent trait measured by the scale. Thus, a low discrimination parameter usually signals a potential issue with an item relative to the rest of the scale.</p>
<pre class="r"><code>coef(uniDich.result2, simplify = TRUE, IRTpars = TRUE)$items</code></pre>
<pre><code>## a b g u
## S1.2 0.8196465 -3.2455844 0 1
## S1.3 1.7771645 -0.6171063 0 1
## S1.4 1.3365861 0.4431670 0 1
## S1.5 1.2963837 1.9023919 0 1
## S2.2 1.7058414 -1.7589910 0 1
## S2.3 1.3696765 -1.0753956 0 1
## S2.4 0.8589407 0.5905375 0 1
## S2.5 0.9387174 2.2626112 0 1
## S3.2 1.4477736 -1.7033180 0 1
## S3.3 1.5741235 -0.6888211 0 1
## S3.4 1.2459076 0.2655880 0 1
## S3.5 0.9445384 2.2521152 0 1</code></pre>
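<p>The role of the discrimination parameter can be illustrated with the 2PL response function, <span class="math inline">\(P(\theta) = 1/(1 + e^{-a(\theta - b)})\)</span>. Below is a base-R sketch comparing a low-discrimination item with a high-discrimination item, using estimates rounded from the output above:</p>

```r
# 2PL item response function
p2pl <- function(theta, a, b) plogis(a * (theta - b))

theta <- seq(-2, 2, by = 1)
# S1.2: low discrimination (a = 0.82); S1.3: high discrimination (a = 1.78)
round(p2pl(theta, a = 0.82, b = -3.25), 2)
round(p2pl(theta, a = 1.78, b = -0.62), 2)
# S1.3's larger a makes its probability rise much more sharply around its b
```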
</div>
<div id="visualizing-the-item-and-scale-plots" class="section level3">
<h3>Visualizing the Item and Scale Plots</h3>
<p>Compared to the Rasch model, the estimated <em>a</em> parameters in a 2PL model are also reflected in the item trace plots: differences in <em>a</em>s appear as differences in the steepness of the item trace curves, with higher <em>a</em>s producing steeper curves.</p>
<pre class="r"><code>plot(uniDich.result2, type = &quot;trace&quot;, theta_lim = c(-4,4))</code></pre>
<p><img src="https://hanhao23.github.io/project/irttutorial/irt-tutorial-in-r-with-mirt-package/index_files/figure-html/unidich%202PL%20ploting1-1.png" width="672" /></p>
<p>The freely estimated discrimination parameters are also reflected in the item information plots. Compared to the Rasch model, the peaks of the information curves now vary across items.</p>
<pre class="r"><code>plot(uniDich.result2, type = &quot;infotrace&quot;)</code></pre>
<p><img src="https://hanhao23.github.io/project/irttutorial/irt-tutorial-in-r-with-mirt-package/index_files/figure-html/unidich%202PL%20ploting2-1.png" width="672" /></p>
</div>
</div>
<div id="model-specifications" class="section level2">
<h2>Model Specifications</h2>
<p>The model specification function of “mirt” provides arguments for further constraints in an IRT model and can be used to test specific assumptions about item characteristics.<br />
For example, in the rotation span task, items with the same set size are designed in exactly the same way. Thus, we may consider items with the same set size equivalent in their ability to discriminate persons with different ability levels. To estimate this model, we specify constraints in the model specification function. As shown below, we specify an equal <em>a</em> parameter for items 1, 5, &amp; 9, which are labeled “S1.2”, “S2.2”, and “S3.2” in the dataset (the 3 items with set size 2); another equal <em>a</em> for items 2, 6, &amp; 10; another for items 3, 7, &amp; 11; and another for items 4, 8, &amp; 12.</p>
<pre class="r"><code>uniDich.model3 &lt;- mirt.model(&quot;rotation = 1 - 12
CONSTRAIN = (1,5,9,a1), (2,6,10,a1),(3,7,11,a1),(4,8,12,a1)&quot;)
uniDich.result3 &lt;- mirt::mirt(dat1, uniDich.model3, itemtype = &quot;2PL&quot;, SE = TRUE)</code></pre>
<p>The constraints are reflected in the IRT parameters from this model: items with the same set size are estimated to have exactly the same <em>a</em> parameter. For all items with set size 2, <em>a</em> = 1.31; for all items with set size 3, <em>a</em> = 1.55; and so on. The <em>b</em> parameters, on the other hand, are still freely estimated regardless of set size.</p>
<pre class="r"><code>coef(uniDich.result3, simplify = TRUE, IRTpars = TRUE)$items</code></pre>
<pre><code>## a b g u
## S1.2 1.314847 -2.3120591 0 1
## S1.3 1.551951 -0.6626354 0 1
## S1.4 1.127870 0.4902232 0 1
## S1.5 1.036344 2.2120724 0 1
## S2.2 1.314847 -2.0337955 0 1
## S2.3 1.551951 -1.0029799 0 1
## S2.4 1.127870 0.4902232 0 1
## S2.5 1.036344 2.1036240 0 1
## S3.2 1.314847 -1.8044209 0 1
## S3.3 1.551951 -0.6946542 0 1
## S3.4 1.127870 0.2815835 0 1
## S3.5 1.036344 2.1036240 0 1</code></pre>
<p>We can also compare nested models with different constraints. For example, we can test the difference between this constrained model and the previous 2PL model without constraints on discrimination, analogous to a chi-squared difference test for nested SEM models. The results indicate that the two models do not differ significantly, <span class="math inline">\(\Delta\chi^2\)</span>(8) = 8.68, <em>p</em> = .37.</p>
<pre class="r"><code>anova(uniDich.result2,uniDich.result3)</code></pre>
<pre><code>##
## Model 1: mirt::mirt(data = dat1, model = uniDich.model3, itemtype = &quot;2PL&quot;,
## SE = TRUE)
## Model 2: mirt::mirt(data = dat1, model = uniDich.model2, itemtype = &quot;2PL&quot;,
## SE = TRUE)</code></pre>
<pre><code>## AIC AICc SABIC HQ BIC logLik X2 df p
## 1 2941.329 2943.549 2947.695 2964.276 2998.422 -1454.664 NaN NaN NaN
## 2 2948.644 2953.708 2958.194 2983.065 3034.285 -1450.322 8.684 8 0.37</code></pre>
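<p>The <span class="math inline">\(\Delta\chi^2\)</span> statistic in this table is simply twice the difference in log-likelihoods, with degrees of freedom equal to the number of parameters saved by the constraints (12 free slopes reduced to 4). A quick check using the values printed above:</p>

```r
# Likelihood ratio test between the constrained and unconstrained 2PL models
x2 <- 2 * (-1450.322 - (-1454.664))  # logLik values from the anova() table
df <- 12 - 4                         # 12 free a parameters vs. 4 constrained ones
x2                                   # 8.684, as reported
pchisq(x2, df, lower.tail = FALSE)   # ~ .37, as reported
```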
</div>
</div>
<div id="unidimensional-polytomous-irt-model" class="section level1">
<h1>Unidimensional Polytomous IRT Model</h1>
<p>In the previous section we conducted dichotomous IRT analyses on the rotation span task with binary responses. However, the initial rotation span dataset consists of the number of correctly recalled elements for each item. In other words, each item has more than two possible response categories, and these categories are at least ordinal. For example, an item with set size 2 (2 elements) has 3 possible outcomes: 0, 1, and 2. Thus, we can fit a polytomous IRT model to this type of measure, which also suits partial-credit tests and Likert-type surveys.</p>
<pre class="r"><code>dat2 &lt;- as.matrix(wmirot[,-1])
head(dat2)</code></pre>
<pre><code>## S1.2 S1.3 S1.4 S1.5 S2.2 S2.3 S2.4 S2.5 S3.2 S3.3 S3.4 S3.5
## [1,] 2 1 0 0 0 3 1 0 1 1 0 0
## [2,] 2 2 1 1 2 3 2 2 2 2 1 1
## [3,] 2 1 4 1 2 3 4 1 1 2 0 3
## [4,] 2 2 3 1 2 0 1 1 2 2 3 4
## [5,] 2 3 4 5 2 3 4 4 2 3 4 2
## [6,] 2 3 0 2 2 3 4 3 2 1 2 1</code></pre>
<div id="generalized-partial-credit-model" class="section level2">
<h2>Generalized Partial Credit Model</h2>
<p>In this section, we apply the generalized partial credit model (GPCM; <a href="https://doi.org/10.1002/j.2333-8504.1992.tb01436.x">Muraki, 1992</a>) to the rotation span data. As a polytomous model, the GPCM estimates a threshold parameter for <strong>each response category boundary</strong> in an item instead of a single difficulty parameter per item. Furthermore, the GPCM estimates a unique discrimination parameter for each item instead of assuming equal discrimination across items (as in the Rasch model).</p>
<pre class="r"><code>unipoly.model1 &lt;- mirt.model(&quot;rotation = 1 - 12&quot;)
unipoly.result1 &lt;- mirt::mirt(dat2, unipoly.model1, itemtype = &quot;gpcm&quot;, SE = TRUE)</code></pre>
<div id="model-and-item-fits-2" class="section level3">
<h3>Model and Item Fits</h3>
<p>Similarly, we can obtain model fit and item fit statistics. In this example, the overall model fit and all item fits for the GPCM are good.</p>
<pre class="r"><code>M2(unipoly.result1)</code></pre>
<pre><code>## M2 df p RMSEA RMSEA_5 RMSEA_95 SRMSR TLI CFI
## stats 23.66605 24 0.4808204 0 0 0.04927329 0.04771738 1.005052 1</code></pre>
<pre class="r"><code>itemfit(unipoly.result1)</code></pre>
<pre><code>## item S_X2 df.S_X2 RMSEA.S_X2 p.S_X2
## 1 S1.2 15.329 11 0.039 0.168
## 2 S1.3 46.224 32 0.041 0.050
## 3 S1.4 43.506 48 0.000 0.657
## 4 S1.5 44.058 58 0.000 0.912
## 5 S2.2 20.725 13 0.048 0.079
## 6 S2.3 30.299 26 0.025 0.255
## 7 S2.4 48.715 50 0.000 0.525
## 8 S2.5 71.664 61 0.026 0.165
## 9 S3.2 13.519 17 0.000 0.701
## 10 S3.3 20.588 32 0.000 0.940
## 11 S3.4 52.075 51 0.009 0.432
## 12 S3.5 63.890 63 0.007 0.445</code></pre>
</div>
<div id="irt-parameters-1" class="section level3">
<h3>IRT Parameters</h3>
<p>For a GPCM, the item discrimination parameters and the item threshold parameters are freely estimated. The threshold parameter is defined as the trait level at which a person has an equal probability of choosing the <em>k</em>th response category over the <em>k-1</em>th category. When choosing between the two, subjects with trait levels above the threshold are more likely to choose the <em>k</em>th category, while subjects below it are more likely to choose the <em>k-1</em>th.</p>
<pre class="r"><code>coef(unipoly.result1,simplify = TRUE, IRTpars = TRUE)$items</code></pre>
<pre><code>## a b1 b2 b3 b4 b5
## S1.2 0.6354780 -2.4238753 -4.50115962 NA NA NA
## S1.3 0.8367055 -1.5716066 -0.89123631 -1.8001978 NA NA
## S1.4 0.5989767 -1.8187952 -0.50303114 0.5399410 -1.379714 NA
## S1.5 0.4660780 -0.7725926 -0.02659485 0.5368132 1.745108 0.07050096
## S2.2 1.0176875 -1.8131900 -2.77011850 NA NA NA
## S2.3 0.8606748 -2.3689773 -0.88851301 -2.2413670 NA NA
## S2.4 0.4985639 -1.6434258 -0.70470153 -0.3802990 -1.060266 NA
## S2.5 0.4158617 -0.5284300 -1.00120923 1.5317726 1.243526 -0.09643630
## S3.2 0.8508984 -1.3638275 -3.01408652 NA NA NA
## S3.3 0.8038394 -1.7838070 -1.03577722 -1.8430426 NA NA
## S3.4 0.5793661 -0.6192944 -0.94635296 0.0811653 -1.540936 NA
## S3.5 0.4046533 -0.5014941 -0.68390425 0.3561937 1.278277 0.42457494</code></pre>
<p>In this example, the function uses the conventional IRT parameterization. Under this parameterization, for an item with set size <em>p</em>, the GPCM estimates <em>p</em> threshold parameters, one per boundary between adjacent scores (from “<em>b_1</em>”, separating scores 0 and 1, to “<em>b_p</em>”, separating scores p-1 and p), plus 1 discrimination parameter for the item. The column labeled “<em>a</em>” contains the discrimination parameters and the later columns (“<em>b_1</em>” to “<em>b_5</em>”) contain the threshold parameters.</p>
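<p>To make the threshold interpretation concrete, the GPCM category probabilities can be computed by hand: the probability of score <em>k</em> is proportional to <span class="math inline">\(\exp\left(\sum_{v \le k} a(\theta - b_v)\right)\)</span>, with the empty sum for <em>k</em> = 0 defined as 0. A base-R sketch using the rounded estimates for item S3.2 from the output above:</p>

```r
# GPCM category probabilities for one item
gpcm_prob <- function(theta, a, b) {
  z <- cumsum(c(0, a * (theta - b)))  # cumulative numerators for scores 0..length(b)
  exp(z) / sum(exp(z))
}

# Item S3.2: a = 0.85, b1 = -1.36, b2 = -3.01 (rounded from the output)
round(gpcm_prob(theta = 0, a = 0.85, b = c(-1.36, -3.01)), 2)
# a person of average ability is most likely to recall both elements (score 2)
```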
</div>
<div id="visualizing-the-item-and-scale-plots-1" class="section level3">
<h3>Visualizing the Item and Scale Plots</h3>
<p>As in the dichotomous 2PL model, the estimated <em>a</em> parameters in a GPCM are reflected in the steepness of the category trace curves: higher <em>a</em>s yield steeper curves. The estimated <em>b</em> parameters, in turn, appear as the x-axis values of the crossing points between trace curves for adjacent categories. For example, for item S3.2 the model estimated two threshold parameters: b1 = -1.36 (the crossing point between curves P1 and P2) and b2 = -3.01 (the crossing point between curves P2 and P3).</p>
<pre class="r"><code>plot(unipoly.result1, type = &quot;trace&quot;, theta_lim = c(-4,4))</code></pre>
<p><img src="https://hanhao23.github.io/project/irttutorial/irt-tutorial-in-r-with-mirt-package/index_files/figure-html/unipoly%202PL%20ploting1-1.png" width="672" /></p>
<p>Similar to the dichotomous 2PL model, the freely estimated discrimination parameters are also reflected in the item information plots.</p>
<pre class="r"><code>plot(unipoly.result1, type = &quot;infotrace&quot;)</code></pre>
<p><img src="https://hanhao23.github.io/project/irttutorial/irt-tutorial-in-r-with-mirt-package/index_files/figure-html/unipoly%202PL%20ploting2-1.png" width="672" /></p>
</div>
<div id="individual-scoring" class="section level3">
<h3>Individual Scoring</h3>
<p>Based on an estimated model, we can also estimate individuals’ latent trait scores. Conceptually, these are similar to factor scores estimated in CFA.</p>
<pre class="r"><code>est.theta &lt;- as.data.frame(fscores(unipoly.result1))
head(est.theta)</code></pre>
<pre><code>## rotation
## 1 -1.9429664
## 2 -0.7051405
## 3 -0.5498356
## 4 -0.6928339
## 5 1.4039574
## 6 -0.3787596</code></pre>
<pre class="r"><code>est.theta %&gt;%
ggplot(aes(x=rotation)) +
geom_histogram(aes(y=..density..),
binwidth=.1,
colour=&quot;black&quot;, fill=&quot;white&quot;) +
geom_density(alpha=.2, fill=&quot;aquamarine2&quot;)</code></pre>
<p><img src="https://hanhao23.github.io/project/irttutorial/irt-tutorial-in-r-with-mirt-package/index_files/figure-html/unipoly%202PL%20scoring-1.png" width="672" /></p>
</div>
</div>
</div>
</description>
</item>
<item>
<title>The Impact of Auditory Distraction on Reading Comprehension</title>
<link>https://hanhao23.github.io/publication/the-impact-of-auditory-distraction-on-reading-comprehension/</link>
<pubDate>Fri, 08 Oct 2021 00:00:00 +0000</pubDate>
<guid>https://hanhao23.github.io/publication/the-impact-of-auditory-distraction-on-reading-comprehension/</guid>
<description><!---
Supplementary notes can be added here, including [code and math](https://sourcethemes.com/academic/docs/writing-markdown-latex/).
files/HanHaoCGU_Psychonomics_2019.png
-->
</description>
</item>
<item>
<title>Rethinking the Relationship of Working Memory and Intelligence - A Perspective based on Process Overlap Theory</title>
<link>https://hanhao23.github.io/talk/2021isir/</link>
<pubDate>Sat, 04 Sep 2021 00:00:00 +0000</pubDate>
<guid>https://hanhao23.github.io/talk/2021isir/</guid>
<description><!---
<div class="alert alert-note">
<div>
Click on the <strong>Slides</strong> button above to view the built-in slides feature.
</div>
</div>
Slides can be added in a few ways:
- **Create** slides using Academic's [*Slides*](https://sourcethemes.com/academic/docs/managing-content/#create-slides) feature and link using `slides` parameter in the front matter of the talk file
- **Upload** an existing slide deck to `static/` and link using `url_slides` parameter in the front matter of the talk file
- **Embed** your slides (e.g. Google Slides) or presentation video on this page using [shortcodes](https://sourcethemes.com/academic/docs/writing-markdown-latex/).
Further talk details can easily be added to this page using *Markdown* and $\rm \LaTeX$ math code.
-->
</description>
</item>
<item>
<title>Individual Differences in Attention and Intelligence</title>
<link>https://hanhao23.github.io/publication/individual-differences-in-attention-and-intelligence/</link>
<pubDate>Fri, 02 Jul 2021 00:00:00 +0000</pubDate>
<guid>https://hanhao23.github.io/publication/individual-differences-in-attention-and-intelligence/</guid>
<description></description>
</item>
<item>
<title>An Examination of Domain-Specificity Differences in Complex Span Tasks through Item Response Theory</title>
<link>https://hanhao23.github.io/talk/2020psychonomics/</link>
<pubDate>Sat, 21 Nov 2020 00:00:00 +0000</pubDate>
<guid>https://hanhao23.github.io/talk/2020psychonomics/</guid>
<description>
</description>
</item>
<item>
<title>The Struggle Is Real</title>
<link>https://hanhao23.github.io/publication/the-struggle-is-real/</link>
<pubDate>Thu, 01 Oct 2020 00:00:00 +0000</pubDate>
<guid>https://hanhao23.github.io/publication/the-struggle-is-real/</guid>
<description></description>
</item>
<item>
<title>Response or recall bias? Choosing between the traditional and retrospective pretest using measurement invariance techniques</title>
<link>https://hanhao23.github.io/publication/retrospective/</link>
<pubDate>Thu, 02 Jul 2020 00:00:00 +0000</pubDate>
<guid>https://hanhao23.github.io/publication/retrospective/</guid>
<description><!---
<div class="alert alert-note">
<div>
Click the <em>Slides</em> button above to demo Academic&rsquo;s Markdown slides feature.
</div>
</div>
Supplementary notes can be added here, including [code and math](https://sourcethemes.com/academic/docs/writing-markdown-latex/).
--><blockquote>
</blockquote>
</description>
</item>
<item>
<title>An Examination of Domain-Specificity Differences in Complex Span Tasks through Item Response Theory</title>
<link>https://hanhao23.github.io/publication/uniirt/</link>
<pubDate>Tue, 30 Jun 2020 00:00:00 +0000</pubDate>
<guid>https://hanhao23.github.io/publication/uniirt/</guid>
<description>
</description>
</item>
<item>
<title>General Intelligence Explained (Away)</title>
<link>https://hanhao23.github.io/publication/potsimulation/</link>
<pubDate>Mon, 23 Dec 2019 00:00:00 +0000</pubDate>
<guid>https://hanhao23.github.io/publication/potsimulation/</guid>
<description><!---
<div class="alert alert-note">
<div>
Click the <em>Cite</em> button above to demo the feature to enable visitors to import publication metadata into their reference management software.
</div>
</div>
<div class="alert alert-note">
<div>
Click the <em>Slides</em> button above to demo Academic&rsquo;s Markdown slides feature.
</div>
</div>
Supplementary notes can be added here, including [code and math](https://sourcethemes.com/academic/docs/writing-markdown-latex/).
-->
</description>
</item>
<item>
<title>The Role of Non-Cognitive Factors in the SAT Remains Unclear - A Commentary on Hannon (2019)</title>
<link>https://hanhao23.github.io/publication/satcommentary/</link>
<pubDate>Wed, 04 Dec 2019 00:00:00 +0000</pubDate>
<guid>https://hanhao23.github.io/publication/satcommentary/</guid>
<description><!---
<div class="alert alert-note">
<div>
Click the <em>Cite</em> button above to demo the feature to enable visitors to import publication metadata into their reference management software.
</div>
</div>
<div class="alert alert-note">
<div>
Click the <em>Slides</em> button above to demo Academic&rsquo;s Markdown slides feature.
</div>
</div>
Supplementary notes can be added here, including [code and math](https://sourcethemes.com/academic/docs/writing-markdown-latex/).
-->
</description>
</item>
<item>
<title>The Impact of Auditory Distraction on Reading Comprehension</title>
<link>https://hanhao23.github.io/talk/2019psychonomics/</link>
<pubDate>Mon, 18 Nov 2019 00:00:00 +0000</pubDate>
<guid>https://hanhao23.github.io/talk/2019psychonomics/</guid>
<description><!---
<div class="alert alert-note">
<div>
Click on the <strong>Slides</strong> button above to view the built-in slides feature.
</div>
</div>
Slides can be added in a few ways:
- **Create** slides using Academic's [*Slides*](https://sourcethemes.com/academic/docs/managing-content/#create-slides) feature and link using `slides` parameter in the front matter of the talk file
- **Upload** an existing slide deck to `static/` and link using `url_slides` parameter in the front matter of the talk file
- **Embed** your slides (e.g. Google Slides) or presentation video on this page using [shortcodes](https://sourcethemes.com/academic/docs/writing-markdown-latex/).
Further talk details can easily be added to this page using *Markdown* and $\rm \LaTeX$ math code.
-->
</description>
</item>
<item>
<title>Writing technical content in Academic</title>
<link>https://hanhao23.github.io/post/writing-technical-content/</link>
<pubDate>Fri, 12 Jul 2019 00:00:00 +0000</pubDate>
<guid>https://hanhao23.github.io/post/writing-technical-content/</guid>
<description><p>Academic is designed to give technical content creators a seamless experience. You can focus on the content and Academic handles the rest.</p>
<p><strong>Highlight your code snippets, take notes on math classes, and draw diagrams from textual representation.</strong></p>
<p>On this page, you&rsquo;ll find some examples of the types of technical content that can be rendered with Academic.</p>
<h2 id="examples">Examples</h2>
<h3 id="code">Code</h3>
<p>Academic supports a Markdown extension for highlighting code syntax. You can enable this feature by toggling the <code>highlight</code> option in your <code>config/_default/params.toml</code> file.</p>
<pre><code>```python
import pandas as pd
data = pd.read_csv(&quot;data.csv&quot;)
data.head()
```
</code></pre>
<p>renders as</p>
<pre><code class="language-python">import pandas as pd
data = pd.read_csv(&quot;data.csv&quot;)
data.head()
</code></pre>
<h3 id="math">Math</h3>
<p>Academic supports a Markdown extension for $\LaTeX$ math. You can enable this feature by toggling the <code>math</code> option in your <code>config/_default/params.toml</code> file.</p>
<p>To render <em>inline</em> or <em>block</em> math, wrap your LaTeX math with <code>$...$</code> or <code>$$...$$</code>, respectively.</p>
<p>Example <strong>math block</strong>:</p>
<pre><code class="language-tex">$$\gamma_{n} = \frac{
\left | \left (\mathbf x_{n} - \mathbf x_{n-1} \right )^T
\left [\nabla F (\mathbf x_{n}) - \nabla F (\mathbf x_{n-1}) \right ] \right |}
{\left \|\nabla F(\mathbf{x}_{n}) - \nabla F(\mathbf{x}_{n-1}) \right \|^2}$$
</code></pre>
<p>renders as</p>
<p>$$\gamma_{n} = \frac{ \left | \left (\mathbf x_{n} - \mathbf x_{n-1} \right )^T \left [\nabla F (\mathbf x_{n}) - \nabla F (\mathbf x_{n-1}) \right ] \right |}{\left \|\nabla F(\mathbf{x}_{n}) - \nabla F(\mathbf{x}_{n-1}) \right \|^2}$$</p>
<p>Example <strong>inline math</strong> <code>$\nabla F(\mathbf{x}_{n})$</code> renders as $\nabla F(\mathbf{x}_{n})$.</p>
<p>Example <strong>multi-line math</strong> using the <code>\\\\</code> math linebreak:</p>
<pre><code class="language-tex">$$f(k;p_0^*) = \begin{cases} p_0^* &amp; \text{if }k=1, \\\\
1-p_0^* &amp; \text {if }k=0.\end{cases}$$
</code></pre>
<p>renders as</p>
<p>$$f(k;p_0^*) = \begin{cases} p_0^* &amp; \text{if }k=1, \\<br>
1-p_0^* &amp; \text{if }k=0.\end{cases}$$</p>
<h3 id="diagrams">Diagrams</h3>
<p>Academic supports a Markdown extension for diagrams. You can enable this feature by toggling the <code>diagram</code> option in your <code>config/_default/params.toml</code> file or by adding <code>diagram: true</code> to your page front matter.</p>
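<p>Putting the three extensions together: each one is controlled by its own switch in the params file. A minimal sketch of the relevant lines in <code>config/_default/params.toml</code> (assuming the default Academic configuration layout; your file will contain many other settings) might look like:</p>
<pre><code class="language-toml"># Feature toggles for the Markdown extensions described above.
highlight = true   # source code syntax highlighting
math = true        # LaTeX math rendering
diagram = true     # Mermaid diagram rendering (can also be set per page)
</code></pre>
<p>Setting <code>diagram = true</code> site-wide renders diagrams on every page; alternatively, leave it off here and enable it only where needed via <code>diagram: true</code> in a page&rsquo;s front matter.</p>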
<p>An example <strong>flowchart</strong>:</p>
<pre><code>```mermaid
graph TD
A[Hard] --&gt;|Text| B(Round)
B --&gt; C{Decision}
C --&gt;|One| D[Result 1]
C --&gt;|Two| E[Result 2]
```
</code></pre>
<p>renders as</p>
<pre><code class="language-mermaid">graph TD
A[Hard] --&gt;|Text| B(Round)
B --&gt; C{Decision}
C --&gt;|One| D[Result 1]
C --&gt;|Two| E[Result 2]
</code></pre>
<p>An example <strong>sequence diagram</strong>:</p>
<pre><code>```mermaid
sequenceDiagram
Alice-&gt;&gt;John: Hello John, how are you?
loop Healthcheck
John-&gt;&gt;John: Fight against hypochondria
end
Note right of John: Rational thoughts!
John--&gt;&gt;Alice: Great!
John-&gt;&gt;Bob: How about you?
Bob--&gt;&gt;John: Jolly good!
```
</code></pre>
<p>renders as</p>
<pre><code class="language-mermaid">sequenceDiagram
Alice-&gt;&gt;John: Hello John, how are you?
loop Healthcheck
John-&gt;&gt;John: Fight against hypochondria
end
Note right of John: Rational thoughts!
John--&gt;&gt;Alice: Great!
John-&gt;&gt;Bob: How about you?
Bob--&gt;&gt;John: Jolly good!
</code></pre>
<p>An example <strong>Gantt diagram</strong>:</p>
<pre><code>```mermaid
gantt
section Section
Completed :done, des1, 2014-01-06,2014-01-08
Active :active, des2, 2014-01-07, 3d
Parallel 1 : des3, after des1, 1d
Parallel 2 : des4, after des1, 1d
Parallel 3 : des5, after des3, 1d
Parallel 4 : des6, after des4, 1d
```
</code></pre>
<p>renders as</p>
<pre><code class="language-mermaid">gantt
section Section
Completed :done, des1, 2014-01-06,2014-01-08
Active :active, des2, 2014-01-07, 3d
Parallel 1 : des3, after des1, 1d
Parallel 2 : des4, after des1, 1d
Parallel 3 : des5, after des3, 1d
Parallel 4 : des6, after des4, 1d
</code></pre>
<p>An example <strong>class diagram</strong>:</p>
<pre><code>```mermaid
classDiagram
Class01 &lt;|-- AveryLongClass : Cool
&lt;&lt;interface&gt;&gt; Class01
Class09 --&gt; C2 : Where am i?
Class09 --* C3
Class09 --|&gt; Class07
Class07 : equals()
Class07 : Object[] elementData
Class01 : size()
Class01 : int chimp
Class01 : int gorilla
class Class10 {
&lt;&lt;service&gt;&gt;
int id
size()
}
```
</code></pre>
<p>renders as</p>
<pre><code class="language-mermaid">classDiagram
Class01 &lt;|-- AveryLongClass : Cool
&lt;&lt;interface&gt;&gt; Class01
Class09 --&gt; C2 : Where am i?
Class09 --* C3
Class09 --|&gt; Class07
Class07 : equals()
Class07 : Object[] elementData
Class01 : size()
Class01 : int chimp
Class01 : int gorilla
class Class10 {
&lt;&lt;service&gt;&gt;
int id
size()
}
</code></pre>
<p>An example <strong>state diagram</strong>:</p>
<pre><code>```mermaid
stateDiagram
[*] --&gt; Still
Still --&gt; [*]
Still --&gt; Moving
Moving --&gt; Still
Moving --&gt; Crash
Crash --&gt; [*]
```
</code></pre>
<p>renders as</p>
<pre><code class="language-mermaid">stateDiagram
[*] --&gt; Still
Still --&gt; [*]
Still --&gt; Moving
Moving --&gt; Still
Moving --&gt; Crash
Crash --&gt; [*]
</code></pre>
<h3 id="todo-lists">Todo lists</h3>
<p>You can even write your todo lists in Academic too:</p>
<pre><code class="language-markdown">- [x] Write math example
- [x] Write diagram example
- [ ] Do something else
</code></pre>
<p>renders as</p>
<ul>
<li><input checked="" disabled="" type="checkbox"> Write math example</li>
<li><input checked="" disabled="" type="checkbox"> Write diagram example</li>
<li><input disabled="" type="checkbox"> Do something else</li>
</ul>
<h3 id="tables">Tables</h3>
<p>Represent your data in tables:</p>
<pre><code class="language-markdown">| First Header | Second Header |
| ------------- | ------------- |
| Content Cell | Content Cell |
| Content Cell | Content Cell |
</code></pre>
<p>renders as</p>
<table>
<thead>
<tr>
<th>First Header</th>
<th>Second Header</th>
</tr>
</thead>
<tbody>
<tr>
<td>Content Cell</td>
<td>Content Cell</td>
</tr>
<tr>
<td>Content Cell</td>
<td>Content Cell</td>
</tr>
</tbody>
</table>
<h3 id="asides">Asides</h3>
<p>Academic supports a
<a href="https://sourcethemes.com/academic/docs/writing-markdown-latex/#alerts" target="_blank" rel="noopener">shortcode for asides</a>, also referred to as <em>notices</em>, <em>hints</em>, or <em>alerts</em>. By wrapping a paragraph in <code>{{% alert note %}} ... {{% /alert %}}</code>, it will render as an aside.</p>
<pre><code class="language-markdown">{{% alert note %}}
A Markdown aside is useful for displaying notices, hints, or definitions to your readers.
{{% /alert %}}
</code></pre>
<p>renders as</p>
<div class="alert alert-note">
<div>
A Markdown aside is useful for displaying notices, hints, or definitions to your readers.
</div>
</div>
<h3 id="spoilers">Spoilers</h3>
<p>Add a spoiler to a page to reveal text, such as an answer to a question, after a button is clicked.</p>
<pre><code class="language-markdown">{{&lt; spoiler text=&quot;Click to view the spoiler&quot; &gt;}}
You found me!
{{&lt; /spoiler &gt;}}
</code></pre>
<p>renders as</p>
<div class="spoiler " >
<p>
<a class="btn btn-primary" data-toggle="collapse" href="#spoiler-1" role="button" aria-expanded="false" aria-controls="spoiler-1">
Click to view the spoiler
</a>
</p>
<div class="collapse card " id="spoiler-1">
<div class="card-body">
You found me!
</div>
</div>
</div>
<h3 id="icons">Icons</h3>
<p>Academic enables you to use a wide range of
<a href="https://sourcethemes.com/academic/docs/page-builder/#icons" target="_blank" rel="noopener">icons from <em>Font Awesome</em> and <em>Academicons</em></a> in addition to
<a href="https://sourcethemes.com/academic/docs/writing-markdown-latex/#emojis" target="_blank" rel="noopener">emojis</a>.</p>
<p>Here are some examples using the <code>icon</code> shortcode to render icons:</p>
<pre><code class="language-markdown">{{&lt; icon name=&quot;terminal&quot; pack=&quot;fas&quot; &gt;}} Terminal
{{&lt; icon name=&quot;python&quot; pack=&quot;fab&quot; &gt;}} Python
{{&lt; icon name=&quot;r-project&quot; pack=&quot;fab&quot; &gt;}} R
</code></pre>
<p>renders as</p>
<p>
<i class="fas fa-terminal pr-1 fa-fw"></i> Terminal<br>
<i class="fab fa-python pr-1 fa-fw"></i> Python<br>
<i class="fab fa-r-project pr-1 fa-fw"></i> R</p>
<h3 id="did-you-find-this-page-helpful-consider-sharing-it-">Did you find this page helpful? Consider sharing it 🙌</h3>
</description>
</item>
<item>
<title>Display Jupyter Notebooks with Academic</title>
<link>https://hanhao23.github.io/post/jupyter/</link>
<pubDate>Tue, 05 Feb 2019 00:00:00 +0000</pubDate>
<guid>https://hanhao23.github.io/post/jupyter/</guid>
<description><pre><code class="language-python">from IPython.core.display import Image
Image('https://www.python.org/static/community_logos/python-logo-master-v3-TM-flattened.png')
</code></pre>
<p><img src="./index_1_0.png" alt="png"></p>
<pre><code class="language-python">print(&quot;Welcome to Academic!&quot;)
</code></pre>
<pre><code>Welcome to Academic!
</code></pre>
<h2 id="install-python-and-jupyterlab">Install Python and JupyterLab</h2>
<p>
<a href="https://www.anaconda.com/distribution/#download-section" target="_blank" rel="noopener">Install Anaconda</a> which includes Python 3 and JupyterLab.</p>
<p>Alternatively, install JupyterLab with <code>pip3 install jupyterlab</code>.</p>
<h2 id="create-or-upload-a-jupyter-notebook">Create or upload a Jupyter notebook</h2>
<p>Run the following commands in your Terminal, substituting <code>&lt;MY-WEBSITE-FOLDER&gt;</code> and <code>&lt;SHORT-POST-TITLE&gt;</code> with the file path to your Academic website folder and a short title for your blog post (use hyphens instead of spaces), respectively:</p>
<pre><code class="language-bash">mkdir -p &lt;MY-WEBSITE-FOLDER&gt;/content/post/&lt;SHORT-POST-TITLE&gt;/
cd &lt;MY-WEBSITE-FOLDER&gt;/content/post/&lt;SHORT-POST-TITLE&gt;/
jupyter lab index.ipynb
</code></pre>
<p>The <code>jupyter</code> command above will launch the JupyterLab editor, allowing us to add Academic metadata and write the content.</p>
<h2 id="edit-your-post-metadata">Edit your post metadata</h2>
<p>The first cell of your Jupyter notebook will contain your post metadata (
<a href="https://sourcethemes.com/academic/docs/front-matter/" target="_blank" rel="noopener">front matter</a>).</p>
<p>In JupyterLab, choose <em>Markdown</em> as the type of the first cell and wrap your Academic metadata in three dashes, indicating that it is YAML front matter:</p>
<pre><code>---
title: My post's title
date: 2019-09-01
# Put any other Academic metadata here...
---
</code></pre>
<p>Edit the metadata of your post, using the
<a href="https://sourcethemes.com/academic/docs/managing-content" target="_blank" rel="noopener">documentation</a> as a guide to the available options.</p>
<p>To set a
<a href="https://sourcethemes.com/academic/docs/managing-content/#featured-image" target="_blank" rel="noopener">featured image</a>, place an image named <code>featured</code> into your post&rsquo;s folder.</p>
<p>For other tips, such as using math, see the guide on
<a href="https://sourcethemes.com/academic/docs/writing-markdown-latex/" target="_blank" rel="noopener">writing content with Academic</a>.</p>
<h2 id="convert-notebook-to-markdown">Convert notebook to Markdown</h2>
<p>Run the following command to convert your notebook, together with any output files it generates, into a Markdown page that Academic can render:</p>
<pre><code class="language-bash">jupyter nbconvert index.ipynb --to markdown --NbConvertApp.output_files_dir=.
</code></pre>
<h2 id="example">Example</h2>
<p>This post was created with Jupyter. The original files can be found at
<a href="https://github.com/gcushen/hugo-academic/tree/master/exampleSite/content/post/jupyter" target="_blank" rel="noopener">https://github.com/gcushen/hugo-academic/tree/master/exampleSite/content/post/jupyter</a></p>
</description>
</item>
<item>