From b73b4fb84c358b105a6a7a7a51030fc3e5044ab2 Mon Sep 17 00:00:00 2001 From: Kindson The Genius <35315234+KindsonTheGenius@users.noreply.github.com> Date: Thu, 20 Apr 2023 07:45:02 +0000 Subject: [PATCH] Fixed all tagging issues --- docs/cadl-doc.md | 4 +- docs/cli-cli2-merging-docs.md | 92 +++++++++++++++---------------- docs/json-schema-mappings.md | 14 +++-- docs/morpir-developers-guide.md | 74 ++++++++++++------------- docs/scala-backend.md | 40 +++++++------- docs/scala-json-codecs-backend.md | 16 +++--- docs/spark-backend.md | 46 ++++++++-------- docs/spark-testing-framework.md | 2 +- docs/user-guide-index.md | 2 +- 9 files changed, 147 insertions(+), 143 deletions(-) diff --git a/docs/cadl-doc.md b/docs/cadl-doc.md index efed7202c..52aee6a5d 100644 --- a/docs/cadl-doc.md +++ b/docs/cadl-doc.md @@ -1,8 +1,10 @@ +## Morphir-Cadl Mapping # [Morphir](https://package.elm-lang.org/packages/finos/morphir-elm/18.1.0/Morphir-IR-Type) to [Cadl Type](https://microsoft.github.io/cadl/docs/language-basics/type-relations/) Mappings -## Overview This is a documentation of the mapping strategy from Morphir types to Cadl types. This document describes how types in Morphir Models are represented in Cadl. Below is a quick overview of the mapping in the table: + + | | Type | Cadl Type | Comment | |-------------------------------------------------------|-------------------------------------|------------------------------------------|-----------------------------------------------------| | [Basic Types](#basic-types) | | | | diff --git a/docs/cli-cli2-merging-docs.md b/docs/cli-cli2-merging-docs.md index eb1711d89..82c7b9ca0 100644 --- a/docs/cli-cli2-merging-docs.md +++ b/docs/cli-cli2-merging-docs.md @@ -3,49 +3,49 @@ This document contains cli commands that were migrated from the old cli, which is `morphir-elm`, into the new all cli commands supported by the old cli, and the **current** cli, `morphir`. 
-| Command | Flags | Description | Old CLI | New CLI | New Command | Comment | -|-------------------|-------------------------------------------------------|--------------------------------------------------------------------------------|:--------:|:--------:|:------------------------:|----------------------------------------------------| -| `make` | | | ✅ | ✅ | `morphir make` | | -| | -p , --project-dir \ | Root directory of the project where morphir.json is located. | | | | | -| | -o , --output \ | Target file location where the Morphir IR will be saved. | | | | | -| | -t, --types-only | Only include type information in the IR, no values. | | | | | -| | -f, --fallback-cli | Use old cli make function. | | | | | -| `test` | | | ✅ | ✘ | `morphir test` | | -| | -p, --project-dir \ | Root directory of the project where morphir.json is located. | | | | | -| `stats` | | | ✘ | ✅ | `morphir stats` | | -| | -i, --input \ | Source location where the Morphir IR will be loaded from. | | | | | -| | -o, --output \ | Target location where the generated code will be saved. | | | | | -| `dockerize` | | | ✘ | ✅ | `morphir dockerize` | | -| | -p, --project-dir \ | Root directory of the project where morphir.json is located | | | | | -| | -f, --force | Overwrite any Dockerfile in target location | | | | -| `gen` | | | ✅ | ✘ | | | -| | -i, --input | Source location where the Morphir IR will be loaded from. | | | | | -| | -o, --output | Target location where the generated code will be saved. | | | | | -| | -t, --target | Language to Generate (Scala | Cadl | JsonSchema | TypeScript). | | | | To generate Scala, use command `morphir scala-gen` | -| | -e, --target-version | Language version to Generate. 
| | | | | -| | -c, --copy-deps | Copy the dependencies used by the generated code to the output path | | | | | -| | -m, --modules-to-include | Limit the set of modules that will be included | | | | | -| | -s, --include-codecs | Generate JSON codecs | | | | | -| | -f, --filename | Filename of the generated JSON Schema | | | | | -| | -cc, --custom-config | A filepath to load additional configuration for the backend | | | | | -| `scala-gen` | | | ✘ | ✅ | `morphir scala gen` | | -| | -i, --input \ | Source location where the Morphir IR will be loaded from | | | | | -| | -o, --output \ | Target location where the generated code will be saved | | | | | -| | -t, --target \ | Language to Generate | | | | Deprecated | -| | -e, --target-version \ | Language version to Generate | | | | | -| | -c, --copy-deps | Copy the dependencies used by the generated code to the output path | | | | | -| | -s, --include-codecs | Generate the scala codecs as well | | | | | -| | -m, --limitToModules | Limit the set of modules that will be included | | | | | -| `json-schema-gen` | | | ✘ | ✅ | `morphir jsonschema gen` | | -| | -i, --input \ | Source location where the Morphir IR will be loaded from | | | | | -| | -o, --output \ | Target location where the generated code will be saved | | | | | -| | -t, --target \ | Language to Generate | | | | Deprecated | -| | -e, --target\-version \ | Language version to Generate | | | | | -| | -f, --filename | Filename of the generated JSON Schema | | | | | -| | -m, --limit-to-modules | Limit the set of modules that will be included | | | | | -| | -g, --group-schema-by | Group generate schema by package, module or type | | | | | -| | -c, --use-config | Use configuration specified in the config file | | | | | -| | -ls, --include \' | Limit what will be included | | | | | -| `develop` | | | ✅ | ✘ | `morphir develop` | | | -| | -p, --port \ | Port to bind the web server to | | | | | -| | -o, --host \ | Host to bind the web server to. 
| | | | |
+| Command | Flags | Description | Old CLI | New CLI | New Command | Comment |
+|-------------------|------------------------------------------------------|--------------------------------------------------------------------------------|:--------:|:--------:|:------------------------:|----------------------------------------------------|
+| `make` | | | ✅ | ✅ | `morphir make` | |
+| | -p , --project-dir `` | Root directory of the project where morphir.json is located. | | | | |
+| | -o , --output `` | Target file location where the Morphir IR will be saved. | | | | |
+| | -t, --types-only | Only include type information in the IR, no values. | | | | |
+| | -f, --fallback-cli | Use old cli make function. | | | | |
+| `test` | | | ✅ | ✘ | `morphir test` | |
+| | -p, --project-dir `` | Root directory of the project where morphir.json is located. | | | | |
+| `stats` | | | ✘ | ✅ | `morphir stats` | |
+| | -i, --input `` | Source location where the Morphir IR will be loaded from. | | | | |
+| | -o, --output `` | Target location where the generated code will be saved. | | | | |
+| `dockerize` | | | ✘ | ✅ | `morphir dockerize` | |
+| | -p, --project-dir `` | Root directory of the project where morphir.json is located | | | | |
+| | -f, --force | Overwrite any Dockerfile in target location | | | | |
+| `gen` | | | ✅ | ✘ | | |
+| | -i, --input `` | Source location where the Morphir IR will be loaded from. | | | | |
+| | -o, --output `` | Target location where the generated code will be saved. | | | | |
+| | -t, --target `` | Language to Generate (Scala \| Cadl \| JsonSchema \| TypeScript). | | | | To generate Scala, use command `morphir scala-gen` |
+| | -e, --target-version `` | Language version to Generate. | | | | |
+| | -c, --copy-deps | Copy the dependencies used by the generated code to the output path | | | | |
+| | -m, --modules-to-include `` | Limit the set of modules that will be included | | | | |
+| | -s, --include-codecs | Generate JSON codecs | | | | |
+| | -f, --filename `` | Filename of the generated JSON Schema | | | | |
+| | -cc, --custom-config `` | A filepath to load additional configuration for the backend | | | | |
+| `scala-gen` | | | ✘ | ✅ | `morphir scala gen` | |
+| | -i, --input `` | Source location where the Morphir IR will be loaded from | | | | |
+| | -o, --output `` | Target location where the generated code will be saved | | | | |
+| | -t, --target `` | Language to Generate | | | | Deprecated |
+| | -e, --target-version `` | Language version to Generate | | | | |
+| | -c, --copy-deps | Copy the dependencies used by the generated code to the output path | | | | |
+| | -s, --include-codecs `` | Generate the scala codecs as well | | | | |
+| | -m, --limitToModules `` | Limit the set of modules that will be included | | | | |
+| `json-schema-gen` | | | ✘ | ✅ | `morphir jsonschema gen` | |
+| | -i, --input `` | Source location where the Morphir IR will be loaded from | | | | |
+| | -o, --output `` | Target location where the generated code will be saved | | | | |
+| | -t, --target `` | Language to Generate | | | | Deprecated |
+| | -e, --target-version `` | Language version to Generate | | | | |
+| | -f, --filename `` | Filename of the generated JSON Schema | | | | |
+| | -m, --limit-to-modules `` | Limit the set of modules that will be included | | | | |
+| | -g, --group-schema-by `` | Group generated schema by package, module or type | | | | |
+| | -c, --use-config | Use configuration specified in the config file | | | | |
+| | -ls, --include `` | Limit what will be included | | | | |
+| `develop` | | | ✅ | ✘ | `morphir develop` | |
+| | -p, --port `` | Port to bind the web server to | | | | |
+| | -o, --host `` | Host to bind the web server to. | | | | |
diff --git a/docs/json-schema-mappings.md b/docs/json-schema-mappings.md
index bc08f485d..be22f7bdb 100644
--- a/docs/json-schema-mappings.md
+++ b/docs/json-schema-mappings.md
@@ -2,7 +2,7 @@ This is a documentation of the mapping strategy from Morphir types to Json Schema. This document describes how Morphir Models maps to Json Schema. Json Schema Reference can be found [here](http://json-schema.org/understanding-json-schema/reference/index.html)
-
+\ Additional reading: * [Sample Json Schema](json-schema-sample.json) * [Testing Strategy](json-schema-backend-testplan.md) @@ -36,16 +36,16 @@ Run the ```elm morphir-elm gen -t JsonSchema``` to generate the Json Schema **Note** - The generated schema is named `.json` by default. But you can specify the filename optionally for the schema using the -f flag. -
-

+ +\ Next, we will get into some specific cases that may need further explanation. The rest of the document explains how each Morphir type maps to the Json Schema Types. 1. ### [ SDK Types](#sdk-types) #### [1.1. Basic types](#basic-types) - [1.1.1. Bool ](#bool)
+ [1.1.1. Bool ](#bool)\ [1.1.2. Int ](#int)\ [1.1.3. Float ](#float)\ [1.1.4. Char ](#char)\ @@ -188,7 +188,7 @@ The format attribute in the JSON schema is used to provide the format for the ti Month types in Morphir are mapped to a OneOf schema type with an enum list of all the month names #### 1.3. Optional values (Maybe) -

A Maybe type in Morphir refers to a value that may not exist. This means that it could either be a value or a null. There are two approaches to handling Maybes.\ +A Maybe type in Morphir refers to a value that may not exist. This means that it could either be a value or a null. There are two approaches to handling Maybes.\ 1. Set the value of the type to an array of two strings: the type, and "null" \ 2. Treat a Maybe as a [custom type](#custom-types) with two constructors: the type and null Here, we adopt the second approach. @@ -519,6 +519,7 @@ type alias Address = } ``` Json Schema: + ```json "Records.Bank": { "type": "object", @@ -534,4 +535,5 @@ Json Schema: ``` ## anyOf -The anyOf keyword is used to relate a schema with it's subschemas. +The anyOf keyword is used to relate a schema +with it's subschemas. diff --git a/docs/morpir-developers-guide.md b/docs/morpir-developers-guide.md index 61ded36fb..a78b707d4 100644 --- a/docs/morpir-developers-guide.md +++ b/docs/morpir-developers-guide.md @@ -8,45 +8,45 @@ Finally, it provides a step by step walk-throughs on how various Morphir compone 2. Existing team members intending to improve their abilities on Language Design concepts ##Content -1. [Getting Started with Morphir](https://github.com/finos/morphir-elm/blob/main/README.md)
+1. [Getting Started with Morphir](https://github.com/finos/morphir-elm/blob/main/README.md) \ 2. [Overview of Morphir](#) -3. [The Morphir Architecture](#)
-4. [The Morphir SDK](#)
-5. [Morphir Commands Processing](Morphir-elm Commands Processing)
- 1. [morphir-elm make](#)
- 2. [morphir-elm gen](#)
- 3. [morphir-elm test](#)
- 4. [morphir-elm develop](#)
-6. [Interoperability With JavaScript](#)
-7. [Testing Framework](#)
-8. [The Morphir IR](#)
- 1. [Overview of the Morphir IR](#)
- 2. [Distribution](#)
- 3. [Package](#)
- 4. [Module](#)
- 5. [Types](#)
- 6. [Values](#)
- 7. [Names](#)
-9. [The Morphir Frontends](#)
- 1. [Elm Frontend](#)
- 2. [Elm Incremental Frontend](#)
-10. [The Morphir Backends](#)
+3. [The Morphir Architecture](#) \ +4. [The Morphir SDK](#) \ +5. [Morphir Commands Processing](Morphir-elm Commands Processing) \ + 1. [morphir-elm make](#) \ + 2. [morphir-elm gen](#) \ + 3. [morphir-elm test](#) \ + 4. [morphir-elm develop](#) \ +6. [Interoperability With JavaScript](#) \ +7. [Testing Framework](#) \ +8. [The Morphir IR](#) \ + 1. [Overview of the Morphir IR](#) \ + 2. [Distribution](#) \ + 3. [Package](#) \ + 4. [Module](#) \ + 5. [Types](#) \ + 6. [Values](#) \ + 7. [Names](#) \ +9. [The Morphir Frontends](#) \ + 1. [Elm Frontend](#) \ + 2. [Elm Incremental Frontend](#) \ +10. [The Morphir Backends](#) \ 1. [Scala Backend](#) -11. [Working with CODECS](#)
- 1. [Introduction to Encoding/Decoding](#)
- 2. [JSON Decoder Building Blocks](#)
- 3. [Combining Decoders](#)
- 4. [JSON Decode Pipeline](#)
- 5. [Writing Encoders and Decoders in Elm](#)
- 6. [Standard Codecs in Morphir](#)
-12. [NPM and Elm Packages](#)
-13. [Introduction to Combinator Parsing in Scala](#)
- 1. [Overview of Combinator Parsing](#)
- 2. [Parser or Basic Arithmetic Expression](#)
- 3. [Implementing Parsers in Scala](#)
- 4. [Regular Expressions Parser](#)
- 5. [JSON Parser](#)
- 6. [Low-Level Pull Parser API](#)
+11. [Working with CODECS](#) \
+ 1. [Introduction to Encoding/Decoding](#) \
+ 2. [JSON Decoder Building Blocks](#) \
+ 3. [Combining Decoders](#) \
+ 4. [JSON Decode Pipeline](#) \
+ 5. [Writing Encoders and Decoders in Elm](#) \
+ 6. [Standard Codecs in Morphir](#) \
+12. [NPM and Elm Packages](#) \
+13. [Introduction to Combinator Parsing in Scala](#) \
+ 1. [Overview of Combinator Parsing](#) \
+ 2. [Parser for Basic Arithmetic Expression](#) \
+ 3. [Implementing Parsers in Scala](#) \
+ 4. [Regular Expressions Parser](#) \
+ 5. [JSON Parser](#) \
+ 6. [Low-Level Pull Parser API](#) \
diff --git a/docs/scala-backend.md b/docs/scala-backend.md
index 13f58cdbe..1dc82dba8 100644
--- a/docs/scala-backend.md
+++ b/docs/scala-backend.md
@@ -3,9 +3,9 @@ The Scala backend takes the Morphir IR as the input and returns an in-memory representation of files generated - FileMap The consumer is responsible for getting the input IR and saving the output to the file-system.
-The transformation from the Morphir IR to the FileMap is based on the Scala AST.

-[1. Reading the Input IR](#)
-[2. Scala Code Generation](#)
+The transformation from the Morphir IR to the FileMap is based on the Scala AST.\\ +[1. Reading the Input IR](#) \ +[2. Scala Code Generation](#)\ [3. Writing Output to File System](#) ## **1. Reading Input IR** @@ -37,7 +37,7 @@ fileMap = The code generation phase consists of functions that transform the distribution into a FileMap -
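To make the shape of that output concrete: a FileMap is essentially a dictionary from a file location to that file's source text. A minimal sketch of the idea (the key shape shown here is an assumption, not the verbatim `Morphir.File.FileMap` definition):

```
import Dict exposing (Dict)

-- Sketch: keys are ( directory path, file name ), values are file contents
type alias FileMap =
    Dict ( List String, String ) String
```

The consumer of the backend walks this dictionary and writes each entry to the file system.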
+\ ## **2. Code Generation** The code generation consists of a number of mapping functions that map the Morphir IR types to Scala Types. @@ -47,15 +47,15 @@ This is the entry point for the Scala backend. This function take Morphir IR (as a Distribution type) and generates the FileMap of Scala Source codes. A FileMap is a Morphir type and is a dictionary of File path and file content. -
+\ #### mapPackageDefinition This function takes the Distribution, Package path and Package definition and returns a FileMap. This function maps through the modules in the package definition and, for each module, it generates a compilation unit by calling the PrettyPrinter.mapCompilationUnit -which returns a compilation unit.
+which returns a compilation unit. \ A compilation unit is a record type with the following fields -

+\\
```
type alias CompilationUnit =
    { dirPath : List String
    , fileName : String
    , packageDecl : PackageDecl
    , imports : List ImportDecl
    , typeDecls : List (Documented (Annotated TypeDecl))
    }
```
-
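As an illustration, a compilation unit for a hypothetical module `Example.App` might be populated along these lines (all field values below are invented for illustration, and `packageDecl` is assumed to be a list of path segments):

```
exampleUnit : CompilationUnit
exampleUnit =
    { dirPath = [ "example" ]
    , fileName = "App.scala"
    , packageDecl = [ "example" ]
    , imports = []
    , typeDecls = []
    }
```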
+\ #### mapFQNameToPathAndName Takes a Morphir IR fully-qualified name and maps it to a tuple of Scala path and name. A fully qualified name consists of packagePath, modulePath and localName. -
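For example (a sketch, not verbatim source), a fully-qualified name with package path `Morphir.Example`, module path `App` and local name `processOrder` would be split roughly as:

```
-- package path ++ module path becomes the Scala path,
-- and the local name is kept as the Scala name:
-- ( [ "morphir", "example", "app" ], [ "process", "order" ] )
```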
+\ #### mapFQNameToTypeRef Maps a Morphir IR fully-qualified name to Scala type reference. It extracts the path and name @@ -89,45 +89,45 @@ mapFQNameToTypeRef fQName = Scala.TypeRef path (name |> Name.toTitleCase) ``` -
+\ #### mapTypeMember This function maps a type declaration in Morphir to a Scala member declaration. -
+\ #### mapModuleDefinition This function maps a module definition to a list of Scala compilation units. -
+\ #### mapCustomTypeDefinition Maps a custom type to a List of Scala member declarations -
+\ #### mapType Maps a Morphir IR Type to a Scala type -
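As a rough illustration (the constructor names below are assumptions based on the IR and Scala AST modules, not verbatim source), a Morphir type reference to `List String` would map to a Scala type application along these lines:

```
-- Morphir IR side:  a Reference to Morphir.SDK's List type, applied to [ String ]
-- Scala AST side:   Scala.TypeApply
--                       (Scala.TypeRef [ "morphir", "sdk" ] "List")
--                       [ Scala.TypeRef [ "morphir", "sdk" ] "String" ]
```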
+\ #### mapFunctionBody Maps an IR value definition to a Scala value. -
+\ #### mapValue Maps an IR Value type to a Scala value. -
+\ #### mapPattern Maps an IR Pattern type to a Scala Pattern type -
+\ #### mapValueName @@ -136,19 +136,19 @@ Maps an IR value name (List String) to a Scala value (String) #### scalaKeywords A set of Scala keywords that cannot be used as a variable name. -
+\ #### javaObjectMethods We cannot use any method names in `java.lang.Object`, because values are represented as functions/values in a Scala object, which implicitly inherits those methods; this can result in name collisions. -
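A plausible use of these two sets (a sketch; the actual helper in the backend may be organised differently) is to escape any colliding name with backticks before emitting it as a Scala identifier:

```
import Set exposing (Set)

-- scalaKeywords and javaObjectMethods are the sets described above
escapeIfNeeded : String -> String
escapeIfNeeded name =
    if Set.member name scalaKeywords || Set.member name javaObjectMethods then
        "`" ++ name ++ "`"

    else
        name
```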
+\ #### uniqueVarName -
+\ ## **3. Saving Generated Files** diff --git a/docs/scala-json-codecs-backend.md b/docs/scala-json-codecs-backend.md index d1a0705f7..d113b9929 100644 --- a/docs/scala-json-codecs-backend.md +++ b/docs/scala-json-codecs-backend.md @@ -1,7 +1,7 @@ # Scala JSON-Codecs Backend Documentation This document provides a description of the JSON codecs backend. -The Json Codecs Backend for Scala contains functions to generate Codecs from types in the IR.
+The Json Codecs Backend for Scala contains functions to generate Codecs from types in the IR.\ [Circe](https://circe.github.io/circe/) is used as the base JSON library. ```mermaid @@ -9,7 +9,7 @@ graph TD; Backend-->Core; Backend-->Features ``` -The Scala backend is split into two aspects:
+The Scala backend is split into two aspects: \ 1. **Core** - the core Scala code representing the user business logic 2. **Feature** - the Codecs for each type defined in the input model @@ -66,16 +66,16 @@ Generates a decoder reference for the input type using the FQName #### mapTypeDefinitionToEncoder -Type definition could be any of the following:
-_**Type Alias Definition**_ - maps to an encoder for record type
+Type definition could be any of the following:\ +_**Type Alias Definition**_ - maps to an encoder for record type \ **_Custom Type Definition_** - uses helper functions to build encoder for custom -types

+types \\ #### mapTypeDefinitionToDecoder -Type definition could be any of the following:
-_**Type Alias Definition**_ - maps to an encoder for record type
+Type definition could be any of the following:\ +_**Type Alias Definition**_ - maps to an encoder for record type \ **_Custom Type Definition_** - uses helper functions to build encoder for custom -types

+types \\ #### mapCustomTypeDefinitionToEncoder diff --git a/docs/spark-backend.md b/docs/spark-backend.md index 72c565079..9bdec665c 100644 --- a/docs/spark-backend.md +++ b/docs/spark-backend.md @@ -8,10 +8,10 @@ The Spark API defines types for working with Spark. ## Values The current Spark API defines the following values: -**_spark_**
+**_spark_** \ This is a variable that maps to a Scala value -**_dataFrame_**
+**_dataFrame_** \ A DataFrame maps to a Scala type and references org.apache.spark.sql.DataFrame. A dataframe is an untyped view of a spark datase. A dataframe is a dataset of Rows @@ -22,9 +22,9 @@ dataFrame = Scala.TypeRef [ "org", "apache", "spark", "sql" ] "DataFrame" ``` -
+\ -**_literal_**
+**_literal_** \ A literal returns a Scala.Value by applying a Scala.Ref type to a list of arguments. ``` dataFrame : Scala.Type @@ -33,7 +33,7 @@ dataFrame = ``` -_**column**_
+_**column**_ \ This is a reference to a column in a Dataframe. This returns a Scala value by applying a reference to column to a string literal representing the name of the column ``` @@ -43,16 +43,16 @@ dataFrame = ``` -_**when**_
+_**when**_ \ This method takes a condition and a then branch and returns a Scala value -**_andWhen_**
+**_andWhen_** \ This method takes three parameters: a condition, a then branch and a soFar, and returns a new value -**_otherwise_**
+**_otherwise_** \ This method takes an elseBranch and a soFar and returns new Scala Value ``` dataFrame : Scala.Type @@ -61,7 +61,7 @@ dataFrame = ``` -_**alias**_
+_**alias**_ \ An alias is used to reference column name, typed column and data set. ``` dataFrame : Scala.Type @@ -70,7 +70,7 @@ dataFrame = ``` -**_select_**
+**_select_** \ Select is used to represent a projection in a relation. It takes a list of columns and returns a Scala Value representing a dataset @@ -87,7 +87,7 @@ select columns from = ) ``` -**_filter_**
+**_filter_** \ This method is used to return a subset of the data items from a dataset ``` @@ -103,7 +103,7 @@ filter predicate from = ``` -_**join**_
+_**join**_ \ This function is used to represent a join operation between two or more relations. ``` filter : Scala.Value -> Scala.Value -> Scala.Value @@ -117,7 +117,7 @@ filter predicate from = ] ``` -_**case statements**_
+_**case statements**_ \ Simple case statements of: * a series of literals * ends with a default case @@ -144,7 +144,7 @@ org.apache.spark.sql.functions.when( ).otherwise(false) ``` -_**List.member constructs**_
+_**List.member constructs**_ \ When List.member is used in conjunction with a List.filter function, such as in the following examples: ``` testEnumListMember : List { product : Product } -> List { product : Product } @@ -317,7 +317,7 @@ antiques.select( ## Types The current Spark API processes the following types: -_**Bool**_
+_**Bool**_ \ Values translated form basic Elm Booleans are treated as basic Scala Booleans, i.e. ``` testBool : List { foo : Bool } -> List { foo : Bool } @@ -336,7 +336,7 @@ gets translated into source.filter((org.apache.spark.sql.functions.col("foo")) === (false)) ``` -_**Float**_
+_**Float**_ \ Values translated from basic Elm Floating-point numbers are treated as basic Scala Doubles. They use `org.apache.spark.sql.types.DoubleType` and their literals do not have a trailing 'f', i.e. `1.23` not `1.23f`. i.e. @@ -357,7 +357,7 @@ gets translated into source.filter((org.apache.spark.sql.functions.col("foo")) === (9.99)) ``` -_**Int**_
+_**Int**_ \ Values translated from basic Elm Integers are treated as basic Scala Integers, i.e. ``` testInt : List { foo : Int } -> List { foo : Int } @@ -376,7 +376,7 @@ gets translated into source.filter((org.apache.spark.sql.functions.col("foo")) === (13)) ``` -_**String**_
+_**String**_ \ Values translated from basic Elm Strings are treated as basic Scala Strings, i.e. ``` testString : List { foo : String } -> List { foo : String } @@ -395,7 +395,7 @@ gets translated into source.filter((org.apache.spark.sql.functions.col("foo")) === ("bar")) ``` -_**Enum**_
+_**Enum**_ \ Elm Union types with no arguments (or Constructors in Morphir IR), are translated into String Literals. For instance: ``` @@ -418,12 +418,12 @@ gets translated into Currently the implementation doesn't check that the Constructor has no arguments. Nor does the current implementation check whether a Constructor has Public access, and only consider those to be Enums. -_**Maybe**_
+_**Maybe**_ \ Maybes are how Elm makes it possible to return a value or Nothing, equivalent to null in other languages. In Spark, all pure Spark functions and operators handle null gracefully (usually returning null itself if any operand is null). -**Just**
+**Just** \ In Elm, comparison against Maybes must be explicitly against `Just `. In Spark, no special treatment is needed to compare against a value that may be null. Therefore, elm code that looks like: @@ -444,7 +444,7 @@ gets translated in Spark to source.filter((org.apache.spark.sql.functions.col("foo")) === (true)) ``` -**Nothing**
+**Nothing** \ Where Elm code compares against Nothing, Morphir will translate this into a comparison to null in Spark. For example, to take a list of records and filter out null records in Elm, we would do: @@ -466,7 +466,7 @@ def testMaybeBoolConditional( source.filter(org.apache.spark.sql.functions.isnull(org.apache.spark.sql.functions.col("foo"))) ``` -**Maybe.map**
+**Maybe.map** \ Elm has `Maybe.map` and `Maybe.defaultValue` to take a Maybe and execute one branch of code if the Maybe isn't Nothing, and another branch if it is. diff --git a/docs/spark-testing-framework.md b/docs/spark-testing-framework.md index e9b2df949..60e4cb6bd 100644 --- a/docs/spark-testing-framework.md +++ b/docs/spark-testing-framework.md @@ -46,7 +46,7 @@ testExampleMax source = ``` #### 2) Write code to generate input test data -- If a new type has been designed for the test then new data of that type needs to be generated to use as an input for the test. This can be done in `/morphir-elm/tests-integration/spark/elm-tests/tests` using some elm code placed in a Generate....elm file. This can be done using the 'flatten' and 'generator' functions defined in GenerateAntiqueTestData.elm. It is simple to create code to generate input data by following the example set by the other Generate....elm files in the same directory. All Generate*.elm files should be placed in the `/morphir-elm/tests-integration/spark/elm-tests/tests` directory. The Generate*.elm file and its counterpart *DataSource.elm must be of the form " GenerateData.elm" and "DataSource.elm". +- If a new type has been designed for the test then new data of that type needs to be generated to use as an input for the test. This can be done in `/morphir-elm/tests-integration/spark/elm-tests/tests` using some elm code placed in a Generate....elm file. This can be done using the 'flatten' and 'generator' functions defined in GenerateAntiqueTestData.elm. It is simple to create code to generate input data by following the example set by the other Generate....elm files in the same directory. All Generate*.elm files should be placed in the `/morphir-elm/tests-integration/spark/elm-tests/tests` directory. The Generate*.elm file and its counterpart *DataSource.elm must be of the form " Generate `` Data.elm" and " `` DataSource.elm". Some example data generation code in elm can be seen below. 
``` diff --git a/docs/user-guide-index.md b/docs/user-guide-index.md index 2866c4687..83cbe0828 100644 --- a/docs/user-guide-index.md +++ b/docs/user-guide-index.md @@ -8,7 +8,7 @@ The purpose of the document is to provide a detailed explanation of how to use t 1. [Background](#) 2. [What is all About](#) 3. [The Morphir Community](#) -4. [Getting Started with Morphir](https://github.com/finos/morphir-elm/blob/main/README.md)
+4. [Getting Started with Morphir](https://github.com/finos/morphir-elm/blob/main/README.md) \ 5. [Quick Install](#) 6. [How to Write Business Logic](#) 1. [What Makes a Good Model](#)