Original version by Hiroto Shioi (https://github.com/HirotoShioi).
The purpose of this project is to analyze Cardano log files and provide a solution to the end user, while collecting statistics so Cardano developers can prioritize the issues.
Installation instructions are available in INSTALL.md.
Please note that you have to provide an environment variable called LCPATH
pointing to the directory where all the key files are.
Usage instructions are available at USAGE.md.
- https://youtu.be/NQJVNvxgDqg?t=615
- https://youtu.be/NQJVNvxgDqg?t=1579
- https://nixos.org/nixops/manual/#sec-deploying-to-ec2
- Log-classifier is a command line interface IOHK help desk agents can use to classify tickets.
- Currently, identifying an issue manually takes a decent amount of time and knowledge. Using the classifier, the agent can automate identification of issues without having to know all the known issues.
- This is a Haskell program, so the agent will need Stack and Nix in order to use the classifier. Instructions on how to use it are documented in a separate file.
- The classifier cannot find any unknown/unique issues.
This is a use case diagram. Use case diagrams give an overview of the usage requirements for a system. The diagrams are useful for presentations to management and/or project stakeholders, but for actual development you will find that the use cases themselves provide significantly more value, because they describe "the meat" of the actual requirements. For more details, please see here
The general architecture can be seen here. It's still missing some things, but it's sufficient for most of the tests we currently have.
A layer is a list of functions grouped together. For example, a database layer has functions for interacting with - the database! Yaay. The problem is that in most cases we need to be able to stub those functions - if we want to test functions that depend on the database, we have no option but to make them something we can replace at runtime. And to replace them at runtime, we place them in a "layer", a record-of-functions that allows us to swap them out easily. Like:
data IOLayer m = IOLayer
{ iolAppendFile :: FilePath -> Text -> m ()
, iolPrintText :: Text -> m ()
, iolReadFile :: FilePath -> m Text
, iolLogDebug :: Text -> m ()
, iolLogInfo :: Text -> m ()
}
We can think of a layer as a changeable module system - we export the functions in the data structure, but we can swap them out at runtime.
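To make the swapping concrete, here is a minimal, self-contained sketch of the idea: the same code runs against a "production" layer and a "test" layer that captures output instead of printing it. This is illustrative only - it uses String instead of Text (to avoid the text package dependency), a trimmed-down record, and made-up names like prodIOLayer and testIOLayer that are not in the codebase.

```haskell
import Data.IORef (IORef, newIORef, modifyIORef', readIORef)

-- Trimmed-down sketch of the IOLayer record (String instead of Text).
data IOLayer m = IOLayer
    { iolPrintText :: String -> m ()
    , iolLogInfo   :: String -> m ()
    }

-- Production layer: talks to the real world.
prodIOLayer :: IOLayer IO
prodIOLayer = IOLayer
    { iolPrintText = putStrLn
    , iolLogInfo   = \msg -> putStrLn ("[INFO] " ++ msg)
    }

-- Test layer: captures all output in an IORef so tests can inspect it.
testIOLayer :: IORef [String] -> IOLayer IO
testIOLayer ref = IOLayer
    { iolPrintText = \msg -> modifyIORef' ref (++ [msg])
    , iolLogInfo   = \msg -> modifyIORef' ref (++ ["[INFO] " ++ msg])
    }

-- Code written against the layer works with either implementation.
greet :: Monad m => IOLayer m -> String -> m ()
greet layer name = do
    iolLogInfo layer "greeting user"
    iolPrintText layer ("Hello, " ++ name)

main :: IO ()
main = do
    ref <- newIORef []
    greet (testIOLayer ref) "agent"
    captured <- readIORef ref
    print captured  -- prints ["[INFO] greeting user","Hello, agent"]
```

The caller never knows which layer it got, which is exactly what makes the functions stubbable in tests.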
The data source is the place we get our data from. Currently, it is fixed on Zendesk. This is an abstraction over any form of persistent data. We could say that this is a layer, but it actually contains quite a bit more.
Currently, the Zendesk types themselves are a pretty big part of the DataSource
module.
So the data layer is the abstraction layer which can currently be specialized to the @HTTPLayer@ or the @DBLayer@. It makes sense to abstract this away, since this is the group of functions that interacts with any data in the system.
Obviously, the direct way to fetch data is using HTTP JSON requests against the Zendesk API (https://developer.zendesk.com/rest_api/docs/core/introduction). This layer is responsible for that. It contains common functions that allow us to communicate with the Zendesk REST API.
This layer is responsible for the low-level HTTP communication. The @HTTPLayer@ communicates with the Zendesk REST API using this layer.
This layer is responsible for caching the results that come from the @HTTPLayer@, allowing us to fetch data from the database rather than from the HTTP REST API, which has request limits and takes much longer.
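The caching idea can be sketched as a wrapper around a data layer: the wrapper answers from a local cache when it can, and only falls through to the inner (HTTP-backed) layer on a miss. This is a hypothetical, in-memory sketch - the names DataLayer, mkHttpLayer, and cached are invented for illustration, and the real project caches into a database rather than a Map.

```haskell
import qualified Data.Map.Strict as Map
import Data.IORef (IORef, newIORef, readIORef, modifyIORef')

-- Hypothetical one-function data layer; the real record is larger.
newtype DataLayer m = DataLayer { dlGetTicket :: Int -> m String }

-- Stand-in for the HTTP-backed layer; counts calls so we can see caching work.
mkHttpLayer :: IORef Int -> DataLayer IO
mkHttpLayer calls = DataLayer $ \tid -> do
    modifyIORef' calls (+ 1)
    pure ("ticket #" ++ show tid)

-- Wrap any DataLayer with a cache: answer from the cache on a hit,
-- otherwise delegate to the inner layer and remember the result.
cached :: IORef (Map.Map Int String) -> DataLayer IO -> DataLayer IO
cached cacheRef inner = DataLayer $ \tid -> do
    cache <- readIORef cacheRef
    case Map.lookup tid cache of
        Just t  -> pure t
        Nothing -> do
            t <- dlGetTicket inner tid
            modifyIORef' cacheRef (Map.insert tid t)
            pure t

main :: IO ()
main = do
    calls    <- newIORef 0
    cacheRef <- newIORef Map.empty
    let layer = cached cacheRef (mkHttpLayer calls)
    _ <- dlGetTicket layer 1
    _ <- dlGetTicket layer 1  -- second request is served from the cache
    n <- readIORef calls
    print n                   -- prints 1: the inner layer was hit only once
```

Because the wrapper has the same type as the thing it wraps, callers cannot tell whether they are talking to the cache or to the HTTP layer directly.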
- Many of Daedalus's issues can be identified by analyzing the log files. The classifier takes advantage of this by analyzing the log file and mapping it to possible problems and solutions, which can then be provided to the end user.
- Tickets sent from the Daedalus bug report have a log file attached. The classifier analyzes the log file attached to the ticket. If any known issues are found, it then posts a comment to that ticket. (See example below)
- It uses a CSV file (basically a small record file) which contains all the information needed to perform the analysis. This makes it easy to add new known issues.
- Help desk agents use the command line interface to run the classifier.
- The classifier uses the Zendesk API to perform any action. Because of this, agents need to provide their email address and password to the classifier.
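The CSV-driven analysis mentioned above can be sketched as: read knowledge-base rows, then report the first entry whose error text occurs in the log. Everything here is hypothetical - the real CSV's column layout is not shown in this document, so the three assumed columns (error code, error text to search for, suggested solution), the sample rows, and the naive comma splitting (a real parser such as cassava would be used instead) are illustration only.

```haskell
import Data.List (find, isInfixOf)

-- Hypothetical knowledge-base entry; the real CSV's columns may differ.
data Knowledge = Knowledge
    { kErrorCode :: String  -- tag added to the ticket
    , kErrorText :: String  -- fragment to search for in the log
    , kSolution  :: String  -- suggested fix for the end user
    }

-- Naive comma split, good enough for a sketch (no quoting support).
parseRow :: String -> Maybe Knowledge
parseRow row = case splitOn ',' row of
    [code, text, sol] -> Just (Knowledge code text sol)
    _                 -> Nothing
  where
    splitOn c s = case break (== c) s of
        (pre, [])       -> [pre]
        (pre, _ : rest) -> pre : splitOn c rest

-- An issue is "known" when its error text occurs in the log.
classify :: [Knowledge] -> String -> Maybe Knowledge
classify kb logText = find (\k -> kErrorText k `isInfixOf` logText) kb

main :: IO ()
main = do
    let Just kb = mapM parseRow
            [ "connection-refused,connection refused,Check your firewall"
            , "db-corrupted,Data.Binary.getWord8,Delete the DB and resync"
            ]
    case classify kb "Network.Socket: connection refused (os error)" of
        Just k  -> putStrLn (kErrorCode k ++ ": " ++ kSolution k)
        Nothing -> putStrLn "no known issue found"
```

Adding a new known issue is then just appending one row to the CSV, with no code change.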
In order to use this, the user must possess an IOHK Zendesk agent ID and API key.
If you want to deploy this, you also need an app_users.json
file that looks something like this:
[
{ "_username": "ksaric"
, "_password": "pass"
},
{ "_username": "hshioi"
, "_password": "pass2"
},
{ "_username": "rcohen"
, "_password": "pass3"
}
]
All these files must be provided in tmp-secrets
with the following names, and they will be deployed to the machine along with the application:
- token
- assign_to
- app_users.json
The classifier will analyze the log file attached to the ticket. If any issues were found, it will post a comment which states:
- Error code
- What the issue is
- Possible solution
- The error text the classifier caught in the log file that relates to the issue
After the classifier has analyzed the ticket, it adds tags to the ticket which can be used to identify what the issues are. Agents can later use these tags to collect statistics.
The classifier can collect all the tags assigned to the tickets, accumulate them, and provide them to the Zendesk agent as statistics.
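The accumulation step amounts to counting how often each tag appears across tickets, which in Haskell is one Map.fromListWith call. A minimal sketch, assuming each ticket carries a plain list of tag strings (tagStats is an invented name, and the sample tags are taken from the statistics list below):

```haskell
import qualified Data.Map.Strict as Map

-- Count how often each tag occurs across all tickets.
tagStats :: [[String]] -> Map.Map String Int
tagStats ticketTags =
    Map.fromListWith (+) [ (tag, 1) | tags <- ticketTags, tag <- tags ]

main :: IO ()
main = print (tagStats [["analyzed", "db-corrupted"], ["analyzed"]])
-- prints fromList [("analyzed",2),("db-corrupted",1)]
```

The resulting map is what gets rendered as the tag-count report shown below.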
There are currently **** tickets in the system assigned to *********@iohk.io
*** tickets have been analyzed by the classifier.
**** tickets are not analyzed.
Below are statistics:
analyzed: **
can-not-sync: **
cannot-get-db-size: **
cannot_connect_to_wallet_using_network: **
closed_by_merge: **
coin_redemption: **
connection-refused: **
db-corrupted: **
db-path-error: **
directory-not-found: **
.... and so on
In this case, one can run the command below.
./log-classifier-cli fetch-tickets-from --time=12.07.2018
IOHK help desk agent finds a ticket submitted by an end user with a log file attached and wants to perform analysis on it
In this case, one can run the command below.
./log-classifier-cli process-ticket <TICKET_ID>
This will parse the log file attached to the <TICKET_ID>
and provide the result to the agent as a private comment in the Zendesk UI. The agent then uses that comment to help the end user troubleshoot.
IOHK help desk agent wants to parse every ticket sent from the Daedalus bug report with a log file attached
In this case, one can run the command below.
./log-classifier-cli process-tickets
This will parse any tickets with a log file attached that were sent from the Daedalus bug report. It then posts the analysis result as a private comment on each ticket it has parsed, so that the agents can provide the end user with a solution.
IOHK help desk agent wants to know the statistics of the occurring issues so he/she can report to the Cardano-SL team, which will categorize/prioritize the issues
In this case, one can run the command below.
./log-classifier-cli show-stats
This will collect all the tags attached to the tickets, then group them so that the agent can report to the dev team. The team will then categorize/prioritize the issues.
The command should be run as follows:
./log-classifier-cli inspect-local-zip --file-path="/home/ksaric/Downloads/Windows-Logs.zip"
This will return an error if one is encountered, or else the report from the analysis.