From 9836472a38c327692b626abad3ba554dbd9f8761 Mon Sep 17 00:00:00 2001
From: marcodeltufo <marco.deltufo@exact-lab.it>
Date: Thu, 27 Jul 2023 11:33:35 +0200
Subject: [PATCH] cleaned html entities

---
 .../apis/java-javascript-v3-api.md            | 12 ++---
 .../apis/personal-access-tokens.md            |  4 +-
 .../eln-lims-web-ui-extensions.md             |  2 +-
 .../client-side-extensions/openbis-webapps.md |  2 +-
 .../architectural-overview.md                 |  6 +--
 .../installation-and configuration-guide.md   |  6 +--
 .../system-requirements.md                    |  6 +--
 .../server-side-extensions/as-api-listener.md |  2 +-
 .../server-side-extensions/core-plugins.md    |  2 +-
 .../authentication-systems.md                 |  6 +--
 .../advanced-features/maintenance-tasks.md    | 54 +++++++++----------
 .../advanced-features/share-ids.md            |  6 +--
 .../docker-installation-and-configuration.md  |  6 +--
 .../installation/architectural-overview.md    |  6 +--
 .../installation-and-configuration-guide.md   | 16 +++---
 ...tional-application-server-configuration.md |  6 +--
 ...optional-datastore-server-configuration.md |  6 +--
 .../installation/system-requirements.md       |  6 +--
 .../customise-the-main-menu.md                |  2 +-
 .../masterdata-exports-and-imports.md         |  2 +-
 .../admins-documentation/space-management.md  |  4 +-
 .../custom-database-queries.md                | 10 ++--
 .../properties-handled-by-scripts.md          | 30 +++++------
 .../general-users/data-export.md              |  4 +-
 .../general-users/data-upload.md              |  6 +--
 .../inventory-of-materials-and-methods.md     |  4 +-
 .../general-users/lab-notebook.md             |  2 +-
 .../managing-lab-stocks-and-orders-2.md       | 16 +++---
 ...-for-analysis-of-data-stored-in-openbis.md |  2 +-
 .../openbis-kinme-nodes.md                    |  8 +--
 30 files changed, 122 insertions(+), 122 deletions(-)

diff --git a/docs/software-developer-documentation/apis/java-javascript-v3-api.md b/docs/software-developer-documentation/apis/java-javascript-v3-api.md
index ba6a8f55e1e..663df3215ee 100644
--- a/docs/software-developer-documentation/apis/java-javascript-v3-api.md
+++ b/docs/software-developer-documentation/apis/java-javascript-v3-api.md
@@ -34,14 +34,14 @@ The Java V3 API consists of two interfaces:
 Please check our JavaDoc for more details:
 <https://openbis.ch/javadoc/20.10.x/javadoc-api-v3/index.html>
 
-All V3 API jars are packed in openBIS-API-V3-&lt;VERSION&gt;.zip which
-is part of openBIS-clients-and-APIs-&lt;VERSION&gt;.zip (the latest
-version can be downloaded at [Sprint Releases](#) &gt; Clients and APIs)
+All V3 API jars are packed in openBIS-API-V3-<VERSION>.zip which
+is part of openBIS-clients-and-APIs-<VERSION>.zip (the latest
+version can be downloaded at [Sprint Releases](#) > Clients and APIs)
 
 ### The Javascript API
 
 The Javascript V3 API consists of a module hosted at
-&lt;OPENBIS\_URL&gt;/resources/api/v3/openbis.js, for instance
+<OPENBIS\_URL>/resources/api/v3/openbis.js, for instance
 <http://localhost/openbis>/resources/api/v3/openbis.js.
 
 Please check the openbis.js file itself for more details.
@@ -3981,8 +3981,8 @@ library. Downloading is done in two steps:
 
         new FastDownloader(downloadSession).downloadTo(destinationFolder);
 
-    The files are stored in the destination folder in &lt;data set
-    code&gt;/&lt;relative file path as in the data store on openBIS&gt;.
+    The files are stored in the destination folder in <data set
+    code>/<relative file path as in the data store on openBIS>.
 Here is a complete example:

diff --git a/docs/software-developer-documentation/apis/personal-access-tokens.md b/docs/software-developer-documentation/apis/personal-access-tokens.md
index 527b14b5fb6..9ee7177ff08 100644
--- a/docs/software-developer-documentation/apis/personal-access-tokens.md
+++ b/docs/software-developer-documentation/apis/personal-access-tokens.md
@@ -115,10 +115,10 @@ Instead, each PAT should have a well-defined validity period after which
 it should be replaced with a new PAT with a different hash. To make
 this transition as smooth as possible please use the following guide:
 
-- create PAT\_1 with sessionName = &lt;MY\_SESSION&gt; and use it in
+- create PAT\_1 with sessionName = <MY\_SESSION> and use it in
   your integration
 - when PAT\_1 is soon to expire, create PAT\_2 with the same
-  sessionName = &lt;MY\_SESSION&gt; (both PAT\_1 and PAT\_2 will work
+  sessionName = <MY\_SESSION> (both PAT\_1 and PAT\_2 will work
   at this point and will refer to the same openBIS session)
 - replace PAT\_1 with PAT\_2 in your integration

diff --git a/docs/software-developer-documentation/client-side-extensions/eln-lims-web-ui-extensions.md b/docs/software-developer-documentation/client-side-extensions/eln-lims-web-ui-extensions.md
index d7e252ea38b..73266064fef 100644
--- a/docs/software-developer-documentation/client-side-extensions/eln-lims-web-ui-extensions.md
+++ b/docs/software-developer-documentation/client-side-extensions/eln-lims-web-ui-extensions.md
@@ -85,7 +85,7 @@ Pattern](https://en.wikipedia.org/wiki/Interceptor_pattern)
     - beforeViewPaint
     - afterViewPaint
 
-    
+    
 - Template methods are only needed to add custom components to form
   views. Best examples of how to use these can be found in

diff --git a/docs/software-developer-documentation/client-side-extensions/openbis-webapps.md b/docs/software-developer-documentation/client-side-extensions/openbis-webapps.md
index 3889340c538..52262359597 100644
--- a/docs/software-developer-documentation/client-side-extensions/openbis-webapps.md
+++ b/docs/software-developer-documentation/client-side-extensions/openbis-webapps.md
@@ -246,7 +246,7 @@ Notes about subtab identifiers:
    webapp core-plugin folder, i.e.
   \[technology\]/\[version\]/as/webapps/\[WEBAPP\_CODE\])
 
-Cross communication openBIS &gt; DSS
+Cross communication openBIS > DSS
 ------------------------------------
 
 ### Background

diff --git a/docs/software-developer-documentation/development-environment/architectural-overview.md b/docs/software-developer-documentation/development-environment/architectural-overview.md
index dc65bec12a5..db3844f0799 100644
--- a/docs/software-developer-documentation/development-environment/architectural-overview.md
+++ b/docs/software-developer-documentation/development-environment/architectural-overview.md
@@ -1,4 +1,4 @@
-Architectural Overview
-======================
-
+Architectural Overview
+======================
+
 hello world
\ No newline at end of file

diff --git a/docs/software-developer-documentation/development-environment/installation-and configuration-guide.md b/docs/software-developer-documentation/development-environment/installation-and configuration-guide.md
index 6cfd30cd4b0..8cfc3063ec0 100644
--- a/docs/software-developer-documentation/development-environment/installation-and configuration-guide.md
+++ b/docs/software-developer-documentation/development-environment/installation-and configuration-guide.md
@@ -1,4 +1,4 @@
-Installation And Configuration Guide
-====================================
-
+Installation And Configuration Guide
+====================================
+
 hello world
\ No newline at end of file

diff --git a/docs/software-developer-documentation/development-environment/system-requirements.md b/docs/software-developer-documentation/development-environment/system-requirements.md
index a1deff5b1a8..50b27cd6a3c 100644
--- a/docs/software-developer-documentation/development-environment/system-requirements.md
+++ b/docs/software-developer-documentation/development-environment/system-requirements.md
@@ -1,4 +1,4 @@
-System Requirements
-===================
-
+System Requirements
+===================
+
 hello world
\ No newline at end of file

diff --git a/docs/software-developer-documentation/server-side-extensions/as-api-listener.md b/docs/software-developer-documentation/server-side-extensions/as-api-listener.md
index 45806712862..0c66bc7154a 100644
--- a/docs/software-developer-documentation/server-side-extensions/as-api-listener.md
+++ b/docs/software-developer-documentation/server-side-extensions/as-api-listener.md
@@ -32,7 +32,7 @@ It is required to provide an 'operation-listener.class' indicating the
 class name of the listener that will be loaded.
 
 Additionally any number of properties following the
-pattern 'operation-listener.&lt;your-custom-name&gt;' can be provided.
+pattern 'operation-listener.<your-custom-name>' can be provided.
 Custom properties are provided to help maintainability; they give the
 integrator the opportunity to compile the listener only once and
 configure it differently for different instances.

diff --git a/docs/software-developer-documentation/server-side-extensions/core-plugins.md b/docs/software-developer-documentation/server-side-extensions/core-plugins.md
index 6ca96076a1e..0928e3da9d2 100644
--- a/docs/software-developer-documentation/server-side-extensions/core-plugins.md
+++ b/docs/software-developer-documentation/server-side-extensions/core-plugins.md
@@ -283,7 +283,7 @@ rules:
 ## Using Java libraries in Core Plugins
 
 OpenBIS allows you to include Java libraries in core plugin folders. The
-\*.jar files have to be stored in "&lt;core plugin folder&gt;/lib"
+\*.jar files have to be stored in "<core plugin folder>/lib"
 folder.
For instance, in order to use "my-lib.jar" in "my-dropbox" a following file structure is needed: diff --git a/docs/system-admin-documentation/advanced-features/authentication-systems.md b/docs/system-admin-documentation/advanced-features/authentication-systems.md index e60b24d6e1d..25fd581ed9c 100644 --- a/docs/system-admin-documentation/advanced-features/authentication-systems.md +++ b/docs/system-admin-documentation/advanced-features/authentication-systems.md @@ -1,4 +1,4 @@ -Authentication Systems -====================== - +Authentication Systems +====================== + To be written \ No newline at end of file diff --git a/docs/system-admin-documentation/advanced-features/maintenance-tasks.md b/docs/system-admin-documentation/advanced-features/maintenance-tasks.md index 5cb3691c072..ca453c8d6e6 100644 --- a/docs/system-admin-documentation/advanced-features/maintenance-tasks.md +++ b/docs/system-admin-documentation/advanced-features/maintenance-tasks.md @@ -34,7 +34,7 @@ The following properties are common for all maintenance tasks: | start | A time at which the task should be executed the first time. Format: HH:mm. where HH is a two-digit hour (in 24h notation) and mm is a two-digit minute. By default the task is execute at server startup. | | run-schedule | Scheduling plan for task execution. Properties execute-only-once, interval, and start will be ignored if specified. Crontab syntax: -cron: <second> <minute> <hour> <day> <month> <weekday> +cron: <second> <minute> <hour> <day> <month> <weekday> Examples: cron: 0 0 * * * *: the top of every hour of every day. cron: */10 * * * * *: every ten seconds. @@ -45,15 +45,15 @@ cron: 0 0 9-17 * * MON-FRI: on the hour nine-to-five weekdays. cron: 0 0 0 25 12 ?: every Christmas Day at midnight. Non-crontab syntax: Comma-separated list of definitions with following syntax: -[[<counter>.]<week day>] [<month day>[.<month>]] <hour>[:<minute>] -where <counter> counts the specified week day of the month. <week day> is MO, MON, TU, TUE, WE, WED, TH, THU, FR, FRI, SA, SAT, SU, or SUN (ignoring case). <month> is either the month number (followed by an optionl '.') or JAN, FEB, MAR, APR, MAY, JUN, JUL, AUG, SEP, OCT, NOV, or DEC (ignoring case). +[[<counter>.]<week day>] [<month day>[.<month>]] <hour>[:<minute>] +where <counter> counts the specified week day of the month. <week day> is MO, MON, TU, TUE, WE, WED, TH, THU, FR, FRI, SA, SAT, SU, or SUN (ignoring case). <month> is either the month number (followed by an optionl '.') or JAN, FEB, MAR, APR, MAY, JUN, JUL, AUG, SEP, OCT, NOV, or DEC (ignoring case). Examples: 6, 18: every day at 6 AM and 6 PM. 3.FR 22:15: every third friday of a month at 22:15. 1. 15:50: every first day of a month at 3:50 PM. SAT 1:30: every saturday at 1:30 AM. 1.Jan 5:15, 1.4. 5:15, 1.7 5:15, 1. OCT 5:15: every first day of a quarter at 5:15 AM. | -| run-schedule-file | File where the timestamp for next execution is stored. It is used if run-schedule is specified. Default: <installation folder>/<plugin name>_<class name> | +| run-schedule-file | File where the timestamp for next execution is stored. It is used if run-schedule is specified. Default: <installation folder>/<plugin name>_<class name> | | retry-intervals-after-failure | Optional comma-separated list of time intervals (format as for interval) after which a failed execution will be retried. Note, that a maintenance task will be execute always when the next scheduled timepoint occurs. 
This feature allows executing a task much earlier in case of temporary errors (e.g. temporary unavailability of another server). |

## Feature

@@ -180,13 +180,13 @@ properties need to be scanned they should be added to the plugin.properties
 |----------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
 | dataset-types | Comma-separated list of regular expressions of data set types. All FASTA and FASTQ files from those data sets are handled. All data sets of types not matching at least one of the regular expressions are not handled. |
 | entity-sequence-properties | Comma-separated list of descriptions of entity properties with sequences. A description is of the form
-&lt;entity kind&gt;+&lt;entity type code&gt;+&lt;property type code&gt;
-where &lt;entity kind&gt; is either EXPERIMENT, SAMPLE or DATA_SET (Materials are not supported). |
+<entity kind>+<entity type code>+<property type code>
+where <entity kind> is either EXPERIMENT, SAMPLE or DATA_SET (Materials are not supported). |
 | file-types | Space separated list of file types. Data set files of those file types have to be FASTA or FASTQ files. Default: .fasta .fa .fsa .fastq |
 | blast-tools-directory | Path in the file system where all BLAST tools are located. If it is not specified or empty the tools directory has to be in the PATH environment variable. |
-| blast-databases-folder | Path to the folder where all BLAST databases are stored. Default: &lt;data store root&gt;/blast-databases |
-| blast-temp-folder | Path to the folder where temporary FASTA files are stored. Default: &lt;blast-databases-folder&gt;/tmp |
-| last-seen-data-set-file | Path to the file which stores the id of the last seen data set. Default: &lt;data store root&gt;/last-seen-data-set-for-BLAST-database-creation |
+| blast-databases-folder | Path to the folder where all BLAST databases are stored. Default: <data store root>/blast-databases |
+| blast-temp-folder | Path to the folder where temporary FASTA files are stored. Default: <blast-databases-folder>/tmp |
+| last-seen-data-set-file | Path to the file which stores the id of the last seen data set. Default: <data store root>/last-seen-data-set-for-BLAST-database-creation |
 
 **Example**:
 
@@ -325,7 +325,7 @@ some criteria. This task needs the archive plugin to be configured in
 | Property Key | Description |
 |-----------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
 | excluded-data-set-types | Comma-separated list of data set types. Data sets of such types are not archived. Default: No data set type is excluded. |
-| estimated-data-set-size-in-KB.&lt;data set type&gt; | Specifies for the data set type &lt;data set type&gt; the average size in KB. If &lt;data set type&gt; is DEFAULT it will be used for all data set types with unspecified estimated size. |
+| estimated-data-set-size-in-KB.<data set type> | Specifies for the data set type <data set type> the average size in KB. If <data set type> is DEFAULT it will be used for all data set types with unspecified estimated size.
|
 | free-space-provider.class | Fully qualified class name of the free space provider (implementing ch.systemsx.cisd.common.filesystem.IFreeSpaceProvider). Depending on the free space provider additional properties, all starting with prefix free-space-provider., might be needed. Default: ch.systemsx.cisd.common.filesystem.SimpleFreeSpaceProvider |
 | monitored-dir | Path to the directory to be monitored by the free space provider. |
 | minimum-free-space-in-MB | Minimum free space in MB. If the free space is below this limit the task archives data sets. Default: 1 GB |
@@ -381,8 +381,8 @@ are organized hierarchically in accordance with their experiments and samples
 | storeroot-dir | Path to the root directory of the store. Used if storeroot-dir-link-path is not specified. |
 | hierarchy-root-dir | Path to the root directory of the mirrored store. |
 | link-naming-strategy.class | Fully qualified class name of the strategy to generate the hierarchy (implementing ch.systemsx.cisd.etlserver.plugins.IHierarchicalStorageLinkNamingStrategy). Depending on the actual strategy additional properties, all starting with prefix link-naming-strategy., might be needed. Default: ch.systemsx.cisd.etlserver.plugins.TemplateBasedLinkNamingStrategy |
-| link-source-subpath.&lt;data set type&gt; | Link source subpath for the specified data set type. Only files and folders in this relative path inside a data set will be mirrored. Default: The complete data set folder will be mirrored. |
-| link-from-first-child.&lt;data set type&gt; | Flag which specifies whether only the first child or the complete folder (either the data set or the one specified by link-source-subpath.&lt;data set type&gt;) is mirrored. Default: False |
+| link-source-subpath.<data set type> | Link source subpath for the specified data set type. Only files and folders in this relative path inside a data set will be mirrored. Default: The complete data set folder will be mirrored. |
+| link-from-first-child.<data set type> | Flag which specifies whether only the first child or the complete folder (either the data set or the one specified by link-source-subpath.<data set type>) is mirrored. Default: False |
 | with-meta-data | Flag, which specifies whether directories with meta-data.tsv and a link should be created or only links. The default behavior is to create links-only. Default: false |
 | link-naming-strategy.template | The exact form of link paths produced by TemplateBasedLinkNamingStrategy is defined by this template. The variables dataSet, dataSetType, sample, experiment, project and space will be recognized and replaced in the actual link path.
@@ -522,9 +522,9 @@ data set is the starting point when the task is executed next time.
 | Property Key | Description |
 |----------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
 | compute-checksum | If true the CRC32 checksum (and optionally a checksum of the type specified by checksum-type) of all files will be calculated and stored in the pathinfo database. Default value: false |
-| checksum-type | Optional checksum type. If specified and compute-checksum = true two checksums are calculated: CRC32 checksum and the checksum of specified type.
The type and the checksum are stored in the pathinfo database. An allowed type has to be supported by MessageDigest.getInstance(&lt;checksum type&gt;). For more details see http://docs.oracle.com/javase/8/docs/api/java/security/MessageDigest.html#getInstance-java.lang.String-. |
+| checksum-type | Optional checksum type. If specified and compute-checksum = true two checksums are calculated: CRC32 checksum and the checksum of specified type. The type and the checksum are stored in the pathinfo database. An allowed type has to be supported by MessageDigest.getInstance(<checksum type>). For more details see http://docs.oracle.com/javase/8/docs/api/java/security/MessageDigest.html#getInstance-java.lang.String-. |
 | data-set-chunk-size | Number of data sets requested from AS in one chunk if it is used as a maintenance task. Default: 1000 |
-| max-number-of-chunks | Maximum number of chunks of size data-set-chunk-size are processed if it is used as a maintenance task. If it is &lt;= 0 and time-limit isn't defined all data sets are processed. Default: 0 |
+| max-number-of-chunks | Maximum number of chunks of size data-set-chunk-size are processed if it is used as a maintenance task. If it is <= 0 and time-limit isn't defined all data sets are processed. Default: 0 |
 | time-limit | Limit of execution time of this task if it is used as a maintenance task. The task is stopped before reading the next chunk if the time has been used up. If it is specified it is an alternative way to limit the number of data sets to be processed instead of specifying max-number-of-chunks. This parameter can be specified with one of the following time units: ms, msec, s, sec, m, min, h, hours, d, days. Default time unit is sec. |
 
 **Example**:
 
@@ -685,7 +685,7 @@ When specified this task stops checking after the specified pausing time point a
 After all data sets have been checked the task checks again all data sets starting with the oldest one specified by checking-time-interval. |
 | continuing-time-point | Time point where checking continues. Format: HH:mm. where HH is a two-digit hour (in 24h notation) and mm is a two-digit minute. Ignored when pausing-time-point isn't specified. Default value: Time when the task is executed. |
 | chunk-size | Maximum number of data sets retrieved from AS. Ignored when pausing-time-point isn't specified. Default value: 1000 |
-| state-file | File to store registration time stamp and code of last considered data set. This is only used when pausing-time-point has been specified. Default: &lt;store root&gt;/DataSetAndPathInfoDBConsistencyCheckTask-state.txt |
+| state-file | File to store registration time stamp and code of last considered data set. This is only used when pausing-time-point has been specified. Default: <store root>/DataSetAndPathInfoDBConsistencyCheckTask-state.txt |
 
 **Example**: The following example checks all data sets of the last ten years. It does the check only during the night and continues the next night.
@@ -759,21 +759,21 @@ makes several assumptions on the database schema:
 
 The general format of the mapping file is as follows:
 
-\[&lt;Material Type Code&gt;: &lt;table Name&gt;, &lt;code column
-name&gt;\]
+\[<Material Type Code>: <table Name>, <code column
+name>\]
 
-&lt;Property Type Code&gt;: &lt;column name&gt;
+<Property Type Code>: <column name>
 
-&lt;Property Type Code&gt;: &lt;column name&gt;
+<Property Type Code>: <column name>
 
 ...
-\[&lt;Material Type Code&gt;: &lt;table Name&gt;, &lt;code column
-name&gt;\]
+\[<Material Type Code>: <table Name>, <code column
+name>\]
 
-&lt;Property Type Code&gt;: &lt;column name&gt;
+<Property Type Code>: <column name>
 
-&lt;Property Type Code&gt;: &lt;column name&gt;
+<Property Type Code>: <column name>
 
 ...
 
@@ -1045,7 +1045,7 @@ data source for key 'path-info-db'.
 
 | Property Key | Description |
 |---------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| checksum-type | Optional checksum type. If specified two checksums are calculated: CRC32 checksum and the checksum of specified type. The type and the checksum are stored in the pathinfo database. An allowed type has to be supported by MessageDigest.getInstance(&lt;checksum type&gt;). For more details see http://docs.oracle.com/javase/8/docs/api/java/security/MessageDigest.html#getInstance-java.lang.String-. |
+| checksum-type | Optional checksum type. If specified two checksums are calculated: CRC32 checksum and the checksum of specified type. The type and the checksum are stored in the pathinfo database. An allowed type has to be supported by MessageDigest.getInstance(<checksum type>). For more details see http://docs.oracle.com/javase/8/docs/api/java/security/MessageDigest.html#getInstance-java.lang.String-. |
 
 **Example**:
 
@@ -1083,12 +1083,12 @@ the pathinfo database.
 
 | Property Key | Description |
 |---------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| time-stamp-of-youngest-data-set | Time stamp of the youngest data set to be considered. The format has to be &lt;4 digit year&gt;-&lt;month&gt;-&lt;day&gt; &lt;hour&gt;:&lt;minute&gt;:&lt;second&gt;. |
+| time-stamp-of-youngest-data-set | Time stamp of the youngest data set to be considered. The format has to be <4 digit year>-<month>-<day> <hour>:<minute>:<second>. |
 | compute-checksum | If true the CRC32 checksum (and optionally a checksum of the type specified by checksum-type) of all files will be calculated and stored in the pathinfo database. Default value: true |
-| checksum-type | Optional checksum type. If specified and compute-checksum = true two checksums are calculated: CRC32 checksum and the checksum of specified type. The type and the checksum are stored in the pathinfo database. An allowed type has to be supported by MessageDigest.getInstance(&lt;checksum type&gt;). For more details see http://docs.oracle.com/javase/8/docs/api/java/security/MessageDigest.html#getInstance-java.lang.String-. |
+| checksum-type | Optional checksum type. If specified and compute-checksum = true two checksums are calculated: CRC32 checksum and the checksum of specified type. The type and the checksum are stored in the pathinfo database. An allowed type has to be supported by MessageDigest.getInstance(<checksum type>).
For more details see http://docs.oracle.com/javase/8/docs/api/java/security/MessageDigest.html#getInstance-java.lang.String-. |
 | chunk-size | Number of data sets requested from AS in one chunk. Default: 1000 |
 | data-set-type | Optional data set type. If specified, only data sets of the specified type are considered. Default: All data set types. |
-| state-file | File to store registration time stamp and code of last considered data set. Default: &lt;store root&gt;/PathInfoDatabaseRefreshingTask-state.txt |
+| state-file | File to store registration time stamp and code of last considered data set. Default: <store root>/PathInfoDatabaseRefreshingTask-state.txt |
 
 **Example**:

diff --git a/docs/system-admin-documentation/advanced-features/share-ids.md b/docs/system-admin-documentation/advanced-features/share-ids.md
index 2eaf5ad0ea9..7f228a00ef6 100644
--- a/docs/system-admin-documentation/advanced-features/share-ids.md
+++ b/docs/system-admin-documentation/advanced-features/share-ids.md
@@ -1,4 +1,4 @@
-Share IDs
-=========
-
+Share IDs
+=========
+
 To be written
\ No newline at end of file

diff --git a/docs/system-admin-documentation/docker-installation/docker-installation-and-configuration.md b/docs/system-admin-documentation/docker-installation/docker-installation-and-configuration.md
index 1b5697bdad4..d75b8295aab 100644
--- a/docs/system-admin-documentation/docker-installation/docker-installation-and-configuration.md
+++ b/docs/system-admin-documentation/docker-installation/docker-installation-and-configuration.md
@@ -1,4 +1,4 @@
-Docker Installation And Configuration
-=====================================
-
+Docker Installation And Configuration
+=====================================
+
 To be written
\ No newline at end of file

diff --git a/docs/system-admin-documentation/installation/architectural-overview.md b/docs/system-admin-documentation/installation/architectural-overview.md
index e4cc8172296..b919163d7a7 100644
--- a/docs/system-admin-documentation/installation/architectural-overview.md
+++ b/docs/system-admin-documentation/installation/architectural-overview.md
@@ -1,4 +1,4 @@
-Architectural Overview
-======================
-
+Architectural Overview
+======================
+
 To be written
\ No newline at end of file

diff --git a/docs/system-admin-documentation/installation/installation-and-configuration-guide.md b/docs/system-admin-documentation/installation/installation-and-configuration-guide.md
index a6d2b8646fc..ac90a776a2e 100644
--- a/docs/system-admin-documentation/installation/installation-and-configuration-guide.md
+++ b/docs/system-admin-documentation/installation/installation-and-configuration-guide.md
@@ -481,7 +481,7 @@ openBIS database. They are all mandatory.
 | `database.create-from-scratch` | If true the database will be dropped and an empty database will be created. In productive use always set this value to false. |
 | `database.script-single-step-mode` | If true all SQL scripts are executed in single step mode. Useful for localizing errors in SQL scripts. Should always be false in productive mode. |
 | `database.url-host-part` | Part of JDBC URL denoting the host of the database server. If openBIS Application Server and database server are running on the same machine this property should be an empty string. |
-| `database.kind` | Part of the name of the database. The full name reads openbis_&lt; kind &gt;. |
+| `database.kind` | Part of the name of the database. The full name reads openbis_< kind >.
|
 | `database.admin-user` | ID of the user on database server with admin rights, like creation of tables. Should be an empty string if the default admin user should be used. In case of PostgreSQL the default admin user is assumed to be postgres. |
 | database.admin-password | Password for admin user. Usually an empty string. |
 | `database.owner` | ID of the user owning the data. This should generally be openbis. The openbis role and password need to be created. In case of an empty string it is the same user who started up openBIS Application Server. |
@@ -1314,8 +1314,8 @@ configured:
 
 | Property | Description |
 |---------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| &lt;database&gt;.data-space | To which data-space this database belongs to (optional, i.e. a query database can be configured not to belong to one data space by leaving this configuration value empty). |
-| &lt;database&gt;.creator-minimal-role | What role is required to be allowed to create / edit queries on this database (optional, default: INSTANCE_OBSERVER if data-space is not set, POWER_USER otherwise). |
+| <database>.data-space | To which data-space this database belongs to (optional, i.e. a query database can be configured not to belong to one data space by leaving this configuration value empty). |
+| <database>.creator-minimal-role | What role is required to be allowed to create / edit queries on this database (optional, default: INSTANCE_OBSERVER if data-space is not set, POWER_USER otherwise). |
 
 The given parameters data-space and creator-minimal-role are used by
 openBIS to enforce proper authorization.
@@ -1406,17 +1406,17 @@ The table below describes the possible commands and their arguments.
 
 | Command | Argument(s) | Default Value | Description |
 |--------------------------------------|--------------------------------------------------------|---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
 | log-service-calls | 'on', 'off' | 'off' | Turns on / off detailed service call logging.
-When this feature is enabled, openBIS will log the start and end of every service call it executes to file &lt;installation directory&gt;/servers/openBIS-server/jetty/log/openbis_service_calls.txt |
+When this feature is enabled, openBIS will log the start and end of every service call it executes to file <installation directory>/servers/openBIS-server/jetty/log/openbis_service_calls.txt |
 | log-long-running-invocations | 'on', 'off' | 'on' | Turns on / off logging of long running invocations.
-When this feature is enabled, openBIS will periodically create a report of all service calls that have been in execution for more than 15 seconds to file &lt;installation directory&gt;/servers/openBIS-server/jetty/log/openbis_long_running_threads.txt.
|
+When this feature is enabled, openBIS will periodically create a report of all service calls that have been in execution for more than 15 seconds to file <installation directory>/servers/openBIS-server/jetty/log/openbis_long_running_threads.txt. |
 | debug-db-connections | 'on', 'off' | 'off' | Turns on / off logging about database connection pool activity.
-When this feature is enabled, information about every borrow and return to database connection pool is logged to the openBIS main log in file &lt;installation directory&gt;/servers/openBIS-server/jetty/log/openbis_log.txt |
-| log-db-connections | no argument / minimum connection age (in milliseconds) | 5000 | When this command is executed without an argument, information about every database connection that has been borrowed from the connection pool is written into the openBIS main log in file &lt;installation directory&gt;/servers/openBIS-server/jetty/log/openbis_log.txt
+When this feature is enabled, information about every borrow and return to database connection pool is logged to the openBIS main log in file <installation directory>/servers/openBIS-server/jetty/log/openbis_log.txt |
+| log-db-connections | no argument / minimum connection age (in milliseconds) | 5000 | When this command is executed without an argument, information about every database connection that has been borrowed from the connection pool is written into the openBIS main log in file <installation directory>/servers/openBIS-server/jetty/log/openbis_log.txt
 If the "minimum connection age" argument is specified, only connections that have been out of the pool longer than the specified time are logged. The minimum connection age value is given in milliseconds. |
 | record-stacktrace-db-connections | 'on', 'off' | 'off' | Turns on / off logging of stacktraces. When this feature is enabled AND debug-db-connections is enabled, the full stack trace of the borrowing thread will be recorded with the connection pool activity logs. |
 | log-db-connections-separate-log-file | 'on', 'off' | 'off' | Turns on / off database connection pool logging to a separate file.
-When this feature is disabled, the database connection pool activity logging is done only to the openBIS main log. When this feature is enabled, the activity logging is done ALSO to file &lt;installation directory&gt;/servers/openBIS-server/jetty/log/openbis_db_connections.txt. |
+When this feature is disabled, the database connection pool activity logging is done only to the openBIS main log. When this feature is enabled, the activity logging is done ALSO to file <installation directory>/servers/openBIS-server/jetty/log/openbis_db_connections.txt.
|
 

diff --git a/docs/system-admin-documentation/installation/optional-application-server-configuration.md b/docs/system-admin-documentation/installation/optional-application-server-configuration.md
index 104437f36d2..871c0661e4f 100644
--- a/docs/system-admin-documentation/installation/optional-application-server-configuration.md
+++ b/docs/system-admin-documentation/installation/optional-application-server-configuration.md
@@ -1,4 +1,4 @@
-Optional Application Server Configuration
-=========================================
-
+Optional Application Server Configuration
+=========================================
+
 To be written
\ No newline at end of file

diff --git a/docs/system-admin-documentation/installation/optional-datastore-server-configuration.md b/docs/system-admin-documentation/installation/optional-datastore-server-configuration.md
index 4cbbee2e0b8..be53e40b748 100644
--- a/docs/system-admin-documentation/installation/optional-datastore-server-configuration.md
+++ b/docs/system-admin-documentation/installation/optional-datastore-server-configuration.md
@@ -1,4 +1,4 @@
-Optional Datastore Server Configuration
-=======================================
-
+Optional Datastore Server Configuration
+=======================================
+
 To be written
\ No newline at end of file

diff --git a/docs/system-admin-documentation/installation/system-requirements.md b/docs/system-admin-documentation/installation/system-requirements.md
index 62f9a29fe06..96f449d14f8 100644
--- a/docs/system-admin-documentation/installation/system-requirements.md
+++ b/docs/system-admin-documentation/installation/system-requirements.md
@@ -1,4 +1,4 @@
-System Requirements
-===================
-
+System Requirements
+===================
+
 To be written
\ No newline at end of file

diff --git a/docs/user-documentation/general-admin-users/admins-documentation/customise-the-main-menu.md b/docs/user-documentation/general-admin-users/admins-documentation/customise-the-main-menu.md
index f4139ebf3ce..e242e4c231f 100644
--- a/docs/user-documentation/general-admin-users/admins-documentation/customise-the-main-menu.md
+++ b/docs/user-documentation/general-admin-users/admins-documentation/customise-the-main-menu.md
@@ -73,7 +73,7 @@ The main menu can be customised from the **Settings**, under
     Profile](https://openbis.readthedocs.io/en/latest/user-documentation/general-admin-users/admins-documentation/user-registration.html#user-profile))
     will be hidden.
 12. **showZenodoExportBuilder**: if unselected, the **Zenodo
-    Export** under **Utilities -&gt; Exports** in the main menu
+    Export** under **Utilities -> Exports** in the main menu
     ([Export to
     Zenodo](https://openbis.readthedocs.io/en/latest/user-documentation/general-users/data-export.html#export-to-zenodo))
     will be hidden.

diff --git a/docs/user-documentation/general-admin-users/admins-documentation/masterdata-exports-and-imports.md b/docs/user-documentation/general-admin-users/admins-documentation/masterdata-exports-and-imports.md
index 01445268bae..4613f97e14b 100644
--- a/docs/user-documentation/general-admin-users/admins-documentation/masterdata-exports-and-imports.md
+++ b/docs/user-documentation/general-admin-users/admins-documentation/masterdata-exports-and-imports.md
@@ -60,7 +60,7 @@ explained above:
 
 
 
-1. Go to the **Tools** section and select **Import -&gt; All** from the
+1. Go to the **Tools** section and select **Import -> All** from the
    menu.
 2. Upload the file you exported before using the **CHOOSE FILE** button.
diff --git a/docs/user-documentation/general-admin-users/admins-documentation/space-management.md b/docs/user-documentation/general-admin-users/admins-documentation/space-management.md
index 00d527989e9..1b76ea68470 100644
--- a/docs/user-documentation/general-admin-users/admins-documentation/space-management.md
+++ b/docs/user-documentation/general-admin-users/admins-documentation/space-management.md
@@ -74,7 +74,7 @@ In the core UI:
 
 
 
-1. Select **Admin -&gt; Spaces**
+1. Select **Admin -> Spaces**
 2. Click **Add Space** at the bottom of the page
 3. Enter the *Space* **Code**, e.g. **EQUIPMENT**
 4. **Save**
@@ -199,7 +199,7 @@ In the core UI:
 
 
 
-1. Select **Admin -&gt; Spaces**
+1. Select **Admin -> Spaces**
 2. Click **Add Space** at the bottom of the page
 3. Enter the Space **Code**, e.g. **EQUIPMENT**
 4. **Save**

diff --git a/docs/user-documentation/general-admin-users/custom-database-queries.md b/docs/user-documentation/general-admin-users/custom-database-queries.md
index 4e13447488c..52e8829329a 100644
--- a/docs/user-documentation/general-admin-users/custom-database-queries.md
+++ b/docs/user-documentation/general-admin-users/custom-database-queries.md
@@ -105,7 +105,7 @@ Server](#) for an explanation on how to do this.
 Running a Parametrized Query
 ----------------------------
 
-1. Choose menu item **Queries -&gt; Run Predefined Query**. The tab
+1. Choose menu item **Queries -> Run Predefined Query**. The tab
    *Predefined Query* opens.
 2. Choose a query using the query combo box. Queries specified for all
    configured databases are selected transparently using the same combo
@@ -130,7 +130,7 @@ Running a SELECT statement
 
 This feature is only for users with *creator role*. It is useful for
 exploring the database by ad hoc queries.
 
-1. Choose menu item **Queries -&gt; Run Custom SQL Query**. The tab
+1. Choose menu item **Queries -> Run Custom SQL Query**. The tab
    *Custom SQL Query* opens.
 2. Enter a SELECT statement in the text area, select database and click
    on the **Execute** button. The result appears below in tabular form.
@@ -142,7 +142,7 @@ This feature is only for users with *creator role*.
 
 ### Define a Query
 
-1. Choose menu item **Queries -&gt; Browse Query Definitions**. The tab
+1. Choose menu item **Queries -> Browse Query Definitions**. The tab
    *Query Definitions* opens. It shows all definitions where the user
    has access rights.
 2. Click on **Add Query Definition** for defining a new parametrized
@@ -247,7 +247,7 @@ the SQL statement should be one of the following **magic** words:
 
 ### Edit a Query
 
-1. Choose menu item **Queries -&gt; Browse Query Definitions**. The tab
+1. Choose menu item **Queries -> Browse Query Definitions**. The tab
    *Query Definitions* opens.
 2. Select a query and click on button **Edit**. The same dialog as for
    defining a query pops up.
@@ -272,7 +272,7 @@ experiment of type `EXP`).
 ### How to create/edit entity custom queries
 
 Entity custom queries can be created and edited in the same way as
-`Generic` queries (**Queries -&gt; Browse Query Definitions**), but the
+`Generic` queries (**Queries -> Browse Query Definitions**), but the
 value of **`Query Type`** field should be set to Experiment, Sample,
 Data Set or Material.
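The parametrized queries touched by the hunks above are plain SELECT statements in which openBIS substitutes user-supplied `${parameter}` placeholders at run time. Below is a minimal sketch of how such a statement could be tried out directly against the query database before registering it in openBIS; the `nucleotides` table, the connection details, and the placeholder-rewriting helper are illustrative assumptions, not part of the openBIS API:

```python
import re
import psycopg2  # assumes the configured query database is PostgreSQL

# The statement as it would be registered in openBIS; ${seq_id} is an
# openBIS query parameter that the user fills in when running the query.
OPENBIS_SQL = "SELECT * FROM nucleotides WHERE seq_id = ${seq_id}"

# Rewrite ${name} placeholders into psycopg2's %(name)s style so the
# same statement can be tested locally with bound parameters.
local_sql = re.sub(r"\$\{(\w+)\}", r"%(\1)s", OPENBIS_SQL)

conn = psycopg2.connect(host="localhost", dbname="query_db",
                        user="openbis_readonly", password="secret")
with conn, conn.cursor() as cur:
    cur.execute(local_sql, {"seq_id": "SEQ-42"})
    for row in cur.fetchmany(10):  # inspect only the first few rows
        print(row)
conn.close()
```

For an entity custom query, the SELECT would additionally alias one of its result columns to the magic column name matching the chosen **`Query Type`** (the list of magic words is elided in the hunk above), so that openBIS can resolve result rows to entities.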
diff --git a/docs/user-documentation/general-admin-users/properties-handled-by-scripts.md b/docs/user-documentation/general-admin-users/properties-handled-by-scripts.md
index f51580e53c9..427572c7c23 100644
--- a/docs/user-documentation/general-admin-users/properties-handled-by-scripts.md
+++ b/docs/user-documentation/general-admin-users/properties-handled-by-scripts.md
@@ -35,7 +35,7 @@ and one script type to perform validations on entities:
 2. **Managed Property Handler** (for properties referred to as
    *Managed Properties*)
 
-    
+    
 1. - for properties that will be **indirectly modified by users**,
    - the script alters default handling of a property by openBIS by
@@ -44,7 +44,7 @@ and one script type to perform validations on entities:
      view (e.g. as a table),
      - **input fields** for modifying the property,
 
-    
+    
 
      - **translation** and/or **validation** of user input.
@@ -59,22 +59,22 @@ To create a property that should be handled by a script perform the
 following steps.
 
 1. Define a property type with appropriate name and data type
-   (Administration-&gt;Property Types-&gt;New).
+   (Administration->Property Types->New).
 2. Define a script that will handle the property
-   (Administration-&gt;Scripts) or deploy a Java plugin. For details
+   (Administration->Scripts) or deploy a Java plugin. For details
    and examples of usage go to pages:
    - [Dynamic Properties](/display/openBISDoc2010/Dynamic+Properties)
    - [Managed Properties](/display/openBISDoc2010/Managed+Properties)
    - [Entity validation scripts](/display/openBISDoc2010/Entity+validation+scripts)
 3. Assign the created property type to chosen entity type using the
-   created script (e.g. for samples: Administration-&gt;Property
-   Types-&gt;Assign to Sample Type):
+   created script (e.g. for samples: Administration->Property
+   Types->Assign to Sample Type):
    - select Handled By Script checkbox,
    - select the appropriate Script Type
    - choose the Script
 4. The validation scripts are assigned to the type in the "Edit Type"
-   section. (e.g. Admin-&gt;Types-&gt;Samples. Select sample and click
+   section. (e.g. Admin->Types->Samples. Select sample and click
    edit.)
 
@@ -352,7 +352,7 @@ procedure: Jython scripts and Java plugins.
 
 ### Defining a Jython validation script
 
-1. Go to Admin -&gt; Plugins -&gt; Add Plugin.
+1. Go to Admin -> Plugins -> Add Plugin.
 2. Select "Entity Validator" as the plugin type
 3. Choose name, entity kind, and description.
 4. Prepare a script (see paragraph "Script specification" below)
@@ -406,8 +406,8 @@ does not have any properties defined:
 
 To make the validation active per entity type you have to select the
 validation script for each type:
 
-- Admin -&gt; Types -&gt; &lt;Entity Kind&gt; you selected also in the
-  script definition -&gt;
+- Admin -> Types -> <Entity Kind> you selected also in the
+  script definition ->
 - Select a Sample Type and edit it
 - You find a property which is called 'Validation Script' (see screen
   shot below). Just select your defined Script and hit save.
@@ -504,7 +504,7 @@ To create a Managed Property:
 
 ### Creating scripts
 
 To browse and edit existing scripts or add new ones, select
-Administration-&gt;Scripts from the top menu.
+Administration->Scripts from the top menu.
 
 The scripts should be written in standard Jython syntax. The following
 functions are invoked by openBIS, some of them are mandatory:
@@ -529,14 +529,14 @@ script:
   table model builder. It will be used in `configureUI` to create
   tabular data to be shown in openBIS GUI.
-    
+    
 
 - `ValidationException ValidationException(String message)`: Creates a
   Validation Exception with specified message which should be raised
   in functions `updateFromUI` and `updateFromBatchInput` in case of
   invalid input.
 
-    
+    
 
 - `IManagedInputWidgetDescriptionFactory inputWidgetFactory()`:
   returns a factory that can be used to create descriptions of input
   widgets (see
   [IManagedInputWidgetDescription](http://svnsis.ethz.ch/doc/openbis/current/ch/systemsx/cisd/openbis/generic/shared/basic/dto/api/IManagedInputWidgetDescription.html)
   and [example](#ManagedProperties-Example3)).
 
-    
+    
 
 - `IElementFactory elementFactory()`: returns a factory that can be
   used to create elements.
   See [\#Storing structured content in managed
   properties](#ManagedProperties-Storingstructuredcontentinmanagedproperties).
 
-    
+    
 
 - `IStructuredPropertyConverter xmlPropertyConverter()`: returns a
   converter that can translate

diff --git a/docs/user-documentation/general-users/data-export.md b/docs/user-documentation/general-users/data-export.md
index 4f59b8c7aac..51fd0b0ec78 100644
--- a/docs/user-documentation/general-users/data-export.md
+++ b/docs/user-documentation/general-users/data-export.md
@@ -112,7 +112,7 @@ stored in openBIS, with the following procedure:
 
 To export data to Zenodo:
 
-1. Go to **Exports** -&gt; **Export to Zenodo** under **Utilities** in
+1. Go to **Exports** -> **Export to Zenodo** under **Utilities** in
    the main menu.
 2. Select the data you want to export from the menu.
 3. Enter a **Submission Title**.
@@ -162,7 +162,7 @@ To export data to the ETH Research Collection:
 
 
 
-1. Go to **Utilities** -&gt; **Exports** -&gt; **Export to Research
+1. Go to **Utilities** -> **Exports** -> **Export to Research
    Collection**.
 2. Select what to export from the tree.
 3. Select the **Submission Type** from the available list: *Data

diff --git a/docs/user-documentation/general-users/data-upload.md b/docs/user-documentation/general-users/data-upload.md
index 595ffbe8e6a..dce0bae2671 100644
--- a/docs/user-documentation/general-users/data-upload.md
+++ b/docs/user-documentation/general-users/data-upload.md
@@ -120,13 +120,13 @@ on the eln-lims-dropbox folder.
 
 
 
-In case of uploads of data &gt;100GB we recommend configuring the
+In case of uploads of data >100GB we recommend configuring the
 **eln-lims-dropbox-marker**. The set up and configuration need to be
 done by a *system admin*. The process of data preparation is the same
 as described above, however in this case the data move to the openBIS
 final storage only starts when a marker file is placed in the
 eln-lims-dropbox-marker folder. The marker file is an empty file with
-this name: **.MARKER\_is\_finished\_&lt;folder-to-upload-name&gt;.
+this name: **.MARKER\_is\_finished\_<folder-to-upload-name>.
 **Please note the “.” at the start of the name, which indicates that
 this is a hidden file. This file should also not have any extension.
 For example, if the folder to be uploaded has the following name:
@@ -185,7 +185,7 @@ other text editor will also work.
    Shift + . (period)**.
 5. The file you saved before has an extension, that needs to be
    removed. If the extension is not shown in your Finder, go to Finder
-   &gt; Preferences menu, select the Advanced tab, and check the “Show
+   > Preferences menu, select the Advanced tab, and check the “Show
    all filename extensions” box.
 6. Remove the extension from the file.
diff --git a/docs/user-documentation/general-users/inventory-of-materials-and-methods.md b/docs/user-documentation/general-users/inventory-of-materials-and-methods.md
index c746ab9b302..c6094f747f4 100644
--- a/docs/user-documentation/general-users/inventory-of-materials-and-methods.md
+++ b/docs/user-documentation/general-users/inventory-of-materials-and-methods.md
@@ -136,7 +136,7 @@ Excel file.
 
 Please note that codes are not case-sensitive, but labels are.
 
 Codes and labels of vocabulary terms can be seen under
-**Utilities -&gt; Vocabulary Browser**.
+**Utilities -> Vocabulary Browser**.
 
 #### Assign parents
@@ -205,7 +205,7 @@ together, as shown in the template provided above:
    completely remove the **identifier** column from the file.
 2. **Lists**. In fields that have lists to choose from (called
    **Controlled Vocabularies**), the code of the term needs to be
-   entered. Term codes can be seen under **Utilities -&gt; Vocabulary
+   entered. Term codes can be seen under **Utilities -> Vocabulary
    Browser**.
 3. **Parents**. Use the following syntax to enter parents:
    **identifier1, identifier2, identifier3.**

diff --git a/docs/user-documentation/general-users/lab-notebook.md b/docs/user-documentation/general-users/lab-notebook.md
index 379db705f48..c3893e51652 100644
--- a/docs/user-documentation/general-users/lab-notebook.md
+++ b/docs/user-documentation/general-users/lab-notebook.md
@@ -475,7 +475,7 @@ you want to access.
 
 Note: if you encounter the error message “*SSH connection failed: Could
 not find a part of the path*.” you can fix this by disabling the cache
-(Drives -&gt; Advanced -&gt; Enable Caching), and disabling log files.
+(Drives -> Advanced -> Enable Caching), and disabling log files.
 The error is caused by an attempt to create files in a folder not
 available to Windows.

diff --git a/docs/user-documentation/general-users/managing-lab-stocks-and-orders-2.md b/docs/user-documentation/general-users/managing-lab-stocks-and-orders-2.md
index 2646289146d..10ea709ce75 100644
--- a/docs/user-documentation/general-users/managing-lab-stocks-and-orders-2.md
+++ b/docs/user-documentation/general-users/managing-lab-stocks-and-orders-2.md
@@ -45,8 +45,8 @@ Catalog**.
 
 To build the catalog of all suppliers used for purchasing products by
 the lab:
 
-> 1. Go to the **Supplier Collection** folder under **Stock** *-&gt;*
-> **Stock Catalog***-&gt;* **Suppliers** in the main menu.
+> 1. Go to the **Supplier Collection** folder under **Stock** *->*
+> **Stock Catalog***->* **Suppliers** in the main menu.
 > 2. Click on the **+ New Supplier** button in the *Collection* page.
 > 3. Follow the steps explained in the [Register
 > Entries](https://openbis.ch/index.php/docs/user-documentation-20-10-3/inventory-of-materials-and-methods/register-single-entries-in-a-collection/)
@@ -66,8 +66,8 @@ Collection.](https://openbis.ch/index.php/docs/user-documentation-20-10-3/invent
 
 To build the catalog of all products purchased in the lab:
 
-> 1. Go to the **Product Collection** folder under **Stock** *-&gt;*
-> **Stock Catalog***-&gt;* **Products** in the main menu.
+> 1. Go to the **Product Collection** folder under **Stock** *->*
+> **Stock Catalog***->* **Products** in the main menu.
 > 2. Click the **+ New Product** button in the *Collection* page.
@@ -100,8 +100,8 @@ Collection.](https://openbis.ch/index.php/docs/user-documentation-20-10-3/invent
 
 Every lab member can create requests for products that need to be
 ordered:
 
-> 1. Go to the **Request Collection** folder under **Stock** *-&gt;*
-> **Stock Catalog***-&gt;* **Requests** in the main menu.
+> 1. Go to the **Request Collection** folder under **Stock** *->*
+> **Stock Catalog***->* **Requests** in the main menu.
+> 1. Go to the **Request Collection** folder under **Stock** *->* +> **Stock Catalog***->* **Requests** in the main menu. > 2. Click the **+ New Request** button in the *Collection* page.  @@ -159,8 +159,8 @@ based on the requests created in the Stock Catalog by every lab member. To create orders of products from requests created in the Stock Catalog: -> 1. Go to the **Order Collection** folder under **Stock** *->* -> **Stock Orders***->* **Orders** in the main menu. +> 1. Go to the **Order Collection** folder under **Stock** *->* +> **Stock Orders***->* **Orders** in the main menu. > 2. Click the **+ New Order** button in the *Collection* page.  diff --git a/docs/user-documentation/general-users/tools-for-analysis-of-data-stored-in-openbis.md b/docs/user-documentation/general-users/tools-for-analysis-of-data-stored-in-openbis.md index 392b6943220..c2b22cb9ec8 100644 --- a/docs/user-documentation/general-users/tools-for-analysis-of-data-stored-in-openbis.md +++ b/docs/user-documentation/general-users/tools-for-analysis-of-data-stored-in-openbis.md @@ -40,7 +40,7 @@ Jupyter notebooks can be opened at every level of the openBIS hierarchy If you get a similar error as the one shown below when you try to launch a notebook from an entity, you need to start the JupyterHub server by -going to the main menu **Utilities** -> **Jupyter Workspace**. This +going to the main menu **Utilities** -> **Jupyter Workspace**. This error appears when the JupyterHub server is restarted (e.g. after an upgrade), because the user profile needs to be recreated. diff --git a/docs/user-documentation/legacy-advance-features/openbis-kinme-nodes.md b/docs/user-documentation/legacy-advance-features/openbis-kinme-nodes.md index 7961fa667c2..5618b6f9cb5 100644 --- a/docs/user-documentation/legacy-advance-features/openbis-kinme-nodes.md +++ b/docs/user-documentation/legacy-advance-features/openbis-kinme-nodes.md @@ -31,7 +31,7 @@ Usage ----- All openBIS KNIME nodes can be found in Node Repository under Community -Nodes -> openBIS: +Nodes -> openBIS:  @@ -100,7 +100,7 @@ user will be asked for the passwords after loading a workflow.  If user ID and password are entered directly in the node setting dialog -the KNIME master key on the preferences page **KNIME -> Master Key** +the KNIME master key on the preferences page **KNIME -> Master Key** should be activated. Otherwise passwords will be stored unencrypted! ### openBIS Query Reader @@ -284,8 +284,8 @@ with `knime-`. The specifications of such services are the following: exception with stack trace will be created and thrown in KNIME. It will appear in KNIME log. For each row either the first cell isn't empty or the five other cells are not empty. In the first case the - value of the first column is of the form <exception - class>:<exception message>. If the first column is empty + value of the first column is of the form <exception + class>:<exception message>. If the first column is empty the row represents a stack trace entry where the other columns are interpreted as class name, method name, file name, and line number. -- GitLab