diff --git a/docs/uncategorized/jython-based-reporting-and-processing-plugins.md b/docs/uncategorized/jython-based-reporting-and-processing-plugins.md
new file mode 100644
index 0000000000000000000000000000000000000000..bd0dea791d1fb22ae19f20a228f2a47e69b649a9
--- /dev/null
+++ b/docs/uncategorized/jython-based-reporting-and-processing-plugins.md
@@ -0,0 +1,709 @@
+
+Jython-based Reporting and Processing Plugins
+---------------------------------------------
+
+### Overview
+
+It is possible to implement the logic of reporting and processing
+plugins as Jython scripts instead of providing new Java classes that
+implement the `IReportingPluginTask` and `IProcessingPluginTask`
+interfaces.
+
+The scripts require only a single method to be implemented and have
+access to a few services that simplify tasks such as data retrieval,
+creation of tables or sending emails:
+
+-   access to files of a data set is done via
+    [IHierarchicalContent](https://openbis.ch/javadoc/20.10.x/javadoc-openbis-common/ch/systemsx/cisd/openbis/common/io/hierarchical_content/api/IHierarchicalContent.html)
+    interface (through
+    [IDataSet.getContent()](https://openbis.ch/javadoc/20.10.x/javadoc-datastore-server/ch/systemsx/cisd/openbis/dss/generic/server/plugins/jython/api/IDataSet.html#getContent%28%29))
+-   access to openBIS AS via
+    [ISearchService](https://openbis.ch/javadoc/20.10.x/javadoc-datastore-server/ch/systemsx/cisd/openbis/dss/generic/shared/api/internal/v2/ISearchService.html)
+    (through `searchService` and `searchServiceUnfiltered` variables),
+-   access to data sources specified in DSS via
+    [IDataSourceQueryService](https://openbis.ch/javadoc/20.10.x/javadoc-datastore-server/ch/systemsx/cisd/openbis/dss/generic/shared/api/internal/IDataSourceQueryService.html)
+    (through `queryService` variable),
+-   creation of tables in reporting script via
+    [ISimpleTableModelBuilderAdaptor](https://openbis.ch/javadoc/20.10.x/javadoc-openbis/ch/systemsx/cisd/openbis/generic/shared/managed_property/api/ISimpleTableModelBuilderAdaptor.html)
+    (provided as a function argument),
+-   sending emails via
+    [IMailService](https://openbis.ch/javadoc/20.10.x/javadoc-datastore-server/ch/systemsx/cisd/openbis/dss/generic/server/plugins/jython/api/IMailService.html)
+    (through the `mailService` variable)
+    -   it is easy to send a file or a text as an attachment to the user
+        (subject and text body can be provided optionally)
+    -   it is also possible to use a reporting script as a processing
+        plugin sending the report as an attachment to the user
+-   checking user access privileges via
+    [IAuthorizationService](https://openbis.ch/javadoc/20.10.x/javadoc-datastore-server/ch/systemsx/cisd/openbis/dss/generic/shared/api/internal/v2/authorization/IAuthorizationService.html)
+    (available through the `authorizationService` variable).
+
+All jython plugins use the Jython version configured by the
+`service.properties` property `jython-version`, which can be either 2.5
+or 2.7.
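+
+As a minimal sketch, the relevant `service.properties` entry could look
+like this (the property name is taken from the paragraph above; the
+value is just an example):
+
+**service.properties**
+
+    jython-version = 2.7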
+
+### Configuration
+
+Jython-based plugins are configured in exactly the same way as other
+reporting and processing plugins. Apart from the standard mandatory
+plugin properties, one needs to specify the path to the script via
+`script-path`.
+
+Additional third-party JAR files have to be added to the core plugin in
+a `lib/` sub-folder, as sketched below.
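+
+A hypothetical layout of such a core plugin (the module, plugin, script,
+and JAR names here are examples only):
+
+    my-module/1/dss/reporting-plugins/my-report/
+        plugin.properties
+        data-set-reporting.py
+        lib/
+            some-library.jar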
+
+Here are some configuration examples for core plugins of the type
+reporting-plugins and processing-plugins, respectively:
+
+##### Jython-based Reporting Plugin
+
+**plugin.properties**
+
+    label = Jython Reporting
+    dataset-types = HCS_IMAGE
+    class = ch.systemsx.cisd.openbis.dss.generic.server.plugins.jython.JythonBasedReportingPlugin
+    script-path = data-set-reporting.py
+
+##### Jython Aggregation Service
+
+**plugin.properties**
+
+    label = Jython Aggregation Reporting
+    class = ch.systemsx.cisd.openbis.dss.generic.server.plugins.jython.JythonAggregationService
+    script-path = aggregating.py
+
+##### Jython Ingestion Service
+
+**plugin.properties**
+
+    label = Jython Ingestion Service
+    class = ch.systemsx.cisd.openbis.dss.generic.server.plugins.jython.JythonIngestionService
+    script-path = processing.py
+
+Note that the property `dataset-types` is not needed here and will be
+ignored.
+
+##### Jython-based Processing Plugin
+
+**plugin.properties**
+
+    label = Jython Processing
+    dataset-types = HCS_IMAGE
+    class = ch.systemsx.cisd.openbis.dss.generic.server.plugins.jython.JythonBasedProcessingPlugin
+    script-path = data-set-processing.py
+
+##### Processing Plugin based on Reporting Script
+
+One can also configure a special processing plugin
+(`ch.systemsx.cisd.openbis.dss.generic.server.plugins.jython.ReportingBasedProcessingPlugin`)
+which uses a reporting script instead of a processing script. The
+reporting script's task is, as usual, to describe the contents of a
+table. The processing plugin then converts the generated table to text
+form and sends it to the user as an email attachment. This facilitates
+code reuse: one script can serve both reporting and processing plugins.
+
+Configuring the plugin is as simple as configuring basic jython-based
+plugins, with a few additional properties for specifying the email
+content:
+
+**service.properties**
+
+    ...
+
+    # --------------------------------------------------------------------------------------------------
+    # Jython-based Processing Plugin based on Reporting Script
+    # --------------------------------------------------------------------------------------------------
+    jython-processing-with-report.label = Jython Processing based on Reporting Script
+    jython-processing-with-report.dataset-types = HCS_IMAGE
+    jython-processing-with-report.class = ch.systemsx.cisd.openbis.dss.generic.server.plugins.jython.ReportingBasedProcessingPlugin
+    jython-processing-with-report.script-path = /resource/examples/data-set-reporting.py
+    # Optional properties:
+    # - subject of the email with the generated report;
+    #   defaults to an empty subject
+    #jython-processing-with-report.email-subject = Report
+    # - body of the email with the generated report;
+    #   defaults to an empty body
+    #jython-processing-with-report.email-body = The report was successfully generated and is attached to this email.
+    # - name of the attachment with the generated report;
+    #   defaults to 'report.txt'
+    #jython-processing-with-report.attachment-name = report-attachment.txt
+    # - whether a single report should be generated for all processed data sets and sent in one email to the user,
+    #   or whether each processed data set should be handled separately (with one report & email per data set);
+    #   defaults to false
+    #jython-processing-with-report.single-report = true
+
+    ...
+
+Sending an email directly from a processing script gives more
+flexibility over the email content than the approach described above. It
+might be preferable if one wants to set the email's subject, body, or
+attachment name dynamically, e.g. based on metadata (see the "Processing
+plugin sending emails" example below).
+
+### Script interfaces and environment
+
+##### Reporting script
+
+The script file (e.g. "data-set-reporting.py") needs to implement one
+method:
+
+    describe(dataSets, tableBuilder)
+
+which takes a list of data sets (implementing the
+[IDataSet](https://openbis.ch/javadoc/20.10.x/javadoc-datastore-server/ch/systemsx/cisd/openbis/dss/generic/server/plugins/jython/api/IDataSet.html)
+interface) and a table builder
+([ISimpleTableModelBuilderAdaptor](https://openbis.ch/javadoc/20.10.x/javadoc-openbis/ch/systemsx/cisd/openbis/generic/shared/managed_property/api/ISimpleTableModelBuilderAdaptor.html))
+that will be used to generate the table shown in openBIS AS or sent in
+an email. The method shouldn't return anything. Instead, one should call
+methods of the table builder; the table model is then created outside of
+the script using the builder.
+
+##### Aggregation Service script
+
+The script file (e.g. "aggregating.py") needs to implement one method:
+
+    aggregate(parameters, tableBuilder)
+
+which takes some parameters (a `java.util.Map` with String keys and
+generic Object values) and a table builder
+([ISimpleTableModelBuilderAdaptor](https://openbis.ch/javadoc/20.10.x/javadoc-openbis/ch/systemsx/cisd/openbis/generic/shared/managed_property/api/ISimpleTableModelBuilderAdaptor.html))
+that will be used to generate the table shown in openBIS AS or sent in
+an email. The method shouldn't return anything. Instead, one should call
+methods of the table builder; the table model is then created outside of
+the script using the builder.
+
+##### Ingestion Service script
+
+The script file (e.g. "processing.py") needs to implement one method:
+
+    process(transaction, parameters, tableBuilder)
+
+which takes a transaction, some parameters (a `java.util.Map` with String
+keys and generic Object values) and a table builder
+([ISimpleTableModelBuilderAdaptor](https://openbis.ch/javadoc/20.10.x/javadoc-openbis/ch/systemsx/cisd/openbis/generic/shared/managed_property/api/ISimpleTableModelBuilderAdaptor.html))
+that will be used to generate the table shown in openBIS AS or sent in
+an email. The method shouldn't return anything. Instead, one should call
+methods of the table builder; the table model is then created outside of
+the script using the builder. The transaction interface is
+the same as what is provided to a dropbox.
+See [Dropboxes](/display/openBISDoc2010/Dropboxes) for a description of
+what can be done with a transaction.
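+
+A minimal sketch of such a script (the sample identifier, sample type,
+property code, and parameter key below are hypothetical):
+
+    def process(transaction, parameters, tableBuilder):
+        # create a new sample via the dropbox-style transaction
+        sample = transaction.createNewSample('/MY_SPACE/MY_SAMPLE', 'MY_SAMPLE_TYPE')
+        sample.setPropertyValue('DESCRIPTION', parameters.get('description'))
+
+        # report the outcome back to the caller as a one-row table
+        tableBuilder.addHeader('STATUS')
+        row = tableBuilder.addRow()
+        row.setCell('STATUS', 'OK')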
+
+##### Processing script
+
+The script file (e.g. "data-set-processing.py") needs to implement one
+method:
+
+    process(dataSet)
+
+which takes a single data set (implementing the
+[IDataSet](https://openbis.ch/javadoc/20.10.x/javadoc-datastore-server/ch/systemsx/cisd/openbis/dss/generic/server/plugins/jython/api/IDataSet.html)
+interface). The method shouldn't return anything.
+
+##### Environment
+
+Both processing and reporting script functions are invoked in an
+environment with the following services available as global variables:
+
+| Variable | Interface | Purpose |
+|----------|-----------|---------|
+| `searchService`, `searchServiceUnfiltered` | [ISearchService](https://openbis.ch/javadoc/20.10.x/javadoc-datastore-server/ch/systemsx/cisd/openbis/dss/generic/shared/api/internal/v2/ISearchService.html) | access to openBIS AS |
+| `queryService` | [IDataSourceQueryService](https://openbis.ch/javadoc/20.10.x/javadoc-datastore-server/ch/systemsx/cisd/openbis/dss/generic/shared/api/internal/IDataSourceQueryService.html) | access to data sources specified in DSS |
+| `mailService` | [IMailService](https://openbis.ch/javadoc/20.10.x/javadoc-datastore-server/ch/systemsx/cisd/openbis/dss/generic/server/plugins/jython/api/IMailService.html) | sending emails |
+| `authorizationService` | [IAuthorizationService](https://openbis.ch/javadoc/20.10.x/javadoc-datastore-server/ch/systemsx/cisd/openbis/dss/generic/shared/api/internal/v2/authorization/IAuthorizationService.html) | checking user access privileges |
+| `contentProvider` | (see the Aggregation Service example below) | access to data set contents by data set code |
+
+From the method arguments one can:
+
+-   retrieve basic metadata information about a data set and access its
+    files through an object implementing
+    [IHierarchicalContent](https://openbis.ch/javadoc/20.10.x/javadoc-openbis-common/ch/systemsx/cisd/openbis/common/io/hierarchical_content/api/IHierarchicalContent.html)
+    (received by calling
+    [IDataSet.getContent()](https://openbis.ch/javadoc/20.10.x/javadoc-datastore-server/ch/systemsx/cisd/openbis/dss/generic/server/plugins/jython/api/IDataSet.html#getContent%28%29)),
+-   (reporting script) build the contents of a table using
+    `ISimpleTableModelBuilderAdaptor`.
+
+### Example scripts
+
+##### Simple reporting plugin
+
+    CODE = "Code"
+    TYPE = "Type"
+    SIZE = "Size"
+    LOCATION = "Location"
+    SPEED_HINT = "Speed Hint"
+    MAIN_PATTERN = "Main Data Set Pattern"
+    MAIN_PATH = "Main Data Set Path"
+    INSTANCE = "Instance"
+    SPACE = "Space"
+    PROJECT = "Project"
+    EXPERIMENT_CODE = "Experiment Code"
+    EXPERIMENT_IDENTIFIER = "Experiment Identifier"
+    EXPERIMENT_TYPE = "Experiment Type"
+    SAMPLE_CODE = "Sample Code"
+    SAMPLE_IDENTIFIER = "Sample Identifier"
+    SAMPLE_TYPE = "Sample Type"
+
+    def describe(dataSets, tableBuilder):
+
+        tableBuilder.addHeader(CODE)
+        tableBuilder.addHeader(TYPE)
+        tableBuilder.addHeader(SIZE)
+        tableBuilder.addHeader(LOCATION)
+        tableBuilder.addHeader(SPEED_HINT)
+        tableBuilder.addHeader(MAIN_PATTERN)
+        tableBuilder.addHeader(MAIN_PATH)
+        tableBuilder.addHeader(INSTANCE)
+        tableBuilder.addHeader(SPACE)
+        tableBuilder.addHeader(PROJECT)
+        tableBuilder.addHeader(EXPERIMENT_CODE)
+        tableBuilder.addHeader(EXPERIMENT_IDENTIFIER)
+        tableBuilder.addHeader(EXPERIMENT_TYPE)
+        tableBuilder.addHeader(SAMPLE_CODE)
+        tableBuilder.addHeader(SAMPLE_IDENTIFIER)
+        tableBuilder.addHeader(SAMPLE_TYPE)
+
+        for dataSet in dataSets:
+            print "script reporting " + dataSet.getDataSetCode()
+
+            row = tableBuilder.addRow()
+            row.setCell(CODE, dataSet.getDataSetCode())
+            row.setCell(TYPE, dataSet.getDataSetTypeCode())
+            row.setCell(SIZE, dataSet.getDataSetSize())
+            row.setCell(LOCATION, dataSet.getDataSetLocation())
+            row.setCell(SPEED_HINT, dataSet.getSpeedHint())
+            row.setCell(MAIN_PATTERN, dataSet.getMainDataSetPattern())
+            row.setCell(MAIN_PATH, dataSet.getMainDataSetPath())
+            row.setCell(INSTANCE, dataSet.getInstanceCode())
+            row.setCell(SPACE, dataSet.getSpaceCode())
+            row.setCell(PROJECT, dataSet.getProjectCode())
+            row.setCell(EXPERIMENT_CODE, dataSet.getExperimentCode())
+            row.setCell(EXPERIMENT_IDENTIFIER, dataSet.getExperimentIdentifier())
+            row.setCell(EXPERIMENT_TYPE, dataSet.getExperimentTypeCode())
+            row.setCell(SAMPLE_CODE, dataSet.getSampleCode())
+            row.setCell(SAMPLE_IDENTIFIER, dataSet.getSampleIdentifier())
+            row.setCell(SAMPLE_TYPE, dataSet.getSampleTypeCode())
+
+##### Reporting plugin accessing openBIS AS
+
+    CODE = "Data Set Code"
+    EXPERIMENT_IDENTIFIER = "Experiment Identifier"
+    EXPERIMENT_TYPE = "Experiment Type"
+    EXPERIMENT_DESCRIPTION = "Description"
+
+    def describe(dataSets, tableBuilder):
+
+        tableBuilder.addHeader(CODE)
+        tableBuilder.addHeader(EXPERIMENT_IDENTIFIER)
+        tableBuilder.addHeader(EXPERIMENT_TYPE)
+        tableBuilder.addHeader(EXPERIMENT_DESCRIPTION)
+
+        for dataSet in dataSets:
+            projectIdentifier = "/" + dataSet.getSpaceCode() + "/" + dataSet.getProjectCode()
+            print "script reporting " + dataSet.getDataSetCode() + " from " + projectIdentifier
+            experiments = searchService.listExperiments(projectIdentifier)
+
+            for experiment in experiments:
+                row = tableBuilder.addRow()
+                row.setCell(CODE, dataSet.getDataSetCode())
+                row.setCell(EXPERIMENT_IDENTIFIER, experiment.getExperimentIdentifier())
+                row.setCell(EXPERIMENT_TYPE, experiment.getExperimentType())
+                row.setCell(EXPERIMENT_DESCRIPTION, experiment.getPropertyValue("DESCRIPTION"))
+
+##### Reporting plugin accessing external DB
+
+Let's assume that a [Path Info
+Database](/display/openBISDoc2010/Installation+and+Administrators+Guide+of+the+openBIS+Data+Store+Server#InstallationandAdministratorsGuideoftheopenBISDataStoreServer-InstallationandAdministratorsGuideoftheopenBISDataStoreServer-PathInfoDatabase)
+was configured as a data source named `"path-info-db"`.
+
+One shouldn't assume anything about the Path Info DB schema. The code
+below serves just as an example of accessing an external DB in a jython
+reporting/processing script.
+
+    DATA_SOURCE = "path-info-db"
+    QUERY = """
+        SELECT ds.code as "data_set_code", dsf.*
+        FROM data_sets ds, data_set_files dsf
+        WHERE ds.code = ?{1} AND dsf.dase_id = ds.id
+    """
+
+    """reporting table column names"""
+    DATA_SET_CODE = "Data Set"
+    RELATIVE_PATH = "Relative Path"
+    FILE_NAME = "File Name"
+    SIZE_IN_BYTES = "Size"
+    IS_DIRECTORY = "Is Directory?"
+    LAST_MODIFIED = "Last Modified"
+
+    def describe(dataSets, tableBuilder):
+
+        tableBuilder.addHeader(DATA_SET_CODE)
+        tableBuilder.addHeader(RELATIVE_PATH)
+        tableBuilder.addHeader(FILE_NAME)
+        tableBuilder.addHeader(SIZE_IN_BYTES)
+        tableBuilder.addHeader(IS_DIRECTORY)
+        tableBuilder.addHeader(LAST_MODIFIED)
+
+        for dataSet in dataSets:
+            results = queryService.select(DATA_SOURCE, QUERY, [dataSet.getDataSetCode()])
+            print "Found " + str(len(results)) + " results for data set '" + dataSet.getDataSetCode() + "':"
+            for r in results:
+                print r # debugging
+                row = tableBuilder.addRow()
+                row.setCell(DATA_SET_CODE, r.get("DATA_SET_CODE".lower()))
+                row.setCell(RELATIVE_PATH, r.get("RELATIVE_PATH".lower()))
+                row.setCell(FILE_NAME, r.get("FILE_NAME".lower()))
+                row.setCell(SIZE_IN_BYTES, r.get("SIZE_IN_BYTES".lower()))
+                row.setCell(IS_DIRECTORY, r.get("IS_DIRECTORY".lower()))
+                row.setCell(LAST_MODIFIED, r.get("LAST_MODIFIED".lower()))
+            results.close()
+
+##### Reporting plugin accessing file contents
+
+    import java.util.Date as Date
+
+    CODE = "Code"
+    FILE_NAME = "File Name"
+    RELATIVE_PATH = "Relative Path"
+    LAST_MODIFIED = "Last Modified"
+    SIZE = "Size"
+
+    def describe(dataSets, tableBuilder):
+        tableBuilder.addHeader(CODE)
+        tableBuilder.addHeader(FILE_NAME)
+        tableBuilder.addHeader(RELATIVE_PATH)
+        tableBuilder.addHeader(LAST_MODIFIED)
+        tableBuilder.addHeader(SIZE)
+        for dataSet in dataSets:
+            print "script reporting " + dataSet.getDataSetCode()
+            describeNode(dataSet.getContent().getRootNode(), dataSet.getDataSetCode(), tableBuilder)
+
+
+    def describeNode(node, dataSetCode, tableBuilder):
+        print "describe node: " + dataSetCode + "/" + node.getRelativePath()
+        if node.isDirectory():
+            for child in node.getChildNodes():
+                describeNode(child, dataSetCode, tableBuilder)
+        else:
+            row = tableBuilder.addRow()
+            row.setCell(CODE, dataSetCode)
+            row.setCell(FILE_NAME, node.getName())
+            row.setCell(RELATIVE_PATH, node.getRelativePath())
+            row.setCell(LAST_MODIFIED, Date(node.getLastModified()))
+            row.setCell(SIZE, node.getFileLength())
+
+##### Aggregation Service Reporting plugin accessing openBIS, external database and file contents
+
+    from ch.systemsx.cisd.openbis.generic.shared.api.v1.dto import SearchCriteria
+    from ch.systemsx.cisd.openbis.generic.shared.api.v1.dto import SearchSubCriteria
+    from ch.systemsx.cisd.openbis.generic.shared.api.v1.dto.SearchCriteria import MatchClause
+    from ch.systemsx.cisd.openbis.generic.shared.api.v1.dto.SearchCriteria import MatchClauseAttribute
+
+    EXPERIMENT = "Experiment"
+    CODE = "Data Set Code"
+    NUMBER_OF_FILES = "Number of Files"
+    NUMBER_OF_PROTEINS = "Number of Proteins"
+
+    def countFiles(node):
+        sum = 1
+        if node.isDirectory():
+            for child in node.getChildNodes():
+                sum = sum + countFiles(child)
+        return sum
+
+    def getNumberOfProteins(dataSetCode):
+        result = queryService.select("protein-db", "select count(*) as count from proteins where data_set = ?{1}", [dataSetCode])
+        return result[0].get("count")
+
+    def aggregate(parameters, tableBuilder):
+        experimentCode = parameters.get('experiment-code')
+        searchCriteria = SearchCriteria()
+        subCriteria = SearchCriteria()
+        subCriteria.addMatchClause(MatchClause.createAttributeMatch(MatchClauseAttribute.CODE, experimentCode))
+        searchCriteria.addSubCriteria(SearchSubCriteria.createExperimentCriteria(subCriteria))
+        dataSets = searchService.searchForDataSets(searchCriteria)
+        tableBuilder.addHeader(EXPERIMENT)
+        tableBuilder.addHeader(CODE)
+        tableBuilder.addHeader(NUMBER_OF_FILES)
+        tableBuilder.addHeader(NUMBER_OF_PROTEINS)
+        for dataSet in dataSets:
+            dataSetCode = dataSet.getDataSetCode()
+            content = contentProvider.getContent(dataSetCode)
+            row = tableBuilder.addRow()
+            row.setCell(EXPERIMENT, dataSet.experiment.experimentIdentifier)
+            row.setCell(CODE, dataSetCode)
+            row.setCell(NUMBER_OF_FILES, countFiles(content.rootNode))
+            row.setCell(NUMBER_OF_PROTEINS, getNumberOfProteins(dataSetCode))
+
+##### Simple processing plugin
+
+    import org.apache.commons.io.IOUtils as IOUtils
+
+    def process(dataSet):
+        dataSetCode = dataSet.getDataSetCode()
+        print "script processing " + dataSetCode
+        processNode(dataSet.getContent().getRootNode(), dataSet.getDataSetCode())
+
+    def processNode(node, dataSetCode):
+        print "process node: " + dataSetCode + "/" + node.getRelativePath()
+        if node.isDirectory():
+            for child in node.getChildNodes():
+                processNode(child, dataSetCode)
+        else:
+            print "content (" + str(node.getFileLength()) + "): " + \
+                  IOUtils.readLines(node.getInputStream()).toString()
+
+##### Processing plugin sending emails
+
+    import org.apache.commons.io.IOUtils as IOUtils
+
+    def process(dataSet):
+        dataSetCode = dataSet.getDataSetCode()
+        print "script processing " + dataSetCode
+        processNode(dataSet.getContent().getRootNode(), dataSet.getDataSetCode())
+
+    def processNode(node, dataSetCode):
+        print "process node: " + dataSetCode + "/" + node.getRelativePath()
+        if node.isDirectory():
+            for child in node.getChildNodes():
+                processNode(child, dataSetCode)
+        else:
+            fileAsString = IOUtils.readLines(node.getInputStream()).toString()
+            fileName = node.getName()
+
+            if fileName.endswith(".txt"):
+                mailService.createEmailSender().\
+                    withSubject("processed text file " + fileName).\
+                    withBody("see the attached file").\
+                    withAttachedText(fileAsString, fileName).\
+                    send()
+            else:
+                filePath = node.getFile().getPath()
+                mailService.createEmailSender().\
+                    withSubject("processed file " + fileName).\
+                    withBody("see the attached file").\
+                    withAttachedFile(filePath, fileName).\
+                    send()
+
+##### Example of Webapps that interact with Aggregation and Ingestion services
+
+See the documentation on Webapps and Services.
+
+  
+
+Screening Extensions
+--------------------
+
+For each of the above processing plugins, there is a screening variant
+that makes the
+[IScreeningOpenbisServiceFacade](https://openbis.ch/javadoc/20.10.x/javadoc-screening-api/ch/systemsx/cisd/openbis/plugin/screening/client/api/v1/IScreeningOpenbisServiceFacade.html)
+available to plugins. If you need access to the services it provides,
+use the screening variant.
+
+[TABLE]
+
+In each case, an additional variable, `screeningFacade`, is made
+available to the script.
+
+[TABLE]
+
+### Reporting Plugin Example
+
+Here is an example that uses the service facade to determine the mapping
+from wells to materials.
+
+    """A reporting plugin that displays a table of plate wells with their materials."""
+
+    from ch.systemsx.cisd.openbis.plugin.screening.shared.api.v1.dto import PlateIdentifier
+    import java.util
+
+    # The columns -- these are used both for the column headers and putting data into the table
+    PLATE = "Plate"
+    ROW = "Row"
+    COL = "Col"
+    MATERIALS_COUNT = "Number of Materials"
+    MATERIALS = "Materials"
+
+    # The sample type we are interested in
+    PLATE_SAMPLE_TYPE = "PLATE"
+
+    def getPlatesToQueryFromDataSets(dataSets):
+      """Given a collection of data sets, return a list of the plates they are associated with"""
+      platesToQuery = []
+      for dataSet in dataSets:
+        if dataSet.getSampleTypeCode() == PLATE_SAMPLE_TYPE:
+          platesToQuery.append(PlateIdentifier.createFromAugmentedCode(dataSet.getSampleIdentifier()))
+      return platesToQuery
+
+    def displayStringForMaterials(materials):
+      """Convert a collection of materials into a string we can show the user."""
+      elements = []
+      for material in materials:
+        elements.append(material.getAugmentedCode())
+      return ", ".join(elements)
+
+    def addHeadersToTable(tableBuilder):
+      """Set the table headers"""
+      tableBuilder.addHeader(PLATE)
+      tableBuilder.addHeader(ROW)
+      tableBuilder.addHeader(COL)
+      tableBuilder.addHeader(MATERIALS_COUNT)
+      tableBuilder.addHeader(MATERIALS)
+
+    def addDataRowToTable(tableBuilder, mapping, row, col):
+      """For each well, show the materials it refers to."""
+      tableRow = tableBuilder.addRow()
+      tableRow.setCell(PLATE, mapping.getPlateIdentifier().getAugmentedCode())
+      tableRow.setCell(ROW, row)
+      tableRow.setCell(COL, col)
+      materials = mapping.getMaterialsForWell(row, col)
+      tableRow.setCell(MATERIALS_COUNT, materials.size())
+      tableRow.setCell(MATERIALS, displayStringForMaterials(materials))
+
+    def describe(dataSets, tableBuilder):
+      """Show a table displaying the mapping from wells to materials."""
+      platesToQuery = getPlatesToQueryFromDataSets(dataSets)
+
+      # Need to convert any arguments that are jython objects to normal Java objects
+      plateWellMappings = screeningFacade.listPlateMaterialMapping(java.util.ArrayList(platesToQuery), None)
+
+      addHeadersToTable(tableBuilder)
+
+      # Add the data to the table
+      for mapping in plateWellMappings:
+        width = mapping.getPlateGeometry().getWidth()
+        height = mapping.getPlateGeometry().getHeight()
+        for y in range(1, height + 1):
+          for x in range(1, width + 1):
+            addDataRowToTable(tableBuilder, mapping, y, x)
+
+Invoking Reporting and Processing Plugins
+-----------------------------------------
+
+Reporting and processing plugins run against data sets. They are thus
+made available from data set tables. To run a reporting plugin, simply
+pick it from the drop-down box (see screenshots below). To run a
+processing plugin, pick it from the "Actions" button. Optionally, one
+can select data sets in the table to run the plugin against just the
+selected data sets.
+
+Aggregation service reporting plugins do not run against data sets. They
+can only be invoked via the Query API (see the sketch below).
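+
+A minimal sketch of such an invocation through the JSON-RPC flavour of
+the Query API (the host, ports, service key, and parameter key below are
+placeholder assumptions; verify the endpoint paths against your
+installation):
+
+    import json
+    import urllib2
+
+    AS_URL = "https://openbis-host:8443/openbis/openbis/rmi-general-information-v1.json"
+    DSS_URL = "https://openbis-host:8444/datastore_server/rmi-dss-api-v1.json"
+
+    def call(url, method, params):
+        request = urllib2.Request(url,
+            json.dumps({"id": "1", "jsonrpc": "2.0", "method": method, "params": params}),
+            {"Content-Type": "application/json"})
+        return json.load(urllib2.urlopen(request))["result"]
+
+    # log in at the AS to obtain a session token
+    token = call(AS_URL, "tryToAuthenticateForAllServices", ["username", "password"])
+
+    # invoke the aggregation service by its key; the result is a table model (headers + rows)
+    table = call(DSS_URL, "createReportFromAggregationService",
+                 [token, "jython-aggregation", {"experiment-code": "MY-EXPERIMENT"}])
+    print table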
+
+### Selecting Data Sets
+
+![](/download/thumbnails/53746027/01%20Select-Data-Sets.png?version=1&modificationDate=1601541481779&api=v2)  
+![](/download/thumbnails/53746027/03%20Pick-Which-Data-Sets.png?version=1&modificationDate=1601541481802&api=v2)
+
+### Selecting a Reporting Plugin
+
+![](/download/thumbnails/53746027/02%20Select-Reporting-Plugin.png?version=1&modificationDate=1601541481796&api=v2)
+
+### Viewing the results
+
+![](/download/thumbnails/53746027/04%20See-Result.png?version=1&modificationDate=1601541481789&api=v2)
+
diff --git a/docs/uncategorized/jython-datasetvalidator.md b/docs/uncategorized/jython-datasetvalidator.md
new file mode 100644
index 0000000000000000000000000000000000000000..3290f259b89af425d389bc96d726b41a544152ab
--- /dev/null
+++ b/docs/uncategorized/jython-datasetvalidator.md
@@ -0,0 +1,238 @@
+Jython DataSetValidator
+-----------------------
+
+
+### Overview
+
+Jython data set validators are an option for implementing validation of
+data sets in the Python scripting language when using a jython dropbox.
+See [Dropboxes](/display/openBISDoc2010/Dropboxes) for the basic
+configuration. The validators can also be run on clients, either the
+command-line dss client or the web start Data Set Batch Uploader, though
+there are some additional restrictions on which scripts can be run
+within the batch uploader.
+
+### Configuration
+
+To configure a validator, add the configuration parameter
+`validation-script-path` to the thread definition. For example:
+
+**plugin.properties**
+
+    # --------------------------------------------------------------------------------------------------
+    # Jython thread
+    # --------------------------------------------------------------------------------------------------
+    # The directory to watch for incoming data.
+    incoming-dir = /local0/openbis/data/incoming-jython
+    top-level-data-set-handler = ch.systemsx.cisd.etlserver.registrator.JythonTopLevelDataSetHandler
+    incoming-data-completeness-condition = auto-detection
+    strip-file-extension = true
+    storage-processor = ch.systemsx.cisd.etlserver.DefaultStorageProcessor
+    script-path = data-set-handler.py
+    validation-script-path = data-set-validator.py
+
+The script file (in this case "data-set-validator.py") needs to
+implement one method, `validate_data_set_file(file)`, which takes a file
+object as an argument and returns a collection of validation error
+objects. If the collection is empty, it is assumed that there were no
+validation errors.
+
+There are convenience methods to create various kinds of validation
+errors:
+
+-   `createFileValidationError(message: String)`,
+-   `createDataSetTypeValidationError(message: String)`,
+-   `createOwnerValidationError(message: String)` and
+-   `createPropertyValidationError(property: String, message: String)`.
+
+In the context of the validation scripts as they are currently
+implemented, the first one is probably the most relevant.
+
+These methods are defined on the class
+`ch.systemsx.cisd.openbis.dss.generic.shared.api.v1.validation.ValidationError`.
+The documentation for this class should be available here:
+
+<http://svnsis.ethz.ch/doc/openbis/current/ch/systemsx/cisd/openbis/dss/generic/shared/api/v1/validation/ValidationError.html>
+
+### Example scripts
+
+One can use both python standard libraries and Java libraries.
+
+#### Simple script using python libraries:
+
+    import os
+    import re
+
+    def validate_data_set_file(file):
+        found_match = False
+        if re.match('foo-.*bar', file.getName()):
+            found_match = True
+
+        errors = []
+        if found_match:
+            errors.append(createFileValidationError(file.getName() + " is not a valid data set."))
+
+        return errors
+
+#### Simple script using only java libraries:
+
+    def validate_data_set_file(file):
+        found_match = False
+        # Note: we use the python startswith method here.
+        if file.getName().startswith('foo'):
+            found_match = True
+
+        errors = []
+        if found_match:
+            errors.append(createFileValidationError(file.getName() + " is not a valid data set."))
+
+        return errors
+
+### Extracting and Displaying Metadata
+
+The module that validates a data set may, in addition to performing
+validation, implement a function that extracts metadata. This makes it
+possible to give the user immediate feedback about how the system
+interprets the data, giving her an opportunity to correct any
+inconsistencies she detects.
+
+To do this, implement a function called `extract_metadata` in the module
+that implements `validate_data_set_file`. The function
+`extract_metadata` should return a dictionary where the keys are the
+property codes and the values are property values.
+
+#### Example
+
+    def extract_metadata(file):
+        return { 'FILE-NAME' : file.getName() }
+
+### Testing
+
+#### Validation Scripts
+
+Scripts can be tested using the command-line client's `testvalid`
+command. This command takes the same arguments as `put`, plus an
+optional script parameter. If the script is not specified, the data set
+is validated against the server's validation script.
+
+Examples:
+
+    # Use the server script
+    ./dss_client.sh testvalid -u username -p password -s openbis-url experiment E-TEST-2 /path/to/data/set
+
+    # Use a local script
+    ./dss_client.sh testvalid -u username -p password -s openbis-url experiment E-TEST-2 /path/to/data/set /path/to/script
+
+#### Extract Metadata Scripts
+
+The extract metadata script can be tested with the `testextract` command
+in the command-line client. The arguments are the same as for
+`testvalid`.
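+
+By analogy with the `testvalid` examples above, an invocation might look
+like this (a sketch; adjust credentials, URL, and paths to your setup):
+
+    # Use the server script
+    ./dss_client.sh testextract -u username -p password -s openbis-url experiment E-TEST-2 /path/to/data/set
+
+    # Use a local script
+    ./dss_client.sh testextract -u username -p password -s openbis-url experiment E-TEST-2 /path/to/data/set /path/to/script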
+
diff --git a/docs/uncategorized/jython-master-data-scripts.md b/docs/uncategorized/jython-master-data-scripts.md
new file mode 100644
index 0000000000000000000000000000000000000000..0cb4b4852f180c7119ba66d4f186fe8663136860
--- /dev/null
+++ b/docs/uncategorized/jython-master-data-scripts.md
@@ -0,0 +1,208 @@
+Jython Master Data Scripts
+--------------------------
+
+
+Introduction
+------------
+
+openBIS defines as "master data" all the metadata configurations needed
+before the import of the actual raw data. Master data includes
+experiment/sample/data set/property/file types, vocabularies, and
+property assignments.
+
+### API Basics
+
+Similarly to the [Jython Dropbox
+API](/pages/viewpage.action?pageId=53746029), the script can access a
+global variable named `service`, which can be used to create
+transactions:
+
+    transaction = service.transaction()
+
+Transactions are the focal API concept, offering methods to create new
+types (e.g. `createNewSampleType`, `createNewDataSetType`) and new
+property assignments (e.g. `assignPropertyType`).
+
+The complete Javadoc for the API is available at:
+
+[TABLE]
+
+### Simple example
+
+    import ch.systemsx.cisd.openbis.generic.client.jython.api.v1.DataType as DataType
+
+    tr = service.transaction()
+
+    expType = tr.createNewExperimentType('EXPERIMENT-TYPE')
+    expType.setDescription('Experiment type description.')
+
+    sampleType = tr.createNewSampleType('SAMPLE-TYPE')
+    sampleType.setDescription('Sample type description.')
+    sampleType.setSubcodeUnique(True)
+    sampleType.setAutoGeneratedCode(True)
+    sampleType.setGeneratedCodePrefix("G_")
+
+    dataSetType = tr.createNewDataSetType('DATA-SET-TYPE')
+    dataSetType.setContainerType(True)
+    dataSetType.setDescription('Data set type description.')
+
+    materialType = tr.createNewMaterialType('MATERIAL-TYPE')
+    materialType.setDescription('Material type description.')
+
+    stringPropertyType = tr.createNewPropertyType('VARCHAR-PROPERTY-TYPE', DataType.VARCHAR)
+    stringPropertyType.setDescription('Varchar property type description.')
+    stringPropertyType.setLabel('STRING')
+
+    materialPropertyType = tr.createNewPropertyType('MATERIAL-PROPERTY-TYPE', DataType.MATERIAL)
+    materialPropertyType.setDescription('Material property type description.')
+    materialPropertyType.setLabel('MATERIAL')
+    materialPropertyType.setMaterialType(materialType)
+    materialPropertyType.setManagedInternally(False)
+
+    # assigns the newly created property 'MATERIAL-PROPERTY-TYPE'
+    # as a mandatory property for 'SAMPLE-TYPE'
+    materialAssignment = tr.assignPropertyType(sampleType, materialPropertyType)
+    materialAssignment.setMandatory(True)
+
+    # assigns the newly created property 'VARCHAR-PROPERTY-TYPE'
+    # as an optional property for 'EXPERIMENT-TYPE' with default value 'FOO_BAR'
+    stringAssignment = tr.assignPropertyType(expType, stringPropertyType)
+    stringAssignment.setMandatory(False)
+    stringAssignment.setDefaultValue('FOO_BAR')
+
+### Command line tools
+
+#### Executing master data scripts
+
+Make sure openBIS AS is up and running prior to script execution. Go to
+the openBIS AS installation folder. Assuming your script is
+`/local/master-data-script.py` and openBIS AS is started on the URL
+`http://localhost:8888/openbis`, execute the command:
+
+    > cd /local0/openbis/servers/openBIS-server/jetty/bin
+    > ./register-master-data.sh -s http://localhost:8888/openbis/openbis -f /local/master-data-script.py
+
+You will be prompted for username/password before the script execution.
+Please note that the second 'openbis' is needed in the server address,
+so that you connect via the API.
+
+#### Exporting master data
+
+You can export the master data from a running openBIS system as a script
+by running the command:
+
+    > cd /local0/openbis/servers/openBIS-server/jetty/bin
+    > ./export-master-data.sh -s http://localhost:8888/openbis/openbis
+
+This command will create a folder `exported-master-data-DATE` containing
+the exported master data script, `master-data.py`.
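+
+The exported script can then be registered on another openBIS instance
+using the same `register-master-data.sh` tool shown above (the target
+URL below is just an example):
+
+    > ./register-master-data.sh -s http://other-host:8888/openbis/openbis -f exported-master-data-DATE/master-data.py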
+
diff --git a/docs/uncategorized/multi-data-set-archiving.md b/docs/uncategorized/multi-data-set-archiving.md
new file mode 100644
index 0000000000000000000000000000000000000000..73fcc1cb66488db896fbe94fbc54ce7a61993d1c
--- /dev/null
+++ b/docs/uncategorized/multi-data-set-archiving.md
@@ -0,0 +1,563 @@
+Multi data set archiving
+------------------------
+
+
+Introduction
+------------
+
+The multi data set archiver is a tool for archiving several data sets
+together in chunks of relatively large size. When a group of data sets
+is selected for archiving, it is verified that together they are of
+proper size, and then they are stored as one big container file (tar) on
+the destination storage.
+
+When unarchiving data sets from a multi data set archive the following
+rules are obeyed:
+
+-   Unarchiving of data sets from different containers is possible as
+    long as the maximum unarchiving cap specified in the
+    plugin.properties file is not exceeded.
+-   All data sets from a container are unarchived even if unarchiving
+    has been requested only for a subset.
+-   The data sets are unarchived into a share which is marked as an
+    unarchiving scratch share.
+-   If there is not enough free space in the scratch share, the oldest
+    data sets (by modification time stamp) are removed from the scratch
+    share to free space. For those data sets the archiving status is set
+    back to ARCHIVED.
+
+To test the archiver, find the data sets you want to archive in the
+openBIS GUI and select "add to archive".
+
+Important technical details
+---------------------------
+
+The archiver requires configuration of three important entities.
+
+-   An archive destination (e.g. on Strongbox).
+-   A PostgreSQL database for mapping information (i.e. which data set
+    is in which container file).
+-   An unarchiving scratch share (see the sketch below).
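+
+Marking a share as the unarchiving scratch share is done in the share's
+`share.properties` file. A minimal sketch, assuming share number 2 and
+the property names below (verify them against your openBIS version):
+
+**store/2/share.properties**
+
+    unarchiving-scratch-share = true
+    # optional upper limit for the scratch share
+    unarchiving-scratch-share-maximum-size-in-GB = 100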
+
+The multi data set archiver is not compatible with other archivers. You
+should have all data available before configuring this archiver.
+
+Workflows
+---------
+
+The multi data set archiver can be configured for four different
+workflows. The workflow is selected by the presence/absence of the
+properties `staging-destination` and `replicated-destination`.
+
+### Simple workflow
+
+Neither of the properties `staging-destination` and
+`replicated-destination` is present.
+
+1.  Wait for enough free space on the archive destination.
+2.  Store the data sets in a container file directly on the archive
+    destination.
+3.  Perform a sanity check, i.e. fetch the container file back to the
+    local disk and compare its content with the content of all data sets
+    in the store.
+4.  Add mapping data to the PostgreSQL database.
+5.  Remove data sets from the store if requested.
+6.  Update the archiving status for all data sets.
+
+### Staging workflow
+
+Property `staging-destination` is specified but
+`replicated-destination` is not.
+
+1.  Store the data sets in a container file in the staging folder.
+2.  Wait for enough free space on the archive destination.
+3.  Copy the container file from the staging folder to the archive
+    destination.
+4.  Perform a sanity check.
+5.  Remove container file from the staging folder.
+6.  Add mapping data to the PostgreSQL database.
+7.  Remove data sets from the store if requested.
+8.  Update archiving status for all data sets.
+
+### Replication workflow
+
+Property `replicated-destination` is specified but
+`staging-destination` is not.
+
+1.  Wait for enough free space on the archive destination.
+2.  Store the data set in a container file directly on the archive
+    destination.
+3.  Perform a sanity check.
+4.  Add mapping data to the PostgreSQL database.
+5.  Wait until the container file has also been copied (by some external
+    process) to a replication folder.
+6.  Remove data sets from the store if requested.
+7.  Update archiving status for all data sets.
+
+Some remarks:
+
+-   Steps 5 to 7 are performed asynchronously from the first four steps
+    because step 5 can take quite long. In the meantime the next
+    archiving task can already be performed.
+-   If the container file isn't replicated after some time, archiving
+    will be rolled back and scheduled again.
+
+### Staging and replication workflow
+
+When both properties `staging-destination` and `replicated-destination`
+are present, the staging and replication workflows are combined.
+
+Clean up
+--------
+
+In case archiving fails, all half-baked container files have to be
+removed. By default this is done immediately.
+
+But in the context of tape archiving systems (e.g. Strongbox), immediate
+deletion might not always be possible. In this case a deletion request
+is scheduled. The request is stored in a file and handled in a separate
+thread at regular time intervals (polling time). If deletion isn't
+possible after some timeout, an e-mail is sent. Such a deletion request
+will still be handled, but the e-mail allows manual
+intervention/deletion. Note that deletion requests for non-existing
+files are always handled successfully.
+
+Configuration steps
+-------------------
+
+-   Disable existing archivers
+    -   Find all properties of the form `archiver.*` in
+        `servers/datastore_server/etc/service.properties` and remove
+        them.
+    -   Find all DSS core plugins of type `miscellaneous` which define
+        an archiver. Disable them by adding an empty marker file
+        named `disabled`.
+
+-   Enable archiver
+    -   Configure a new DSS core plugin of type `miscellaneous`:
+
+        **multi-dataset-archiver/1/dss/miscellaneous/archiver/plugin.properties**
+
+            archiver.class = ch.systemsx.cisd.openbis.dss.generic.server.plugins.standard.archiver.MultiDataSetArchiver
+
+            # Temporary folder (needed for sanity check). Default: Value provided by Java system property java.io.tmpdir. Usually /tmp
+            # archiver.temp-folder = <java temp folder>
+
+            # Archive destination
+            archiver.final-destination = path/to/strongbox/as/mounted/resource
+
+            # Staging folder (needed for 'staging workflow' and 'staging and replication workflow')
+            archiver.staging-destination = path/to/local/stage/area
+
+            # Replication folder (needed for 'replication workflow' and 'staging and replication workflow')
+            archiver.replicated-destination = path/to/mounted/replication/folder
+
+            # The archiver will refuse to archive a group of data sets which together is smaller than this value
+            archiver.minimum-container-size-in-bytes = 15000000
+
+            # The archiver will refuse to archive a group of data sets which together is bigger than this value.
+            # The archiver will ignore this value when archiving a single data set.
+            archiver.maximum-container-size-in-bytes = 35000000
+
+            # This variable is meant for another use case than this archiver, but is shared among all archivers.
+            # For this archiver it should be set to something safely larger than maximum-container-size-in-bytes
+            archiver.batch-size-in-bytes = 80000000
+
+            # (since version 20.10.4) Check consistency between the file metadata of the files in the store and in the pathinfo database.
+            # Default value: true 
+            # check-consistency-between-store-and-pathinfo-db = true
+
+            # Archiving can be sped up by setting this flag to false (default value: true). But this works only if the data sets
+            # to be archived do not contain hdf5 files which are handled as folders (like the thumbnail h5ar files in screening/microscopy).
+            # archiver.hdf5-files-in-data-set = true
+
+            # Whether all data sets should be archived in a top-level directory of the archive or with sharding (the way data sets are stored in the openBIS internal store)
+            # archiver.with-sharding = false
+
+            # Polling time for evaluating free space on archive destination
+            # archiver.waiting-for-free-space-polling-time = 1 min
+
+            # Maximum waiting time for free space on archive destination
+            # archiver.waiting-for-free-space-time-out = 4 h
+
+            # If set to true, then an initial waiting time will be added before starting a sanity check.
+            # If the sanity check fails, it will be retried. The time between each sanity check attempt is doubled,
+            # starting from the initial waiting time up to the maximum waiting time (see properties below).
+            # Default: false
+            archiver.wait-for-sanity-check = true
+
+            # Initial waiting time before starting a sanity check. Works only if 'wait-for-sanity-check = true'
+            # Default: 10sec
+            archiver.wait-for-sanity-check-initial-waiting-time = 120 sec
+
+            # Maximum total waiting time for failed sanity check attempts. Works only if 'wait-for-sanity-check = true'
+            # Default: 30min
+            archiver.wait-for-sanity-check-max-waiting-time = 5 min
+
+            # A template of a shell command to be executed before unarchiving. The template may use ${container-path} and ${container-id} variables which will be replaced with an absolute container path (full path of the tar file to be unarchived)
+            # and a container id (id of the container to be unarchived used in the archiving database). The command created from the template is executed only once for a given container (just before the first unarchiving attempt) and is not retried.
+            # The unarchiver waits for the command to finish before proceeding. If the command exits with status zero, then the unarchiving is started. If the command exits with a non-zero value, then the unarchiving is marked as failed.
+            #
+            # Example: tar -tf ${container-path}
+            # Default: null
+            archiver.unarchiving-prepare-command-template
+
+            # If set to true, then the unarchiver waits for T flag to be removed from the file in the final destination before it tries to read the file.
+            # Default: false
+            archiver.unarchiving-wait-for-t-flag = true
+
+            # Maximum total waiting time for failed unarchiving attempts.
+            # Default: null
+            archiver.unarchiving-max-waiting-time = 1d
+
+            # Polling time for waiting on unarchiving.
+            # Default: null
+            archiver.unarchiving-polling-time = 5 min
+
+            # If set to true, then the archiver waits for T flag to be set on the file in the replicated destination. The check is done before a potential sanity check of the replicated file (see 'finalizer-sanity-check').
+            # Default: false
+            archiver.finalizer-wait-for-t-flag = true
+
+            # If set to true, then a sanity check for the replicated destination is also performed (in addition to a sanity check for the final destination which is always executed).
+            # Default: false
+            archiver.finalizer-sanity-check = true
+
+            # Minimum required free space at final destination before triggering archiving if > 0. This threshold can be
+            # specified as a percentage of total space or number of bytes. If both are specified the threshold is given by
+            # the maximum of both values.
+            # archiver.minimum-free-space-at-final-destination-in-percentage
+            # archiver.minimum-free-space-at-final-destination-in-bytes
+
+            # Minimum free space on archive destination after container file has been added.
+            # archiver.minimum-free-space-in-MB = 1024
+
+            # Polling time for waiting on replication. Only needed if archiver.replicated-destination is specified.
+            # archiver.finalizer-polling-time = 1 min
+
+            # Maximum waiting time for replication to finish. Only needed if archiver.replicated-destination is specified.
+            # archiver.finalizer-max-waiting-time = 1 d
+
+            # Maximum total size (in MB) of data sets that can be scheduled for unarchiving at any given time. When not specified, defaults to 1 TB.
+            # Note also that the value specified must be consistent with the scratch share size. 
+            # maximum-unarchiving-capacity-in-megabytes = 200000
+
+            # Delay unarchiving. Needs MultiDataSetUnarchivingMaintenanceTask.
+            # archiver.delay-unarchiving = false
+
+            # Size of the buffer used for copying data. Default value: 1048576 (i.e. 1 MB). This value is only important in case of accurate
+            # measurements of data transfer rates. In case of expected fast transfer rates a larger value (e.g. 10 MB) should be used.
+            # archiver.buffer-size = 1048576
+
+            # Maximum size of the writing queue for copying data. Reading from the data store and writing to the TAR file are 
+            # done in parallel. The default value is 5 * archiver.buffer-size. 
+            # archiver.maximum-queue-size-in-bytes = 5242880
+
+            # Path (absolute or relative to store root) of an empty file. If this file is present, starting 
+            # archiving will be paused until this file has been removed. 
+            # This property is useful for archiving media/facilities with maintenance downtimes.
+            # archiver.pause-file = pause-archiving
+
+            # Time interval between two checks whether pause file still exists or not.
+            # archiver.pause-file-polling-time = 10 min
+
+            #-------------------------------------------------------------------------------------------------------
+            # Clean up properties
+            # 
+            # A comma-separated list of paths to folders which should be cleaned in a separate thread
+            #archiver.cleaner.file-path-prefixes-for-async-deletion = <absolute path 1>, <absolute path 2>, ...
+
+            # A folder which will contain deletion request files. This is a mandatory property if 
+            # archiver.cleaner.file-path-prefixes-for-async-deletion is specified.
+            #archiver.cleaner.deletion-requests-dir = <some local folder>
+
+            # Polling time interval for looking and performing deletion requests. Default value is 10 minutes.
+            #archiver.cleaner.deletion-polling-time = 10 min
+
+            # Time out of deletion requests. Default value is one day.
+            #archiver.cleaner.deletion-time-out = 24 h
+
+            # Optional e-mail address. If specified, at every integer multiple of the timeout period an e-mail is sent to 
+            # this address listing all deletion requests older than the specified timeout.
+            #archiver.cleaner.email-address = <some valid e-mail address>
+
+            # Optional e-mail address for the 'from' field.
+            #archiver.cleaner.email-from-address = <some well-formed e-mail address>
+
+            # Subject for the 'subject' field. Mandatory if an e-mail address is specified.
+            #archiver.cleaner.email-subject = Deletion failure
+
+            # Template with variable ${file-list} for the body text of the e-mail. The variable will be replaced by a list of
+            # lines: two lines for each deletion request, one for the absolute file path and one for the request time stamp.
+            # Mandatory if an e-mail address is specified.
+            #archiver.cleaner.email-template = The following files couldn't be deleted:\n${file-list}
+
+            #-------------------------------------------------------------------------------------------------------
+            # The following properties are necessary in combination with data source configuration
+            multi-dataset-archive-database.kind = prod
+            multi-dataset-archive-sql-root-folder = datastore_server/sql/multi-dataset-archive
+
+        You should make sure that all destination directories exist and
+        DSS has read/write privileges before attempting archiving
+        (otherwise the operation will fail).  
+        Add the core plugin module name (here `multi-dataset-archiver`)
+        to the property `enabled-modules` of `core-plugin.properties`.
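+
+        For example (a sketch, assuming no other modules are enabled):
+
+        **core-plugin.properties**
+
+            enabled-modules = multi-dataset-archiver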
+
+-   Enable PostgreSQL datasource
+    -   Configure a new DSS core plugin of type `data-sources`:
+
+        **multi-dataset-archiver/1/dss/data-sources/multi-dataset-archiver-db/plugin.properties**
+
+            version-holder-class = ch.systemsx.cisd.openbis.dss.generic.server.plugins.standard.archiver.dataaccess.MultiDataSetArchiverDBVersionHolder
+            databaseEngineCode = postgresql
+            basicDatabaseName = multi_dataset_archive
+            urlHostPart = ${multi-dataset-archive-database.url-host-part:localhost}
+            databaseKind = ${multi-dataset-archive-database.kind:prod}
+            scriptFolder = ${multi-dataset-archive-sql-root-folder:}
+            owner = ${multi-dataset-archive-database.owner:}
+            password = ${multi-dataset-archive-database.password:}
+
+-   Create a share which will be used exclusively as a scratch share for
+    unarchiving. To mark it for this purpose add a `share.properties`
+    file to the share (e.g. `<mounted share>/store/1/share.properties`)
+    with property `unarchiving-scratch-share = true`.  
+    In addition the maximum size of the share can be specified. Example:
+
+    **share.properties**
+
+        unarchiving-scratch-share = true
+        unarchiving-scratch-share-maximum-size-in-GB = 100
+
+-   It is recommended to do archiving in a separate queue in order to
+    avoid the situation where fast processing plugin tasks are not
+    processed because multi data set archiving tasks can take quite
+    long. If one of the two workflows with replication is selected
+    (i.e. `archiver.replicated-destination` is set) a second processing
+    plugin (ID `Archiving Finalizer`) is used. It should run in a queue
+    different from the queue used for archiving. The following setting
+    in DSS `service.properties` covers all workflows:
+
+    **service.properties**
+
+        data-set-command-queue-mapping = archiving:Archiving|Copying data sets to archive, unarchiving:Unarchiving, archiving-finalizer:Archiving Finalizer
+
+Clean up Unarchiving Scratch Share
+----------------------------------
+
+(Since version 20.10.4) Data sets in the unarchiving scratch share can
+be removed at any time because they are already present in the archive.
+Normally this happens during unarchiving if there is not enough free
+space available in the scratch share. But this may fail for some reason.
+This can lead to the effect that unarchiving doesn't work because there
+are data sets in the scratch share which could be removed because they
+are archived.
+
+Therefore, it is recommended to set up a
+[CleanUpUnarchivingScratchShareTask](/display/openBISDoc2010/Maintenance+Tasks#MaintenanceTasks-CleanUpUnarchivingScratchShareTask)
+which removes data sets from the scratch share which fulfill the
+following conditions:
+
+-   The data set is in state ARCHIVED and the flag `presentInArchive` 
+    is set.
+-   The data set is found in the Multi Data Set Archive database and the
+    corresponding TAR archive file exists.  
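+
+A minimal sketch of such a maintenance task configuration (the fully
+qualified class name and the interval are assumptions; consult the
+linked page for the exact values):
+
+**plugin.properties**
+
+    # Maintenance task removing archived data sets from the scratch share
+    class = ch.systemsx.cisd.openbis.dss.generic.server.plugins.standard.archiver.CleanUpUnarchivingScratchShareTask
+    # Run once a day
+    interval = 1 d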
+
+Deletion of archived Data Sets
+------------------------------
+
+(Since version 20.10.3) Archived data sets can be deleted permanently.
+But they are still present in the container files. In order to remove
+them from the container files as well, a
+[MultiDataSetDeletionMaintenanceTask](/display/openBISDoc2010/Maintenance+Tasks#MaintenanceTasks-MultiDataSetDeletionMaintenanceTask)
+has to be configured.
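+
+Again a minimal sketch (the fully qualified class name and the interval
+are assumptions; see the linked page for the actual configuration
+options):
+
+**plugin.properties**
+
+    # Maintenance task removing permanently deleted data sets from container files
+    class = ch.systemsx.cisd.openbis.dss.generic.server.plugins.standard.archiver.MultiDataSetDeletionMaintenanceTask
+    # Run once a day
+    interval = 1 d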
+
+Recovery from corrupted archiving queues
+----------------------------------------
+
+In case the queues with the archiving commands get corrupted, they
+cannot be used any more; they need to be deleted before the DSS starts,
+and new ones will be created. The typical scenario where this happens
+is when you run out of space on the disk where the queues are stored.
+
+The following steps describe how to recover from such a situation.
+
+1.  Find out which data sets are in 'ARCHIVE\_PENDING' status.
+
+        SELECT id, size, present_in_archive, share_id, location FROM external_data WHERE status = 'ARCHIVE_PENDING';
+         
+        openbis_prod=> SELECT id, size, present_in_archive, share_id, location FROM external_data WHERE status = 'ARCHIVE_PENDING'; 
+         data_id |    size     | present_in_archive | share_id |                               location                                
+        ---------+-------------+--------------------+----------+-----------------------------------------------------------------------
+            3001 | 34712671864 | f                  | 1        | 585D8354-92A3-4C24-9621-F6B7063A94AC/17/65/a4/20170712111421297-37998
+            3683 | 29574172672 | f                  | 1        | 585D8354-92A3-4C24-9621-F6B7063A94AC/39/6c/b0/20171106181516927-39987
+            3688 | 53416316928 | f                  | 1        | 585D8354-92A3-4C24-9621-F6B7063A94AC/ca/3b/93/20171106183212074-39995
+            3692 | 47547908096 | f                  | 1        | 585D8354-92A3-4C24-9621-F6B7063A94AC/b7/26/85/20171106185354378-40002
+
+2.  The data sets found may or may not be in the archiving process.
+    This is not easy to find out instantly. It's easier just to execute
+    the above statement again on subsequent days.
+
+3.  If the data sets are still in 'ARCHIVE\_PENDING' after a sensible
+    amount of time (1 week, for example) and there are no other issues,
+    like the archiving destination being unavailable, there is a good
+    chance that they are really stuck in the process.
+
+4.  At this point, the data sets are most likely still in the data
+    store at the indicated combination of share ID and location.
+    Verify this! If they are not there, hope that they are archived, or
+    you are in trouble.
+
+5.  If they are in the store, you need to set the status to 'AVAILABLE'
+    again using an SQL statement.
+
+         openbis_prod=> UPDATE external_data SET status = 'AVAILABLE', present_in_archive = 'f'  WHERE id IN (SELECT id FROM data where code in ('20170712111421297-37998', '20171106181516927-39987')); 
+
+    If there are half-copied files on the archive destination, these
+    need to be deleted too. To find them, run the following queries.
+
+        # To find out the containers:
+         
+        SELECT * FROM data_sets WHERE CODE IN('20170712111421297-37998', '20171106181516927-39987', '20171106183212074-39995', '20171106185354378-40002');
+
+        multi_dataset_archive_prod=> SELECT * FROM data_sets WHERE CODE IN('20170712111421297-37998', '20171106181516927-39987', '20171106183212074-39995', '20171106185354378-40002');
+         id  |          code           | ctnr_id | size_in_bytes 
+        -----+-------------------------+---------+---------------
+         294 | 20170712111421297-37998 |      60 |   34712671864
+         295 | 20171106185354378-40002 |      61 |   47547908096
+         296 | 20171106183212074-39995 |      61 |   53416316928
+         297 | 20171106181516927-39987 |      61 |   29574172672
+        (4 rows)
+
+        multi_dataset_archive_prod=> SELECT * FROM containers WHERE id IN(60, 61);
+         id |                    path                     | unarchiving_requested 
+        ----+---------------------------------------------+-----------------------
+         60 | 20170712111421297-37998-20171108-105339.tar | f
+         61 | 20171106185354378-40002-20171108-130342.tar | f
+         
+
+    NOTE: We have never seen this, but if there is a container with
+    data sets in different archiving statuses, you need to recover the
+    ARCHIVED data sets from the container and copy them manually to the
+    data store before being able to continue.
+
+        multi_dataset_archive_prod=> SELECT * FROM data_sets WHERE ctnr_id IN(SELECT ctnr_id FROM data_sets WHERE CODE IN('20170712111421297-37998', '20171106181516927-39987', '20171106183212074-39995', '20171106185354378-40002'));
+
+6.  After deleting the files, clean up the multi dataset archiver
+    database.
+
+        multi_dataset_archive_prod=> DELETE FROM containers WHERE id IN (SELECT ctnr_id FROM data_sets WHERE CODE IN('20170712111421297-37998', '20171106181516927-39987', '20171106183212074-39995', '20171106185354378-40002'));
+
diff --git a/docs/uncategorized/register-master-data-via-the-admin-interface.md b/docs/uncategorized/register-master-data-via-the-admin-interface.md
new file mode 100644
index 0000000000000000000000000000000000000000..2a049207e5fc042c36de7f8e5921b4d492331706
--- /dev/null
+++ b/docs/uncategorized/register-master-data-via-the-admin-interface.md
@@ -0,0 +1,434 @@
+[Register Master Data via the Admin Interface](/display/openBISDoc2010/Register+Master+Data+via+the+Admin+Interface)
+--------------------------------------------------------------------------------------------------------------------
+
+
+This documentation describes how to register master data via the core
+UI. The documentation for the new admin UI can be found here:
+<https://openbis.ch/index.php/docs/admin-documentation/new-entity-type-registration/> 
+
+  
+
+openBIS master data are:
+
+1.  Spaces
+2.  Experiment/Collection types
+3.  Object types
+4.  Dataset types
+5.  Property types
+
+  
+
+How to register a Space 
+------------------------
+
+  
+
+1.  Go to *Admin → Spaces*  
+      
+     ![](/download/thumbnails/53745926/Space-registration-1.png?version=1&modificationDate=1601541490182&api=v2)
+2.  Go to *Add Space* at the bottom of the page  
+      
+    ![](/download/attachments/53745926/Space-registration-2.png?version=1&modificationDate=1601541490176&api=v2)  
+      
+3.  Enter a *Code* and, if you wish, a *Description* for the Space  
+      
+    ![](/download/attachments/53745926/Space-registration-3.png?version=1&modificationDate=1601541490172&api=v2)  
+      
+4.  *Save*
+
+  
+
+How to Register an Experiment/Collection type
+---------------------------------------------
+
+  
+
+1.  Go to *Admin → Types → CollectionTypes*  
+      
+    ![](/download/attachments/53745926/Collection-type-registration-1.png?version=1&modificationDate=1601541490164&api=v2)
+2.  Select *Add* at the bottom of the page  
+      
+    ![](/download/attachments/53745926/Collection-type-registration-2.png?version=1&modificationDate=1601541490156&api=v2)  
+      
+3.  Now enter the *Code* for the Experiment/Collection type. E.g. for a
+    microscopy experiment, the code could be EXPERIMENT\_MICROSCOPY.  
+      
+    ![](/download/attachments/53745926/Collection-type-registration-3.png?version=1&modificationDate=1601541490152&api=v2)
+4.  *Description*: fill in this field if you want to provide some
+    details about this Collection/Experiment type
+5.  *Validation plugin*: If you want to have data validation, a script
+    (a validation plugin) needs to be written and can be selected from
+    here. An example of data validation: if you have two properties,
+    one called *Start date* and one called *End date*, the *End date*
+    should never be earlier than the *Start date*.  
+      
+6.  *Add properties.* These are the fields that you need for this
+    Collection/Experiment. Select *Entity: Add* at the bottom of the
+    page. You have two options:  
+      
+    1.  choose from a list of existing properties  
+        ![](/download/attachments/53745926/Add-existing-property.png?version=1&modificationDate=1601541490147&api=v2)  
+        The dropdown Property type (see screenshot above) gives you the
+        list of all registered properties in openBIS. The full list of
+        registered properties is under *Admin → Types → Browse Property
+        Types*.
+    2.  create a new property  
+          
+        ![](/download/attachments/53745926/Add-new-property.png?version=1&modificationDate=1601541490142&api=v2)  
+        To register a new property you need to provide:  
+          
+        1.  *Code*: this is the unique identifier for this property.
+            Codes only take alphanumeric characters and no spaces.
+        2.  *Label*: This is what is shown in the user interface. Labels
+            are not unique.
+        3.  *Description*: this field provides a hint to what should be
+            entered in the property field
+        4.  *Data type*: what type of property this is (see below for
+            list of available Data Types)
+        5.  *Handled by plugin:* if this is a dynamic property or
+            managed property, whose value is computed by a plugin, this
+            needs to be specified here
+        6.  *Mandatory*: It is possible to set mandatory properties   
+              
+
+        After choosing the data type, two new fields are added to the
+        widget in the screenshot above:  
+          
+        ![](/download/attachments/53745926/Property-registration-fields-after-trype-selection.png?version=1&modificationDate=1601541490139&api=v2)  
+          
+        1.  *Section*: sections are ways of grouping together some
+            properties. For example, properties such as *Storage
+            Condition*, *Storage location* and *Box Name* can all belong
+            to a Section called *Storage information*. There are no
+            pre-defined Sections in the system; they always need to be
+            defined by an admin by entering the desired Section Name
+            in the *Section* field.
+        2.  *Position after*: this allows you to specify the position
+            of the Property in the user interface.
+
+### Data Types available in openBIS
+
+The following data types are available in openBIS:
+
+![](/download/attachments/53745926/openBIS-data-types.png?version=1&modificationDate=1601541490134&api=v2)
+
+  
+
+  
+
+1.  *Boolean*: True or false
+2.  *Controlled Vocabulary*: list of values to choose from. Only 1 value
+    can be selected from a list
+3.  *Hyperlink*: URL
+4.  *Integer*: integer number
+5.  *Material*: not to be used, it will soon be discontinued
+6.  *Multiline varchar*: long text
+7.  *Real*: decimal number 
+8.  *Timestamp*: date (and timestamp)
+9.  *Varchar*: one-line text
+10. *XML*: to be used for Managed properties and for spreadsheet fields 
+
+  
+
+#### Controlled Vocabularies
+
+A Controlled Vocabulary is a pre-defined list of terms to choose from.
+Only one term can be selected.
+
+When you choose CONTROLLEDVOCABULARY as data type, you can then either
+choose from existing vocabularies (drop down) or create a new vocabulary
+(+ next to dropdown).
+
+![](/download/attachments/53745926/Register-property-controllevocabulary.png?version=1&modificationDate=1601541490130&api=v2)
+
+To create a new vocabulary, you need to enter the *Code* and the *list
+of terms* belonging to the vocabulary. 
+
+For example, we want to have a drop down list for different storage
+conditions: -80°C, -20°C, 4°C, room temperature.
+
+We can use STORAGE\_CONDITION as vocabulary code (this is the unique
+identifier of this vocabulary). Then we can specify the list of terms,
+either in the interface or we can load them from a file.
+
+Vocabulary terms have codes and labels. The code is always the unique
+identifier and should be written with alphanumeric characters and no
+spaces; labels are shown in the user interface (if present) and they can
+be written as normal text.
+
+Taking as example the Storage conditions mentioned above, we could have
+the following codes and labels:
+
+
+[TABLE]
+
+  
+
+1.  **Specify the list of terms in the interface**
+
+![](/download/attachments/53745926/Controlled-vocabulary-list.png?version=1&modificationDate=1601541490126&api=v2)
+
+  
+
+In this case, in the Terms field, we can only enter vocabulary codes
+(not labels) separated by a comma, or alternatively 1 code per line. If
+we use this approach, we need to add the label in a second step, by
+editing the Controlled Vocabulary.
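+
+For the storage-condition example, the Terms field could for instance
+contain (the codes are illustrative, not prescribed):
+
+    MINUS_80, MINUS_20, PLUS_4, RT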
+
+**2. Load terms from a file**
+
+![](/download/attachments/53745926/Controlle-vocabulary-from-file.png?version=1&modificationDate=1601541490121&api=v2)
+
+  
+
+In this case, a tab-separated file that contains at least one column
+for the code and one column for the label can be uploaded. Following
+this procedure, codes and labels can be added in one single step.
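+
+A minimal sketch of such a file for the storage-condition example
+(the column header names and codes are assumptions; the columns are
+tab-separated):
+
+    code        label
+    MINUS_80    -80°C
+    MINUS_20    -20°C
+    PLUS_4      4°C
+    RT          Room temperature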
+
+  
+
+#### Editing Controlled Vocabularies
+
+It is possible to edit existing vocabulary terms, for example to add a
+label, and also to add new terms to an existing vocabulary.
+
+1.  Go to *Admin → Vocabularies*
+
+ ![](/download/attachments/53745926/Controlled-vocabulry-list.png?version=1&modificationDate=1601541490114&api=v2)
+
+2\. Select the desired vocabulary in the table (click on the blue link)
+
+3\. Add a new term by selecting *Entity: Add* at the bottom of the page
+
+or
+
+Edit an existing term by selecting it in the table and then going to
+*Entity:Edit* at the bottom of the page.
+
+ ![](/download/attachments/53745926/Controlled-vocabulary-add-term.png?version=1&modificationDate=1601541490109&api=v2)
+
+  
+
+How to Register an Object type
+------------------------------
+
+  
+
+1.  Go to *Admin → Types → Object Types*  
+      
+    ![](/download/attachments/53745926/Collection-type-registration-1.png?version=1&modificationDate=1601541490164&api=v2)
+2.  To register a new type select *Entity:Add* at the bottom of the
+    page. To edit an existing Object type, select the desired type from
+    the table and go to *Entity:Edit* at the bottom of the page.  
+      
+    ![](/download/attachments/53745926/Object-type-registration-1.png?version=1&modificationDate=1601541490104&api=v2)  
+      
+3.  In the Object Type registration page a few fields need to be filled
+    in (see screenshot below)
+    1.  *Code*: the name of the object type. Codes can only have
+        alpha-numeric characters.
+    2.  *Description*: fill in this field if you want to provide some
+        details about this Object type.
+    3.  *Validation plugin*: If you want to have data validation, a
+        script (a validation plugin) needs to be written and can be
+        selected from here. An example of data validation: if you have
+        two properties, one called *Start date* and one called *End
+        date*, the *End date* should never be earlier than the *Start
+        date*.
+    4.  *Listable*: if checked, the object appears in the "Browse
+        object" dropdown of the admin UI. Please note that this does not
+        apply to the ELN UI.
+    5.  *Show container*: if checked, container objects are shown.
+        Please note that this does not apply to the ELN UI.
+    6.  *Show parents*: if checked, parents of the object are shown 
+    7.  *Unique subcodes*: this applies to contained samples, which can
+        have unique subcodes if this property is checked. Please note
+        that the concept of *container* and *contained samples* are not
+        used in the ELN.
+    8.  *Generate Codes automatically*: check this if you want to have
+        Object codes automatically generated by openBIS
+    9.  *Show parent metadata*: check this if you want to have parents'
+        metadata shown. If not, only parents' codes will be shown
+    10. *Generated Code prefix*: this is the prefix of the code used for
+        each new registered object. A good convention is to use the
+        first 3 letters of the Object Type Code as Code Prefix. E.g. if
+        the Object Type Code is CHEMICAL, the Code prefix can be CHE.
+        Each new chemical registered in openBIS will have CHE1, CHE2,
+        CHE3... CHEn as codes.  
+          
+        ![](/download/attachments/53745926/Screenshot%202020-05-15%20at%2015.15.19.png?version=1&modificationDate=1601541490099&api=v2)
+4.  Add properties: these are the fields that you need for this Object
+    Type. Select *Entity: Add* at the bottom of the page.
+    See [HowtoRegisteranExperiment/Collectiontype](#RegisterMasterDataviatheAdminInterface-HowtoRegisteranExperiment/Collectiontype).
+
+  
+
+How to register a Data Set type
+-------------------------------
+
+  
+
+1.  Go to *Admin → Types → Data Set Types*  
+      
+    ![](/download/attachments/53745926/Collection-type-registration-1.png?version=1&modificationDate=1601541490164&api=v2)
+2.  Select *Entity:Add* at the bottom of the page  
+    ![](/download/attachments/53745926/Screenshot%202020-05-15%20at%2016.04.15.png?version=1&modificationDate=1601541490094&api=v2)
+3.  The Data Set Type registration form has the following fields:
+    1.  *Code*: name of the data set (e.g. RAW\_DATA). Code can only
+        take alphanumeric characters and cannot contain spaces.
+    2.  *Description*: you can provide a short description of the data
+        set
+    3.  *Validation plugin*:
+    4.  *Disallow deletion*: if checked, all datasets belonging to this
+        type cannot be deleted
+    5.  *Main Data Set Pattern*: if there is just one data set matching
+        the chosen 'main data set' pattern, it will be automatically
+        displayed. A regular expression is expected. E.g.: '.\*.jpg'
+    6.  *Main Data Set Path:* The path (relative to the top directory of
+        a data set) that will be used as a starting point of 'main data
+        set' lookup. E.g. 'original/images/'
+
+    ![](/download/attachments/53745926/Screenshot%202020-05-15%20at%2016.04.35.png?version=1&modificationDate=1601541490089&api=v2)
+4.  Add properties: these are the fields that you need for this Data
+    Set Type. Select *Entity: Add* at the bottom of the page.
+    See [HowtoRegisteranExperiment/Collectiontype](#RegisterMasterDataviatheAdminInterface-HowtoRegisteranExperiment/Collectiontype).
+
+  
+
+Property Types
+--------------
+
+The full list of properties registered in openBIS is accessible by
+navigating to *Admin → Types → Browse Property Types* 
+
+  
+
+![](/download/attachments/53745926/Collection-type-registration-1.png?version=1&modificationDate=1601541490164&api=v2)
+
+  
+
+In the Property Browser page it is possible to:
+
+1.  Add new properties → *Entity:Add Property Type*.
+2.  Edit existing properties → *Entity:Edit.* It is possible to change
+    the *Label* and *Description* of the property.
+3.  Delete existing properties → *Entity:Delete*. Deleting a property
+    will also delete all associated values if the property is in use. A
+    warning is issued: please read it carefully before deleting properties!
+
+  
+
+![](/download/attachments/53745926/Screenshot%202020-05-15%20at%2016.21.19.png?version=1&modificationDate=1601541490082&api=v2)
+
diff --git a/docs/uncategorized/service-plugins.md b/docs/uncategorized/service-plugins.md
new file mode 100644
index 0000000000000000000000000000000000000000..5ddade27b1b45baac3f543d7974eb29f7256bded
--- /dev/null
+++ b/docs/uncategorized/service-plugins.md
@@ -0,0 +1,348 @@
+[Service Plugins](/display/openBISDoc2010/Service+Plugins)
+----------------------------------------------------------
+
+
+Introduction
+------------
+
+A service plugin runs on a DSS. It is a Java servlet that processes
+incoming requests and generates the responses. A user can trigger a
+service plugin by accessing a URL the servlet has been set up for. A
+service plugin is best configured on the DSS by introducing a [core
+plugin](/display/openBISDoc2010/Core+Plugins) of type `services`. All
+service plugins have the following properties in common:
+
+[TABLE]
+
+Service Plugins
+---------------
+
+### ch.systemsx.cisd.openbis.dss.generic.server.oaipmh.OaipmhServlet
+
+A servlet that handles OAI-PMH protocol requests (see
+<http://www.openarchives.org/OAI/openarchivesprotocol.html> for more
+details on OAI-PMH). The requests are handled in two steps:
+
+-   user authentication
+
+-   response generation
+
+The user authentication step is handled by
+ch.systemsx.cisd.openbis.dss.generic.server.oaipmh.IAuthenticationHandler.
+The handler is configured via the "authentication-handler" property. The
+response generation step is handled
+by ch.systemsx.cisd.openbis.dss.generic.server.oaipmh.IRequestHandler.
+The handler is configured via the "request-handler" property. An example
+of such a configuration is presented below:
+
+**Example**:
+
+**plugin.properties**
+
+    class = ch.systemsx.cisd.openbis.dss.generic.server.oaipmh.OaipmhServlet
+    path = /oaipmh/*
+    request-handler = ch.systemsx.cisd.openbis.dss.generic.server.oaipmh.JythonBasedRequestHandler
+    request-handler.script-path = handler.py
+    authentication-handler = ch.systemsx.cisd.openbis.dss.generic.server.oaipmh.BasicHttpAuthenticationHandler
+
+**Configuration**:
+
+[TABLE]
+
+##### ch.systemsx.cisd.openbis.dss.generic.server.oaipmh.BasicHttpAuthenticationHandler
+
+Handler that performs Basic HTTP authentication as described here:
+<http://en.wikipedia.org/wiki/Basic_access_authentication>.
+
+##### ch.systemsx.cisd.openbis.dss.generic.server.oaipmh.AnonymousAuthenticationHandler
+
+Handler that allows clients to access the OAI-PMH service without any
+authentication. The handler automatically authenticates as a user
+specified in the configuration.
+
+**Configuration:**
+
+[TABLE]
+
+##### ch.systemsx.cisd.openbis.dss.generic.server.oaipmh.JythonBasedRequestHandler
+
+OAI-PMH response handler that delegates the response generation to a
+Jython script. The script can be configured via the "script-path"
+property. The script should define a function with the following signature:
+
+**handler.py**
+
+    def handle(request, response)
+
+where request is a javax.servlet.http.HttpServletRequest and
+response is a javax.servlet.http.HttpServletResponse. The following
+variables are available in the script:
+
+-   searchService
+    - ch.systemsx.cisd.openbis.dss.generic.shared.api.internal.v2.ISearchService
+-   searchServiceUnfiltered -
+    ch.systemsx.cisd.openbis.dss.generic.shared.api.internal.v2.ISearchService
+-   mailService -
+    ch.systemsx.cisd.openbis.dss.generic.server.plugins.jython.api.IMailService
+-   queryService -
+    ch.systemsx.cisd.openbis.dss.generic.shared.api.internal.IDataSourceQueryService
+-   authorizationService -
+    ch.systemsx.cisd.openbis.dss.generic.shared.api.internal.v2.authorization.IAuthorizationService
+-   sessionWorkspaceProvider -
+    ch.systemsx.cisd.openbis.dss.generic.shared.api.internal.ISessionWorkspaceProvider
+-   contentProvider -
+    ch.systemsx.cisd.openbis.dss.generic.shared.api.internal.v2.IDataSetContentProvider
+-   contentProviderUnfiltered -
+    ch.systemsx.cisd.openbis.dss.generic.shared.api.internal.v2.IDataSetContentProvider
+-   userId
+
+**Configuration:**
+
+[TABLE]
+
+An example of a Jython script that can be used for handling OAI-PMH
+responses is presented below. The example uses the XOAI Java library
+(see <https://github.com/lyncode/xoai> for more details on the library)
+to provide data set metadata. The XOAI library is available in openBIS and
+can be used without any additional configuration.
+
+**handler.py**
+
+    #! /usr/bin/env python
+    from java.util import Date
+    from java.text import SimpleDateFormat
+    from xml.etree import ElementTree
+    from xml.etree.ElementTree import Element, SubElement 
+    from com.lyncode.xoai.dataprovider import DataProvider
+    from com.lyncode.xoai.dataprovider.model import Context, MetadataFormat, Item
+    from com.lyncode.xoai.dataprovider.repository import Repository, RepositoryConfiguration
+    from com.lyncode.xoai.dataprovider.parameters import OAIRequest
+    from com.lyncode.xoai.dataprovider.handlers.results import ListItemIdentifiersResult, ListItemsResults
+    from com.lyncode.xoai.model.oaipmh import OAIPMH, DeletedRecord, Granularity, Metadata 
+    from com.lyncode.xoai.xml import XmlWriter
+    from ch.systemsx.cisd.openbis.dss.generic.server.oaipmh.xoai import SimpleItemIdentifier, SimpleItem, SimpleItemRepository, SimpleSetRepository
+    from ch.systemsx.cisd.openbis.generic.shared.api.v1.dto import SearchCriteria
+    from ch.systemsx.cisd.openbis.generic.shared.api.v1.dto.SearchCriteria import MatchClause, MatchClauseAttribute, MatchClauseTimeAttribute, CompareMode 
+    DATE_FORMAT = SimpleDateFormat("yyyy-MM-dd")
+    TIME_ZONE = "0"
+
+    def handle(req, resp):
+        context = Context();
+        context.withMetadataFormat(MetadataFormat().withPrefix("testPrefix").withTransformer(MetadataFormat.identity()));
+        configuration = RepositoryConfiguration();
+        configuration.withMaxListSets(100);
+        configuration.withMaxListIdentifiers(100);
+        configuration.withMaxListRecords(100);
+        configuration.withAdminEmail("test@test");
+        configuration.withBaseUrl("http://localhost");
+        configuration.withDeleteMethod(DeletedRecord.NO);
+        configuration.withEarliestDate(Date(0));
+        configuration.withRepositoryName("TEST");
+        configuration.withGranularity(Granularity.Day);
+        repository = Repository();
+        repository.withConfiguration(configuration);
+        repository.withItemRepository(ItemRepository());
+        repository.withSetRepository(SimpleSetRepository());
+        provider = DataProvider(context, repository);
+        params = {}
+        for param in req.getParameterNames():
+            values = []
+            for value in req.getParameterValues(param):
+                values.append(value)
+            params[param] = values
+        request = OAIRequest(params);
+        response = provider.handle(request);
+        writer = XmlWriter(resp.getOutputStream());
+        response.write(writer);
+        writer.flush();
+
+    class ItemRepository(SimpleItemRepository):
+      
+        def doGetItem(self, identifier):
+            criteria = SearchCriteria()
+            criteria.addMatchClause(MatchClause.createAttributeMatch(MatchClauseAttribute.CODE, identifier))
+            dataSets = searchService.searchForDataSets(criteria)
+            
+            if dataSets:
+                return createItem(dataSets[0])
+            else:
+                return None
+        def doGetItemIdentifiers(self, filters, offset, length, setSpec, fromDate, untilDate):
+            results = self.doGetItems(filters, offset, length, setSpec, fromDate, untilDate)
+            return ListItemIdentifiersResult(results.hasMore(), results.getResults(), results.getTotal())
+        
+        def doGetItems(self, filters, offset, length, setSpec, fromDate, untilDate):
+            criteria = SearchCriteria()
+            if fromDate:
+                criteria.addMatchClause(MatchClause.createTimeAttributeMatch(MatchClauseTimeAttribute.REGISTRATION_DATE, CompareMode.GREATER_THAN_OR_EQUAL, DATE_FORMAT.format(fromDate), TIME_ZONE))
+            if untilDate:
+                criteria.addMatchClause(MatchClause.createTimeAttributeMatch(MatchClauseTimeAttribute.REGISTRATION_DATE, CompareMode.LESS_THAN_OR_EQUAL, DATE_FORMAT.format(untilDate), TIME_ZONE))
+            dataSets = searchService.searchForDataSets(criteria)
+            if dataSets:
+                hasMoreResults = (offset + length) < len(dataSets)
+                results = [createItem(dataSet) for dataSet in dataSets[offset:(offset + length)]]
+                total = len(dataSets)
+                return ListItemsResults(hasMoreResults, results, total)
+            else:
+                return ListItemsResults(False, [], 0)
+
+
+    def createItemMetadata(dataSet):
+        properties = Element("properties")
+        
+        for propertyCode in dataSet.getAllPropertyCodes():
+            property = SubElement(properties, "property")
+            property.set("code", propertyCode)
+            property.text = dataSet.getPropertyValue(propertyCode) 
+            
+        return Metadata(ElementTree.tostring(properties))
+
+    def createItem(dataSet):
+        item = SimpleItem()
+        item.setIdentifier(dataSet.getDataSetCode())
+        item.setDatestamp(Date())
+        item.setMetadata(createItemMetadata(dataSet))
+        return item
+
+Now, assuming that the OaipmhServlet has been configured at the /oaipmh
+path, try accessing the following URLs:
+
+-   \<data store url\>/oaipmh/?verb=Identify - returns information
+    about this OAI-PMH repository
+-   \<data store
+    url\>/oaipmh/?verb=ListIdentifiers&metadataPrefix=testPrefix -
+    returns the first 100 data set codes and a resumption token if
+    there are more than 100 data sets available
+-   \<data store
+    url\>/oaipmh/?verb=ListIdentifiers&resumptionToken=\<resumption
+    token\> - returns another 100 data set codes
+-   \<data store
+    url\>/oaipmh/?verb=ListRecords&metadataPrefix=testPrefix - returns
+    the first 100 data set records and a resumption token if there
+    are more than 100 data sets available
+-   \<data store
+    url\>/oaipmh/?verb=ListRecords&resumptionToken=\<resumption
+    token\> - returns another 100 data set records
+-   \<data store
+    url\>/oaipmh/?verb=GetRecord&metadataPrefix=testPrefix&identifier=\<data
+    set code\> - returns a record for a data set with the specified
+    code
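+
+As a minimal sketch (not part of the original page), such a request can
+be issued from Python; the DSS URL and credentials are placeholders and
+the Basic HTTP authentication handler is assumed:
+
+    #!/usr/bin/env python
+    import base64
+    import urllib2
+
+    DSS_URL = "https://localhost:8444"  # placeholder data store URL
+    # Encode placeholder credentials for Basic HTTP authentication
+    credentials = base64.b64encode("some-user:some-password")
+
+    request = urllib2.Request(DSS_URL + "/oaipmh/?verb=Identify")
+    request.add_header("Authorization", "Basic " + credentials)
+    # Print the OAI-PMH Identify response (an XML document)
+    print urllib2.urlopen(request).read()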
+
+##### ch.systemsx.cisd.openbis.dss.screening.server.oaipmh.ScreeningJythonBasedRequestHandler
+
+Screening version of
+ch.systemsx.cisd.openbis.dss.generic.server.oaipmh.JythonBasedRequestHandler.
+It works exactly the same as the generic counterpart, but it defines an
+additional variable that is available in the script:
+
+-   screeningFacade
+    - ch.systemsx.cisd.openbis.plugin.screening.client.api.v1.IScreeningOpenbisServiceFacade
+
diff --git a/docs/uncategorized/sharing-databases.md b/docs/uncategorized/sharing-databases.md
new file mode 100644
index 0000000000000000000000000000000000000000..7172eacf8b7f2a46fa842915f2881bc2aae81cdc
--- /dev/null
+++ b/docs/uncategorized/sharing-databases.md
@@ -0,0 +1,295 @@
+[Sharing Databases](/display/openBISDoc2010/Sharing+Databases)
+--------------------------------------------------------------
+
+
+### Introduction
+
+Application server and data store server(s) can share the same database.
+For example, openBIS screening uses a database for image metadata
+(called imaging-db) which is used by the DSS to register and deliver
+images. It is also used by the AS to provide information about available
+images and transformations.
+
+To configure the databases, [core
+plugins](/display/openBISDoc2010/Core+Plugins) have to be defined on the
+AS and for each DSS. For a DSS it is a core plugin of
+type `data-sources` and for the AS it is a core plugin of
+type `dss-data-sources`. Optionally the AS can get configuration
+parameters from its registered DSS instances by defining a mapping file
+`etc/dss-datasource-mapping` for the AS.
+
+When a DSS registers itself at the AS, all its data source
+definitions are provided to and stored on the AS. This allows the AS (if a
+mapping file is defined)
+
+-   to reduce configuration of core plugins of type `dss-data-sources`
+    to a minimum.
+-   to have only one core plugin of type `dss-data-sources` independent
+    of the number of technologies/modules and DSS instances.
+
+The AS can have only one data source per pair defined by data store code
+and module code.
+
+### Share Databases without Mapping File
+
+Without a mapping file specified, data sources are independently defined
+for DSS and AS. For details see [DSS Data
+Sources](/display/openBISDoc2010/Installation+and+Administrators+Guide+of+the+openBIS+Data+Store+Server#InstallationandAdministratorsGuideoftheopenBISDataStoreServer-DataSources)
+and [AS Data
+Sources](/display/openBISDoc2010/Installation+and+Administrator+Guide+of+the+openBIS+Server#InstallationandAdministratorGuideoftheopenBISServer-ConfiguringDSSDataSources),
+respectively. Note that the properties `database-driver`
+and `database-url` are mandatory for the AS.
+
+### Share Databases with Mapping File
+
+When a mapping file is used, the configuration doesn't change for data
+sources defined for the DSS. But the configuration parameters for an
+actually used data source in the AS can come from three sources:
+
+-   AS core plugins of type `dss-data-sources`
+-   Data source definitions as provided by the data stores
+-   Mapping file `etc/dss-datasource-mapping`
+
+AS core plugins no longer need to define the properties
+`database-driver` and `database-url` because they are provided by the
+DSS or the mapping file. The same is true for the properties `username`
+and `password`. In fact, the `plugin.properties` can be empty; usually
+only parameters for logging and connection pooling are used.
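+
+For example, the `plugin.properties` of such a reusable AS core plugin
+could be empty apart from comments (a minimal sketch):
+
+**plugin.properties**
+
+    # Intentionally empty: database driver, URL, username and password
+    # are taken from the data source definitions provided by the DSS
+    # instances at registration, or from etc/dss-datasource-mapping.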
+
+The mapping file is used to pick the right AS core plugin and the right
+data source provided by the DSS. In addition, database credentials can
+be overwritten by the mapping file.
+
+Only those properties of a core plugin of type `dss-data-sources` which
+are **undefined** are overwritten.
+
+The mapping file is a text file with lines of the following syntax:
+
+    <data store code pattern>.<module code pattern>.<type> = <value>
+
+where `<data store code pattern>` and `<module code pattern>`
+are wildcard patterns for the data store code and module/technology
+code, respectively. The `<type>` can have one of the following
+values:
+
+| `<type>`           | Meaning of `<value>`                                             |
+|--------------------|------------------------------------------------------------------|
+| `config`           | Key of the AS core plugin of type `dss-data-sources` to be used.  |
+| `data-source-code` | Code of the data source definition provided by the DSS.           |
+| `host-part`        | Host part (host name and optional port) of the database URL.      |
+| `sid`              | Database name (aka sid).                                          |
+| `username`         | Database user name.                                               |
+| `password`         | Password of the database user.                                    |
+
+Empty lines and lines starting with '\#' will be ignored.
+
+When the AS needs a data source for a specific data store and module it
+consults the mapping file line by line. For each type it considers only
+the last line matching the actual data store and module code. From this
+information it is able to pick the right AS core plugin of
+type `dss-data-sources`, the data source definitions provided by the
+DSS at registration, and the values for the host part of the URL, the
+database name, user and password.
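+
+For example, in the following sketch the last matching line of
+type `config` wins: for module screening the AS core plugin
+`dss[screening]` is picked, while any other module falls back to `dss`:
+
+**etc/dss-datasource-mapping**
+
+    # generic default for all data stores and modules
+    *.*.config = dss
+    # more specific definition, overriding the line above for screening
+    *.screening.config = dss[screening]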
+
+If no matching line of type `config` is found, the AS core plugin with
+key `<actual data store code>[<actual module code>]` (e.g.
+`DSS1[screening]`) is used.
+
+If no matching line of type `data-source-code` is found, it is assumed
+that the data store has one and only one data source. Thus, the data
+source code has to be defined in the mapping file if the data store has
+more than one data source. Remember, per data store and module there
+can be only one data source for the AS.
+
+Here are some examples for various use cases:
+
+#### Mapping all DSSs onto one
+
+**etc/dss-datasource-mapping**
+
+    *.*.config = dss
+
+This means that any request for a data source for data store x and
+module/technology y will be mapped to the same configuration, namely
+the AS core plugin `dss`. If one of the properties driver class, URL,
+user name, or password is missing it will be replaced by the value from
+the data source definition provided by data store server x at
+registration. This works only if every DSS instance has exactly
+**one** data source specified.
+
+The following mapping file is similar:
+
+**etc/dss-datasource-mapping**
+
+    *.*.config = dss[*]
+
+This means that any request for a data source for data store x and
+module/technology y will be mapped to the AS core plugin `dss` of
+module y.
+
+#### Mapping all DSSs onto one per module
+
+**etc/dss-datasource-mapping**
+
+    *.proteomics.config = dss1[proteomics]
+    *.proteomics.data-source-code = proteomics-db
+    *.screening.config = dss1[screening]
+    *.screening.data-source-code = imaging-db
+
+All DSS instances for the same module are mapped onto an AS core plugin
+named `dss1` of the corresponding module. This time the data source
+code is also specified. This is needed if the corresponding DSS has
+more than one data source defined. For example, in screening
+`path-info-db` is often used in addition to `imaging-db` to speed up
+file browsing in the data store.
+
+#### Overwriting Parameters
+
+Reusing the same AS `dss-data-sources` core plugin via the mapping file
+is most flexible if no driver, URL, username, or password has been
+defined in such a core plugin. In this case all these parameters come
+from the data source information provided at DSS registration. If DSS
+and AS are running on the same machine the AS can usually use these
+parameters, and mapping files like those in the previous examples are
+enough.
+
+The situation is different if the DSS instances, the AS, and the
+database server are running on different machines. The following
+example assumes that the AS and the database server are running on the
+same machine but at least one of the DSS instances is running on a
+different machine. In this case the database URL for such a DSS
+instance could be different from the URL for the AS.
+
+**etc/dss-datasource-mapping**
+
+    *.screening.config = dss1[screening]
+    *.screening.data-source-code = imaging-db
+    *.screening.host-part = localhost 
+
+The database name (aka sid), user, and password can be overwritten in
+the same way.
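+
+For instance, assuming the type keyword for the database name is `sid`
+(as in the table above), a line overwriting it could look like this
+(the database name is a placeholder):
+
+**etc/dss-datasource-mapping**
+
+    *.screening.sid = imaging_productive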
+
+#### Overwriting Generic Settings
+
+**etc/dss-datasource-mapping**
+
+    *.screening.config = dss1[screening]
+    *.screening.data-source-code = imaging-db
+    *.screening.host-part = localhost 
+    *.screening.username = openbis
+    *.screening.password = !a7zh93jP.
+    DSS3.screening.host-part = my.domain.org:1234
+    DSS3.screening.username = ob
+    DSS3.screening.password = 8uij.hg6
+
+This is an example where all DSS instances except DSS3 access the same
+database server, which is on the same machine as the AS. Username and
+password are also set in order to ignore the corresponding data source
+definitions of all DSS instances. DSS3 uses a different database server
+which could be on the same machine as DSS3; its username and password
+are also different.
+
+Note that the generic mapping definitions (i.e. definitions with
+wildcards for data store codes or module codes) should appear before
+the more specific definitions.
+
diff --git a/docs/uncategorized/user-group-management-for-multi-groups-openbis-instances.md b/docs/uncategorized/user-group-management-for-multi-groups-openbis-instances.md
new file mode 100644
index 0000000000000000000000000000000000000000..d4f9c625f991016bb4600700c7d8b6c385837ec6
--- /dev/null
+++ b/docs/uncategorized/user-group-management-for-multi-groups-openbis-instances.md
@@ -0,0 +1,628 @@
+[User Group Management for Multi-groups openBIS Instances](/display/openBISDoc2010/User+Group+Management+for+Multi-groups+openBIS+Instances)
+--------------------------------------------------------------------------------------------------------------------------------------------
+
+
+### Introduction
+
+Running openBIS as a facility means that different groups share the
+same openBIS instance. Therefore, the following demands have to be
+addressed by correct configuration of such an instance:
+
+-   A user should only have access to the data of groups to which he or
+    she belongs.
+-   Each group should have its own disk space on the DSS, by assigning
+    each group to a specific
+    [share](/display/openBISDoc2010/Installation+and+Administrators+Guide+of+the+openBIS+Data+Store+Server#InstallationandAdministratorsGuideoftheopenBISDataStoreServer-SegmentedStore).
+-   Making openBIS available for a new group should be easy.
+-   Optionally, usage reports should be sent regularly.
+
+In order to fulfill these demands
+
+-   a `UserManagementMaintenanceTask` has to be configured on the AS,
+-   an `EagerShufflingTask` for the `PostRegistrationTask` has to be
+    configured on the DSS,
+-   optionally, a `UsageReportingTask` has to be configured on the AS.
+
+If a new group is added
+
+-   a new share has to be added to the DSS store folder (a symbolic
+    link to an NFS directory),
+-   a group definition has to be added to a configuration file by
+    adding LDAP group keys or an explicit list of user ids.
+
+### Configuration
+
+Two types of configurations are needed:
+
+-   Static configurations: changes to these configurations require a
+    restart of openBIS (AS and/or DSS).
+-   Dynamic configurations: changes apply without a restart of openBIS.
+
+#### Static Configurations
+
+The necessary static configurations have to be specified in two places:
+AS and DSS service.properties.
+
+##### AS service.properties
+
+Here an LDAPAuthenticationService (only if needed) and a
+UserManagementMaintenanceTask are configured:
+
+**AS service.properties**
+
+    # Authentication service.
+    # Usually a stacked service where first a file-based service is asked (for users like etl-server, i.e. DSS)
+    # and second the LDAP service if the file-based service fails.
+    authentication-service = file-ldap-authentication-service
+
+    # When a new person is created in the database the authentication service is asked by default whether this
+    # person is known by the authentication service.
+    # In the case of single-sign-on this doesn't work. In this case the authentication service shouldn't be asked
+    # and the flag 'allow-missing-user-creation' should be set to 'true' (default: 'false').
+    #
+    # allow-missing-user-creation = false
+
+    # The URL of the LDAP server, e.g. "ldaps://ldaps-hit-1.ethz.ch"
+    ldap.server.url = <LDAP URL>
+    # The distinguished name of the security principal, e.g. "CN=carl,OU=EthUsers,DC=d,DC=ethz,DC=ch"
+    ldap.security.principal.distinguished.name = <distinguished name to login to the LDAP server>
+    # Password of the LDAP user account that will be used to login to the LDAP server to perform the queries
+    ldap.security.principal.password = <password of the user to connect to the LDAP server>
+    # The search base, e.g. "ou=users,ou=nethz,ou=id,ou=auth,o=ethz,c=ch"
+    ldap.searchBase = <search base>
+    ldap.queryTemplate = (%s)
+    ldap.queryEmailForAliases = true
+
+    # Maintenance tasks for user management
+    maintenance-plugins = user-management, usage-reporting
+
+    user-management.class = ch.systemsx.cisd.openbis.generic.server.task.UserManagementMaintenanceTask
+    # Start time in 24h notation
+    user-management.start = 01:15
+    # Time interval of execution
+    user-management.interval = 1 days
+    # Path to the file with dynamic configuration
+    user-management.configuration-file-path = ../../../data/user-management-maintenance-config.json
+    # Path to the file with information which maps groups to data store shares. 
+    # Will be created by the maintenance task and is needed by DSS (EagerShufflingTask during post registration)
+    user-management.shares-mapping-file-path = ../../../data/shares-mapping.txt
+    # Path to the audit log file. Default: logs/user-management-audit_log.txt
+    # user-management.audit-log-file-path =
+
+    usage-reporting.class = ch.systemsx.cisd.openbis.generic.server.task.UsageReportingTask
+    # Time interval of execution and also length of the report period
+    usage-reporting.interval = 7 days
+    # Path to the file with group definition
+    usage-reporting.configuration-file-path = ${user-management.configuration-file-path}
+    # User reporting type. Possible values are NONE, ALL, OUTSIDE_GROUP_ONLY. Default: ALL
+    usage-reporting.user-reporting-type = OUTSIDE_GROUP_ONLY
+    # Comma-separated list of e-mail addresses for report sending
+    usage-reporting.email-addresses = <address 1>, <address 2>, ... 
+
+    # Mail server configuration is needed by UsageReportingTask
+    mail.from = openbis@<host>
+    mail.smtp.host = <SMTP host>
+    mail.smtp.user = <can be empty>
+    mail.smtp.password = <can be empty>
+
+With this template configuration the UserManagementMaintenanceTask runs
+every night at 1:15 am. It reads the configuration
+file `<installation path>/data/user-management-maintenance-config.json`
+and creates `<installation path>/data/shares-mapping.txt`. Every week a
+usage report file of the previous week is sent to the specified
+addresses.
+
+For the LDAP configuration `ldap.server.url`,
+`ldap.security.principal.distinguished.name`, `ldap.security.principal.password`
+and `ldap.searchBase` have to be specified.
+
+The LDAP service is not only used for authenticating users but also to
+obtain all users of a group. In the latter case an independent query
+template can be specified by the property `ldap-group-query-template` of
+the `plugin.properties` of the `UserManagementMaintenanceTask` (since
+20.10.1.1). The % character in this template will be replaced by the
+LDAP group key.
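+
+As a sketch, such a query template could be set in the AS
+service.properties like this (the LDAP filter itself is an assumption
+and depends on the LDAP schema in use):
+
+**AS service.properties**
+
+    # '%' is replaced by the LDAP group key at query time
+    user-management.ldap-group-query-template = (cn=%)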
+
+###### Active Directory
+
+If the LDAP service is actually an Active Directory service the
+configuration is a bit different. These are the changes:
+
+-   Remove `ldap.queryTemplate`. This means that the default
+    value `(&(objectClass=organizationalPerson)(objectCategory=person)(objectClass=user)(%s))`
+    will be used.
+
+-   It might be necessary to increase the timeout. The default value is
+    10 seconds. Example: `ldap.timeout = 1 min`
+
+-   Add the following line to the AS service.properties:
+
+    **AS service.properties**
+
+        user-management.filter-key = memberOf:1.2.840.113556.1.4.1941:
+
+The LDAP group keys described below in section *Dynamic Configurations*
+have to be full distinguished names (DN),
+e.g. `CN=id-sis-source,OU=Custom,OU=EthLists,DC=d,DC=ethz,DC=ch`. To
+find the correct DN an LDAP browsing tool (like Apache Directory Studio
+<https://directory.apache.org/studio/>) might be useful.
+
+##### DSS service.properties
+
+Here the PostRegistrationMaintenanceTask has to be extended for eager
+shuffling.
+
+**DSS service.properties**
+
+    # List of post-registration tasks for each data set, executed in the specified order.
+    # Note that pathinfo-feeding is already defined.
+    post-registration.post-registration-tasks = pathinfo-feeding, eager-shuffling
+    post-registration.eager-shuffling.class = ch.systemsx.cisd.etlserver.postregistration.EagerShufflingTask
+    post-registration.eager-shuffling.share-finder.class = ch.systemsx.cisd.openbis.dss.generic.shared.MappingBasedShareFinder
+    # Path to the file with information which maps groups to data store shares. 
+    post-registration.eager-shuffling.share-finder.mapping-file = ../../data/shares-mapping.txt
+
+Eager shuffling moves the just registered data set from share 1 to the
+share of the group as specified
+in `<installation path>/data/shares-mapping.txt`. For more details about
+share mapping see [Mapping File for Share Ids and Archiving
+Folders](/display/openBISDoc2010/Mapping+File+for+Share+Ids+and+Archiving+Folders).
+
+#### Dynamic Configurations
+
+Each time the UserManagementMaintenanceTask is executed it reads the
+configuration file specified
+in `user-management.configuration-file-path` of the AS `service.properties`.
+It is a text file in JSON format that needs to be created manually and
+has the following structure:
+
+    {
+        "globalSpaces": ["<space 1>", "<space 2>", ...],
+        "commonSpaces":
+        {
+            "<role 1>": ["<space post-fix 11>", "<space post-fix 12>", ...],
+            "<role 2>": ["<space post-fix 21>", "<space post-fix 22>", ...],
+            ...
+        },
+        "commonSamples":
+        {
+            "<sample identifier template 1>": "<sample type 1>", 
+            "<sample identifier template 2>": "<sample type 2>",
+            ...
+        },
+        "commonExperiments": 
+        [
+            {
+                "identifierTemplate" : "<experiment identifier template 1>",
+                "experimentType"   :  "<experiment type 1>", 
+                "<property code 1>"  :  "<property value 1>",
+                "<property code 2>"  :  "<property value 2>",
+                ... 
+            }, 
+            {
+                "identifierTemplate" : "<experiment identifier template 2>",
+                "experimentType"   :  "<experiment type 2>", 
+                "<property code 1>"  :  "<property value 1>",
+                "<property code 2>"  :  "<property value 2>",
+                ... 
+            },  
+            ...
+        ],
+        "instanceAdmins": ["<instance admin user id 1>", "<instance admin user id 1>"],
+        "groups":
+        [
+            {
+                "name": "<human readable group name 1>",
+                "key": "<unique group key 1>",
+                "ldapGroupKeys": ["<ldap group key 11>", "<ldap group key 12>", ...],
+                "users": ["<user id 11>", "<user id 12>", ...],
+                "admins": ["<user id 11>", "<user id 12>", ...],
+                "shareIds": ["<share id 11>", "<share id 12>", ...],
+                "useEmailAsUserId": true/false (default: false),
+                "createUserSpace": true/false (default: true),
+                "userSpaceRole" : <role> (default: non)
+       },
+            {
+                "name": "<human readable group name 2>",
+                "key": "<unique group key 2>",
+                "ldapGroupKeys": ["<ldap group key 21>", "<ldap group key 22>", ...],
+                "admins": ["<user id 21>", "<user id 22>", ...],
+                "shareIds": ["<share id 21>", "<share id 22>", ...],
+                "useEmailAsUserId": true/false (default: false),
+                "createUserSpace": true/false (default: true),
+                "userSpaceRole" : <role> (default: non)
+        },
+            ...
+        ]
+    }
+
+Example:
+
+    {
+        "globalSpaces": ["ELN_SETTINGS"],
+        "commonSpaces":
+        {
+            "USER": ["INVENTORY", "MATERIALS", "METHODS", "STORAGE", "STOCK_CATALOG"],
+            "OBSERVER": ["ELN_SETTINGS", "STOCK_ORDERS"]
+        },
+        "commonSamples":
+        {
+            "ELN_SETTINGS/ELN_SETTINGS": "GENERAL_ELN_SETTINGS"
+        }, 
+        "commonExperiments":
+        [
+            {
+                "identifierTemplate" : "ELN_SETTINGS/TEMPLATES/TEMPLATES_COLLECTION",
+                "experimentType" : "COLLECTION",
+                "$NAME" : "Templates Collection",
+                "$DEFAULT_OBJECT_TYPE" : null,
+                "$DEFAULT_COLLECTION_VIEW" : "LIST_VIEW"
+            },
+            {
+                "identifierTemplate" : "ELN_SETTINGS/STORAGES/STORAGES_COLLECTION",
+                "experimentType" : "COLLECTION",
+                "$NAME" : "Storages Collection",
+                "$DEFAULT_OBJECT_TYPE" : "STORAGE",
+                "$DEFAULT_COLLECTION_VIEW" : "LIST_VIEW"
+            },
+            {
+                "identifierTemplate" : "PUBLICATIONS/PUBLIC_REPOSITORIES/PUBLICATION_COLLECTION",
+                "experimentType" : "COLLECTION",
+                "$NAME" : "Publication Collection",
+                "$DEFAULT_OBJECT_TYPE" : "PUBLICATION",
+                "$DEFAULT_COLLECTION_VIEW" : "LIST_VIEW"
+            },
+            {
+                "identifierTemplate" : "STOCK_ORDERS/ORDERS/ORDER_COLLECTION",
+                "experimentType" : "COLLECTION",
+                "$NAME" : "Order Collection",
+                "$DEFAULT_OBJECT_TYPE" : "ORDER",
+                "$DEFAULT_COLLECTION_VIEW" : "LIST_VIEW"
+            },
+            {
+                "identifierTemplate" : "STOCK_CATALOG/PRODUCTS/PRODUCT_COLLECTION",
+                "experimentType" : "COLLECTION",
+                "$NAME" : "Product Collection",
+                "$DEFAULT_OBJECT_TYPE" : "PRODUCT",
+                "$DEFAULT_COLLECTION_VIEW" : "LIST_VIEW"
+            },
+            {
+                "identifierTemplate" : "STOCK_CATALOG/REQUESTS/REQUEST_COLLECTION",
+                "experimentType" : "COLLECTION",
+                "$NAME" : "Request Collection",
+                "$DEFAULT_OBJECT_TYPE" : "REQUEST",
+                "$DEFAULT_COLLECTION_VIEW" : "LIST_VIEW"
+            },
+            {
+                "identifierTemplate" : "STOCK_CATALOG/SUPPLIERS/SUPPLIER_COLLECTION",
+                "experimentType" : "COLLECTION",
+                "$NAME" : "Supplier Collection",
+                "$DEFAULT_OBJECT_TYPE" : "SUPPLIER",
+                "$DEFAULT_COLLECTION_VIEW" : "LIST_VIEW"
+            }
+        ],
+        "groups": 
+        [
+            {
+                "name":"ID SIS",
+                "key":"SIS",
+                "ldapGroupKeys": ["id-sis-source"],
+                "admins": ["abc", "def"],
+                "shareIds": ["2", "3"],
+                "createUserSpace": false
+            }
+        ]
+    }
+
+##### Section `globalSpaces`
+
+Optional. A list of space codes. If the corresponding spaces do not
+exist they will be created. All users of all groups will have
+SPACE\_OBSERVER rights on these spaces. For this reason the
+authorization group `ALL_GROUPS` will be created.
+
+##### Section `commonSpaces`
+
+Optional. The following roles are allowed: ADMIN, USER, POWER\_USER,
+OBSERVER.
+
+For each role a list of space post-fix codes is specified. For each
+group of the group section a space with code
+`<group key>_<space post-fix>` will be created. Normal users of the
+group will have access right `SPACE_<ROLE>` and admin users will have
+access right SPACE\_ADMIN. For example, with group key `SIS` and
+post-fix code `INVENTORY` under role `USER`, the space `SIS_INVENTORY`
+is created and normal group users get SPACE\_USER on it.
+
+##### Section `commonSamples`
+
+Optional. A list of key-value pairs where the key is a sample
+identifier template and the value is an existing sample type. The
+template has the form
+
+`<space post-fix code>/<sample post-fix code>`
+
+The space post-fix code has to be in one of the lists of common spaces.
+For each group of the group section a sample with identifier
+
+`<group key>_<space post-fix code>/<group key>_<sample post-fix code>`
+
+of the specified type will be created. For example, with group key
+`SIS` the entry `"ELN_SETTINGS/ELN_SETTINGS": "GENERAL_ELN_SETTINGS"`
+leads to a sample `SIS_ELN_SETTINGS/SIS_ELN_SETTINGS` of
+type `GENERAL_ELN_SETTINGS`.
+
+##### Section `commonExperiments`
+
+Optional. A list of maps where the keys represent the different
+experiment attributes, allowing not only the experiment type but also
+property values to be set. The identifier template has the form
+
+`<space post-fix code>/<project post-fix code>/<experiment post-fix code>`
+
+The space post-fix code has to be in one of the lists of common spaces.
+For each group of the group section an experiment with identifier
+
+`<group key>_<space post-fix code>/<group key>_<project post-fix code>/<group key>_<experiment post-fix code>`
+
+of the specified type will be created. For example, with group key
+`SIS` the template `ELN_SETTINGS/TEMPLATES/TEMPLATES_COLLECTION` leads
+to the experiment
+`SIS_ELN_SETTINGS/SIS_TEMPLATES/SIS_TEMPLATES_COLLECTION`.
+
+##### Section `instanceAdmins` (since version 20.10.6)
+
+Optional. A list of users for which INSTANCE\_ADMIN rights will be
+established. If such users are no longer known by the authentication
+service they will not be revoked.
+
+##### Section `groups`
+
+A list of group definitions. A group definition has the following
+sections:
+
+-   `name`: The human readable name of the group.
+-   `key`: A unique alphanumerical key of the group that follows the
+    same rules as openBIS codes (letters, digits, '-', '.' but no
+    '\_'); for this particular purpose using only capital letters is
+    recommended. It is used to create the two authorization groups
+    `<group key>` and `<group key>_ADMIN`.
+-   `ldapGroupKeys`: A list of group keys known by the LDAP
+    authentication service.
+-   `users`: An explicit list of user ids (see the sketch after this
+    list).
+-   `admins`: A list of user ids. All admin users have SPACE\_ADMIN
+    rights to all spaces (common and user ones) which belong to the
+    group.
+-   `shareIds`: This is a list of ids of data store shares. This list is
+    only needed if `shares-mapping-file-path` has been specified.
+-   `useEmailAsUserId`: (since 20.10.1) If `true` the email address will
+    be used instead of the user ID to determine the code of the user's
+    space. Note that the '@' symbol in the email address will be
+    replaced by '\_AT\_'. This flag should be used if [Single Sign
+    On](/display/openBISDoc2010/Single+Sign+On+Authentication) is used
+    for authentication but LDAP for managing the users of a group.
+    Default: `false`.
+-   `createUserSpace`: (since 20.10.1) A flag that controls the
+    creation of personal user spaces for the users of this group. By
+    default it is set to `true`, i.e. personal user spaces will be
+    created. If set to `false`, no personal user spaces will be created
+    for this group.
+-   `userSpaceRole`: (since 20.10.3) Optional access role (either
+    ADMIN, USER, POWER\_USER, or OBSERVER) for all users of the group
+    on all personal user spaces.
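+
+As a minimal sketch, a group defined by an explicit list of users
+instead of LDAP group keys could look like this (names and ids are
+placeholders):
+
+    {
+        "name": "Demo Group",
+        "key": "DEMO",
+        "users": ["jdoe", "rroe"],
+        "admins": ["jdoe"],
+        "shareIds": ["4"]
+    }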
+
+### What UserManagementMaintenanceTask does
+
+Each time this maintenance task is executed (according to the
+scheduling interval in `plugin.properties`) the JSON configuration file
+is read first. The task does the following:
+
+1.  Updates mapping file of data store shares if
+    `shares-mapping-file-path` has been specified.
+2.  Creates global spaces if they do not exist and allows
+    SPACE\_OBSERVER access by all users of all groups.
+3.  Revokes all users unknown by the authentication service. These users
+    will not be deleted but deactivated. This includes removing home
+    space and all authorization rights.
+4.  Does for each specified group the following:
+    1.  Creates the following two authorization groups if they do not
+        exist:
+        1.  `<group key>`: All users of the group will be members of
+            this authorization group. This group has access rights to
+            common spaces as specified.
+        2.  `<group key>_ADMIN`: All admin users of the group will be
+            members of this authorization group. This group has
+            SPACE\_ADMIN rights to all common spaces and all personal
+            user spaces.
+    2.  Creates common spaces if they do not exist and assigns roles
+        for these spaces to the authorization groups.
+    3.  Creates for each user of the LDAP groups or the explicit list
+        of user ids a personal user space with SPACE\_ADMIN access
+        right (note: since 20.10.1 the creation of personal user spaces
+        can be disabled by setting the "createUserSpace" flag in the
+        group configuration to false). The space code reads
+        `<group key>_<user id>[_<sequence number>]`. A sequence number
+        will be used if there is already a space with code
+        `<group key>_<user id>`. There are two reasons why this can
+        happen:
+        1.  A user left the group and joined it again later but was
+            always known by the authentication service.
+        2.  A user left the group and the institution, i.e. the user is
+            no longer known by the authentication service, but later
+            another user with the same user id joined the group.
+    4.  Creates common samples if they do not exist.
+    5.  Creates common experiments (and necessary projects) if they do
+        not exist.
+5.  Assigns home spaces in accordance with the following rules:
+    1.  If the user has no home space the personal user space of the
+        first group in the JSON configuration file becomes the home
+        space.
+    2.  The home space will not be changed if its code doesn't start
+        with `<group key>_<user id>` for any of the groups.
+    3.  If the user leaves a group the home space will be removed.
+
+    Note: if a user is moved from one group to another group the home
+    space of the user will become the personal user space of the new
+    group.
+
+### Content of the Report File sent by UsageReportingTask
+
+The report file is a TSV text file with the following columns:
+
+[TABLE]
+
+-   The first line in the report (after the column headers) always
+    shows the summary (with unspecified 'group name').
+-   If `configuration-file-path` is specified, the usage of each
+    specified group (in alphabetical order) is listed.
+-   Finally, usage by individual users follows if `user-reporting-type`
+    isn't NONE.
+
+### Common use cases
+
+Here are some common use cases. No openBIS restart is needed for these
+use cases.
+
+#### Adding a new group
+
+In order to make openBIS available for a new group three things have to
+be done by an administrator:
+
+1.  Add one or more shares to the DSS store. These are symbolic links
+    to (remote) disk space which belongs to the new group. Note that
+    the name of the symbolic link has to be a number, which is the
+    share ID.
+2.  Define a new group in the LDAP service and add all persons which
+    should belong to the group. Note that a person can be in more than
+    one group.
+3.  Add a new section under `groups` to the above-mentioned JSON
+    configuration file.
+
+#### Making a user a group admin
+
+Add the user ID to the `admins` list of the group in the JSON
+configuration file.
+
+#### Removing a user from a group
+
+The user has to be removed from the LDAP group on the LDAP service.
+
+#### Adding more disk space
+
+1.  Add a new share for the new disk to the DSS store.
+2.  Add the share id to the `shareIds` list.
+
+  
+
+### Manual configuration of multi-group instances
+
+See [Manual configuration of Multi-groups openBIS
+instances](/display/openBISDoc2010/Manual+configuration+of+Multi-groups+openBIS+instances)
+
+  
+