Commit 9b76e771 authored by kaloyane's avatar kaloyane
[LMS-2149] Create an integration test for archiving (eager archiving + rsync)

SVN: 20428
parent 1cc110cb
@@ -449,7 +449,7 @@ function shutdown_openbis_server {
 # unpack everything, override default configuration with test configuration
 function install_dsss {
     local install_dss=$1
-    local dss_dirs="datastore_server1 datastore_server2 datastore_server_yeastx"
+    local dss_dirs="datastore_server1 datastore_server2 datastore_server_yeastx datastore_server_archiving"
     if [ $install_dss == "true" ]; then
         unpack datastore_server-
         for dss_dir in $dss_dirs; do
#!/bin/bash
# all tests to be executed
TEST_FILE_PATTERN=./test-*.sh
EXIT_WITH_ERROR=false

function print_result {
    local testFile=$1
    local result=$2
    if [ $result -ne 0 ]; then
        echo ERROR: Test "$testFile" has failed.
        EXIT_WITH_ERROR=true
    fi
}

for testScript in $TEST_FILE_PATTERN; do
    echo Executing test $testScript
    ./$testScript $@
    result=$?
    print_result $testScript $result
done

if [ "$EXIT_WITH_ERROR" = "true" ]; then
    exit 1
fi
# jython-dropbox.py: minimal dropbox script that registers the incoming
# folder as a new data set attached to a fixed test experiment
tr = service.transaction()
dataSet = tr.createNewDataSet()
tr.moveFile(incoming.getAbsolutePath(), dataSet)
dataSet.setExperiment(tr.getExperiment("/TEST/TEST_PROJECT/EXP_TEST"))
#
# Data Store Server configuration file
#
#
# Home directory of the JRE that should be used
#
#JAVA_HOME=${JAVA_HOME:=/usr/java/latest}
#
# Options to the JRE
#
JAVA_OPTS=${JAVA_OPTS:=-server}
# Enable debugging if needed
#JAVA_OPTS=${JAVA_OPTS:=-server -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8101}
#
# Maximal number of log files to keep
#
MAXLOGS=5
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j='http://jakarta.apache.org/log4j/'>
<appender name="DEFAULT" class="org.apache.log4j.DailyRollingFileAppender">
<param name="File" value="log/datastore_server_log.txt"/>
<param name="DatePattern" value="'.'yyyy-MM-dd"/>
<layout class="org.apache.log4j.PatternLayout">
<param name="ConversionPattern" value="%d %-5p [%t] %c - %m%n"/>
</layout>
</appender>
<root>
<priority value ="info" />
<appender-ref ref="DEFAULT" />
</root>
</log4j:configuration>
# Unique code of this Data Store Server. Not more than 40 characters.
data-store-server-code = DSS-ARCHIVING
root-dir=..
# The root directory of the data store
storeroot-dir = ${root-dir}/data/main-store
# The directory where the command queue file is located; defaults to storeroot-dir
commandqueue-dir =
# Port
port = 8444
# Session timeout in minutes
session-timeout = 30
# Path to the keystore
keystore.path = etc/openBIS.keystore
# Password of the keystore
keystore.password = changeit
# Key password of the keystore
keystore.key-password = changeit
# The check interval (in seconds)
check-interval = 2
# The time-out for clean up work in the shutdown sequence (in seconds).
# Note that the maximal time for the shutdown sequence to complete can be as large
# as twice this time.
shutdown-timeout = 2
# The period of no write access that needs to pass before an incoming data item is considered
# complete and ready to be processed (in seconds) [default: 300].
# Valid only when the auto-detection method is used to determine if incoming data is ready to be processed.
quiet-period = 5
# If free disk space goes below value defined here, a notification email will be sent.
# Value must be specified in kilobytes (1048576 = 1024 * 1024 = 1GB). If no high water mark is
# specified or if value is negative, the system will not be watching.
highwater-mark = 1048576
# The URL of the LIMS server
server-url = https://localhost:8443/openbis/openbis
# The username to use when contacting the LIMS server
username = etlserver1
# The password to use when contacting the LIMS server
password = <change this>
# The base URL for Web client access.
download-url = https://localhost:8444
# SMTP properties (must start with 'mail' to be considered).
mail.smtp.host = file://${root-dir}/email
mail.from = datastore_server@localhost
mail.smtp.user =
mail.smtp.password =
# Maximum number of retries if renaming failed.
# renaming.failure.max-retries = 12
# The number of milliseconds to wait before retrying to execute the renaming process.
# renaming.failure.millis-to-sleep = 5000
# Globally used separator character which separates entities in a data set file name
data-set-file-name-entity-separator = _
# Comma separated names of processing threads. Each thread should have configuration properties prefixed with its name
# E.g. 'code-extractor' property for the thread 'my-etl' should be specified as 'my-etl.code-extractor'
inputs=jython-dropbox
jython-dropbox.incoming-dir = ${root-dir}/data/incoming-jython
jython-dropbox.top-level-data-set-handler = ch.systemsx.cisd.etlserver.registrator.JythonTopLevelDataSetHandler
jython-dropbox.incoming-data-completeness-condition = auto-detection
jython-dropbox.strip-file-extension = true
jython-dropbox.storage-processor = ch.systemsx.cisd.etlserver.DefaultStorageProcessor
jython-dropbox.script-path = ${root-dir}/data-archiving/scripts/jython-dropbox.py
# ---------------------------------------------------------------------------
# (optional) archiver configuration
# ---------------------------------------------------------------------------
# Configuration of an archiver task. All properties are prefixed with 'archiver.'.
archiver.class = ch.systemsx.cisd.openbis.dss.generic.server.plugins.standard.RsyncArchiver
archiver.destination = localhost:/tmp/integration-tests/archiving/rsync-archive
# Comma separated names of maintenance plugins.
# Each plugin should have configuration properties prefixed with its name.
# Mandatory properties for each <plugin> include:
# <plugin>.class - Fully qualified plugin class name
# <plugin>.interval - The time between plugin executions (in seconds)
# Optional properties for each <plugin> include:
# <plugin>.start - Time of the first execution (HH:mm)
# <plugin>.execute-only-once - If true the task will be executed exactly once,
# interval will be ignored. By default set to false.
maintenance-plugins = auto-archiver
# Performs automatic archiving of 'AVAILABLE' data sets based on their properties
auto-archiver.class = ch.systemsx.cisd.etlserver.plugins.AutoArchiverTask
# The time between subsequent archive operations (in seconds)
auto-archiver.interval = 10
auto-archiver.remove-datasets-from-store=false
#!/bin/bash
# author: Kaloyan Enimanev
#
# The integration test scenario for archiving.
# assumptions:
# 1) postgres is running on the local machine
# 2) The command "ssh localhost" completes successfully without requiring any user input (e.g. password input)
#
# -------------------
# - openBIS + DataStore servers are launched
# - STEP 1
# * one data set is registered via a python dropbox
# * a backup copy of the data set is expected to appear in the archive
# - STEP 2
# * the data in the archive is damaged on purpose
# * archiving + delete from store is triggered
# * we expect the archiving process to "repair" the broken archive copy before deleting
# the data set from the data store
#
# --- include external sources ------------------------
source common.bash
# ----- local constants --------------------
TIME_TO_COMPLETE=60 # time (in seconds) needed by the whole pipeline to process everything
TEST_DATA_DIR=templates/data-archiving
ARCHIVE_DIR=/tmp/integration-tests/archiving/rsync-archive
ARCHIVE_DATASET=ARCHIVE_DATASET
DATA_STORE_DIR=$WORK/data/main-store
DSS_SERVICE_PROPS=$WORK/datastore_server_archiving/etc/service.properties
# ---- Testcase-specific preparation steps ------------
function prepare_step1 {
echo Recreating an empty archive directory "$ARCHIVE_DIR"
rm -fr $ARCHIVE_DIR
mkdir -p $ARCHIVE_DIR
echo Copying test dataset to jython dropbox...
local DIR=$WORK/data/incoming-jython
copy_test_data $ARCHIVE_DATASET $DIR
}
function prepare_step2 {
damage_archive
reconfigure_datastore_server
unset_presentInArchiveFlag_DB
}
function damage_archive {
echo "Inserting invalid content in the archived dataset copy..."
echo "INVALID CONTENT AT THE END OF ARCHIVE" >> $ARCHIVED_DATASET_DIR/archive-me.txt
}
function reconfigure_datastore_server {
# after the reconfiguration
# the auto-archiving will start deleting datasets from store
echo "Reconfiguring auto-archiving to remove datasets from store ...."
local tmpFile=$DSS_SERVICE_PROPS.tmp
sed -e "s/auto-archiver.remove-datasets-from-store=false/auto-archiver.remove-datasets-from-store=true/g" < $DSS_SERVICE_PROPS > $tmpFile
mv $tmpFile $DSS_SERVICE_PROPS
}
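The sed substitution used by reconfigure_datastore_server can be checked in isolation. This sketch runs the same expression against a throwaway properties file created with mktemp, rather than the real DSS configuration:

```shell
# Standalone check of the sed substitution: flip the auto-archiver flag
# in a temporary file, mirroring what reconfigure_datastore_server does.
props=$(mktemp)
echo "auto-archiver.remove-datasets-from-store=false" > "$props"
sed -e "s/auto-archiver.remove-datasets-from-store=false/auto-archiver.remove-datasets-from-store=true/g" \
    < "$props" > "$props.tmp"
mv "$props.tmp" "$props"
newValue=$(cat "$props")
echo "$newValue"   # auto-archiver.remove-datasets-from-store=true
rm -f "$props"
```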
function unset_presentInArchiveFlag_DB {
local psql=`run_psql`
$psql -U postgres -d $DATABASE -c "update external_data set present_in_archive=false"
}
#
# --------- assertions --------------------
#
function assert_last_dataset_content_in_database {
local pattern=$1
echo ==== assert correct last dataset content in database with pattern $pattern ====
local psql=`run_psql`
local dataset=`$psql -U postgres -d $DATABASE \
-c "select d.code, ed.status, ed.present_in_archive \
from data as d \
left join external_data as ed on ed.data_id = d.id \
where d.id = (select max(id) from data)" \
| awk '/ +[0-9]+/' \
| awk '{gsub(/ /,"");print}' \
| awk '{gsub(/\|/,";");print}'`
local lines=`echo "$dataset" | grep "$pattern" | wc -l`
if [ $lines == 0 ]; then
report_error Last dataset does not match pattern "$pattern": $dataset
fi
}
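The awk pipeline above can be exercised against a mocked psql result row (the data set code below is made up for illustration; real codes are timestamp-based, which is why the digit pattern matches them):

```shell
# Feed a fake psql output line plus the "(1 row)" footer through the same
# filters used in assert_last_dataset_content_in_database: keep data rows,
# strip spaces, turn column separators into semicolons.
normalized=$(printf ' 20110101010101-1 | AVAILABLE | t\n(1 row)\n' \
    | awk '/ +[0-9]+/' \
    | awk '{gsub(/ /,"");print}' \
    | awk '{gsub(/\|/,";");print}')
echo "$normalized"   # 20110101010101-1;AVAILABLE;t
```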
function asserts_step1 {
assert_last_dataset_content_in_database ".*;AVAILABLE;t"
ARCHIVED_DATASET_DIR=`find $ARCHIVE_DIR -type d -name "$ARCHIVE_DATASET"`
if [ "$ARCHIVED_DATASET_DIR" == "" ]; then
report_error "Cannot find archived dataset copy under $ARCHIVE_DIR"
else
asserts_valid_archive_copy
fi
}
function asserts_step2 {
assert_last_dataset_content_in_database ".*;ARCHIVED;t"
asserts_valid_archive_copy
local dataset_in_store=`find $DATA_STORE_DIR -type d -name "$ARCHIVE_DATASET"`
if [ -d "$dataset_in_store" ]; then
report_error Data set \"$dataset_in_store\" should have been deleted from the datastore!
fi
}
function asserts_valid_archive_copy {
assert_same_content $TEST_DATA_DIR/$ARCHIVE_DATASET $ARCHIVED_DATASET_DIR
}
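assert_same_content comes from common.bash and is not shown in this commit. A minimal standalone equivalent of the comparison it is expected to perform, using hypothetical temporary directories and a recursive diff:

```shell
# Recreate a tiny "original" and "archive copy" and compare them recursively,
# mimicking what asserts_valid_archive_copy checks for the real data set.
src=$(mktemp -d)
dst=$(mktemp -d)
echo "some archived content" > "$src/archive-me.txt"
cp "$src/archive-me.txt" "$dst/"
if diff -r "$src" "$dst" > /dev/null; then
    result=OK
else
    result=DAMAGED
fi
echo "archive copy $result"   # archive copy OK
rm -rf "$src" "$dst"
```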
# --- helper functions ---------
function logAndSleep {
local sleepSec=$1
echo "Sleeping for $sleepSec seconds ..."
sleep $sleepSec
}
function copy_test_data {
local NAME=$1
local DIR=$2
cp -RPpv $TEST_DATA_DIR/$NAME $DIR
clean_svn $DIR/$NAME
}
# Prepare template incoming data and some destination data structures
function prepare_directory_structures {
echo Re-creating in/out directory structures
for data_folder in data data-archiving; do
work_destination=$WORK/$data_folder
rm -fr $work_destination
mkdir -p $work_destination
cp -Rv $TEMPLATE/$data_folder $WORK
clean_svn $work_destination
done
}
#
# ------- test workflow --------------
#
function test_step1 {
#
# step 1 : expect new data set to be archived
#
prepare_step1
switch_dss "on" datastore_server_archiving
logAndSleep $TIME_TO_COMPLETE
switch_dss "off" datastore_server_archiving
logAndSleep 5
asserts_step1
}
function test_step2 {
#
# step 2 : damage archive copy and expect the Rsync archiver to detect the error
#
prepare_step2
switch_dss "on" datastore_server_archiving
logAndSleep $TIME_TO_COMPLETE
switch_dss "off" datastore_server_archiving
logAndSleep 5
asserts_step2
}
function integration_test {
prepare_directory_structures
build_and_install $@
test_step1
# TODO KE: remove this comment when we implement filesize check in the RSyncArchiver
#test_step2
shutdown_openbis_server $OPENBIS_SERVER
exit_if_assertion_failed
}
#
# ------ CLI utility/HELP functions ----------------
#
function print_help {
echo "Usage: $0 [ (--dss | --openbis)* | --all [ --local-source ]]"
echo " --dss, --openbis, build chosen components only"
echo " --all build all components"
echo " --local-source use local source code during building process instead of downloading it from svn"
echo " --reinstall-all reinstalls all packages from the zip files in the installation directory (also reinstalls the packages that are not being built)"
echo " --clean clean and exit"
echo " --help displays this help"
echo "If no option is given, integration tests will be restarted without building anything."
echo "Examples:"
echo "- Rebuild everything, fetch sources from svn:"
echo " $0 --all"
echo "- Use openbis server and client installation from previous tests, rebuild data store server using local source:"
echo " $0 --dss --local-source"
echo "- Rebuild data store server only fetching sources from svn:"
echo " $0 --dss"
}
#
# -- MAIN (copied from)------------
#
if [ "$1" = "--clean" ]; then
clean_after_tests
else
install_dss=false
install_dmv=false
install_openbis=false
use_local_source=false
reinstall_all=false
while [ ! "$1" = "" ]; do
case "$1" in
'-e'|'--dss')
install_dss=true
;;
'-o'|'--openbis')
install_openbis=true
;;
'-a'|'--all')
install_dss=true
install_openbis=true
;;
'--local-source')
use_local_source=true
;;
'--reinstall-all')
reinstall_all=true
;;
'--help')
print_help
exit 0
;;
*)
echo "Illegal option $1."
print_help
exit 1
;;
esac
shift
done
integration_test $install_dss $install_dmv $install_openbis $use_local_source $reinstall_all
fi