First of all, it is important to have a deployment architecture of the existing nodes/instances in the environment. Based on that, you can determine the number of nodes and the infra/hardware needed for the new instances (where the upgrade has to be performed).
AMQ SETUP – Initial (when offline indexing is to be started for the first time)
· Log in to the AMQ node with your emp id and switch to amqadmin (su amqadmin).
· Go to /software/ActiveMQ/apache-activemq-5.16.0 and delete the 'data' or 'data-OLD' folders from here. You can rename the data folder to something like data-BAK if needed, but make sure a 'data' folder does not exist.
· Create a new 'data' folder with a 'kahadb' folder inside it. Alternatively, you can delete all the contents inside the existing data and kahadb folders – just keep the two empty folders, data and kahadb.
· Go to /software/ActiveMQ/apache-activemq-5.16.0/conf and edit jetty.xml.
· Change the bean id="jettyPort" as follows:

<bean id="jettyPort" class="org.apache.activemq.web.WebConsolePort" init-method="start">
    <!-- the default port number for the web console -->
    <!--<property name="host" value="127.0.0.1"/>-->
    <property name="host" value="172.xx.1.xx3"/>
    <property name="port" value="8161"/>
</bean>

· Save the jetty.xml file.
· Go to /software/ActiveMQ/apache-activemq-5.16.0/bin/linux-x86-64.
· Ensure that you are logged in as amqadmin.
· Start ActiveMQ with the following command: ./activemq start
· Check if the process is running with a grep command. Also check the ActiveMQ logs (activemq.log under /software/ActiveMQ/apache-activemq-5.16.0/data).
· Keep the AMQ service up and running.
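The data-folder reset above can be sketched as a small shell helper (the ActiveMQ home path is the one used in this document; renaming to data-BAK rather than deleting is the safer option the doc allows):

```shell
# Reset the ActiveMQ data folder before a fresh offline-indexing run.
# Usage: reset_amq_data /software/ActiveMQ/apache-activemq-5.16.0
reset_amq_data() {
  local amq_home="$1"
  # keep a backup instead of deleting outright
  if [ -d "$amq_home/data" ]; then
    rm -rf "$amq_home/data-BAK"
    mv "$amq_home/data" "$amq_home/data-BAK"
  fi
  # recreate the empty structure AMQ expects: data/ with kahadb/ inside
  mkdir -p "$amq_home/data/kahadb"
}
```

Run it as amqadmin before starting ActiveMQ, so the broker comes up with an empty KahaDB store.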
AMQ SETUP - Later
· Log in to the AMQ node with your emp id and switch to amqadmin (su amqadmin).
· Go to /software/ActiveMQ/apache-activemq-5.16.0 and delete the 'data' or 'data-OLD' folders from here. You can rename the data folder to something like data-BAK if needed, but make sure a 'data' folder does not exist here before proceeding with the next step.
· When the old ACS 5.2 Prod environment is in a stopped state, we need to extract the AMQ data from it.
· Log in to old jasper node 1 of ACS 5.2 (172.xx.12.xx), as this is the instance where AMQ was running.
o Go to /opt/apache-activemq-5.15.4.
o Ensure that AMQ is not running here.
o Zip the data folder using the zip command (zip -r data.zip data).
o Once data.zip is created, transfer this zip to the new (dedicated) AMQ node of the new prod – either through the rsync command (if enabled), or ask the AWS team to transfer the zip from the old to the new AMQ server (under the path /software/ActiveMQ/apache-activemq-5.16.0) using their temp S3 bucket.
· Log in to the AMQ node with your emp id and switch to amqadmin (su amqadmin).
· Go to /software/ActiveMQ/apache-activemq-5.16.0.
· Verify that the data.zip copied by the AWS team exists here.
· Verify the disk space available on this instance; sufficient space should be available for data.zip to inflate.
· Unzip the data.zip file (unzip data.zip).
· A 'data' folder will be created at /software/ActiveMQ/apache-activemq-5.16.0.
· Verify the size of the data folder (with the du -sh command) and compare it with the size of the old one (on the old Prod jasper node).
· Go to /software/ActiveMQ/apache-activemq-5.16.0/conf and edit jetty.xml.
· Change the bean id="jettyPort" as follows:

<bean id="jettyPort" class="org.apache.activemq.web.WebConsolePort" init-method="start">
    <!-- the default port number for the web console -->
    <!--<property name="host" value="127.0.0.1"/>-->
    <property name="host" value="172.xx.1.xxx"/>
    <property name="port" value="8161"/>
</bean>

· Save the jetty.xml file.
· Go to /software/ActiveMQ/apache-activemq-5.16.0/bin/linux-x86-64.
· Ensure that you are logged in as amqadmin.
· Start ActiveMQ with the following command: ./activemq start
· Check if the process is running with a grep command. Also check the ActiveMQ logs (activemq.log under /software/ActiveMQ/apache-activemq-5.16.0/data).
· Keep the AMQ service up and running.
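For the du size comparison in the step above, a tiny helper keeps the numbers comparable (a sketch; sizes will differ slightly across filesystems, so expect approximate equality, not a byte-for-byte match):

```shell
# Print a directory's size in KB, for before/after comparison of the AMQ data dir.
# Usage: dir_size_kb /software/ActiveMQ/apache-activemq-5.16.0/data
dir_size_kb() {
  du -sk "$1" | awk '{print $1}'
}
```

Run it on the old jasper node's data folder and on the restored one, and compare the two values.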
TRANSFORMATION NODE SETUP
o Log in with your emp id and su alfadmin.
NOTE: Use OpenJDK 11.0.4 instead of OpenJDK 11.0.2 to avoid Alfresco server crashes in higher environments with high concurrency and high data volumes.
o Check the Java home with the java -version command. If Java is not installed, follow these steps:
o Go to /etc/profile.d
o vi java_home.sh
o Check that the following entries are present:
o export JAVA_HOME=/software/java/jdk-11.0.2
o export PATH=$PATH:$JAVA_HOME/bin
o Run the command: vi ~/.bash_profile
o Check that the following entries are present:
o export JAVA_HOME='/software/java/jdk-11.0.2'
o export PATH=$PATH:$JAVA_HOME/bin
o Run the command: source ~/.bash_profile
o Run java -version to verify that Java is installed correctly.
o Output as follows:
o openjdk version "11.0.2" 2019-01-15
o OpenJDK Runtime Environment 18.9 (build 11.0.2+9)
o OpenJDK 64-Bit Server VM 18.9 (build 11.0.2+9, mixed mode)
o We need to ensure that LibreOffice and ImageMagick are installed on the transformation node, as these are required by the ATS (Alfresco Transform Service) jars to run.
o If they are not installed, follow the steps below.
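A quick pre-flight check of those prerequisites might look like this (the `soffice` and `magick` command names assume the LibreOffice and ImageMagick installs described below have been added to the PATH):

```shell
# Report whether each prerequisite command is resolvable on the PATH.
check_cmd() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "OK: $1"
  else
    echo "MISSING: $1"
  fi
}

# java is required for ATS itself; soffice/magick come from the installs below
for c in java soffice magick; do
  check_cmd "$c"
done
```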
IMAGEMAGICK SETUP (if not already installed)
· Install the libs rpm – sudo dnf install ImageMagick-libs-7.0.11-13.x86_64.rpm at /software.
· Also run sudo dnf install ImageMagick-7.0.11-13.x86_64.rpm in /software.
· If some libs are reported missing in the above two steps, install those libs first and then install the rpm again.
· Make sure the above two rpms are installed successfully with the command "dnf list installed | grep ImageMagick" – it should give two results.
· Set ImageMagick on the PATH as follows: go to /etc/profile.d
· vi java_home.sh
· Make sure you have the following entries:
· export IMAGEMAGICK=/bin
· export PATH=$PATH:$IMAGEMAGICK
· Verify the version with the command: magick -version
· Output as follows:
Version: ImageMagick 7.0.11-13 Q16 x86_64 2021-05-17 https://imagemagick.org
Copyright: (C) 1999-2021 ImageMagick Studio LLC
License: https://imagemagick.org/script/license.php
Features: Cipher DPC HDRI Modules OpenMP(4.5)
Delegates (built-in): bzlib cairo djvu fontconfig freetype gslib jng jp2 jpeg lcms ltdl lzma openexr pangocairo png ps raqm raw rsvg tiff webp wmf x xml zlib
LIBREOFFICE SETUP (if not already installed)
· Extract the tar file – tar -xvf LibreOffice_6.3.5.1_Linux_x86-64_rpm.tar.gz in /software.
· Once extracted, install the rpm files – sudo rpm -ivh *.rpm in /software/LibreOffice_6.3.5.1_Linux_x86-64_rpm/RPMS.
· Set LibreOffice on the PATH as follows: go to /etc/profile.d
· vi java_home.sh
· Make sure you have the following entries:
· export LIBREOFFICE=/opt/libreoffice6.3/program
· export PATH=$PATH:$LIBREOFFICE
· Verify by checking the version with the following command: libreoffice6.3 --version
· Output as follows:
LibreOffice 6.3.5.1 9a62adaf9abe90e8fef419f29114b0176dd66801
Once LibreOffice and ImageMagick are installed on the Transformation node, continue with the steps below:
· Log in to this transformation node with your emp id and switch to alfadmin (su alfadmin).
· Go to /software/alfresco-transform-service.
· Give execute rights to the ats.sh file if it does not already have them.
· Run ats.sh (./ats.sh start).
· Check the logs using tail -f nohup* and see if any errors are found.
· Check that all 3 jars are up and running with a grep command.
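The grep check can be made a little more precise with the classic bracket trick, so the grep process does not match itself. The jar name patterns below are assumptions based on a typical ATS deployment; adjust them to the three jars your ats.sh actually starts:

```shell
# Wrap the first character of a pattern in [] so grep does not match itself.
self_excluding_pattern() {
  printf '[%s]%s' "${1:0:1}" "${1:1}"
}

# Patterns are assumptions -- replace with the jar names from /software/alfresco-transform-service.
for jar in transform-core-aio transform-router shared-file-store; do
  if ps -ef | grep "$(self_excluding_pattern "$jar")" >/dev/null; then
    echo "UP:   $jar"
  else
    echo "DOWN: $jar"
  fi
done
```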
SOLR SETUP
Assuming the vanilla search-services 2.0.2 zip file has been unzipped, the search-services-2.0.2 folder structure should be present inside /software/alfresco/alfresco-search-services.
· Make the following changes in /software/alfresco/alfresco-search-services/solrhome/templates/rerank/conf/solrcore.properties on each solr node:
o alfresco.host=<TRACKER_IP_OR_REPO_IP_AS_PER_ARCHITECTURE>
o alfresco.port=8080
o alfresco.baseUrl=/alfresco
o alfresco.secureComms=none
o alfresco.socketTimeout=3600000 (increased to handle a high index and ACL size)
# Set the properties below to increase indexing performance
o merge.policy.maxMergedSegmentMB=10240
o merge.policy.maxMergeAtOnce=5
o merge.policy.segmentsPerTier=5
o merger.maxMergeCount=16
o merger.maxThreadCount=8
# Disable content indexing (as per requirement)
o alfresco.index.transformContent=false
o alfresco.ignore.datatype.1=d:content
# Increase the maxBooleanClauses limit to 100000 if the number of ACL transactions and the ACE count inside each ACL transaction is huge
o solr.maxBooleanClauses=60000
Update the deletion policy in solrconfig.xml:
cd /software/alfresco/alfresco-search-services/solrhome/templates/rerank/conf
vi solrconfig.xml

<!-- Enable deletion policy to delete tlog files created during indexing. -->
<deletionPolicy class="solr.SolrDeletionPolicy">
  <!-- The number of commit points to be kept -->
  <str name="maxCommitsToKeep">1</str>
  <!-- The number of optimized commit points to be kept -->
  <str name="maxOptimizedCommitsToKeep">0</str>
  <!-- Delete all commit points once they have reached the given age. Supports DateMathParser syntax, e.g. -->
  <str name="maxCommitAge">30MINUTES</str>
  <str name="maxCommitAge">1DAY</str>
</deletionPolicy>

To increase the log size, edit log4j.properties and set the log size to 100 MB.
NOTE: For search-services-2.0.3 to work without https/mTLS and without a secret token, rename the security.json file to security.json.bak.
· Start each solr – go to /software/alfresco/alfresco-search-services/solr/bin and run ./solr start.
· After starting each solr (vanilla search-services), run the following URLs for each shard from a browser:
· http://<IP_ADDRESS>:8983/solr/admin/cores?action=newCore&core=alfresco&storeRef=workspace://SpacesStore&numShards=12&numNodes=1&nodeInstance=1&template=rerank&property.data.dir.root=/software/alfresco/alfresco-search-services/indexes/workspace-SpacesStore&property.alfresco.host=ALF_IP&property.alfresco.port=8080&shardIds=0
· http://<IP_ADDRESS>:8983/solr/admin/cores?action=newCore&core=alfresco&storeRef=workspace://SpacesStore&numShards=12&numNodes=1&nodeInstance=1&template=rerank&property.data.dir.root=/software/alfresco/alfresco-search-services/indexes/workspace-SpacesStore&property.alfresco.host=ALF_IP&property.alfresco.port=8080&shardIds=1
· Continue up to the last shard, i.e. shardIds=11 (12 shards in total).
· Running the above URLs creates the core and shard structure/taxonomy.
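Since the twelve newCore URLs differ only in the shardIds value, they can be generated in a loop rather than typed by hand (a sketch; <IP_ADDRESS> and ALF_IP are the same placeholders as above, and the echo should be swapped for a curl call to actually create the cores):

```shell
# Build the newCore URL for each of the 12 shards (shardIds 0..11).
SOLR_HOST="<IP_ADDRESS>"   # placeholder from the doc
ALF_HOST="ALF_IP"          # placeholder from the doc
for shard in 0 1 2 3 4 5 6 7 8 9 10 11; do
  url="http://${SOLR_HOST}:8983/solr/admin/cores?action=newCore&core=alfresco&storeRef=workspace://SpacesStore&numShards=12&numNodes=1&nodeInstance=1&template=rerank&property.data.dir.root=/software/alfresco/alfresco-search-services/indexes/workspace-SpacesStore&property.alfresco.host=${ALF_HOST}&property.alfresco.port=8080&shardIds=${shard}"
  echo "$url"   # swap for: curl -s "$url"
done
```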
· Stop all solr nodes.
· Edit solr.in.sh and set the following properties shard-wise:
o SOLR_JAVA_MEM="-XmsAAg -XmxBBg" (can be 248G, 378G, 750G, etc., depending on the requirement)
o SOLR_SOLR_HOST=<Shard-Host-IP>
o SOLR_ALFRESCO_HOST=<Tracker-Host-IP>
o SOLR_ALFRESCO_PORT=8080
· The values in solr.in.sh (located in /software/alfresco/alfresco-search-services/) for each shard are as follows –
1. For Tracker 1/Repo 1 (based on your configuration):
o Shard 1 - SOLR01 - SOLR_IP
§ SOLR_JAVA_MEM="-XmsAAg -XmxBBg"
§ SOLR_SOLR_HOST=SOLR_IP
§ SOLR_ALFRESCO_HOST=TRACKER_1_IP
§ SOLR_ALFRESCO_PORT=8080
o Shard 2 - SOLR02 - SOLR_IP
§ SOLR_JAVA_MEM="-XmsAAg -XmxBBg"
§ SOLR_SOLR_HOST=SOLR_IP
§ SOLR_ALFRESCO_HOST=TRACKER_1_IP
§ SOLR_ALFRESCO_PORT=8080
o Shard 3 - SOLR03 - SOLR_IP
§ SOLR_JAVA_MEM="-XmsAAg -XmxBBg"
§ SOLR_SOLR_HOST=SOLR_IP
§ SOLR_ALFRESCO_HOST=TRACKER_1_IP
§ SOLR_ALFRESCO_PORT=8080
2. For Tracker 2/Repo 2 (based on your configuration):
o Shard 4 - SOLR04 - SOLR_IP
§ SOLR_JAVA_MEM="-XmsAAg -XmxBBg"
§ SOLR_SOLR_HOST=SOLR_IP
§ SOLR_ALFRESCO_HOST=TRACKER_2_IP
§ SOLR_ALFRESCO_PORT=8080
o Shard 5 - SOLR05 - SOLR_IP
§ SOLR_JAVA_MEM="-XmsAAg -XmxBBg"
§ SOLR_SOLR_HOST=SOLR_IP
§ SOLR_ALFRESCO_HOST=TRACKER_2_IP
§ SOLR_ALFRESCO_PORT=8080
o Shard 6 - SOLR06 - SOLR_IP
§ SOLR_JAVA_MEM="-XmsAAg -XmxBBg"
§ SOLR_SOLR_HOST=SOLR_IP
§ SOLR_ALFRESCO_HOST=TRACKER_2_IP
§ SOLR_ALFRESCO_PORT=8080
3. The same pattern applies for trackers 3 and 4.
· Edit shared.properties in /software/alfresco/alfresco-search-services/solrhome/conf/ and uncomment the following properties on all shards, if not already done. NOTE: without uncommenting the three properties below, exact-term queries (like =) do not work.
o solr.host=<SHARD-<N>-IP_ADDRESS>
o alfresco.cross.locale.datatype.0={http://www.alfresco.org/model/dictionary/1.0}text
o alfresco.cross.locale.datatype.1={http://www.alfresco.org/model/dictionary/1.0}content
o alfresco.cross.locale.datatype.2={http://www.alfresco.org/model/dictionary/1.0}mltext
· Verify solrcore.properties in /software/alfresco/alfresco-search-services/solrhome/rerank--alfresco--shards--12-x-1--node--1-of-1/alfresco-n/conf on each shard. The tracker IP address should be configured correctly.
o data.dir.root=/software/alfresco/alfresco-search-services/indexes/workspace-SpacesStore
o alfresco.host=TRACKER_IP
o shard.count=12
o shard.instance=0
o data.dir.store=alfresco-0
o alfresco.port=8080
o alfresco.baseUrl=/alfresco
o alfresco.fingerprint=false
o alfresco.socketTimeout=3600000
o alfresco.secureComms=none
o alfresco.metadata.ignore.datatype.1=app\:configurations
o alfresco.metadata.ignore.datatype.0=cm\:person
o merge.policy.maxMergeAtOnce=5
o merge.policy.segmentsPerTier=5
o merge.policy.maxMergedSegmentMB=10240
o merger.maxMergeCount=16
o merger.maxThreadCount=8
· Verify solrconfig.xml in /software/alfresco/alfresco-search-services/solrhome/rerank--alfresco--shards--12-x-1--node--1-of-1/alfresco-1/conf. The following entry should be present:

<!-- Enable deletion policy to delete tlog files created during indexing. -->
<deletionPolicy class="solr.SolrDeletionPolicy">
  <!-- The number of commit points to be kept -->
  <str name="maxCommitsToKeep">1</str>
  <!-- The number of optimized commit points to be kept -->
  <str name="maxOptimizedCommitsToKeep">0</str>
  <!-- Delete all commit points once they have reached the given age. Supports DateMathParser syntax, e.g. -->
  <str name="maxCommitAge">30MINUTES</str>
  <str name="maxCommitAge">1DAY</str>
</deletionPolicy>
· All shard configurations are now ready.
· Delete the contents inside /software/alfresco/alfresco-search-services/solrhome/alfrescoModels and also inside /software/alfresco/alfresco-search-services/indexes/workspace-SpacesStore/alfresco-0/index, so that any existing indexes are cleared and we can start with fresh indexing on solr startup.
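The cleanup above can be wrapped as a helper (a sketch; the paths are the ones from this document, and `alfresco-0` is the shard-0 core as in the solrcore.properties shown earlier, so run the matching cleanup with the right core directory on each shard):

```shell
# Remove cached models and any existing index so solr starts indexing fresh.
# Usage: clear_index_dirs /software/alfresco/alfresco-search-services
clear_index_dirs() {
  local ss_home="$1"
  rm -rf "$ss_home/solrhome/alfrescoModels/"* \
         "$ss_home/indexes/workspace-SpacesStore/alfresco-0/index/"*
}
```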
Start 1st Tracker/Repo
· Run ./alfresco.sh start from /software/alfresco/alfresco-content-services; tail the logs – tail -f catalina.out from /software/alfresco/alfresco-content-services/tomcat/logs.
· Note the time taken for the server to start and update the schema (1–2 minutes maximum).
· Verify the config in the /alfresco URL, e.g. ACS 7.1 version, audit disabled, etc.
o If the following errors come up while starting alfresco:
§ Address bind exception: Port 5701 already in use, OR Hazelcast cannot start. Port [5701] is already in use and auto-increment is disabled. Then –
· Stop Alfresco.
· Stop the ARender service running on the same machine.
· Start Alfresco.
· Start the ARender service.
§ ERROR [web.context.ContextLoader] [main] Context initialization failed
§ org.alfresco.error.AlfrescoRuntimeException: 01220021 Not all patches could be applied
§ ### Error updating database. Cause: java.sql.SQLException: ORA-01461: can bind a LONG value only for insert into a LONG column
§ ### The error may involve alfresco.appliedpatch.update_AppliedPatch-Inline
§ ### The error occurred while setting parameters
§ ### SQL: update alf_applied_patch set description = ?, fixes_from_schema = ?, fixes_to_schema = ?, target_schema = ?, applied_to_schema = ?, applied_on_date = ?, applied_to_server = ?, was_executed = ?, succeeded = ?, report = ? where id = ?
§ ### Cause: java.sql.SQLException: ORA-01461: can bind a LONG value only for insert into a LONG column. Then –
· Stop Alfresco.
· Go to the Oracle SQL Developer tool and log in as ALFRESCO_OWNER.
· Check the ALF_APPLIED_PATCH table for the id found in the above error.
· If no entry for this ID exists, insert a new entry by running the following query:

insert into ALF_APPLIED_PATCH values(
  'ID',
  'NAME',
  0,
  NUM,
  NUM,
  99999,
  'DATETIME',
  'ALF_VERSION',
  1,
  1,
  'TEXT_WITH_NODE_PATH_AND_NODEREF');
§ While starting alfresco, the logs might get stuck and not move forward even after waiting for 10–15 minutes. Then –
· Stop Alfresco.
· Clear the contents of the /temp and /work directories in tomcat.
· Start Alfresco.
· Give it some time to start successfully.
· Check the solr config on the /alfresco/s/enterprise/admin/admin-searchservice page:
o The Content Tracking enabled checkbox should be selected by default.
o The Solr hostname property should be the correct solr LB URL, and the Solr port (non-SSL) should be 80. Leave the Solr port (SSL) as 8443.
o After making the above changes, click the Save button at the bottom of the page.
o This value will be persisted to the alfresco DB.
o So when you start and access other trackers or repo nodes in future, the same value will be displayed on this page.
o Perform the above steps on all trackers and repo nodes.
· Check the config on the /alfresco/s/enterprise/admin/admin-flocs page:
o The Dynamic shard instance registration checkbox should be selected by default.
o The 12 shards (for perf, 12 shards) will be displayed below.
o The Has content radio button will be disabled (red), as contentless indexing is done.
o If you notice double the number of shards (e.g. 24 instead of 12), the extra 12 shards will be in a silent state, not active. You can click the 'Clean' button on this page to remove the silent ones.
o Perform the above steps on all trackers and repo nodes.
· Stop ACS – ./alfresco.sh stop from /software/alfresco/alfresco-content-services.
· So we started tracker 1, allowed it to upgrade the DB schema, and stopped it.
· Now we can apply the same configuration on the other 3 trackers and start them.
Start all Trackers and their respective Shards
· Comment out the db.schema.update=true property in the alfresco-global.properties file if it is not already commented. This property will usually be present only in tracker 1, but check all trackers; comment it out in all trackers before proceeding with the next step.
· Start each tracker by going to /software/alfresco/alfresco-content-services/ and running ./alfresco.sh start; tail the logs and verify from the browser URL (/alfresco) once up.
· For each shard, perform the following steps:
o Start solr – ./solr/bin/solr start from /software/alfresco/alfresco-search-services.
o Check the solr logs. The folder structure of the solr shard should be created.
o Verify the details in the solr admin console from the browser URL.
o Allow the indexing to start (it will take some time, approximately 30 minutes, for indexing to start and for the txRemaining and numFound counts to change).
o Monitor the indexing.
o Check the solr logs for any errors.
o Monitor the memory usage on the trackers as well as the shards for high utilization, and make sure disk space is not filling up.
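One way to watch the indexing progress described above is the Search Services SUMMARY report, which exposes per-core counters such as the approximate transactions remaining (a URL-building sketch; the shard host is a placeholder):

```shell
# Build the SUMMARY report URL for a given shard host.
summary_url() {
  printf 'http://%s:8983/solr/admin/cores?action=SUMMARY&wt=json' "$1"
}
# Usage: curl -s "$(summary_url SHARD_1_IP)" | grep -i remaining
```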
REPO SETUP
· Ask for provisioning of the required number of repo nodes with the needed ports opened.
· If they are already provisioned, proceed with the following configurations.
o Put share.war from the vanilla distribution zip into tomcat/webapps on both repo nodes.
o Edit/uncomment the share.xml file in /software/alfresco/alfresco-content-services/tomcat/conf/Catalina/localhost.
o Copy the latest project custom code/jars into the alfresco-content-services/modules/platform and alfresco-content-services/modules/share folders.
o Check whether the amps are applied in alfresco.war and share.war by running these commands (from /alfresco-content-services/bin):
§ java -jar alfresco-mmt.jar list ../tomcat/webapps/alfresco.war
§ java -jar alfresco-mmt.jar list ../tomcat/webapps/share.war
o If the saml-repo and javascript-console amps are not displayed in the list, apply those amps as in the following two steps.
o Apply the saml and javascript console amps from the amps folder to alfresco.war:
§ java -jar alfresco-mmt.jar install /software/alfresco/alfresco-content-services/amps/alfresco-saml-repo-1.2.1.amp /software/alfresco/alfresco-content-services/tomcat/webapps/alfresco.war
§ java -jar alfresco-mmt.jar install /software/alfresco/alfresco-content-services/amps/javascript-console-repo-0.7.amp /software/alfresco/alfresco-content-services/tomcat/webapps/alfresco.war
o Apply the saml and javascript console amps from the amps_share folder to share.war:
§ java -jar alfresco-mmt.jar install /software/alfresco/alfresco-content-services/amps_share/alfresco-saml-share-1.2.1.amp /software/alfresco/alfresco-content-services/tomcat/webapps/share.war
§ java -jar alfresco-mmt.jar install /software/alfresco/alfresco-content-services/amps_share/javascript-console-share-0.7.amp /software/alfresco/alfresco-content-services/tomcat/webapps/share.war
o Update alfresco-global.properties:
§ RDS details
§ URL details like the Transform core properties
§ Compare with the existing (old) repo nodes and add any mandatory properties that are needed.
o Update share-config-custom.xml with details like the repository-url and the endpoint-urls for alfresco, alfresco-api and alfresco-feed.
o If you have custom share jars, place them under modules/share.
o Start alfresco – ./alfresco.sh start at /software/alfresco/alfresco-content-services; tail the logs – tail -f catalina.out at /software/alfresco/alfresco-content-services/tomcat/logs.
o Verify from the browser URLs once up.
o If the following errors come up while starting alfresco:
§ Address bind exception: Port 5701 already in use, OR Hazelcast cannot start. Port [5701] is already in use and auto-increment is disabled. Then –
· Stop Alfresco.
· Stop the ARender service running on the same machine.
· Start Alfresco.
· Start the ARender service.
§ While starting alfresco, the logs might get stuck and not move forward even after waiting for 10–15 minutes. Then –
· Stop Alfresco.
· Clear the contents of the /temp and /work directories in tomcat.
· Start Alfresco.
· Give it some time to start successfully.
· ARenderHMI deployment and configuration on all 5 repo nodes:
o Stop alfresco – ./alfresco.sh stop at /software/alfresco/alfresco-content-services.
o Copy ARenderHMI.war to the tomcat/webapps folder.
o Edit the arender.properties file – vi arender.properties in /software/alfresco/alfresco-content-services/tomcat/webapps/ARenderHMI/WEB-INF/classes. Set the values accordingly and save. The following property might need to be added if you get a wss socket error while loading the arender preview page:
§ arender.web.socket.enabled=false
o Edit arender-server-custom-alfresco.properties at /software/alfresco/alfresco-content-services/tomcat/webapps/ARenderHMI/WEB-INF/classes. Set the values accordingly and save:
§ arender.server.rendition.hosts=http://IP_ADDRESS:8761/
§ arender.server.alfresco.atom.pub.url=http://localhost:8080/alfresco/api/-default-/cmis/versions/1.1/atom
§ arender.server.alfresco.soap.ws.url=http://localhost:8080/alfresco/cmisws/cmis?wsdl
§ arender.server.url.parsers.beanNames=customCmisUrlParser,DefaultURLParser,DocumentIdURLParser,FileattachmentURLParser,ExternalBeanURLParser,AlterContentParser,FallbackURLParser
§ arender.server.alfresco.use.soap.ws=true
§ arender.server.alfresco.annotation.path=/Data Dictionary
o In the same location, edit arender-custom-server-integration.xml, set the values accordingly and save:

<?xml version="1.0" encoding="UTF-8"?>
<beans default-lazy-init="true" default-autowire="no"
    xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd">
  <!-- xml imported by ARender Java Web Server side, please add any customization you wish to see loaded in this file -->
  <bean id="customCmisUrlParser" class="com.arondor.viewer.cmis.CustomCMISURLParser">
    <property name="cmisConnection" value="cmisConnection"/>
    <property name="alfHost" value="http://localhost:8080"/>
  </bean>
  <bean id="xfdfAnnotationAccessor" class="com.arondor.viewer.xfdf.annotation.CustomXFDFAnnotationAccessor" scope="prototype">
    <property name="contentAccessor">
      <bean class="com.arondor.viewer.xfdf.annotation.FileSerializedContentAccessor">
        <property name="path" value="annotations/"/>
      </bean>
    </property>
    <property name="alfHost" value="http://localhost:8080"/>
    <property name="annotationCreationPolicy">
      <bean class="com.arondor.viewer.client.api.annotation.AnnotationCreationPolicy">
        <property name="canCreateAnnotations" value="true"/>
        <property name="textAnnotationsSupportHtml" value="true"/>
        <property name="textAnnotationsSupportReply" value="true"/>
        <property name="textAnnotationsSupportStatus" value="true"/>
        <property name="annotationsSupportSecurity" value="false"/>
        <property name="availableSecurityLevels">
          <ref bean="availableSecurityLevels"/>
        </property>
        <property name="annotationTemplateCatalog">
          <ref bean="annotationTemplateCatalog"/>
        </property>
      </bean>
    </property>
  </bean>
  <bean id="annotationAccessorFactory" class="com.arondor.viewer.common.annotation.BeanAnnotationAccessorFactory">
    <property name="beanName" value="xfdfAnnotationAccessor"/>
    <property name="fallBackBeanNames" ref="fallBackAnnotationAccessorBeanNames"/>
  </bean>
  <bean id="cmisConnection" class="com.arondor.viewer.cmis.CMISConnection" scope="prototype">
    <property name="atomPubURL" value="${arender.server.alfresco.atom.pub.url}"/>
    <property name="soapWSURL" value="${arender.server.alfresco.soap.ws.url}"/>
    <property name="annotationsPath" value="${arender.server.alfresco.annotation.path}"/>
    <property name="annotationFolderName" value="${arender.server.alfresco.annotation.folder.name}"/>
    <property name="useSoapWS" value="${arender.server.alfresco.use.soap.ws}"/>
    <property name="user" value="${arender.server.alfresco.user}"/>
    <property name="password" value="${arender.server.alfresco.password}"/>
  </bean>
</beans>
o Ensure that the required jars are present at /software/alfresco/alfresco-content-services/tomcat/webapps/ARenderHMI/WEB-INF/lib (this is needed if you use arender.server.alfresco.use.soap.ws=true):
§ jaxws-api-2.2.11.jar
§ javax.jws-3.0.jar
§ arondor-arender-for-company-project-4.6.0-beta0.jar
§ saaj-api-RELEASE120.jar
§ json-20160810.jar
o Create a new file "application.properties" under /software/ARender4.7.1/modules/TaskConversion/ with the following content:
# soffice path (used only in a LibreOffice context)
rendition.soffice.path=/opt/libreoffice6.3/program/soffice
o Start alfresco and tail the logs.
o Verify from the browser URLs once up.
ARENDER SETUP (ARender rendition engine setup on the dedicated ARender node)
· Log in with the alfadmin user.
· On production, the ARender node was cloned from the Performance ARender node, so skip "Part A" and jump to "Part B". If it was not cloned and ARender has to be set up from scratch, follow Part A as well as Part B.
· Part A:
· Ensure that LibreOffice is installed on this node. Follow the same LibreOffice installation steps mentioned in this document, and check that LibreOffice is set on the PATH and that running "libreoffice6.3 --version" and "sudo libreoffice6.3 --version" shows the correct output.
· Also ensure that Java is installed and its PATH is set correctly, and that running "java -version" and "sudo java -version" shows the correct output.
· Reach out to the AWS/infra team if one or both of them are not working.
· Check the environment variables with the following commands:
o env
o show-environment
· To set an environment variable, use the following:
o Syntax: set-environment VARIABLE_NAME=VARIABLE_VALUE
o Example: set-environment PATH=/sbin:/bin:/usr/sbin:/usr/bin:/software/jdk-11.0.2/bin:/opt/libreoffice6.3/program:/bin:/usr/lib:/usr/local/lib
o Make sure that you copy the existing PATH variable value and then append your variable to it.
· Go to the location where the rendition-engine jar file (rendition-engine-installer-4.7.1-rendition.jar) is present.
· Before starting the installation, create the folder "ARender4.7.1" under /software.
· Start the installation (as alfadmin) using the command: java -jar rendition-engine-installer-4.7.1-rendition.jar
· On the prompt asking where to install ARender, provide this path: /software/ARender4.7.1
· It will prompt for a username: provide your user as the username.
· Part B:
· Once the installation completes, go to the /etc/systemd/system location, edit the file ARenderRenditionEngineService.service, and change the content as follows:

[Unit]
Description=ARender rendition engine service
After=syslog.target

[Service]
User=alfadmin
ExecStart=/software/ARender4.7.1/service/unix/service-mode-rendition-engine-4.7.1.jar
SuccessExitStatus=143

[Install]
WantedBy=multi-user.target
· Verify through ps aux | grep ARender whether the ARender service is running. If it is not, go to /etc/systemd/system and run the command:
o sudo systemctl start ARenderRenditionEngineService.service
· To stop it: sudo systemctl stop ARenderRenditionEngineService.service
· To check the status: sudo systemctl status ARenderRenditionEngineService.service
· If the arender service does not start successfully, then:
o Check that LibreOffice is installed correctly on this node.
o Connect with the AWS/infra team if the ARender service is still not running. A Linux-level (service rights) fix can be applied to make it work.
Sanity Check
· The team should verify the browser URLs of the repo nodes as well as the solr nodes and check that all are accessible.
· The repo nodes' IP-specific URLs should be accessed, and a Share login should be done with an admin and a non-admin user to see whether search and other basic functionalities are working fine.
· The LB URL should be accessed, and login, search, and other basic functionalities should be checked.
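The reachability part of the sanity check can be scripted (a sketch; the host names below are placeholders for your actual repo/solr/LB endpoints, and the Share login and search checks still have to be done by hand in a browser):

```shell
# Print the HTTP status for each endpoint; curl reports 000 when unreachable.
check_url() {
  curl -s -o /dev/null -m 10 -w '%{http_code}' "$1"
}

# Placeholder hosts -- substitute the real repo/solr/LB URLs for your environment.
for u in "http://REPO_1_IP:8080/alfresco" "http://SOLR_1_IP:8983/solr" "http://LB_URL/share"; do
  echo "$u -> $(check_url "$u")"
done
```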
Post go-live, a few changes might be needed
Day 1
· Tomcat server.xml parameters were added for compression and maxThreads.
· The missed firewall port opening for 5701 was implemented.
Day 2
· JVM garbage collection parameters were added.
· ALFRESCO_OWNER schema stats gather and index rebuild were performed.
· The Solr formData limit was changed from 2 MB to 2 GB.
· The Solr number of facets was changed to 40 in solrcore.properties.
Day 3
· ulimit change – number of open files set to unlimited (check with the ulimit -a command).
· Alfresco node changes –
o The Custom-tx-cache-context.xml file was added into the tomcat/shared/classes/alfresco/extension directory.
o solr.http.connection.timeout=0
o search.solrTrackingSupport.ignorePathsForSpecificAspects=true
o search.solrTrackingSupport.ignorePathsForSpecificTypes=true
· DB changes –
o OPTIMIZER_INDEX_COST_ADJ=5
o OPTIMIZER_INDEX_CACHING=50
· PROJECTNAME_OWNER schema statistics gathering was triggered.