 <?xml-stylesheet type="text/css" href="http://mspbuilder.com/Data/style/rss1.css" ?> <?xml-stylesheet type="text/xsl" href="http://mspbuilder.com/Data/style/rss1.xsl" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd">
  <channel>
    <title>MSP Builder Blog</title>
    <link>http://mspbuilder.com/blog</link>
    <description />
    <docs>http://www.rssboard.org/rss-specification</docs>
    <generator>mojoPortal Blog Module</generator>
    <language>en-US</language>
    <ttl>120</ttl>
    <atom:link href="http://mspbuilder.com/Blog/RSS.aspx?p=3~3~-1" rel="self" type="application/rss+xml" />
    <itunes:owner />
    <itunes:explicit>no</itunes:explicit>
    <item>
      <title>The Human Firewall</title>
      <description><![CDATA[<p><span style="font-family:&quot;Segoe UI&quot;,sans-serif">Cybersecurity is crucial for any business that wants to protect its sensitive information and assets. But having a robust security infrastructure in place isn't enough; it's also essential to ensure that both your staff and your clients' staff are well trained on how to spot and avoid common threats like phishing scams.</span></p>

<p><span style="font-family:&quot;Segoe UI&quot;,sans-serif">One of the most significant advantages of phishing attacks is that they often prey on human error, which means that well-trained staff are more likely to spot and avoid them. By providing your clients&nbsp;with security awareness training, you can help them to understand the common tactics used by phishers and how to identify suspicious emails and websites.</span></p>

<p><span style="font-family:&quot;Segoe UI&quot;,sans-serif">End-user security awareness training should cover a range of topics, including how to recognize phishing attempts, how to use strong passwords, and how to spot and report suspicious activity. Training should be regularly updated to reflect the latest threats.</span></p>

<p><span style="font-family:&quot;Segoe UI&quot;,sans-serif">However, training your staff is just one part of a comprehensive cybersecurity strategy. MSPs should also implement robust technical security measures, such as firewalls, antivirus software, Zero-Day remediation, and intrusion detection systems. </span></p>

<p><span style="font-family:&quot;Segoe UI&quot;,sans-serif">When it comes to cybersecurity, end-user training is essential, but it must be part of a bigger strategy that includes multiple layers of protection. Without the right training and security measures in place, you and your client's&nbsp;business' are&nbsp;at risk of falling victim to a devastating cyberattack.</span></p>

<p><span style="font-family:&quot;Segoe UI&quot;,sans-serif">Investing in end-user security awareness training is one of the most effective ways to improve your clients&nbsp;cybersecurity posture. With well-trained staff who are able to spot and avoid phishing scams and other threats, you can reduce the risk of a successful cyberattack and protect your business's sensitive information and assets.</span></p>
<br /><a href='http://mspbuilder.com/the-human-firewall'>lbarnas</a>&nbsp;&nbsp;<a href='http://mspbuilder.com/the-human-firewall'>...</a>]]></description>
      <link>http://mspbuilder.com/the-human-firewall</link>
      <author>lbarnas@mspbuilder.com (lbarnas)</author>
      <comments>http://mspbuilder.com/the-human-firewall</comments>
      <guid isPermaLink="true">http://mspbuilder.com/the-human-firewall</guid>
      <pubDate>Wed, 18 Jan 2023 20:23:00 GMT</pubDate>
    </item>
    <item>
      <title>Endpoint Security - Local Accounts</title>
      <description><![CDATA[<p>Maintaining secure local access accounts can be a challenging prospect for MSPs. Learn how the RMM Suite allows MSPs to create accounts and change passwords on any frequency they desire without any manual effort.</p>

<h5>The LAUSER Account</h5>

<p>Back in 2019, a customer related an experience in which a VIP user at a major client was stuck in an airport lounge. The user needed to print their presentation and required admin access to install the print driver. The WiFi was limited to HTTP protocol access, which prevented the MSP from using their RMM to provide support. They had no choice but to give the user&nbsp;<em>their</em>&nbsp;internal-use password, which then had to be changed throughout that customer's environment. We suggested that a commonly named account ("lauser") could be created and our automation could maintain and update the credentials on a weekly basis. We rolled that process into ITP and began deploying this account for our customers soon after.</p>

<p>A few years later, MSPs looking to improve security began using the LAUSER account for their own local access. This led to a further improvement in this component, allowing multiple accounts to be created through the same process.</p>

<p>One of the unique security features of this process is that the password generation logic is seeded with the date, time, and hostname, along with other logic. This ensures that the password is machine-specific and extremely difficult to re-synthesize.</p>
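<p>The production algorithm is proprietary, but the general idea of seeding password generation with machine-specific values can be sketched as follows. This is a minimal illustration, not the actual implementation; the function name, salt, and character set are assumptions.</p>

```python
import hashlib
import string

def generate_machine_password(hostname: str, date: str, time_of_day: str,
                              length: int = 24) -> str:
    """Illustrative only: derive a machine-specific password by hashing
    the hostname, date, and time with a fixed salt (standing in for the
    'other logic' the post mentions)."""
    charset = string.ascii_letters + string.digits + "!@#$%^&*"
    digest = hashlib.sha256(
        f"{hostname}|{date}|{time_of_day}|example-salt".encode()).digest()
    chars = []
    while len(chars) < length:
        for b in digest:
            chars.append(charset[b % len(charset)])
            if len(chars) == length:
                break
        # Extend deterministically if more characters are needed.
        digest = hashlib.sha256(digest).digest()
    return "".join(chars)
```

<p>Because the hostname is part of the seed, no two endpoints share a password, and the same inputs always reproduce the same result, which is what lets scheduled automation regenerate it.</p>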

<h5>The RAUSER&nbsp;&amp; CAUSER Accounts</h5>

<p>The RMM Suite has long supported the use of per-client accounts for the MSP (RAUSER) and the customer (CAUSER), first via Managed Variables in Kaseya VSA and now via self-ciphering Cloud Script Variables on all RMM platforms. These accounts offer the flexibility of selecting the actual login ID, display name, and password. These credentials apply to groups of agents, whether an entire customer organization or specific location or department.</p>

<h5>RMM Suite Account Management Tools</h5>

<p>The RMM Suite continues to support&nbsp;multiple methods of local account management.</p>

<ul>
	<li>If the RAUSER (or CAUSER) Cloud Script Variable (CSV) is defined (both UserID and Password), the account will be created, added to the local administrators group, and the defined password will be set on the account. This happens automatically the first time that an agent checks in. These accounts can be updated at any time by updating the account password stored in the CSV and then executing the appropriate WIN-Local Account script on the RMM platform, targeting the endpoints where the account should be&nbsp;updated.</li>
	<li>The LAUSER technology has been enhanced and migrated into our Daily Maintenance tool. Simply create a Weekly or Monthly task to run the LAUSER command. This will generate a long, complex password; create the account and add it to the Administrators group if necessary; then set the password. The password will be ciphered and written to the system registry, where it can be collected by the Daily Audit tool, deciphered, and pushed into the RMM or your documentation engine such as Hudu or IT Glue. You can define multiple tasks in Maintenance with&nbsp;the LAUSER command to create any number of local admin accounts with unique credentials.&nbsp;
	<ul>
		<li>If no argument is defined, the account name "lauser" is targeted. This maintains the process we implemented several years earlier and allows this account to be given to the user as necessary. It may be appropriate to update the frequency of this account change.</li>
		<li>If an argument is provided, it will be used as the account name. This argument should be a single word without spaces, following the usual guidelines for user account IDs.&nbsp;</li>
	</ul>
	</li>
</ul>
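<p>The ciphering scheme used when writing credentials to the registry is proprietary; purely as an illustration of the store-then-decipher flow between Daily Maintenance and the Daily Audit tool, here is a toy reversible cipher. The XOR scheme and key handling are assumptions and are not suitable for production use.</p>

```python
import base64

def cipher(plaintext: str, key: str) -> str:
    """Toy XOR cipher standing in for the proprietary scheme; the result
    is Base64-encoded so it can be stored as a registry string value."""
    kb = key.encode()
    data = bytes(c ^ kb[i % len(kb)] for i, c in enumerate(plaintext.encode()))
    return base64.b64encode(data).decode()

def decipher(ciphertext: str, key: str) -> str:
    """Reverse the toy cipher, as the collection side would before
    pushing the credential into the RMM or documentation engine."""
    kb = key.encode()
    data = base64.b64decode(ciphertext)
    return bytes(b ^ kb[i % len(kb)] for i, b in enumerate(data)).decode()
```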
<br /><a href='http://mspbuilder.com/blog-endpoint-security-local-accounts'>gbarnas</a>&nbsp;&nbsp;<a href='http://mspbuilder.com/blog-endpoint-security-local-accounts'>...</a>]]></description>
      <link>http://mspbuilder.com/blog-endpoint-security-local-accounts</link>
      <author>gbarnas@mspbuilder.com (gbarnas)</author>
      <comments>http://mspbuilder.com/blog-endpoint-security-local-accounts</comments>
      <guid isPermaLink="true">http://mspbuilder.com/blog-endpoint-security-local-accounts</guid>
      <pubDate>Thu, 15 Dec 2022 15:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Endpoint Management - Service Level Automation</title>
      <description><![CDATA[<p>Many MSPs can benefit from offering different levels of service to their customers - it allows them to tailor their product&nbsp;to the size and budget of the organization they serve. The challenge is finding ways to automate this to deliver consistency without significant effort. Some common methods we've seen range from defining automation policies to run scripts to deploy software and linking these policies to each customer to just manually running the scripts needed to deploy the applications. The challenge with this - like all manual actions - is consistency. The RMM Suite solves this through Service Class Automation.</p>

<h5>Service Class Automation</h5>

<p>Just to clarify the term, "Service Class" (or Class of Service) usually assigns a name to the delivery of specific services. A good example is the classic Bronze / Silver / Gold terms, where Bronze might provide basic monitoring and AV while Gold provides advanced monitoring, proactive maintenance, and comprehensive endpoint security services. This can relate to several services within an MSP practice, including monitoring, software,&nbsp;maintenance, patching, and security.</p>

<p>The RMM Suite employs a basic Service Class of Unmanaged and Managed, which is used broadly to apply or block automation.&nbsp;</p>

<p><strong>Unmanaged</strong>&nbsp;- This can be a "break/fix" or "time and materials" customer with no automation. RMM Suite customers also use this mode to onboard new clients. Since an unmanaged customer receives no automation, there is a window of time after deploying agents to perform discovery actions. This time can be used to prepare custom configurations, set up software licenses for automated deployments, and identify any special monitoring requirements. Once all customer preparation is complete, a client can be switched to Managed. This is defined using either a Customer Custom Field or - in VSA - a Machine Group root name.</p>

<p><strong>Managed -&nbsp;</strong>This represents a generic state where ALL automated services can be applied. The automation policies specifically look for the "unmanaged" status, treating all other status types as "managed". This allows a generic classification of "managed" as well as specific sub-classifications&nbsp;or Service Classes.&nbsp;The service classes can also be used to drive client billing.</p>

<p><strong>Service Classes</strong>&nbsp;- These are codes - whether colors, metals, animals, or simply an alpha-numeric ID - that define a specific set of services. These codes can be distinct or cumulative - that's completely up to the MSP. Cumulative codes take a bit more planning and configuration effort, but can simplify certain aspects of the automation.</p>

<h5>Distinct Code Mapping</h5>

<p>Distinct codes will map a set of specific components and services to a single code. A system filter identifies the code and applies the appropriate services. Note that the same services can be associated with multiple Service Class codes.</p>

<p class="text-indent-1"><strong>Iron</strong>&nbsp;- Basic AV, Patching</p>

<p class="text-indent-1"><strong>Steel</strong>&nbsp;- Basic AV, Antimalware, Patching, Application Updating, Basic Monitoring</p>

<p class="text-indent-1"><strong>Titanium</strong> - Advanced AV, Endpoint Security, Antimalware, Patching, Application Updating, Basic Monitoring, Advanced Monitoring</p>

<p>There are three automation policies and three filters. Each filter checks for the Service Level code and applies the corresponding automation policy, which in turn applies the products and services that are part of that Service Class. Note that Basic AV appears in two policies, Antimalware and Application Updating each appear in two, Patching appears in all three, and two classes include products unique to them. This is a simple mapping of code to services and works well when there is a small set of&nbsp;classes and products.</p>
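<p>A distinct mapping is easy to picture as a simple lookup table; the sketch below restates the Iron / Steel / Titanium example in code (illustrative only - the RMM Suite expresses this through filters and policies, not Python).</p>

```python
# Illustrative distinct Service Class mapping: each code lists its services.
SERVICE_CLASSES = {
    "Iron":     {"Basic AV", "Patching"},
    "Steel":    {"Basic AV", "Antimalware", "Patching",
                 "Application Updating", "Basic Monitoring"},
    "Titanium": {"Advanced AV", "Endpoint Security", "Antimalware",
                 "Patching", "Application Updating",
                 "Basic Monitoring", "Advanced Monitoring"},
}

def services_for(code: str) -> set:
    """Return the services an automation policy would apply for a code."""
    return SERVICE_CLASSES.get(code, set())
```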

<h5>Cumulative Code Mapping</h5>

<p>This method creates a filter and automation policy for&nbsp;<em>each distinct product or service</em>&nbsp;instead of the service class. The filter applies a specific product or service when it matches one or more Service Class codes. This is how it works:</p>

<p class="text-indent-1"><strong>Basic AV</strong>&nbsp;- Filter triggers at Iron OR Steel levels</p>

<p class="text-indent-1"><strong>Advanced AV</strong>&nbsp;- Filter triggers at Titanium level</p>

<p class="text-indent-1"><strong>Antimalware </strong>- Filter triggers at Steel OR Titanium levels</p>

<p class="text-indent-1"><strong>Patching </strong>- Filter triggers at Iron OR Steel OR Titanium levels</p>

<p class="text-indent-1"><strong>Basic Monitoring</strong> - Filter triggers&nbsp;at Steel OR Titanium levels</p>

<p class="text-indent-1"><strong>Advanced&nbsp;Monitoring</strong> - Filter triggers&nbsp;at&nbsp;Titanium level</p>

<p>While this is certainly more complex and requires distinct filters and automation policies for each service, it provides greater flexibility when there are additional Service Classes. Consider adding a new "Tin" service class that only provides patching, and an "Aluminum" level with Patching and Application Updating. By simply updating the filter associated with the products to trigger on these new service classes, the automation applies without the need to create both new filters AND automation policies.&nbsp;</p>
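<p>The cumulative approach inverts the table: each service lists the Service Class codes whose filter triggers it, so adding "Tin" or "Aluminum" means only extending a few sets. Again, this is an illustrative restatement of the filter logic, not product code.</p>

```python
# Illustrative cumulative mapping: each service lists the Service Class
# codes that trigger it. The hypothetical "Tin" and "Aluminum" classes
# from the text are included to show how little changes when one is added.
SERVICE_TRIGGERS = {
    "Basic AV":             {"Iron", "Steel"},
    "Advanced AV":          {"Titanium"},
    "Antimalware":          {"Steel", "Titanium"},
    "Patching":             {"Iron", "Steel", "Titanium", "Tin", "Aluminum"},
    "Application Updating": {"Steel", "Titanium", "Aluminum"},
    "Basic Monitoring":     {"Steel", "Titanium"},
    "Advanced Monitoring":  {"Titanium"},
}

def applies(service: str, service_class: str) -> bool:
    """True when the filter for this service triggers at this class level."""
    return service_class in SERVICE_TRIGGERS.get(service, set())
```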

<h5>How the RMM Suite uses Service Class Mapping</h5>

<p>Each day, when the Daily Audit application runs, it determines the Service Class code assigned to the customer. This starts by checking for a Customer Custom Field called CCOS. The value - if defined - is mapped to the "SC:<em>id</em>"&nbsp;tag and written to the System Roles Agent Custom Field, along with any other TAGs based on the applications and services found. The TAGs can be used to drive views to apply policies, which is useful for applying the monitors associated with these Service Classes. The TAG can also be used directly by the Daily Maintenance tool to install application components, either by local script or RMM script.</p>

<p>A second advantage of this method is that the Service Class identity is added to a machine-specific field. Some RMMs do not expose Customer Custom Fields to agent scripting, and this circumvents that limitation.</p>
<br /><a href='http://mspbuilder.com/blog-endpoint-management-service-level-automation'>gbarnas</a>&nbsp;&nbsp;<a href='http://mspbuilder.com/blog-endpoint-management-service-level-automation'>...</a>]]></description>
      <link>http://mspbuilder.com/blog-endpoint-management-service-level-automation</link>
      <author>gbarnas@mspbuilder.com (gbarnas)</author>
      <comments>http://mspbuilder.com/blog-endpoint-management-service-level-automation</comments>
      <guid isPermaLink="true">http://mspbuilder.com/blog-endpoint-management-service-level-automation</guid>
      <pubDate>Tue, 15 Nov 2022 15:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Endpoint Management - Maintaining Components</title>
      <description><![CDATA[<p>Maintaining custom applications can be challenging - you need to detect where the software is installed, which version is present, and then run the appropriate scripts to update the applications. While most RMM platforms can accomplish this, many can't easily identify these custom applications, and tying the detection, version identification, and update automation together isn't a trivial task. That's where the RMM Suite can help.</p>

<h5>Detecting the Application</h5>

<p>The RMM Suite provides a highly customizable Daily Audit feature that can identify applications either by registration with Add/Remove programs or as an installed service. The first step is to determine the best detection method.&nbsp;</p>

<h6>Application Name/Version Detection</h6>

<p>Start by examining the SysInfo.INI file from&nbsp;the agent where the software is installed. This is the Daily Audit cache file and is located in the PROGRAMDATA\MSPB folder.&nbsp;Many RMM platforms will collect this file and store it in the agent's data folder automatically when the Daily Audit completes. Open the file in a text editor such as Notepad and locate the SWINFO section, which lists all of the application / version / vendor data reported by Windows. If the application is listed, copy the application ID into the AUDIT.INI configuration in the APP VERSION ROLES section and define a unique TAG value. Tags should be 3-4 alpha-numeric characters that identify an application and version. Note that multiple detections are possible - a check for "Accounting - 12.6" can map to "AA126" while a generic "Accounting - " can map to "ACCT". The generic tag can detect ANY version while the specific tag identifies a particular version. Once this entry is added to the Audit config data, detections will begin during the next daily operational cycle.</p>
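<p>The prefix-matching behavior described above - a specific entry yields a version tag, a generic entry yields a product tag, and both can match at once - can be sketched like this. The dictionary keys mirror the "Accounting" example; the function name and matching rule are illustrative assumptions about how the entries behave, not the tool's actual parser.</p>

```python
# Illustrative APP VERSION ROLES entries: application-name pattern -> TAG.
APP_VERSION_ROLES = {
    "Accounting - 12.6": "AA126",  # matches one specific version
    "Accounting - ":     "ACCT",   # matches ANY version of the product
}

def tags_for_app(app_name: str) -> set:
    """Return every TAG whose configured pattern matches the app name."""
    return {tag for pattern, tag in APP_VERSION_ROLES.items()
            if app_name.startswith(pattern)}
```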

<h6>Service Detection</h6>

<p>A Windows service provides a simple and direct detection method. Start by identifying the name of the service by running "Net Start" in a command prompt on the endpoint where the application is running. Identify the correct service name and add it to the SERVICE ROLES section in the AUDIT.INI configuration data, assigning an appropriate TAG value. This detection will begin during the next daily operational cycle.&nbsp;</p>
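<p>As a quick illustration of this detection step, the running services listed by "Net Start" can be parsed from its output. The sample output and service names below are made up for the example.</p>

```python
# "net start" prints running service display names, indented one per line.
SAMPLE_NET_START_OUTPUT = """\
These Windows services are started:

   Print Spooler
   Windows Update
   AcmeAccounting Service

The command completed successfully.
"""

def running_services(net_start_output: str) -> set:
    """Extract the indented service names from 'net start' output."""
    return {line.strip() for line in net_start_output.splitlines()
            if line.startswith("   ") and line.strip()}
```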

<h5>Leveraging TAGs in Daily Maintenance</h5>

<p>Each task in Daily Maintenance can be tied to one or more System Role Tags. Actions can be taken when a tag is present or missing, and can be combined to require multiple related tags. Tasks can be used to directly upgrade software by looking for an outdated version TAG or can run multiple tasks to uninstall the existing version and then install the new version. The tasks can be local scripts or RMM Scripts run by API, or any combination. Daily Maintenance can directly unzip and execute packages deployed from an RMM script, or download a package hosted by MSP Builder. (MSP Builder packages utilize a security token to assure authenticity. We host your custom packages at no additional cost and add the security token when created or updated.)&nbsp;</p>

<p>Note that Daily Maintenance tasks are executed in the sequence that they are defined, so be sure to order them so an uninstall is run before the update process.&nbsp;</p>

<h5>Summary</h5>

<p>The RMM Suite automation provides a rapid and low-impact mechanism for application and endpoint maintenance with several significant advantages:</p>

<ul>
	<li>Tasks run every day on every endpoint, providing maximum delivery exposure.</li>
	<li>Daily Audit runs immediately prior to Daily Maintenance, identifying software and services.</li>
	<li>Daily Maintenance can leverage the TAGs set by the audit to update components&nbsp;that are vulnerable or outdated.</li>
	<li>Daily Audit runs again, after Daily Maintenance completes, reporting on the now-current application and component status. This also updates the tags, disabling further update operations.</li>
	<li>No additional RMM automation is needed - no Filters/Views, automation policies, or scheduling, significantly reducing the load and complexity on the RMM platform.</li>
</ul>

<p>&nbsp;</p>
<br /><a href='http://mspbuilder.com/endpoint-management-maintaining-components'>gbarnas</a>&nbsp;&nbsp;<a href='http://mspbuilder.com/endpoint-management-maintaining-components'>...</a>]]></description>
      <link>http://mspbuilder.com/endpoint-management-maintaining-components</link>
      <author>gbarnas@mspbuilder.com (gbarnas)</author>
      <comments>http://mspbuilder.com/endpoint-management-maintaining-components</comments>
      <guid isPermaLink="true">http://mspbuilder.com/endpoint-management-maintaining-components</guid>
      <pubDate>Thu, 20 Oct 2022 14:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Onboarding Automation - Deploying New Components</title>
      <description><![CDATA[<p>Do you need to add new components to a standard configuration for one, several, or all customers?<br />
The RMM Suite Onboard Automation (OBA)&nbsp;tool will help you get this done through a single config file update.</p>

<h5>Using a Standard Configuration</h5>

<p>This concept starts by defining a set of configuration settings and software that needs to be deployed within your support stack. This should include settings and software that you deploy to most or all customers, then settings and software deployed to specific customers. The latter is usually LOB applications, as the RMM Suite install scripts can leverage Cloud Script Variables (CSVs) to both control the deployment globally while delivering customer-specific content.</p>

<p>As your product stack changes - either by adding or replacing products - your Standard Configuration changes. This has been a difficult process for many as configurations may need to change and software uninstalled before installing a new set of products. This is where the RMM Suite OBA tool can help.</p>

<h5>Deploying the Standard Configuration</h5>

<p>The OBA tool runs when an agent first checks in to deploy software and configure the endpoint to meet the Standard Configuration requirements. See this <a href="https://www.mspbuilder.com/blog-onboarding-automation-hands-off-workstation-build-1" target="_blank">blog post</a> for&nbsp;full&nbsp;information on using the OBA Tool and the Standard Configuration.&nbsp;</p>

<h5>Dealing with Change</h5>

<p>Change is inevitable, but it should not be difficult! A typical change to a Standard Configuration is switching to a different product, such as Antivirus software. Let's assume your Standard Configuration used Iron-Man AV, but now you are switching to the more powerful Titanium-Man AV product. You need to remove the old product and then install the new one. This requires just two scripts and three changes to your OBA config file:</p>

<p>Script: <strong>Uninstall Iron-Man AV</strong> - Create an RMM script to uninstall the Iron-Man AV product, suppressing any reboots.</p>

<p>Script:&nbsp;<strong>Install Titanium-Man AV</strong>&nbsp;- Create an RMM script to install the new Titanium-Man AV product, suppressing any reboots.</p>

<p>Change the OBA configuration file:</p>

<ul>
	<li>Disable or remove the definition that installed the Iron-Man AV product</li>
	<li>Add a definition to run the <strong>Uninstall Iron-Man AV</strong> script</li>
	<li>Add a definition to run the <strong>Install Titanium-Man AV</strong> script</li>
</ul>

<p>Once these changes are in your OBA configuration file, the next daily cycle will discover that these two tasks have never been run and will run them on all endpoints (based on the Task Category where these are defined, of course). The next time the endpoint is online and runs the Daily Tasks, these changes to your Standard Configuration will be processed and the endpoint will be compliant with your new standards.</p>

<h5>Summary</h5>

<p>Despite the "Onboarding Automation" name, the capabilities of the OBA tool extend to helping you maintain a "Standard Configuration" without complex scripting or other RMM automation tools. The OBA tool also works hand in hand with the Daily Maintenance tool, which can be used to deploy and update components on the endpoint, especially when using Customer Class of Service (CCOS) tags. Daily Maintenance&nbsp;could be used to deploy and maintain either Iron-Man AV or Titanium-Man AV based on the CCOS tag being "Iron" or "Titanium" (or any other level-identification term).&nbsp;</p>
<br /><a href='http://mspbuilder.com/blog-onboarding-automation-deploying-new-components-1'>gbarnas</a>&nbsp;&nbsp;<a href='http://mspbuilder.com/blog-onboarding-automation-deploying-new-components-1'>...</a>]]></description>
      <link>http://mspbuilder.com/blog-onboarding-automation-deploying-new-components-1</link>
      <author>gbarnas@mspbuilder.com (gbarnas)</author>
      <comments>http://mspbuilder.com/blog-onboarding-automation-deploying-new-components-1</comments>
      <guid isPermaLink="true">http://mspbuilder.com/blog-onboarding-automation-deploying-new-components-1</guid>
      <pubDate>Tue, 30 Aug 2022 14:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Onboarding Automation - Hands-Off Workstation Build</title>
      <description><![CDATA[<p><em>Can you imagine deploying 60-70 new computers per day for multiple customers with just one or two techs?</em><br />
The RMM Suite and your RMM platform make this "child's play"!</p>

<p>The RMM Suite provides an Onboard Automation tool that can fully automate the ongoing deployment of endpoint software and configuration tasks. It's a powerful tool that takes just a few minutes of planning and setup to effectively leverage. Follow along as we walk through a typical setup scenario.</p>

<h5>General Concepts</h5>

<p>The&nbsp;Onboard Automation (OBA) tool runs when an agent first checks into the RMM platform. If an alarm is configured (recommended), then this happens within 2-3 minutes of the agent first being installed and checking in. Without the alarm, it can take up to 24 hours for the RMM to schedule the daily automation tasks, so using the alarm is essential if you are going to deploy many new systems from your tech bench.&nbsp;</p>

<p>The OBA tool consults a configuration file for a list of RMM scripts to run. It uses the APIs to run these scripts based on an agent's classification within several task categories. This initial run usually executes many scripts to deploy software and configure the endpoint, first for MSP tasks and then for customer-specific tasks. This provides a high degree of flexibility in a highly automated process.</p>

<p>Once the initial execution runs, the OBA tool continues to run each day, comparing the current task list with what has already been completed. If a new task is found, it is run and the process logged. This allows a "Standard Configuration" to be defined by the MSP for internal and client-specific settings that is maintained automatically.&nbsp;</p>

<h5>Task Categories</h5>

<p>There are several categories of tasks that the&nbsp;OBA&nbsp;tool uses to decide what it should do.&nbsp;There are three "general" categories that allow the MSP to perform their tasks, called "All Agents", "All Servers", and "All Workstations". Scripts defined in these categories run on all endpoints unless specifically excluded for a particular customer. This allows you, for example, to deploy a configuration script to every workstation while excluding a specific customer that needs an alternate setting. The concept allows tailoring an otherwise broadly deployed process.</p>

<p>Additional categories are tied to specific customers, allowing execution of scripts on all agents, just servers, just workstations, or even workstations in a specific site location.&nbsp;</p>

<h5>Automation Controls</h5>

<p>The OBA configuration file simply defines each of the task categories, then lists the names of the scripts that should be executed. Each script has a control parameter associated with it - Yes | No | All - that controls how it will be run. "No" allows the script to remain in the config file but it is disabled and will be ignored. "Yes" will cause the script to be executed if the agent belongs to a "managed" group. This will skip any agent that is considered "unmanaged" (break/fix or not yet managed/onboarding stage). The "All" option allows the script to execute on all endpoints regardless of the managed/unmanaged status. This is especially useful for scripts that configure the endpoint or the RMM agent itself.</p>
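<p>The Yes | No | All decision can be summarized in a few lines; this sketch is an illustration of the rules as described, not the tool's actual code.</p>

```python
def should_run(control: str, is_managed: bool) -> bool:
    """Illustrative decision logic for the Yes | No | All control flag:
    'No' disables the script, 'Yes' runs it only on managed agents,
    and 'All' runs it regardless of managed/unmanaged status."""
    control = control.strip().lower()
    if control == "no":
        return False
    if control == "all":
        return True
    if control == "yes":
        return is_managed
    return False  # unknown values are treated as disabled
```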

<h5>Preparation And Planning</h5>

<p>Preparation mainly consists of creating the RMM scripts needed to perform the application installation and endpoint configuration processes. These should be created and tested before defining them in the OBA config file. Testing can be completed simply by manually executing the script from the RMM platform.</p>

<p>Planning requires an understanding of what processes should be performed globally, which customers should be excluded from global tasks, and then selecting the customer-specific tasks. Something to consider here - the RMM Suite app installers are often generic and employ a Cloud Script Variable to assign a license key or similar configuration setting. Customers that don't have a key will abort that script without a "failure", allowing this script to potentially be applied globally, yet executed only where a CSV value has been defined.&nbsp;</p>

<h5>Operation</h5>

<p>This is the easy part! Depending on your RMM, you may need to enable the "New Agent" alarm to allow the onboarding process to run immediately after the first check in. Everything from that point onward is automated. Simply maintain the OBA config file to add scripts as the standard configuration changes, knowing that these will run automatically the next time the Daily Tasks are run.</p>

<p>Note that you can define scripts that install, remove, or update endpoint components, but once a specific script has run successfully, it will NOT be run again. Updates can be deployed by including some specific text in the name such as "Update XXX to V3.45" or "uninstall XXX V3.0". The RMM Suite maintains this tracking in the "init" sub-key of our registry path, so an alternative would be to clear or remove the key if you need to run a task again.&nbsp;</p>
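<p>The run-once behavior can be modeled with a simple completion record; here a plain dict stands in for the registry "init" sub-key the post describes, and clearing an entry corresponds to removing the key to make a task eligible again.</p>

```python
# Illustrative run-once tracking. A script runs only if its name has no
# completion record; a differently named script ("Update XXX to V3.45")
# is treated as a new task.
completed = {}

def run_if_new(script_name: str) -> bool:
    """Run the named script once; skip it on every later cycle."""
    if completed.get(script_name):
        return False          # already ran successfully; skip
    # ... execute the RMM script here ...
    completed[script_name] = True
    return True
```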
<br /><a href='http://mspbuilder.com/blog-onboarding-automation-hands-off-workstation-build-1'>gbarnas</a>&nbsp;&nbsp;<a href='http://mspbuilder.com/blog-onboarding-automation-hands-off-workstation-build-1'>...</a>]]></description>
      <link>http://mspbuilder.com/blog-onboarding-automation-hands-off-workstation-build-1</link>
      <author>gbarnas@mspbuilder.com (gbarnas)</author>
      <comments>http://mspbuilder.com/blog-onboarding-automation-hands-off-workstation-build-1</comments>
      <guid isPermaLink="true">http://mspbuilder.com/blog-onboarding-automation-hands-off-workstation-build-1</guid>
      <pubDate>Mon, 15 Aug 2022 14:00:00 GMT</pubDate>
    </item>
    <item>
      <title>The RMM Suite: Rewritten</title>
      <description><![CDATA[<p>Continuous improvement is a core value that has made the RMM Suite what it is today. In that spirit, we’ve been hard at work creating our largest update yet!</p>

<p>In our upcoming RMM Suite Version 3, we’re happy to announce that we’re now <b>platform independent</b>, opening the door for more than just Kaseya VSA clients to reap the benefits of efficient and highly automated RMM management. And that’s just one of the dozens of improvements included in this new release.</p>

<p>Every aspect of The RMM Suite has been redesigned and rewritten from the ground up, built over the last four years in collaboration with our customers to make it as relevant and purposeful as possible. This is <i>your</i> RMM Suite.</p>

<p><strong>Some of the new updates include:</strong></p>

<ul>
	<li>Quick-audits that <strong>remediate Zero-Day Vulnerabilities</strong> <strong>immediately</strong><br />
	&nbsp;</li>
	<li>Platform Telemetry - we know what is or isn’t working in <i>real time</i> and can proactively address any issues<br />
	&nbsp;</li>
	<li>Secure configuration - Individual variables that are used for credentials can be set as "ciphered" without requiring additional effort or software. Ciphered data is stored and delivered securely and deciphered only when it is actually used.<br />
	&nbsp;</li>
	<li>Application Updating is now available directly from MSP Builder, providing a comprehensive update solution using the best features of Ninite Pro, Chocolatey, and direct deployment - without having to buy those products individually</li>
</ul>

<p><strong>Take a look below to see the complete list of updates!</strong></p>

<p>&nbsp;<a href="https://mspbwebcdn.azureedge.net/Documents/Manuals/Release_Notes_3.0.22-175.pdf">Version 3.0 Release Notes</a></p>
<br /><a href='http://mspbuilder.com/the-rmm-suite-rewritten'>lbarnas</a>&nbsp;&nbsp;<a href='http://mspbuilder.com/the-rmm-suite-rewritten'>...</a>]]></description>
      <link>http://mspbuilder.com/the-rmm-suite-rewritten</link>
      <author>lbarnas@mspbuilder.com (lbarnas)</author>
      <comments>http://mspbuilder.com/the-rmm-suite-rewritten</comments>
      <guid isPermaLink="true">http://mspbuilder.com/the-rmm-suite-rewritten</guid>
      <pubDate>Wed, 29 Jun 2022 18:47:00 GMT</pubDate>
    </item>
    <item>
      <title>Implementing Security Roles the Right Way</title>
      <description><![CDATA[<p>Defining effective user security roles provides you with an added layer of security within your VSA. User Roles define which modules and settings a user can access from the VSA console. While there is no “one size fits all” model, the concepts presented here will provide appropriate access for technicians, engineers, VSA admins, and VSA managers.</p>

<p>In our typical VSA implementation, we create four distinct access levels, and several sub-types for MSP employees, plus three roles for customer access. <b>No user has Master role rights in our deployment configuration.</b> These roles should have a “NOC” or “MSP” prefix to designate them as internal roles.</p>

<p><b>Level 0 – Support</b>. A role designed for support staff to access VSA for running reports, getting agent counts, or checking used and available licenses. No access to automation is available, but these users can view agents and have virtually unlimited access to the reporting functions.</p>

<p><b>Level 1 – Technician</b>. This role, which we name “NOC-1-Tech”, grants the ability to perform basic agent administration, view audit and other configuration settings, and access remote control features. This covers about 80% of what a technician does on a daily basis for end-user support.</p>

<p><b>Level 2 – Administrator</b>. Named “NOC-2-Admin” in our system, this role grants additional capabilities to run procedures, deploy AV and AM, and perform most agent configurations. Neither of the above roles permits changing VSA-wide settings.</p>

<p><b>Level 5 – Specialist</b>. These roles grant VSA administration rights to specific features, distributing the administration tasks among multiple users. In our practice, we use the following specialist types:</p>

<ul>
	<li><b>Security</b> – provides the ability to perform all Auth Anvil configuration and management tasks.</li>
	<li><b>AV-Malware </b>– grants access to administer the Antivirus and Malware components, including definition of profiles, policies, and assigning them to customers.</li>
	<li><b>Updating</b> – allows administration of all Patch Management and Software Management components. It may also allow access to other application updating components.</li>
	<li><b>Backup</b> – Allows configuration of all VSA settings related to backup operations.</li>
	<li><b>Manager </b>– grants a combination of the above roles, usually assigned to the Dispatch, Helpdesk, or Technical Manager(s).</li>
</ul>
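
<p>For illustration, the tiered model above can be sketched as a simple capability map. The role names follow the post's naming convention, but the capability identifiers here are hypothetical placeholders, not actual VSA permission objects:</p>

```python
# Sketch of a tiered role model: each role maps to the capabilities it grants.
# Role names follow the "NOC-<level>-<type>" convention; capability names
# are illustrative only.
ROLE_CAPABILITIES = {
    "NOC-0-Support": {"view_agents", "run_reports"},
    "NOC-1-Tech": {"view_agents", "run_reports", "agent_admin", "remote_control"},
    "NOC-2-Admin": {"view_agents", "run_reports", "agent_admin", "remote_control",
                    "run_procedures", "deploy_av_am"},
    "NOC-5-Updating": {"patch_mgmt", "software_mgmt"},
}

def can(role: str, capability: str) -> bool:
    """Return True if the named role grants the capability."""
    return capability in ROLE_CAPABILITIES.get(role, set())
```

<p>Because no role in the map includes a master-level capability, the sketch also reflects the rule above that no user holds Master role rights.</p>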

<p><strong>Implementing these security roles will allow for better security and organization within your VSA infrastructure. To learn more, <a href="https://www.mspbuilder.com/request-demo2">schedule a demo for MSP Builder’s RMM Suite!</a></strong></p>

<br /><a href='http://mspbuilder.com/implementing-security-roles'>lbarnas</a>&nbsp;&nbsp;<a href='http://mspbuilder.com/implementing-security-roles'>...</a>]]></description>
      <link>http://mspbuilder.com/implementing-security-roles</link>
      <author>lbarnas@mspbuilder.com (lbarnas)</author>
      <comments>http://mspbuilder.com/implementing-security-roles</comments>
      <guid isPermaLink="true">http://mspbuilder.com/implementing-security-roles</guid>
      <pubDate>Mon, 16 Aug 2021 15:39:00 GMT</pubDate>
    </item>
    <item>
      <title>STOP! How outdated are your management scripts?</title>
      <description><![CDATA[<div>During a recent audit of an MSP's onboarding processes, I found several Agent Procedures that seemed interesting. I had not seen any other MSP performing some of these configuration steps, so I looked more deeply at the logic in these procedures. What I found would have turned any hair I had left white!</div>

<div>&nbsp;</div>

<div>One procedure in particular was named "Set Access Rights for PerfMon Folders". "What PerfMon folders?" I wondered. Looking at the procedure, the description stated that it was modifying the Kaseya working folder permissions to allow PerfMon to access the KLogs folder. It did this by changing the permissions to "Everyone:Full Control"!</div>

<div>&nbsp;</div>

<div>Looking closer, I was able to determine that this procedure was quite old, likely developed for VSA version 6 or earlier, and had never been updated. While it's possible that older versions of VSA did not provide adequate access to the KWorking folder, that is no longer the case. Administrators have full control, and even users have Read &amp; Execute, so there is no issue with PerfMon reading this location.</div>

<div>&nbsp;</div>

<div>The most important thing to realize is that things change. If you have processes that haven't changed in years, it's time to review them and decide whether they are still needed or need an update. This procedure, if not identified, would have introduced significant risk into the MSP environment by granting Full Control rights to every account on a critical system folder. Imagine that a malicious user replaces an EXE or updates a script to call malware or ransomware. If the agent procedure doesn't replace these scripts and blindly calls them - often with SYSTEM rights - the damage could be extensive.</div>

<div>&nbsp;</div>

<div>Why risk this? Take time to review your procedures and tools to make sure they are still required and operate in compliance with today's security model. Remove processes that are no longer needed, and update those that are still needed to follow current security requirements. The business you save might be your own!</div>
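
<p>As a concrete starting point for such a review, grants like the "Everyone:Full Control" permission described above can be flagged automatically. This is a minimal sketch, assuming ACE strings in the <code>principal:(flags)</code> form that icacls prints (with any path prefix already stripped); it is not a complete ACL parser:</p>

```python
def risky_grants(aces):
    """Flag ACEs that give Full Control (F) to broad, well-known principals."""
    broad = {"everyone", "users", "authenticated users"}
    findings = []
    for ace in aces:
        principal, _, flags = ace.strip().partition(":")
        # Compare only the account name, ignoring a BUILTIN\ or domain prefix.
        name = principal.rsplit("\\", 1)[-1].lower()
        if name in broad and "(F)" in flags:
            findings.append(ace.strip())
    return findings
```

<p>Feeding this the KWorking folder's ACEs would flag <code>Everyone:(OI)(CI)(F)</code> while leaving the normal SYSTEM and Administrators Full Control grants alone.</p>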
<br /><a href='http://mspbuilder.com/blog-how-outdated-are-your-management-scripts'>gbarnas</a>&nbsp;&nbsp;<a href='http://mspbuilder.com/blog-how-outdated-are-your-management-scripts'>...</a>]]></description>
      <link>http://mspbuilder.com/blog-how-outdated-are-your-management-scripts</link>
      <author>gbarnas@mspbuilder.com (gbarnas)</author>
      <comments>http://mspbuilder.com/blog-how-outdated-are-your-management-scripts</comments>
      <guid isPermaLink="true">http://mspbuilder.com/blog-how-outdated-are-your-management-scripts</guid>
      <pubDate>Sun, 23 Feb 2020 14:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Guest Blog-Why are we always surprised?</title>
      <description><![CDATA[<p style="margin:0px"><font color="#000000"><font face="Calibri"><font size="3">We are approaching the five-year anniversary of our first MSP-Ignite Peer Group Meeting. I find myself thinking about all of the conversations and common issues that have been discussed. One universal truth surrounds the discovery made after any of us loses a member of our team. </font></font></font></p>

<p style="margin:0px">&nbsp;</p>

<p style="margin:0px"><font color="#000000"><font face="Calibri"><font size="3">The phenomenon usually occurs after several months of debating whether or not to let a problem employee go. The shocking discovery, however, seems to be the same no matter what terms the former employee leaves under. Of course, compounding the entire issue is the fact that as owners or managers we no longer believe that we remember how to do many of the tasks performed by others.</font></font></font></p>

<p style="margin:0px">&nbsp;</p>

<p style="margin:0px"><font color="#000000"><font face="Calibri"><font size="3">As MSPs we are constantly discussing the “pro-active” measures we take on behalf of our clients. Yet somehow, we don’t think pro-actively when it comes to our own businesses. We work tirelessly to build a team of people that handle every aspect of our businesses. We remove ourselves from the day-to-day operations of the business and celebrate the fact that we did so as if it is our lifelong goal. </font></font></font></p>

<p style="margin:0px">&nbsp;</p>

<p style="margin:0px"><font color="#000000"><font face="Calibri"><font size="3">Perhaps, after we finish celebrating, we should apply that pro-active mentality to the health of our businesses. When’s the last time you jumped in and just worked a ticket? Spent half a day as a Service Coordinator? Handled the Approve &amp; Post process? Performed a Strategic Business Review? Not only should you and your leadership team jump in every once in a while, you should do so in areas that are not necessarily in your wheelhouse. </font></font></font></p>

<p style="margin:0px">&nbsp;</p>

<p style="margin:0px"><font color="#000000"><font face="Calibri"><font size="3">Actually, pull the documentation and follow it to the end. Analyze everything about the environment surrounding the task and take note of where it can be improved. Take the time to look at how previous tickets for the client were handled, how the billing was done the previous month, and what was discussed at the last SBR. In other words, pro-actively look for where the processes are not being followed or can be improved. Look for areas where your staff is less than perfect. You don’t necessarily have to take action on any of your findings. Just use the information to avoid surprises down the road.</font></font></font></p>

<p style="margin:0px">&nbsp;</p>

<p style="margin:0px"><font color="#000000"><font face="Calibri"><font size="3">Steve Alexander / MSP-IGNITE</font></font></font></p>

<p style="margin:0px">&nbsp;</p>

<p><i><span style="font-size:12pt; margin:0px"><span style="font-family:&quot;Calibri&quot;,sans-serif"><font color="#000000">Steve Alexander has over 30 years of experience running multiple IT Service Providers and MSPs. As the owner of MSP-Ignite he facilitates industry leading peer groups with a unique approach designed to make every member feel like they are being guided towards their own goals by a private consultant.</font></span></span></i></p>
<br /><a href='http://mspbuilder.com/guest-blog-why-are-we-always-surprised'>gbarnas</a>&nbsp;&nbsp;<a href='http://mspbuilder.com/guest-blog-why-are-we-always-surprised'>...</a>]]></description>
      <link>http://mspbuilder.com/guest-blog-why-are-we-always-surprised</link>
      <comments>http://mspbuilder.com/guest-blog-why-are-we-always-surprised</comments>
      <guid isPermaLink="true">http://mspbuilder.com/guest-blog-why-are-we-always-surprised</guid>
      <pubDate>Mon, 11 Nov 2019 15:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Security - Multi-Factor Authentication</title>
      <description><![CDATA[<p>There’s no doubt that Multi-Factor Authentication is a hot topic and an excellent way to improve secure access to your infrastructure. Remote access to your RMM and PSA tools, as well as the RDP Gateway, will benefit from using MFA. But what about access when you are in the office – do you need MFA there?</p>

<p>Since you are already in a protected environment (you lock your doors and have a firewall and other logical and physical security – right?), you may not need to require MFA internally. Most MFA solutions provide one or more methods of “whitelisting”. Which method you choose will make the difference between being secure and not…</p>

<h4>User Whitelisting</h4>

<p>User whitelisting is used for application accounts that would not be accessed externally, or support accounts that need to be used by external support teams. You would <i>never</i> whitelist your employees! Unfortunately, we see this configuration all too often. When we point it out, the response is typically “yes, but we trust our team!”</p>

<p>Sure – you can trust your employee, but you <i>can’t trust their credentials!</i> That’s the distinction that MFA makes. If an employee’s credentials are compromised, any bad actor can try to log in. If their account is whitelisted, there would be no Multi-Factor authentication and access would be granted!</p>

<h4>Network Whitelisting</h4>

<p>Network whitelisting identifies the internal network range(s) that you trust – typically the office network <i>public addresses</i>. In most situations, this would be the public IP address assigned to your external firewall (or firewalls if you have redundant Internet connections). This is the preferred way to allow your techs to work without MFA when in the office, but require it when they are at home, customer sites, or otherwise outside of the office.</p>
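
<p>The network-whitelisting rule reduces to a membership test against your office's public address range(s). A minimal sketch using Python's <code>ipaddress</code> module, with a placeholder address from the documentation range standing in for a real firewall IP:</p>

```python
import ipaddress

# Hypothetical whitelist: the public IP(s) of the office firewall(s).
OFFICE_NETWORKS = [ipaddress.ip_network("203.0.113.10/32")]

def mfa_required(source_ip: str) -> bool:
    """Require MFA unless the login comes from a whitelisted office address."""
    addr = ipaddress.ip_address(source_ip)
    return not any(addr in net for net in OFFICE_NETWORKS)
```

<p>With this rule, a tech at the office egresses from the whitelisted address and logs in without an MFA prompt, while the same credentials used from home or a customer site still require the second factor.</p>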

<h4>MSP Builder Tools</h4>

<p>Many of the MSP Builder tools utilize the VSA APIs to perform their tasks. While these tools use the API to authenticate over an SSL connection, there are some processes that we follow to improve the security of these tools. As each tool runs, it requests an authorization token to do its work by authenticating to VSA. The tasks that the tools perform take anywhere from a few milliseconds to about 3 seconds to complete. Once the task completes, the tool closes the session, invalidating the authorization token.</p>

<p>Another level of security that we use is MSP Builder License Authorization. All tools that utilize the APIs must first authenticate to the MSP Builder licensing server. This authorization is extremely difficult (if not impossible) to circumvent, requiring multiple data parts to complete a license validation. In a sense, this is a form of MFA for the tools that utilize the Kaseya APIs. Our API account does not require the password to be distributed to the systems that use it, and is designed to be changed on a 24-hour cycle, increasing the difficulty of a brute-force attack.</p>

<h4>Summary</h4>

<p>Multi-Factor Authentication is an excellent method to improve the security and integrity of your environment, but it requires careful and correct configuration. Incorrect settings will negate the security you’re trying to deploy, so take your time, double and triple-check your configuration, and use whitelisting options properly!</p>
<br /><a href='http://mspbuilder.com/blog-security-multi-factor-authentication'>gbarnas</a>&nbsp;&nbsp;<a href='http://mspbuilder.com/blog-security-multi-factor-authentication'>...</a>]]></description>
      <link>http://mspbuilder.com/blog-security-multi-factor-authentication</link>
      <author>gbarnas@mspbuilder.com (gbarnas)</author>
      <comments>http://mspbuilder.com/blog-security-multi-factor-authentication</comments>
      <guid isPermaLink="true">http://mspbuilder.com/blog-security-multi-factor-authentication</guid>
      <pubDate>Sat, 12 Oct 2019 16:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Kaseya Connect IT 2019</title>
      <description><![CDATA[<p>Kaseya Connect is always an exciting event. We look forward to meeting new people, walking 4+ miles each day, learning new things that will grow our business, eating, drinking, and having fun. Did I mention long walks? This year was especially gratifying for the MSP Builder team. Lauren Barnas, our Digital Marketing Manager, joined us for her first Connect experience. We had productive meetings with the VSA and Traverse engineering teams, as well as most of Kaseya's senior management team. We also got to meet so many of our RMM Suite customers, many for the first time face to face. It's so nice to be able to put a face with so many of the people we talk to.</p>

<p><img alt="Young Mike Puglia?" class="image-right" src="http://mspbuilder.com/Data/Sites/1/media/Images/c19_youngmike.jpg" width="200" />There were some surprises, like when Mike Puglia shared a photo from an early career path. At least, that's what I was told - I had stepped out of the room at that moment. At least I can see where he gets his determination and focus from!</p>

<p><img alt="Elixir of Life" class="image-left" src="http://mspbuilder.com/Data/Sites/1/media/Images/c19_lifegiving.jpg" width="200" />Of course, there were regular gatherings at the dispensers of the magical Elixir of Life, especially during the morning breaks. Many of us fought for space at tables close to these oases. I even tipped one of the urns to allow a fellow Kaseyan to enjoy one more cup. Squeezing the urn did not help, however. These sessions were a great place to meet and share experiences and ideas.</p>

<p>A personal highlight of this event was my presentation, which - thanks to confusion surrounding both the title and the description - became known as "Automate or Die!". Thanks to Lauren, a last-minute post provided clarity about the topic and content, and we had far more people attend than the handful I expected.</p>

<p><img alt="" class="image-right" src="http://mspbuilder.com/Data/Sites/1/media/Images/c19_presentation1.jpg" width="300" /></p>

<p><img alt="" class="image-left" src="http://mspbuilder.com/Data/Sites/1/media/Images/c19_presentation3.jpg" width="250" />The session covered many of the best practices related to using the VSA that allow you to use automation effectively.</p>

<p>One topic that we covered was how to use Custom Fields effectively. These can be used for reporting as well as to control the automation in your VSA. Used effectively and with proper planning, they will pay huge automation dividends.</p>

<p>&nbsp;</p>

<p>Some of the best moments of the event came during the awards ceremony. The MSP practice (Baroan Technologies) that spawned <strong>MSP Builder </strong>was nominated for the <img alt="" class="image-right" height="323" src="http://mspbuilder.com/Data/Sites/1/media/Images/c19_award2.jpg" width="450" /><strong>MSP Efficiency Award</strong>. Shortly thereafter, we were awarded the <strong>Community Award</strong> - which is awarded to "<em>companies that have demonstrated thought leadership within the IT Services industry by sharing best practices with peers to shape a better technical community</em>". Here, Dimitri Miaoulis looks on as I accept the award. We're all honored to be recognized, and are happy to continue to share our knowledge and experience with the MSP and IT communities.</p>

<p><img alt="" class="image-left" src="http://mspbuilder.com/Data/Sites/1/media/Images/c19_award4.jpg" /></p>


<p class="image-center"><img alt="Techie Community Award" src="http://mspbuilder.com/Data/Sites/1/media/Images/c19_techie.jpg" /></p>

<br /><a href='http://mspbuilder.com/blog-kaseya-connect-it-2019'>gbarnas</a>&nbsp;&nbsp;<a href='http://mspbuilder.com/blog-kaseya-connect-it-2019'>...</a>]]></description>
      <link>http://mspbuilder.com/blog-kaseya-connect-it-2019</link>
      <author>gbarnas@mspbuilder.com (gbarnas)</author>
      <comments>http://mspbuilder.com/blog-kaseya-connect-it-2019</comments>
      <guid isPermaLink="true">http://mspbuilder.com/blog-kaseya-connect-it-2019</guid>
      <pubDate>Sat, 11 May 2019 14:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Agent Offline is NOT Server Down!</title>
      <description><![CDATA[<p>When I work with MSPs, I almost always find that they apply the "Agent Offline" monitor to servers with a 5-minute (or less) threshold. They tell us that false alarms are one of their biggest challenges. Changing how they monitor server availability is one of the easiest ways to reduce false alerts.</p>

<p>The first thing to recognize is that the monitor for Agent Check-In has nothing to do with whether a server is functioning. The agent is simply a service-based application on the server. Its job is to communicate with the VSA on a regular schedule and determine if it should either perform tasks or deliver information. This is considered a low-priority service, and - by design - the agent will relinquish resources when the server is under stressful load conditions. This can prevent the agent software from checking in with VSA for several minutes. When this triggers an alarm, it clearly isn't because the server has crashed or become otherwise unavailable; it's just busy. Servers starting to run backups shortly after midnight would trigger a rash of agent offline alerts, waking up the on-call team member, who would find everything working just fine.</p>

<p>So - how can this be improved? The first step is to set the check-in alarm to a longer period, so it identifies when the agent software itself has a problem. Our default is one hour.</p>

<p>Next, use an Out of Band monitor to check server health. Kaseya Network Monitor is built into VSA and can handle the job nicely. Don't use ping, because a smart NIC will reply even if the O/S has crashed. We check for the Server service, which typically must be running for the server to function, but more importantly, to get a response, the Operating System has to be <em>functional</em>. We set the time on this monitor long enough for the server to reboot <em>without</em> triggering an alarm, but short enough so that a failure will be detected and reported in a timely manner. The default alarm time we use is 15 minutes for most servers, and 7 minutes for critical systems.</p>

<p>We then add a set of monitors that report when the server has rebooted during business hours or booted into system recovery modes. These are Smart Monitors that run at startup and check for these specific conditions. Alarms for business hours reboots fire immediately, while recovery mode detection delays a brief period so an engineer can boot, perform a quick recovery operation, and reboot again.</p>

<p>Finally, we configure the agent to generate alarms for certain crash conditions. This might result in two alarms for the same event - one for the work-day reboot and one for the reason why.</p>

<p>With this method, our customers get better information, faster and more appropriate alerting, and virtually zero false alarms for server-down conditions.</p>
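
<p>The layered thresholds described above can be sketched as a small decision function. The minute values mirror the defaults mentioned in this post; the alarm names are illustrative:</p>

```python
AGENT_OFFLINE_MIN = 60        # agent check-in problem, not necessarily server down
SERVICE_DOWN_MIN = 15         # out-of-band health check, standard servers
SERVICE_DOWN_CRIT_MIN = 7     # out-of-band health check, critical servers

def alerts(agent_offline, service_down, critical=False):
    """Return the alarms to fire, given outage durations in minutes."""
    fired = []
    limit = SERVICE_DOWN_CRIT_MIN if critical else SERVICE_DOWN_MIN
    if service_down >= limit:
        fired.append("SERVER DOWN")      # the OS failed the out-of-band check
    if agent_offline >= AGENT_OFFLINE_MIN:
        fired.append("AGENT OFFLINE")    # only the agent software needs attention
    return fired
```

<p>A server that is merely busy for ten minutes fires nothing; a crashed critical server fires a server-down alarm within seven minutes; an hour-long agent outage on a healthy server is reported as an agent problem, not a down server.</p>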
<br /><a href='http://mspbuilder.com/blog-agent-offline-is-not-server-down'>gbarnas</a>&nbsp;&nbsp;<a href='http://mspbuilder.com/blog-agent-offline-is-not-server-down'>...</a>]]></description>
      <link>http://mspbuilder.com/blog-agent-offline-is-not-server-down</link>
      <author>gbarnas@mspbuilder.com (gbarnas)</author>
      <comments>http://mspbuilder.com/blog-agent-offline-is-not-server-down</comments>
      <guid isPermaLink="true">http://mspbuilder.com/blog-agent-offline-is-not-server-down</guid>
      <pubDate>Sat, 27 Apr 2019 14:30:00 GMT</pubDate>
    </item>
    <item>
      <title>Security Starts with Best Practices</title>
      <description><![CDATA[<p>Security is the latest buzzword in technology, and everyone is looking at products and services to help secure their environment. What amazes me is that when I ask about documented Standards and Practices, I usually hear crickets chirping. Consider these common "best practices" you already follow:</p>

<p>When you go to the shopping mall, you lock your car, right?</p>

<p>If you have your tech kit with you, you lock it in the trunk, right?</p>

<p>When you leave for work in the morning, you lock the front door of your house, right?</p>

<p>You tell your kids not to take candy from strangers, right?</p>

<p>These are all forms of "best practices" that we follow without thinking, so why do we still open port 3389 for RDP? The common answer is that the client can't afford to implement an RDP gateway or licenses for VPN access. Then there are the outdated and unpatched systems. These are often kept to provide access to archival data, yet they remain on the production network. Take them off the network or put them in a separate network with restricted access.</p>
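
<p>Auditing that kind of exposure is cheap to automate. Here is a minimal sketch of a TCP reachability check; run something like it from outside the network, against hosts you manage, to see whether RDP on port 3389 is actually exposed:</p>

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Self-contained demo: listen on an ephemeral local port and probe it.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
demo_port = listener.getsockname()[1]
found = port_open("127.0.0.1", demo_port)
listener.close()
```

<p>Any client firewall that answers on 3389 from the outside deserves a conversation about an RDP gateway or VPN.</p>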

<p>The questions you need to ask are:</p>

<ul>
	<li>Can your customer's business survive if they lost every bit of data they had?</li>
	<li>Could they pay the "ransom" if their data was encrypted?</li>
	<li>What would be the impact to <em>your business</em> when your client suffers a loss because you didn't observe (or enforce) good practices?</li>
</ul>

<p>We have <strong>Standards and Practices </strong>documents for many aspects of our MSP practice. These ensure that the work we do is consistent and follows reasonable and secure methods. Our "SAP" documents cover things like building web servers, creating a network time infrastructure, server disk partitioning, securing admin accounts, network segmentation, printer management, and building virtualization platforms (with specifics for VMware and Hyper-V). These range from as few as three pages to two dozen or more, depending on the topic and number of variations.</p>

<p>These are guidelines, not rules, that establish the specific configuration and "build" documents that the engineers follow. This takes some effort, but standards breed consistency and that provides a level of control over the environments. When you control an environment, you increase the overall reliability, which reduces your work and improves customer satisfaction.</p>

<p>The entire MSP Builder solution stack is built on standards and consistency. This allows us to develop automation that works well because it leverages the standards in our foundational products. This also helps with security - everything is designed and documented, not merely "implemented". If a risk is discovered, it's easy to identify the scope and remediate.</p>

<p>So - start small: create some basic operational standards <em>and follow them!</em> Get buy-in from your engineers, because your business is only as good as your staff. Not sure where to start? We've posted our Standards Documents in the document library on the downloads page, available for free to registered users. Use Google to see what others are doing and adapt it to your environment. Don't delay.</p>

<p>&nbsp;</p>
<br /><a href='http://mspbuilder.com/blog-security-starts-with-best-practices'>gbarnas</a>&nbsp;&nbsp;<a href='http://mspbuilder.com/blog-security-starts-with-best-practices'>...</a>]]></description>
      <link>http://mspbuilder.com/blog-security-starts-with-best-practices</link>
      <author>gbarnas@mspbuilder.com (gbarnas)</author>
      <comments>http://mspbuilder.com/blog-security-starts-with-best-practices</comments>
      <guid isPermaLink="true">http://mspbuilder.com/blog-security-starts-with-best-practices</guid>
      <pubDate>Sun, 21 Oct 2018 14:33:00 GMT</pubDate>
    </item>
    <item>
      <title>Recovering Policies that have Disappeared</title>
      <description><![CDATA[<p><span class="font-large"><strong>When Policies Disappear...</strong></span></p>

<p>Many MSPs have complained recently about policies (and possibly other objects) disappearing when they are moved within a specific group, or moved and dropped into a group. If you are an On-Prem VSA user, the following SQL queries can be used to recover these objects. The queries can also be provided to support if your VSA is hosted in SaaS - they will need to be adjusted for the proper tenant ID.</p>

<p><span class="font-large"><strong>The problem</strong></span></p>

<p>...occurs when you try to move an object within a folder (change the display order) or move an object into a different folder. This causes the object to "disappear". The problem, specifically, is that the objects within a folder are identified by a folder path and a parent ID. The two usually match, but the current bug seems to change the parent ID value. This prevents the object from appearing in the folder location even though it is identified as belonging in that folder path.</p>

<p><span class="font-large"><strong>Workarounds</strong></span></p>

<p>Until the problem is fixed, don't manually re-order objects within a folder, no matter how much it bothers your desire for order and structure! :)</p>

<p>If you need to move an object into another folder, drop it onto the folder icon directly, not into the folder in-between other objects. If you hold the object over the folder icon momentarily, you'll notice a small flash - the text switches to italic and back to normal. This indicates that the target has been validated - that's when it is OK to drop the object.</p>

<p><span class="font-large"><strong>Recovery</strong></span></p>

<p>There are two methods for recovery, and both work equally well. Choose one or the other based on your comfort level with performing SQL Queries on your VSA.</p>

<p><strong>Non-SQL Method</strong></p>

<p>This method takes a bit longer and is a bit more invasive, but requires no skill with running SQL update queries.</p>

<ol>
	<li>Export ALL the objects from the folder containing the missing object via Import Center - the missing object should be visible there!</li>
	<li>Edit the exported XML - compare the "missing" object with the other objects - the "parentId" will probably be different for the missing object. Update the parentId value to match the other records.</li>
	<li>Delete the original folder with all objects. Deleting the folder will also delete the hidden objects.</li>
	<li>Import the edited XML file. It should return all objects, including the missing one.</li>
</ol>
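
<p>The parentId repair in step 2 can be automated. This sketch assumes the export stores <code>parentId</code> as an attribute on each exported object element, which may differ from the actual Import Center schema - inspect your own export first and adjust the tag and attribute names:</p>

```python
import xml.etree.ElementTree as ET
from collections import Counter

def repair_parent_ids(xml_text: str, tag: str) -> str:
    """Rewrite minority parentId values to the majority value within one folder's export."""
    root = ET.fromstring(xml_text)
    nodes = root.findall(f".//{tag}")
    # Visible objects share the correct parentId, so the majority value wins.
    majority = Counter(n.get("parentId") for n in nodes).most_common(1)[0][0]
    for node in nodes:
        node.set("parentId", majority)
    return ET.tostring(root, encoding="unicode")

# Toy export: two visible policies plus one "missing" one with a bad parentId.
sample = ('<Export><Policy name="A" parentId="100"/>'
          '<Policy name="B" parentId="100"/>'
          '<Policy name="Missing" parentId="999"/></Export>')
fixed = repair_parent_ids(sample, "Policy")
```

<p>After the rewrite, re-importing the file should restore all objects to the same folder.</p>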

<p><span class="font-large"><strong>SQL Update Query Method</strong></span></p>

<p>This method is fast and the results immediate, but does require understanding of the SQL commands to locate and then update the "parentId" field with the correct value. Please be sure you have good backups and are familiar with SQL queries before choosing this method!</p>

<ol>
	<li>Run the following Query to list all of the objects in the specific folder where the missing object should be:<br />
	<code>SELECT * FROM tree.treenode WHERE treeFullPath LIKE '%&lt;FOLDER_PATH&gt;%'</code><br />
	where "&lt;FOLDER_PATH&gt;" contains enough of the folder path to uniquely identify it. In our case, the folder's unique name had "Agent Settings (" in it, so that's what we used. This will return a list of all of the objects in that folder, including the missing one(s). Take note of the objects that have matching treeNodeTypeFK values, then locate the missing objects. These will have a different parentID value from the other objects that are visible in the folder. Identify and record the parentID value of a visible object.</li>
	<li>Define the following query, but do not execute it yet!<br />
	<code>UPDATE [ksubscribers].[tree].[treenode] SET parentId = &lt;parent&gt;</code><br />
	<code>WHERE ref LIKE '&lt;policy name&gt;'</code></li>
	<li>Edit the query:
	<ul>
		<li>Replace "&lt;parent&gt;" with the parentID value determined in step 1 - this is the Parent ID of the visible objects.</li>
		<li>Replace "&lt;policy name&gt;" with the name of the missing policy.</li>
	</ul>
	</li>
	<li>Confirm that you are replacing the parentID value on the record where the ref field matches the specific missing object, then execute the query.</li>
	<li>Re-run the query in step 1 to verify that the missing object now has the correct parentID value.</li>
	<li>Check the VSA to confirm that the object is once again visible in the correct folder.</li>
</ol>
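
<p>Before touching production, the workflow above can be rehearsed safely on a throwaway mock. Here is a toy reproduction using Python's built-in sqlite3 with a simplified treenode table (the real table lives in the ksubscribers database and has many more columns); note the use of query parameters rather than pasting values into the SQL text:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE treenode (ref TEXT, treeFullPath TEXT, parentId INTEGER)")
conn.executemany("INSERT INTO treenode VALUES (?, ?, ?)", [
    ("Policy A", "/Policies/Agent Settings (Std)", 42),
    ("Policy B", "/Policies/Agent Settings (Std)", 42),
    ("Missing Policy", "/Policies/Agent Settings (Std)", 7),  # wrong parentId
])

# Step 1: list the folder's objects and spot the mismatched parentId.
rows = conn.execute(
    "SELECT ref, parentId FROM treenode WHERE treeFullPath LIKE ?",
    ("%Agent Settings (%",)).fetchall()

# Steps 2-4: update only the missing object's record.
conn.execute("UPDATE treenode SET parentId = ? WHERE ref LIKE ?",
             (42, "Missing Policy"))
repaired = conn.execute(
    "SELECT parentId FROM treenode WHERE ref = 'Missing Policy'").fetchone()[0]
```

<p>The same two-step pattern - select to confirm scope, then a narrowly targeted update - is what keeps the real repair safe.</p>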

<p>Steps 3.2, 4, and 5 can be run for additional missing objects in the same folder. Repeat all of the steps to recover missing objects from other folders.</p>

<p>In our testing with System Policies, the missing policy reappears in its original location after performing the above steps and refreshing the VSA page.</p>

<p><strong>NOTE</strong><br />
MSP Builder does not assume any liability for issues that arise through the use or misuse of this process. It is assumed that you are comfortable with VSA administration and qualified to perform the techniques presented herein.</p>
<br /><a href='http://mspbuilder.com/blog-recovering-policies-that-have-disappeared'>gbarnas</a>&nbsp;&nbsp;<a href='http://mspbuilder.com/blog-recovering-policies-that-have-disappeared'>...</a>]]></description>
      <link>http://mspbuilder.com/blog-recovering-policies-that-have-disappeared</link>
      <author>gbarnas@mspbuilder.com (gbarnas)</author>
      <comments>http://mspbuilder.com/blog-recovering-policies-that-have-disappeared</comments>
      <guid isPermaLink="true">http://mspbuilder.com/blog-recovering-policies-that-have-disappeared</guid>
      <pubDate>Wed, 20 Jun 2018 16:39:00 GMT</pubDate>
    </item>
    <item>
      <title>An Effective Patching Process</title>
      <description><![CDATA[<p>Of all the MSPs that I speak with, patching is the one component that everyone says they use. Once we dig in a little deeper, I’m amazed by how few MSPs use this fundamental platform component as effectively as they could.</p>

<p>Most of the configurations that I review indicate that the goal was to get patching deployed quickly following the most basic onboarding instructions provided by Kaseya. What’s missed is that this information is an example of what can be done, but not the most effective way to do it. MSPs often report that patching causes as many issues as it solves or doesn’t work the way they think it should. They’re surprised when we tell them that our patching process requires little ongoing maintenance to achieve reliable patch deployment across several thousand systems.</p>

<p>Here are just some of the patch configurations we find deployed by MSPs that can be improved:</p>

<p><strong><span class="font-large">Patch Policies that auto-approve every patch.</span></strong></p>

<p>This mostly eliminates the monthly review and approval process, but also eliminates all control over patching. This is a dangerous configuration as patches are immediately approved for deployment. There are several categories of patches that are “optional” and will install or update applications that could actually add vulnerabilities to the environment that would not exist otherwise. While many categories can be set for auto-approval, these categories should be reviewed and approved manually to minimize this risk.</p>

<p><strong><span class="font-large">“One size fits all” policies.</span></strong></p>

<p>I’ve seen MSPs with just one patch policy for all agents, or (slightly better) a policy for workstations and another for servers. These may have some manually approved categories, but this doesn’t provide the flexibility to accommodate customers with different needs. Some patches can affect customer LOB applications, and when this happens, the updates are excluded from the Patch Policy and thus excluded from all customers. This clearly leaves some clients vulnerable.</p>

<p><span class="font-large"><strong>One patch policy for each configuration needed by customers, or one per customer.</strong></span></p>

<p>Having one Patch Policy for every configuration or for individual customers may be a step in the right direction for reducing vulnerabilities, but this increases the amount of manual review and approval needed every month. This gets old really fast, and we find that MSPs skip this for a month or two, then find themselves with hundreds of patches to review. Patching becomes unmanageable very quickly using this method.</p>

<p><strong><span class="font-large">Patch settings applied manually.</span></strong></p>

<p>Here’s a method that works for about a month after onboarding VSA. During onboarding, all of the customers are reviewed and Patch Policies and schedules are applied. “OK! We’re done!” is the thought and patching works – mostly. When we perform an audit, we usually find many machines that don’t have an update schedule or Patch Policy applied, much to the surprise of the MSP. Automating this task will eliminate both this maintenance task and the vulnerability that not patching represents.</p>

<p><span class="font-large"><strong>Patch Updates applied using a “shotgun” approach on servers.</strong></span></p>

<p>While providing training for our Core Automation Suite recently, we were covering the patch process. We have 48 distinct “patch windows” covering three weeks every cycle. The MSP’s engineer asked why we did this, since their VSA training suggested creating a “Servers” patch policy and then scheduling all servers for updating starting at midnight on Saturday. I pointed out that this method ensures that all servers get patched, but won’t assure that servers that are inter-dependent will be restarted in the proper sequence to allow the application to come back up cleanly. The six patch windows we have on Saturday night allow you to update servers in a specific order, eliminating application restart sequence issues. The light dawned, and the engineer said “No wonder we need to manually restart the servers at 3 different clients every month!”. Creating and deploying these patch windows took time and effort, but the ability to automate the patching for hundreds of servers year after year has paid that initial cost several times over.</p>

<p><strong><span class="font-large">Good Practices</span></strong></p>

<p>The following methods are used in our Core Automation Suite to automate the bulk of ongoing patch management.</p>

<ul>
	<li><strong>Layered Patch Policies</strong> – We utilize 3 core policies – Baseline, Servers, and Workstations. These policies have just one role – approve everything <em>except </em>the patches that we never want to install on any system, on servers, or on workstations. It’s a pretty small list, and usually limited to the Optional Software category (think “Zune media player”). Then we have additional policies that block a specific class or category. These might include DotNET, various IE versions, optional updates, and the like. Every agent is a member of at least two patch policies – Baseline and either Server or Workstation. Customers that have specific exclusion requirements get additional policies applied. All told, we have about 14 Patch Policies to accommodate the 3 baseline and 11 custom blocking configurations.</li>
	<li><strong>Effective Auto-Approval Policies</strong> – The Baseline policy auto-approves most categories, and the Server/Workstation policies set a few categories to be reviewed and manually approved. The remaining policies are “blocking” policies and approve most categories except the ones that contain the updates we might want to restrict. This means we have to review and approve updates for the 9 distinct Patch Policies that we’ve created, but it’s a small number to review each month and generally takes less than 30 minutes to complete. This is a small price to pay for low-risk updates to client machines.</li>
	<li><strong>Automation for Patch Policies and Schedules</strong> – Leverage System Policies to define the patch update schedule, policies, and other patch-related configuration settings. Policies merge, so a single policy can configure settings for all servers, and a second policy can define the update schedule. Since policies can perform multiple tasks, you can easily perform pre and post update tasks such as disabling monitors and forcing pre-update reboots. Policies also override settings when applied at a lower (client or machine-group) level. A policy that uses additional Patch Policies can be applied to a client folder to prevent specific update categories or types. Utilize Views to control the application of these policies, limiting them to server or workstation class systems and identifying specific schedules or other restrictions.</li>
	<li><strong>Run Weekly Patch Scans</strong> – ensure that sufficient time exists between the scan and the update schedule. We run our scans on Mondays for all agents (servers during early AM) and schedule patches for Wednesday or later during the week. (We patch some servers during mid-morning or noon on Wednesdays if they can’t be done during normal update windows.)</li>
	<li><strong>Pay Attention!</strong> – a small amount of review each month will make sure that the automation is functioning and identify gaps in your process. Does each agent have a Scan scheduled? What about an Update schedule? The systems without update schedules – are they being manually updated because of application or customer service requirements? Check these and the system policies each month when reviewing the new patches for approval – this only adds another 5 minutes to the monthly patch management tasks.</li>
</ul>
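<p>To make the layering concrete, here is a hypothetical membership sketch - the policy names are illustrative, not the actual Core Automation Suite names:</p>

<pre>
All agents         : Baseline         (approve everything except never-install items)
Servers            : + Servers        (hold a few categories for manual review)
Workstations       : + Workstations   (hold a few categories for manual review)
Customer with LOB  : + Block-DotNET   (blocking policy for a specific class)
</pre>

<p>The layering works because a patch held or denied by any one of an agent's Patch Policies remains unapproved for that agent, so a small blocking policy can be stacked on top of the baseline without duplicating it.</p>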

<p>The effort and time to perform this level of patch management pays for itself with a reliable and highly automated process. The time to develop this system can be significant – our patch components consist of 14 Patch Policies, 54 patch-related System Policies with associated views, and took about a month to develop, test, and document. Not only will updating be applied automatically to workstations and to servers with just a custom field update, but we’ve developed ways to automatically exclude systems and customers that don’t subscribe to patching. These policies and views are just one part of what makes the Core Automation Suite’s cost a true bargain.</p>

<p>&nbsp;</p>
<br /><a href='http://mspbuilder.com/blog-an-effective-patching-process'>Admin</a>&nbsp;&nbsp;<a href='http://mspbuilder.com/blog-an-effective-patching-process'>...</a>]]></description>
      <link>http://mspbuilder.com/blog-an-effective-patching-process</link>
      <author>support@mspbuilder.com (Admin)</author>
      <comments>http://mspbuilder.com/blog-an-effective-patching-process</comments>
      <guid isPermaLink="true">http://mspbuilder.com/blog-an-effective-patching-process</guid>
      <pubDate>Sun, 15 Apr 2018 19:03:00 GMT</pubDate>
    </item>
    <item>
      <title>The Real Price of Automation</title>
      <description><![CDATA[<p>I talk to a lot of MSPs about automation, and invariably we discuss our automation tools. It always surprises me when an MSP says, “we’re too small for that” or “I don’t think we could afford that!”. That usually leads to an interesting conversation, something like:</p>

<p>So – how many endpoints do you have? <em>Around 600, including roughly 80 servers. That’s across 26 clients, and four of them are break-fix – we use the agents just for remote support.</em></p>

<p>What do you charge for a basic support package – monitoring, patching, AV, AM, and maintenance?<br />
<em>Charge? We only charge for the actual time we spend fixing stuff. We do charge $5 for the AV license.</em></p>

<p><em>Other answers range from “$25 for the basic package of patching and AV/AM" to "$150 for everything, including call-in support".</em></p>

<p>Well, with the initial cost of the VSA agent and the monthly costs for agent and AV/AM licensing, if you monitor and respond to alerts, you need to generate $250-400 per year on each agent just to break even. That means you need to charge at least $28 for just the basic monitoring and AV/AM, or find a way to reduce costs.<em> (gulp!) I didn’t think of it that way.</em></p>

<p>How many monitoring alerts do you get each day? <em>Too many (laughs). We used to get over 200 each day, so we turned off the monitors.</em></p>

<p>How many techs do you have on the help desk?<em> Four – two L-1, an L-2, and an L-3.</em></p>

<p>How’s that working out – is their time fully utilized? <em>It is – we could add another tech if we could afford it! The L-3 engineer is pretty busy with escalations and some manual maintenance tasks.</em></p>

<p>What’s your rough payroll for your four techs? <em>Around $300,000, plus benefits and such.</em></p>

<p>What about customer projects – how do you handle those? <em>We have one dedicated engineer for projects, and press the other techs into service as time permits, but then the tickets pile up.</em></p>

<p>Do you do any ongoing system maintenance? <em>We tried, but the procedures are difficult, don’t always work, and we’re too busy to spend more time on it.</em></p>

<p>Don’t you think that regular system maintenance might reduce the calls and alerts?<em> It might, but that’s complicated to set up, and we just don’t have that expertise.</em></p>

<p>Well, I think that if you took a look at the results that our tools provide, you might change your mind. Here’s an example from an MSP that we first developed these tools for.</p>

<p>When we started working with them, they had around 1200 managed agents (only 16 were break-fix). They had two L-2 and three L-3 engineers on the helpdesk, with base tech salaries hitting just over $500,000. Another two engineers working on projects pushed tech salaries to $720,000. Those endpoints generated over 200 alerts per day – that’s one in six machines generating alerts through monitoring. There was one engineer per 240 endpoints, so each endpoint had an “employee cost” of $421.</p>

<p>During the first year, we developed standards that helped streamline their operations, we removed the sample monitors and deployed ones that were engineered to their customer requirements, and added some basic automation to VSA to deploy monitors based on detected services and schedule patching and updating. This is what we now sell as our Core Automation Suite.</p>

<p>After the $1500 investment in this solution, they were able to increase their managed endpoints to just over 1800. They eliminated the L-2 helpdesk positions and promoted “one call and done” help-desk services to the clients using only L-3 engineers. Alerts dropped to 75 per day – one in twenty-four. With one engineer per 600 endpoints, the “employee cost” of an endpoint was now down to $192 – a savings per endpoint of $229.</p>

<p>The following year, we helped them implement the EMM Suite, which includes Smart Monitors and Daily Maintenance. The Smart Monitors reduced alerts by auto-remediating many common conditions and dynamically setting reasonable alert thresholds. Daily Maintenance improved the operation of the endpoints and further reduced the help-desk load. By now, they had grown to almost 3000 managed endpoints, yet alerts dropped to under 16 per day (one in every 188 agents)! Customer calls had also dropped since they were proactively maintaining the systems. One help-desk engineer now supported almost 1000 endpoints, dropping the “employee cost” of an endpoint to just $110 per year. After a year using the EMM suite, they reported that the help desk team spent nearly 50% less time on tickets because the basic remediation tasks were completed through automation.</p>

<p>Thus – a $1500 investment in automation allowed the elimination of two L-2 help desk seats, freeing those employees for project work while still supporting a 50% increase in managed endpoints. Using our automation for patching and daily maintenance, end-user calls dropped, and customer satisfaction increased. The $0.50 cost for these improvements was easily absorbed by the increased profitability, and our end-user interface that reported what was being done eliminated the “I pay you all this money – what do you do for me?” question that so many customers ask! We tell them every day what we’ve done.</p>

<p><span class="font-large"><strong>Here are some facts based on clients that we’ve helped.</strong></span></p>

<p>The typical “small” MSP has around 750 managed endpoints, plus a few hundred “break-fix” systems. They have 3 techs on the help desk at L-1 to L-3, plus another 2-3 on staff for project work.</p>

<p><strong>Without automation:</strong></p>

<ul>
	<li>3 Help Desk techs base salaries (major city region) cost $250,000</li>
	<li>About 125 alerts per day are generated – 42 per tech, making it hard to review and close all of them.</li>
	<li>Each managed endpoint requires $334 per year to break even on salaries alone.</li>
</ul>

<p><strong>With automation and maintenance:</strong></p>

<ul>
	<li>2 Help Desk techs (L-2 and L-3) base salaries are $195,000. $55,000 per year savings.</li>
	<li>$4,500 per year in EMM licensing, still a $50,000+ annual savings (not counting that the 3rd tech can now be billing for project work with no additional salary cost!)</li>
	<li>Fewer than 5 alerts per day – 2-3 per tech, easily handled, allows direct support of client calls.</li>
	<li>Each managed endpoint requires $266 per year to break even on salaries and the $6 EMM license costs. (Yes, EMM costs just $6 per agent per year.)</li>
	<li>At the industry average of 500 endpoints per help-desk tech, the break-even cost is around $150, and we’ve seen 1100-1200 endpoints per tech become achievable once the environment has been patched and maintained for a few months.</li>
</ul>

<p>The question is – what’s more expensive: $3000 plus $6 per agent per year, or the salary of another tech? Plus benefits, payroll taxes, insurance… Other considerations include:</p>

<ul>
	<li>What about your employee satisfaction vs. the frustration of fighting a losing battle against alerts?</li>
	<li>Then there’s client satisfaction – their network gets “quiet”, the “fires” stop, and their employees become more productive.</li>
	<li>Can you do this yourself? Sure, but at what cost? Can you dedicate an engineer to build the monitors and the tools and then maintain it? Or will they get pulled into customer support?</li>
	<li>Then there’s the extra billing capacity for new work by employees that aren’t tied to the help desk!</li>
</ul>

<p><strong>NOTE: </strong>Salary costs were based on averages in major metropolitan areas like NYC, Boston, and LA. Level 1 tech salary is $55,000, Level 2 is $80,000, and Level 3 is $110,000. EMM costs are based on an average distribution of 12% servers and 88% workstations.</p>
<br /><a href='http://mspbuilder.com/blog-the-real-price-of-automation'>gbarnas</a>&nbsp;&nbsp;<a href='http://mspbuilder.com/blog-the-real-price-of-automation'>...</a>]]></description>
      <link>http://mspbuilder.com/blog-the-real-price-of-automation</link>
      <author>gbarnas@mspbuilder.com (gbarnas)</author>
      <comments>http://mspbuilder.com/blog-the-real-price-of-automation</comments>
      <guid isPermaLink="true">http://mspbuilder.com/blog-the-real-price-of-automation</guid>
      <pubDate>Fri, 30 Mar 2018 23:56:00 GMT</pubDate>
    </item>
    <item>
      <title>Managed Variables - Take 2</title>
      <description><![CDATA[<p><strong>VSA Managed Variables, Problems, and SaaS-Friendly Solutions</strong></p>

<p>Well, just one month ago, I discussed using Managed Variables and reviewed some of the challenges with them. A current bug in VSA leaves us with the inability to deal with undefined Managed Variables, which means we can't use the presence or absence of a Managed Variable to decide whether we should perform a task.</p>

<p>I use Managed Variables - a lot - for everything from passing arguments that control how our utilities function to defining local accounts, and even customer licensing and configuration information for commonly deployed applications. At last count, I've got 16 Managed Variables. All but two of them define customer-specific information that may not exist for every customer, so the ability to detect when the data is defined is crucial.</p>

<p>Last month, I provided a workaround that we've been using internally for almost 4 years. It's a simple SQL Query that takes two arguments - the name of the Managed Variable and a default response. It reads the Managed Variable and returns the data if it's defined, or the default value if it isn't. This makes it easy to detect that the default value was returned and skip the action that depends on the variable being defined. Easy-peasy! Well, that's the case if you're an on-prem user of VSA. Kaseya SaaS users have, well, let's just say "challenges".</p>

<p>I understand that caution is needed in a shared environment, and that allowing access to the SQL back-end, even for queries, can create great angst. We've been waiting over 6 weeks to get the query approved for our TAP instance of Kaseya SaaS so that we can certify our automation. This delay was just too much to bear, especially with SaaS users asking about our automation suite, so - it was time to dig in and find an alternate solution. The challenge was that reading an undefined Managed Variable didn't just fail, it caused the procedure where it was referenced to crash at that point! The answer came - literally - while lying awake at night...</p>

<p><em>"If reading an empty Managed Variable crashes the procedure, then let a different procedure crash!"</em></p>

<p>Here's the process, in a nutshell. Start by creating a procedure that assigns the Managed Variable(s) that you need to global variables:</p>

<pre>
<code>getVariable("ConstantValue", "&lt;InitArgs&gt;", "global:MV_InitArgs", "All Operating Systems", "Continue on Fail")</code></pre>

<p>The global variable name uses an "MV_" prefix to identify it as holding a Managed Variable in this example, and uses a name that makes its purpose clear. In this case, it will contain any custom arguments used by our agent initialization utility. In some cases, multiple Managed Variables are needed, and we simply collect all of them in the same procedure - each with a unique name. One such example is an application that needs the customer's product serial number, license key, and customer ID to perform an installation.</p>

<p>Next, in the procedure that needs the Managed Variable(s):</p>

<pre>
<code>getVariable("ConstantValue", "xFALSEx", "global:MV_InitArgs", "All Windows Operating Systems", "Halt on Fail")
executeProcedure("ALL-GetManagedVar-InitArgs", "", "Immediate", "All Operating Systems", "Continue on Fail")
If checkVariable("#global:MV_InitArgs#") Contains "FALSE"
  executeShellCommand("CMD.exe /c RMMINIT.BMS", "Execute as System", "All Windows Operating Systems", "Continue on Fail")
else
  executeShellCommand("CMD.exe /c RMMINIT.BMS #global:MV_InitArgs#", "Execute as System", "All Windows Operating Systems", "Continue on Fail")</code></pre>

<p><br />
This calls the first procedure with "Continue on Fail". If the Managed Variables aren't defined, that procedure fails, but it does not affect the primary procedure. The primary procedure can then determine whether the default value is still set and take the appropriate action. In this case, it runs the init command without passing arguments, but if the default value is not detected, it runs the init command with the arguments in the global variable. Note that the global variable is defined and set to a default value here before calling the external procedure, which will overwrite it if the Managed Variable is defined.</p>

<p>I'll admit, this is a bit kludgey, but it does allow one to leverage Managed Variables on all platforms, including SaaS, without causing the primary procedure to crash.</p>
<br /><a href='http://mspbuilder.com/blog-managed-variables-take-2'>gbarnas</a>&nbsp;&nbsp;<a href='http://mspbuilder.com/blog-managed-variables-take-2'>...</a>]]></description>
      <link>http://mspbuilder.com/blog-managed-variables-take-2</link>
      <author>gbarnas@mspbuilder.com (gbarnas)</author>
      <comments>http://mspbuilder.com/blog-managed-variables-take-2</comments>
      <guid isPermaLink="true">http://mspbuilder.com/blog-managed-variables-take-2</guid>
      <pubDate>Mon, 19 Mar 2018 20:15:00 GMT</pubDate>
    </item>
    <item>
      <title>Leveraging VSA Machine Groups</title>
      <description><![CDATA[<p><span class="font-large"><strong>Planning, Structure, and Consistency</strong></span></p>

<p>These are the essentials for creating a VSA platform that is "automation friendly". If you read my article on Monitor Sets, you'll recall that our monitors utilize a machine-readable subject line. This allows our automation to break down the alert, attempt remediation, and ultimately route the alert properly. This is possible because every header has the same structure, and every field has the same kind of data in it. It's time to apply this logic to the Machine Groups and create a hierarchy.</p>

<p>Kaseya defaults to a machine group below the Customer ID called "root". The meaning may have become lost over time, but its intent is to be the root where an object <em>hierarchy </em>is built. This is the point where System Policies are linked. Using a rooted machine group with subgroups is the only way to apply a policy to all agents for a specific client and easily differentiate between managed and unmanaged locations. <strong><em>Important concept #1 – Organization is critical for effective automation!</em></strong></p>

<p>So often when working with an MSP, I find the default "root" group present and containing most of the agents, and then other groups at the root of the Customer ID representing different locations, machine types, or other groupings. This, sadly, is not a good plan, as any policy common to all agents must be linked multiple times. This introduces risk through the possibility of not linking critical policies to all groups, or linking them to the wrong group. The primary purpose of a machine group is very similar to Active Directory OUs - organize objects to apply policies. To do this effectively, you need to design a hierarchy based on how you might apply policies. <strong><em>Important concept #2 - System Policies are the Heart of Automation.</em></strong> To effectively use policies for automation, the machine groups must form a reasonable hierarchy. Consider how you might automate things and how policies themselves work. A policy can run procedures when a condition is met, schedule procedures, and define configuration settings. Configuration settings are a big consideration, since they are often different between workstations and servers, right? So - you should have groups that allow linking policies based on the class of system.</p>

<p>Our <strong>Core Automation Suite </strong>depends upon a standardized machine group structure, and we leverage the Dickens out of it. We can configure and schedule patching &amp; application updating, configure AV and AM products, apply monitor sets, and much more, all with a minimum of manual involvement. (In fact, for almost all customer onboarding, all we do manually is define the server patching sequence.)</p>

<p><span class="font-large"><strong>Change is <s>Bad</s> Good!</strong></span></p>

<p>Almost every MSP we've worked with to help optimize their platform has had their staff initially complain about the changes to the structure. Not because the structure was bad or confusing, but simply because it was - change! Every MSP, after using the new organizational hierarchy for a couple of days, universally agreed that it was easier to find agents by type, site, and class of service, and that the automation that this organization allowed reduced everyone's workload. Of course, this takes effort - whiteboard the structure based on policy linking, accommodate requirements of different customers, and different kinds of clients, and a method to define sites that works globally, not just in your back yard. Once the planning is complete, you can create the machine group structure for the clients and move the agents, cleaning up old groups when the last agent has been moved.</p>

<p>Oh, remember "consistency"? If you create a site group for a customer that has 10 sites, you should create the same structure for a customer with one site. Why? CONSISTENCY, of course. It's all about the ability to automate, knowing exactly what format the data will always be in! And - should the customer expand and add a second site, you simply create a second site group and add the new agents there - no need to restructure, add 2 groups, and move agents before you can add agents from the new location. To simplify the decision process and remove any “emotion” from it, we utilize the United Nations LOCODE standard for identifying locations – it has IDs for virtually every town or city on the planet.</p>

<p><span class="font-large"><strong>Why have a workstation and server group – I use Views to handle that!</strong></span></p>

<p>Sure – we use over 300 views, and around 150 of those just for policy management, but we strongly recommend separate groups for workstations and servers in each client group. This allows you to accommodate situations where a developer might use a server O/S for their workstation, or – more commonly – a workstation O/S is used as some kind of “server”. I can link a view to a policy that alters the monitoring or configuration settings simply because an agent is in a “*.servers.*” group – it’s a workstation, but should be monitored or configured like a server! This can’t be done if you simply apply views based on the O/S type. We also recommend the use of a group called “special”. Every view should exclude this group – moving an agent into it is a simple way to temporarily stop all automated processing.</p>

<p>So, to summarize, we create a root group that identifies whether the customer is managed or unmanaged, then subgroups for servers, workstations, and special. The servers and workstations groups have subgroups for each location that the customer has, and only the location groups and the special group contain agents. This took time and energy to design and define, but the payback has been extensive. When an agent checks in, the automation runs, figures out what the agent has, where it is, and automatically applies monitors, schedules patching, daily maintenance, and application updating. This is all made possible by using a consistent structure that the automation can leverage.</p>
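<p>As a sketch, the structure for a hypothetical customer "acme" with offices in New York and Chicago (site groups named with UN LOCODE IDs) would look like this - only the location and special groups contain agents:</p>

<pre>
acme.managed
    servers
        usnyc        (agents)
        uschi        (agents)
    workstations
        usnyc        (agents)
        uschi        (agents)
    special          (excluded from all views - suspends automation)
</pre>

<p>A single-site customer gets exactly the same shape with one site group under each class, so the automation can always rely on the same group-path format.</p>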

<p>&nbsp;</p>
<br /><a href='http://mspbuilder.com/blog-leveraging-vsa-machine-groups'>Admin</a>&nbsp;&nbsp;<a href='http://mspbuilder.com/blog-leveraging-vsa-machine-groups'>...</a>]]></description>
      <link>http://mspbuilder.com/blog-leveraging-vsa-machine-groups</link>
      <author>support@mspbuilder.com (Admin)</author>
      <comments>http://mspbuilder.com/blog-leveraging-vsa-machine-groups</comments>
      <guid isPermaLink="true">http://mspbuilder.com/blog-leveraging-vsa-machine-groups</guid>
      <pubDate>Sun, 25 Feb 2018 15:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Using VSA Managed Variables</title>
      <description><![CDATA[<p>VSA Managed Variables offer an excellent way to provide customer- and machine-group-specific values for use in Agent Procedures. We have more than 20 unique Managed Variables defined to help configure application licenses, define local account credentials, and control how applications are deployed and used. The feature suffers, however, from one very debilitating bug: if a variable is not defined for all customers and machine groups, any procedure that references it will fail wherever the variable is undefined. There is no way to test whether the variable is defined, and the simple act of referencing an undefined variable will terminate the procedure, even if "Continue on Fail" is selected.</p>

<p>MSP Builder has developed a work-around for this using a SQL Query. The query is invoked with two user arguments - the name of the Managed Variable and a Default Value. The query looks up the Managed Variable based on the agent's machine.group value. If the value is defined, it is returned, otherwise the default value is returned instead. This allows you to perform tasks using Managed Variables with greater flexibility - either use a custom or default value, or perform a task only when a custom value is defined. We use this to create a customer-specific local account, only for customers that request this.</p>

<p>You can download the XML file that defines the SQL Query. The Zip file contains the XML file and a readme that illustrates how to use this in a procedure and describes the installation steps. For SaaS customers, you will need to create a request to have this query installed into your instance. Do not change the XML filename or contents, to ensure that your request is not delayed. Kaseya will need to review the procedure and approve it before installation for SaaS customers.</p>

<p><span class="font-large"><strong><em>UPDATE!!</em></strong></span></p>

<p><span class="font-normal">See "Managed Variables - Take 2" for a method that works on SaaS or On-Prem without any SQL code!</span></p>

<p>&nbsp;</p>
<br /><a href='http://mspbuilder.com/blog-using-vsa-managed-variables'>Admin</a>&nbsp;&nbsp;<a href='http://mspbuilder.com/blog-using-vsa-managed-variables'>...</a>]]></description>
      <link>http://mspbuilder.com/blog-using-vsa-managed-variables</link>
      <author>support@mspbuilder.com (Admin)</author>
      <comments>http://mspbuilder.com/blog-using-vsa-managed-variables</comments>
      <guid isPermaLink="true">http://mspbuilder.com/blog-using-vsa-managed-variables</guid>
      <pubDate>Mon, 19 Feb 2018 17:53:00 GMT</pubDate>
    </item>
  </channel>
</rss>