 <?xml-stylesheet type="text/css" href="http://mspbuilder.com/Data/style/rss1.css" ?> <?xml-stylesheet type="text/xsl" href="http://mspbuilder.com/Data/style/rss1.xsl" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd">
  <channel>
    <title>MSP Builder Blog</title>
    <link>http://mspbuilder.com/blog</link>
    <description />
    <docs>http://www.rssboard.org/rss-specification</docs>
    <generator>mojoPortal Blog Module</generator>
    <language>en-US</language>
    <ttl>120</ttl>
    <atom:link href="http://mspbuilder.com/Blog/RSS.aspx?p=3~3~6" rel="self" type="application/rss+xml" />
    <itunes:owner />
    <itunes:explicit>no</itunes:explicit>
    <item>
      <title>Endpoint Security - Local Accounts</title>
      <description><![CDATA[<p>Maintaining secure local access accounts can be a challenging prospect for MSPs. Learn how the RMM Suite allows MSPs to create accounts and change passwords on any frequency they desire without any manual effort.</p>

<h5>The LAUSER Account</h5>

<p>Back in 2019, a customer related an experience where a VIP user at a major client was stranded in an airport lounge. The user needed to print their presentation, which required admin access to install the print driver. The WiFi allowed only HTTP access, which prevented the MSP from using their RMM to provide support. They had no choice but to give the user&nbsp;<em>their</em>&nbsp;internal-use password, which then had to be changed throughout that customer's environment. We suggested that a commonly named account ("lauser") could be created, with our automation maintaining and updating the credentials on a weekly basis. We rolled that process into ITP and began deploying this account for our customers soon after.</p>

<p>A few years later, MSPs looking to improve security began using the LAUSER account for their own local access. This led to an additional improvement in the component: the ability to create multiple accounts using this process.</p>

<p>One of the unique security features of this process is that the password generator is seeded with the date, time, and hostname, along with other logic. This ensures that the password is machine-specific and cannot practically be re-synthesized.</p>
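<p>As a rough illustration of the seeding idea - the actual RMM Suite algorithm, salt, and character set are not published, so every detail below is an assumption - a machine-specific generator could look like this:</p>

```python
import hashlib

# Character set excludes easily confused glyphs (I, O, l, 0, 1) - an assumption.
CHARSET = "ABCDEFGHJKLMNPQRSTUVWXYZabcdefghjkmnpqrstuvwxyz23456789!#%+="

def generate_password(hostname: str, date_stamp: str, secret_salt: str, length: int = 24) -> str:
    """Derive a deterministic, machine-specific password from hostname + date + a secret salt."""
    seed = f"{hostname}|{date_stamp}|{secret_salt}".encode()
    digest = hashlib.sha256(seed).digest()
    # Stretch the digest if more characters are needed than one hash provides.
    while len(digest) < length:
        digest += hashlib.sha256(digest).digest()
    return "".join(CHARSET[b % len(CHARSET)] for b in digest[:length])
```

<p>Because the salt is secret, an outsider who knows only the hostname and date cannot reproduce the password, while the automation can regenerate it on demand.</p>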

<h5>The RAUSER&nbsp;&amp; CAUSER Accounts</h5>

<p>The RMM Suite has long supported the use of per-client accounts for the MSP (RAUSER) and the customer (CAUSER), first via Managed Variables in Kaseya VSA and now via self-ciphering Cloud Script Variables on all RMM platforms. These accounts offer the flexibility of selecting the actual login ID, display name, and password. These credentials apply to groups of agents, whether an entire customer organization or specific location or department.</p>

<h5>RMM Suite Account Management Tools</h5>

<p>The RMM Suite continues to support&nbsp;multiple methods of local account management.</p>

<ul>
	<li>If the RAUSER (or CAUSER) Cloud Script Variable (CSV) is defined (both UserID and Password), the account will be created, added to the local administrators group, and the defined password will be set on the account. This happens automatically the first time that an agent checks in. These accounts can be updated at any time by updating the account password stored in the CSV and then executing the appropriate WIN-Local Account script on the RMM platform, targeting the endpoints where the account should be&nbsp;updated.</li>
	<li>The LAUSER technology has been enhanced and migrated into our Daily Maintenance tool. Simply create a Weekly or Monthly task to run the LAUSER command. This will generate a long, complex password; create the account and add it to the Administrators group if necessary; then set the password. The password will be ciphered and written to the system registry, where it can be collected by the Daily Audit tool, deciphered, and pushed into the RMM or your documentation engine such as Hudu or IT Glue. You can define multiple tasks in Maintenance with&nbsp;the LAUSER command to create any number of local admin accounts with unique credentials.&nbsp;
	<ul>
		<li>If no argument is defined, the account name "lauser" is targeted. This maintains the process we implemented several years earlier and allows this account to be given to the user as necessary. It may be appropriate to update the frequency of this account change.</li>
		<li>If an argument is provided, it will be used as the account name. This argument should be a single word without spaces, following the usual guidelines for user account IDs.&nbsp;</li>
	</ul>
	</li>
</ul>
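<p>The cipher-and-store step described above can be sketched as follows. The real cipher, key derivation, and registry path used by Daily Maintenance are not public, so this simple XOR/Base64 round-trip is purely illustrative:</p>

```python
import base64

def cipher(password: str, machine_key: bytes) -> str:
    """Cipher a password for storage (illustrative XOR + Base64, not the real scheme)."""
    data = password.encode()
    xored = bytes(b ^ machine_key[i % len(machine_key)] for i, b in enumerate(data))
    return base64.b64encode(xored).decode()

def decipher(token: str, machine_key: bytes) -> str:
    """Reverse the cipher, as the Daily Audit tool would before pushing to documentation."""
    data = base64.b64decode(token)
    return bytes(b ^ machine_key[i % len(machine_key)] for i, b in enumerate(data)).decode()

# On Windows the ciphered token would then be written to the registry for
# the Daily Audit tool to collect; the value name and path are hypothetical:
# winreg.SetValueEx(key, "LAUSER", 0, winreg.REG_SZ, token)
```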
<br /><a href='http://mspbuilder.com/blog-endpoint-security-local-accounts'>gbarnas</a>&nbsp;&nbsp;<a href='http://mspbuilder.com/blog-endpoint-security-local-accounts'>...</a>]]></description>
      <link>http://mspbuilder.com/blog-endpoint-security-local-accounts</link>
      <author>gbarnas@mspbuilder.com (gbarnas)</author>
      <comments>http://mspbuilder.com/blog-endpoint-security-local-accounts</comments>
      <guid isPermaLink="true">http://mspbuilder.com/blog-endpoint-security-local-accounts</guid>
      <pubDate>Thu, 15 Dec 2022 15:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Endpoint Management - Service Level Automation</title>
      <description><![CDATA[<p>Many MSPs can benefit from offering different levels of service to their customers - it allows them to tailor their product&nbsp;to the size and budget of the organization they serve. The challenge is finding ways to automate this to deliver consistency without significant effort. Some common methods we've seen range from defining automation policies to run scripts to deploy software and linking these policies to each customer to just manually running the scripts needed to deploy the applications. The challenge with this - like all manual actions - is consistency. The RMM Suite solves this through Service Class Automation.</p>

<h5>Service Class Automation</h5>

<p>Just to clarify the term, "Service Class" (or Class of Service) usually assigns a name to the delivery of specific services. A good example is the classic Bronze / Silver / Gold scheme, where Bronze might provide basic monitoring and AV while Gold provides advanced monitoring, proactive maintenance, and comprehensive endpoint security services. This can relate to several services within an MSP practice, including monitoring, software,&nbsp;maintenance, patching, and security.</p>

<p>The RMM Suite employs a basic Service Class of Unmanaged and Managed, which is used broadly to apply or block automation.&nbsp;</p>

<p><strong>Unmanaged</strong>&nbsp;- This can be a "break/fix" or "time and materials" customer with no automation. RMM Suite customers also use this mode to onboard new clients. Since an unmanaged customer receives no automation, it allows a period of time after deploying agents to perform discovery actions: preparing custom configurations, setting up software licenses for automated deployments, and identifying any special monitoring requirements. Once all customer preparation is complete, a client can be switched to Managed. This is defined using either a Customer Custom Field or - in VSA - a Machine Group root name.</p>

<p><strong>Managed -&nbsp;</strong>This represents a generic state where ALL automated services can be applied. The automation policies specifically look for the "unmanaged" status, treating all other status types as "managed". This allows a generic classification of "managed" as well as specific sub-classifications&nbsp;or Service Classes.&nbsp;The service classes can also be used to drive client billing.</p>

<p><strong>Service Classes</strong>&nbsp;- These are codes - whether colors, metals, animals, or simply an alpha-numeric ID - that define a specific set of services. These codes can be distinct or cumulative - that's completely up to the MSP. Cumulative codes take a bit more planning and configuration effort, but can simplify certain aspects of the automation.</p>

<h5>Distinct Code Mapping</h5>

<p>Distinct codes will map a set of specific components and services to a single code. A system filter identifies the code and applies the appropriate services. Note that the same services can be associated with multiple Service Class codes.</p>

<p class="text-indent-1"><strong>Iron</strong>&nbsp;- Basic AV, Patching</p>

<p class="text-indent-1"><strong>Steel</strong>&nbsp;- Basic AV, Antimalware, Patching, Application Updating, Basic Monitoring</p>

<p class="text-indent-1"><strong>Titanium</strong> - Advanced AV, Endpoint Security, Antimalware, Patching, Application Updating, Basic Monitoring, Advanced Monitoring</p>

<p>There are three automation policies and three filters. Each filter identifies the Service Class code and applies the corresponding automation policy, which deploys the products and services that are part of that Service Class. Note that several products appear in more than one policy - Patching, for example, appears in all three. This is a simple mapping of code to services and works well when there is a small set of&nbsp;classes and products.</p>
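<p>In data terms, distinct mapping is one lookup from class code to a fixed set of services. A minimal sketch using the three example classes above (the set contents come straight from the list; the structure itself is illustrative):</p>

```python
# Distinct Service Class mapping: one code -> one fixed service set.
SERVICE_CLASSES = {
    "Iron":     {"Basic AV", "Patching"},
    "Steel":    {"Basic AV", "Antimalware", "Patching", "Application Updating",
                 "Basic Monitoring"},
    "Titanium": {"Advanced AV", "Endpoint Security", "Antimalware", "Patching",
                 "Application Updating", "Basic Monitoring", "Advanced Monitoring"},
}

def services_for(code: str) -> set:
    """Return the service set for a class code; unknown codes get nothing."""
    return SERVICE_CLASSES.get(code, set())
```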

<h5>Cumulative Code Mapping</h5>

<p>This method creates a filter and automation policy for&nbsp;<em>each distinct product or service</em>&nbsp;instead of the service class. The filter applies a specific product or service when it matches one or more Service Class codes. This is how it works:</p>

<p class="text-indent-1"><strong>Basic AV</strong>&nbsp;- Filter triggers at Iron OR Steel levels</p>

<p class="text-indent-1"><strong>Advanced AV</strong>&nbsp;- Filter triggers at Titanium level</p>

<p class="text-indent-1"><strong>Antimalware </strong>- Filter triggers at Steel OR Titanium levels</p>

<p class="text-indent-1"><strong>Patching </strong>- Filter triggers at Iron OR Steel OR Titanium levels</p>

<p class="text-indent-1"><strong>Basic Monitoring</strong> - Filter triggers&nbsp;at Steel OR Titanium levels</p>

<p class="text-indent-1"><strong>Advanced&nbsp;Monitoring</strong> - Filter triggers&nbsp;at&nbsp;Titanium level</p>

<p>While this is certainly more complex and requires distinct filters and automation policies for each service, it provides greater flexibility when there are additional Service Classes. Consider adding a new "Tin" service class that only provides patching, and an "Aluminum" level with Patching and Application Updating. By simply updating the filter associated with the products to trigger on these new service classes, the automation applies without the need to create both new filters AND automation policies.&nbsp;</p>
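<p>Cumulative mapping inverts the table: each product lists the class codes that trigger it, so adding "Tin" or "Aluminum" means touching only the trigger lists. A sketch of the same example (the "Application Updating" triggers are inferred from the Aluminum description above; the structure is illustrative):</p>

```python
# Cumulative mapping: one filter per product, listing the Service Class codes
# that trigger it. New classes ("Tin", "Aluminum") are added here only.
PRODUCT_TRIGGERS = {
    "Basic AV":             {"Iron", "Steel"},
    "Advanced AV":          {"Titanium"},
    "Antimalware":          {"Steel", "Titanium"},
    "Patching":             {"Iron", "Steel", "Titanium", "Tin", "Aluminum"},
    "Application Updating": {"Steel", "Titanium", "Aluminum"},
    "Basic Monitoring":     {"Steel", "Titanium"},
    "Advanced Monitoring":  {"Titanium"},
}

def services_for(code: str) -> set:
    """Collect every product whose trigger list includes this class code."""
    return {product for product, classes in PRODUCT_TRIGGERS.items() if code in classes}
```

<p>Note the design trade-off: more filters to maintain, but a new class never requires a new policy.</p>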

<h5>How the RMM Suite uses Service Class Mapping</h5>

<p>Each day, when the Daily Audit application runs, it determines the Service Class code assigned to the customer. This starts by checking for a Customer Custom Field called CCOS. The value - if defined - is mapped to the "SC:<em>id</em>"&nbsp;tag and written to the System Roles Agent Custom Field, along with any other TAGs based on the applications and services found. The TAGs can be used to drive views to apply policies, which is useful for applying the monitors associated with these Service Classes. The TAG can also be used directly by the Daily Maintenance tool to install application components, either by local script or RMM script.</p>

<p>A second advantage of this method is that the Service Class identity is added to a machine-specific field. Some RMMs do not expose Customer Custom Fields to agent scripting, and this circumvents that limitation.</p>
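<p>The tagging step the Daily Audit performs can be sketched like this. The "SC:" prefix comes from the description above; the semicolon separator and field handling are assumptions:</p>

```python
def build_role_tags(ccos, detected_tags):
    """Merge application/service tags with the Service Class tag (if CCOS is defined)
    into one value destined for the machine-specific System Roles field."""
    tags = list(detected_tags)
    if ccos:  # only tag when the Customer Custom Field has a value
        tags.append(f"SC:{ccos}")
    return ";".join(tags)
```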
<br /><a href='http://mspbuilder.com/blog-endpoint-management-service-level-automation'>gbarnas</a>&nbsp;&nbsp;<a href='http://mspbuilder.com/blog-endpoint-management-service-level-automation'>...</a>]]></description>
      <link>http://mspbuilder.com/blog-endpoint-management-service-level-automation</link>
      <author>gbarnas@mspbuilder.com (gbarnas)</author>
      <comments>http://mspbuilder.com/blog-endpoint-management-service-level-automation</comments>
      <guid isPermaLink="true">http://mspbuilder.com/blog-endpoint-management-service-level-automation</guid>
      <pubDate>Tue, 15 Nov 2022 15:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Onboarding Automation - Deploying New Components</title>
      <description><![CDATA[<p>Do you need to add new components to a standard configuration for one, several, or all customers?<br />
The RMM Suite Onboard Automation (OBA)&nbsp;tool will help you get this done through a single config file update.</p>

<h5>Using a Standard Configuration</h5>

<p>This concept starts by defining a set of configuration settings and software that needs to be deployed within your support stack. This should include settings and software that you deploy to most or all customers, then settings and software deployed to specific customers. The latter is usually LOB applications, as the RMM Suite install scripts can leverage Cloud Script Variables (CSVs) to both control the deployment globally while delivering customer-specific content.</p>

<p>As your product stack changes - either by adding or replacing products - your Standard Configuration changes. This has been a difficult process for many, as configurations may need to change and software may need to be uninstalled before a new set of products is installed. This is where the RMM Suite OBA tool can help.</p>

<h5>Deploying the Standard Configuration</h5>

<p>The OBA tool runs when an agent first checks in to deploy software and configure the endpoint to meet the Standard Configuration requirements. See this <a href="https://www.mspbuilder.com/blog-onboarding-automation-hands-off-workstation-build-1" target="_blank">blog post</a> for&nbsp;full&nbsp;information on using the OBA Tool and the Standard Configuration.&nbsp;</p>

<h5>Dealing with Change</h5>

<p>Change is inevitable, but it should not be difficult! A typical change to a Standard Configuration is using a different product, such as Antivirus software. Let's assume your Standard Configuration utilized Iron-Man AV, but now you are switching to the more powerful Titanium-Man AV product. You need to remove the old product and then install the new one. This requires just 2 scripts and 3 changes to your OBA config file:</p>

<p>Script: <strong>Uninstall Iron-Man AV</strong> - Create an RMM script to uninstall the Iron-Man AV product, suppressing any reboots.</p>

<p>Script:&nbsp;<strong>Install Titanium-Man AV</strong>&nbsp;- Create an RMM script to install the new Titanium-Man AV product, suppressing any reboots.</p>

<p>Change the OBA configuration file:</p>

<ul>
	<li>Disable or remove the definition that installed the Iron-Man AV product</li>
	<li>Add a definition to run the <strong>Uninstall Iron-Man AV</strong> script</li>
	<li>Add a definition to run the <strong>Install Titanium-Man AV</strong> script</li>
</ul>

<p>Once these changes are in your OBA configuration file, the next daily cycle will discover that these two tasks have never been run and will run them on all endpoints (based on the Task Category where these are defined, of course). The next time the endpoint is online and runs the Daily Tasks, these changes to your Standard Configuration will be processed and the endpoint will be compliant with your new standards.</p>
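<p>The "run tasks that have never been run" behavior can be sketched as below. The config keys, Task Category value, and completed-task tracking are all assumptions - the real OBA file format is not shown here:</p>

```python
def pending_tasks(config_tasks, completed_ids, category):
    """Return enabled tasks in this Task Category that have never run on the endpoint."""
    return [t for t in config_tasks
            if t["category"] == category
            and t.get("enabled", True)       # disabled definitions are skipped
            and t["id"] not in completed_ids]

# Hypothetical config after the AV swap described above:
config = [
    {"id": "install-ironman-av",     "category": "workstation", "enabled": False},
    {"id": "uninstall-ironman-av",   "category": "workstation"},
    {"id": "install-titaniumman-av", "category": "workstation"},
]
```

<p>On the next daily cycle an endpoint with no record of the two new tasks would run both; once recorded as complete, they never run again.</p>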

<h5>Summary</h5>

<p>Despite the "Onboarding Automation" name, the capabilities of the OBA tool extend to helping you maintain a "Standard Configuration" without complex scripting or other RMM automation tools. The OBA tool also works hand in hand with the Daily Maintenance tool, which can deploy and update components on the endpoint, especially when using Customer Class of Service (CCOS) tags. Daily Maintenance&nbsp;could be used to deploy and maintain either Iron-Man AV or Titanium-Man AV based on the CCOS tag being "Iron" or "Titanium" (or any other level-identification term).&nbsp;</p>
<br /><a href='http://mspbuilder.com/blog-onboarding-automation-deploying-new-components-1'>gbarnas</a>&nbsp;&nbsp;<a href='http://mspbuilder.com/blog-onboarding-automation-deploying-new-components-1'>...</a>]]></description>
      <link>http://mspbuilder.com/blog-onboarding-automation-deploying-new-components-1</link>
      <author>gbarnas@mspbuilder.com (gbarnas)</author>
      <comments>http://mspbuilder.com/blog-onboarding-automation-deploying-new-components-1</comments>
      <guid isPermaLink="true">http://mspbuilder.com/blog-onboarding-automation-deploying-new-components-1</guid>
      <pubDate>Tue, 30 Aug 2022 14:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Implementing Security Roles the Right Way</title>
      <description><![CDATA[<p>Defining effective user security roles provides you with an added layer of security within your VSA. User Roles define which modules and settings a user can access from the VSA console. While there is no “one size fits all” model, the concepts presented here will provide appropriate access for technicians, engineers, VSA admins, and VSA managers.</p>

<p>In our typical VSA implementation, we create four distinct access levels and several sub-types for MSP employees, plus three roles for customer access. <b>No user has Master role rights in our deployment configuration.</b> These roles should have a “NOC” or “MSP” prefix to designate them as internal roles.</p>

<p><b>Level 0</b> – Support: A role designed for support staff to access VSA for running reports, getting agent counts, or checking in-use and available licensing. No access to automation is available, but these users can view agents and have virtually unlimited access to the reporting functions.</p>

<p><b>Level 1</b> – Technician: This role, which we name “NOC-1-Tech”, grants the ability to perform basic agent administration, view audit and other configuration settings, and access remote control features. This provides the ability to perform about 80% of what a technician would do on a daily basis for end-user support.</p>

<p><b>Level 2</b> – Administrator: Named “NOC-2-Admin” in our system, it grants additional capabilities to run procedures, deploy AV and AM, and perform most agent configuration. Neither of the above roles permits changing VSA-wide settings.</p>

<p><b>Level 5</b> – Specialist: These roles grant VSA administration rights to specific features, distributing administration tasks among multiple users. In our practice, we use the following specialist types:</p>

<ul>
	<li><b>Security</b> – provides the ability to perform all Auth Anvil configuration and management tasks.</li>
	<li><b>AV-Malware </b>– grants access to administer the Antivirus and Malware components, including definition of profiles, policies, and assigning them to customers.</li>
	<li><b>Updating</b> – allows administration of all Patch Management and Software Management components. It may also allow access to other application updating components.</li>
	<li><b>Backup</b> – Allows configuration of all VSA settings related to backup operations.</li>
	<li><b>Manager </b>– grants a combination of roles, usually assigned to the Dispatch, Helpdesk, or Technical Manager(s).</li>
</ul>

<p><strong>Implementing these security roles will allow for better security and organization within your VSA infrastructure. To learn more, <a href="https://www.mspbuilder.com/request-demo2">schedule a demo for MSP Builder’s RMM Suite!</a></strong></p>

<br /><a href='http://mspbuilder.com/implementing-security-roles'>lbarnas</a>&nbsp;&nbsp;<a href='http://mspbuilder.com/implementing-security-roles'>...</a>]]></description>
      <link>http://mspbuilder.com/implementing-security-roles</link>
      <author>lbarnas@mspbuilder.com (lbarnas)</author>
      <comments>http://mspbuilder.com/implementing-security-roles</comments>
      <guid isPermaLink="true">http://mspbuilder.com/implementing-security-roles</guid>
      <pubDate>Mon, 16 Aug 2021 15:39:00 GMT</pubDate>
    </item>
    <item>
      <title>STOP! How outdated are your management scripts?</title>
      <description><![CDATA[<div>During a recent audit of an MSP's onboarding processes, I found several Agent Procedures that seemed interesting. I had not seen any other MSP performing some of these configuration steps, so I looked more deeply at the logic in these procedures. What I found would have turned any hair I had left white!</div>

<div>&nbsp;</div>

<div>One procedure in particular was named "Set Access Rights for PerfMon Folders". "What PerfMon folders?" I wondered. Looking at the procedure, the description stated that it was modifying the Kaseya working folder permissions to allow PerfMon to access the KLogs folder. It did this by changing the permissions to "Everyone:Full Control"!&nbsp;</div>

<div>&nbsp;</div>

<div>Looking closer, I was able to determine that this procedure was quite old, and likely developed for VSA version 6 or earlier and had never been updated. While it's possible that older versions of VSA did not provide adequate access to the KWorking folder, that is no longer the case. Administrators have full control, and even users have Read &amp; Execute, so there is no issue with PerfMon reading this location.&nbsp;</div>

<div>&nbsp;</div>

<div>The most important thing to realize is that things change. If you have processes that haven't changed in years, it's time to give them a review and decide whether they are still needed or due for an update. This procedure, had it not been identified, would have introduced significant risk into the MSP environment by granting Full Control rights to every account on a critical system folder. Imagine a malicious user replacing an EXE or modifying a script to launch malware or ransomware. If an agent procedure blindly calls these scripts - often with SYSTEM rights - without replacing them first, the damage could be extensive.</div>

<div>&nbsp;</div>

<div>Why risk this? Take time to review your procedures and tools to make sure they are still required and operate in compliance with today's security model. Remove processes that are no longer needed, and update those that are still needed to follow current security requirements. The business you save might be your own!</div>
<br /><a href='http://mspbuilder.com/blog-how-outdated-are-your-management-scripts'>gbarnas</a>&nbsp;&nbsp;<a href='http://mspbuilder.com/blog-how-outdated-are-your-management-scripts'>...</a>]]></description>
      <link>http://mspbuilder.com/blog-how-outdated-are-your-management-scripts</link>
      <author>gbarnas@mspbuilder.com (gbarnas)</author>
      <comments>http://mspbuilder.com/blog-how-outdated-are-your-management-scripts</comments>
      <guid isPermaLink="true">http://mspbuilder.com/blog-how-outdated-are-your-management-scripts</guid>
      <pubDate>Sun, 23 Feb 2020 14:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Guest Blog-Why are we always surprised?</title>
      <description><![CDATA[<p style="margin:0px"><font color="#000000"><font face="Calibri"><font size="3">We are approaching the five-year anniversary of our first MSP-Ignite Peer Group Meeting. I find myself thinking about all of the conversations and common issues that have been discussed. One universal truth surrounds the discovery made after any of us loses a member of our team. </font></font></font></p>

<p style="margin:0px">&nbsp;</p>

<p style="margin:0px"><font color="#000000"><font face="Calibri"><font size="3">The phenomenon usually occurs after several months of debating whether or not to let a problem employee go. The shocking discovery, however, seems to be the same no matter what terms the former employee leaves under. Compounding the entire issue is the fact that, as owners or managers, we no longer believe we remember how to do many of the tasks performed by others.</font></font></font></p>

<p style="margin:0px">&nbsp;</p>

<p style="margin:0px"><font color="#000000"><font face="Calibri"><font size="3">As MSPs we are constantly discussing the “pro-active” measures we take on behalf of our clients. Yet somehow, we don’t think pro-actively when it comes to our own businesses. We work tirelessly to build a team of people that handle every aspect of our businesses. We remove ourselves from the day-to-day operations of the business and celebrate the fact that we did so as if it is our lifelong goal. </font></font></font></p>

<p style="margin:0px">&nbsp;</p>

<p style="margin:0px"><font color="#000000"><font face="Calibri"><font size="3">Perhaps, after we finish celebrating, we should apply that pro-active mentality to the health of our businesses. When’s the last time you jumped in and just worked a ticket? Spent half a day as a Service Coordinator? Handled the Approve &amp; Post process? Performed a Strategic Business Review? Not only should you and your leadership team jump in every once in a while, you should do so in areas that are not necessarily in your wheelhouse. </font></font></font></p>

<p style="margin:0px">&nbsp;</p>

<p style="margin:0px"><font color="#000000"><font face="Calibri"><font size="3">Actually, pull the documentation and follow it to the end. Analyze everything about the environment surrounding the task and take note of where it can be improved. Take the time to look at how previous tickets for the client were handled, how the billing was done the previous month, what was discussed at the last SBR. In other words, pro-actively look for where the processes are not being followed or can be improved. Look for areas where your staff is less than perfect. You don’t have to necessarily take actions on any of your findings. Just use the information to avoid surprises down the road.</font></font></font></p>

<p style="margin:0px">&nbsp;</p>

<p style="margin:0px"><font color="#000000"><font face="Calibri"><font size="3">Steve Alexander / MSP-IGNITE</font></font></font></p>

<p style="margin:0px">&nbsp;</p>

<p><i><span style="font-size:12pt; margin:0px"><span style="font-family:&quot;Calibri&quot;,sans-serif"><font color="#000000">Steve Alexander has over 30 years of experience running multiple IT Service Providers and MSPs. As the owner of MSP-Ignite he facilitates industry leading peer groups with a unique approach designed to make every member feel like they are being guided towards their own goals by a private consultant.</font></span></span></i></p>
<br /><a href='http://mspbuilder.com/guest-blog-why-are-we-always-surprised'>gbarnas</a>&nbsp;&nbsp;<a href='http://mspbuilder.com/guest-blog-why-are-we-always-surprised'>...</a>]]></description>
      <link>http://mspbuilder.com/guest-blog-why-are-we-always-surprised</link>
      <comments>http://mspbuilder.com/guest-blog-why-are-we-always-surprised</comments>
      <guid isPermaLink="true">http://mspbuilder.com/guest-blog-why-are-we-always-surprised</guid>
      <pubDate>Mon, 11 Nov 2019 15:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Security - Multi-Factor Authentication</title>
      <description><![CDATA[<p>There’s no doubt that Multi-Factor Authentication is a hot topic and an excellent way to improve secure access to your infrastructure. Remote access to your RMM and PSA tools, as well as the RDP Gateway will benefit from using MFA. But what about access when you are in the office – do you need MFA</p>

<p>Since you are already in a protected environment (you lock your doors and have a firewall and other logical and physical security – right?), you don’t need to require MFA. Most MFA solutions provide one or more methods of “whitelisting”. Which method you choose will make the difference between being secure and not…</p>

<h4>User Whitelisting</h4>

<p>User whitelisting is used for application accounts that would not be accessed externally, or support accounts that need to be used by external support teams. You would <i>never</i> whitelist your employees! Unfortunately, we see this configuration all too often. When we point it out, the response is typically “yes, but we trust our team!”</p>

<p>Sure – you can trust your employee, but you <i>can’t trust their credentials!</i> That’s the distinction that MFA makes. If an employee’s credentials are compromised, any bad actor can try to log in. If their account is whitelisted, there would be no Multi-Factor authentication and access would be granted!</p>

<h4>Network Whitelisting</h4>

<p>Network whitelisting identifies the internal network range(s) that you trust – typically the office network <i>public addresses</i>. In most situations, this would be the public IP address assigned to your external firewall (or firewalls if you have redundant Internet connections). This is the preferred way to allow your techs to work without MFA when in the office, but require it when they are at home, customer sites, or otherwise outside of the office.</p>
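<p>Conceptually, network whitelisting is a source-address check: MFA is skipped only when the login originates from a trusted public range. A minimal sketch, using example (documentation) addresses in place of your real firewall IPs - the actual whitelist format depends on your MFA product:</p>

```python
import ipaddress

# Trusted office egress addresses - placeholders; substitute your firewall IPs.
OFFICE_NETWORKS = [ipaddress.ip_network(n) for n in ("203.0.113.10/32", "198.51.100.0/29")]

def mfa_required(source_ip: str) -> bool:
    """Require MFA unless the login's source IP falls inside a trusted office range."""
    addr = ipaddress.ip_address(source_ip)
    return not any(addr in net for net in OFFICE_NETWORKS)
```

<p>Note that the whitelist matches networks, never user accounts - a compromised credential used from outside the office still hits the MFA challenge.</p>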

<h4>MSP Builder Tools</h4>

<p>Many of the MSP Builder tools utilize the VSA APIs to perform their tasks. While these tools use the API to authenticate over an SSL connection, there are some processes that we follow to improve the security of these tools. As each tool runs, it requests an authorization token to do its work by authenticating to VSA. The tasks that the tools perform take anywhere from a few milliseconds to about 3 seconds to complete. Once the task completes, the tool closes the session, invalidating the authorization token.</p>

<p>Another level of security that we use is MSP Builder License Authorization. All tools that utilize the APIs must first authenticate to the MSP Builder licensing server. This authorization is such that it is extremely difficult (if not impossible) to circumvent, requiring multiple data parts to complete a license validation. In a sense, this is a form of MFA for the tools that utilize the Kaseya APIs. Our API account does not require the password to be distributed to the systems that use it, and is designed to be changed on a 24-hour cycle, increasing the difficulty of a brute-force attack.</p>

<h4>Summary</h4>

<p>Multi-Factor Authentication is an excellent method to improve the security and integrity of your environment, but it requires careful and correct configuration. Incorrect settings will negate the security you’re trying to deploy, so take your time, double and triple-check your configuration, and use whitelisting options properly!</p>
<br /><a href='http://mspbuilder.com/blog-security-multi-factor-authentication'>gbarnas</a>&nbsp;&nbsp;<a href='http://mspbuilder.com/blog-security-multi-factor-authentication'>...</a>]]></description>
      <link>http://mspbuilder.com/blog-security-multi-factor-authentication</link>
      <author>gbarnas@mspbuilder.com (gbarnas)</author>
      <comments>http://mspbuilder.com/blog-security-multi-factor-authentication</comments>
      <guid isPermaLink="true">http://mspbuilder.com/blog-security-multi-factor-authentication</guid>
      <pubDate>Sat, 12 Oct 2019 16:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Agent Offline is NOT Server Down!</title>
      <description><![CDATA[<p>When I work with MSPs, I almost always find that they apply the "Agent Offline" monitor to servers with a 5 minute (or less) threshold. They tell us that false alarms are one of their biggest challenges. Changing how to monitor server availability is one of the easiest ways to reduce false alerts.</p>

<p>The first thing to recognize is that the Agent Check-In monitor has nothing to do with whether a server is functioning. The agent is simply a service-based application on the server. Its job is to communicate with the VSA on a regular schedule and determine whether it should perform tasks or deliver information. This is considered a low-priority service, and - by design - the agent will relinquish resources when the server is under stressful load conditions. This can prevent the agent software from checking in with VSA for several minutes. When this triggers an alarm, it clearly isn't because the server has crashed or become otherwise unavailable - it's just busy. Servers starting to run backups shortly after midnight would trigger a rash of agent offline alerts, waking up the on-call team member, who would find everything working just fine.</p>

<p>So - how can this be improved? The first step is to set the check in time to a longer period to identify when the agent software has a problem. Our default is one hour.</p>

<p>Next, use an Out of Band monitor to check server health. Kaseya Network Monitor is built into VSA and can handle the job nicely. Don't use ping, because a smart NIC will reply even if the O/S has crashed. We check for the Server service: it typically must be running for the server to function and, more importantly, the operating system has to be <em>functional</em> to return a response. We set the time on this monitor long enough for the server to reboot <em>without</em> triggering an alarm, but short enough that a failure will be detected and reported in a timely manner. The default alarm time we use is 15 minutes for most servers, and 7 minutes for critical systems.</p>
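<p>The out-of-band idea above can be sketched in a few lines of Python. This is a hypothetical illustration, not MSP Builder's or Kaseya's implementation: it assumes a completed TCP handshake on port 445 (SMB, which the Windows Server service listens on) is an acceptable stand-in for "the Server service is responding", and the threshold arithmetic mirrors the 15- and 7-minute alarm windows described above.</p>

```python
import socket

def server_service_reachable(host, port=445, timeout=5.0):
    """Try a TCP connection to the SMB port that the Windows 'Server'
    service listens on. Unlike an ICMP ping, which a smart NIC can answer
    on its own, completing a TCP handshake requires a working OS network
    stack - so a True result means the operating system is functional."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def should_alarm(consecutive_failures, check_interval_min, alarm_window_min):
    """Alarm only after the host has been unreachable longer than the alarm
    window (e.g. 15 min for most servers, 7 for critical ones), so a normal
    reboot can complete without waking the on-call engineer."""
    return consecutive_failures * check_interval_min >= alarm_window_min
```

<p>With a 5-minute check interval and a 15-minute window, two missed checks stay silent and the third raises the alarm - the same "long enough to reboot, short enough to matter" trade-off described above.</p>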

<p>We then add a set of monitors that report when the server has rebooted during business hours or booted into a system recovery mode. These are Smart Monitors that run at startup and check for these specific conditions. Alarms for business-hours reboots fire immediately, while recovery-mode detection is delayed briefly so an engineer can boot, perform a quick recovery operation, and reboot again.</p>

<p>Finally, we configure the agent to generate alarms for certain crash conditions. This might result in two alarms for the same event - one for the business-hours reboot and one for the reason why.</p>

<p>With this method, our customers get better information, faster and more appropriate alerting, and virtually zero false alarms for server-down conditions.</p>
<br /><a href='http://mspbuilder.com/blog-agent-offline-is-not-server-down'>gbarnas</a>&nbsp;&nbsp;<a href='http://mspbuilder.com/blog-agent-offline-is-not-server-down'>...</a>]]></description>
      <link>http://mspbuilder.com/blog-agent-offline-is-not-server-down</link>
      <author>gbarnas@mspbuilder.com (gbarnas)</author>
      <comments>http://mspbuilder.com/blog-agent-offline-is-not-server-down</comments>
      <guid isPermaLink="true">http://mspbuilder.com/blog-agent-offline-is-not-server-down</guid>
      <pubDate>Sat, 27 Apr 2019 14:30:00 GMT</pubDate>
    </item>
    <item>
      <title>Security Starts with Best Practices</title>
      <description><![CDATA[<p>Security is the latest buzzword in technology, and everyone is looking at products and services to help secure their environment. What amazes me is that when I ask about documented Standards and Practices, I usually hear crickets chirping. Consider these common "best practices" you already follow:</p>

<p>When you go to the shopping mall, you lock your car, right?</p>

<p>If you have your tech kit with you, you lock it in the trunk, right?</p>

<p>When you leave for work in the morning, you lock the front door of your house, right?</p>

<p>You tell your kids not to take candy from strangers, right?</p>

<p>These are all forms of "best practices" that we follow without thinking, so why do we still open port 3389 for RDP? The common answer is that the client can't afford an RDP gateway or licenses for VPN access. Then there are the outdated and unpatched systems - often kept around to provide access to archival data, yet still sitting on the production network. Take them off the network, or move them to a separate, access-restricted segment.</p>
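<p>A claim like "3389 is open" is easy to verify: a quick connect test, run from <em>outside</em> the customer network, shows whether RDP answers the internet. The sketch below is hypothetical - the addresses are placeholders from the TEST-NET documentation range, not real customer hosts - and it is a spot check, not a substitute for a proper external vulnerability scan.</p>

```python
import socket

RDP_PORT = 3389

def rdp_exposed(host, timeout=2.0):
    """Return True if the host accepts a TCP connection on the RDP port.
    Run this from outside the network to spot-check internet exposure."""
    try:
        with socket.create_connection((host, RDP_PORT), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Placeholder addresses from the TEST-NET documentation range (RFC 5737).
    for host in ("203.0.113.10", "203.0.113.11"):
        print(host, "EXPOSED" if rdp_exposed(host) else "not reachable on 3389")
```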

<p>The questions you need to ask are:</p>

<ul>
	<li>Could your customer's business survive if it lost every bit of data it had?</li>
	<li>Could they pay the "ransom" if their data was encrypted?</li>
	<li>What would be the impact to <em>your business</em> when your client suffers a loss because you didn't observe (or enforce) good practices?</li>
</ul>

<p>We have <strong>Standards and Practices </strong>documents for many aspects of our MSP practice. These ensure that the work we do is consistent and follows reasonable and secure methods. Our "SAP" documents cover things like building web servers, creating a network time infrastructure, server disk partitioning, securing admin accounts, network segmentation, printer management, and building virtualization platforms (with specifics for VMware and Hyper-V). These range from as few as three pages to two dozen or more, depending on the topic and number of variations.</p>

<p>These are guidelines rather than hard rules; they establish the specific configuration and "build" documents that the engineers follow. This takes some effort, but standards breed consistency, and consistency provides a level of control over the environments. When you control an environment, you increase overall reliability, which reduces your work and improves customer satisfaction.</p>

<p>The entire MSP Builder solution stack is built on standards and consistency. This allows us to develop automation that works well because it leverages the standards in our foundational products. This also helps with security - everything is designed and documented, not merely "implemented". If a risk is discovered, it's easy to identify the scope and remediate.</p>

<p>So - start small: create some basic operational standards <em>and follow them!</em> Get buy-in from your engineers, because your business is only as good as your staff. Not sure where to start? We've posted our Standards Documents in the document library on the downloads page, available for free to registered users. Search the web to see what others are doing, and adapt it to your environment. Don't delay.</p>

<br /><a href='http://mspbuilder.com/blog-security-starts-with-best-practices'>gbarnas</a>&nbsp;&nbsp;<a href='http://mspbuilder.com/blog-security-starts-with-best-practices'>...</a>]]></description>
      <link>http://mspbuilder.com/blog-security-starts-with-best-practices</link>
      <author>gbarnas@mspbuilder.com (gbarnas)</author>
      <comments>http://mspbuilder.com/blog-security-starts-with-best-practices</comments>
      <guid isPermaLink="true">http://mspbuilder.com/blog-security-starts-with-best-practices</guid>
      <pubDate>Sun, 21 Oct 2018 14:33:00 GMT</pubDate>
    </item>
  </channel>
</rss>